Science.gov

Sample records for performance vector network

  1. Vector Encoding in Biochemical Networks

    NASA Astrophysics Data System (ADS)

    Potter, Garrett; Sun, Bo

    Encoding of environmental cues via biochemical signaling pathways is of vital importance in the transmission of information for cells in a network. The current literature assumes that a single cell state is used to encode information; however, recent research suggests that the optimal strategy utilizes a vector of cell states sampled at various time points. To elucidate the optimal sampling strategy for vector encoding, we take an information-theoretic approach and determine the mutual information of the calcium signaling dynamics obtained from fibroblast cells perturbed with different concentrations of ATP. Specifically, we analyze the sampling strategies under the cases of fixed and non-fixed vector dimension, as well as the efficiency of these strategies. Our results show that sampling with greater frequency is optimal in the case of non-fixed vector dimension but that, in general, a lower sampling frequency is best from both a fixed-vector-dimension and an efficiency standpoint. Further, we find that a simple modified Ornstein-Uhlenbeck process qualitatively captures many of our experimental results, suggesting that sampling in biochemical networks is based on a few basic components.
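
The Ornstein-Uhlenbeck picture above lends itself to a quick numerical sketch. The toy simulation below (an Euler-Maruyama scheme with arbitrary parameters, not taken from the paper) forms "vector codes" by sampling one simulated trajectory at two different frequencies:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ou(theta=1.0, mu=0.0, sigma=0.5, dt=0.01, steps=1000):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process."""
    x = np.empty(steps)
    x[0] = mu
    for t in range(1, steps):
        x[t] = x[t-1] + theta * (mu - x[t-1]) * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

def vector_encode(trajectory, every):
    """Build a state vector by sampling the trajectory every `every` steps."""
    return trajectory[::every]

trace = simulate_ou()
dense = vector_encode(trace, every=10)    # high sampling frequency -> long vector
sparse = vector_encode(trace, every=100)  # low sampling frequency -> short vector
```

Comparing the mutual information carried by `dense` versus `sparse` vectors across stimulus levels would mimic the paper's fixed- versus non-fixed-dimension analysis.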

  2. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as frequency-sensitive competitive learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding is eliminated, resulting in greater robustness to channel bit errors than methods that use variable-length codes.
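
Frequency-sensitive competitive learning can be sketched in a few lines. In this toy version (codebook size, learning rate, and data are illustrative, not from the paper), each codeword's distance is scaled by its win count, so rarely used codewords eventually win and the whole codebook is exploited:

```python
import numpy as np

rng = np.random.default_rng(1)

def fscl_train(data, n_codes=4, lr=0.1, epochs=20):
    """Frequency-sensitive competitive learning: each codeword's win count
    inflates its effective distance, spreading wins across the codebook."""
    codebook = data[rng.choice(len(data), n_codes, replace=False)].astype(float)
    counts = np.ones(n_codes)
    for _ in range(epochs):
        for x in data:
            d = np.linalg.norm(codebook - x, axis=1) * counts  # frequency-sensitive distance
            w = int(np.argmin(d))
            codebook[w] += lr * (x - codebook[w])               # move winner toward input
            counts[w] += 1
    return codebook

# Quantize 2-D "pixel block" vectors drawn from two clusters.
data = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(1, 0.1, (50, 2))])
codebook = fscl_train(data)
```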

  3. Predictive vector quantization using a neural network approach

    NASA Astrophysics Data System (ADS)

    Mohsenian, Nader; Rizvi, Syed A.; Nasrabadi, Nasser M.

    1993-07-01

    A new predictive vector quantization (PVQ) technique capable of exploiting the nonlinear dependencies, in addition to the linear dependencies, that exist between adjacent blocks (vectors) of pixels is introduced. The two components of the PVQ scheme, the vector predictor and the vector quantizer, are implemented by two different classes of neural networks. A multilayer perceptron is used for the predictive component, and Kohonen self-organizing feature maps are used to design the codebook for the vector quantizer. The multilayer perceptron uses the nonlinearity of its processing units to perform a nonlinear vector prediction. The second component of the PVQ scheme vector quantizes the residual vector formed by subtracting the output of the perceptron from the original input vector. The joint optimization of the two components of the PVQ scheme is also achieved. Simulation results with high visual quality are presented for still images.

  4. Distributed Estimation for Vector Signal in Linear Coherent Sensor Networks

    NASA Astrophysics Data System (ADS)

    Wu, Chien-Hsien; Lin, Ching-An

    We consider the distributed estimation of a random vector signal in wireless sensor networks that follow a coherent multiple-access channel model. We adopt the linear minimum mean-squared-error fusion rule. The problem of interest is to design linear coding matrices for the sensors in the network so as to minimize the mean squared error of the estimated vector signal under a total power constraint. We show that the problem can be formulated as a convex optimization problem, and we obtain closed-form expressions for the coding matrices. Numerical results illustrate the performance of the proposed method.
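
As a rough sketch of the fusion step only (the toy channel model and matrices below stand in for the paper's optimized coding-matrix design, which they do not reproduce), the linear MMSE estimate of the vector signal from the coherently summed observation is:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear coherent MAC: y = (sum_k A_k) x + n, for illustrative A_k.
n_sensors, dim = 3, 2
A = [rng.normal(size=(dim, dim)) for _ in range(n_sensors)]
H = sum(A)                      # coherent sum over the multiple-access channel
Cx = np.eye(dim)                # signal covariance (assumed known)
Cn = 0.1 * np.eye(dim)         # channel noise covariance

# LMMSE fusion rule: x_hat = Cx H^T (H Cx H^T + Cn)^{-1} y
G = Cx @ H.T @ np.linalg.inv(H @ Cx @ H.T + Cn)

x = rng.normal(size=dim)
y = H @ x + rng.multivariate_normal(np.zeros(dim), Cn)
x_hat = G @ y
```

The paper's contribution is choosing the per-sensor matrices `A_k` optimally under a power budget; here they are random placeholders.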

  5. Fast modular network implementation for support vector machines.

    PubMed

    Huang, Guang-Bin; Mao, K Z; Siew, Chee-Kheong; Huang, De-Shuang

    2005-11-01

    Support vector machines (SVMs) have been extensively used. However, it is known that SVMs face difficulty in solving large complex problems due to the intensive computation involved in their training algorithms, which are at least quadratic with respect to the number of training examples. This paper proposes a new, simple, and efficient network architecture consisting of several SVMs, each trained on a small subregion of the whole data sampling space, and the same number of simple neural quantizer modules, which inhibit the outputs of all the remote SVMs and allow only a single local SVM to fire (produce actual output) at any time. In principle, this region-computing-based modular network method can significantly reduce the learning time of SVM algorithms without sacrificing much generalization performance. Experiments on several large, complex, real-world benchmark problems demonstrate that our method can be significantly faster than single SVMs without losing much generalization performance.
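
The region-computing idea can be sketched as follows. A Pegasos-style linear learner stands in for a full SVM trainer, and a nearest-centre rule plays the role of the neural quantizer gate; all class names, data, and constants are illustrative, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(3)

def train_linear_svm(X, y, lam=0.01, epochs=50):
    """Tiny Pegasos-style linear SVM on labels in {-1, +1} (a stand-in)."""
    w = np.zeros(X.shape[1] + 1)
    for t in range(1, epochs * len(X) + 1):
        i = rng.integers(len(X))
        xi = np.append(X[i], 1.0)          # append bias term
        eta = 1.0 / (lam * t)
        if y[i] * (w @ xi) < 1:
            w = (1 - eta * lam) * w + eta * y[i] * xi
        else:
            w = (1 - eta * lam) * w
    return w

class ModularSVM:
    """One local SVM per region; a 'quantizer' gate lets only one fire."""
    def __init__(self, centers):
        self.centers = np.asarray(centers, dtype=float)
        self.models = {}
    def fit(self, X, y):
        region = np.argmin(np.linalg.norm(X[:, None] - self.centers, axis=2), axis=1)
        for r in range(len(self.centers)):
            self.models[r] = train_linear_svm(X[region == r], y[region == r])
    def predict(self, x):
        r = int(np.argmin(np.linalg.norm(self.centers - x, axis=1)))  # quantizer gate
        return int(np.sign(self.models[r] @ np.append(x, 1.0)))

# Two well-separated regions, each with a locally linear decision boundary.
X = rng.normal(size=(200, 2)); X[:100, 0] -= 2; X[100:, 0] += 2
y = np.sign(X[:, 1] + 1e-9)
m = ModularSVM(centers=[[-2, 0], [2, 0]])
m.fit(X, y)
```

Each local model only ever sees its own subregion's data, which is where the claimed training-time saving comes from.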

  6. Assessing the performance of multiple spectral-spatial features of a hyperspectral image for classification of urban land cover classes using support vector machines and artificial neural network

    NASA Astrophysics Data System (ADS)

    Pullanagari, Reddy; Kereszturi, Gábor; Yule, Ian J.; Ghamisi, Pedram

    2017-04-01

    Accurate and spatially detailed mapping of complex urban environments is essential for land managers. Classifying hyperspectral images of high spectral and spatial resolution is a challenging task because of their data abundance and computational complexity. Approaches that combine spectral and spatial information in a single classification framework have attracted special attention because of their potential to improve classification accuracy. We extracted multiple features from the spectral and spatial domains of hyperspectral images and evaluated them with two supervised classification algorithms: support vector machines (SVM) and an artificial neural network. The spatial features considered are produced by a gray-level co-occurrence matrix and extended multiattribute profiles. All of these features were stacked, and the most informative features were selected using a genetic-algorithm-based SVM. After selecting the most informative features, the classification model was integrated with a segmentation map derived using a hidden Markov random field. We tested the proposed method on a real application, a hyperspectral image acquired from AisaFENIX, as well as on widely used hyperspectral images. From the results, it can be concluded that the proposed framework significantly improves the results across different spectral and spatial resolutions and different instrumentation.

  7. Music Signal Processing Using Vector Product Neural Networks

    NASA Astrophysics Data System (ADS)

    Fan, Z. C.; Chan, T. S.; Yang, Y. H.; Jang, J. S. R.

    2017-05-01

    We propose a novel neural network model for music signal processing using vector product neurons and dimensionality transformations. Here, the inputs are first mapped from real values into three-dimensional vectors and then fed into a three-dimensional vector product neural network in which the inputs, outputs, and weights are all three-dimensional. Next, the final outputs are mapped back to the reals. Two methods for dimensionality transformation are proposed: one via context windows and the other via spectral coloring. Experimental results on the iKala dataset for blind singing voice separation confirm the efficacy of our model.
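
A minimal sketch of the idea, assuming a context-window lift into three dimensions and a single vector product (cross-product) neuron; the details below are illustrative, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(4)

def vector_product_neuron(inputs, weights):
    """A vector product neuron: sums the cross products of 3-D inputs with 3-D weights."""
    return np.sum(np.cross(inputs, weights), axis=0)

def lift_context(signal):
    """Context-window lift: stack each sample with its two neighbours into a 3-D vector."""
    return np.stack([signal[:-2], signal[1:-1], signal[2:]], axis=1)

signal = np.sin(np.linspace(0, 2 * np.pi, 16))  # a real-valued input sequence
x3d = lift_context(signal)                      # (14, 3) three-dimensional inputs
w3d = rng.normal(size=x3d.shape)                # three-dimensional weights
out = vector_product_neuron(x3d, w3d)           # a single 3-D output vector
```

In the paper's full model, stacks of such neurons process the lifted spectra, and a final transformation maps the 3-D outputs back to real values.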

  8. NASF transposition network: A computing network for unscrambling p-ordered vectors

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1979-01-01

    The design, programming, and application viewpoints of the transposition network (TN) are presented. The TN is a programmable combinational logic network that connects 521 memory modules to 512 processors. The unscrambling of p-ordered vectors to 1-ordered vectors in one cycle is described. The TN design is based upon the concept of cyclic groups from abstract algebra and primitive roots and indices from number theory. The programming of the TN is very simple, requiring only 20 bits: 10 bits for offset control and 10 bits for barrel switch shift control. This simple control is executed by the control unit (CU), not the processors. Any memory access by a processor must be coordinated with the CU and must wait for all other processors to come to a synchronization point. These wait and synchronization events can degrade computational performance. The TN is used for multidimensional data manipulation, matrix processing, and data sorting, and can also perform a perfect shuffle. Unlike other more complicated and powerful permutation networks, the TN cannot, in general, unscramble non-p-ordered vectors in one cycle.
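
The unscrambling operation itself is easy to illustrate. The sketch below uses a 13-slot toy standing in for the TN's 521 memory modules; a prime module count is what guarantees every nonzero stride p is coprime to the module count and therefore invertible:

```python
def scramble(v, p):
    """p-ordered layout: element i of v goes to slot (i * p) mod m."""
    m = len(v)
    out = [None] * m
    for i, x in enumerate(v):
        out[(i * p) % m] = x
    return out

def unscramble(stored, p):
    """Recover 1-order by reading slot (i * p) mod m back into position i."""
    m = len(stored)
    return [stored[(i * p) % m] for i in range(m)]

v = list(range(13))   # 13 slots: prime, so every stride 1..12 works
assert all(unscramble(scramble(v, p), p) == v for p in range(1, 13))
```

With a composite slot count (say 8), strides sharing a factor with it would collide, which is why the hardware uses 521 modules for 512 processors.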

  9. Analog neural network for support vector machine learning.

    PubMed

    Perfetti, Renzo; Ricci, Elisa

    2006-07-01

    An analog neural network for support vector machine learning is proposed, based on a partially dual formulation of the quadratic programming problem. It results in a simpler circuit implementation compared with existing neural solutions for the same application. The effectiveness of the proposed network is shown through computer simulations on benchmark problems.

  10. A one-layer recurrent neural network for support vector machine learning.

    PubMed

    Xia, Youshen; Wang, Jun

    2004-04-01

    This paper presents a one-layer recurrent neural network for support vector machine (SVM) learning in pattern classification and regression. The SVM learning problem is first converted into an equivalent formulation, and then a one-layer recurrent neural network for SVM learning is proposed. The proposed neural network is guaranteed to obtain the optimal solution of support vector classification and regression. Compared with the existing two-layer neural network for SVM classification, the proposed network has lower implementation complexity. Moreover, it converges exponentially to the optimal solution of SVM learning, and the rate of convergence can be made arbitrarily high simply by turning up a scaling parameter. Simulation examples based on benchmark problems are discussed to show the good performance of the proposed neural network for SVM learning.
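
The scaling-parameter claim can be illustrated on a toy strongly convex objective, a stand-in for the actual SVM energy function; the dynamics and constants below are illustrative only:

```python
import numpy as np

def simulate(eta, steps=2000, dt=1e-3):
    """Euler-integrated gradient flow dx/dt = -eta * grad f(x) on f(x) = ||x||^2.
    Larger eta gives a faster exponential decay toward the minimizer."""
    x = np.array([2.0, -1.0])
    for _ in range(steps):
        grad = 2 * x                 # gradient of ||x||^2
        x = x - dt * eta * grad
    return np.linalg.norm(x)

slow = simulate(eta=1.0)   # small scaling parameter
fast = simulate(eta=5.0)   # larger scaling parameter: faster convergence
```

In the paper the same role is played by the network's scaling parameter; turning it up sharpens the exponential convergence rate of the recurrent dynamics.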

  11. Multifrequency polarimetric microwave scatterometer based on a vector network analyzer

    NASA Astrophysics Data System (ADS)

    D'Alessio, Angelo C.; Posa, Francesco; Sabatelli, Vincenzo; Casarano, Domenico

    1999-12-01

    In order to test a multi-frequency polarimetric scatterometer based on a Vector Network Analyzer, calibration measurements have been performed over point targets. Two trihedral corner reflectors with different dimensions have been employed. The radar cross sections have been measured at different frequency bands (L, C and X) and for different look angles between 23 degree(s) and 50 degree(s). Satisfactory results have been obtained in all three bands, however in the L-band the electromagnetic smog, due to mobile phones and airport radars, caused some difficulties in the extinction of the radiometric information. Other calibration tests have been planned before using the instrument as a ground-truth data acquisition device on the test-sites envisaged for the spaceborne SRTM and ENVISAT SAR missions.

  12. A Distributed Support Vector Machine Learning Over Wireless Sensor Networks.

    PubMed

    Kim, Woojin; Stanković, Milos S; Johansson, Karl H; Kim, H Jin

    2015-11-01

    This paper concerns fully distributed support vector machine (SVM) learning over wireless sensor networks. Building on the concept of the geometric SVM, we propose to gossip the set of extreme points of the convex hull of the local data set with neighboring nodes. This has the advantages of a simple communication mechanism and finite-time convergence to a common global solution. Furthermore, we analyze the scalability with respect to the amount of exchanged information and convergence time, with a specific emphasis on the small-world phenomenon. First, with the proposed naive convex hull algorithm, the message length remains bounded as the number of nodes increases. Second, by utilizing a small-world network, we have an opportunity to drastically improve the convergence performance with only a small increase in power consumption. These properties offer a great advantage when dealing with a large-scale network. Simulation and experimental results support the feasibility and effectiveness of the proposed gossip-based process and the analysis.
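
The gossip-of-extreme-points idea can be sketched in two dimensions: three nodes on a ring each keep only their local hull vertices, merge a neighbour's vertices each round, and re-take the hull. Andrew's monotone chain computes the extreme points; all sizes here are illustrative:

```python
import numpy as np

def hull(points):
    """Andrew's monotone chain: the extreme points of a 2-D point set."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    def turn(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and turn(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out[:-1]
    return half(pts) + half(pts[::-1])

rng = np.random.default_rng(5)
local = [rng.normal(size=(30, 2)) for _ in range(3)]  # three sensor nodes
states = [hull(p) for p in local]                     # keep extreme points only

# Two gossip rounds on a 3-ring: merge the neighbour's vertices, re-hull.
for _ in range(2):
    states = [hull(states[i] + states[(i + 1) % 3]) for i in range(3)]

global_hull = hull(np.vstack(local))
```

Because the extreme points of a union are a subset of the unions of extreme points, each node's message stays small while the states converge to the global hull in finitely many rounds.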

  13. Word Vectorization Using Relations among Words for Neural Network

    NASA Astrophysics Data System (ADS)

    Hotta, Hajime; Kittaka, Masanobu; Hagiwara, Masafumi

    In this paper, we propose a new word vectorization method for a new generation of computational intelligence, including neural networks and natural language processing. In recent years, various techniques of word vectorization have been proposed, many of which rely on the preparation of dictionaries. However, these techniques do not address the symbol-grounding problem for unknown types of data, which is one of the most fundamental issues in artificial intelligence. In order to avoid the symbol-grounding problem, pattern-processing-based methods such as neural networks are often used in studies of self-directive systems and algorithms, and natural language processing is no exception. The proposed method is a converter from a single word to a real-valued vector, whose algorithm is inspired by neural network architecture. The merits of the method are as follows: (1) it requires no specific linguistic knowledge, e.g., word classes or grammatical rules; (2) it is a sequence-learning technique and can learn additional knowledge. The experiment showed the efficiency of the word vectorization in terms of similarity measurement.
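
As a minimal illustration of dictionary-free, relation-based word vectors (this toy co-occurrence scheme is only inspired by the paper's goal and is not its actual neural algorithm):

```python
def cooccurrence_vectors(sentences, window=2):
    """Word vectors built purely from co-occurrence relations among words,
    with no dictionary or linguistic knowledge."""
    vocab = sorted({w for s in sentences for w in s})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = {w: [0.0] * len(vocab) for w in vocab}
    for s in sentences:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if i != j:
                    vecs[w][index[s[j]]] += 1.0
    return vecs

def cosine(u, v):
    """Similarity measurement between two word vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = (sum(a * a for a in u) * sum(b * b for b in v)) ** 0.5
    return num / den

corpus = [["cats", "chase", "mice"], ["dogs", "chase", "cats"]]
vecs = cooccurrence_vectors(corpus)
```

Here "cats" and "dogs" end up with similar vectors because both co-occur with "chase", illustrating similarity measurement without any prepared dictionary.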

  14. Performance of the butterfly processor-memory interconnection in a vector environment

    NASA Astrophysics Data System (ADS)

    Brooks, E. D., III

    1985-02-01

    A fundamental hurdle impeding the development of large-N common-memory multiprocessors is the performance limitation of the switch connecting the processors to the memory modules. Multistage networks currently considered for this connection have a memory latency which grows like α·log2 N. For scientific computing, it is natural to look for a multiprocessor architecture that enables the use of vector operations to mask memory latency. The problem to be overcome here is the chaotic behavior introduced by conflicts occurring in the switch. The performance of the butterfly, or indirect binary n-cube, network in a vector processing environment is examined. A simple modification of the standard 2×2 switch node used in such networks, which adaptively removes chaotic behavior during a vector operation, is described.

  15. Support vector machines (SVMs) for monitoring network design.

    PubMed

    Asefa, Tirusew; Kemblowski, Mariush; Urroz, Gilberto; McKee, Mac

    2005-01-01

    In this paper we present a hydrologic application of a new statistical learning methodology called support vector machines (SVMs). SVMs are based on minimization of a bound on the generalized error (risk) model, rather than just the mean square error over a training set. Due to Mercer's conditions on the kernels, the corresponding optimization problems are convex and hence have no local minima. In this paper, SVMs are illustratively used to reproduce the behavior of Monte Carlo-based flow and transport models that are in turn used in the design of a ground water contamination detection monitoring system. The traditional approach, which is based on solving transient transport equations for each new configuration of a conductivity field, is too time consuming in practical applications. Thus, there is a need to capture the behavior of the transport phenomenon in random media in a relatively simple manner. The objective of the exercise is to maximize the probability of detecting contaminants that exceed some regulatory standard before they reach a compliance boundary, while minimizing cost (i.e., number of monitoring wells). Application of the method at a generic site showed a rather promising performance, which leads us to believe that SVMs could be successfully employed in other areas of hydrology. The SVM was trained using 510 monitoring configuration samples generated from 200 Monte Carlo flow and transport realizations. The best configurations of well networks selected by the SVM were identical with the ones obtained from the physical model, but the reliabilities provided by the respective networks differ slightly.

  16. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing them when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  17. A viral protease relocalizes in the presence of the vector to promote vector performance

    PubMed Central

    Bak, Aurélie; Cheung, Andrea L.; Yang, Chunling; Whitham, Steven A.; Casteel, Clare L.

    2017-01-01

    Vector-borne pathogens influence host characteristics relevant to host–vector contact, increasing pathogen transmission and survival. Previously, we demonstrated that infection with Turnip mosaic virus, a member of one of the largest families of plant-infecting viruses, increases vector attraction and reproduction on infected hosts. These changes were due to a single viral protein, NIa-Pro. Here we show that NIa-Pro responds to the presence of the aphid vector during infection by relocalizing to the vacuole. Remarkably, vacuolar localization is required for NIa-Pro's ability to enhance aphid reproduction on host plants, vacuole localization disappears when aphids are removed, and this phenomenon occurs for another potyvirus, Potato virus Y, suggesting a conserved role for the protein in vector–host interactions. Taken together, these results suggest that potyviruses dynamically respond to the presence of their vectors, promoting insect performance and transmission only when needed. PMID:28205516

  18. Improving neural network performance on SIMD architectures

    NASA Astrophysics Data System (ADS)

    Limonova, Elena; Ilin, Dmitry; Nikolaev, Dmitry

    2015-12-01

    Neural network calculations for image recognition problems can be very time-consuming. In this paper we propose three methods for increasing neural network performance on SIMD architectures. The use of SIMD extensions is a way to speed up neural network processing that is available on a number of modern CPUs. In our experiments, we use ARM NEON as an example SIMD architecture. The first method uses the half-float data type for matrix computations. The second method uses a fixed-point data type for the same purpose. The third method considers a vectorized implementation of activation functions. For each method we set up a series of experiments on convolutional and fully connected networks designed for an image recognition task.

  19. Performance of Bayesian outlier diagnostic in testing mean vector

    NASA Astrophysics Data System (ADS)

    Mohammad, Rofizah; Hamzah, Firdaus Mohamad

    2014-09-01

    The diagnostic measure kd, which measures the effect of a single observation d on model choice, has been applied to a variety of univariate models. The purpose of this study is to assess the performance of this diagnostic measure when applied to a multivariate structure for testing a specified mean vector. We illustrate the method using data generated from a multivariate normal distribution. If X is a p-variate normal random sample of size n with mean vector θ and a known covariance matrix, we consider the null hypothesis that the mean vector θ is zero. From this simulation, we test the performance of kd for several values of n and p.

  20. Biologically relevant neural network architectures for support vector machines.

    PubMed

    Jändel, Magnus

    2014-01-01

    Neural network architectures that implement support vector machines (SVM) are investigated for the purpose of modeling perceptual one-shot learning in biological organisms. A family of SVM algorithms, including variants of maximum-margin, 1-norm, 2-norm, and ν-SVM, is considered. SVM training rules adapted for neural computation are derived. It is found that competitive queuing memory (CQM) is ideal for storing and retrieving support vectors. Several different CQM-based neural architectures are examined for each SVM algorithm. Although most of the sixty-four scanned architectures are unconvincing for biological modeling, four feasible candidates are found. The seemingly complex learning rule of a full ν-SVM implementation finds a particularly simple and natural implementation in bisymmetric architectures. Since CQM-like neural structures are thought to encode skilled action sequences, and bisymmetry is ubiquitous in motor systems, it is speculated that trainable pattern recognition in low-level perception has evolved as an internalized motor programme.

  1. Internal performance characteristics of thrust-vectored axisymmetric ejector nozzles

    NASA Technical Reports Server (NTRS)

    Lamb, Milton

    1995-01-01

    A series of thrust-vectored axisymmetric ejector nozzles was designed and experimentally tested for internal performance and pumping characteristics at the Langley Research Center. This study indicated that discontinuities in performance occurred at low primary nozzle pressure ratios and that these discontinuities were mitigated by decreasing the expansion area ratio. The addition of secondary flow increased the performance of the nozzles; the mid-to-high range of secondary flow provided the greatest overall improvement, with the largest gains seen for the largest ejector area ratio. Thrust vectoring the ejector nozzles caused a reduction in performance and discharge coefficient. With or without secondary flow, the vectored ejector nozzles produced thrust vector angles equivalent to or greater than the geometric turning angle, and spacing ratio (ejector passage symmetry) had little effect on performance (gross thrust ratio), discharge coefficient, or thrust vector angle. For the unvectored ejectors, a small amount of secondary flow was sufficient to reduce the pressure levels on the shroud to provide cooling, but the vectored ejector nozzles required a larger amount of secondary air to reduce the pressure levels enough to provide cooling.

  2. Modeling and performance analysis of GPS vector tracking algorithms

    NASA Astrophysics Data System (ADS)

    Lashley, Matthew

    This dissertation provides a detailed analysis of GPS vector tracking algorithms and the advantages they have over traditional receiver architectures. Standard GPS receivers use a decentralized architecture that separates the tasks of signal tracking and position/velocity estimation. Vector tracking algorithms combine the two tasks into a single algorithm: the signals from the various satellites are processed collectively through a Kalman filter. The advantages of vector tracking over traditional, scalar tracking methods are thoroughly investigated. A method for making a valid comparison between vector and scalar tracking loops is developed. This technique avoids the ambiguities encountered when attempting to compare tracking loops (which are characterized by noise bandwidths and loop order) with the Kalman filters (which are characterized by process and measurement noise covariance matrices) used by vector tracking algorithms. The improvement in performance offered by vector tracking is calculated in several different scenarios. Rule-of-thumb analysis techniques for scalar frequency lock loops (FLL) are extended to the vector tracking case. The analysis tools provide a simple method for analyzing the performance of vector tracking loops and are verified using Monte Carlo simulations. Monte Carlo simulations are also used to study the effects of carrier-to-noise power density ratio (C/N0) estimation and the advantage offered by vector tracking over scalar tracking. The improvement from vector tracking ranges from 2.4 to 6.2 dB in the various scenarios. The difference in performance among the three vector tracking architectures is analyzed, including the effects of using a federated architecture with and without information sharing between the receiver's channels. A combination of covariance analysis and Monte Carlo simulation is used to analyze the performance of the three algorithms. The federated algorithm without

  3. Fuzzy learning vector quantization neural network and its application for artificial odor recognition system

    NASA Astrophysics Data System (ADS)

    Kusumoputro, Benyamin; Budiarto, Hary; Jatmiko, Wisnu

    2000-03-01

    In this paper, a fuzzy algorithm for learning vector quantization is developed and used as a pattern classifier with a supervised learning paradigm in an artificial odor discrimination system. In this type of FLVQ, the neuron activation is derived through the fuzziness of the input data, so that the neural system can deal with the statistics of the measurement error directly. During learning, the similarity between the training vector and the reference vectors is calculated, and the winning reference vector is updated in two ways: first, by shifting the central position of the fuzzy reference vector toward or away from the input vector, and second, by modifying its fuzziness. Two types of fuzziness modification are used, i.e., a constant modification factor and a variable modification factor. This type of FLVQ is different in nature from FALVQ, and in this paper the performance of the FNLVQ network is compared with that of FALVQ in an artificial odor recognition system. Experimental results show that both FALVQ and FNLVQ provided high recognition probability in determining various learned categories of odors; however, the FNLVQ neural system was able to recognize unlearned categories of odor that could not be recognized by the FALVQ neural system.
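
A single, illustrative step of such a fuzzy reference-vector update might look like the following; the Gaussian activation, the update rule, and all constants are sketches, not the paper's exact FNLVQ formulation:

```python
import numpy as np

rng = np.random.default_rng(7)

def flvq_step(centers, widths, labels, x, y, lr=0.1):
    """One toy fuzzy-LVQ update: each reference vector has a centre and a
    fuzziness (width); the winner's centre moves toward (correct class) or
    away from (wrong class) the input, and its fuzziness is modified."""
    sim = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * widths ** 2))
    w = int(np.argmax(sim))                       # winning fuzzy reference vector
    sign = 1.0 if labels[w] == y else -1.0
    centers[w] += sign * lr * (x - centers[w])    # shift toward / away
    widths[w] *= (1 - sign * lr * 0.5)            # modify fuzziness
    return w

centers = rng.normal(size=(2, 2))   # two fuzzy reference vectors in 2-D
widths = np.ones(2)
labels = [0, 1]
flvq_step(centers, widths, labels, x=np.array([1.0, 1.0]), y=1)
```

The variable-modification-factor variant in the paper would make the width change depend on the similarity value rather than using a fixed factor.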

  4. Optimization of a broadband vector network analyzer calibration

    NASA Astrophysics Data System (ADS)

    Opalski, Leszek J.

    2013-10-01

    This paper presents a novel, three-stage approach to the optimum selection of calibration standard lengths for broadband vector network analyzers (VNA). First, an initial standard D-optimal calibration selection problem is reformulated so as to eliminate redundant locally optimal solutions. Second, a good-quality basic solution to the calibration selection problem is found as a result of analytic investigation of the problem's properties. Finally, a multistep numeric bi-criterion optimization procedure with variable frequency range is proposed to generate a set of candidate solutions with different trade-offs between bandwidth and ripple of the normalized determinant of the Fisher matrix. Example results demonstrate the high quality of the solutions found and the high efficiency of the proposed optimization-based approach.

  5. A high-performance FFT algorithm for vector supercomputers

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1988-01-01

    Many traditional algorithms for computing the fast Fourier transform (FFT) on conventional computers are unacceptable for advanced vector and parallel computers because they involve nonunit, power-of-two memory strides. A practical technique for computing the FFT that avoids all such strides and appears to be near-optimal for a variety of current vector and parallel computers is presented. Performance results of a program based on this technique are given. Notable among these results is that a FORTRAN implementation of this algorithm on the CRAY-2 runs up to 77 percent faster than Cray's assembly-coded library routine.
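
One well-known way to avoid long power-of-two strides is the four-step factorization, which replaces a single length-N transform with short FFTs over the rows and columns of an n1 × n2 array plus a twiddle-factor multiplication. The sketch below, which leans on NumPy's FFT for the short transforms, illustrates the decomposition; it is not Bailey's exact implementation:

```python
import numpy as np

def four_step_fft(x, n1, n2):
    """Length n1*n2 FFT via the four-step method: column FFTs, twiddle
    multiplication, row FFTs, then a transpose to fix the output order."""
    a = x.reshape(n1, n2).astype(complex)
    a = np.fft.fft(a, axis=0)               # n2 FFTs of length n1 (columns)
    k1 = np.arange(n1)[:, None]
    j2 = np.arange(n2)[None, :]
    a *= np.exp(-2j * np.pi * k1 * j2 / (n1 * n2))  # twiddle factors
    a = np.fft.fft(a, axis=1)               # n1 FFTs of length n2 (rows)
    return a.T.reshape(-1)                  # transpose yields natural order

x = np.random.default_rng(8).normal(size=64)
assert np.allclose(four_step_fft(x, 8, 8), np.fft.fft(x))
```

On a vector machine, each pass then runs over contiguous rows or over an array whose leading dimension can be padded, instead of striding by large powers of two through one long vector.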

  6. Performance evaluation of the SX-6 vector architecture for scientific computations

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; Van der Wijngaart, Rob

    2005-01-01

    The growing gap between sustained and peak performance for scientific applications is a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to reduce this gap for many computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX-6 vector processor, and compares it against the cache-based IBM Power3 and Power4 superscalar architectures, across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines many low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks. Finally, we evaluate the performance of several scientific computing codes. Overall results demonstrate that the SX-6 achieves high performance on a large fraction of our application suite and often significantly outperforms the cache-based architectures. However, certain classes of applications are not easily amenable to vectorization and would require extensive algorithm and implementation reengineering to utilize the SX-6 effectively.

  7. Folksonomical P2P File Sharing Networks Using Vectorized KANSEI Information as Search Tags

    NASA Astrophysics Data System (ADS)

    Ohnishi, Kei; Yoshida, Kaori; Oie, Yuji

    We present the concept of folksonomical peer-to-peer (P2P) file-sharing networks that allow participants (peers) to freely assign structured search tags to files. These networks are similar to folksonomies in the present Web in that users assign search tags to information distributed over a network. As a concrete example, we consider an unstructured P2P network using vectorized Kansei (human sensitivity) information as structured search tags for file search. Vectorized Kansei information, assigned by each participant to each of their files, indicates what the participant feels about those files. A search query has the same form of search tags and indicates what participants want to feel about the files they will eventually obtain. File search using vectorized Kansei information is enabled by the Kansei query-forwarding method, which probabilistically propagates a search query to peers that are likely to hold more files having search tags similar to the query. The similarity between the search query and the search tags is measured by their dot product. Simulation experiments examine whether the Kansei query-forwarding method can provide equal search performance for all peers in a network in which only the Kansei information and the tendency with respect to file collection differ among peers. The simulation results show that the Kansei query-forwarding method and, for comparison, a random-walk-based query-forwarding method work effectively in different situations and are complementary. Furthermore, the Kansei query-forwarding method is shown, through simulations, to be superior or equal to the random-walk-based one in terms of search speed.
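
A toy version of the similarity-biased forwarding rule might look like this; the softmax weighting, tag sizes, and names are illustrative, and the paper's exact propagation probabilities may differ:

```python
import numpy as np

rng = np.random.default_rng(9)

def pick_neighbor(query, neighbor_tags):
    """Forward a Kansei query toward the neighbour whose stored tag vectors
    have the largest mean dot product with the query, probabilistically."""
    scores = np.array([np.mean(tags @ query) for tags in neighbor_tags])
    probs = np.exp(scores) / np.exp(scores).sum()   # similarity-biased choice
    return rng.choice(len(neighbor_tags), p=probs)

query = np.array([1.0, 0.0, 0.5])                   # "what I want to feel"
neighbors = [rng.normal(size=(5, 3)) for _ in range(4)]  # each holds 5 tagged files
nxt = pick_neighbor(query, neighbors)
```

A pure random walk would replace `probs` with a uniform distribution, which is the comparison baseline in the simulations above.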

  8. Biasing vector network analyzers using variable frequency and amplitude signals

    NASA Astrophysics Data System (ADS)

    Nobles, J. E.; Zagorodnii, V.; Hutchison, A.; Celinski, Z.

    2016-08-01

    We report the development of a test setup designed to provide a variable frequency biasing signal to a vector network analyzer (VNA). The test setup is currently used for the testing of liquid crystal (LC) based devices in the microwave region. The use of an AC bias for LC based devices minimizes the negative effects associated with ionic impurities in the media encountered with DC biasing. The test setup utilizes bias tees on the VNA test station to inject the bias signal. The square wave biasing signal is variable from 0.5 to 36.0 V peak-to-peak (VPP) with a frequency range of DC to 10 kHz. The test setup protects the VNA from transient processes, voltage spikes, and high-frequency leakage. Additionally, the signals to the VNA are fused to ½ amp and clipped to a maximum of 36 VPP based on bias tee limitations. This setup allows us to measure S-parameters as a function of both the voltage and the frequency of the applied bias signal.

  9. Monthly evaporation forecasting using artificial neural networks and support vector machines

    NASA Astrophysics Data System (ADS)

    Tezel, Gulay; Buyukyildiz, Meral

    2016-04-01

    Evaporation is one of the most important components of the hydrological cycle, but it is relatively difficult to estimate because it can be influenced by numerous factors. Estimation of evaporation is important for the design of reservoirs, especially in arid and semi-arid areas. Artificial neural network methods and support vector machines (SVM) are frequently utilized to estimate evaporation and other hydrological variables. In this study, the usability of artificial neural networks (ANNs) (multilayer perceptron (MLP) and radial basis function network (RBFN)) and ɛ-support vector regression (ɛ-SVR) artificial intelligence methods was investigated for estimating monthly pan evaporation. For this aim, temperature, relative humidity, wind speed, and precipitation data for the period 1972 to 2005 from the Beysehir meteorology station were used as input variables, while pan evaporation values were used as output. The Romanenko and Meyer methods were also considered for comparison. The results were compared with observed class A pan evaporation data. In the MLP method, four different training algorithms were used: gradient descent with momentum and adaptive learning rule backpropagation (GDX), Levenberg-Marquardt (LVM), scaled conjugate gradient (SCG), and resilient backpropagation (RBP). The models were designed via 10-fold cross-validation (CV); algorithm performance was assessed via mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R²). According to the performance criteria, the ANN algorithms and ɛ-SVR had similar results. The ANN and ɛ-SVR methods were found to perform better than the Romanenko and Meyer methods. Consequently, the best performance on the test data was obtained using SCG(4,2,2,1) with R² = 0.905.
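    The three performance criteria used in the study follow directly from their standard definitions; the observed and predicted values below are hypothetical placeholders, not data from the paper:

```python
import math

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    """Root mean square error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r2(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs  = [3.1, 4.5, 2.8, 5.0]   # observed pan evaporation (hypothetical, mm/day)
pred = [3.0, 4.7, 3.0, 4.8]   # model estimates (hypothetical)
```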

  10. Maximizing sparse matrix vector product performance in MIMD computers

    SciTech Connect

    McLay, R.T.; Kohli, H.S.; Swift, S.L.; Carey, G.F.

    1994-12-31

    A considerable component of the computational effort involved in conjugate gradient solution of structured sparse matrix systems is expended during the matrix-vector product (MVP), and hence it is the focus of most efforts at improving performance. Such efforts are hindered on MIMD machines by constraints on memory, cache, and the speed of memory-CPU data transfer. This paper describes a strategy for maximizing the performance of the local computations associated with the MVP. The method focuses on single-stride memory access and the efficient use of cache by pre-loading it with data that is re-used, while bypassing it for other data. The algorithm is designed to behave optimally for varying grid sizes and numbers of unknowns per grid point. Results from an assembly language implementation of the strategy on the iPSC/860 show a significant improvement over the performance obtained using FORTRAN.
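    As a sketch of the kernel in question, a matrix-vector product over a CSR-stored sparse matrix reads the matrix values and column indices with unit stride, while accesses to the source vector remain irregular; the CSR layout here is a standard stand-in, not necessarily the authors' storage scheme:

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x for a CSR-stored sparse matrix. Within each row the
    values and column indices are traversed with unit stride, which is
    the single-stride access pattern the strategy above optimizes."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]   # x access is the irregular part
        y[i] = acc
    return y

# 3x3 example: [[4, 0, 1], [0, 3, 0], [2, 0, 5]] in CSR form.
values  = [4.0, 1.0, 3.0, 2.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
y = csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0])
```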

  11. Internal performance characteristics of vectored axisymmetric ejector nozzles

    NASA Technical Reports Server (NTRS)

    Lamb, Milton

    1993-01-01

    A series of vectoring axisymmetric ejector nozzles were designed and experimentally tested for internal performance and pumping characteristics at NASA-Langley Research Center. These ejector nozzles used convergent-divergent nozzles as the primary nozzles. The model geometric variables investigated were primary nozzle throat area, primary nozzle expansion ratio, effective ejector expansion ratio (ratio of shroud exit area to primary nozzle throat area), ratio of minimum ejector area to primary nozzle throat area, ratio of ejector upper slot height to lower slot height (measured on the vertical centerline), and thrust vector angle. The primary nozzle pressure ratio was varied from 2.0 to 10.0 depending upon primary nozzle throat area. The corrected ejector-to-primary nozzle weight-flow ratio was varied from 0 (no secondary flow) to approximately 0.21 (21 percent of primary weight-flow rate) depending on ejector nozzle configuration. In addition to the internal performance and pumping characteristics, static pressures were obtained on the shroud walls.

  12. Locally connected neural network with improved feature vector

    NASA Technical Reports Server (NTRS)

    Thomas, Tyson (Inventor)

    2004-01-01

    A pattern recognizer which uses neuromorphs with a fixed amount of energy that is distributed among the elements. The distribution of the energy is used to form a histogram which is used as a feature vector.

  13. Double Virus Vector Infection to the Prefrontal Network of the Macaque Brain

    PubMed Central

    Tanaka, Shingo; Koizumi, Masashi; Kikusui, Takefumi; Ichihara, Nobutsune; Kato, Shigeki; Kobayashi, Kazuto; Sakagami, Masamichi

    2015-01-01

    To precisely understand how higher cognitive functions are implemented in the prefrontal network of the brain, optogenetic and pharmacogenetic methods to manipulate the signal transmission of a specific neural pathway are required. The application of these methods, however, has been mostly restricted to animals other than the primate, which is the best animal model for investigating higher cognitive functions. In this study, we used a double viral vector infection method in the prefrontal network of the macaque brain. This enabled us to express specific constructs in specific neurons that constitute a target pathway, without germline genetic manipulation. The double-infection technique utilizes two different virus vectors in two monosynaptically connected areas. One is a vector which can locally infect cell bodies of projection neurons (local vector), and the other can retrogradely infect the same projection neurons from their axon terminals (retrograde vector). The retrograde vector incorporates the sequence encoding Cre recombinase, and the local vector incorporates the “Cre-On” FLEX double-floxed sequence encoding a reporter protein (mCherry). mCherry was thus expressed only in projection neurons doubly infected with these vectors. We applied this method to two macaque monkeys and targeted two different pathways in the prefrontal network: the pathway from the lateral prefrontal cortex to the caudate nucleus and the pathway from the lateral prefrontal cortex to the frontal eye field. As a result, mCherry-positive cells were observed in the lateral prefrontal cortex in all four injected hemispheres, indicating that double virus vector transfection is workable in the prefrontal network of the macaque brain. PMID:26193102

  14. Distributed Vector Estimation for Power- and Bandwidth-Constrained Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Sani, Alireza; Vosoughi, Azadeh

    2016-08-01

    We consider distributed estimation of a Gaussian vector with a linear observation model in an inhomogeneous wireless sensor network, where a fusion center (FC) reconstructs the unknown vector using a linear estimator. Sensors employ uniform multi-bit quantizers and binary PSK modulation, and communicate with the FC over orthogonal power- and bandwidth-constrained wireless channels. We study transmit power and quantization rate (measured in bits per sensor) allocation schemes that minimize the mean-square error (MSE). In particular, we derive two closed-form upper bounds on the MSE in terms of the optimization parameters, and propose coupled and decoupled resource allocation schemes that minimize these bounds. We show that the bounds are good approximations of the simulated MSE and that the performance of the proposed schemes approaches that of clairvoyant centralized estimation when the total transmit power or bandwidth is very large. We study how the power and rate allocation depend on the sensors' observation qualities and channel gains, as well as on the total transmit power and bandwidth constraints. Our simulations corroborate our analytical results and illustrate the superior performance of the proposed algorithms.
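    The per-sensor processing chain (uniform multi-bit quantization followed by binary PSK mapping) can be sketched as below; the quantizer range and the example parameters are illustrative assumptions, not values from the paper:

```python
def uniform_quantize(x, rate_bits, x_min=-1.0, x_max=1.0):
    """Uniform multi-bit quantizer: maps x to one of 2**rate_bits levels
    and returns the level index plus the mid-point reconstruction value."""
    levels = 2 ** rate_bits
    step = (x_max - x_min) / levels
    x = min(max(x, x_min), x_max - 1e-12)      # clip to the quantizer range
    index = int((x - x_min) / step)
    midpoint = x_min + (index + 0.5) * step
    return index, midpoint

def bpsk_symbols(index, rate_bits):
    """Binary PSK symbols (+1/-1) for the bits of the quantization index."""
    bits = format(index, f'0{rate_bits}b')
    return [1 if b == '1' else -1 for b in bits]

index, rec = uniform_quantize(0.37, rate_bits=3)   # 8 levels on [-1, 1]
symbols = bpsk_symbols(index, 3)                   # what the sensor transmits
```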

  15. Intercomparison of Terahertz Dielectric Measurements Using Vector Network Analyzer and Time-Domain Spectrometer

    NASA Astrophysics Data System (ADS)

    Naftaly, Mira; Shoaib, Nosherwan; Stokes, Daniel; Ridler, Nick M.

    2016-07-01

    We describe a method for direct intercomparison of terahertz permittivities at 200 GHz obtained by a Vector Network Analyzer and a Time-Domain Spectrometer, whereby both instruments operate in their customary configurations, i.e., the VNA in waveguide and TDS in free-space. The method employs material that can be inserted into a waveguide for VNA measurements or contained in a cell for TDS measurements. The intercomparison experiments were performed using two materials: petroleum jelly and a mixture of petroleum jelly with carbon powder. The obtained values of complex permittivities were similar within the measurement uncertainty. An intercomparison between VNA and TDS measurements is of importance because the two modalities are customarily employed separately and require different approaches. Since material permittivities can and have been measured using either platform, it is necessary to ascertain that the obtained data is similar in both cases.

  16. Diagnosing Anomalous Network Performance with Confidence

    SciTech Connect

    Settlemyer, Bradley W; Hodson, Stephen W; Kuehn, Jeffery A; Poole, Stephen W

    2011-04-01

    Variability in network performance is a major obstacle in effectively analyzing the throughput of modern high performance computer systems. High performance interconnection networks offer excellent best-case network latencies; however, highly parallel applications running on parallel machines typically require consistently high levels of performance to adequately leverage the massive amounts of available computing power. Performance analysts have usually quantified network performance using traditional summary statistics that assume the observational data is sampled from a normal distribution. In our examinations of network performance, we have found this method of analysis often provides too little data to understand anomalous network performance. Our tool, Confidence, instead uses an empirically derived probability distribution to characterize network performance. In this paper we describe several instances where the Confidence toolkit allowed us to understand and diagnose network performance anomalies that we could not adequately explore with the simple summary statistics provided by traditional measurement tools. In particular, we examine a multi-modal performance scenario encountered with an Infiniband interconnection network and we explore the performance repeatability on the custom Cray SeaStar2 interconnection network after a set of software and driver updates.
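    A minimal illustration of why an empirically derived distribution beats normal-assumption summary statistics for multi-modal performance data: for a bimodal latency sample, the mean describes a latency almost no packet actually experiences, while a simple histogram exposes both modes (the data below are synthetic, not Confidence output):

```python
import statistics

# Two synthetic latency populations (e.g., fast and slow paths), microseconds.
fast = [10.0, 10.5, 9.8, 10.2] * 25
slow = [50.0, 49.5, 50.4, 50.1] * 25
sample = fast + slow

mean = statistics.mean(sample)   # ~30 us: a latency almost never observed

def histogram(data, bin_width):
    """Empirical histogram keyed by bin lower edge."""
    counts = {}
    for x in data:
        b = int(x // bin_width) * bin_width
        counts[b] = counts.get(b, 0) + 1
    return counts

hist = histogram(sample, bin_width=5)   # reveals the two modes the mean hides
```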

  17. An Energy Scaled and Expanded Vector-Based Forwarding Scheme for Industrial Underwater Acoustic Sensor Networks with Sink Mobility.

    PubMed

    Wadud, Zahid; Hussain, Sajjad; Javaid, Nadeem; Bouk, Safdar Hussain; Alrajeh, Nabil; Alabed, Mohamad Souheil; Guizani, Nadra

    2017-09-30

    Industrial Underwater Acoustic Sensor Networks (IUASNs) come with intrinsic challenges like long propagation delay, small bandwidth, large energy consumption, three-dimensional deployment, and high deployment and battery replacement cost. Any routing strategy proposed for IUASNs must take these constraints into account. The vector-based forwarding schemes in the literature forward data packets to the sink using holding time and the location information of the sender, forwarder, and sink nodes. Holding time suppresses data broadcasts; however, it fails to keep energy and delay fairness in the network. To achieve this, we propose an Energy Scaled and Expanded Vector-Based Forwarding (ESEVBF) scheme. ESEVBF uses the residual energy of the node to scale, and the vector pipeline distance ratio to expand, the holding time. The resulting scaled and expanded holding times of the forwarding nodes differ significantly, which avoids multiple forwardings, reducing energy consumption and improving energy balance in the network. If a node has the minimum holding time among its neighbors, it shrinks the holding time and quickly forwards the data packets upstream. The performance of ESEVBF is analyzed in network scenarios with and without node mobility to ensure its effectiveness. Simulation results show that ESEVBF achieves low energy consumption, fewer forwarded data copies, and lower end-to-end delay.
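    A holding-time rule in the spirit of ESEVBF can be sketched as below; the exact expressions are in the paper, and the scaling and expansion factors used here are assumptions for illustration only:

```python
def holding_time(base_ht, residual_energy, initial_energy,
                 dist_to_vector_axis, pipeline_radius):
    """Illustrative ESEVBF-style holding time (factors are assumptions):
    nodes with more residual energy and closer to the forwarding vector
    axis hold the packet for a shorter time, so they forward first."""
    energy_scale = 1.0 - residual_energy / initial_energy    # low energy -> long hold
    distance_expand = dist_to_vector_axis / pipeline_radius  # off-axis -> long hold
    return base_ht * (energy_scale + distance_expand)

# A fresh on-axis node forwards before a depleted off-axis one:
fast = holding_time(1.0, residual_energy=0.9, initial_energy=1.0,
                    dist_to_vector_axis=0.1, pipeline_radius=1.0)
slow = holding_time(1.0, residual_energy=0.2, initial_energy=1.0,
                    dist_to_vector_axis=0.8, pipeline_radius=1.0)
```

    The large gap between the two holding times is what suppresses duplicate forwarding: by the time the slow node's timer expires, it has already overheard the fast node's transmission and can cancel its own.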

  18. Neural network vector quantization improves the diagnostic quality of computer-aided diagnosis in dynamic breast MRI

    NASA Astrophysics Data System (ADS)

    Wismüller, Axel; Meyer-Baese, Anke; Leinsinger, Gerda L.; Lange, Oliver; Schlossbauer, Thomas; Reiser, Maximilian F.

    2007-03-01

    We quantitatively evaluate a novel neural network pattern recognition approach for the characterization of diagnostically challenging breast lesions in contrast-enhanced dynamic breast MRI. Eighty-two women with 84 indeterminate mammographic lesions (BIRADS III-IV, 38/46 benign/malignant lesions confirmed by histopathology and follow-up, median lesion diameter 12 mm) were examined by dynamic contrast-enhanced breast MRI. The temporal signal dynamics yield an intensity time-series for each voxel, represented by a 6-dimensional feature vector. These vectors were clustered by minimal-free-energy Vector Quantization (VQ), which identifies groups of pixels with similar enhancement kinetics as prototypical time-series, so-called codebook vectors. For comparison, conventional analysis based on lesion-specific averaged signal-intensity time-courses was performed according to a standardized semi-quantitative evaluation score. For quantitative assessment of diagnostic accuracy, areas under ROC curves (AUC) were computed for both VQ and standard classification methods. VQ increased the diagnostic accuracy for classification between benign and malignant lesions, as confirmed by quantitative ROC analysis: VQ results (AUC=0.760) clearly outperformed the conventional evaluation of lesion-specific averaged time-series (AUC=0.693). Thus, the diagnostic benefit of neural network VQ for MR mammography analysis is quantitatively documented by ROC evaluation in a large database of diagnostically challenging small focal breast lesions. VQ outperforms the conventional method with respect to diagnostic accuracy.
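    The clustering step can be illustrated with plain k-means standing in for the paper's minimal-free-energy VQ; the two synthetic enhancement curves (wash-out vs. persistent kinetics) and the deterministic initialization are hypothetical simplifications:

```python
def kmeans_vq(vectors, k, iters=20):
    """Plain k-means as a stand-in for minimal-free-energy VQ: groups
    voxel intensity time-series into k prototypical codebook vectors."""
    codebook = [list(v) for v in vectors[:k]]       # deterministic init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(v, codebook[c])))
            clusters[j].append(v)
        for j, members in enumerate(clusters):
            if members:                              # centroid update
                codebook[j] = [sum(col) / len(members) for col in zip(*members)]
    return codebook

# Synthetic 6-point enhancement kinetics: "wash-out" (malignant-like)
# versus "persistent" (benign-like) intensity time-series.
washout    = [[0, 80, 100, 90, 75, 60], [0, 78, 98, 88, 72, 58]]
persistent = [[0, 30, 45, 55, 65, 70], [0, 32, 47, 57, 66, 72]]
codebook = kmeans_vq(washout + persistent, k=2)
```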

  19. A diagram for evaluating multiple aspects of model performance in simulating vector fields

    NASA Astrophysics Data System (ADS)

    Xu, Zhongfeng; Hou, Zhaolu; Han, Ying; Guo, Weidong

    2016-12-01

    Vector quantities, e.g., vector winds, play an extremely important role in climate systems. The energy and water exchanges between different regions are strongly dominated by wind, which in turn shapes the regional climate. Thus, how well climate models can simulate vector fields directly affects model performance in reproducing the nature of a regional climate. This paper devises a new diagram, termed the vector field evaluation (VFE) diagram, which is a generalized Taylor diagram and able to provide a concise evaluation of model performance in simulating vector fields. The diagram can measure how well two vector fields match each other in terms of three statistical variables, i.e., the vector similarity coefficient, root mean square length (RMSL), and root mean square vector difference (RMSVD). Similar to the Taylor diagram, the VFE diagram is especially useful for evaluating climate models. The pattern similarity of two vector fields is measured by a vector similarity coefficient (VSC) that is defined by the arithmetic mean of the inner product of normalized vector pairs. Examples are provided, showing that VSC can identify how close one vector field resembles another. Note that VSC can only describe the pattern similarity, and it does not reflect the systematic difference in the mean vector length between two vector fields. To measure the vector length, RMSL is included in the diagram. The third variable, RMSVD, is used to identify the magnitude of the overall difference between two vector fields. Examples show that the VFE diagram can clearly illustrate the extent to which the overall RMSVD is attributed to the systematic difference in RMSL and how much is due to the poor pattern similarity.
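    The three statistics of the VFE diagram follow directly from the definitions given above; the two small 2-D wind fields below are hypothetical:

```python
import math

def _norm(v):
    return math.sqrt(sum(c * c for c in v))

def vsc(field_a, field_b):
    """Vector similarity coefficient: arithmetic mean of the inner
    products of normalized vector pairs."""
    return sum(sum(a * b for a, b in zip(va, vb)) / (_norm(va) * _norm(vb))
               for va, vb in zip(field_a, field_b)) / len(field_a)

def rmsl(field):
    """Root mean square length of a vector field."""
    return math.sqrt(sum(_norm(v) ** 2 for v in field) / len(field))

def rmsvd(field_a, field_b):
    """Root mean square vector difference between two fields."""
    return math.sqrt(sum(sum((a - b) ** 2 for a, b in zip(va, vb))
                         for va, vb in zip(field_a, field_b)) / len(field_a))

obs   = [(1.0, 0.0), (0.0, 2.0)]        # "observed" wind vectors (u, v)
model = [(1.0, 0.1), (0.2, 1.8)]        # "simulated" wind vectors
```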

  20. HYBRID NEURAL NETWORK AND SUPPORT VECTOR MACHINE METHOD FOR OPTIMIZATION

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2005-01-01

    System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.

  1. Hybrid Neural Network and Support Vector Machine Method for Optimization

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2007-01-01

    System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.

  2. Design of thrust vectoring exhaust nozzles for real-time applications using neural networks

    NASA Technical Reports Server (NTRS)

    Prasanth, Ravi K.; Markin, Robert E.; Whitaker, Kevin W.

    1991-01-01

    Thrust vectoring continues to be an important issue in military aircraft system designs. A recently developed concept of vectoring aircraft thrust makes use of flexible exhaust nozzles. Subtle modifications in the nozzle wall contours produce a non-uniform flow field containing a complex pattern of shock and expansion waves. The end result, due to the asymmetric velocity and pressure distributions, is vectored thrust. Specification of the nozzle contours required for a desired thrust vector angle (an inverse design problem) has been achieved with genetic algorithms. This approach is computationally intensive and prevents the nozzles from being designed in real time, which is necessary for an operational aircraft system. An investigation was conducted into using genetic algorithms to train a neural network in an attempt to obtain two-dimensional nozzle contours in real time. Results show that genetic-algorithm-trained neural networks provide a viable, real-time alternative for designing thrust-vectoring nozzle contours. Thrust vector angles up to 20 deg were obtained within an average error of 0.0914 deg. The error surfaces encountered were highly degenerate, and thus the robustness of genetic algorithms was well suited for minimizing global errors.

  3. Calibration of Loop Antennas Using a Contactless Vector Network Analysis Method

    NASA Astrophysics Data System (ADS)

    Harm, M.; Kullmer, A.; Enders, A.

    2016-05-01

    The well-established calibration techniques for loop antennas have in common that either some parts of the antenna design or the field pattern of a standard field must be well known or fully theoretically describable. In this paper, a pure metrological approach circumventing these requirements is introduced and verified. It is based on a generic two-port network model for loop antennas and a traceable contactless vector network analysis method.

  4. Performance modeling of network data services

    SciTech Connect

    Haynes, R.A.; Pierson, L.G.

    1997-01-01

    Networks at major computational organizations are becoming increasingly complex. The introduction of large massively parallel computers and supercomputers with gigabyte memories is requiring greater and greater bandwidth for network data transfers to widely dispersed clients. For networks to provide adequate data transfer services to high performance computers and the remote users connected to them, the networking components must be optimized using a combination of internal and external performance criteria. This paper describes research done at Sandia National Laboratories to model network data services and to visualize the flow of data from source to sink when using the data services.

  5. Network Allocation Vector (NAV) Optimization for Underwater Handshaking-Based Protocols

    PubMed Central

    Cho, Junho; Shitiri, Ethungshan; Cho, Ho-Shin

    2016-01-01

    In this paper, we obtained the optimized network allocation vector (NAV) for underwater handshaking-based protocols, as inefficient determination of the NAV leads to unnecessarily long silent periods. We propose a scheme which determines the NAV by taking into account all possible propagation delays: propagation delay between a source and a destination; propagation delay between a source and the neighbors; and propagation delay between a destination and the neighbors. Such an approach effectively allows the NAV to be determined precisely equal to duration of a busy channel, and the silent period can be set commensurate to that duration. This allows for improvements in the performance of handshaking-based protocols, such as the carrier sense multiple access/collision avoidance (CSMA/CA) protocol, in terms of throughput and fairness. To evaluate the performance of the proposed scheme, performance comparisons were carried out through simulations with prior NAV setting methods. The simulation results show that the proposed scheme outperforms the other schemes in terms of throughput and fairness. PMID:28029122
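    A NAV computation in the spirit of the proposed scheme might look as follows; the function structure, parameter names, and the way the three propagation delays are combined are assumptions for illustration, not the authors' exact formulation:

```python
def nav_duration(handshake_time, d_src_dst, d_src_neighbor, d_dst_neighbor,
                 sound_speed=1500.0):
    """Illustrative NAV for an underwater handshake (structure assumed):
    the silent period covers the handshake exchange plus the worst-case
    acoustic propagation delay a neighbor can experience, plus the
    source-destination delay, so the NAV tracks the busy-channel time."""
    delays = [d / sound_speed
              for d in (d_src_dst, d_src_neighbor, d_dst_neighbor)]
    return handshake_time + max(delays) + delays[0]

# Distances in meters; underwater sound speed is roughly 1500 m/s.
nav = nav_duration(handshake_time=0.5, d_src_dst=900.0,
                   d_src_neighbor=600.0, d_dst_neighbor=1200.0)
```

    Setting the NAV from all three delays, rather than a fixed worst-case constant, is what lets the silent period match the actual busy-channel duration.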

  6. Radio to microwave dielectric characterisation of constitutive electromagnetic soil properties using vector network analyses

    NASA Astrophysics Data System (ADS)

    Schwing, M.; Wagner, N.; Karlovsek, J.; Chen, Z.; Williams, D. J.; Scheuermann, A.

    2016-04-01

    Knowledge of the constitutive broadband electromagnetic (EM) properties of porous media such as soils and rocks is essential in the theoretical and numerical modeling of EM wave propagation in the subsurface. This paper presents an experimental and numerical study on the performance of EM measuring instruments for broadband EM waves in the radio-microwave frequency range. 3-D numerical calculations of a specific sensor were carried out using the Ansys HFSS (high frequency structural simulator) to further evaluate the probe performance. In addition, six different sensors of varying design, application purpose, and operational frequency range were tested on different calibration liquids and a sample of fine-grained soil over a frequency range of 1 MHz-40 GHz using four vector network analysers. The resulting dielectric spectrum of the soil was analysed and interpreted using a 3-term Cole-Cole model under consideration of a direct-current conductivity contribution. Comparison of sensor performances on calibration materials and fine-grained soils showed consistency in the measured dielectric spectra over the frequency range from 100 MHz-2 GHz. By combining open-ended coaxial line and coaxial transmission line measurements, the observable frequency window could be extended to a truly broad range of 1 MHz-40 GHz.

  7. Network Allocation Vector (NAV) Optimization for Underwater Handshaking-Based Protocols.

    PubMed

    Cho, Junho; Shitiri, Ethungshan; Cho, Ho-Shin

    2016-12-24

    In this paper, we obtained the optimized network allocation vector (NAV) for underwater handshaking-based protocols, as inefficient determination of the NAV leads to unnecessarily long silent periods. We propose a scheme which determines the NAV by taking into account all possible propagation delays: propagation delay between a source and a destination; propagation delay between a source and the neighbors; and propagation delay between a destination and the neighbors. Such an approach effectively allows the NAV to be determined precisely equal to duration of a busy channel, and the silent period can be set commensurate to that duration. This allows for improvements in the performance of handshaking-based protocols, such as the carrier sense multiple access/collision avoidance (CSMA/CA) protocol, in terms of throughput and fairness. To evaluate the performance of the proposed scheme, performance comparisons were carried out through simulations with prior NAV setting methods. The simulation results show that the proposed scheme outperforms the other schemes in terms of throughput and fairness.

  8. Characterization of clustered microcalcifications in digitized mammograms using neural networks and support vector machines.

    PubMed

    Papadopoulos, A; Fotiadis, D I; Likas, A

    2005-06-01

    Detection and characterization of microcalcification clusters in mammograms is vital in daily clinical practice. The scope of this work is to present a novel computer-based automated method for the characterization of microcalcification clusters in digitized mammograms. The proposed method has been implemented in three stages: (a) the cluster detection stage, to identify clusters of microcalcifications; (b) the feature extraction stage, to compute the important features of each cluster; and (c) the classification stage, which provides the final characterization. In the classification stage, a rule-based system, an artificial neural network (ANN) and a support vector machine (SVM) have been implemented and evaluated using receiver operating characteristic (ROC) analysis. The proposed method was evaluated using the Nijmegen and Mammographic Image Analysis Society (MIAS) mammographic databases. The original feature set was enhanced by the addition of four rule-based features. For the Nijmegen dataset, the performance of the SVM was Az=0.79 and 0.77 for the original and enhanced feature set, respectively, while for the MIAS dataset the corresponding characterization scores were Az=0.81 and 0.80. Utilizing the neural network classification methodology, the corresponding performance for the Nijmegen dataset was Az=0.70 and 0.76, while for the MIAS dataset it was Az=0.73 and 0.78. Although the obtained high classification performance can be successfully applied to microcalcification cluster characterization, further studies must be carried out for the clinical evaluation of the system using larger datasets. The use of additional features originating either from the image itself (such as cluster location and orientation) or from the patient data may further improve the diagnostic value of the system.

  9. Effects of internal yaw-vectoring devices on the static performance of a pitch-vectoring nonaxisymmetric convergent-divergent nozzle

    NASA Technical Reports Server (NTRS)

    Asbury, Scott C.

    1993-01-01

    An investigation was conducted in the static test facility of the Langley 16-Foot Transonic Tunnel to evaluate the internal performance of a nonaxisymmetric convergent divergent nozzle designed to have simultaneous pitch and yaw thrust vectoring capability. This concept utilized divergent flap deflection for thrust vectoring in the pitch plane and flow-turning deflectors installed within the divergent flaps for yaw thrust vectoring. Modifications consisting of reducing the sidewall length and deflecting the sidewall outboard were investigated as means to increase yaw-vectoring performance. This investigation studied the effects of multiaxis (pitch and yaw) thrust vectoring on nozzle internal performance characteristics. All tests were conducted with no external flow, and nozzle pressure ratio was varied from 2.0 to approximately 13.0. The results indicate that this nozzle concept can successfully generate multiaxis thrust vectoring. Deflection of the divergent flaps produced resultant pitch vector angles that, although dependent on nozzle pressure ratio, were nearly equal to the geometric pitch vector angle. Losses in resultant thrust due to pitch vectoring were small or negligible. The yaw deflectors produced resultant yaw vector angles up to 21 degrees that were controllable by varying yaw deflector rotation. However, yaw deflector rotation resulted in significant losses in thrust ratios and, in some cases, nozzle discharge coefficient. Either of the sidewall modifications generally reduced these losses and increased maximum resultant yaw vector angle. During multiaxis (simultaneous pitch and yaw) thrust vectoring, little or no cross coupling between the thrust vectoring processes was observed.

  10. Target Localization in Wireless Sensor Networks Using Online Semi-Supervised Support Vector Regression

    PubMed Central

    Yoo, Jaehyun; Kim, H. Jin

    2015-01-01

    Machine learning has been successfully used for target localization in wireless sensor networks (WSNs) due to its accurate and robust estimation against highly nonlinear and noisy sensor measurements. For efficient and adaptive learning, this paper introduces online semi-supervised support vector regression (OSS-SVR). The first advantage of the proposed algorithm is that, based on the semi-supervised learning framework, it can reduce the required amount of labeled training data while maintaining accurate estimation. Second, with an extension to online learning, the proposed OSS-SVR automatically tracks changes in the system to be learned, such as varying noise characteristics. We compare the proposed algorithm with semi-supervised manifold learning, an online Gaussian process and online semi-supervised colocalization. The algorithms are evaluated for estimating the unknown location of a mobile robot in a WSN. The experimental results show that the proposed algorithm is more accurate with a smaller amount of labeled training data and is robust to varying noise. Moreover, the suggested algorithm computes quickly, maintaining the best localization performance in comparison with the other methods. PMID:26024420

  11. The holographic neural network: Performance comparison with other neural networks

    NASA Astrophysics Data System (ADS)

    Klepko, Robert

    1991-10-01

The artificial neural network shows promise for use in recognition of high resolution radar images of ships. The holographic neural network (HNN) promises a very large data storage capacity and excellent generalization capability, both of which can be achieved with only a few learning trials, unlike most neural networks, which require on the order of thousands of learning trials. The HNN is specially designed for pattern association storage, and mathematically realizes the storage and retrieval mechanisms of holograms. The pattern recognition capability of the HNN was studied, and its performance was compared with five other commonly used neural networks: the Adaline, Hamming, bidirectional associative memory, recirculation, and back propagation networks. The patterns used for testing represented artificial high resolution radar images of ships, and appear as a two dimensional topology of peaks with various amplitudes. The performance comparisons showed that the HNN does not perform as well as the other neural networks when using the same test data. However, modification of the data to make it appear more Gaussian distributed improved the performance of the network. The HNN performs best if the data is completely Gaussian distributed.

  12. Network management and performance monitoring at SLAC

    SciTech Connect

    Logg, C.A.; Cottrell, R.L.A.

    1995-08-01

The physical network plant and everything attached to it, including the software running on computers and other peripheral devices, is the system. Subjectively, the ultimate measurers of system performance are the users and their perceptions of the performance of their networked applications. The performance of a system is affected by the physical network plant (routers, bridges, hubs, etc.) as well as by every computer and peripheral device that is attached to it, and the software running on the computers and devices. Performance monitoring of a network must therefore include computer systems and services monitoring as well as monitoring of the physical network plant. This paper will describe how this challenge has been tackled at SLAC, and how, via the World Wide Web, this information is made available for quick perusal by concerned personnel and users.

  13. Performance Analysis of Wireless Networks

    DTIC Science & Technology

    2005-12-01


  14. Artificial Astrocytes Improve Neural Network Performance

    PubMed Central

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analyses of the performance of NN with different numbers of neurons or different architectures indicate that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  15. Artificial astrocytes improve neural network performance.

    PubMed

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analyses of the performance of NN with different numbers of neurons or different architectures indicate that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  16. Forecasting performance of support vector machine for the Poyang Lake's water level.

    PubMed

    Lan, Yingying

    2014-01-01

The growth of forecasting models has resulted in the development of an excellent model known as the support vector machine (SVM). SVMs can find a global optimal solution equipped with kernel functions. This research trains and tests the SVM network and constructs the support vector regression prediction model by using hydrologic data. Six hydrologic time series were calculated with different kernel functions (namely, linear, polynomial, and radial basis function (RBF)) to determine which kernel is more suitable for hydrologic time series in practice. A new solution is presented to identify good parameter values (C, g) by using grid-search and cross-validation. Results prove that the linear SVM is superior to the polynomial and RBF models and produced the most accurate results for modeling the behavior of hydrologic time series as complex hydrologic phenomena. The case study also shows that the calculation errors were correlated with data characteristics: more stable raw data yield a more accurate result, whereas more random data yield a less accurate result. Model performance could also depend on the nonlinearity of the base data.
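The grid-search with cross-validation described in this abstract can be sketched in stdlib Python. This is a minimal, hypothetical illustration rather than the paper's code: it substitutes kernel ridge regression (a close relative of SVR that shares the C and kernel-width g hyperparameters) for a full SVR solver, and scores each (C, g) pair by k-fold cross-validation.

```python
import math

def rbf(x, y, gamma):
    """Radial basis function kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_predict(Xtr, ytr, Xte, C, gamma):
    """Kernel ridge fit (regularization 1/C on the Gram diagonal), then predict."""
    n = len(Xtr)
    K = [[rbf(Xtr[i], Xtr[j], gamma) + (1.0 / C if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, ytr)
    return [sum(alpha[i] * rbf(Xtr[i], x, gamma) for i in range(n)) for x in Xte]

def grid_search(X, y, Cs, gammas, k=3):
    """k-fold cross-validation over the (C, g) grid; return the best pair
    and its mean squared validation error per sample."""
    folds = [list(range(i, len(X), k)) for i in range(k)]
    best, best_err = None, float("inf")
    for C in Cs:
        for g in gammas:
            err = 0.0
            for fold in folds:
                tr = [i for i in range(len(X)) if i not in fold]
                pred = fit_predict([X[i] for i in tr], [y[i] for i in tr],
                                   [X[i] for i in fold], C, g)
                err += sum((p - y[i]) ** 2 for p, i in zip(pred, fold))
            if err < best_err:
                best, best_err = (C, g), err
    return best, best_err / len(X)
```

A real hydrologic study would use an SVR library for the inner model; the point here is only the search skeleton over the (C, g) grid.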

  17. High Performance Networks for High Impact Science

    SciTech Connect

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  18. Internal performance of two nozzles utilizing gimbal concepts for thrust vectoring

    NASA Technical Reports Server (NTRS)

    Berrier, Bobby L.; Taylor, John G.

    1990-01-01

The internal performance of an axisymmetric convergent-divergent nozzle and a nonaxisymmetric convergent-divergent nozzle, both of which utilized a gimbal-type mechanism for thrust vectoring, was evaluated in the Static Test Facility of the Langley 16-Foot Transonic Tunnel. The nonaxisymmetric nozzle used the gimbal concept for yaw thrust vectoring only; pitch thrust vectoring was accomplished by simultaneous deflection of the upper and lower divergent flaps. The model geometric parameters investigated were pitch vector angle for the axisymmetric nozzle and pitch vector angle, yaw vector angle, nozzle throat aspect ratio, and nozzle expansion ratio for the nonaxisymmetric nozzle. All tests were conducted with no external flow, and nozzle pressure ratio was varied from 2.0 to approximately 12.0.

  19. Static internal performance of a single expansion ramp nozzle with multiaxis thrust vectoring capability

    NASA Technical Reports Server (NTRS)

    Capone, Francis J.; Schirmer, Alberto W.

    1993-01-01

    An investigation was conducted at static conditions in order to determine the internal performance characteristics of a multiaxis thrust vectoring single expansion ramp nozzle. Yaw vectoring was achieved by deflecting yaw flaps in the nozzle sidewall into the nozzle exhaust flow. In order to eliminate any physical interference between the variable angle yaw flap deflected into the exhaust flow and the nozzle upper ramp and lower flap which were deflected for pitch vectoring, the downstream corners of both the nozzle ramp and lower flap were cut off to allow for up to 30 deg of yaw vectoring. The effects of nozzle upper ramp and lower flap cutout, yaw flap hinge line location and hinge inclination angle, sidewall containment, geometric pitch vector angle, and geometric yaw vector angle were studied. This investigation was conducted in the static-test facility of the Langley 16-Foot Transonic Tunnel at nozzle pressure ratios up to 8.0.

  20. Enhancing neural-network performance via assortativity.

    PubMed

    de Franciscis, Sebastiano; Johnson, Samuel; Torres, Joaquín J

    2011-03-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations--assortativity--on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.
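The quantity varied in this study, degree-degree correlation (assortativity), can be computed for any edge list as the Pearson correlation of degrees across edge endpoints. A small stdlib-Python sketch (illustrative, not taken from the paper):

```python
def degree_assortativity(edges):
    """Degree assortativity of an undirected graph: Pearson correlation of
    endpoint degrees over all directed edge ends. (Newman's definition uses
    excess degree k-1; Pearson correlation is shift-invariant, so plain
    degrees give the same coefficient. Undefined for regular graphs, where
    the degree variance is zero.)"""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # each undirected edge contributes both orientations (u,v) and (v,u)
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```

A star graph is maximally disassortative (hubs connect only to leaves, coefficient -1), which is the kind of structure the paper contrasts with assortative networks where hubs connect to hubs.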

  1. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node "brain" with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  2. Enhancing neural-network performance via assortativity

    SciTech Connect

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-03-15

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.

  3. Measurements by a Vector Network Analyzer at 325 to 508 GHz

    NASA Technical Reports Server (NTRS)

    Fung, King Man; Samoska, Lorene; Chattopadhyay, Goutam; Gaier, Todd; Kangaslahti, Pekka; Pukala, David; Lau, Yuenie; Oleson, Charles; Denning, Anthony

    2008-01-01

Recent experiments were performed in which return loss and insertion loss of waveguide test assemblies in the frequency range from 325 to 508 GHz were measured by use of a swept-frequency two-port vector network analyzer (VNA) test set. The experiments were part of a continuing effort to develop means of characterizing passive and active electronic components and systems operating at ever-increasing frequencies. The waveguide test assemblies comprised WR-2.2 end sections collinear with WR-3.3 middle sections. The test set, assembled from commercially available components, included a 50-GHz VNA scattering-parameter test set and external signal synthesizers, augmented with recently developed frequency extenders, and further augmented with attenuators and amplifiers as needed to adjust radiofrequency and intermediate-frequency power levels between the aforementioned components. The tests included line-reflect-line calibration procedures, using WR-2.2 waveguide shims as the "line" standards and waveguide flange short circuits as the "reflect" standards. Calibrated dynamic ranges somewhat greater than 20 dB for return loss and 35 dB for insertion loss were achieved. The measurement data of the test assemblies were found to substantially agree with results of computational simulations.

  4. Comparing error minimized extreme learning machines and support vector sequential feed-forward neural networks.

    PubMed

    Romero, Enrique; Alquézar, René

    2012-01-01

    Recently, error minimized extreme learning machines (EM-ELMs) have been proposed as a simple and efficient approach to build single-hidden-layer feed-forward networks (SLFNs) sequentially. They add random hidden nodes one by one (or group by group) and update the output weights incrementally to minimize the sum-of-squares error in the training set. Other very similar methods that also construct SLFNs sequentially had been reported earlier with the main difference that their hidden-layer weights are a subset of the data instead of being random. These approaches are referred to as support vector sequential feed-forward neural networks (SV-SFNNs), and they are a particular case of the sequential approximation with optimal coefficients and interacting frequencies (SAOCIF) method. In this paper, it is firstly shown that EM-ELMs can also be cast as a particular case of SAOCIF. In particular, EM-ELMs can easily be extended to test some number of random candidates at each step and select the best of them, as SAOCIF does. Moreover, it is demonstrated that the cost of the computation of the optimal output-layer weights in the originally proposed EM-ELMs can be improved if it is replaced by the one included in SAOCIF. Secondly, we present the results of an experimental study on 10 benchmark classification and 10 benchmark regression data sets, comparing EM-ELMs and SV-SFNNs, that was carried out under the same conditions for the two models. Although both models have the same (efficient) computational cost, a statistically significant improvement in generalization performance of SV-SFNNs vs. EM-ELMs was found in 12 out of the 20 benchmark problems.
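The growth procedure that EM-ELMs and SV-SFNNs share, adding hidden nodes sequentially and re-optimizing only the output weights, can be sketched as follows. This is a naive stdlib-Python illustration assuming sigmoid hidden nodes: it re-solves the output-weight least-squares problem from scratch at each step (with a tiny ridge term for numerical stability), whereas EM-ELM's contribution is precisely to update those weights incrementally instead.

```python
import math, random

def lstsq(H, y):
    """Least-squares output weights via ridge-stabilized normal equations."""
    m = len(H[0])
    A = [[sum(H[r][i] * H[r][j] for r in range(len(H))) +
          (1e-8 if i == j else 0.0) for j in range(m)] for i in range(m)]
    b = [sum(H[r][i] * y[r] for r in range(len(H))) for i in range(m)]
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # Gaussian elimination
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, m):
            f = M[r][c] / M[c][c]
            for k in range(c, m + 1):
                M[r][k] -= f * M[c][k]
    w = [0.0] * m
    for r in range(m - 1, -1, -1):
        w[r] = (M[r][m] - sum(M[r][k] * w[k] for k in range(r + 1, m))) / M[r][r]
    return w

def grow_slfn(X, y, max_nodes, rng):
    """Add random sigmoid hidden nodes one by one; re-fit output weights and
    record the training sum-of-squares error after each addition."""
    nodes, errors = [], []
    for _ in range(max_nodes):
        a = [rng.uniform(-1, 1) for _ in X[0]]   # random input weights
        b = rng.uniform(-1, 1)                   # random bias
        nodes.append((a, b))
        H = [[1.0 / (1.0 + math.exp(-(sum(ai * xi for ai, xi in zip(av, x)) + bv)))
              for av, bv in nodes] for x in X]
        w = lstsq(H, y)
        sse = sum((sum(h * wi for h, wi in zip(row, w)) - t) ** 2
                  for row, t in zip(H, y))
        errors.append(sse)
    return errors
```

Because the hidden-layer columns are nested from one step to the next, the exact least-squares training error can never increase as nodes are added; the SV-SFNN variant would draw the hidden-node parameters from the data instead of at random.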

  5. Evaluation of Raman spectra of human brain tumor tissue using the learning vector quantization neural network

    NASA Astrophysics Data System (ADS)

    Liu, Tuo; Chen, Changshui; Shi, Xingzhe; Liu, Chengyong

    2016-05-01

The Raman spectra of tissue from 20 brain tumor patients were recorded using a confocal microlaser Raman spectroscope with 785 nm excitation in vitro. A total of 133 spectra were investigated. Spectral peaks from normal white matter tissue and tumor tissue were analyzed. Algorithms such as principal component analysis, linear discriminant analysis, and the support vector machine are commonly used to analyze spectral data. However, in this study, we employed the learning vector quantization (LVQ) neural network, which is typically used for pattern recognition. By applying the proposed method, a normal diagnosis accuracy of 85.7% and a glioma diagnosis accuracy of 89.5% were achieved. The LVQ neural network is a recent approach to mining Raman spectral information. Moreover, it is fast and convenient, does not require a spectral peak counterpart, and achieves relatively high accuracy. It can be used in brain tumor prognostics and in helping to optimize the cutting margins of gliomas.
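A minimal sketch of LVQ1, the basic learning vector quantization rule (the abstract does not say which LVQ variant was used, so this is illustrative): labeled prototypes compete for each sample, and the winner moves toward same-class samples and away from different-class samples.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((u - v) ** 2 for u, v in zip(a, b))

def train_lvq1(samples, labels, protos_per_class, epochs=30, lr=0.1, seed=1):
    """LVQ1: the winning prototype is attracted to a same-class sample and
    repelled by a different-class sample; the learning rate decays linearly."""
    rng = random.Random(seed)
    protos = []
    for c in sorted(set(labels)):                 # init prototypes per class
        pool = [s for s, l in zip(samples, labels) if l == c]
        for p in rng.sample(pool, protos_per_class):
            protos.append((list(p), c))
    order = list(range(len(samples)))
    for ep in range(epochs):
        rng.shuffle(order)
        rate = lr * (1.0 - ep / epochs)
        for i in order:
            x, lab = samples[i], labels[i]
            w, c = min(protos, key=lambda pc: dist2(pc[0], x))
            sign = 1.0 if c == lab else -1.0
            for d in range(len(w)):               # move toward or away
                w[d] += sign * rate * (x[d] - w[d])
    return protos

def classify(protos, x):
    """Assign the class of the nearest prototype."""
    return min(protos, key=lambda pc: dist2(pc[0], x))[1]
```

For spectra, each sample vector would hold intensities at selected Raman shifts; here any fixed-length feature vector works.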

  6. Belief network algorithms: A study of performance

    SciTech Connect

    Jitnah, N.

    1996-12-31

    This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.

  7. Speech recognition method based on genetic vector quantization and BP neural network

    NASA Astrophysics Data System (ADS)

    Gao, Li'ai; Li, Lihua; Zhou, Jian; Zhao, Qiuxia

    2009-07-01

Vector quantization is one of the most popular codebook design methods for speech recognition at present. In codebook design, the traditional LBG algorithm has the advantage of fast convergence, but it easily becomes trapped in local optima and is sensitive to the initial codebook. Because a genetic algorithm is capable of finding a globally optimal result, this paper proposes a hybrid clustering method, GA-L, based on the genetic algorithm and the LBG algorithm to improve the codebook, and then uses a genetic neural network for speech recognition, thereby searching for a globally optimized codebook of the training vector space. The experiments show that the neural network identification method based on the genetic algorithm can escape local maxima and the restrictions of the initial codebook, outperforming the standard genetic algorithm and the BP neural network algorithm on data from various sources; with the same GA-VQ codebook, the genetic BP neural network achieves a higher recognition rate and unique application advantages over the general BP neural network, gaining in both time and efficiency.
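The LBG codebook design that the paper's GA-L method builds on can be sketched in stdlib Python (illustrative only; the codebook size should be a power of two because each round doubles the book by perturbation splitting):

```python
def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def lbg_codebook(vectors, size, eps=0.01, iters=20):
    """LBG: start from the global centroid; repeatedly split every codeword
    into (1+eps)*w and (1-eps)*w, then refine with Lloyd (k-means) passes."""
    dim = len(vectors[0])
    book = [[sum(v[d] for v in vectors) / len(vectors) for d in range(dim)]]
    while len(book) < size:
        book = [[c * (1 + s) for c in w] for w in book for s in (eps, -eps)]
        for _ in range(iters):
            cells = [[] for _ in book]            # nearest-codeword partition
            for v in vectors:
                j = min(range(len(book)), key=lambda i: dist2(book[i], v))
                cells[j].append(v)
            for j, cell in enumerate(cells):      # codewords -> cell centroids
                if cell:
                    book[j] = [sum(v[d] for v in cell) / len(cell)
                               for d in range(dim)]
    return book

def distortion(vectors, book):
    """Mean squared quantization error of the codebook on the data."""
    return sum(min(dist2(w, v) for w in book) for v in vectors) / len(vectors)
```

The GA-L idea in the abstract would replace this purely local refinement with genetic search over codebooks to escape local optima; the sketch shows only the LBG baseline being improved upon.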

  8. Performance analysis of a VSAT network

    NASA Astrophysics Data System (ADS)

    Karam, Fouad G.; Miller, Neville; Karam, Antoine

With the growing need for efficient satellite networking facilities, very small aperture terminal (VSAT) technology emerges as the leading edge of satellite communications. Achieving the required performance of a VSAT network is dictated by the multiple access technique utilized. Determining the inbound access method best suited for a particular application involves trade-offs between response time and space segment utilization. In this paper, the slotted Aloha and dedicated stream access techniques are compared. It is shown that network performance is dependent on the traffic offered from remote earth stations as well as the sensitivity of customers' applications to satellite delay.

  9. Diversity Performance Analysis on Multiple HAP Networks.

    PubMed

    Dong, Feihong; Li, Min; Gong, Xiangwu; Li, Hongjun; Gao, Fengyue

    2015-06-30

    One of the main design challenges in wireless sensor networks (WSNs) is achieving a high-data-rate transmission for individual sensor devices. The high altitude platform (HAP) is an important communication relay platform for WSNs and next-generation wireless networks. Multiple-input multiple-output (MIMO) techniques provide the diversity and multiplexing gain, which can improve the network performance effectively. In this paper, a virtual MIMO (V-MIMO) model is proposed by networking multiple HAPs with the concept of multiple assets in view (MAV). In a shadowed Rician fading channel, the diversity performance is investigated. The probability density function (PDF) and cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) are derived. In addition, the average symbol error rate (ASER) with BPSK and QPSK is given for the V-MIMO model. The system capacity is studied for both perfect channel state information (CSI) and unknown CSI individually. The ergodic capacity with various SNR and Rician factors for different network configurations is also analyzed. The simulation results validate the effectiveness of the performance analysis. It is shown that the performance of the HAPs network in WSNs can be significantly improved by utilizing the MAV to achieve overlapping coverage, with the help of the V-MIMO techniques.
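The ASER analysis can be illustrated with a Monte Carlo sketch (hypothetical, stdlib Python): it simulates plain Rician fading rather than the paper's shadowed Rician model, and estimates the BPSK symbol error rate by counting coherent-detection errors.

```python
import math, random

def bpsk_ser_rician(snr_db, k_factor, n=20000, seed=7):
    """Monte Carlo symbol error rate of BPSK over flat Rician fading with
    Rician factor K (power ratio of line-of-sight to scattered components).
    The channel gain is normalized so that E[|h|^2] = 1."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10.0)
    los = math.sqrt(k_factor / (k_factor + 1.0))        # LOS component
    sig = math.sqrt(1.0 / (2.0 * (k_factor + 1.0)))     # scatter std dev
    errors = 0
    for _ in range(n):
        hr = los + sig * rng.gauss(0.0, 1.0)
        hi = sig * rng.gauss(0.0, 1.0)
        h = math.hypot(hr, hi)                          # fading amplitude
        # error iff Gaussian noise flips the sign of the faded symbol,
        # equivalent to the Q(sqrt(2*snr)*h) error probability for BPSK
        if h * math.sqrt(snr) + rng.gauss(0.0, math.sqrt(0.5)) < 0.0:
            errors += 1
    return errors / n
```

The paper derives closed-form PDF/CDF and ASER expressions for the shadowed channel; a simulation like this is only a sanity-check companion to such analysis.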

  10. Performance benchmarking of core optical networking paradigms.

    PubMed

    Drakos, Andreas; Orphanoudakis, Theofanis G; Stavdas, Alexandros

    2012-07-30

    The sustainability of Future Internet critically depends on networking paradigms able to provide optimum and balanced performance over an extended set of efficiency and Quality of Service (QoS) metrics. In this work we benchmark the most established networking modes through appropriate performance metrics for three network topologies. The results demonstrate that the static reservation of WDM channels, as used in IP/WDM schemes, is severely limiting scalability, since it cannot efficiently adapt to the dynamic traffic fluctuations that are frequently observed in today's networks. Optical Burst Switching (OBS) schemes do provide dynamic resource reservation but their performance is compromised due to high burst loss. It is shown that the CANON (Clustered Architecture for Nodes in an Optical Network) architecture exploiting statistical multiplexing over a large scale core optical network and efficient grooming at appropriate granularity levels could be a viable alternative to existing static as well as dynamic wavelength reservation schemes. Through extensive simulation results we quantify performance gains and we show that CANON demonstrates the highest efficiency achieving both targets for statistical multiplexing gains and QoS guarantees.

  11. Diversity Performance Analysis on Multiple HAP Networks

    PubMed Central

    Dong, Feihong; Li, Min; Gong, Xiangwu; Li, Hongjun; Gao, Fengyue

    2015-01-01

    One of the main design challenges in wireless sensor networks (WSNs) is achieving a high-data-rate transmission for individual sensor devices. The high altitude platform (HAP) is an important communication relay platform for WSNs and next-generation wireless networks. Multiple-input multiple-output (MIMO) techniques provide the diversity and multiplexing gain, which can improve the network performance effectively. In this paper, a virtual MIMO (V-MIMO) model is proposed by networking multiple HAPs with the concept of multiple assets in view (MAV). In a shadowed Rician fading channel, the diversity performance is investigated. The probability density function (PDF) and cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) are derived. In addition, the average symbol error rate (ASER) with BPSK and QPSK is given for the V-MIMO model. The system capacity is studied for both perfect channel state information (CSI) and unknown CSI individually. The ergodic capacity with various SNR and Rician factors for different network configurations is also analyzed. The simulation results validate the effectiveness of the performance analysis. It is shown that the performance of the HAPs network in WSNs can be significantly improved by utilizing the MAV to achieve overlapping coverage, with the help of the V-MIMO techniques. PMID:26134102

  12. Performance characterization of a broadband vector Apodizing Phase Plate coronagraph.

    PubMed

    Otten, Gilles P P L; Snik, Frans; Kenworthy, Matthew A; Miskiewicz, Matthew N; Escuti, Michael J

    2014-12-01

    One of the main challenges for the direct imaging of planets around nearby stars is the suppression of the diffracted halo from the primary star. Coronagraphs are angular filters that suppress this diffracted halo. The Apodizing Phase Plate coronagraph modifies the pupil-plane phase with an anti-symmetric pattern to suppress diffraction over a 180 degree region from 2 to 7 λ/D and achieves a mean raw contrast of 10(-4) in this area, independent of the tip-tilt stability of the system. Current APP coronagraphs implemented using classical phase techniques are limited in bandwidth and suppression region geometry (i.e. only on one side of the star). In this paper, we introduce the vector-APP (vAPP) whose phase pattern is implemented through the vector phase imposed by the orientation of patterned liquid crystals. Beam-splitting according to circular polarization states produces two, complementary PSFs with dark holes on either side. We have developed a prototype vAPP that consists of a stack of three twisting liquid crystal layers to yield a bandwidth of 500 to 900 nm. We characterize the properties of this device using reconstructions of the pupil-plane pattern, and of the ensuing PSF structures. By imaging the pupil between crossed and parallel polarizers we reconstruct the fast axis pattern, transmission, and retardance of the vAPP, and use this as input for a PSF model. This model includes aberrations of the laboratory set-up, and matches the measured PSF, which shows a raw contrast of 10(-3.8) between 2 and 7 λ/D in a 135 degree wedge. The vAPP coronagraph is relatively easy to manufacture and can be implemented together with a broadband quarter-wave plate and Wollaston prism in a pupil wheel in high-contrast imaging instruments. 
The liquid crystal patterning technique permits the application of extreme phase patterns with deeper contrasts inside the dark holes, and the multilayer liquid crystal achromatization technique enables unprecedented spectral bandwidths.

  13. Performance analysis of distributed symmetric sparse matrix vector multiplication algorithm for multi-core architectures

    DOE PAGES

    Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; ...

    2015-07-14

Sparse matrix vector multiply (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. In this paper, our target systems are high-end multi-core architectures and we use a message passing interface + open multiprocessing (MPI+OpenMP) hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topology. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.
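The payoff of exploiting symmetry can be seen in a serial sketch (illustrative only; the paper's implementation is distributed with MPI+OpenMP, while this uses simple coordinate triplets): storing only the upper triangle halves the matrix data, with each off-diagonal entry applied to two rows.

```python
def spmv_symmetric(n, rows, cols, vals, x):
    """y = A x for a symmetric n x n matrix A stored as upper-triangle
    coordinate triplets only: each off-diagonal entry (r, c, v) contributes
    to both row r and row c, so storage and data movement are halved."""
    y = [0.0] * n
    for r, c, v in zip(rows, cols, vals):
        y[r] += v * x[c]
        if r != c:
            y[c] += v * x[r]   # mirrored lower-triangle contribution
    return y
```

The scatter into row `c` is also what makes the distributed version in the paper harder: partial results for rows owned by other processes must be communicated, which is why overlapping communication with computation matters there.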

  14. Performance analysis of distributed symmetric sparse matrix vector multiplication algorithm for multi-core architectures

    SciTech Connect

    Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; Maris, Pieter; Vary, James P.

    2015-07-14

Sparse matrix vector multiply (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. In this paper, our target systems are high-end multi-core architectures and we use a message passing interface + open multiprocessing (MPI+OpenMP) hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topology. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.

  15. Predictable nonwandering localization of covariant Lyapunov vectors and cluster synchronization in scale-free networks of chaotic maps.

    PubMed

    Kuptsov, Pavel V; Kuptsova, Anna V

    2014-09-01

Covariant Lyapunov vectors for scale-free networks of Hénon maps are highly localized. We reveal two mechanisms of the localization, related to full and phase cluster synchronization of network nodes. In both cases the localization nodes remain unaltered in the course of the dynamics, i.e., the localization is nonwandering. Moreover, it is predictable: the localization nodes are found to have specific dynamical and topological properties, and they can be found without computing the covariant vectors. This is an example of explicit relations between the system topology, its phase-space dynamics, and the associated tangent-space dynamics of covariant Lyapunov vectors.

  16. Application of structured support vector machine backpropagation to a convolutional neural network for human pose estimation.

    PubMed

    Witoonchart, Peerajak; Chongstitvatana, Prabhas

    2017-08-01

In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss-augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss-augmented inference and the backpropagation calculates the gradient from the loss-augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network, called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library.

  17. Static performance of an axisymmetric nozzle with post-exit vanes for multiaxis thrust vectoring

    NASA Technical Reports Server (NTRS)

    Berrier, Bobby L.; Mason, Mary L.

    1988-01-01

    An investigation was conducted in the static test facility of the Langley 16-Foot Transonic Tunnel to determine the flow-turning capability and the nozzle internal performance of an axisymmetric convergent-divergent nozzle with post-exit vanes installed for multiaxis thrust vectoring. The effects of vane curvature, vane location relative to the nozzle exit, number of vanes, and vane deflection angle were determined. A comparison of the post-exit-vane thrust-vectoring concept with other thrust-vectoring concepts is provided. All tests were conducted with no external flow, and nozzle pressure ratio was varied from 1.6 to 6.0.

  18. Spatial Variance in Resting fMRI Networks of Schizophrenia Patients: An Independent Vector Analysis.

    PubMed

    Gopal, Shruti; Miller, Robyn L; Michael, Andrew; Adali, Tulay; Cetin, Mustafa; Rachakonda, Srinivas; Bustillo, Juan R; Cahill, Nathan; Baum, Stefi A; Calhoun, Vince D

    2016-01-01

    Spatial variability in resting functional MRI (fMRI) brain networks has not been well studied in schizophrenia, a disease known for both neurodevelopmental and widespread anatomic changes. Motivated by abundant evidence of neuroanatomical variability from previous studies of schizophrenia, we draw upon a relatively new approach called independent vector analysis (IVA) to assess this variability in resting fMRI networks. IVA is a blind-source separation algorithm, which segregates fMRI data into temporally coherent but spatially independent networks and has been shown to be especially good at capturing spatial variability among subjects in the extracted networks. We introduce several new ways to quantify differences in variability of IVA-derived networks between schizophrenia patients (SZs = 82) and healthy controls (HCs = 89). Voxelwise amplitude analyses showed significant group differences in the spatial maps of auditory cortex, the basal ganglia, the sensorimotor network, and visual cortex. Tests for differences (HC-SZ) in the spatial variability maps suggest that, at rest, SZs exhibit more activity within externally focused sensory and integrative networks and less activity in the default mode network thought to be related to internal reflection. Additionally, tests for difference of variance between groups further emphasize that SZs exhibit greater network variability. These results, consistent with our prediction of increased spatial variability within SZs, enhance our understanding of the disease and suggest that it is not just the amplitude of connectivity that is different in schizophrenia, but also the consistency in spatial connectivity patterns across subjects.

  19. New model for prediction binary mixture of antihistamine decongestant using artificial neural networks and least squares support vector machine by spectrophotometry method

    NASA Astrophysics Data System (ADS)

    Mofavvaz, Shirin; Sohrabi, Mahmoud Reza; Nezamzadeh-Ejhieh, Alireza

    2017-07-01

    In the present study, artificial neural networks (ANNs) and least squares support vector machines (LS-SVM), as intelligent methods based on absorption spectra in the range of 230-300 nm, have been used for determination of antihistamine decongestant contents. In the first step, one type of network (feed-forward back-propagation) from the artificial neural network family, with two different training algorithms, Levenberg-Marquardt (LM) and gradient descent with momentum and adaptive learning rate back-propagation (GDX), was employed and its performance was evaluated. The performance of the LM algorithm was better than that of the GDX algorithm. In the second step, the radial basis network was utilized and the results were compared with those of the previous network. In the last step, the other intelligent method, least squares support vector machine, was proposed to construct the antihistamine decongestant prediction model, and the results were compared with those of the two aforementioned networks. The values of the statistical parameters mean square error (MSE), regression coefficient (R2), correlation coefficient (r), mean recovery (%), and relative standard deviation (RSD) were used for selecting the best model among these methods. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level, applied to the comparison of the suggested and reference methods, showed that there were no significant differences between them.
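    For concreteness, LS-SVM regression as used here reduces training to a single linear solve of the dual system; a hedged sketch with an RBF kernel (function names and hyperparameters are illustrative, not the paper's settings):

```python
import numpy as np

def rbf(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """Train LS-SVM regression: solve the (n+1)x(n+1) dual system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(Xtr, alpha, b, Xte, sigma=1.0):
    """Predict f(x) = sum_i alpha_i K(x, x_i) + b."""
    return rbf(Xte, Xtr, sigma) @ alpha + b
```

    Trained on 20 samples of y = x^2 on [-1, 1], the model recovers f(0.5) close to 0.25.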

  1. Periodically Estimated Reflection Coefficient Measurement Uncertainties for a Vector Network Analyzer

    SciTech Connect

    DUDA JR.,LEONARD E.

    1999-12-08

    This paper describes the model and method used to obtain the periodically estimated uncertainties for measurement of the scattering parameters S{sub 11} and S{sub 22} on a Vector Network Analyzer (VNA). A thru-reflect-line (TRL) method is employed as a second tier calibration to obtain uncertainty estimates using an NIST-calibrated standard. An example of tabulated listings of these uncertainty estimates is presented and the uncertainties obtained for a VNA with 7 mm, 3.5 mm, and type N coaxial interfaces used in the laboratory over several years are summarized.

  2. Performance Analysis of IIUM Wireless Campus Network

    NASA Astrophysics Data System (ADS)

    Abd Latif, Suhaimi; Masud, Mosharrof H.; Anwar, Farhat

    2013-12-01

    International Islamic University Malaysia (IIUM) is one of the leading universities in the world in terms of quality of education, which has been achieved by providing numerous facilities, including wireless services, to every enrolled student. The quality of this wireless service is controlled and monitored by the Information Technology Division (ITD), an ISO-standardized organization under the university. This paper aims to investigate the constraints of the wireless campus network of IIUM. It evaluates the performance of the IIUM wireless campus network in terms of delay, throughput and jitter. The QualNet 5.2 simulator tool has been employed to measure these performance metrics of the IIUM wireless campus network. The observations from the simulation results could guide ITD in further improving its wireless services.
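    The three metrics evaluated here have simple per-packet definitions; a hedged sketch of computing them from a delivered-packet trace (the record format and function name are mine, not QualNet's):

```python
def link_metrics(records, duration_s):
    """records: (send_time, recv_time, payload_bytes) per delivered
    packet, in arrival order. Returns (mean one-way delay in s,
    mean delay variation a.k.a. jitter in s, throughput in bit/s)."""
    delays = [recv - send for send, recv, _ in records]
    mean_delay = sum(delays) / len(delays)
    # jitter as the mean absolute difference of consecutive delays
    jitter = (sum(abs(b - a) for a, b in zip(delays, delays[1:]))
              / max(len(delays) - 1, 1))
    throughput = 8 * sum(size for _, _, size in records) / duration_s
    return mean_delay, jitter, throughput
```

    For three 1000-byte packets delivered with delays of 100, 110, and 100 ms over one second, this gives roughly 103 ms mean delay, 10 ms jitter, and 24 kbit/s throughput.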

  3. Static performance investigation of a skewed-throat multiaxis thrust-vectoring nozzle concept

    NASA Technical Reports Server (NTRS)

    Wing, David J.

    1994-01-01

    The static performance of a jet exhaust nozzle which achieves multiaxis thrust vectoring by physically skewing the geometric throat has been characterized in the static test facility of the 16-Foot Transonic Tunnel at NASA Langley Research Center. The nozzle has an asymmetric internal geometry defined by four surfaces: a convergent-divergent upper surface with its ridge perpendicular to the nozzle centerline, a convergent-divergent lower surface with its ridge skewed relative to the nozzle centerline, an outwardly deflected sidewall, and a straight sidewall. The primary goal of the concept is to provide efficient yaw thrust vectoring by forcing the sonic plane (nozzle throat) to form at a yaw angle defined by the skewed ridge of the lower surface contour. A secondary goal is to provide multiaxis thrust vectoring by combining the skewed-throat yaw-vectoring concept with upper and lower pitch flap deflections. The geometric parameters varied in this investigation included lower surface ridge skew angle, nozzle expansion ratio (divergence angle), aspect ratio, pitch flap deflection angle, and sidewall deflection angle. Nozzle pressure ratio was varied from 2 to a high of 11.5 for some configurations. The results of the investigation indicate that efficient, substantial multiaxis thrust vectoring was achieved by the skewed-throat nozzle concept. However, certain control surface deflections destabilized the internal flow field, which resulted in substantial shifts in the position and orientation of the sonic plane and had an adverse effect on thrust-vectoring and weight flow characteristics. Increasing the expansion ratio stabilized the location of the sonic plane. The asymmetric design resulted in interdependent pitch and yaw thrust vectoring as well as nonzero thrust-vector angles with undeflected control surfaces. By skewing the ridges of both the upper and lower surface contours, the interdependency between pitch and yaw thrust vectoring may be eliminated.

  4. On-wafer vector network analyzer measurements in the 220-325 GHz frequency band

    NASA Technical Reports Server (NTRS)

    Fung, King Man Andy; Dawson, D.; Samoska, L.; Lee, K.; Oleson, C.; Boll, G.

    2006-01-01

    We report on a full two-port on-wafer vector network analyzer test set for the 220-325 GHz (WR3) frequency band. The test set utilizes Oleson Microwave Labs frequency extenders with the Agilent 8510C network analyzer. Two port on-wafer measurements are made with GGB Industries coplanar waveguide (CPW) probes. With this test set we have measured the WR3 band S-parameters of amplifiers on-wafer, and the characteristics of the CPW wafer probes. Results for a three stage InP HEMT amplifier show 10 dB gain at 235 GHz [1], and that of a single stage amplifier, 2.9 dB gain at 231 GHz. The approximate upper limit of loss per CPW probe range from 3.0 to 4.8 dB across the WR3 frequency band.

  5. Understanding transmissibility patterns of Chagas disease through complex vector-host networks.

    PubMed

    Rengifo-Correa, Laura; Stephens, Christopher R; Morrone, Juan J; Téllez-Rendón, Juan Luis; González-Salazar, Constantino

    2017-01-12

    Chagas disease is one of the most important vector-borne zoonotic diseases in Latin America. Control strategies could be improved if transmissibility patterns of its aetiologic agent, Trypanosoma cruzi, were better understood. To understand transmissibility patterns of Chagas disease in Mexico, we inferred potential vectors and hosts of T. cruzi from geographic distributions of nine species of Triatominae and 396 wild mammal species, respectively. The most probable vectors and hosts of T. cruzi were represented in a Complex Inference Network, from which we formulated a predictive model and several associated hypotheses about the ecological epidemiology of Chagas disease. We compiled a list of confirmed mammal hosts to test our hypotheses. Our tests allowed us to predict the most important potential hosts of T. cruzi and to validate the model showing that the confirmed hosts were those predicted to be the most important hosts. We were also able to predict differences in the transmissibility of T. cruzi among triatomine species from spatial data. We hope our findings help drive efforts for future experimental studies.

  6. Initial Flight Test Evaluation of the F-15 ACTIVE Axisymmetric Vectoring Nozzle Performance

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Hathaway, Ross; Ferguson, Michael D.

    1998-01-01

    A full envelope database of thrust-vectoring axisymmetric nozzle performance for the Pratt & Whitney Pitch/Yaw Balance Beam Nozzle (P/YBBN) is being developed using the F-15 Advanced Control Technology for Integrated Vehicles (ACTIVE) aircraft. At this time, flight research has been completed for steady-state pitch vector angles up to 20° at an altitude of 30,000 ft from low power settings to maximum afterburner power. The nozzle performance database includes vector forces, internal nozzle pressures, and temperatures, all of which can be used for regression analysis modeling. The database was used to substantiate a set of nozzle performance data from wind tunnel testing and computational fluid dynamic analyses. Findings from initial flight research at Mach 0.9 and 1.2 are presented in this paper. The results show that vector efficiency is strongly influenced by power setting. A significant discrepancy in nozzle performance has been discovered between predicted and measured results during vectoring.

  7. Performance of BGP Among Mobile Military Networks

    DTIC Science & Technology

    2011-04-08

    Technical paper, MAR 2011 - APR 2011, contract FA8720-05-C-0002. Glenn Carl and Scott Arbiv, MIT Lincoln ... networks is BGP routing policy. As such, this paper begins the study of BGP's applicability to manage an evolution of the GIG in which IP-based tactical ... radios proliferate (i.e., the Future GIG). To this end, this paper first presents a modification to BGP that allows for dynamic management of its ...

  8. Analysis of a general SIS model with infective vectors on the complex networks

    NASA Astrophysics Data System (ADS)

    Juang, Jonq; Liang, Yu-Hao

    2015-11-01

    A general SIS model with infective vectors on complex networks is studied in this paper. In particular, the model considers the linear combination of three possible routes of disease propagation between infected and susceptible individuals as well as two possible transmission types which describe how the susceptible vectors attack the infected individuals. A new technique based on the basic reproduction matrix is introduced to obtain the following results. First, necessary and sufficient conditions are obtained for the global stability of the model through a unified approach. As a result, we are able to produce the exact basic reproduction number and the precise epidemic thresholds with respect to the three spreading strengths, the curing strength, and the immunization strength, all at once. Second, the monotonicity of the basic reproduction number and the above-mentioned epidemic thresholds with respect to all other parameters can be rigorously characterized. Finally, we are able to compare the effectiveness of various immunization strategies under the assumption that the number of persons getting vaccinated is the same for all strategies. In particular, we prove that in scale-free networks, both targeted and acquaintance immunization are more effective than uniform and active immunization, and that active immunization is the least effective strategy among the four. We are also able to determine the minimum vaccination needed to control the outbreak of the disease.
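    As a simpler point of reference, for a plain single-route SIS epidemic on a network the mean-field basic reproduction number is the spectral radius of the contact matrix scaled by the infection-to-curing ratio; a sketch of this classic single-layer result (a deliberate simplification of the paper's reproduction-matrix approach; the function is illustrative):

```python
import numpy as np

def network_r0(adj, beta, delta):
    """Mean-field SIS reproduction number on a contact network:
    R0 = (beta/delta) * spectral_radius(adj); the epidemic
    threshold is R0 = 1."""
    spectral_radius = max(abs(np.linalg.eigvals(adj)))
    return (beta / delta) * spectral_radius

# Star network with one hub and four leaves: spectral radius 2.
star = np.zeros((5, 5))
star[0, 1:] = 1.0
star[1:, 0] = 1.0
```

    With beta = 0.5 and delta = 1.0, this star sits exactly at threshold: network_r0(star, 0.5, 1.0) = 1.0.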

  9. Network-level reproduction number and extinction threshold for vector-borne diseases.

    PubMed

    Xue, Ling; Scoglio, Caterina

    2015-06-01

    The basic reproduction number of deterministic models is an essential quantity to predict whether an epidemic will spread or not. Thresholds for disease extinction contribute crucial knowledge of disease control, elimination, and mitigation of infectious diseases. Relationships between basic reproduction numbers of two deterministic network-based ordinary differential equation vector-host models, and extinction thresholds of corresponding stochastic continuous-time Markov chain models are derived under some assumptions. Numerical simulation results for malaria and Rift Valley fever transmission on heterogeneous networks are in agreement with analytical results without any assumptions, reinforcing that the relationships may always exist and proposing a mathematical problem for proving existence of the relationships in general. Moreover, numerical simulations show that the basic reproduction number does not monotonically increase or decrease with the extinction threshold. Consistent trends of extinction probability observed through numerical simulations provide novel insights into mitigation strategies to increase the disease extinction probability. Research findings may improve understandings of thresholds for disease persistence in order to control vector-borne diseases.

  10. Calibration-measurement unit for the automation of vector network analyzer measurements

    NASA Astrophysics Data System (ADS)

    Rolfes, I.; Will, B.; Schiek, B.

    2008-05-01

    With the availability of multi-port vector network analyzers, the need for automated, calibrated measurement facilities increases. In this contribution, a calibration-measurement unit is presented which realizes a repeatable automated calibration of the measurement setup as well as a user-friendly measurement of the device under test (DUT). In contrast to commercially available calibration units, which are connected to the ports of the vector network analyzer preceding a measurement and are then removed so that the DUT can be connected, the presented calibration-measurement unit is permanently connected to the ports of the VNA for the calibration as well as for the measurement of the DUT. This helps to simplify the calibrated measurement of complex scattering parameters. Moreover, a full integration of the calibration unit into the analyzer setup becomes possible. The calibration-measurement unit is based on a multiport switch setup of, e.g., electromechanical relays. Under the assumption of symmetry of a switch, the unit realizes, on the one hand, the connection of calibration standards such as one-port reflection standards and two-port through connections between different ports and, on the other hand, the connection of the DUT. The calibration-measurement unit is applicable to two-port VNAs as well as to multiport VNAs. For the calibration of the unit, methods with completely known calibration standards such as SOLT (short, open, load, through) as well as self-calibration procedures such as TRM or TRL can be applied.
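    The SOL part of such a calibration reduces, per port, to solving a 3x3 linear system for the three error terms of the textbook one-port model gm = e00 + e01e10*ga/(1 - e11*ga); a hedged sketch (standard model, not the authors' implementation):

```python
import numpy as np

def solve_one_port_cal(gamma_actual, gamma_measured):
    """Three known standards (e.g. short=-1, open=+1, load=0) give
    three linear equations  gm = e00 + ga*gm*e11 - ga*Delta,
    with Delta = e00*e11 - e01e10. Returns (e00, e11, Delta)."""
    A = np.array([[1.0, ga * gm, -ga]
                  for ga, gm in zip(gamma_actual, gamma_measured)],
                 dtype=complex)
    return np.linalg.solve(A, np.asarray(gamma_measured, dtype=complex))

def correct(gamma_measured, e00, e11, delta):
    """Apply the error terms to de-embed a raw DUT reflection."""
    return (gamma_measured - e00) / (gamma_measured * e11 - delta)
```

    Round-tripping synthetic measurements through the error model recovers both the error terms and the DUT reflection coefficient.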

  11. Networked Chemoreceptors Benefit Bacterial Chemotaxis Performance

    PubMed Central

    Frank, Vered; Piñas, Germán E.; Cohen, Harel; Parkinson, John S.

    2016-01-01

    Motile bacteria use large receptor arrays to detect and follow chemical gradients in their environment. Extended receptor arrays, composed of networked signaling complexes, promote cooperative stimulus control of their associated signaling kinases. Here, we used structural lesions at the communication interface between core complexes to create an Escherichia coli strain with functional but dispersed signaling complexes. This strain allowed us to directly study how networking of signaling complexes affects chemotactic signaling and gradient-tracking performance. We demonstrate that networking of receptor complexes provides bacterial cells with about 10-fold-heightened detection sensitivity to attractants while maintaining a wide dynamic range over which receptor adaptational modifications can tune response sensitivity. These advantages proved especially critical for chemotaxis toward an attractant source under conditions in which bacteria are unable to alter the attractant gradient. PMID:27999161

  12. Characterization of interdigitated electrode structures for water contaminant detection using a hybrid voltage divider and a vector network analyzer.

    PubMed

    Rodríguez-Delgado, José Manuel; Rodríguez-Delgado, Melissa Marlene; Mendoza-Buenrostro, Christian; Dieck-Assad, Graciano; Omar Martínez-Chapa, Sergio

    2012-01-01

    Interdigitated capacitive electrode structures have been used to monitor or actuate over organic and electrochemical media in efforts to characterize biochemical properties. This article describes a method to perform a pre-characterization of interdigitated electrode structures using two methods: a hybrid voltage divider (HVD) and a vector network analyzer (VNA). Both methodologies develop some tests under two different conditions: free air and bi-distilled water media. Also, the HVD methodology is used for two other conditions: phosphate buffer with laccase (polyphenoloxidase; EC 1.10.3.2) and contaminated media composed of a mix of phosphate buffer and 3-ethylbenzothiazoline-6-sulfonic acid (ABTS). The purpose of this study is to develop and validate a characterization methodology using both a hybrid voltage divider and VNA T-# network impedance models of the interdigitated capacitive electrode structure, which will provide a shunt RC network of particular interest for detecting the amount of contamination present in the water solution for the media conditions. This methodology should provide the best possible sensitivity in monitoring the characteristics of contaminated water media. The results show that both methods, the hybrid voltage divider and the VNA methodology, are feasible for determining impedance modeling parameters. These parameters can be used to develop electrical interrogation procedures and devices that use dielectric characteristics to identify contaminant substances in water solutions.
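    The divider side of such a measurement has a one-line inversion: with a known series reference resistor and the measured complex ratio H = Vout/Vin across the electrode structure, the unknown shunt impedance follows directly. A sketch under that generic series-reference topology (an assumption, not the authors' exact circuit):

```python
def divider_impedance(h, r_ref):
    """Unknown shunt impedance Z from the complex divider ratio
    H = Vout/Vin with a known series reference resistor r_ref:
    H = Z / (r_ref + Z)  =>  Z = r_ref * H / (1 - H)."""
    return r_ref * h / (1.0 - h)
```

    A shunt-RC electrode at a given frequency appears as a complex Z; for example, Z = 500 - 300j ohms behind a 1 kOhm reference is recovered exactly from its divider ratio.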

  13. Performance of TCP variants over LTE network

    NASA Astrophysics Data System (ADS)

    Nor, Shahrudin Awang; Maulana, Ade Novia

    2016-08-01

    One implementation of a wireless network is based on the mobile broadband technology Long Term Evolution (LTE). LTE offers a variety of advantages, especially in terms of access speed, capacity, architectural simplicity and ease of implementation, as well as the breadth of choice of the types of user equipment (UE) that can establish access. The majority of Internet connections in the world use TCP (Transmission Control Protocol), owing to TCP's reliability in transmitting packets across the network. TCP's reliability lies in its ability to control congestion. TCP was originally designed for wired media, but LTE connects through a wireless medium that is less stable than wired media. A wide variety of TCP variants has been developed to produce better performance than their predecessors. In this study, we compare the performance of TCP NewReno and TCP Vegas by simulation using network simulator version 2 (ns2). TCP performance is analyzed in terms of throughput, packet loss and end-to-end delay. The simulation results show that the throughput of TCP NewReno is slightly higher than that of TCP Vegas, while TCP Vegas gives significantly better end-to-end delay and packet loss.
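    The behavioral difference driving these results can be caricatured in a few lines: NewReno reacts to loss (additive increase, multiplicative decrease), while Vegas reacts to queueing delay before loss occurs. A toy per-RTT congestion-window update (illustrative constants, not the full protocols):

```python
def newreno_step(cwnd, loss):
    """AIMD caricature of NewReno congestion avoidance: add one
    segment per RTT, halve the window on a loss event."""
    return cwnd / 2.0 if loss else cwnd + 1.0

def vegas_step(cwnd, base_rtt, rtt, alpha=2.0, beta=4.0):
    """Delay-based caricature of Vegas: estimate the backlog
    queued in the network as cwnd * (1 - base_rtt/rtt) and nudge
    cwnd to keep it between alpha and beta segments."""
    backlog = cwnd * (1.0 - base_rtt / rtt)
    if backlog < alpha:
        return cwnd + 1.0
    if backlog > beta:
        return cwnd - 1.0
    return cwnd
```

    Here newreno_step(10, True) halves the window to 5, while vegas_step backs off as soon as the estimated backlog exceeds beta segments, which is why Vegas tends to keep queues (and hence delay and loss) lower.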

  14. Neurodynamics of learning and network performance

    NASA Astrophysics Data System (ADS)

    Wilson, Charles L.; Blue, James L.; Omidvar, Omid M.

    1997-07-01

    A simple dynamic model of a neural network is presented. Using the dynamic model of a neural network, we improve the performance of a three-layer multilayer perceptron (MLP). The dynamic model of a MLP is used to make fundamental changes in the network optimization strategy. These changes are: neuron activation functions are used, which reduces the probability of singular Jacobians; successive regularization is used to constrain the volume of the weight space being minimized; Boltzmann pruning is used to constrain the dimension of the weight space; and prior class probabilities are used to normalize all error calculations, so that statistically significant samples of rare but important classes can be included without distortion of the error surface. All four of these changes are made in the inner loop of a conjugate gradient optimization iteration and are intended to simplify the training dynamics of the optimization. On handprinted digits and fingerprint classification problems, these modifications improve error-reject performance by factors between 2 and 4 and reduce network size by 40 to 60%.

  15. Performance Evaluation Modeling of Network Sensors

    NASA Technical Reports Server (NTRS)

    Clare, Loren P.; Jennings, Esther H.; Gao, Jay L.

    2003-01-01

    Substantial benefits are promised by operating many spatially separated sensors collectively. Such systems are envisioned to consist of sensor nodes that are connected by a communications network. A simulation tool is being developed to evaluate the performance of networked sensor systems, incorporating such metrics as target detection probabilities, false alarm rates, and classification confusion probabilities. The tool will be used to determine configuration impacts associated with such aspects as spatial laydown, mixture of different types of sensors (acoustic, seismic, imaging, magnetic, RF, etc.), and fusion architecture. The QualNet discrete-event simulation environment serves as the underlying basis for model development and execution. This platform is recognized for its capabilities in efficiently simulating networking among mobile entities that communicate via wireless media. We are extending QualNet's communications modeling constructs to capture the sensing aspects of multi-target sensing (analogous to multiple access communications), unimodal multi-sensing (broadcast), and multi-modal sensing (multiple channels and correlated transmissions). Methods are also being developed for modeling the sensor signal sources (transmitters), signal propagation through the media, and sensors (receivers) that are consistent with the discrete event paradigm needed for performance determination of sensor network systems. This work is supported under the Microsensors Technical Area of the Army Research Laboratory (ARL) Advanced Sensors Collaborative Technology Alliance.

  16. USING MULTITAIL NETWORKS IN HIGH PERFORMANCE CLUSTERS

    SciTech Connect

    S. COLL; E. FRACHTEMBERG; F. PETRINI; A. HOISIE; L. GURVITS

    2001-03-01

    Using multiple independent networks (also known as rails) is an emerging technique to overcome bandwidth limitations and enhance fault-tolerance of current high-performance clusters. We present and analyze various avenues for exploiting multiple rails. Different rail access policies are presented and compared, including static and dynamic allocation schemes. An analytical lower bound on the number of networks required for static rail allocation is shown. We also present an extensive experimental comparison of the behavior of various allocation schemes in terms of bandwidth and latency. Striping messages over multiple rails can substantially reduce network latency, depending on average message size, network load and allocation scheme. The methods compared include a static rail allocation, a round-robin rail allocation, a dynamic allocation based on local knowledge, and a rail allocation that reserves both end-points of a message before sending it. The latter is shown to perform better than other methods at higher loads: up to 49% better than local-knowledge allocation and 37% better than the round-robin allocation. This allocation scheme also shows lower latency and saturates at higher loads (for messages large enough). Most importantly, this proposed allocation scheme scales well with the number of rails and message sizes.
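    Two of the compared pieces are easy to sketch: the round-robin allocator, and the first-order reason striping helps (each rail carries 1/k of the message, so serialization time drops while per-message latency stays fixed). A toy model (names and numbers are mine, not the paper's testbed):

```python
class RoundRobinRails:
    """Round-robin rail allocator: successive messages depart on
    successive rails, spreading load with no global knowledge."""
    def __init__(self, n_rails):
        self.n_rails = n_rails
        self._next = 0

    def allocate(self):
        rail = self._next
        self._next = (self._next + 1) % self.n_rails
        return rail

def striped_send_time(msg_bytes, n_rails, bw_bytes_per_s, latency_s):
    """Completion time when a message is striped evenly over
    n_rails independent rails (the slowest chunk dominates)."""
    return latency_s + (msg_bytes / n_rails) / bw_bytes_per_s
```

    Allocating five messages over three rails yields rails 0, 1, 2, 0, 1; striping a 1 MB message over four rails at 100 MB/s cuts the send time from roughly 10 ms to roughly 2.5 ms in this model.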

  17. Static performance of a cruciform nozzle with multiaxis thrust-vectoring and reverse-thrust capabilities

    NASA Technical Reports Server (NTRS)

    Wing, David J.; Asbury, Scott C.

    1992-01-01

    A multiaxis thrust-vectoring nozzle designed to have equal flow-turning capability in pitch and yaw was conceived and experimentally tested for internal, static performance. The cruciform-shaped convergent-divergent nozzle turned the flow for thrust vectoring by deflecting the divergent surfaces of the nozzle, called flaps. Methods for eliminating physical interference between pitch and yaw flaps at the larger multiaxis deflection angles were studied. These methods included restricting the pitch flaps from the path of the yaw flaps and shifting the flow path at the throat off the nozzle centerline to permit larger pitch-flap deflections without interfering with the operation of the yaw flaps. Two flap widths were tested at both dry and afterburning settings. Vertical and reverse thrust configurations at dry power were also tested. Comparison with two-dimensional convergent-divergent nozzles showed lower but still competitive thrust performance and thrust-vectoring capability.

  18. The Helioseismic and Magnetic Imager (HMI) Vector Magnetic Field Pipeline: Overview and Performance

    NASA Astrophysics Data System (ADS)

    Hoeksema, J. Todd; Liu, Yang; Hayashi, Keiji; Sun, Xudong; Schou, Jesper; Couvidat, Sebastien; Norton, Aimee; Bobra, Monica; Centeno, Rebecca; Leka, K. D.; Barnes, Graham; Turmon, Michael

    2014-09-01

    The Helioseismic and Magnetic Imager (HMI) began near-continuous full-disk solar measurements on 1 May 2010 from the Solar Dynamics Observatory (SDO). An automated processing pipeline keeps pace with observations to produce observable quantities, including the photospheric vector magnetic field, from sequences of filtergrams. The basic vector-field frame list cadence is 135 seconds, but to reduce noise the filtergrams are combined to derive data products every 720 seconds. The primary 720 s observables were released in mid-2010, including Stokes polarization parameters measured at six wavelengths, as well as intensity, Doppler velocity, and the line-of-sight magnetic field. More advanced products, including the full vector magnetic field, are now available. Automatically identified HMI Active Region Patches (HARPs) track the location and shape of magnetic regions throughout their lifetime. The vector field is computed using the Very Fast Inversion of the Stokes Vector (VFISV) code optimized for the HMI pipeline; the remaining 180° azimuth ambiguity is resolved with the Minimum Energy (ME0) code. The Milne-Eddington inversion is performed on all full-disk HMI observations. The disambiguation, until recently run only on HARP regions, is now implemented for the full disk. Vector and scalar quantities in the patches are used to derive active region indices potentially useful for forecasting; the data maps and indices are collected in the SHARP data series, hmi.sharp_720s. Definitive SHARP processing is completed only after the region rotates off the visible disk; quick-look products are produced in near real time. Patches are provided in both CCD and heliographic coordinates. HMI provides continuous coverage of the vector field, but has modest spatial, spectral, and temporal resolution. Coupled with limitations of the analysis and interpretation techniques, effects of the orbital velocity, and instrument performance, the resulting measurements have a certain dynamic

  19. Network- and network-element-level parameters for configuration, fault, and performance management of optical networks

    NASA Astrophysics Data System (ADS)

    Drion, Christophe; Berthelon, Luc; Chambon, Olivier; Eilenberger, Gert; Peden, Francoise R.; Jourdan, Amaury

    1998-10-01

With the high interest of network operators and manufacturers in wavelength division multiplexing (WDM) networking technology, the need for management systems adapted to this new technology keeps increasing. We investigated this topic and produced outputs through the specification of the functional architecture and network layered model, and through the development of new, TMN-based information models for the management of optical networks and network elements. Based on these first outputs, defects in each layer, together with parameters for performance management/monitoring, have been identified for each type of optical network element and each atomic function describing the element, including functions for the transport of both payload signals and overhead information. The list of probable causes has been established for the identified defects. A second aspect consists of the definition of network-level parameters, if such photonic technology-related parameters are to be considered at this level. It is our conviction that some parameters can be taken into account at the network level for performance management, based on physical measurements within the network. Some parameters could possibly be used as criteria for configuration management in the route-calculation processes, including protection. The outputs of these specification activities are taken into account in the development of a manageable WDM network prototype, which will be used as a test platform to demonstrate configuration, fault, protection, and performance management in a real network, in the scope of the ACTS-MEPHISTO project. This network prototype will also be used in a larger-size experiment in the context of the ACTS-PELICAN field trial (Pan-European Lightwave Core and Access Network).

  20. Scheduling and performance limits of networks with constantly changing topology

    SciTech Connect

    Tassiulas, L.

    1997-01-01

A communication network with time-varying topology is considered. The network consists of M receivers and N transmitters that may, in principle, access every receiver. An underlying network state process with Markovian statistics is considered that reflects the physical characteristics of the network affecting the link service capacity. Transmissions are scheduled dynamically, based on information about the link capacities and the backlog in the network. The region of achievable throughputs is characterized. A transmission scheduling policy is proposed that utilizes current topology state information and achieves all throughput vectors achievable by any anticipative policy. The changing topology model applies to networks of Low Earth Orbit (LEO) satellites, meteor-burst communication networks, and networks with mobile users. © 1997 American Institute of Physics.
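The throughput-optimal policy described above, which serves links according to current backlog and topology state, is commonly realized as a max-weight rule: in each slot, serve the links with the largest backlog-weighted service rate. A minimal sketch, not the paper's own code; the one-transmitter-per-receiver constraint is an assumption made here for simplicity:

```python
import numpy as np

def max_weight_schedule(backlog, rates):
    """Pick, for each receiver, the transmitter maximizing backlog * rate.

    backlog: (N,) queue lengths at the N transmitters.
    rates:   (N, M) current link service capacities (topology state).
    Assumes each receiver serves at most one transmitter per slot.
    """
    weights = backlog[:, None] * rates          # (N, M) backlog-weighted rates
    choice = np.argmax(weights, axis=0)         # best transmitter per receiver
    # Serve only links with positive weight (otherwise stay idle).
    return [(int(tx), rx) for rx, tx in enumerate(choice)
            if weights[tx, rx] > 0]

backlog = np.array([5.0, 0.0, 2.0])
rates = np.array([[1.0, 0.5],
                  [0.9, 0.0],
                  [0.2, 2.0]])
print(max_weight_schedule(backlog, rates))      # [(0, 0), (2, 1)]
```

As the topology state changes, the weights and hence the schedule adapt slot by slot, which is what allows the policy to match any anticipative policy's throughput region.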

  1. Static internal performance of single expansion-ramp nozzles with thrust vectoring and reversing

    NASA Technical Reports Server (NTRS)

    Re, R. J.; Berrier, B. L.

    1982-01-01

    The effects of geometric design parameters on the internal performance of nonaxisymmetric single expansion-ramp nozzles were investigated at nozzle pressure ratios up to approximately 10. Forward-flight (cruise), vectored-thrust, and reversed-thrust nozzle operating modes were investigated.

  2. Cluster Expansion Method for Evolving Weighted Networks Having Vector-Like Nodes

    NASA Astrophysics Data System (ADS)

    Ausloos, M.; Gligor, M.

    2008-09-01

The cluster variation method known in statistical mechanics and condensed matter is revived for weighted bipartite networks. The decomposition (or expansion) of a Hamiltonian through a finite number of components, which serves to define variable clusters, is recalled. As an illustration, the network built from data representing correlations between four macroeconomic features, the so-called vector components, of 15 EU countries, taken as (function) nodes, is discussed. We show that statistical physics principles, such as the maximum entropy criterion, point to clusters, here in a four-variable phase space: Gross Domestic Product, Final Consumption Expenditure, Gross Capital Formation, and Net Exports. It is observed that the maximum entropy corresponds to a cluster which does not explicitly include the Gross Domestic Product but only the other three "axes", i.e. the consumption, investment, and trade components. On the other hand, the minimal entropy clustering scheme is obtained from a coupling that necessarily includes Gross Domestic Product and Final Consumption Expenditure. The results confirm intuitive economic theory and practice expectations, at least as regards geographical connections. The technique can of course be applied to many other cases in the physics of socio-economic networks.

  3. Parallel-vector unsymmetric Eigen-Solver on high performance computers

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Jiangning, Qin

    1993-01-01

The popular QR algorithm for solving all eigenvalues of an unsymmetric matrix is reviewed. Among the basic components of the QR algorithm, this study concluded that the reduction of an unsymmetric matrix to Hessenberg form (before applying the QR algorithm itself) can be done effectively by exploiting the vector speed and multiple processors offered by modern high-performance computers. Numerical examples of several test cases indicate that the proposed parallel-vector algorithm for converting a given unsymmetric matrix to Hessenberg form offers computational advantages over the existing algorithm. The time saving obtained by the proposed method increases as the problem size increases.
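The payoff of the Hessenberg reduction is that it is done once, after which each QR iteration works on a nearly triangular matrix. The idea can be checked with standard library routines (here scipy.linalg.hessenberg stands in for the paper's parallel-vector reduction):

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))        # a general unsymmetric matrix

# Reduce to upper Hessenberg form, A = Q H Q^T (a similarity transform),
# so H keeps the eigenvalues of A but is zero below the first subdiagonal.
H, Q = hessenberg(A, calc_q=True)
assert np.allclose(Q @ H @ Q.T, A)     # similarity preserved
assert np.allclose(np.tril(H, -2), 0)  # Hessenberg structure

# QR iteration (inside eigvals) then runs on H far more cheaply than on A,
# and the spectra agree.
ev_A = np.sort_complex(np.linalg.eigvals(A))
ev_H = np.sort_complex(np.linalg.eigvals(H))
print(np.allclose(ev_A, ev_H))         # True
```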

  4. Modeling and Performance Simulation of the Mass Storage Network Environment

    NASA Technical Reports Server (NTRS)

    Kim, Chan M.; Sang, Janche

    2000-01-01

    This paper describes the application of modeling and simulation in evaluating and predicting the performance of the mass storage network environment. Network traffic is generated to mimic the realistic pattern of file transfer, electronic mail, and web browsing. The behavior and performance of the mass storage network and a typical client-server Local Area Network (LAN) are investigated by modeling and simulation. Performance characteristics in throughput and delay demonstrate the important role of modeling and simulation in network engineering and capacity planning.

  5. Deep learning of support vector machines with class probability output networks.

    PubMed

    Kim, Sangwook; Yu, Zhibin; Kil, Rhee Man; Lee, Minho

    2015-04-01

Deep learning methods endeavor to learn features automatically at multiple levels and allow systems to learn complex functions mapping from the input space to the output space for the given data. The ability to learn powerful features automatically is increasingly important as the volume of data and the range of applications of machine learning methods continue to grow. This paper proposes a new deep architecture that uses support vector machines (SVMs) with class probability output networks (CPONs) to provide better generalization power for pattern classification problems. As a result, deep features are extracted without additional feature engineering steps, using multiple layers of SVM classifiers with CPONs. The proposed structure closely approaches the ideal Bayes classifier as the number of layers increases. Using a simulation of classification problems, the effectiveness of the proposed method is demonstrated. Copyright © 2014 Elsevier Ltd. All rights reserved.
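The layering idea, in which each SVM layer hands calibrated class probabilities to the next, can be mimicked with off-the-shelf tools. In this sketch, Platt-scaled SVC probabilities stand in for the CPON calibration described in the paper, and the two-layer stack and dataset are illustrative only:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic two-class problem.
X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Layer 1: an RBF SVM with probability outputs (Platt scaling stands in
# for the CPON calibration used in the paper).
layer1 = SVC(kernel='rbf', probability=True, random_state=0).fit(X_tr, y_tr)

# Layer 2 sees the original features augmented with layer 1's probabilities.
aug = lambda X, m: np.hstack([X, m.predict_proba(X)])
layer2 = SVC(kernel='rbf', probability=True, random_state=0).fit(aug(X_tr, layer1), y_tr)

acc1 = layer1.score(X_te, y_te)
acc2 = layer2.score(aug(X_te, layer1), y_te)
print(acc1, acc2)   # the stacked layer is typically at least as accurate
```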

  6. Detecting and sorting targeting peptides with neural networks and support vector machines.

    PubMed

    Hawkins, John; Bodén, Mikael

    2006-02-01

This paper presents a composite multi-layer classifier system for predicting the subcellular localization of proteins based on their amino acid sequence. The work is an extension of our previous predictor PProwler v1.1, which is itself built upon the series of predictors SignalP and TargetP. In this study we outline experiments conducted to improve the classifier design. The major improvement came from using support vector machines as a "smart gate" sorting the outputs of several different targeting peptide detection networks. Our final model (PProwler v1.2) gives MCC values of 0.873 for non-plant and 0.849 for plant proteins. The model improves upon the accuracy of our previous subcellular localization predictor (PProwler v1.1) by 2% for plant data (which represents a 7.5% improvement upon TargetP).

  7. Static Thrust and Vectoring Performance of a Spherical Convergent Flap Nozzle with a Nonrectangular Divergent Duct

    NASA Technical Reports Server (NTRS)

    Wing, David J.

    1998-01-01

The static internal performance of a multiaxis-thrust-vectoring, spherical convergent flap (SCF) nozzle with a non-rectangular divergent duct was obtained in the model preparation area of the Langley 16-Foot Transonic Tunnel. Duct cross sections of hexagonal and bowtie shapes were tested. Additional geometric parameters included throat area (power setting), pitch flap deflection angle, and yaw gimbal angle. Nozzle pressure ratio was varied from 2 to 12 for dry power configurations and from 2 to 6 for afterburning power configurations. Approximately a 1-percent loss in thrust efficiency relative to SCF nozzles with a rectangular divergent duct was incurred as a result of internal oblique shocks in the flow field. The internal oblique shocks were the result of cross flow generated by the vee-shaped geometric throat. The hexagonal and bowtie nozzles had mirror-imaged flow fields and therefore similar thrust performance. Thrust vectoring was not hampered by the three-dimensional internal geometry of the nozzles. Flow visualization indicates pitch thrust-vector angles larger than 10° may be achievable with minimal adverse effect on, or a possible gain in, resultant thrust efficiency as compared with the performance at a pitch thrust-vector angle of 10°.
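The resultant thrust efficiency and thrust-vector angles reported in static tests of this kind are derived from the measured force components. A hedged sketch of the usual wind-tunnel definitions (the report's exact bookkeeping may differ, and the numbers below are illustrative):

```python
import math

def thrust_vector_metrics(fa, fn, fs, fi):
    """Common static-test quantities for a vectoring nozzle.

    fa, fn, fs: measured axial, normal, and side force components.
    fi: ideal isentropic thrust for the same mass flow.
    """
    f_res = math.sqrt(fa**2 + fn**2 + fs**2)   # resultant gross thrust
    pitch = math.degrees(math.atan2(fn, fa))   # pitch thrust-vector angle
    yaw = math.degrees(math.atan2(fs, fa))     # yaw thrust-vector angle
    efficiency = f_res / fi                    # resultant thrust ratio F/F_i
    return f_res, pitch, yaw, efficiency

# Hypothetical force readings chosen to give roughly a 10-degree pitch vector.
f_res, pitch, yaw, eff = thrust_vector_metrics(fa=980.0, fn=173.0, fs=0.0, fi=1020.0)
print(round(pitch, 1), round(eff, 3))          # 10.0 0.976
```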

  8. Artificial neural network simulation of battery performance

    SciTech Connect

O'Gorman, C.C.; Ingersoll, D.; Jungst, R.G.; Paez, T.L.

    1998-12-31

Although they appear deceptively simple, batteries embody a complex set of interacting physical and chemical processes. While the discrete engineering characteristics of a battery, such as the physical dimensions of the individual components, are relatively straightforward to define explicitly, the myriad chemical and physical processes, including their interactions, are much more difficult to represent accurately. Within this category are the diffusive and solubility characteristics of individual species; reaction kinetics and mechanisms of primary chemical species as well as intermediates; and growth and morphology characteristics of reaction products as influenced by environmental and operational use profiles. For this reason, development of analytical models that can consistently predict the performance of a battery has been only partially successful, even though significant resources have been applied to this problem. As an alternative approach, the authors have begun development of a non-phenomenological model for battery systems based on artificial neural networks. Both recurrent and non-recurrent forms of these networks have been successfully used to develop accurate representations of battery behavior. The connectionist normalized linear spline (CNLS) network has been implemented with a self-organizing layer to model a battery system with the generalized radial basis function net. Concurrently, efforts are under way to use the feedforward back-propagation network to map the "state" of a battery system. Because of the complexity of battery systems, accurate representation of the input and output parameters has proven to be very important. This paper describes these initial feasibility studies as well as the current models and makes comparisons between predicted and actual performance.

  9. Functional Forms of Optimum Spoofing Attacks for Vector Parameter Estimation in Quantized Sensor Networks

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangfan; Blum, Rick S.; Kaplan, Lance M.; Lu, Xuanxuan

    2017-02-01

    Estimation of an unknown deterministic vector from quantized sensor data is considered in the presence of spoofing attacks which alter the data presented to several sensors. Contrary to previous work, a generalized attack model is employed which manipulates the data using transformations with arbitrary functional forms determined by some attack parameters whose values are unknown to the attacked system. For the first time, necessary and sufficient conditions are provided under which the transformations provide a guaranteed attack performance in terms of Cramer-Rao Bound (CRB) regardless of the processing the estimation system employs, thus defining a highly desirable attack. Interestingly, these conditions imply that, for any such attack when the attacked sensors can be perfectly identified by the estimation system, either the Fisher Information Matrix (FIM) for jointly estimating the desired and attack parameters is singular or that the attacked system is unable to improve the CRB for the desired vector parameter through this joint estimation even though the joint FIM is nonsingular. It is shown that it is always possible to construct such a highly desirable attack by properly employing a sufficiently large dimension attack vector parameter relative to the number of quantization levels employed, which was not observed previously. To illustrate the theory in a concrete way, we also provide some numerical results which corroborate that under the highly desirable attack, attacked data is not useful in reducing the CRB.

10. Scientific Application Performance on Leading Scalar and Vector Supercomputing Platforms

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Ethier, Stephane

    2007-01-01

The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on conventional supercomputers has become a major concern in high performance computing, requiring significantly larger systems and application scalability than implied by peak performance in order to achieve desired performance. The latest generation of custom-built parallel vector systems has the potential to address this issue for numerical algorithms with sufficient regularity in their computational structure. In this work we explore applications drawn from four areas: magnetic fusion (GTC), plasma physics (LBMHD3D), astrophysics (Cactus), and material science (PARATEC). We compare the performance of the vector-based Cray X1, Cray X1E, Earth Simulator, and NEC SX-8 with that of three leading commodity-based superscalar platforms utilizing the IBM Power3, Intel Itanium2, and AMD Opteron processors. Our work makes several significant contributions: a new data-decomposition scheme for GTC that (for the first time) enables a breakthrough of the Teraflop barrier; the introduction of a new three-dimensional Lattice Boltzmann magneto-hydrodynamic implementation used to study the onset evolution of plasma turbulence that achieves over 26 Tflop/s on 4800 ES processors; the highest per-processor performance (by far) achieved by the full-production version of the Cactus ADM-BSSN; and the largest PARATEC cell size atomistic simulation to date. Overall, results show that the vector architectures attain unprecedented aggregate performance across our application suite, demonstrating the tremendous potential of modern parallel vector systems.

  11. Dynamic changes of spatial functional network connectivity in healthy individuals and schizophrenia patients using independent vector analysis.

    PubMed

    Ma, Sai; Calhoun, Vince D; Phlypo, Ronald; Adalı, Tülay

    2014-04-15

Recent work on both task-induced and resting-state functional magnetic resonance imaging (fMRI) data suggests that functional connectivity may fluctuate, rather than being stationary during an entire scan. Most dynamic studies are based on second-order statistics between fMRI time series or time courses derived from blind source separation, e.g., independent component analysis (ICA), to investigate changes of temporal interactions among brain regions. However, fluctuations related to spatial components over time are of interest as well. In this paper, we examine higher-order statistical dependence between pairs of spatial components, which we define as spatial functional network connectivity (sFNC), and changes of sFNC across a resting-state scan. We extract time-varying components from healthy controls and patients with schizophrenia to represent brain networks using independent vector analysis (IVA), which is an extension of ICA to multiple data sets and enables one to capture spatial variations. Based on mutual information among IVA components, we perform statistical analysis and Markov modeling to quantify the changes in spatial connectivity. Our experimental results suggest significantly more fluctuations in the patient group and show that patients with schizophrenia have more variable patterns of spatial concordance, primarily between the frontoparietal, cerebellar, and temporal lobe regions. This study extends upon earlier studies showing temporal connectivity differences in similar areas on average by providing evidence that the dynamic spatial interplay between these regions is also impacted by schizophrenia. Copyright © 2014 Elsevier Inc. All rights reserved.
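The mutual information between a pair of spatial component maps, the quantity underlying the sFNC measure above, can be estimated from a joint histogram. A minimal plug-in estimator on synthetic stand-in "component maps" (the binning and data are illustrative, not the paper's pipeline):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of mutual information I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal of X
    py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
a = rng.standard_normal(20000)                    # voxel values, component 1
b = 0.8 * a + 0.6 * rng.standard_normal(20000)    # statistically dependent map
c = rng.standard_normal(20000)                    # independent map

print(mutual_information(a, b) > mutual_information(a, c))  # True
```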

  12. Advancements and performance of iterative methods in industrial applications codes on CRAY parallel/vector supercomputers

    SciTech Connect

    Poole, G.; Heroux, M.

    1994-12-31

This paper focuses on recent work in two widely used industrial applications codes with iterative methods. The ANSYS program, a general-purpose finite element code widely used in structural analysis applications, has now added an iterative solver option. Some results are given from real applications comparing performance with the traditional parallel/vector frontal solver used in ANSYS. Discussion of the applicability of iterative solvers as a general-purpose solver covers the topics of robustness as well as memory requirements and CPU performance. The FIDAP program is a widely used CFD code which uses iterative solvers routinely. A brief description of the preconditioners used and some performance enhancements for CRAY parallel/vector systems is given. The solution of large-scale applications in structures and CFD includes examples from industry problems solved on CRAY systems.

  13. Curriculum Assessment Using Artificial Neural Network and Support Vector Machine Modeling Approaches: A Case Study. IR Applications. Volume 29

    ERIC Educational Resources Information Center

    Chen, Chau-Kuang

    2010-01-01

    Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…

  14. Evaluation models for soil nutrient based on support vector machine and artificial neural networks.

    PubMed

    Li, Hao; Leng, Weijia; Zhou, Yibing; Chen, Fudi; Xiu, Zhilong; Yang, Dazuo

    2014-01-01

Soil nutrient is an important aspect that contributes to soil fertility and environmental effects. Traditional approaches to evaluating soil nutrient are quite hard to operate, causing great difficulties in practical applications. In this paper, we present a series of comprehensive evaluation models for soil nutrient using support vector machine (SVM), multiple linear regression (MLR), and artificial neural networks (ANNs), respectively. We took the content of organic matter, total nitrogen, alkali-hydrolysable nitrogen, rapidly available phosphorus, and rapidly available potassium as independent variables, while the evaluation level of soil nutrient content was taken as the dependent variable. Results show that the average prediction accuracies of the SVM models are 77.87% and 83.00%, respectively, while the general regression neural network (GRNN) model's average prediction accuracy is 92.86%, indicating that SVM and GRNN models can be used effectively to assess the levels of soil nutrient with suitable dependent variables. In practical applications, both SVM and GRNN models can be used for determining the levels of soil nutrient.

  15. Fuzzy-possibilistic neural network to vector quantizer in frequency domains

    NASA Astrophysics Data System (ADS)

    Lin, Jzau-Sheng

    2002-04-01

    The fuzzy possibilistic c-means (FPCM) is embedded into a 2-D Hopfield neural network termed the fuzzy possibilistic Hopfield network (FPHN) to generate an optimal solution for vector quantization (VQ) in the discrete cosine transform (DCT) and the Hadamard transform (HT) domains. The information transformed by DCT or HT is separated into dc and ac coefficients. Then, the ac coefficients are trained using the proposed methods to generate a better codebook based on VQ. The energy function of the FPHN is defined as the fuzzy membership grades and possibilistic typicality degrees between training samples and codevectors. A near global-minimum codebook in the frequency domains can be obtained when the energy function converges to a stable state. Instead of one state in a neuron for the conventional Hopfield nets, each neuron occupies two states called the membership state and the typicality state in the proposed FPHN. The simulated results show that a valid and promising codebook can be generated in the DCT or HT domains using the FPHN.
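The frequency-domain pipeline described above (transform each block, set aside the dc coefficient, train a codebook on the ac coefficients) can be outlined with standard tools. In this sketch, plain k-means stands in for the FPHN codebook training, and the image and block size are illustrative:

```python
import numpy as np
from scipy.fft import dctn
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
image = rng.random((64, 64))                     # stand-in for a training image

# Split into 8x8 blocks and transform each block to the DCT domain.
blocks = image.reshape(8, 8, 8, 8).swapaxes(1, 2).reshape(-1, 8, 8)
coeffs = np.stack([dctn(b, norm='ortho') for b in blocks])

# Separate the dc coefficient and train the codebook on the ac part only
# (plain k-means here stands in for the FPHN training of the paper).
dc = coeffs[:, 0, 0]
ac = coeffs.reshape(len(coeffs), -1)[:, 1:]      # 63 ac coefficients per block
codebook, labels = kmeans2(ac, 16, minit='points', seed=0)

print(codebook.shape, labels.shape)              # (16, 63) (64,)
```

Each block is then represented by its dc value plus the index of its nearest ac codevector, which is the basic VQ compression step.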

  16. Performance of wireless sensor networks under random node failures

    SciTech Connect

    Bradonjic, Milan; Hagberg, Aric; Feng, Pan

    2011-01-28

Networks are essential to the functioning of a modern society, and the consequences of damage to a network can be large. Assessing the performance of a damaged network is an important step in network recovery and network design. Connectivity, distance between nodes, and alternative routes are some of the key indicators of network performance. In this paper, a random geometric graph (RGG) is used with two types of node failure: uniform failure and localized failure. Since network performance is multi-faceted and assessment can be time constrained, we introduce four measures, computable in polynomial time, to estimate the performance of a damaged RGG. Simulation experiments are conducted to investigate the deterioration of networks over a period of time. With the empirical results, the performance measures are analyzed and compared to provide understanding of different failure scenarios in an RGG.
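The two failure models are easy to reproduce: draw an RGG, delete nodes either uniformly at random or within a damage region, and recompute a cheap performance indicator. A sketch using networkx; the paper's four measures are not reproduced here, so the giant-component fraction serves as a stand-in indicator:

```python
import random
import networkx as nx

random.seed(42)
# Random geometric graph: 200 nodes placed uniformly in the unit square,
# with a link wherever two nodes lie within radius 0.12 of each other.
G = nx.random_geometric_graph(200, 0.12, seed=42)

def giant_fraction(g):
    """Fraction of surviving nodes still in the largest connected component."""
    if g.number_of_nodes() == 0:
        return 0.0
    giant = max(nx.connected_components(g), key=len)
    return len(giant) / g.number_of_nodes()

# Uniform failure: every node fails independently with probability 0.2.
H_uniform = G.copy()
H_uniform.remove_nodes_from([n for n in G if random.random() < 0.2])

# Localized failure: all nodes within radius 0.25 of the centre fail.
pos = nx.get_node_attributes(G, 'pos')
H_local = G.copy()
H_local.remove_nodes_from(
    [n for n, (x, y) in pos.items()
     if (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.25 ** 2])

print(giant_fraction(G), giant_fraction(H_uniform), giant_fraction(H_local))
```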

  17. Identifying regulational alterations in gene regulatory networks by state space representation of vector autoregressive models and variational annealing.

    PubMed

    Kojima, Kaname; Imoto, Seiya; Yamaguchi, Rui; Fujita, André; Yamauchi, Mai; Gotoh, Noriko; Miyano, Satoru

    2012-01-01

    In the analysis of effects by cell treatment such as drug dosing, identifying changes on gene network structures between normal and treated cells is a key task. A possible way for identifying the changes is to compare structures of networks estimated from data on normal and treated cells separately. However, this approach usually fails to estimate accurate gene networks due to the limited length of time series data and measurement noise. Thus, approaches that identify changes on regulations by using time series data on both conditions in an efficient manner are demanded. We propose a new statistical approach that is based on the state space representation of the vector autoregressive model and estimates gene networks on two different conditions in order to identify changes on regulations between the conditions. In the mathematical model of our approach, hidden binary variables are newly introduced to indicate the presence of regulations on each condition. The use of the hidden binary variables enables an efficient data usage; data on both conditions are used for commonly existing regulations, while for condition specific regulations corresponding data are only applied. Also, the similarity of networks on two conditions is automatically considered from the design of the potential function for the hidden binary variables. For the estimation of the hidden binary variables, we derive a new variational annealing method that searches the configuration of the binary variables maximizing the marginal likelihood. For the performance evaluation, we use time series data from two topologically similar synthetic networks, and confirm that our proposed approach estimates commonly existing regulations as well as changes on regulations with higher coverage and precision than other existing approaches in almost all the experimental settings. 
For a real data application, our proposed approach is applied to time series data from normal human lung cells and human lung cells treated by

  18. Performance improvements for nuclear reaction network integration

    NASA Astrophysics Data System (ADS)

    Longland, R.; Martin, D.; José, J.

    2014-03-01

Aims: The aim of this work is to compare the performance of three reaction network integration methods used in stellar nucleosynthesis calculations: Gear's backward differentiation method, Wagoner's method (a second-order Runge-Kutta method), and the Bader-Deuflhard semi-implicit multi-step method. Methods: To investigate the efficiency of each integration method, a test suite of temperature and density versus time profiles is used. This suite covers a range of situations, from constant temperature and density to the dramatically varying conditions present in white dwarf mergers, novae, and X-ray bursts. Some of these profiles are obtained separately from full hydrodynamic calculations. The integration efficiencies are investigated with respect to input parameters that constrain the desired accuracy and precision. Results: Gear's backward differentiation method is found to improve accuracy, performance, and stability in integrating nuclear reaction networks. For temperature-density profiles that vary strongly with time, it outperforms the Bader-Deuflhard method (although that method is very powerful for more smoothly varying profiles). Wagoner's method, while relatively fast for many scenarios, exhibits hard-to-predict inaccuracies for some choices of integration parameters owing to its lack of error estimation.
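The practical difference between these integrators shows up on stiff systems, where reaction rates differ by orders of magnitude. A toy two-species network integrated with SciPy's Gear-type BDF method and, for comparison, an explicit Runge-Kutta method (an illustration of stiffness, not the paper's test suite):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy two-species network with widely separated rates -- the stiffness
# that makes implicit, Gear-type (BDF) integrators pay off.
k_fast, k_slow = 1.0e4, 1.0

def rhs(t, y):
    a, b = y
    flow = k_fast * a - k_slow * b     # fast forward, slow reverse reaction
    return [-flow, flow]

y0 = [1.0, 0.0]
bdf = solve_ivp(rhs, (0.0, 2.0), y0, method='BDF', rtol=1e-6, atol=1e-10)
rk = solve_ivp(rhs, (0.0, 2.0), y0, method='RK45', rtol=1e-6, atol=1e-10)

# Total abundance is conserved by construction; the implicit method
# covers the interval in far fewer steps than the explicit one, whose
# step size is pinned down by stability rather than accuracy.
assert np.allclose(bdf.y.sum(axis=0), 1.0)
print(bdf.t.size < rk.t.size)          # True
```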

  19. Improving Memory Subsystem Performance Using ViVA: Virtual Vector Architecture

    SciTech Connect

    Gebis, Joseph; Oliker, Leonid; Shalf, John; Williams, Samuel; Yelick, Katherine

    2009-01-12

The disparity between microprocessor clock frequencies and memory latency is a primary reason why many demanding applications run well below peak achievable performance. Software-controlled scratchpad memories, such as the Cell local store, attempt to ameliorate this discrepancy by enabling precise control over memory movement; however, scratchpad technology confronts the programmer and compiler with an unfamiliar and difficult programming model. In this work, we present the Virtual Vector Architecture (ViVA), which combines the memory semantics of vector computers with a software-controlled scratchpad memory in order to provide a more effective and practical approach to latency hiding. ViVA requires minimal changes to the core design and could thus be easily integrated with conventional processor cores. To validate our approach, we implemented ViVA on the Mambo cycle-accurate full system simulator, which was carefully calibrated to match the performance of our underlying PowerPC Apple G5 architecture. Results show that ViVA is able to deliver significant performance benefits over scalar techniques for a variety of memory access patterns as well as two important memory-bound compact kernels, corner turn and sparse matrix-vector multiplication, achieving a 2x-13x improvement compared to the scalar version. Overall, our preliminary ViVA exploration points to a promising approach for improving application performance on leading microprocessors with minimal design and complexity costs, in a power-efficient manner.

  20. Analysis of interference performance of tactical radio network

    NASA Astrophysics Data System (ADS)

    Nie, Hao; Cai, Xiaoxia; Chen, Hong

    2017-08-01

Mobile ad hoc networking has a strong military background, having been developed as the core technology of the backbone network of the US tactical Internet. The tactical radio network is today's more mature form of tactical Internet networking, used mainly by brigade and below. This paper analyzes the typical routing protocol AODV in the tactical radio network and then carries out the networking. By adding a jamming device to the whole network, the battlefield environment is simulated; the throughput, delay, and packet loss rate are then analyzed to obtain the performance of the whole network and of a single node before and after the interference.

  1. Online monitoring and control of particle size in the grinding process using least square support vector regression and resilient back propagation neural network.

    PubMed

    Pani, Ajaya Kumar; Mohanta, Hare Krishna

    2015-05-01

Particle size soft sensing in cement mills would be largely helpful in maintaining the desired cement fineness, or Blaine. Despite the growing use of vertical roller mills (VRM) for clinker grinding, very little research is available on VRM modeling. This article reports the design of three types of feed-forward neural network models and a least square support vector regression (LS-SVR) model of a VRM for online monitoring of cement fineness, based on mill data collected from a cement plant. In the data pre-processing step, a comparative study of various outlier detection algorithms was performed. Subsequently, for model development, the advantage of algorithm-based data splitting over random selection is presented. The training data set obtained by use of the Kennard-Stone maximal intra-distance criterion (CADEX algorithm) was used for development of the LS-SVR, back-propagation neural network, radial basis function neural network, and generalized regression neural network models. Simulation results show that the resilient back-propagation model performs better than the RBF network, regression network, and LS-SVR model. Model implementation has been done on the SIMULINK platform, showing online detection of abnormal data and real-time estimation of cement Blaine from knowledge of the input variables. Finally, a closed-loop study shows how the model can be effectively utilized for maintaining cement fineness at the desired value.
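The Kennard-Stone (CADEX) criterion used above for data splitting selects training samples that span the input space by greedily maximizing the minimum distance to the already-selected set. A minimal sketch on synthetic data:

```python
import numpy as np

def kennard_stone(X, n_train):
    """Kennard-Stone (CADEX) selection: start from the two most distant
    samples, then repeatedly add the sample farthest (in min-distance
    terms) from the current selection, so the training set spans the
    input space."""
    X = np.asarray(X, dtype=float)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    selected = [int(i), int(j)]
    while len(selected) < n_train:
        remaining = [k for k in range(len(X)) if k not in selected]
        d_min = dist[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(d_min))])
    return selected

rng = np.random.default_rng(0)
X = rng.random((50, 3))
train_idx = kennard_stone(X, 10)
print(len(train_idx), len(set(train_idx)))   # 10 10
```

The unselected samples then form the test set, which is the algorithm-based split the abstract contrasts with random selection.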

  2. Performance of an integrated network model

    PubMed Central

    Lehmann, François; Dunn, David; Beaulieu, Marie-Dominique; Brophy, James

    2016-01-01

    Objective To evaluate the changes in accessibility, patients’ care experiences, and quality-of-care indicators following a clinic’s transformation into a fully integrated network clinic. Design Mixed-methods study. Setting Verdun, Que. Participants Data on all patient visits were used, in addition to 2 distinct patient cohorts: 134 patients with chronic illness (ie, diabetes, arteriosclerotic heart disease, or both); and 450 women between the ages of 20 and 70 years. Main outcome measures Accessibility was measured by the number of walk-in visits, scheduled visits, and new patient enrolments. With the first cohort, patients’ care experiences were measured using validated serial questionnaires; and quality-of-care indicators were measured using biologic data. With the second cohort, quality of preventive care was measured using the number of Papanicolaou tests performed as a surrogate marker. Results Despite a negligible increase in the number of physicians, there was an increase in accessibility after the clinic’s transition to an integrated network model. During the first 4 years of operation, the number of scheduled visits more than doubled, nonscheduled visits (walk-in visits) increased by 29%, and enrolment of vulnerable patients (those with chronic illnesses) at the clinic remained high. Patient satisfaction with doctors was rated very highly at all points of time that were evaluated. While the number of Pap tests done did not increase with time, the proportion of patients meeting hemoglobin A1c and low-density lipoprotein guideline target levels increased, as did the number of patients tested for microalbuminuria. Conclusion Transformation to an integrated network model of care led to increased efficiency and enhanced accessibility with no negative effects on the doctor-patient relationship. Improvements in biologic data also suggested better quality of care. PMID:27521410

  3. Non-metallic coating thickness prediction using artificial neural network and support vector machine with time resolved thermography

    NASA Astrophysics Data System (ADS)

    Wang, Hongjin; Hsieh, Sheng-Jen; Peng, Bo; Zhou, Xunfei

    2016-07-01

A method without requirements on knowledge of the thermal properties of coatings or substrates would be of interest in industrial applications. Supervised machine learning regressions may provide a possible solution to this problem. This paper compares the performances of two regression models (artificial neural networks (ANN) and support vector machines for regression (SVM)) with respect to coating thickness estimations made based on surface temperature increments collected via time resolved thermography. We describe the role of SVM in coating thickness prediction. Non-dimensional analyses are conducted to illustrate the effects of coating thicknesses and various factors on surface temperature increments. It is theoretically possible to correlate coating thickness with the surface temperature increment. Based on the analyses, the laser power is selected such that, during heating, the temperature increment is high enough to resolve the coating thickness variance but low enough to avoid surface melting. Sixty-one paint-coated samples with coating thicknesses varying from 63.5 μm to 571 μm are used to train the models. Hyper-parameters of the models are optimized by 10-fold cross validation. Another 28 sets of data are then collected to test the performance of the methods. The study shows that SVM can provide reliable predictions of unknown data, due to its deterministic characteristics, and it works well when used for a small input data group. The SVM model generates more accurate coating thickness estimates than the ANN model.
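The 10-fold cross-validation used above to tune hyper-parameters can be sketched in plain NumPy. The ridge regression below is only an illustrative stand-in for the ANN/SVM regressors, and the data are synthetic (the 61-sample count merely echoes the number of coated plates):

```python
import numpy as np

def kfold_indices(n_samples, k=10, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_samples), k)

def kfold_cv_rmse(X, y, fit, predict, k=10):
    """Average validation RMSE over k train/validation splits."""
    errors = []
    for fold in kfold_indices(len(y), k):
        mask = np.ones(len(y), dtype=bool)
        mask[fold] = False                     # hold this fold out
        model = fit(X[mask], y[mask])
        resid = y[fold] - predict(model, X[fold])
        errors.append(np.sqrt(np.mean(resid ** 2)))
    return float(np.mean(errors))

# Toy usage with a ridge-regression surrogate model (hypothetical)
def fit_ridge(X, y, lam=1e-2):
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(61, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.normal(size=61)
score = kfold_cv_rmse(X, y, fit_ridge, lambda w, Xv: Xv @ w, k=10)
print(round(score, 3))
```

In a real hyper-parameter search, this scoring loop would be repeated for each candidate setting (e.g. each `lam`) and the best-scoring setting kept.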

  4. A comparative study of artificial neural networks and support vector machines for predicting groundwater levels in a coastal aquifer

    NASA Astrophysics Data System (ADS)

    Yoon, Heesung; Jun, Seong-Chun; Hyun, Yunjung; Bae, Gwang-Ok; Lee, Kang-Kun

    2011-01-01

We have developed two nonlinear time-series models for predicting groundwater level (GWL) fluctuations using artificial neural networks (ANNs) and support vector machines (SVMs). The models were applied to GWL prediction for two wells at a coastal aquifer in Korea. Among the candidate input variables (past GWL, precipitation, and tide level), past GWL was the most effective input variable for this study site. Tide level was selected as an input variable more frequently than precipitation. The model performance results show that root mean squared error (RMSE) values of the ANN models are lower than those of the SVM in the model training and testing stages. However, the overall performance criteria of the SVM are similar to or even better than those of the ANN in the model prediction stage. The generalization ability of the SVM model is superior to that of the ANN model across input structures and lead times. An uncertainty analysis of the model parameters detects equifinality of model parameter sets and higher uncertainty for the ANN model than for the SVM in this case. These results imply that the model-building process should be conducted carefully, especially when using ANN models for GWL forecasting in a coastal aquifer.

  5. Application of Artificial Neural Network and Support Vector Machines in Predicting Metabolizable Energy in Compound Feeds for Pigs.

    PubMed

    Ahmadi, Hamed; Rodehutscord, Markus

    2017-01-01

In the nutrition literature, there are several reports on the use of artificial neural network (ANN) and multiple linear regression (MLR) approaches for predicting feed composition and nutritive value, while the use of the support vector machine (SVM) method as a new alternative to MLR and ANN models is still not fully investigated. MLR, ANN, and SVM models were developed to predict the metabolizable energy (ME) content of compound feeds for pigs based on the German energy evaluation system from analyzed contents of crude protein (CP), ether extract (EE), crude fiber (CF), and starch. A total of 290 datasets from standardized digestibility studies with compound feeds were provided by several institutions and published papers, and ME was calculated thereon. The accuracy and precision of the developed models were evaluated based on their prediction values. The results revealed that the developed ANN [R(2) = 0.95; root mean square error (RMSE) = 0.19 MJ/kg of dry matter] and SVM (R(2) = 0.95; RMSE = 0.21 MJ/kg of dry matter) models produced better prediction values in estimating ME in compound feed than the conventional MLR model (R(2) = 0.89; RMSE = 0.27 MJ/kg of dry matter); however, there were no obvious differences between the performance of the ANN and SVM models. Thus, the SVM model may also be considered a promising tool for modeling the relationship between chemical composition and ME of compound feeds for pigs. To provide readers and nutritionists with an easy and rapid tool, an Excel(®) calculator, namely SVM_ME_pig, was created to predict the metabolizable energy values in compound feeds for pigs using the developed support vector machine model.
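The MLR baseline described above amounts to an ordinary least-squares fit of ME on the four analyzed contents. The sketch below uses synthetic data and illustrative coefficients, not the official German energy evaluation equation:

```python
import numpy as np

# Synthetic stand-in data: columns are analyzed contents (g/kg DM) of
# crude protein, ether extract, crude fiber, and starch. The coefficients
# below are hypothetical, chosen only to generate plausible ME values.
rng = np.random.default_rng(0)
nutrients = rng.uniform([100, 20, 30, 300], [250, 80, 120, 550], size=(290, 4))
true_w = np.array([0.021, 0.036, -0.015, 0.017])      # MJ per g/kg, illustrative
me = nutrients @ true_w + 1.5 + rng.normal(0, 0.2, 290)

# MLR fit by ordinary least squares with an intercept column
X = np.column_stack([nutrients, np.ones(len(me))])
w, *_ = np.linalg.lstsq(X, me, rcond=None)
rmse = float(np.sqrt(np.mean((me - X @ w) ** 2)))
print(round(rmse, 2))
```

The in-sample RMSE recovers roughly the injected noise level, which is the sense in which the paper's RMSE figures (0.19-0.27 MJ/kg DM) summarize model fit.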

  6. Probabilistic Analysis of Multistage Interconnection Network Performance

    DTIC Science & Technology

    1992-04-01

[OCR-garbled excerpt; recoverable fragments:] … independence of channel loads has been pre-computed, and channels have been assigned names generated from the names of their nodes … as in the example below: > (setq d8x8 (parse-multistage-network deterministically-interwired-8x8-rep)) … performs considerably worse than either. [Residue of a throughput figure omitted.]

  7. Estimating urban impervious surfaces from Landsat-5 TM imagery using multilayer perceptron neural network and support vector machine

    NASA Astrophysics Data System (ADS)

    Sun, Zhongchang; Guo, Huadong; Li, Xinwu; Lu, Linlin; Du, Xiaoping

    2011-01-01

In recent years, the urban impervious surface has been recognized as a key quantifiable indicator in assessing urbanization impacts on environmental and ecological conditions. A surge of research interest has resulted in estimates of urban impervious surface from remote sensing studies. The objective of this paper is to examine and compare the effectiveness of two algorithms for extracting impervious surfaces from Landsat TM imagery: the multilayer perceptron neural network (MLPNN) and the support vector machine (SVM). An accuracy assessment was performed using high-resolution WorldView images. The root mean square error (RMSE), the mean absolute error (MAE), and the coefficient of determination (R2) were calculated to validate the classification performance and accuracies of the MLPNN and SVM. For the MLPNN model, the RMSE, MAE, and R2 were 17.18%, 11.10%, and 0.8474, respectively. The SVM yielded a result with an RMSE of 13.75%, an MAE of 8.92%, and an R2 of 0.9032. The results indicated that SVM performance was superior to that of the MLPNN in impervious surface classification. To further evaluate the performance of the MLPNN and SVM in handling mixed pixels, an accuracy assessment was also conducted for selected test areas, including commercial, residential, and rural areas. Our results suggested that the SVM had better capability in handling the mixed-pixel problem than the MLPNN. The superior performance of the SVM over the MLPNN is mainly attributed to the SVM's capability of deriving the global optimum and handling the over-fitting problem through suitable parameter selection. Overall, the SVM provides an efficient and useful method for estimating the impervious surface.
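The three validation metrics used to compare the MLPNN and SVM estimates are straightforward to compute; a minimal sketch with hypothetical impervious-surface percentages:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """RMSE, MAE, and coefficient of determination R^2, as used to
    validate the impervious-surface estimates against reference data."""
    resid = y_true - y_pred
    rmse = np.sqrt(np.mean(resid ** 2))
    mae = np.mean(np.abs(resid))
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(rmse), float(mae), float(1.0 - ss_res / ss_tot)

# Hypothetical reference vs. estimated impervious-surface fractions (%)
y_true = np.array([10.0, 40.0, 55.0, 80.0, 95.0])
y_pred = np.array([12.0, 38.0, 60.0, 78.0, 90.0])
rmse, mae, r2 = regression_metrics(y_true, y_pred)
print(round(rmse, 2), round(mae, 2), round(r2, 3))   # → 3.52 3.2 0.986
```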

  8. Impact of Trust on Security and Performance in Tactical Networks

    DTIC Science & Technology

    2013-06-01

conditions of networks to maximize performance in several networking applications. 1 Introduction Tactical networks have been designed and operated with…based approaches to augment traditional networking methods allows one to exploit the multi-genre aspects of the problem. In this paper, we propose our…In addition, Section 4 addresses how trust can be modeled in a different domain and a multi-domain setting dealing with multi-genre networks. Section 5 describes

  9. Static internal performance including thrust vectoring and reversing of two-dimensional convergent-divergent nozzles

    NASA Technical Reports Server (NTRS)

    Re, R. J.; Leavitt, L. D.

    1984-01-01

    The effects of geometric design parameters on two dimensional convergent-divergent nozzles were investigated at nozzle pressure ratios up to 12 in the static test facility. Forward flight (dry and afterburning power settings), vectored-thrust (afterburning power setting), and reverse-thrust (dry power setting) nozzles were investigated. The nozzles had thrust vector angles from 0 deg to 20.26 deg, throat aspect ratios of 3.696 to 7.612, throat radii from sharp to 2.738 cm, expansion ratios from 1.089 to 1.797, and various sidewall lengths. The results indicate that unvectored two dimensional convergent-divergent nozzles have static internal performance comparable to axisymmetric nozzles with similar expansion ratios.

  10. Classification of bifurcations regions in IVOCT images using support vector machine and artificial neural network models

    NASA Astrophysics Data System (ADS)

    Porto, C. D. N.; Costa Filho, C. F. F.; Macedo, M. M. G.; Gutierrez, M. A.; Costa, M. G. F.

    2017-03-01

Studies in intravascular optical coherence tomography (IV-OCT) have demonstrated the importance of coronary bifurcation regions in intravascular medical imaging analysis, as plaques are more likely to accumulate in these regions, leading to coronary disease. A typical IV-OCT pullback acquires hundreds of frames, so developing an automated tool to classify OCT frames as bifurcation or non-bifurcation can be an important step to speed up OCT pullback analysis and assist automated methods for atherosclerotic plaque quantification. In this work, we evaluate the performance of two state-of-the-art classifiers, SVM and neural networks, in the bifurcation classification task. The study included IV-OCT frames from 9 patients. In order to improve classification performance, we trained and tested the SVM with different parameters by means of a grid search, and different stop criteria were applied to the neural network classifier: mean square error, early stopping, and regularization. Different sets of features were tested, using feature selection techniques: PCA, LDA, and scalar feature selection with correlation. Training and testing were performed on sets with a maximum of 1460 OCT frames. We quantified our results in terms of false positive rate, true positive rate, accuracy, specificity, precision, false alarm, f-measure, and area under the ROC curve. Neural networks obtained the best classification accuracy, 98.83%, exceeding results reported in the literature. Our methods appear to offer a robust and reliable automated classification of OCT frames that might assist physicians in indicating potential frames to analyze. Methods for improving neural network generalization increased the classification performance.

  11. Effects of Cavity on the Performance of Dual Throat Nozzle During the Thrust-Vectoring Starting Transient Process.

    PubMed

    Gu, Rui; Xu, Jinglei

    2014-01-01

The dual throat nozzle (DTN) technique is capable of achieving higher thrust-vectoring efficiencies than other fluidic techniques, without significantly compromising thrust efficiency during vectoring operation. The excellent performance of the DTN is mainly due to the concave cavity. In this paper, two DTNs of different scales have been investigated by unsteady numerical simulations to compare the parameter variations and study the effects of the cavity during the vector starting process. The results indicate that dynamic loads may be generated during the vector starting process, which is a potentially challenging problem for aircraft trim and control.

  12. Communication Network Patterns and Employee Performance with New Technology.

    ERIC Educational Resources Information Center

    Papa, Michael J.

    1990-01-01

    Investigates the relationship between employee performance, new technology, employee communication network variables (activity, size, diversity, and integrativeness), and productivity at two corporate offices. Reports significant positive relationships between three of the network variables and employee productivity with new technology. Discusses…

  13. Sensor network based solar forecasting using a local vector autoregressive ridge framework

    SciTech Connect

    Xu, J.; Yoo, S.; Heiser, J.; Kalb, P.

    2016-04-04

    The significant improvements and falling costs of photovoltaic (PV) technology make solar energy a promising resource, yet the cloud induced variability of surface solar irradiance inhibits its effective use in grid-tied PV generation. Short-term irradiance forecasting, especially on the minute scale, is critically important for grid system stability and auxiliary power source management. Compared to the trending sky imaging devices, irradiance sensors are inexpensive and easy to deploy but related forecasting methods have not been well researched. The prominent challenge of applying classic time series models on a network of irradiance sensors is to address their varying spatio-temporal correlations due to local changes in cloud conditions. We propose a local vector autoregressive framework with ridge regularization to forecast irradiance without explicitly determining the wind field or cloud movement. By using local training data, our learned forecast model is adaptive to local cloud conditions and by using regularization, we overcome the risk of overfitting from the limited training data. Our systematic experimental results showed an average of 19.7% RMSE and 20.2% MAE improvement over the benchmark Persistent Model for 1-5 minute forecasts on a comprehensive 25-day dataset.
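The core idea above, forecasting each sensor from lagged readings of all sensors with a ridge penalty, can be sketched in NumPy. The two-sensor data, coupling matrix, and penalty value here are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def fit_var_ridge(series, p=2, lam=0.01):
    """Fit a vector autoregression of order p with ridge regularization.
    series: (T, n) array of readings from n irradiance sensors."""
    T, n = series.shape
    # Row t of X holds [series[t-1], ..., series[t-p]] flattened
    X = np.hstack([series[p - k - 1:T - k - 1] for k in range(p)])
    Y = series[p:]
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ Y)            # (n*p, n) coefficients

def forecast(series, W, p=2):
    """One-step-ahead forecast from the last p observations."""
    x = np.hstack([series[-k - 1] for k in range(p)])
    return x @ W

# Synthetic two-sensor series driven by a hypothetical coupling matrix
rng = np.random.default_rng(2)
C = np.array([[0.6, 0.2], [0.1, 0.7]])
s = np.zeros((200, 2))
s[0] = [1.0, -1.0]
for t in range(1, 200):
    s[t] = s[t - 1] @ C + 0.1 * rng.normal(size=2)

W = fit_var_ridge(s[:-1])                          # hold out the last step
err = float(np.max(np.abs(forecast(s[:-1], W) - s[-1])))
print(round(err, 3))
```

The "local" aspect of the paper's framework corresponds to refitting `W` on a recent window of data so the coefficients track changing cloud conditions; the ridge term keeps that small-window fit from overfitting.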

  14. Note: Vector network analyzer-ferromagnetic resonance spectrometer using high Q-factor cavity.

    PubMed

    Lo, C K; Lai, W C; Cheng, J C

    2011-08-01

A ferromagnetic resonance (FMR) spectrometer whose main components are an X-band resonator and a vector network analyzer (VNA) was developed. This spectrometer takes advantage of a high Q-factor (9600) cavity and a state-of-the-art VNA. Accordingly, the field-modulation lock-in technique for signal-to-noise ratio (SNR) enhancement is no longer necessary, and FMR absorption can be extracted directly. Its derivative, used to ascertain the full width at half maximum of the FMR peak, can be found by differentiating the original data. The system was characterized with permalloy (Py) films of different thicknesses and their multilayers; the SNR of 5 nm Py on glass was better than 50 and showed no significant reduction even at low microwave excitation power (-20 dBm) and at low Q-factor (3000). FMR in bands other than X-band can also be examined in the same manner by using a suitable band cavity within the frequency range of the VNA.

  15. Note: Vector network analyzer-ferromagnetic resonance spectrometer using high Q-factor cavity

    NASA Astrophysics Data System (ADS)

    Lo, C. K.; Lai, W. C.; Cheng, J. C.

    2011-08-01

A ferromagnetic resonance (FMR) spectrometer whose main components are an X-band resonator and a vector network analyzer (VNA) was developed. This spectrometer takes advantage of a high Q-factor (9600) cavity and a state-of-the-art VNA. Accordingly, the field-modulation lock-in technique for signal-to-noise ratio (SNR) enhancement is no longer necessary, and FMR absorption can be extracted directly. Its derivative, used to ascertain the full width at half maximum of the FMR peak, can be found by differentiating the original data. The system was characterized with permalloy (Py) films of different thicknesses and their multilayers; the SNR of 5 nm Py on glass was better than 50 and showed no significant reduction even at low microwave excitation power (-20 dBm) and at low Q-factor (3000). FMR in bands other than X-band can also be examined in the same manner by using a suitable band cavity within the frequency range of the VNA.

  16. Efficient modeling of vector hysteresis using a novel Hopfield neural network implementation of Stoner–Wohlfarth-like operators

    PubMed Central

    Adly, Amr A.; Abd-El-Hafiz, Salwa K.

    2012-01-01

    Incorporation of hysteresis models in electromagnetic analysis approaches is indispensable to accurate field computation in complex magnetic media. Throughout those computations, vector nature and computational efficiency of such models become especially crucial when sophisticated geometries requiring massive sub-region discretization are involved. Recently, an efficient vector Preisach-type hysteresis model constructed from only two scalar models having orthogonally coupled elementary operators has been proposed. This paper presents a novel Hopfield neural network approach for the implementation of Stoner–Wohlfarth-like operators that could lead to a significant enhancement in the computational efficiency of the aforementioned model. Advantages of this approach stem from the non-rectangular nature of these operators that substantially minimizes the number of operators needed to achieve an accurate vector hysteresis model. Details of the proposed approach, its identification and experimental testing are presented in the paper. PMID:25685446

  17. Efficient modeling of vector hysteresis using a novel Hopfield neural network implementation of Stoner-Wohlfarth-like operators.

    PubMed

    Adly, Amr A; Abd-El-Hafiz, Salwa K

    2013-07-01

    Incorporation of hysteresis models in electromagnetic analysis approaches is indispensable to accurate field computation in complex magnetic media. Throughout those computations, vector nature and computational efficiency of such models become especially crucial when sophisticated geometries requiring massive sub-region discretization are involved. Recently, an efficient vector Preisach-type hysteresis model constructed from only two scalar models having orthogonally coupled elementary operators has been proposed. This paper presents a novel Hopfield neural network approach for the implementation of Stoner-Wohlfarth-like operators that could lead to a significant enhancement in the computational efficiency of the aforementioned model. Advantages of this approach stem from the non-rectangular nature of these operators that substantially minimizes the number of operators needed to achieve an accurate vector hysteresis model. Details of the proposed approach, its identification and experimental testing are presented in the paper.

  18. Support Vector Machine and Artificial Neural Network Models for the Classification of Grapevine Varieties Using a Portable NIR Spectrophotometer.

    PubMed

    Gutiérrez, Salvador; Tardaguila, Javier; Fernández-Novales, Juan; Diago, María P

    2015-01-01

The identification of different grapevine varieties, currently attended using visual ampelometry, DNA analysis and, very recently, hyperspectral analysis under laboratory conditions, is an issue of great importance in the wine industry. This work presents support vector machine and artificial neural network modelling for grapevine varietal classification from in-field leaf spectroscopy. Modelling was attempted at two scales: site-specific and global. Spectral measurements were obtained in the near-infrared (NIR) spectral range between 1600 and 2400 nm under field conditions in a non-destructive way using a portable spectrophotometer. For the site-specific approach, spectra were collected from the adaxial side of 400 individual leaves of 20 grapevine (Vitis vinifera L.) varieties one week after veraison. For the global model, two additional sets of spectra were collected one week before harvest from two different vineyards in another vintage, each consisting of 48 measurements from individual leaves of six varieties. Several combinations of spectral scatter correction and smoothing filtering were studied. For training the models, support vector machines and artificial neural networks were employed using the pre-processed spectra as input and the varieties as the classes of the models. The results from the pre-processing study showed no influence of whether scatter correction was used or not. Also, a second-degree derivative with a window size of 5 in Savitzky-Golay filtering yielded the highest outcomes. For the site-specific model, with 20 classes, the best classifiers yielded an overall score of 87.25% correctly classified samples. These results were compared under the same conditions with a model trained using partial least squares discriminant analysis, which showed worse performance in every case. For the global model, a 6-class dataset involving samples from three different vineyards, two years and leaves
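The best pre-processing reported above, a second-degree derivative with a window of 5 via Savitzky-Golay filtering, can be mimicked by local polynomial fitting; a minimal sketch (edge points simplified to zero):

```python
import numpy as np

def savgol_second_derivative(y, window=5, degree=2):
    """Second derivative by local polynomial fitting (Savitzky-Golay
    style), matching the window-5, second-degree filter described above."""
    half = window // 2
    x = np.arange(-half, half + 1)
    out = np.zeros_like(y, dtype=float)
    for i in range(half, len(y) - half):
        coeffs = np.polyfit(x, y[i - half:i + half + 1], degree)
        out[i] = 2.0 * coeffs[-3]   # d2/dx2 of c2*x^2 + c1*x + c0 at x=0
    return out

# On a pure quadratic the second derivative is exactly constant:
y = 3.0 * np.arange(20.0) ** 2
d2 = savgol_second_derivative(y)
print(np.round(d2[5], 3))   # → 6.0
```

In practice a library routine (e.g. a dedicated Savitzky-Golay filter with a derivative option) would be used on each spectrum before classification; the point of the sketch is that the filter is just a sliding least-squares polynomial fit.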

  19. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy.

    PubMed

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

For ensemble learning, how to select and combine the candidate classifiers are two key issues which dramatically influence the performance of the ensemble system. The random vector functional link network (RVFL) without direct input-to-output links is a suitable base classifier for ensemble systems because of its fast learning speed, simple structure, and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFLs based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFLs. When using ARPSO to select the optimal base RVFLs, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining RVFLs, the ensemble weights corresponding to the base RVFLs are initialized by the minimum norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFLs are pruned, and thus a more compact ensemble of RVFLs is obtained. Moreover, theoretical analysis and justification of how to prune the base classifiers on classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFLs built by the proposed method outperforms those built by some single optimization methods. Experimental results on function approximation and classification problems verify that the proposed method improves convergence accuracy as well as reduces the complexity of the ensemble system.
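A minimal sketch of one RVFL base learner as described, without direct input-to-output links: fixed random hidden weights, and output weights obtained by minimum-norm least squares (the hidden size, seed, and toy data are arbitrary choices, not the paper's settings):

```python
import numpy as np

def train_rvfl(X, y, n_hidden=100, seed=0):
    """RVFL without direct input-to-output links: random fixed hidden
    layer, output weights solved in closed form by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random weights
    b = rng.normal(size=n_hidden)                 # fixed random biases
    H = np.tanh(X @ W + b)                        # random hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # minimum-norm solution
    return W, b, beta

def predict_rvfl(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy function approximation task
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1]
model = train_rvfl(X, y)
rmse = float(np.sqrt(np.mean((predict_rvfl(model, X) - y) ** 2)))
print(round(rmse, 4))
```

An ensemble in the paper's spirit would train several such learners with different seeds, then let ARPSO select among them and tune their combination weights, starting from the minimum-norm initialization shown here.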

  20. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy

    PubMed Central

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

For ensemble learning, how to select and combine the candidate classifiers are two key issues which dramatically influence the performance of the ensemble system. The random vector functional link network (RVFL) without direct input-to-output links is a suitable base classifier for ensemble systems because of its fast learning speed, simple structure, and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFLs based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFLs. When using ARPSO to select the optimal base RVFLs, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining RVFLs, the ensemble weights corresponding to the base RVFLs are initialized by the minimum norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFLs are pruned, and thus a more compact ensemble of RVFLs is obtained. Moreover, theoretical analysis and justification of how to prune the base classifiers on classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFLs built by the proposed method outperforms those built by some single optimization methods. Experimental results on function approximation and classification problems verify that the proposed method improves convergence accuracy as well as reduces the complexity of the ensemble system. PMID:27835638

  1. Performance of SMARTer at Very Low Scattering Vector q-Range Revealed by Monodisperse Nanoparticles

    SciTech Connect

    Putra, E. Giri Rachman; Ikram, A.; Bharoto; Santoso, E.; Sairun

    2008-03-17

A monodisperse nanoparticle sample of polystyrene has been employed to determine the performance of the 36 meter small-angle neutron scattering (SANS) BATAN spectrometer (SMARTer) at the Neutron Scattering Laboratory (NSL)--Serpong, Indonesia, in a very low scattering vector q-range. A detector position 18 m from the sample position, a beam stopper 50 mm in diameter, a neutron wavelength of 5.66 A, and an 18 m-long collimator were set up to reach the very low scattering vector q-range of SMARTer. A polydisperse smeared-spherical particle model was applied to fit the corrected small-angle scattering data of the monodisperse polystyrene nanoparticle sample. A mean particle radius of 610 A, a volume fraction of 0.0026, and a polydispersity of 0.1 were obtained from the fitting results. The experimental results from SMARTer are comparable to those of SANS-J (JAEA, Japan), showing that SMARTer can reach scattering vectors as low as 0.002 A^-1.

  2. A cross-sectional evaluation of meditation experience on electroencephalography data by artificial neural network and support vector machine classifiers

    PubMed Central

    Lee, Yu-Hao; Hsieh, Ya-Ju; Shiah, Yung-Jong; Lin, Yu-Huei; Chen, Chiao-Yun; Tyan, Yu-Chang; GengQiu, JiaCheng; Hsu, Chung-Yao; Chen, Sharon Chia-Ju

    2017-01-01

To quantitate the meditation experience is a subjective and complex issue because it is confounded by many factors such as emotional state, method of meditation, and personal physical condition. In this study, we propose a strategy with a cross-sectional analysis to evaluate the meditation experience with 2 artificial intelligence techniques: artificial neural network and support vector machine. Within this analysis system, 3 features of the electroencephalography alpha spectrum and variant normalizing scaling are manipulated as the evaluating variables for the detection of accuracy. Thereafter, by modulating the sliding window (the period of the analyzed data) and the shifting interval of the window (the time interval by which the analyzed data are shifted), the effect of immediate analysis for the 2 methods is compared. This analysis system is performed on 3 meditation groups, categorizing their meditation experiences in 10-year intervals from novice to junior and to senior. After exhaustive calculation and cross-validation across all variables, a high accuracy rate (>98%) is achievable under the criterion of a 0.5-minute sliding window and a 2-second shifting interval for both methods. In short, the minimum analyzable data length is 0.5 minute and the minimum recognizable temporal resolution is 2 seconds in the decision of meditative classification. Our proposed classifier of the meditation experience enables a rapid evaluation system to distinguish meditation experience and a beneficial utilization of artificial intelligence techniques for big-data analysis. PMID:28422856

  3. A cross-sectional evaluation of meditation experience on electroencephalography data by artificial neural network and support vector machine classifiers.

    PubMed

    Lee, Yu-Hao; Hsieh, Ya-Ju; Shiah, Yung-Jong; Lin, Yu-Huei; Chen, Chiao-Yun; Tyan, Yu-Chang; GengQiu, JiaCheng; Hsu, Chung-Yao; Chen, Sharon Chia-Ju

    2017-04-01

To quantitate the meditation experience is a subjective and complex issue because it is confounded by many factors such as emotional state, method of meditation, and personal physical condition. In this study, we propose a strategy with a cross-sectional analysis to evaluate the meditation experience with 2 artificial intelligence techniques: artificial neural network and support vector machine. Within this analysis system, 3 features of the electroencephalography alpha spectrum and variant normalizing scaling are manipulated as the evaluating variables for the detection of accuracy. Thereafter, by modulating the sliding window (the period of the analyzed data) and the shifting interval of the window (the time interval by which the analyzed data are shifted), the effect of immediate analysis for the 2 methods is compared. This analysis system is performed on 3 meditation groups, categorizing their meditation experiences in 10-year intervals from novice to junior and to senior. After exhaustive calculation and cross-validation across all variables, a high accuracy rate (>98%) is achievable under the criterion of a 0.5-minute sliding window and a 2-second shifting interval for both methods. In short, the minimum analyzable data length is 0.5 minute and the minimum recognizable temporal resolution is 2 seconds in the decision of meditative classification. Our proposed classifier of the meditation experience enables a rapid evaluation system to distinguish meditation experience and a beneficial utilization of artificial intelligence techniques for big-data analysis.
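The sliding-window/shifting-interval segmentation described above can be sketched as follows. The 128 Hz sampling rate and the synthetic 10 Hz (alpha-band) test signal are assumptions for illustration, not values from the abstract:

```python
import numpy as np

def sliding_windows(signal, fs, window_s=30.0, shift_s=2.0):
    """Segment a channel into analysis windows: window_s is the sliding
    window length (0.5 min above), shift_s the shifting interval."""
    w = int(window_s * fs)          # samples per window
    s = int(shift_s * fs)           # samples between window starts
    starts = range(0, len(signal) - w + 1, s)
    return np.stack([signal[i:i + w] for i in starts])

# 5 minutes of a synthetic 128 Hz signal with a 10 Hz (alpha-band) tone
fs = 128
sig = np.sin(2 * np.pi * 10 * np.arange(5 * 60 * fs) / fs)
wins = sliding_windows(sig, fs)
print(wins.shape)   # → (136, 3840)
```

Each row would then be reduced to the alpha-spectrum features fed to the ANN or SVM classifier; shrinking `shift_s` raises the temporal resolution of the decisions at the cost of more windows to evaluate.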

  4. Radial basis function network-based transform for a nonlinear support vector machine as optimized by a particle swarm optimization algorithm with application to QSAR studies.

    PubMed

    Tang, Li-Juan; Zhou, Yan-Ping; Jiang, Jian-Hui; Zou, Hong-Yan; Wu, Hai-Long; Shen, Guo-Li; Yu, Ru-Qin

    2007-01-01

The support vector machine (SVM) has been receiving increasing interest in the area of QSAR studies for its ability in function approximation and remarkable generalization performance. However, the selection of support vectors and intensive optimization of the kernel width of a nonlinear SVM are prone to getting trapped in local optima, leading to an increased risk of underfitting or overfitting. To overcome these problems, a new nonlinear SVM algorithm is proposed using an adaptive kernel transform based on a radial basis function network (RBFN) as optimized by particle swarm optimization (PSO). The new algorithm incorporates a nonlinear transform of the original variables to feature space via an RBFN with one input and one hidden layer. Such a transform intrinsically yields a kernel transform of the original variables. A synergetic optimization of all parameters, including kernel centers and kernel widths as well as SVM model coefficients, using PSO enables the determination of a flexible kernel transform according to the performance of the total model. The implementation of PSO demonstrates a relatively high efficiency in convergence to a desired optimum. Applications of the proposed algorithm to QSAR studies of the binding affinity of HIV-1 reverse transcriptase inhibitors and the activity of 1-phenylbenzimidazoles reveal that the new algorithm provides performance superior to the backpropagation neural network and a conventional nonlinear SVM, indicating that this algorithm holds great promise in nonlinear SVM learning.
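The RBFN-based transform amounts to mapping inputs onto radial basis features before a linear model is applied; a minimal sketch with fixed centers and width standing in for the PSO-optimized parameters (all values hypothetical):

```python
import numpy as np

def rbf_transform(X, centers, width):
    """Map inputs to radial basis features. In the paper's algorithm the
    centers and width would be tuned by PSO; here they are fixed."""
    # Squared distances between every sample and every center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Toy data mapped onto two hypothetical kernel centers
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.1], [0.9, 1.0]])
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
phi = rbf_transform(X, centers, width=0.5)
print(phi.shape)   # → (4, 2)
```

A linear model (or linear SVM) fitted on `phi` is then nonlinear in the original variables, which is the sense in which the RBFN "intrinsically yields a kernel transform".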

  5. Analysis of complex network performance and heuristic node removal strategies

    NASA Astrophysics Data System (ADS)

    Jahanpour, Ehsan; Chen, Xin

    2013-12-01

    Removing important nodes from complex networks is a great challenge in fighting against criminal organizations and preventing disease outbreaks. Six network performance metrics, including four new metrics, are applied to quantify networks' diffusion speed, diffusion scale, homogeneity, and diameter. In order to efficiently identify nodes whose removal maximally destroys a network, i.e., minimizes network performance, ten structured heuristic node removal strategies are designed using different node centrality metrics including degree, betweenness, reciprocal closeness, complement-derived closeness, and eigenvector centrality. These strategies are applied to remove nodes from the September 11, 2001 hijackers' network, and their performance is compared to that of a random strategy, which removes randomly selected nodes, and the locally optimal solution (LOS), which removes nodes to minimize network performance at each step. The computational complexity of the 11 strategies and LOS is also analyzed. Results show that the node removal strategies using degree and betweenness centralities are more efficient than other strategies.
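
    A minimal sketch of the degree-based heuristic described above (the toy graph and removal budget are hypothetical; the paper also considers betweenness, closeness, and eigenvector variants): repeatedly remove the highest-degree remaining node, then measure the surviving largest connected component.

```python
from collections import defaultdict

def largest_component(adj, removed):
    """Size of the largest connected component, ignoring removed nodes."""
    seen, best = set(), 0
    for start in adj:
        if start in removed or start in seen:
            continue
        stack, comp = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v not in removed and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best

def degree_removal(adj, k):
    """Greedy heuristic: remove the k highest-degree remaining nodes."""
    removed = set()
    for _ in range(k):
        target = max((n for n in adj if n not in removed),
                     key=lambda n: sum(1 for v in adj[n] if v not in removed))
        removed.add(target)
    return removed

# Toy hub graph: node 0 bridges two otherwise separate groups.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (0, 4), (0, 5), (4, 5)]
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

removed = degree_removal(adj, 1)  # the degree strategy picks the hub, node 0
```

Removing the single hub shrinks the largest component from 6 nodes to 2, which is the sense in which a good heuristic "maximally destroys" the network.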

  6. Wireless Local Area Network Performance Inside Aircraft Passenger Cabins

    NASA Technical Reports Server (NTRS)

    Whetten, Frank L.; Soroker, Andrew; Whetten, Dennis A.; Whetten, Frank L.; Beggs, John H.

    2005-01-01

    An examination of IEEE 802.11 wireless network performance within an aircraft fuselage is performed. This examination measured the propagated RF power along the length of the fuselage, and the associated network performance: the link speed, total throughput, and packet losses and errors. A total of four airplanes (one single-aisle and three twin-aisle) were tested with 802.11a, 802.11b, and 802.11g networks.

  7. Performance Evaluation of Network Centric Warfare Oriented Intelligent Systems

    DTIC Science & Technology

    2001-09-01

    Performance Evaluation of Network Centric Warfare Oriented Intelligent Systems. Edward Dawidowicz, Member, IEEE. Abstract: The concepts of Network...performance evaluation of NCW oriented intelligent systems. The warfighter desires the 'right' information at the 'right' time. Such information can be...

  8. Static internal performance of a two-dimensional convergent-divergent nozzle with thrust vectoring

    NASA Technical Reports Server (NTRS)

    Bare, E. Ann; Reubush, David E.

    1987-01-01

    A parametric investigation of the static internal performance of multifunction two-dimensional convergent-divergent nozzles has been made in the static test facility of the Langley 16-Foot Transonic Tunnel. All nozzles had a constant throat area and aspect ratio. The effects of upper and lower flap angles, divergent flap length, throat approach angle, sidewall containment, and throat geometry were determined. All nozzles were tested at a thrust vector angle that varied from 5.60 to 23.00 deg. The nozzle pressure ratio was varied up to 10 for all configurations.

  9. Using adaline neural network for performance improvement of smart antennas in TDD wireless communications.

    PubMed

    Kavak, Adnan; Yigit, Halil; Ertunc, H Metin

    2005-11-01

    In time-division-duplex (TDD) mode wireless communications, downlink beamforming performance of a smart antenna system at the base station can be degraded due to variation of spatial signature vectors corresponding to mobile users, especially in fast fading scenarios. To mitigate this, downlink beams must be controlled by properly adjusting their weight vectors in response to changing propagation dynamics. This can be achieved by modeling the spatial signature vectors in the uplink period and then predicting them to be used as beamforming weight vectors for the new mobile position in the downlink transmission period. We show that ADAptive LInear NEuron (ADALINE) network modeling based prediction of spatial signatures provides a certain level of performance improvement compared to the conventional beamforming method that employs the spatial signature obtained in the previous uplink interval. We compare the performance of ADALINE with autoregressive (AR) modeling based predictions under varying channel propagation (mobile speed, multipath angle spread, and number of multipaths) and filter order/delay conditions. ADALINE modeling outperforms AR modeling in terms of downlink SNR improvement and relative error improvement, especially under high mobile speeds, i.e., V = 100 km/h.
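
    An ADALINE predictor is, at its core, an LMS-trained linear neuron. A minimal real-valued, one-step-ahead sketch (the filter order, step size, and toy series are illustrative assumptions; the paper applies this idea to complex-valued spatial-signature vectors, one filter per antenna element):

```python
def adaline_predict(series, order=3, mu=0.01, epochs=50):
    """Train an ADALINE (LMS) one-step-ahead predictor on a scalar series.

    Returns the learned tap weights and the prediction for the next sample.
    """
    w = [0.0] * order
    for _ in range(epochs):
        for t in range(order, len(series)):
            x = series[t - order:t]                        # input tap vector
            y = sum(wi * xi for wi, xi in zip(w, x))       # linear neuron output
            e = series[t] - y                              # prediction error
            w = [wi + mu * e * xi for wi, xi in zip(w, x)] # LMS weight update
    x = series[-order:]
    return w, sum(wi * xi for wi, xi in zip(w, x))

# Noiseless ramp: the next value is perfectly linearly predictable,
# so the trained neuron should extrapolate close to 4.0.
series = [0.1 * t for t in range(40)]
w, pred = adaline_predict(series)
```

The same loop applied per uplink interval yields the predicted spatial signature used as the downlink weight vector.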

  10. Performance characteristics of a one-third-scale, vectorable ventral nozzle for SSTOVL aircraft

    NASA Technical Reports Server (NTRS)

    Esker, Barbara S.; Mcardle, Jack G.

    1990-01-01

    Several proposed configurations for supersonic short takeoff, vertical landing aircraft will require one or more ventral nozzles for lift and pitch control. The swivel nozzle is one possible ventral nozzle configuration. A swivel nozzle (approximately one-third scale) was built and tested on a generic model tailpipe. This nozzle was capable of vectoring the flow up to + or - 23 deg from the vertical position. Steady-state performance data were obtained at pressure ratios to 4.5, and pitot-pressure surveys of the nozzle exit plane were made. Two configurations were tested: the swivel nozzle with a square contour of the leading edge of the ventral duct inlet, and the same nozzle with a round leading edge contour. The swivel nozzle showed good performance overall, and the round-leading edge configuration showed an improvement in performance over the square-leading edge configuration.

  11. Supporting performance and configuration management of GTE cellular networks

    SciTech Connect

    Tan, Ming; Lafond, C.; Jakobson, G.; Young, G.

    1996-12-31

    GTE Laboratories, in cooperation with GTE Mobilnet, has developed and deployed PERFEX (PERFormance EXpert), an intelligent system for performance and configuration management of cellular networks. PERFEX assists cellular network performance and radio engineers in the analysis of large volumes of cellular network performance and configuration data. It helps them locate and determine the probable causes of performance problems, and provides intelligent suggestions about how to correct them. The system combines an expert cellular network performance tuning capability with a map-based graphical user interface, data visualization programs, and a set of special cellular engineering tools. PERFEX is in daily use at more than 25 GTE Mobile Switching Centers. Since the first deployment of the system in late 1993, PERFEX has become a major GTE cellular network performance optimization tool.

  12. Improving matrix-vector product performance and multi-level preconditioning for the parallel PCG package

    SciTech Connect

    McLay, R.T.; Carey, G.F.

    1996-12-31

    In this study we consider parallel solution of sparse linear systems arising from discretized PDEs. As part of our continuing work on our parallel PCG Solver package, we have made improvements in two areas. The first is improving the performance of the matrix-vector product. Here on regular finite-difference grids, we are able to use the cache memory more efficiently for smaller domains or where there are multiple degrees of freedom. The second problem of interest in the present work is the construction of preconditioners in the context of the parallel PCG solver we are developing. Here the problem is partitioned over a set of processor subdomains and the matrix-vector product for PCG is carried out in parallel for overlapping grid subblocks. For problems of scaled speedup, the actual rate of convergence of the unpreconditioned system deteriorates as the mesh is refined. Multigrid and subdomain strategies provide a logical approach to resolving the problem. We consider the parallel trade-offs between communication and computation and provide a complexity analysis of a representative algorithm. Some preliminary calculations using the parallel package and comparisons with other preconditioners are provided together with parallel performance results.
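
    The two kernels discussed above, the sparse matrix-vector product and the CG iteration (shown here unpreconditioned and serial), can be sketched as follows. The CSR storage layout and the 1-D Laplacian test matrix are illustrative assumptions, not the package's actual data structures.

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x for a sparse matrix stored in CSR form."""
    y = []
    for r in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y

def cg(values, col_idx, row_ptr, b, iters=50, tol=1e-10):
    """Unpreconditioned conjugate gradients for an SPD CSR matrix."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x, with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = csr_matvec(values, col_idx, row_ptr, p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# 1-D Laplacian (tridiagonal [-1, 2, -1]) on 5 points, built in CSR storage.
n = 5
values, col_idx, row_ptr = [], [], [0]
for i in range(n):
    for j, v in ((i - 1, -1.0), (i, 2.0), (i + 1, -1.0)):
        if 0 <= j < n:
            values.append(v)
            col_idx.append(j)
    row_ptr.append(len(values))

b = [1.0] * n
x = cg(values, col_idx, row_ptr, b)
```

In the parallel setting each subdomain owns a block of rows, and the matvec requires exchanging only the halo entries of `x` with neighboring processors, which is the communication/computation trade-off the abstract analyzes.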

  13. Vector flow mapping analysis of left ventricular energetic performance in healthy adult volunteers.

    PubMed

    Akiyama, Koichi; Maeda, Sachiko; Matsuyama, Tasuku; Kainuma, Atsushi; Ishii, Maki; Naito, Yoshifumi; Kinoshita, Mao; Hamaoka, Saeko; Kato, Hideya; Nakajima, Yasufumi; Nakamura, Naotoshi; Itatani, Keiichi; Sawa, Teiji

    2017-01-09

    Vector flow mapping, a novel flow visualization echocardiographic technology, is increasing in popularity. Energy loss reference values for children have been established using vector flow mapping, but those for adults have not yet been provided. We aimed to establish reference values in healthy adults for energy loss, kinetic energy in the left ventricular outflow tract, and the energetic performance index (defined as the ratio of kinetic energy to energy loss over one cardiac cycle). Transthoracic echocardiography was performed in fifty healthy volunteers, and the stored images were analyzed to calculate energy loss, kinetic energy, and energetic performance index and obtain ranges of reference values for these. Mean energy loss over one cardiac cycle ranged from 10.1 to 59.1 mW/m (mean ± SD, 27.53 ± 13.46 mW/m), with a reference range of 10.32 ~ 58.63 mW/m. Mean systolic energy loss ranged from 8.5 to 80.1 (23.52 ± 14.53) mW/m, with a reference range of 8.86 ~ 77.30 mW/m. Mean diastolic energy loss ranged from 7.9 to 86 (30.41 ± 16.93) mW/m, with a reference range of 8.31 ~ 80.36 mW/m. Mean kinetic energy in the left ventricular outflow tract over one cardiac cycle ranged from 200 to 851.6 (449.74 ± 177.51) mW/m with a reference range of 203.16 ~ 833.15 mW/m. The energetic performance index ranged from 5.3 to 37.6 (18.48 ± 7.74), with a reference range of 5.80 ~ 36.67. Energy loss, kinetic energy, and energetic performance index reference values were defined using vector flow mapping. These reference values enable the assessment of various cardiac conditions in any clinical situation.

  14. Network interface unit design options performance analysis

    NASA Technical Reports Server (NTRS)

    Miller, Frank W.

    1991-01-01

    An analysis is presented of three design options for the Space Station Freedom (SSF) onboard Data Management System (DMS) Network Interface Unit (NIU). The NIU provides the interface from the Fiber Distributed Data Interface (FDDI) local area network (LAN) to the DMS processing elements. The FDDI LAN provides the primary means for command and control and low and medium rate telemetry data transfers on board the SSF. The results of this analysis provide the basis for the implementation of the NIU.

  15. Performance of a novel micro force vector sensor and outlook into its biomedical applications

    NASA Astrophysics Data System (ADS)

    Meiss, Thorsten; Rossner, Tim; Minamisava Faria, Carlos; Völlmeke, Stefan; Opitz, Thomas; Werthschützky, Roland

    2011-05-01

    For the HapCath system, which provides haptic feedback of the forces acting on a guide wire's tip during vascular catheterization, very small piezoresistive force sensors of 200 × 200 × 640 μm³ have been developed. This paper focuses on the characterization of the measurement performance and on possible new applications. Besides the determination of the dynamic measurement performance, special focus is put on the results of the 3-component force vector calibration. This article addresses the special advantageous characteristics of the sensor as well as the limits of its applicability. Building on these characteristics, the second part of the article demonstrates new applications that can be opened up with the novel force sensor, such as automatic navigation of medical or biological instruments without impacting surrounding tissue, surface roughness evaluation in biomedical systems, needle insertion with tactile or higher level feedback, or even building tactile hairs for artificial organisms.

  16. Performance Evaluation of Plasma and Astrophysics Applications on Modern Parallel Vector Systems

    SciTech Connect

    Carter, Jonathan; Oliker, Leonid; Shalf, John

    2005-10-28

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on such platforms has become a major concern in high performance computing. The latest generation of custom-built parallel vector systems has the potential to address this concern for numerical algorithms with sufficient regularity in their computational structure. In this work, we explore two- and three-dimensional implementations of a plasma physics application, as well as a leading astrophysics package, on some of today's most powerful supercomputing platforms. Results compare performance between the vector-based Cray X1, Earth Simulator, and newly-released NEC SX-8, and the commodity-based superscalar platforms of the IBM Power3, Intel Itanium2, and AMD Opteron. Overall results show that the SX-8 attains unprecedented aggregate performance across our evaluated applications.

  17. Microseismic Network Performance Estimation: Comparing Predictions to an Earthquake Catalogue

    NASA Astrophysics Data System (ADS)

    Greig, Wesley; Ackerley, Nick

    2014-05-01

    The design of networks for monitoring induced seismicity is of critical importance as specific standards of performance are necessary. One of the difficulties involved in designing networks for monitoring induced seismicity is that it is difficult to determine whether or not the network meets these standards without first developing an earthquake catalog. We develop a tool that can assess two key measures of network performance without an earthquake catalog: location accuracy and magnitude of completeness. Site noise is measured either at existing seismic stations or as part of a noise survey. We then interpolate measured values to determine a noise map for the entire region. This information is combined with instrument noise for each station to accurately assess total ambient noise at each station. Location accuracy is evaluated according to the approach of Peters and Crosson (1972). Magnitude of completeness is computed by assuming isotropic radiation and mandating a threshold signal to noise ratio (similar to Stabile et al. 2013). We apply this tool to a seismic network in the central United States. We predict the magnitude of completeness and the location accuracy and compare predicted values with observed values generated from the existing earthquake catalog for the network. We investigate the effects of hypothetical station additions and removals to a network to simulate network expansions and station failures. We find that the addition of stations to areas of low noise results in significantly larger improvements in network performance than station additions to areas of elevated noise, particularly with respect to magnitude of completeness. Our results highlight the importance of site noise considerations in the design of a seismic network. The ability to predict hypothetical station performance allows for the optimization of seismic network design and enables the prediction of performance for a purely hypothetical seismic network.
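
    A toy sketch of the magnitude-of-completeness idea described above, under strongly simplifying assumptions that are NOT the paper's model: signal amplitude A = 10^M / r (geometric spreading only, no anelastic attenuation), a fixed required signal-to-noise ratio, and a minimum number of triggered stations. The station coordinates and noise levels are hypothetical.

```python
import math

def station_threshold(noise, dist_km, snr=3.0):
    """Smallest magnitude a station can detect under the toy amplitude
    model A = 10**M / r, given its total ambient noise level."""
    return math.log10(snr * noise * dist_km)

def completeness(stations, point, n_required=4, snr=3.0):
    """Network magnitude of completeness at `point`: an event must exceed
    the detection threshold of at least `n_required` stations."""
    thresholds = sorted(
        station_threshold(noise, max(1e-3, math.dist(point, (x, y))), snr)
        for x, y, noise in stations
    )
    return thresholds[n_required - 1]

# Hypothetical stations: (x km, y km, noise amplitude). The last station
# sits at an elevated-noise site and limits the network.
stations = [(0, 0, 1e-3), (10, 0, 1e-3), (0, 10, 1e-3), (10, 10, 1e-2)]
mc = completeness(stations, (5.0, 5.0))
```

The noisy fourth station controls the result, which mirrors the paper's finding that adding stations at low-noise sites improves completeness far more than adding them at noisy sites.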

  18. On the MAC/network/energy performance evaluation of Wireless Sensor Networks: Contrasting MPH, AODV, DSR and ZTR routing protocols.

    PubMed

    Del-Valle-Soto, Carolina; Mex-Perera, Carlos; Orozco-Lugo, Aldo; Lara, Mauricio; Galván-Tejada, Giselle M; Olmedo, Oscar

    2014-12-02

    Wireless Sensor Networks deliver valuable information for long periods, so it is desirable to have optimum performance, reduced delays, low overhead, and reliable delivery of information. In this work, proposed metrics that influence energy consumption are used for a performance comparison among our proposed routing protocol, called Multi-Parent Hierarchical (MPH), and the well-known sensor network protocols Ad hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR), and Zigbee Tree Routing (ZTR), all of them working with the IEEE 802.15.4 MAC layer. Results show how some communication metrics affect performance, throughput, reliability and energy consumption. It can be concluded that MPH is an efficient protocol since it reaches the best performance against the other three protocols under evaluation, such as 19.3% reduction of packet retransmissions, 26.9% decrease of overhead, and 41.2% improvement on the capacity of the protocol for recovering the topology from failures with respect to the AODV protocol. We implemented and tested MPH in a real network of 99 nodes during ten days and analyzed parameters such as number of hops, connectivity and delay, in order to validate our simulator and obtain reliable results. Moreover, an energy model of the CC2530 chip is proposed and used for simulations of the four aforementioned protocols, showing that MPH has 15.9% reduction of energy consumption with respect to AODV, 13.7% versus DSR, and 5% against ZTR.

  19. On the MAC/Network/Energy Performance Evaluation of Wireless Sensor Networks: Contrasting MPH, AODV, DSR and ZTR Routing Protocols

    PubMed Central

    Del-Valle-Soto, Carolina; Mex-Perera, Carlos; Orozco-Lugo, Aldo; Lara, Mauricio; Galván-Tejada, Giselle M.; Olmedo, Oscar

    2014-01-01

    Wireless Sensor Networks deliver valuable information for long periods, so it is desirable to have optimum performance, reduced delays, low overhead, and reliable delivery of information. In this work, proposed metrics that influence energy consumption are used for a performance comparison among our proposed routing protocol, called Multi-Parent Hierarchical (MPH), and the well-known sensor network protocols Ad hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR), and Zigbee Tree Routing (ZTR), all of them working with the IEEE 802.15.4 MAC layer. Results show how some communication metrics affect performance, throughput, reliability and energy consumption. It can be concluded that MPH is an efficient protocol since it reaches the best performance against the other three protocols under evaluation, such as 19.3% reduction of packet retransmissions, 26.9% decrease of overhead, and 41.2% improvement on the capacity of the protocol for recovering the topology from failures with respect to the AODV protocol. We implemented and tested MPH in a real network of 99 nodes during ten days and analyzed parameters such as number of hops, connectivity and delay, in order to validate our simulator and obtain reliable results. Moreover, an energy model of the CC2530 chip is proposed and used for simulations of the four aforementioned protocols, showing that MPH has 15.9% reduction of energy consumption with respect to AODV, 13.7% versus DSR, and 5% against ZTR. PMID:25474377

  20. Error estimation for ORION baseline vector determination

    NASA Technical Reports Server (NTRS)

    Wu, S. C.

    1980-01-01

    Effects of error sources on Operational Radio Interferometry Observing Network (ORION) baseline vector determination are studied. Partial derivatives of delay observations with respect to each error source are formulated. Covariance analysis is performed to estimate the contribution of each error source to baseline vector error. System design parameters such as antenna sizes, system temperatures and provision for dual frequency operation are discussed.
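
    The covariance analysis described above amounts to propagating independent error-source variances through the partial derivatives of the observations: Cov(baseline) = J diag(σ²) Jᵀ. A minimal sketch with hypothetical partials and sigma levels (the real analysis involves many sources, e.g., clocks, troposphere, and station coordinates):

```python
def covariance_analysis(jacobian, source_sigmas):
    """Propagate independent error-source standard deviations through the
    partials d(baseline)/d(source): Cov = J * diag(sigma^2) * J^T."""
    n = len(jacobian)
    return [[sum(jacobian[i][k] * jacobian[j][k] * source_sigmas[k] ** 2
                 for k in range(len(source_sigmas)))
             for j in range(n)] for i in range(n)]

# Hypothetical partials of one baseline component with respect to two
# error sources (say, a clock offset and a troposphere delay), plus
# their assumed 1-sigma levels.
J = [[2.0, 0.5]]
sigmas = [0.01, 0.04]
cov = covariance_analysis(J, sigmas)
baseline_sigma = cov[0][0] ** 0.5
```

Each term J[i][k]² σ_k² is exactly "the contribution of each error source to baseline vector error" mentioned in the abstract.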

  1. Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks.

    PubMed

    Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo

    2014-04-21

    Software defined networking (SDN) has become the focus in the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. The optical transport network is also regarded as an important application scenario for SDN, which is adopted as the enabling technology of data communication networks (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN based DCN for large scale optical networks, which is very important for the technology selection in the future optical network deployment, has not been evaluated up to now. In this paper we have built a large scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time have been demonstrated under various network environments, such as with different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as a proof of concept for future network deployment.

  2. Using high-performance networks to enable computational aerosciences applications

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1992-01-01

    One component of the U.S. Federal High Performance Computing and Communications Program (HPCCP) is the establishment of a gigabit network to provide a communications infrastructure for researchers across the nation. This gigabit network will provide new services and capabilities, in addition to increased bandwidth, to enable future applications. An understanding of these applications is necessary to guide the development of the gigabit network and other high-performance networks of the future. In this paper we focus on computational aerosciences applications run remotely using the Numerical Aerodynamic Simulation (NAS) facility located at NASA Ames Research Center. We characterize these applications in terms of network-related parameters and relate user experiences that reveal limitations imposed by the current wide-area networking infrastructure. Then we investigate how the development of a nationwide gigabit network would enable users of the NAS facility to work in new, more productive ways.

  3. Support Vector Machine and Artificial Neural Network Models for the Classification of Grapevine Varieties Using a Portable NIR Spectrophotometer

    PubMed Central

    Gutiérrez, Salvador; Tardaguila, Javier; Fernández-Novales, Juan; Diago, María P.

    2015-01-01

    The identification of different grapevine varieties, currently attended using visual ampelometry, DNA analysis and, very recently, hyperspectral analysis under laboratory conditions, is an issue of great importance in the wine industry. This work presents support vector machine and artificial neural network modelling for grapevine varietal classification from in-field leaf spectroscopy. Modelling was attempted at two scales: site-specific and global. Spectral measurements were obtained in the near-infrared (NIR) spectral range between 1600 and 2400 nm under field conditions in a non-destructive way using a portable spectrophotometer. For the site-specific approach, spectra were collected from the adaxial side of 400 individual leaves of 20 grapevine (Vitis vinifera L.) varieties one week after veraison. For the global model, two additional sets of spectra were collected one week before harvest from two different vineyards in another vintage, each one consisting of 48 measurements from individual leaves of six varieties. Several combinations of spectral scatter correction and smoothing filtering were studied. For the training of the models, support vector machines and artificial neural networks were employed using the pre-processed spectra as input and the varieties as the classes of the models. The results from the pre-processing study showed that scatter correction had no influence either way, and that second-derivative Savitzky-Golay filtering with a window size of 5 yielded the best outcomes. For the site-specific model, with 20 classes, the best classifiers yielded an overall score of 87.25% correctly classified samples. These results were compared under the same conditions with a model trained using partial least squares discriminant analysis, which showed a worse performance in every case. For the global model, a 6-class dataset involving samples from three different vineyards, two years and leaves
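
    The winning pre-processing step, a second-derivative Savitzky-Golay filter, can be sketched without any signal-processing library by fitting a quadratic in each sliding window and taking its second derivative. The window width matches the study's size of 5; the toy "spectrum" is an illustrative stand-in for real NIR data.

```python
def savgol_second_derivative(y, half=2):
    """Second derivative of `y` via a quadratic fit to each sliding window
    of width 2*half+1 (the Savitzky-Golay idea, unit sample spacing)."""
    xs = list(range(-half, half + 1))
    # Normal-equation sums for the symmetric design [1, x, x^2];
    # odd-power sums vanish, decoupling a1 from (a0, a2).
    s0, s2, s4 = len(xs), sum(x * x for x in xs), sum(x ** 4 for x in xs)
    out = []
    for i in range(half, len(y) - half):
        w = y[i - half:i + half + 1]
        m0 = sum(w)
        m2 = sum(x * x * wi for x, wi in zip(xs, w))
        # Solve [[s0, s2], [s2, s4]] @ [a0, a2] = [m0, m2] for a2.
        a2 = (s0 * m2 - s2 * m0) / (s0 * s4 - s2 * s2)
        out.append(2.0 * a2)          # d2/dx2 of a0 + a1*x + a2*x^2
    return out

# On an exact quadratic the estimate recovers the true second derivative (6).
spectrum = [3.0 * t * t for t in range(10)]
d2 = savgol_second_derivative(spectrum)
```

In practice each raw spectrum would be filtered this way before being fed to the SVM or ANN classifier.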

  4. Performance characteristics of two multiaxis thrust-vectoring nozzles at Mach numbers up to 1.28

    NASA Technical Reports Server (NTRS)

    Wing, David J.; Capone, Francis J.

    1993-01-01

    The thrust-vectoring axisymmetric (VA) nozzle and a spherical convergent flap (SCF) thrust-vectoring nozzle were tested along with a baseline nonvectoring axisymmetric (NVA) nozzle in the Langley 16-Foot Transonic Tunnel at Mach numbers from 0 to 1.28 and nozzle pressure ratios from 1 to 8. Test parameters included geometric yaw vector angle and unvectored divergent flap length. No pitch vectoring was studied. Nozzle drag, thrust minus drag, yaw thrust vector angle, discharge coefficient, and static thrust performance were measured and analyzed, as well as external static pressure distributions. The NVA nozzle and the VA nozzle displayed higher static thrust performance than the SCF nozzle throughout the nozzle pressure ratio (NPR) range tested. The NVA nozzle had higher overall thrust minus drag than the other nozzles throughout the NPR and Mach number ranges tested. The SCF nozzle had the lowest jet-on nozzle drag of the three nozzles throughout the test conditions. The SCF nozzle provided yaw thrust angles that were equal to the geometric angle and constant with NPR. The VA nozzle achieved yaw thrust vector angles that were significantly higher than the geometric angle but not constant with NPR. Nozzle drag generally increased with increases in thrust vectoring for all the nozzles tested.

  5. Alterations to the orientation of the ground reaction force vector affect sprint acceleration performance in team sports athletes.

    PubMed

    Bezodis, Neil E; North, Jamie S; Razavet, Jane L

    2017-09-01

    A more horizontally oriented ground reaction force vector is related to higher levels of sprint acceleration performance across a range of athletes. However, the effects of acute experimental alterations to the force vector orientation within athletes are unknown. Fifteen male team sports athletes completed maximal effort 10-m accelerations in three conditions following different verbal instructions intended to manipulate the force vector orientation. Ground reaction forces (GRFs) were collected from the step nearest 5-m and stance leg kinematics at touchdown were also analysed to understand specific kinematic features of touchdown technique which may influence the consequent force vector orientation. Magnitude-based inferences were used to compare findings between conditions. There was a likely more horizontally oriented ground reaction force vector and a likely lower peak vertical force in the control condition compared with the experimental conditions. 10-m sprint time was very likely quickest in the control condition which confirmed the importance of force vector orientation for acceleration performance on a within-athlete basis. The stance leg kinematics revealed that a more horizontally oriented force vector during stance was preceded at touchdown by a likely more dorsiflexed ankle, a likely more flexed knee, and a possibly or likely greater hip extension velocity.
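
    The quantity at the heart of this study, the orientation of the GRF vector relative to vertical, is a one-line computation from the horizontal and vertical force components (the force values below are hypothetical mid-acceleration magnitudes):

```python
import math

def grf_orientation(f_horizontal, f_vertical):
    """Angle of the ground reaction force vector from vertical, in degrees.
    Larger values mean a more horizontally oriented vector."""
    return math.degrees(math.atan2(f_horizontal, f_vertical))

# Hypothetical propulsive and vertical forces in newtons.
angle = grf_orientation(450.0, 1400.0)
```

A purely vertical force gives 0 deg; the finding above is that faster accelerators produce larger angles for a given step.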

  6. Network Coded TCP (CTCP) Performance over Satellite Networks

    DTIC Science & Technology

    2013-12-22

    control mechanism based off of H-TCP that opens the congestion window quickly to overcome the challenges of large latency networks. Preliminary results are...Solomon (RS) coding with TCP to overcome this issue, but it requires the use of explicit congestion notification (ECN) and the RS code can result in... TCP and Hybla TCP) for high packet loss rates (e.g., > 2.5%). We then explore the possibility of a modified congestion control mechanism based off of

  7. Building and measuring a high performance network architecture

    SciTech Connect

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck; Dugan, Jon; Wheeler, David; Wing, William R; Nickless, William; Goddard, Gregory; Corbato, Steven; Love, E. Paul; Daspit, Paul; Edwards, Hal; Mercer, Linden; Koester, David; Decina, Basil; Dart, Eli; Paul Reisinger, Paul; Kurihara, Riki; Zekauskas, Matthew J; Plesset, Eric; Wulf, Julie; Luce, Douglas; Rogers, James; Duncan, Rex; Mauth, Jeffery

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest performance networks in the world. At SC2000, large-scale and complex local and wide area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. This testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high performance networking technologies and to accumulate measurements that will give insights into the networks of the future.

  8. Brain Network Organization and Social Executive Performance in Frontotemporal Dementia.

    PubMed

    Sedeño, Lucas; Couto, Blas; García-Cordero, Indira; Melloni, Margherita; Baez, Sandra; Morales Sepúlveda, Juan Pablo; Fraiman, Daniel; Huepe, David; Hurtado, Esteban; Matallana, Diana; Kuljis, Rodrigo; Torralva, Teresa; Chialvo, Dante; Sigman, Mariano; Piguet, Olivier; Manes, Facundo; Ibanez, Agustin

    2016-02-01

    Behavioral variant frontotemporal dementia (bvFTD) is characterized by early atrophy in the frontotemporoinsular regions. These regions overlap with networks that are engaged in social cognition and executive functions, two hallmark deficits of bvFTD. We examine (i) whether Network Centrality (a graph theory metric that measures how important a node is in a brain network) in the frontotemporoinsular network is disrupted in bvFTD, and (ii) the level of involvement of this network in social-executive performance. Patients with probable bvFTD, healthy controls, and frontoinsular stroke patients underwent functional MRI resting-state recordings and completed social-executive behavioral measures. Relative to the controls and the stroke group, the bvFTD patients presented decreased Network Centrality. In addition, this measure was associated with social cognition and executive functions. To test the specificity of these results for the Network Centrality of the frontotemporoinsular network, we assessed the main areas from six resting-state networks. No group differences or behavioral associations were found in these networks. Finally, Network Centrality and behavior distinguished bvFTD patients from the other groups with a high classification rate. bvFTD selectively affects Network Centrality in the frontotemporoinsular network, which is associated with a high-level social and executive profile.

  9. Performance enhancement for a GPS vector-tracking loop utilizing an adaptive iterated extended Kalman filter.

    PubMed

    Chen, Xiyuan; Wang, Xiying; Xu, Yuan

    2014-12-09

    This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system that has model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error, and based on this, a modified iterated extended Kalman filter (IEKF) named the adaptive iterated extended Kalman filter (AIEKF) is proposed. A vector-tracking GPS receiver utilizing the AIEKF is implemented to evaluate the performance of the proposed method. Through road tests, it is shown that the proposed method has an obvious accuracy advantage over the IEKF and the adaptive extended Kalman filter (AEKF) in position determination. The results show that the proposed method is effective in reducing the root-mean-square error (RMSE) of position (including longitude, latitude and altitude). Compared with the EKF, the position RMSE values of the AIEKF are reduced by about 45.1%, 40.9% and 54.6% in the east, north and up directions, respectively. Compared with the IEKF, the position RMSE values of the AIEKF are reduced by about 25.7%, 19.3% and 35.7% in the east, north and up directions, respectively. Compared with the AEKF, the position RMSE values of the AIEKF are reduced by about 21.6%, 15.5% and 30.7% in the east, north and up directions, respectively.
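
    The distinguishing step of an iterated EKF is relinearizing the measurement model around each successive estimate rather than only around the prior. A minimal single-update sketch for a scalar state, using a hypothetical nonlinear measurement h(x) = x² for illustration (not the GPS tracking-loop model):

```python
def iekf_update(x_prior, P, z, h, H_jac, R, iters=5):
    """One iterated-EKF measurement update for a scalar state/measurement.
    Each pass relinearizes h() at the current iterate x instead of x_prior."""
    x = x_prior
    for _ in range(iters):
        H = H_jac(x)
        K = P * H / (H * P * H + R)                       # Kalman gain
        x = x_prior + K * (z - h(x) - H * (x_prior - x))  # IEKF iterate
    P_post = (1 - K * H) * P
    return x, P_post

# Hypothetical measurement model h(x) = x**2 with true state x = 3 (z = 9).
h = lambda x: x * x
H_jac = lambda x: 2 * x
x_est, P_post = iekf_update(x_prior=2.0, P=1.0, z=9.0, h=h, H_jac=H_jac, R=0.01)
```

    With a plain EKF (one pass), the poor linearization at the prior leaves a noticeable bias; the iterations pull the estimate close to the true state, which is the accuracy advantage the abstract reports.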

  10. Performance evaluation of random forest and support vector regressions in natural hazard change detection

    NASA Astrophysics Data System (ADS)

    Eisavi, Vahid; Homayouni, Saeid

    2016-10-01

    Information on land use and land cover changes is considered a foremost requirement for monitoring environmental change. Developing change detection methodology in the remote sensing community is an active research topic. However, to the best of our knowledge, no research has been conducted so far on the application of random forest regression (RFR) and support vector regression (SVR) for natural hazard change detection from high-resolution optical remote sensing observations. Hence, the objective of this study is to examine the use of RFR and SVR to discriminate between changed and unchanged areas after a tsunami. For this study, RFR and SVR were applied to two different pilot coastlines in Indonesia and Japan. Two different remotely sensed data sets acquired by Quickbird and Ikonos sensors were used for efficient evaluation of the proposed methodology. The results demonstrated better performance of the support vector model (SVM) compared to random forest (RF), with an overall accuracy higher by 3% to 4% and a kappa coefficient higher by 0.05 to 0.07. Using McNemar's test, statistically significant differences (Z≥1.96), at the 5% significance level, between the confusion matrices of the RF classifier and the support vector classifier were observed in both study areas. The high accuracy of change detection obtained in this study confirms that these methods have the potential to be used for detecting changes due to natural hazards.
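
    McNemar's test, used above to compare the two classifiers, needs only the discordant counts from the paired predictions. A sketch with hypothetical per-pixel correctness labels (the threshold Z ≥ 1.96 is the 5% significance level cited in the abstract):

```python
import math

def mcnemar_z(correct_a, correct_b):
    """McNemar's test (with continuity correction) on two classifiers'
    per-sample correctness; |Z| >= 1.96 indicates a significant difference
    at the 5% level."""
    b = sum(1 for a, s in zip(correct_a, correct_b) if a and not s)  # A right, B wrong
    c = sum(1 for a, s in zip(correct_a, correct_b) if not a and s)  # A wrong, B right
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) / math.sqrt(b + c)

# Hypothetical correctness (1 = correct) of the SVM vs RF on 12 test pixels
svm = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1]
rf  = [1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0]
z = mcnemar_z(svm, rf)
```

    Because the test conditions only on the discordant pairs, it is a natural fit for comparing two classifiers evaluated on the same pixels.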

  11. Performance Statistics of the DWD Ceilometer Network

    NASA Astrophysics Data System (ADS)

    Wagner, Frank; Mattis, Ina; Flentje, Harald; Thomas, Werner

    2015-04-01

    The DWD ceilometer network was created in 2008. In the following years, more and more ceilometers of type CHM15k (manufacturer Jenoptik) were installed with the aim of observing atmospheric aerosol particles. Now, 58 ceilometers are in continuous operation. On the one hand, the presentation addresses the statistical behavior of several instrumental parameters related to measurement performance; problems are discussed, together with conclusions and recommendations on which parameters should be monitored for unattended automated operation. On the other hand, the presentation offers a statistical analysis of several measured quantities. Differences between geographic locations (e.g. north versus south, mountainous versus flat terrain) are investigated. For instance, the occurrence of fog in lowlands is associated with the overall meteorological situation, whereas mountain stations such as Hohenpeissenberg are often within a cumulus cloud, which appears as fog in the measurements. The longest time series of data was acquired at Lindenberg, where the ceilometer was installed in 2008. By the end of 2008 the number of installed ceilometers had increased to 28, and by the end of 2009 already 42 instruments were measuring. In 2011 the ceilometers were upgraded to the so-called Nimbus instruments, which have enhanced capabilities for coping with and correcting short-term instrumental fluctuations (e.g. detector sensitivity). About 30% of all ceilometer measurements were made under clear skies and hence can be used without limitation for aerosol particle observations. Multiple cloud layers could be detected in only about 23% of all cases with clouds, either because only one cloud layer was present or because the ceilometer laser beam could not see through the lowest cloud and hence was blind to the layers above. Three cloud layers could be detected in only 5% of all cases with clouds.
Considering only cases without clouds the diurnal cycle for

  12. Urban Heat Island Growth Modeling Using Artificial Neural Networks and Support Vector Regression: A case study of Tehran, Iran

    NASA Astrophysics Data System (ADS)

    Sherafati, Sh. A.; Saradjian, M. R.; Niazmardi, S.

    2013-09-01

    Numerous investigations of the Urban Heat Island (UHI) show that land cover change is the main factor increasing Land Surface Temperature (LST) in urban areas. Therefore, to achieve a model able to simulate UHI growth, urban expansion should be considered first. Considerable research on urban expansion modeling has been done based on cellular automata (CA). Accordingly, the objective of this paper is to implement a CA method for trend detection of Tehran UHI spatiotemporal growth based on urban sprawl parameters (such as distance to the nearest road, Digital Elevation Model (DEM), slope and aspect ratios). It should be mentioned that UHI growth modeling may be more complex than urban expansion modeling, since the amount of each pixel's temperature must be investigated instead of merely its state (urban versus non-urban). The most challenging part of a CA model is the definition of transfer rules. Here, two methods have been used to find appropriate transfer rules: Artificial Neural Networks (ANN) and Support Vector Regression (SVR). These approaches were chosen because artificial neural networks and support vector regression have significant abilities to handle the complications of such a spatial analysis in comparison with other methods like genetic or swarm intelligence. In this paper, the UHI change trend between 1984 and 2007 is discussed. For this purpose, urban sprawl parameters in 1984 were calculated and added to the retrieved LST of that year. To obtain LST, Thematic Mapper (TM) and Enhanced Thematic Mapper (ETM+) night-time images were exploited, because the UHI phenomenon is more obvious during night hours. After that, multilayer feed-forward neural networks and support vector regression were used separately to find the relationship between these data and the retrieved LST in 2007. Since the transfer rules might not be the same in different regions, the satellite image of the city has
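
    The role of a learned transfer rule in such a cellular-automaton model can be sketched as follows; the linear `rule` stand-in, the 2×2 grid and the feature values are all hypothetical placeholders for the paper's trained ANN/SVR and real LST data:

```python
def ca_step(grid, features, predict):
    """One cellular-automaton update: the transfer rule `predict` maps each
    cell's current LST, its neighborhood mean, and a static sprawl feature
    to the cell's next LST."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # 3x3 Moore neighborhood, clipped at the grid boundary
            neigh = [grid[nr][nc]
                     for nr in (r - 1, r, r + 1) for nc in (c - 1, c, c + 1)
                     if 0 <= nr < rows and 0 <= nc < cols]
            nxt[r][c] = predict(grid[r][c], sum(neigh) / len(neigh), features[r][c])
    return nxt

# Hypothetical linear stand-in for the learned ANN/SVR transfer rule
rule = lambda lst, mean_neigh, feat: 0.5 * lst + 0.4 * mean_neigh + 0.1 * feat
grid = [[20.0, 22.0], [24.0, 26.0]]   # toy LST field (degrees C)
feats = [[1.0, 0.0], [0.0, 1.0]]      # toy sprawl feature per pixel
new = ca_step(grid, feats, rule)
```

    In the paper's setting, `predict` would be the regression fitted between the 1984 inputs and the 2007 LST rather than a fixed linear map.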

  13. Vector performance analysis of three supercomputers - Cray-2, Cray Y-MP, and ETA10-Q

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod A.

    1989-01-01

    Results are presented of a series of experiments to study the single-processor performance of three supercomputers: Cray-2, Cray Y-MP, and ETA10-Q. The main objective of this study is to determine the impact of certain architectural features on the performance of modern supercomputers. Features such as clock period, memory links, memory organization, multiple functional units, and chaining are considered. A simple performance model is used to examine the impact of these features on the performance of a set of basic operations. The results of implementing this set on these machines for three vector lengths and three memory strides are presented and compared. For unit-stride operations, the Cray Y-MP outperformed the Cray-2 by as much as three times and the ETA10-Q by as much as four times. Moreover, unlike the Cray-2 and ETA10-Q, even-numbered strides do not cause a major performance degradation on the Cray Y-MP. Two numerical algorithms are also used for comparison. For three problem sizes of both algorithms, the Cray Y-MP outperformed the Cray-2 by 43 percent to 68 percent and the ETA10-Q by four to eight times.
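
    The even-stride degradation noted above is the classic bank-conflict effect in interleaved memory. A simplified model (my own illustration, not taken from the paper) shows why strides sharing a factor with the bank count hurt:

```python
from math import gcd

def effective_banks(n_banks, stride):
    """Interleaved-memory model: a stream of elements accessed with a given
    stride hits only n_banks / gcd(stride, n_banks) distinct banks, so
    strides sharing a factor with the bank count serialize accesses."""
    return n_banks // gcd(stride, n_banks)

# Hypothetical 16-bank memory: unit and odd strides use all banks,
# even strides collide and lose parallelism.
banks16 = {s: effective_banks(16, s) for s in (1, 2, 3, 4, 8, 16)}
```

    A machine whose bank count is coprime to common strides (or that has more memory paths, as on the Cray Y-MP) suffers far less from this effect.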

  14. Topology design and performance analysis of an integrated communication network

    NASA Technical Reports Server (NTRS)

    Li, V. O. K.; Lam, Y. F.; Hou, T. C.; Yuen, J. H.

    1985-01-01

    A research study on the topology design and performance analysis for the Space Station Information System (SSIS) network is conducted. It begins with a survey of existing research efforts in network topology design. Then a new approach for topology design is presented. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. The algorithm for generating subsets is described in detail, and various aspects of the overall design procedure are discussed. Two more efficient versions of this algorithm (applicable in specific situations) are also given. Next, two important aspects of network performance analysis are discussed: network reliability and message delays. A new model is introduced to study the reliability of a network with dependent failures. For message delays, a collection of formulas from existing research results is given to compute or estimate the delays of messages in a communication network without making the independence assumption. The design algorithm coded in PASCAL is included as an appendix.
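
    The core idea of generating candidate designs in increasing order of total cost can be sketched with a lazy, heap-based enumeration of subset costs; the component costs below are placeholders, and a real implementation would test each emitted design for network feasibility before accepting it:

```python
import heapq

def subsets_by_cost(costs, limit):
    """Lazily enumerate subsets of `costs` in nondecreasing total cost.
    States are keyed by the last (sorted-order) item added; each state
    spawns two successors: append the next item, or swap the last item
    for the next one. Returns up to `limit` (cost, subset) pairs."""
    order = sorted(range(len(costs)), key=lambda i: costs[i])
    c = [costs[i] for i in order]
    out = [(0.0, ())]                      # the empty design comes first
    if c:
        heap = [(c[0], (0,))]
        while heap and len(out) < limit:
            total, idx = heapq.heappop(heap)
            out.append((total, tuple(order[i] for i in idx)))
            last = idx[-1]
            if last + 1 < len(c):
                heapq.heappush(heap, (total + c[last + 1], idx + (last + 1,)))
                heapq.heappush(heap, (total - c[last] + c[last + 1],
                                      idx[:-1] + (last + 1,)))
    return out[:limit]

# Three hypothetical components costing 4.0, 1.0 and 3.0
designs = subsets_by_cost([4.0, 1.0, 3.0], limit=8)
```

    Because designs are emitted in cost order, the first one that passes the acceptability check is the true cost-optimal network, which is the guarantee the abstract claims.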

  15. Topology design and performance analysis of an integrated communication network

    NASA Astrophysics Data System (ADS)

    Li, V. O. K.; Lam, Y. F.; Hou, T. C.; Yuen, J. H.

    1985-09-01

    A research study on the topology design and performance analysis for the Space Station Information System (SSIS) network is conducted. It begins with a survey of existing research efforts in network topology design. Then a new approach for topology design is presented. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. The algorithm for generating subsets is described in detail, and various aspects of the overall design procedure are discussed. Two more efficient versions of this algorithm (applicable in specific situations) are also given. Next, two important aspects of network performance analysis are discussed: network reliability and message delays. A new model is introduced to study the reliability of a network with dependent failures. For message delays, a collection of formulas from existing research results is given to compute or estimate the delays of messages in a communication network without making the independence assumption. The design algorithm coded in PASCAL is included as an appendix.

  16. Identifying cysteines and histidines in transition-metal-binding sites using support vector machines and neural networks.

    PubMed

    Passerini, Andrea; Punta, Marco; Ceroni, Alessio; Rost, Burkhard; Frasconi, Paolo

    2006-11-01

    Accurate predictions of metal-binding sites in proteins by using sequence as the only source of information can significantly help in the prediction of protein structure and function, genome annotation, and in the experimental determination of protein structure. Here, we introduce a method for identifying histidines and cysteines that participate in binding of several transition metals and iron complexes. The method predicts histidines as being in either of two states (free or metal bound) and cysteines in either of three states (free, metal bound, or in disulfide bridges). The method uses only sequence information by utilizing position-specific evolutionary profiles as well as more global descriptors such as protein length and amino acid composition. Our solution is based on a two-stage machine-learning approach. The first stage consists of a support vector machine trained to locally classify the binding state of single histidines and cysteines. The second stage consists of a bidirectional recurrent neural network trained to refine local predictions by taking into account dependencies among residues within the same protein. A simple finite state automaton is employed as a postprocessing step in the second stage in order to enforce an even number of disulfide-bonded cysteines. We predict histidines and cysteines in transition-metal-binding sites at 73% precision and 61% recall. We observe significant differences in performance depending on the ligand (histidine or cysteine) and on the metal bound. We also predict cysteines participating in disulfide bridges at 86% precision and 87% recall. Results are compared to those that would be obtained by using expert information as represented by PROSITE motifs and, for disulfide bonds, to state-of-the-art methods.
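
    The parity constraint enforced by the finite state automaton can be sketched as a two-state machine plus a simple repair rule. The labels, confidences and the flip-the-weakest repair below are hypothetical illustrations; the paper's actual postprocessing may differ:

```python
def even_bridge_fsa(labels):
    """Two-state parity automaton: accepts a protein's per-cysteine labels
    only if the number predicted as disulfide-bonded ('B') is even."""
    state = 0                      # 0 = even count so far, 1 = odd
    for lab in labels:
        if lab == 'B':
            state ^= 1
    return state == 0

def enforce_even(labels, confidences):
    """If the bridge count is odd, flip the least-confident 'B' prediction
    to free ('F') so the automaton accepts the sequence."""
    if even_bridge_fsa(labels):
        return labels
    i = min((j for j, lab in enumerate(labels) if lab == 'B'),
            key=lambda j: confidences[j])
    fixed = list(labels)
    fixed[i] = 'F'
    return fixed

# Hypothetical predictions for four cysteines in one protein
fixed = enforce_even(['B', 'F', 'B', 'B'], [0.9, 0.2, 0.4, 0.8])
```

    Disulfide bridges pair cysteines, so any biologically consistent labeling must contain an even number of bonded residues; the automaton encodes exactly that constraint.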

  17. Protein interaction networks at the host–microbe interface in Diaphorina citri, the insect vector of the citrus greening pathogen

    PubMed Central

    Chavez, J. D.; Johnson, R.; Hosseinzadeh, S.; Mahoney, J. E.; Mohr, J. P.; Robison, F.; Zhong, X.; Hall, D. G.; MacCoss, M.; Bruce, J.; Cilia, M.

    2017-01-01

    The Asian citrus psyllid (Diaphorina citri) is the insect vector responsible for the worldwide spread of ‘Candidatus Liberibacter asiaticus’ (CLas), the bacterial pathogen associated with citrus greening disease. Developmental changes in the insect vector impact pathogen transmission, such that D. citri transmission of CLas is more efficient when bacteria are acquired by nymphs when compared with adults. We hypothesize that expression changes in the D. citri immune system and commensal microbiota occur during development and regulate vector competency. In support of this hypothesis, more proteins, with greater fold changes, were differentially expressed in response to CLas in adults when compared with nymphs, including insect proteins involved in bacterial adhesion and immunity. Compared with nymphs, adult insects had a higher titre of CLas and the bacterial endosymbionts Wolbachia, Profftella and Carsonella. All Wolbachia and Profftella proteins differentially expressed between nymphs and adults are upregulated in adults, while most differentially expressed Carsonella proteins are upregulated in nymphs. Discovery of protein interaction networks has broad applicability to the study of host–microbe relationships. Using protein interaction reporter technology, a D. citri haemocyanin protein highly upregulated in response to CLas was found to physically interact with the CLas coenzyme A (CoA) biosynthesis enzyme phosphopantothenoylcysteine synthetase/decarboxylase. CLas pantothenate kinase, which catalyses the rate-limiting step of CoA biosynthesis, was found to interact with a D. citri myosin protein. Two Carsonella enzymes involved in histidine and tryptophan biosynthesis were found to physically interact with D. citri proteins. These co-evolved protein interaction networks at the host–microbe interface are highly specific targets for controlling the insect vector responsible for the spread of citrus greening. PMID:28386418

  18. Protein interaction networks at the host-microbe interface in Diaphorina citri, the insect vector of the citrus greening pathogen.

    PubMed

    Ramsey, J S; Chavez, J D; Johnson, R; Hosseinzadeh, S; Mahoney, J E; Mohr, J P; Robison, F; Zhong, X; Hall, D G; MacCoss, M; Bruce, J; Cilia, M

    2017-02-01

    The Asian citrus psyllid (Diaphorina citri) is the insect vector responsible for the worldwide spread of 'Candidatus Liberibacter asiaticus' (CLas), the bacterial pathogen associated with citrus greening disease. Developmental changes in the insect vector impact pathogen transmission, such that D. citri transmission of CLas is more efficient when bacteria are acquired by nymphs when compared with adults. We hypothesize that expression changes in the D. citri immune system and commensal microbiota occur during development and regulate vector competency. In support of this hypothesis, more proteins, with greater fold changes, were differentially expressed in response to CLas in adults when compared with nymphs, including insect proteins involved in bacterial adhesion and immunity. Compared with nymphs, adult insects had a higher titre of CLas and the bacterial endosymbionts Wolbachia, Profftella and Carsonella. All Wolbachia and Profftella proteins differentially expressed between nymphs and adults are upregulated in adults, while most differentially expressed Carsonella proteins are upregulated in nymphs. Discovery of protein interaction networks has broad applicability to the study of host-microbe relationships. Using protein interaction reporter technology, a D. citri haemocyanin protein highly upregulated in response to CLas was found to physically interact with the CLas coenzyme A (CoA) biosynthesis enzyme phosphopantothenoylcysteine synthetase/decarboxylase. CLas pantothenate kinase, which catalyses the rate-limiting step of CoA biosynthesis, was found to interact with a D. citri myosin protein. Two Carsonella enzymes involved in histidine and tryptophan biosynthesis were found to physically interact with D. citri proteins. These co-evolved protein interaction networks at the host-microbe interface are highly specific targets for controlling the insect vector responsible for the spread of citrus greening.

  19. Network based high performance concurrent computing

    SciTech Connect

    Sunderam, V.S.

    1991-01-01

    The overall objectives of this project are to investigate research issues pertaining to programming tools and efficiency issues in network based concurrent computing systems. The basis for these efforts is the PVM project that evolved during my visits to Oak Ridge Laboratories under the DOE Faculty Research Participation program; I continue to collaborate with researchers at Oak Ridge on some portions of the project.

  20. A parallel-vector algorithm for rapid structural analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.

    1990-01-01

    A fast, accurate Choleski method for the solution of symmetric systems of linear equations is presented. This direct method is based on a variable-band storage scheme and takes advantage of column heights to reduce the number of operations in the Choleski factorization. The method employs parallel computation in the outermost DO-loop and vector computation via the 'loop unrolling' technique in the innermost DO-loop. The method avoids computations with zeros outside the column heights, and as an option, zeros inside the band. The close relationship between Choleski and Gauss elimination methods is examined. The minor changes required to convert the Choleski code to a Gauss code to solve non-positive-definite symmetric systems of equations are identified. The results for two large-scale structural analyses performed on supercomputers demonstrate the accuracy and speed of the method.
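
    The column-height idea can be sketched as a Choleski factorization whose inner products skip the leading zeros outside each row's profile. This illustrative version works on dense Python lists rather than the paper's variable-band storage, and omits the parallel/unrolled loops:

```python
def skyline_cholesky(A):
    """Choleski factorization A = L L^T exploiting the matrix profile:
    first[i] is the column of row i's first nonzero, and entries of L
    outside that profile stay zero (Choleski creates no fill outside
    the envelope), so inner products start at the profile edge."""
    n = len(A)
    first = [next(j for j in range(i + 1) if A[i][j] != 0) for i in range(n)]
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j, n):
            if i != j and j < first[i]:
                continue                          # outside the profile
            start = max(first[i], first[j])       # skip zeros above column heights
            s = sum(L[i][k] * L[j][k] for k in range(start, j))
            if i == j:
                L[i][j] = (A[i][j] - s) ** 0.5
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

# Small banded symmetric positive-definite test matrix
A = [[4.0, 2.0, 0.0, 0.0],
     [2.0, 5.0, 1.0, 0.0],
     [0.0, 1.0, 3.0, 1.0],
     [0.0, 0.0, 1.0, 2.0]]
L = skyline_cholesky(A)
```

    Skipping the zeros above each column height is what cuts the operation count for structural stiffness matrices, whose profiles are typically narrow.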

  1. A parallel-vector algorithm for rapid structural analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.

    1990-01-01

    A fast, accurate Choleski method for the solution of symmetric systems of linear equations is presented. This direct method is based on a variable-band storage scheme and takes advantage of column heights to reduce the number of operations in the Choleski factorization. The method employs parallel computation in the outermost DO-loop and vector computation via the loop unrolling technique in the innermost DO-loop. The method avoids computations with zeros outside the column heights, and as an option, zeros inside the band. The close relationship between Choleski and Gauss elimination methods is examined. The minor changes required to convert the Choleski code to a Gauss code to solve non-positive-definite symmetric systems of equations are identified. The results for two large-scale structural analyses performed on supercomputers demonstrate the accuracy and speed of the method.

  2. End-to-end network/application performance troubleshooting methodology

    SciTech Connect

    Wu, Wenji; Bobyshev, Andrey; Bowden, Mark; Crawford, Matt; Demar, Phil; Grigaliunas, Vyto; Grigoriev, Maxim; Petravick, Don; /Fermilab

    2007-09-01

    The computing models for HEP experiments are globally distributed and grid-based. Obstacles to good network performance arise from many causes and can be a major impediment to the success of the computing models for HEP experiments. Factors that affect overall network/application performance exist on the hosts themselves (application software, operating system, hardware), in the local area networks that support the end systems, and within the wide area networks. Since the computer and network systems are globally distributed, it can be very difficult to locate and identify the factors that are hurting application performance. In this paper, we present an end-to-end network/application performance troubleshooting methodology developed and in use at Fermilab. The core of our approach is to narrow down the problem scope with a divide and conquer strategy. The overall complex problem is split into two distinct sub-problems: host diagnosis and tuning, and network path analysis. After satisfactorily evaluating, and if necessary resolving, each sub-problem, we conduct end-to-end performance analysis and diagnosis. The paper will discuss tools we use as part of the methodology. The long term objective of the effort is to enable site administrators and end users to conduct much of the troubleshooting themselves, before (or instead of) calling upon network and operating system 'wizards,' who are always in short supply.

  3. Switching performance of OBS network model under prefetched real traffic

    NASA Astrophysics Data System (ADS)

    Huang, Zhenhua; Xu, Du; Lei, Wen

    2005-11-01

    Optical Burst Switching (OBS) [1] is now widely considered an efficient switching technique for building the next-generation optical Internet, so it is important to evaluate the performance of the OBS network model precisely. That performance varies with conditions, but the most important question is how the model behaves under real traffic load. In traditional simulation models, uniform traffic is usually generated by simulation software to imitate the data source of the edge node in the OBS network model, and the performance of the OBS network is evaluated through it. Unfortunately, without being driven by real traffic, the traditional simulation models have several problems and their results are questionable. To deal with this problem, we present a new simulation model for analysis and performance evaluation of the OBS network, which uses prefetched IP traffic as the data source of the OBS network model. The prefetched IP traffic can be considered a real IP source for the OBS edge node, and the OBS network model has the same clock rate as a real OBS system. This model is therefore closer to a real OBS system than the traditional ones. The simulation results also indicate that this model is more accurate for evaluating the performance of the OBS network system, and its results are closer to the actual situation.

  4. Static performance of nonaxisymmetric nozzles with yaw thrust-vectoring vanes

    NASA Technical Reports Server (NTRS)

    Mason, Mary L.; Berrier, Bobby L.

    1988-01-01

    A static test was conducted in the static test facility of the Langley 16 ft Transonic Tunnel to evaluate the effects of post exit vane vectoring on nonaxisymmetric nozzles. Three baseline nozzles were tested: an unvectored two dimensional convergent nozzle, an unvectored two dimensional convergent-divergent nozzle, and a pitch vectored two dimensional convergent-divergent nozzle. Each nozzle geometry was tested with 3 exit aspect ratios (exit width divided by exit height) of 1.5, 2.5 and 4.0. Two post exit yaw vanes were externally mounted on the nozzle sidewalls at the nozzle exit to generate yaw thrust vectoring. Vane deflection angle (0, -20 and -30 deg), vane planform and vane curvature were varied during the test. Results indicate that the post exit vane concept produced resultant yaw vector angles which were always smaller than the geometric yaw vector angle. Losses in resultant thrust ratio increased with the magnitude of resultant yaw vector angle. The widest post exit vane produced the largest degree of flow turning, but vane curvature had little effect on thrust vectoring. Pitch vectoring was independent of yaw vectoring, indicating that multiaxis thrust vectoring is feasible for the nozzle concepts tested.

  5. A performance data network for solar process heat systems

    SciTech Connect

    Barker, G.; Hale, M.J.

    1996-03-01

    A solar process heat (SPH) data network has been developed to access remote-site performance data from operational solar heat systems. Each SPH system in the data network is outfitted with monitoring equipment and a datalogger. The datalogger is accessed via modem from the data network computer at the National Renewable Energy Laboratory (NREL). The dataloggers collect both ten-minute and hourly data and download it to the data network every 24 hours for archiving, processing, and plotting. The system data collected includes energy delivered (fluid temperatures and flow rates) and site meteorological conditions, such as solar insolation and ambient temperature. The SPH performance data network was created for collecting performance data from SPH systems that are serving in industrial applications or from systems using technologies that show promise for industrial applications. The network will be used to identify areas of SPH technology needing further development, to correlate computer models with actual performance, and to improve the credibility of SPH technology. The SPH data network also provides a centralized bank of user-friendly performance data that will give prospective SPH users an indication of how actual systems perform. There are currently three systems being monitored and archived under the SPH data network: two are parabolic trough systems and the third is a flat-plate system. The two trough systems both heat water for prisons; the hot water is used for personal hygiene, kitchen operations, and laundry. The flat-plate system heats water for meat processing at a slaughterhouse. We plan to connect another parabolic trough system to the network during the first months of 1996. We continue to look for good examples of systems using other types of collector technologies and systems serving new applications (such as absorption chilling) to include in the SPH performance data network.

  6. Reduced-Complexity Models for Network Performance Prediction

    DTIC Science & Technology

    2005-05-01

    The Internet consists of networks interconnected in complex ways, with millions of users sending traffic over the network. To understand such a complex system, it is necessary to develop accurate, yet simple, models to describe its performance.

  7. Performance of a Regional Aeronautical Telecommunications Network

    NASA Technical Reports Server (NTRS)

    Bretmersky, Steven C.; Ripamonti, Claudio; Konangi, Vijay K.; Kerczewski, Robert J.

    2001-01-01

    This paper reports the findings of the simulation of the ATN (Aeronautical Telecommunications Network) for three typical average-sized U.S. airports and their associated air traffic patterns. The models of the protocols were designed to achieve the same functionality and meet the ATN specifications. The focus of this project is on the subnetwork and routing aspects of the simulation. To maintain continuous communication between aircraft and ground facilities, a model based on mobile IP is used. The results indicate that continuous communication is indeed possible. The network can support two applications of significance in the immediate future: FTP and HTTP traffic. Results from this simulation prove the feasibility of developing the ATN concept for AC/ATM (Advanced Communications for Air Traffic Management).

  8. Integrated network control and performance monitoring

    NASA Astrophysics Data System (ADS)

    Schaefer, D. J.

    A brief description is given of an integrated satellite system monitor utilizing remote control operation from a centralized satellite network operations center. This monitoring facility can measure selected Time Division Multiple Access (TDMA) transmission parameters in real time and report the measurement results to the central satellite operations center having responsibility for overall system operation. The monitor system, while similar to other existing TDMA monitors, is unique in that it features dual rate and frequency band operation in a frequency-hopped environment.

  9. Optimal Beamforming and Performance Analysis of Wireless Relay Networks with Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    Ouyang, Jian; Lin, Min

    2015-03-01

    In this paper, we investigate a wireless communication system employing a multi-antenna unmanned aerial vehicle (UAV) as the relay to improve the connectivity between the base station (BS) and the receive node (RN), where the BS-UAV link undergoes the correlated Rician fading while the UAV-RN link follows the correlated Rayleigh fading with large scale path loss. By assuming that the amplify-and-forward (AF) protocol is adopted at UAV, we first propose an optimal beamforming (BF) scheme to maximize the mutual information of the UAV-assisted dual-hop relay network, by calculating the BF weight vectors and the power allocation coefficient. Then, we derive the analytical expressions for the outage probability (OP) and the ergodic capacity (EC) of the relay network to evaluate the system performance conveniently. Finally, computer simulation results are provided to demonstrate the validity and efficiency of the proposed scheme as well as the performance analysis.
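
    Closed-form beamforming weights for a single hop can be illustrated with textbook maximum-ratio transmission (MRT); this is a special case for intuition only, not the paper's joint two-hop AF optimization, and the channel vector below is hypothetical:

```python
def mrt_weights(h):
    """Maximum-ratio transmission: the unit-norm weight vector
    w = h* / ||h|| maximizes the received SNR |sum_i h_i w_i|^2
    for a single multi-antenna transmitter and one receive node."""
    norm = sum(abs(x) ** 2 for x in h) ** 0.5
    return [x.conjugate() / norm for x in h]

# Hypothetical 3-antenna UAV-to-RN channel vector
h = [1 + 1j, 0.5 - 0.2j, -0.3 + 0.8j]
w = mrt_weights(h)
gain = abs(sum(hi * wi for hi, wi in zip(h, w))) ** 2   # equals ||h||^2
```

    Matching the weights to the conjugate channel aligns the per-antenna phases, so the array gain equals the full channel energy ||h||²; any other unit-norm vector gives less.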

  10. Capacity and performance analysis of signaling networks in multivendor environments

    NASA Astrophysics Data System (ADS)

    Bafutto, Marcos; Kuehn, Paul J.; Willmann, Gert

    1994-04-01

    The load of common channel signaling networks is being increased through the introduction of new services such as supplementary services or mobile communication services. This may lead to a performance degradation of the signaling network, which affects both the quality of the new services and of the services already offered by the network. In this paper, a generic modeling methodology for the signaling load and the signaling network performance as a result of the various communication services is extended in order to include certain implementation-dependent particularities. The models are obtained by considering the protocol functions of Signaling System No. 7 as specified by the CCITT, as well as the information flows through these functions. With this approach, virtual processor models are derived which can be mapped onto particular implementations. This allows the analysis of signaling networks in a multivendor environment. Using these principles, a signaling network planning tool concept has been developed which provides the distinct loading of hardware and software signaling network resources, and on which hierarchical performance analysis and planning procedures are based. This makes it possible to support the planning of signaling networks according to given service, load, and grade-of-service figures. A simple case study outlines the application of the tool concept to a network supporting Freephone, Credit Card, and ISDN voice services.

  11. Performance evaluation of network-based steam generator level control

    SciTech Connect

    Li, Q.; Jiang, J.

    2006-07-01

    The performance of a network-based control system (NCS) is systematically and quantitatively analyzed through simulation and experiments. A U-Tube Steam Generator (UTSG) is used as the process in this study. The simulation results show that the offset and rise time are largely unaffected by either network-induced delays or data loss, but the overshoot, range, and settling time can be influenced by delay and/or data loss. In addition, the simulation results indicate that Model Predictive Control (MPC) is more robust than its PI counterpart in tolerating network-induced delays and data loss. The experimental tests show that the introduction of a network (Foundation Fieldbus in this case) into the control loop does not degrade the overall control system performance if the network is used for a small number of control loops, but it may degrade the control system performance as more control loops are added. (authors)

  12. CyNetSVM: A Cytoscape App for Cancer Biomarker Identification Using Network Constrained Support Vector Machines

    PubMed Central

    Chen, Li; Hilakivi-Clarke, Leena; Clarke, Robert

    2017-01-01

    One of the important tasks in cancer research is to identify biomarkers and build classification models for clinical outcome prediction. In this paper, we develop a CyNetSVM software package, implemented in Java and integrated with Cytoscape as an app, to identify network biomarkers using network-constrained support vector machines (NetSVM). The Cytoscape app of NetSVM is specifically designed to improve the usability of NetSVM with the following enhancements: (1) user-friendly graphical user interface (GUI), (2) computationally efficient core program and (3) convenient network visualization capability. The CyNetSVM app has been used to analyze breast cancer data to identify network genes associated with breast cancer recurrence. The biological function of these network genes is enriched in signaling pathways associated with breast cancer progression, showing the effectiveness of CyNetSVM for cancer biomarker identification. The CyNetSVM package is available at Cytoscape App Store and http://sourceforge.net/projects/netsvmjava; a sample data set is also provided at sourceforge.net. PMID:28122019

  13. CyNetSVM: A Cytoscape App for Cancer Biomarker Identification Using Network Constrained Support Vector Machines.

    PubMed

    Shi, Xu; Banerjee, Sharmi; Chen, Li; Hilakivi-Clarke, Leena; Clarke, Robert; Xuan, Jianhua

    2017-01-01

    One of the important tasks in cancer research is to identify biomarkers and build classification models for clinical outcome prediction. In this paper, we develop a CyNetSVM software package, implemented in Java and integrated with Cytoscape as an app, to identify network biomarkers using network-constrained support vector machines (NetSVM). The Cytoscape app of NetSVM is specifically designed to improve the usability of NetSVM with the following enhancements: (1) user-friendly graphical user interface (GUI), (2) computationally efficient core program and (3) convenient network visualization capability. The CyNetSVM app has been used to analyze breast cancer data to identify network genes associated with breast cancer recurrence. The biological function of these network genes is enriched in signaling pathways associated with breast cancer progression, showing the effectiveness of CyNetSVM for cancer biomarker identification. The CyNetSVM package is available at Cytoscape App Store and http://sourceforge.net/projects/netsvmjava; a sample data set is also provided at sourceforge.net.

  14. An intercomparison of different topography effects on discrimination performance of fuzzy change vector analysis algorithm

    NASA Astrophysics Data System (ADS)

    Singh, Sartajvir; Talwar, Rajneesh

    2016-12-01

    Detection of snow cover changes is vital for analysis of avalanche hazards and flash floods that arise due to variation in temperature. Hence, multitemporal change detection is one of the practical means to estimate snow cover changes over a large area using remotely sensed data. Some previous studies have examined how the accuracy of change detection analysis is affected by different topography effects over the Northwestern Indian Himalayas. The present work emphasizes the intercomparison of different topography effects on the discrimination performance of fuzzy-based change vector analysis (FCVA), a change detection algorithm that extracts change magnitude and change direction for each pixel while allowing a pixel to hold multiple or partial class memberships. Qualitative and quantitative analyses of the FCVA algorithm are performed with and without topographic correction. The experimental outcomes confirm that, in the change-category discrimination procedure, FCVA with topographic correction achieves 86.8% overall accuracy, and a 4.8% decline (to 82% overall accuracy) is found for FCVA without topographic correction. This study suggests that by incorporating a topographic correction model for satellite imagery of mountainous regions, the performance of the FCVA algorithm can be significantly improved in terms of determining actual change categories.
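The crisp core of change vector analysis, the change-magnitude and change-direction extraction mentioned above, can be sketched for a pixel with two spectral bands; the fuzzy membership assignment that distinguishes FCVA is not reproduced here, and the band values are invented.

```python
import math

def change_vector(pix_t1, pix_t2):
    """Change-vector analysis for one pixel observed in two spectral bands.

    Returns the change magnitude and the change direction (degrees),
    the two quantities FCVA thresholds and fuzzifies to label change classes.
    """
    dx = pix_t2[0] - pix_t1[0]   # change in band 1
    dy = pix_t2[1] - pix_t1[1]   # change in band 2
    magnitude = math.hypot(dx, dy)
    direction = math.degrees(math.atan2(dy, dx)) % 360  # 0..360 deg
    return magnitude, direction

# illustrative reflectance values at two dates
mag, ang = change_vector((0.20, 0.40), (0.50, 0.80))
```

A large magnitude flags that the pixel changed; the direction angle hints at *what kind* of change occurred (e.g. snow gain vs. snow loss).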

  15. Towards a Social Networks Model for Online Learning & Performance

    ERIC Educational Resources Information Center

    Chung, Kon Shing Kenneth; Paredes, Walter Christian

    2015-01-01

    In this study, we develop a theoretical model to investigate the association between social network properties, "content richness" (CR) in academic learning discourse, and performance. CR is the extent to which one contributes content that is meaningful, insightful and constructive to aid learning and by social network properties we…

  16. Wireless imaging sensor network design and performance analysis

    NASA Astrophysics Data System (ADS)

    Sundaram, Ramakrishnan

    2016-05-01

    This paper discusses (a) the design and implementation of the integrated radio tomographic imaging (RTI) interface for radio signal strength (RSS) data obtained from a wireless imaging sensor network (WISN), (b) the use of model-driven methods to determine the extent of regularization to be applied to reconstruct images from the RSS data, and (c) a preliminary study of the performance of the network.
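A minimal sketch of the regularization step in (b), assuming a linear RSS model y = Ax and Tikhonov (ridge) regularization; the paper's model-driven choice of the regularization weight is not reproduced, and the link/voxel matrix below is a toy example.

```python
import numpy as np

def rti_reconstruct(A, y, lam):
    """Regularized radio tomographic image reconstruction (a sketch).

    A maps voxel attenuation to the RSS loss observed on each link;
    Tikhonov (ridge) regularization stabilizes the ill-posed inverse:
        x = (A^T A + lam * I)^-1 A^T y
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# toy example: 3 links crossing 2 voxels
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, -1.0])   # "true" voxel attenuations
y = A @ x_true                   # noiseless link measurements
x_hat = rti_reconstruct(A, y, 1e-6)
```

With noisy RSS data, increasing `lam` trades fidelity for stability, which is exactly the knob the model-driven methods above are meant to set.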

  18. Performance of velocity vector estimation using an improved dynamic beamforming setup

    NASA Astrophysics Data System (ADS)

    Munk, Peter; Jensen, Joergen A.

    2001-05-01

    Estimation of velocity vectors using transverse spatial modulation has previously been presented. Initially, the velocity estimation was improved using an approximated dynamic beamformer setup instead of a static one, combined with a new velocity estimation scheme. A new beamformer setup for dynamic control of the acoustic field, based on the Pulsed Plane Wave Decomposition (PPWD), is presented here. The PPWD gives an unambiguous relation between a given acoustic field and the time functions needed on an array transducer for transmission. Applying this method to the receive beamformation results in a beamformer setup with a different filter for each channel at each estimation depth. The PPWD method is illustrated by analytical expressions for the decomposed acoustic field, and these results are used for simulation. Results of velocity estimates using the new setup are given on the basis of simulated and experimental data. The simulation setup approximates the situation encountered when scanning the carotid artery with a linear array. Measurement of flow perpendicular to the emission direction is possible using the approach of transverse spatial modulation; this geometry is common in carotid scans, where present ultrasound scanners handle it with an angled Doppler setup. The modulation period of 2 mm is controlled over a range of 20-40 mm, which covers the typical depth of the carotid artery. A 6 MHz array on a 128-channel system is simulated. The flow setup in the simulation is based on a vessel with a parabolic flow profile at 60- and 90-degree flow angles. The experimental results are based on the backscattered signal from a sponge mounted in a stepping device. The bias and standard deviation of the velocity estimate are calculated for four different flow angles (50, 60, 75 and 90 degrees). The velocity vector is calculated using the improved 2D estimation approach at a range of depths.

  19. IBM SP high-performance networking with a GRF.

    SciTech Connect

    Navarro, J.P.

    1999-05-27

    Increasing use of highly distributed applications, demand for faster data exchange, and highly parallel applications can push the limits of conventional external networking for IBM SP sites. In technical computing applications we have observed a growing use of a pipeline of hosts and networks collaborating to collect, process, and visualize large amounts of real-time data. The GRF, a high-performance IP switch from Ascend and IBM, is the first backbone network switch to offer a media card that can directly connect to an SP Switch. This enables switch-attached hosts in an SP complex to communicate at near SP Switch speeds with other GRF-attached hosts and networks.

  20. Influence of station topography and Moho depth on the mislocation vectors for the Kyrgyz Broadband Seismic Network (KNET)

    NASA Astrophysics Data System (ADS)

    Jacobeit, Erdmann; Thomas, Christine; Vernon, Frank

    2013-05-01

    Deviations of slowness and backazimuth from theoretically calculated values, the so-called mislocation vectors, are measured for the Kyrgyz Broadband Seismic Network (KNET) in the Tien Shan region. 870 events have been analysed for arrivals of P and PKP waves from all azimuths. The deviations of slowness and backazimuth show a strong trend, with slowness deviations of up to 1 s deg⁻¹ for waves arriving from the north and south and backazimuth deviations of, in some cases, more than 10° for waves arriving from the east and west. Calculating the traveltime deviations of the stations for the topography of the Tien Shan region and Moho depth values appropriate for this area shows that most slowness and backazimuth deviations can be reduced to very small values. The remaining mislocation vectors show no strong trends and are on average smaller than 0.2 s deg⁻¹ for slowness and 2° for backazimuth, which is within the error bars of these measurements. Results from array methods that rely on knowledge of the backazimuth show much improved resolution after correction of the mislocation vectors, demonstrating the importance of knowing, and correcting for, structures directly beneath arrays.
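The slowness and backazimuth values whose deviations form the mislocation vectors can be estimated by a least-squares plane-wave fit to array arrival times, t_i = t0 + p · r_i. A sketch on synthetic data (slowness in s/km rather than the s/deg quoted above; the station coordinates and the incoming wave are invented):

```python
import math
import numpy as np

def slowness_backazimuth(coords_km, times_s):
    """Least-squares plane-wave fit across an array: t_i = t0 + p . r_i.

    Returns horizontal slowness |p| (s/km) and backazimuth (degrees),
    the two quantities whose deviations define the mislocation vectors.
    Coordinates are (east, north) in km.
    """
    A = np.column_stack([np.ones(len(times_s)), coords_km])
    t0, px, py = np.linalg.lstsq(A, np.asarray(times_s), rcond=None)[0]
    slow = math.hypot(px, py)
    # the wave propagates along p; it arrives FROM the opposite direction
    baz = math.degrees(math.atan2(-px, -py)) % 360
    return slow, baz

# synthetic wave: backazimuth 30 deg, slowness 0.05 s/km, t0 = 0
baz_true, s_true = 30.0, 0.05
az_prop = math.radians(baz_true + 180)           # propagation azimuth
p = np.array([s_true * math.sin(az_prop), s_true * math.cos(az_prop)])
stations = np.array([[0, 0], [10, 0], [0, 10], [7, 7]], dtype=float)
times = stations @ p
slow, baz = slowness_backazimuth(stations, times)
```

Station elevation and Moho-depth corrections, as in the study above, would be applied to `times` before the fit.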

  1. High-performance, bare silver nanowire network transparent heaters.

    PubMed

    Ergun, Orcun; Coskun, Sahin; Yusufoglu, Yusuf; Unalan, Husnu Emrah

    2016-11-04

    Silver nanowire (Ag NW) networks are one of the most promising candidates for the replacement of indium tin oxide (ITO) thin films in many different applications. Recently, Ag-NW-based transparent heaters (THs) showed excellent heating performance. In order to overcome the instability issues of Ag NW networks, researchers have offered different hybrid structures. However, these approaches not only require extra processing, but also decrease the optical performance of Ag NW networks. It is therefore important to investigate and determine the thermal performance limits of bare-Ag-NW-network-based THs. Herein, we report on the effect of NW density, contact geometry, applied bias, flexing and incremental bias application on the TH performance of Ag NW networks. Ag-NW-network-based THs with a sheet resistance and percentage transmittance of 4.3 Ω sq⁻¹ and 83.3%, respectively, and a NW density of 1.6 NW μm⁻² reached a maximum temperature of 275 °C under incremental bias application (5 V maximum). With this performance, our results provide a different perspective on bare-Ag-NW-network-based transparent heaters.

  2. High-performance, bare silver nanowire network transparent heaters

    NASA Astrophysics Data System (ADS)

    Ergun, Orcun; Coskun, Sahin; Yusufoglu, Yusuf; Emrah Unalan, Husnu

    2016-11-01

    Silver nanowire (Ag NW) networks are one of the most promising candidates for the replacement of indium tin oxide (ITO) thin films in many different applications. Recently, Ag-NW-based transparent heaters (THs) showed excellent heating performance. In order to overcome the instability issues of Ag NW networks, researchers have offered different hybrid structures. However, these approaches not only require extra processing, but also decrease the optical performance of Ag NW networks. It is therefore important to investigate and determine the thermal performance limits of bare-Ag-NW-network-based THs. Herein, we report on the effect of NW density, contact geometry, applied bias, flexing and incremental bias application on the TH performance of Ag NW networks. Ag-NW-network-based THs with a sheet resistance and percentage transmittance of 4.3 Ω sq⁻¹ and 83.3%, respectively, and a NW density of 1.6 NW μm⁻² reached a maximum temperature of 275 °C under incremental bias application (5 V maximum). With this performance, our results provide a different perspective on bare-Ag-NW-network-based transparent heaters.
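A back-of-the-envelope sketch of the Joule power such a film dissipates, using the reported 4.3 Ω sq⁻¹ sheet resistance and 5 V maximum bias; the 1 cm × 1 cm sample geometry is an assumption, and the steady-state temperature additionally depends on convective and radiative losses that are not modeled here.

```python
def heater_power(sheet_resistance_ohm_sq, length_m, width_m, bias_v):
    """Joule power dissipated by a rectangular thin-film heater.

    R = Rs * L / W  (Rs in ohm per square; L along the current path),
    P = V^2 / R.
    """
    resistance = sheet_resistance_ohm_sq * length_m / width_m
    return bias_v ** 2 / resistance

# reported film: 4.3 ohm/sq; assumed 1 cm x 1 cm sample (one square) at 5 V
p = heater_power(4.3, 0.01, 0.01, 5.0)   # -> V^2 / 4.3 watts
```

The quadratic dependence on bias explains why the incremental-bias protocol above reaches its peak temperature only at the 5 V maximum.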

  3. Performance analysis of local area networks

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.; Hall, Mary Grace

    1990-01-01

    A simulation of the TCP/IP protocol running on a CSMA/CD data link layer is described. The simulation was implemented using Simula, an object-oriented discrete-event language. It allows the user to set the number of stations at run time, as well as some station parameters: the interrupt time and the DMA transfer rate for each station. In addition, the user may configure the network at run time with stations of differing characteristics. Two station types are available, and the parameters of both types are read from input files at run time. The parameters include the DMA transfer rate, interrupt time, data rate, average message size, maximum frame size and the average interarrival time of messages per station. The information collected for the network is the throughput and the mean delay per packet. For each station, the number of messages attempted as well as the number of messages successfully transmitted is collected, in addition to the throughput and mean packet delay per station.

  4. The performance analysis of linux networking - packet receiving

    SciTech Connect

    Wu, Wenji; Crawford, Matt; Bowden, Mark; /Fermilab

    2006-11-01

    The computing models for High-Energy Physics experiments are becoming ever more globally distributed and grid-based, both for technical reasons (e.g., to place computational and data resources near each other and the demand) and for strategic reasons (e.g., to leverage equipment investments). To support such computing models, the network and end systems, computing and storage, face unprecedented challenges. One of the biggest challenges is to transfer scientific data sets--now in the multi-petabyte (10^15 bytes) range and expected to grow to exabytes within a decade--reliably and efficiently among facilities and computation centers scattered around the world. Both the network and end systems should be able to provide the capabilities to support high bandwidth, sustained, end-to-end data transmission. Recent trends in technology are showing that although the raw transmission speeds used in networks are increasing rapidly, the rate of advancement of microprocessor technology has slowed down. Therefore, network protocol-processing overheads have risen sharply in comparison with the time spent in packet transmission, resulting in degraded throughput for networked applications. More and more, it is the network end system, instead of the network, that is responsible for degraded performance of network applications. In this paper, the Linux system's packet receive process is studied from NIC to application. We develop a mathematical model to characterize the Linux packet receiving process. Key factors that affect Linux system network performance are analyzed.

  5. Leveraging Structure to Improve Classification Performance in Sparsely Labeled Networks

    SciTech Connect

    Gallagher, B; Eliassi-Rad, T

    2007-10-22

    We address the problem of classification in a partially labeled network (a.k.a. within-network classification), with an emphasis on tasks in which we have very few labeled instances to start with. Recent work has demonstrated the utility of collective classification (i.e., simultaneous inferences over class labels of related instances) in this general problem setting. However, the performance of collective classification algorithms can be adversely affected by the sparseness of labels in real-world networks. We show that on several real-world data sets, collective classification appears to offer little advantage in general and hurts performance in the worst cases. In this paper, we explore a complementary approach to within-network classification that takes advantage of network structure. Our approach is motivated by the observation that real-world networks often provide a great deal more structural information than attribute information (e.g., class labels). Through experiments on supervised and semi-supervised classifiers of network data, we demonstrate that a small number of structural features can lead to consistent and sometimes dramatic improvements in classification performance. We also examine the relative utility of individual structural features and show that, in many cases, it is a combination of both local and global network structure that is most informative.
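Two of the cheap local structural features such an approach can exploit, degree and local clustering coefficient, can be computed directly from an adjacency structure; the paper's actual feature set also includes global features and may differ. A pure-Python sketch:

```python
def structural_features(adj):
    """Per-node structural features from an adjacency dict {node: set(neighbors)}.

    Computes degree and the local clustering coefficient (fraction of
    neighbor pairs that are themselves connected).  Node labels must be
    orderable (ints here) so each neighbor pair is counted once.
    """
    feats = {}
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            cc = 0.0
        else:
            links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
            cc = 2 * links / (k * (k - 1))
        feats[v] = {"degree": k, "clustering": cc}
    return feats

# toy graph: a triangle (1-2-3) plus a pendant node 4 attached to 3
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
f = structural_features(g)
```

Feature vectors like these can then be fed to any supervised classifier alongside whatever sparse label information is available.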

  6. Storage Area Networks and The High Performance Storage System

    SciTech Connect

    Hulen, H; Graf, O; Fitzgerald, K; Watson, R W

    2002-03-04

    The High Performance Storage System (HPSS) is a mature Hierarchical Storage Management (HSM) system that was developed around a network-centered architecture, with client access to storage provided through third-party controls. Because of this design, HPSS is able to leverage today's Storage Area Network (SAN) infrastructures to provide cost effective, large-scale storage systems and high performance global file access for clients. Key attributes of SAN file systems are found in HPSS today, and more complete SAN file system capabilities are being added. This paper traces the HPSS storage network architecture from the original implementation using HIPPI and IPI-3 technology, through today's local area network (LAN) capabilities, and to SAN file system capabilities now in development. At each stage, HPSS capabilities are compared with capabilities generally accepted today as characteristic of storage area networks and SAN file systems.

  7. Semi-supervised multimodal relevance vector regression improves cognitive performance estimation from imaging and biological biomarkers.

    PubMed

    Cheng, Bo; Zhang, Daoqiang; Chen, Songcan; Kaufer, Daniel I; Shen, Dinggang

    2013-07-01

    Accurate estimation of cognitive scores for patients can help track the progress of neurological diseases. In this paper, we present a novel semi-supervised multimodal relevance vector regression (SM-RVR) method for predicting clinical scores of neurological diseases from multimodal imaging and biological biomarkers, to help evaluate pathological stage and predict progression of diseases, e.g., Alzheimer's disease (AD). Unlike most existing methods, we predict clinical scores from multimodal (imaging and biological) biomarkers, including MRI, FDG-PET, and CSF. Considering that the clinical scores of mild cognitive impairment (MCI) subjects are often less stable than those of AD and normal control (NC) subjects due to the heterogeneity of MCI, we use only the multimodal data of MCI subjects, but no corresponding clinical scores, to train a semi-supervised model for enhancing the estimation of clinical scores for AD and NC subjects. We also develop a new strategy for selecting the most informative MCI subjects. We evaluate the performance of our approach on 202 subjects with all three modalities of data (MRI, FDG-PET and CSF) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The experimental results show that our SM-RVR method achieves a root-mean-square error (RMSE) of 1.91 and a correlation coefficient (CORR) of 0.80 for estimating the MMSE scores, and an RMSE of 4.45 and a CORR of 0.78 for estimating the ADAS-Cog scores, demonstrating very promising performance in AD studies.
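The two evaluation metrics reported above, root-mean-square error (RMSE) and Pearson correlation (CORR), can be computed as follows; the score vectors are invented for illustration.

```python
import math

def rmse_corr(pred, truth):
    """Root-mean-square error and Pearson correlation between predicted
    and true clinical scores, the RMSE/CORR pair reported in the paper."""
    n = len(pred)
    rmse = math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / n)
    mp, mt = sum(pred) / n, sum(truth) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, truth))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in truth))
    return rmse, cov / (sp * st)

# invented scores: predictions offset from truth by a constant 0.5
r, c = rmse_corr([1.0, 2.0, 3.0, 4.0], [1.5, 2.5, 3.5, 4.5])
```

Note the two metrics are complementary: a constant offset inflates RMSE while leaving the correlation at 1.0, so reporting both, as above, guards against misleading conclusions.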

  8. Cross Deployment Networking and Systematic Performance Analysis of Underwater Wireless Sensor Networks

    PubMed Central

    Wei, Zhengxian; Song, Min; Yin, Guisheng; Wang, Hongbin; Ma, Xuefei

    2017-01-01

    Underwater wireless sensor networks (UWSNs) have become a new hot research area. However, due to the work dynamics and harsh ocean environment, how to obtain a UWSN with the best systematic performance while deploying as few sensor nodes as possible and setting up self-adaptive networking is an urgent problem that needs to be solved. Consequently, sensor deployment, networking, and performance calculation of UWSNs are challenging issues, hence the study in this paper centers on this topic and three relevant methods and models are put forward. Firstly, the normal body-centered cubic lattice to cross body-centered cubic lattice (CBCL) has been improved, and a deployment process and topology generation method are built. Then most importantly, a cross deployment networking method (CDNM) for UWSNs suitable for the underwater environment is proposed. Furthermore, a systematic quar-performance calculation model (SQPCM) is proposed from an integrated perspective, in which the systematic performance of a UWSN includes coverage, connectivity, durability and rapid-reactivity. In addition, measurement models are established based on the relationship between systematic performance and influencing parameters. Finally, the influencing parameters are divided into three types, namely, constraint parameters, device performance and networking parameters. Based on these, a networking parameters adjustment method (NPAM) for optimized systematic performance of UWSNs has been presented. The simulation results demonstrate that the approach proposed in this paper is feasible and efficient in networking and performance calculation of UWSNs. PMID:28704959

  9. Cross Deployment Networking and Systematic Performance Analysis of Underwater Wireless Sensor Networks.

    PubMed

    Wei, Zhengxian; Song, Min; Yin, Guisheng; Wang, Hongbin; Ma, Xuefei; Song, Houbing

    2017-07-12

    Underwater wireless sensor networks (UWSNs) have become a new hot research area. However, due to the work dynamics and harsh ocean environment, how to obtain a UWSN with the best systematic performance while deploying as few sensor nodes as possible and setting up self-adaptive networking is an urgent problem that needs to be solved. Consequently, sensor deployment, networking, and performance calculation of UWSNs are challenging issues, hence the study in this paper centers on this topic and three relevant methods and models are put forward. Firstly, the normal body-centered cubic lattice to cross body-centered cubic lattice (CBCL) has been improved, and a deployment process and topology generation method are built. Then most importantly, a cross deployment networking method (CDNM) for UWSNs suitable for the underwater environment is proposed. Furthermore, a systematic quar-performance calculation model (SQPCM) is proposed from an integrated perspective, in which the systematic performance of a UWSN includes coverage, connectivity, durability and rapid-reactivity. In addition, measurement models are established based on the relationship between systematic performance and influencing parameters. Finally, the influencing parameters are divided into three types, namely, constraint parameters, device performance and networking parameters. Based on these, a networking parameters adjustment method (NPAM) for optimized systematic performance of UWSNs has been presented. The simulation results demonstrate that the approach proposed in this paper is feasible and efficient in networking and performance calculation of UWSNs.

  10. Optical interconnection networks for high-performance computing systems.

    PubMed

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  11. Differentiation of several interstitial lung disease patterns in HRCT images using support vector machine: role of databases on performance

    NASA Astrophysics Data System (ADS)

    Kale, Mandar; Mukhopadhyay, Sudipta; Dash, Jatindra K.; Garg, Mandeep; Khandelwal, Niranjan

    2016-03-01

    Interstitial lung disease (ILD) is a complicated group of pulmonary disorders. High-resolution computed tomography (HRCT) is considered the best imaging technique for the analysis of different pulmonary disorders. HRCT findings can be categorized into several patterns, viz. consolidation, emphysema, ground-glass opacity, nodular, normal, etc., based on their texture-like appearance. Clinicians often find it difficult to diagnose these patterns because of their complex nature. In such a scenario, a computer-aided diagnosis system could help clinicians identify the patterns. Several approaches have been proposed for the classification of ILD patterns, including the computation of textural features and the training/testing of classifiers such as artificial neural networks (ANN) and support vector machines (SVM). In this paper, wavelet features are calculated from two different ILD databases, the publicly available MedGIFT ILD database and a private ILD database, followed by performance evaluation of ANN and SVM classifiers in terms of average accuracy. It is found that the average classification accuracy of SVM is greater than that of ANN when trained and tested on the same database. The investigation was continued to test the variation in classifier accuracy when training and testing are performed with alternate databases, and when the classifiers are trained and tested with a database formed by merging samples of the same class from the two individual databases. The average classification accuracy drops when two independent databases are used for training and testing, respectively. There is significant improvement in average accuracy when the classifiers are trained and tested with the merged database. This indicates the dependency of classification accuracy on the training data. It is observed that SVM outperforms ANN when the same database is used for training and testing.
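Wavelet texture features of this kind can be illustrated with a one-level orthonormal 2-D Haar transform and its sub-band energies; the work above may use different wavelets and more decomposition levels, and the input image here is synthetic noise.

```python
import numpy as np

def haar_subband_energies(img):
    """One-level orthonormal 2-D Haar transform; returns LL/LH/HL/HH energies.

    Sub-band energies like these are typical wavelet texture features fed to
    SVM/ANN classifiers.  Requires even image dimensions.
    """
    a = img[0::2, 0::2]; b = img[0::2, 1::2]   # the four samples of
    c = img[1::2, 0::2]; d = img[1::2, 1::2]   # each 2x2 block
    ll = (a + b + c + d) / 2   # approximation
    lh = (a - b + c - d) / 2   # horizontal detail
    hl = (a + b - c - d) / 2   # vertical detail
    hh = (a - b - c + d) / 2   # diagonal detail
    return {name: float((band ** 2).sum())
            for name, band in {"LL": ll, "LH": lh, "HL": hl, "HH": hh}.items()}

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))   # synthetic "texture patch"
e = haar_subband_energies(x)
```

Because the transform is orthonormal, the four sub-band energies sum to the total energy of the input, which is a useful sanity check when building a feature extractor.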

  12. Challenges for high-performance networking for exascale computing.

    SciTech Connect

    Barrett, Brian W.; Hemmert, K. Scott; Underwood, Keith Douglas; Brightwell, Ronald Brian

    2010-05-01

    Achieving the next three orders of magnitude performance increase to move from petascale to exascale computing will require a significant advancements in several fundamental areas. Recent studies have outlined many of the challenges in hardware and software that will be needed. In this paper, we examine these challenges with respect to high-performance networking. We describe the repercussions of anticipated changes to computing and networking hardware and discuss the impact that alternative parallel programming models will have on the network software stack. We also present some ideas on possible approaches that address some of these challenges.

  13. Success factors in hospital network performance: evidence from Korea.

    PubMed

    Kim, Kwang-Jum; Burns, Lawton R

    2007-08-01

    Collaborative networks have become a common organizational strategy to deal with uncertain and dynamic environments. Like their counterparts in the USA, Korean hospitals are establishing cooperative relationships with one another, with varying performance results. This paper analyses some of the sources of variation in hospital network performance and identifies some of the possible success factors. The study finds that the quality of cooperation and information sharing between network partners are critical. The paper concludes with a discussion of the implications for researchers and practitioners.

  14. Causal interactions in attention networks predict behavioral performance.

    PubMed

    Wen, Xiaotong; Yao, Li; Liu, Yijun; Ding, Mingzhou

    2012-01-25

    Lesion and functional brain imaging studies have suggested that there are two anatomically nonoverlapping attention networks. The dorsal frontoparietal network controls goal-oriented top-down deployment of attention; the ventral frontoparietal network mediates stimulus-driven bottom-up attentional reorienting. The interaction between the two networks and its functional significance has been considered in the past but no direct test has been carried out. We addressed this problem by recording fMRI data from human subjects performing a trial-by-trial cued visual spatial attention task in which the subject had to respond to target stimuli in the attended hemifield and ignore all stimuli in the unattended hemifield. Correlating Granger causal influences between regions of interest with behavioral performance, we report two main results. First, stronger Granger causal influences from the dorsal attention network (DAN) to the ventral attention network (VAN), i.e., DAN→VAN, are generally associated with enhanced performance, with right intraparietal sulcus (IPS), left IPS, and right frontal eye field being the main sources of behavior-enhancing influences. Second, stronger Granger causal influences from VAN to DAN, i.e., VAN→DAN, are generally associated with degraded performance, with right temporal-parietal junction being the main source of behavior-degrading influences. These results support the hypothesis that signals from DAN to VAN suppress and filter out unimportant distracter information, whereas signals from VAN to DAN break the attentional set maintained by the dorsal attention network to enable attentional reorienting.
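The Granger-causal influence underlying this analysis can be sketched in its simplest bivariate, single-lag form: the log ratio of residual variances of an autoregressive model of the target fitted without vs. with the lagged driver. The synthetic pair below (x drives y, not vice versa) is far simpler than fMRI time series, and the lag order and estimator used in the study may differ.

```python
import numpy as np

def granger_strength(x, y):
    """Granger influence x -> y: log ratio of residual variances of an AR(1)
    model of y fitted without vs. with the lagged x term."""
    yt = y[1:]
    restricted = np.column_stack([np.ones(len(yt)), y[:-1]])
    full = np.column_stack([restricted, x[:-1]])

    def resid_var(design):
        beta = np.linalg.lstsq(design, yt, rcond=None)[0]
        r = yt - design @ beta
        return float(r @ r) / len(yt)

    return float(np.log(resid_var(restricted) / resid_var(full)))

# synthetic pair in which x drives y but not vice versa
rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()
gxy = granger_strength(x, y)   # expected clearly positive
gyx = granger_strength(y, x)   # expected near zero
```

Correlating such directed strengths with trial-wise behavioral measures is the essence of the analysis described above.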

  15. Arrhythmia Identification with Two-Lead Electrocardiograms Using Artificial Neural Networks and Support Vector Machines for a Portable ECG Monitor System

    PubMed Central

    Liu, Shing-Hong; Cheng, Da-Chuan; Lin, Chih-Ming

    2013-01-01

    An automatic configuration is proposed that can detect the position of R-waves and classify the normal sinus rhythm (NSR) and four other arrhythmia types from continuous ECG signals obtained from the MIT-BIH arrhythmia database. In this configuration, a support vector machine (SVM) is used to detect and mark the ECG heartbeats from the raw and differential signals of one ECG lead. An algorithm based on the extracted markers then segments the Lead II and V1 waveforms of the ECG, which serve as the pattern-classification features. A self-constructing neural fuzzy inference network (SoNFIN) classifies the NSR and four arrhythmia types: premature ventricular contraction (PVC), premature atrial contraction (PAC), left bundle branch block (LBBB), and right bundle branch block (RBBB). In a real scenario, the classification accuracy achieved is 96.4%. This performance is suitable for a portable ECG monitor system for home-care purposes. PMID:23303379
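
The SVM component can be illustrated with a toy: the sketch below trains a minimal linear SVM by subgradient descent on the regularized hinge loss and separates two synthetic "beat feature" clusters. The features, labels, and hyperparameters are invented for illustration; the paper's system works on MIT-BIH waveforms and uses SoNFIN for the multi-class step.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimal linear SVM: subgradient descent on the regularized hinge
    loss (1/n) * sum(max(0, 1 - y_i(w.x_i + b))) + (lam/2) * ||w||^2.
    Labels must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1                # margin violators
        grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
        grad_b = -y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Two synthetic clusters standing in for "normal" vs. "ectopic" beat features
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(2.0, 0.3, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w, b = train_linear_svm(X, y)
acc = float((np.sign(X @ w + b) == y).mean())
print(acc)  # close to 1.0 on this linearly separable toy set
```

A production system would of course use a library SVM with kernel support; the sketch only shows the maximum-margin idea behind the beat detector.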

  16. Performance Analysis and Improvement of WPAN MAC for Home Networks

    PubMed Central

    Mehta, Saurabh; Kwak, Kyung Sup

    2010-01-01

    The wireless personal area network (WPAN) is an emerging wireless technology for future short-range indoor and outdoor communication applications. The IEEE 802.15.3 medium access control (MAC) is proposed to coordinate access to the wireless medium among competing devices, especially for short-range, high-data-rate applications in home networks. In this paper we use analytical modeling to study the performance of the WPAN (IEEE 802.15.3) MAC in terms of throughput, efficient bandwidth utilization, and delay, with various ACK policies, under error-prone channel conditions. This allows us to introduce a K-Dly-ACK-AGG policy, a payload-size adjustment mechanism, and an Improved Backoff algorithm to improve the performance of the WPAN MAC. Performance evaluation results demonstrate the impact of our improvements on network capacity. Moreover, these results can be very useful to WPAN application designers and protocol architects for implementing WPAN easily and correctly for home networking. PMID:22319274

  17. Performance analysis and improvement of WPAN MAC for home networks.

    PubMed

    Mehta, Saurabh; Kwak, Kyung Sup

    2010-01-01

    The wireless personal area network (WPAN) is an emerging wireless technology for future short-range indoor and outdoor communication applications. The IEEE 802.15.3 medium access control (MAC) is proposed to coordinate access to the wireless medium among competing devices, especially for short-range, high-data-rate applications in home networks. In this paper we use analytical modeling to study the performance of the WPAN (IEEE 802.15.3) MAC in terms of throughput, efficient bandwidth utilization, and delay, with various ACK policies, under error-prone channel conditions. This allows us to introduce a K-Dly-ACK-AGG policy, a payload-size adjustment mechanism, and an Improved Backoff algorithm to improve the performance of the WPAN MAC. Performance evaluation results demonstrate the impact of our improvements on network capacity. Moreover, these results can be very useful to WPAN application designers and protocol architects for implementing WPAN easily and correctly for home networking.
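
The throughput benefit of a delayed-ACK policy can be sketched with a back-of-envelope cycle-time model: amortizing one ACK (and its inter-frame spacing) over k data frames shrinks the per-frame overhead. All timing and payload values below are illustrative assumptions, not figures from the IEEE 802.15.3 standard or the paper.

```python
def throughput_mbps(payload_bits, t_payload_us, t_sifs_us, t_ack_us, k=1):
    """Saturation throughput of a simple data/ACK cycle in which one ACK
    is returned after every k data frames (k = 1 resembles Imm-ACK,
    k > 1 a delayed/aggregated ACK). Returns bits per microsecond = Mbps.
    All timing parameters are illustrative."""
    cycle_us = k * (t_payload_us + t_sifs_us) + t_ack_us + t_sifs_us
    return k * payload_bits / cycle_us

imm = throughput_mbps(2048 * 8, 149.0, 10.0, 10.9, k=1)
dly = throughput_mbps(2048 * 8, 149.0, 10.0, 10.9, k=5)
print(round(imm, 1), round(dly, 1))  # the delayed-ACK cycle is higher
```

The same model makes the payload-size trade-off visible: larger payloads also dilute the fixed per-cycle overhead, which is the intuition behind a payload-size adjustment mechanism.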

  18. Performance Analysis of a NASA Integrated Network Array

    NASA Technical Reports Server (NTRS)

    Nessel, James A.

    2012-01-01

    The Space Communications and Navigation (SCaN) Program is planning to integrate its individual networks into a unified network which will function as a single entity to provide services to user missions. This integrated network architecture is expected to provide SCaN customers with the capabilities to seamlessly use any of the available SCaN assets to support their missions to efficiently meet the collective needs of Agency missions. One potentially optimal application of these assets, based on this envisioned architecture, is that of arraying across existing networks to significantly enhance data rates and/or link availabilities. As such, this document provides an analysis of the transmit and receive performance of a proposed SCaN inter-network antenna array. From the study, it is determined that a fully integrated internetwork array does not provide any significant advantage over an intra-network array, one in which the assets of an individual network are arrayed for enhanced performance. Therefore, it is the recommendation of this study that NASA proceed with an arraying concept, with a fundamental focus on network-centric arraying.

  19. Development of task network models of human performance in microgravity

    NASA Technical Reports Server (NTRS)

    Diaz, Manuel F.; Adam, Susan

    1992-01-01

    This paper discusses the utility of task-network modeling for quantifying human performance variability in microgravity. The data are gathered for: (1) improving current methodologies for assessing human performance and workload in the operational space environment; (2) developing tools for assessing alternative system designs; and (3) developing an integrated set of methodologies for the evaluation of performance degradation during extended duration spaceflight. The evaluation entailed an analysis of the Remote Manipulator System payload-grapple task performed on many shuttle missions. Task-network modeling can be used as a tool for assessing and enhancing human performance in man-machine systems, particularly for modeling long-duration manned spaceflight. Task-network modeling can be directed toward improving system efficiency by increasing the understanding of basic capabilities of the human component in the system and the factors that influence these capabilities.

  1. High-Performance Satellite/Terrestrial-Network Gateway

    NASA Technical Reports Server (NTRS)

    Beering, David R.

    2005-01-01

    A gateway has been developed to enable digital communication between (1) the high-rate receiving equipment at NASA's White Sands complex and (2) a standard terrestrial digital communication network at data rates up to 622 Mb/s. The design of this gateway can also be adapted for use in commercial Earth/satellite and digital communication networks, and in terrestrial digital communication networks that include wireless subnetworks. Gateway as used here signifies an electronic circuit that serves as an interface between two electronic communication networks so that a computer (or other terminal) on one network can communicate with a terminal on the other network. The connection between this gateway and the high-rate receiving equipment is made via a synchronous serial data interface at the emitter-coupled-logic (ECL) level. The connection between this gateway and a standard asynchronous transfer mode (ATM) terrestrial communication network is made via a standard user network interface with a synchronous optical network (SONET) connector. The gateway contains circuitry that performs the conversion between the ECL and SONET interfaces. The data rate of the SONET interface can be either 155.52 or 622.08 Mb/s. The gateway derives its clock signal from a satellite modem in the high-rate receiving equipment and, hence, is agile in the sense that it adapts to the data rate of the serial interface.

  2. Urban traffic-network performance: flow theory and simulation experiments

    SciTech Connect

    Williams, J.C.

    1986-01-01

    Performance models for urban street networks were developed to describe the response of a traffic network to given travel-demand levels. The three basic traffic flow variables, speed, flow, and concentration, are defined at the network level, and three model systems are proposed. Each system consists of a series of interrelated, consistent functions between the three basic traffic-flow variables as well as the fraction of stopped vehicles in the network. These models are subsequently compared with the results of microscopic simulation of a small test network. The sensitivity of one of the model systems to a variety of network features was also explored. Three categories of features were considered, with the specific features tested listed in parentheses: network topology (block length and street width), traffic control (traffic signal coordination), and traffic characteristics (level of inter-vehicular interaction). Finally, a fundamental issue concerning the estimation of two network-level parameters (from a nonlinear relation in the two-fluid theory) was examined. The principal concern was that of comparability of these parameters when estimated with information from a single vehicle (or small group of vehicles), as done in conjunction with previous field studies, and when estimated with network-level information (i.e., all the vehicles), as is possible with simulation.
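
The three network-level variables are tied together by the fundamental relation q = k·v (flow = concentration × speed). As an illustration, the sketch below uses the classic Greenshields linear speed-concentration model, a standard textbook relation rather than the network-level model systems developed in this work; the parameter values are invented.

```python
def greenshields_flow(k, v_free=50.0, k_jam=120.0):
    """Flow q = k * v under the Greenshields linear speed-density relation
    v = v_free * (1 - k / k_jam). Units: k in veh/km, v in km/h,
    q in veh/h; free-flow speed and jam density are illustrative."""
    return k * v_free * (1.0 - k / k_jam)

# Flow rises with concentration up to half the jam density, then falls
print([greenshields_flow(k) for k in (30, 60, 90)])  # [1125.0, 1500.0, 1125.0]
```

The parabolic flow curve captures why a network can carry the same flow at two very different speed/concentration states, the kind of behavior the simulated test network is used to probe.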

  3. Body Area Networks performance analysis using UWB.

    PubMed

    Fatehy, Mohammed; Kohno, Ryuji

    2013-01-01

    The successful realization of a Wireless Body Area Network (WBAN) using Ultra Wideband (UWB) technology supports different medical and consumer electronics (CE) applications, but it requires an innovative solution to meet the differing requirements of these applications. Previously, we proposed using an adaptive processing gain (PG) to fulfill the different QoS requirements of WBAN applications. In this paper, the interference between two different BANs in a UWB-based system is analyzed in terms of the acceptable ratio of overlap between the BANs' PGs that still provides the required QoS for each BAN. The first BAN serves a healthcare device (e.g., EEG or ECG) and uses a relatively long spreading sequence; the second is customized for an entertainment application (e.g., a wireless headset or wireless gamepad) and is assigned a shorter spreading code. Considering bandwidth utilization and the difference in the employed spreading sequences, the acceptable overlap ratio between these BANs should fall between 0.05 and 0.5 in order to optimize the spreading-sequence usage while satisfying the required QoS for these applications.
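
The role of the spreading sequence can be quantified through the processing gain, PG = 10·log10(spreading factor) in dB, so a longer sequence (as for the healthcare BAN) buys more interference margin than a shorter one (as for the entertainment BAN). The spreading factors below are illustrative assumptions, not values taken from the paper.

```python
import math

def processing_gain_db(spreading_factor):
    """Processing gain of a direct-sequence spread-spectrum link, in dB."""
    return 10 * math.log10(spreading_factor)

# Longer spreading sequence (healthcare BAN) vs. shorter (entertainment BAN);
# the factors 64 and 8 are illustrative assumptions.
print(round(processing_gain_db(64), 1), round(processing_gain_db(8), 1))  # 18.1 9.0
```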

  4. Performance characteristics of a variable-area vane nozzle for vectoring an ASTOVL exhaust jet up to 45 deg

    NASA Technical Reports Server (NTRS)

    Mcardle, Jack G.; Esker, Barbara S.

    1993-01-01

    Many conceptual designs for advanced short-takeoff, vertical landing (ASTOVL) aircraft need exhaust nozzles that can vector the jet to provide forces and moments for controlling the aircraft's movement or attitude in flight near the ground. A type of nozzle that can both vector the jet and vary the jet flow area is called a vane nozzle. Basically, the nozzle consists of parallel, spaced-apart flow passages formed by pairs of vanes (vanesets) that can be rotated on axes perpendicular to the flow. Two important features of this type of nozzle are the abilities to vector the jet rearward up to 45 degrees and to produce less harsh pressure and velocity footprints during vertical landing than does an equivalent single jet. A one-third-scale model of a generic vane nozzle was tested with unheated air at the NASA Lewis Research Center's Powered Lift Facility. The model had three parallel flow passages. Each passage was formed by a vaneset consisting of a long and a short vane. The longer vanes controlled the jet vector angle, and the shorter controlled the flow area. Nozzle performance for three nominal flow areas (basic and plus or minus 21 percent of basic area), each at nominal jet vector angles from -20 deg (forward of vertical) to +45 deg (rearward of vertical) are presented. The tests were made with the nozzle mounted on a model tailpipe with a blind flange on the end to simulate a closed cruise nozzle, at tailpipe-to-ambient pressure ratios from 1.8 to 4.0. Also included are jet wake data, single-vaneset vector performance for long/short and equal-length vane designs, and pumping capability. The pumping capability arises from the subambient pressure developed in the cavities between the vanesets, which could be used to aspirate flow from a source such as the engine compartment. Some of the performance characteristics are compared with characteristics of a single-jet nozzle previously reported.

  5. Performance evaluation of reactive and proactive routing protocol in IEEE 802.11 ad hoc network

    NASA Astrophysics Data System (ADS)

    Hamma, Salima; Cizeron, Eddy; Issaka, Hafiz; Guédon, Jean-Pierre

    2006-10-01

    Wireless technology based on the IEEE 802.11 standard is widely deployed. This technology is used to support multiple types of communication services (data, voice, image) with different QoS requirements. A MANET (Mobile Ad hoc NETwork) does not require a fixed infrastructure. Mobile nodes communicate through multihop paths. The wireless communication medium has variable and unpredictable characteristics. Furthermore, node mobility creates a continuously changing communication topology in which paths break and new ones form dynamically. The routing table of each router in an ad hoc network must be kept up-to-date. MANETs use Distance Vector or Link State algorithms which ensure that the route to every host is always known. However, this approach must take into account the specific characteristics of ad hoc networks: dynamic topologies, limited bandwidth, energy constraints, limited physical security, ... Two main routing protocol categories are studied in this paper: proactive protocols (e.g. Optimised Link State Routing - OLSR) and reactive protocols (e.g. Ad hoc On Demand Distance Vector - AODV, Dynamic Source Routing - DSR). The proactive protocols are based on periodic exchanges that update the routing tables to all possible destinations, even if no traffic goes through. The reactive protocols are based on on-demand route discoveries that update routing tables only for the destinations that have traffic going through. The present paper focuses on the study and performance evaluation of these categories using NS2 simulations. We have considered qualitative and quantitative criteria. The qualitative criteria concern distributed operation, loop freedom, security, and sleep-period operation. The quantitative criteria used to assess the performance of the routing protocols include end-to-end data delay, jitter, packet delivery ratio, routing load, and activity distribution. A comparative study is presented, taking a number of networking contexts into consideration, and the results show
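
The table maintenance behind distance-vector protocols can be sketched as a single Bellman-Ford relaxation step, in which a node merges a neighbor's advertised vector into its own (cost, next hop) table. Node names, costs, and the table layout below are invented for illustration and are not specific to AODV or OLSR.

```python
import math

def dv_update(table, neighbor, advertised, link_cost):
    """Merge a neighbor's advertised distance vector into our routing
    table, keeping (cost, next_hop) per destination (one Bellman-Ford
    relaxation step). Returns True if any route improved."""
    changed = False
    for dest, dist in advertised.items():
        cand = link_cost + dist
        if cand < table.get(dest, (math.inf, None))[0]:
            table[dest] = (cand, neighbor)
            changed = True
    return changed

table = {"A": (0, None)}                        # we are node A
dv_update(table, "B", {"A": 1, "B": 0, "C": 2}, link_cost=1)
print(table)  # {'A': (0, None), 'B': (1, 'B'), 'C': (3, 'B')}
```

Proactive protocols run such merges periodically for every destination; reactive protocols trigger the equivalent discovery only when traffic needs a route.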

  6. Diversity improves performance in excitable networks.

    PubMed

    Gollo, Leonardo L; Copelli, Mauro; Roberts, James A

    2016-01-01

    As few real systems comprise indistinguishable units, diversity is a hallmark of nature. Diversity among interacting units shapes properties of collective behavior such as synchronization and information transmission. However, the benefits of diversity on information processing at the edge of a phase transition, ordinarily assumed to emerge from identical elements, remain largely unexplored. Analyzing a general model of excitable systems with heterogeneous excitability, we find that diversity can greatly enhance optimal performance (by two orders of magnitude) when distinguishing incoming inputs. Heterogeneous systems possess a subset of specialized elements whose capability greatly exceeds that of the nonspecialized elements. We also find that diversity can yield multiple percolation, with performance optimized at tricriticality. Our results are robust in specific and more realistic neuronal systems comprising a combination of excitatory and inhibitory units, and indicate that diversity-induced amplification can be harnessed by neuronal systems for evaluating stimulus intensities.

  7. Diversity improves performance in excitable networks

    PubMed Central

    Copelli, Mauro; Roberts, James A.

    2016-01-01

    As few real systems comprise indistinguishable units, diversity is a hallmark of nature. Diversity among interacting units shapes properties of collective behavior such as synchronization and information transmission. However, the benefits of diversity on information processing at the edge of a phase transition, ordinarily assumed to emerge from identical elements, remain largely unexplored. Analyzing a general model of excitable systems with heterogeneous excitability, we find that diversity can greatly enhance optimal performance (by two orders of magnitude) when distinguishing incoming inputs. Heterogeneous systems possess a subset of specialized elements whose capability greatly exceeds that of the nonspecialized elements. We also find that diversity can yield multiple percolation, with performance optimized at tricriticality. Our results are robust in specific and more realistic neuronal systems comprising a combination of excitatory and inhibitory units, and indicate that diversity-induced amplification can be harnessed by neuronal systems for evaluating stimulus intensities. PMID:27168961

  8. Project Performance Evaluation Using Deep Belief Networks

    NASA Astrophysics Data System (ADS)

    Nguvulu, Alick; Yamato, Shoso; Honma, Toshihisa

    A Project Assessment Indicator (PAI) Model has recently been applied to evaluate monthly project performance based on 15 project elements derived from the project management (PM) knowledge areas. While the PAI Model comprehensively evaluates project performance, it lacks objectivity and universality. It lacks objectivity because experts assign model weights intuitively based on their PM skills and experience. It lacks universality because the allocation of ceiling scores to project elements is done ad hoc based on the empirical rule without taking into account the interactions between the project elements. This study overcomes these limitations by applying a DBN approach where the model automatically assigns weights and allocates ceiling scores to the project elements based on the DBN weights which capture the interaction between the project elements. We train our DBN on 5 IT projects of 12 months duration and test it on 8 IT projects with less than 12 months duration. We completely eliminate the manual assigning of weights and compute ceiling scores of project elements based on DBN weights. Our trained DBN evaluates monthly project performance of the 8 test projects based on the 15 project elements to within a monthly relative error margin of between ±1.03 and ±3.30%.

  9. Static internal performance of single-expansion-ramp nozzles with thrust-vectoring capability up to 60 deg

    NASA Technical Reports Server (NTRS)

    Berrier, B. L.; Leavitt, L. D.

    1984-01-01

    An investigation has been conducted at static conditions (wind off) in the static-test facility of the Langley 16-Foot Transonic Tunnel. The effects of geometric thrust-vector angle, sidewall containment, ramp curvature, lower-flap lip angle, and ramp length on the internal performance of nonaxisymmetric single-expansion-ramp nozzles were investigated. Geometric thrust-vector angle was varied from -20 deg. to 60 deg., and nozzle pressure ratio was varied from 1.0 (jet off) to approximately 10.0.

  10. A Comprehensive Performance Comparison of On-Demand Routing Protocols in Mobile Ad-Hoc Networks

    NASA Astrophysics Data System (ADS)

    Khan, Jahangir; Hayder, Syed Irfan

    A mobile ad hoc network is an autonomous system of mobile nodes connected by wireless links. Each node operates not only as an end system but also as a router that forwards packets. The nodes are free to move about and organize themselves on the fly. In this paper we focus on the performance of on-demand routing protocols, such as DSR and AODV, in ad hoc networks. We observe the performance of each protocol through simulation while varying the data in intermediate nodes, and we compare the data throughput in each mobility mode of each protocol to analyze the packet fraction for application data. The objective of this work is to evaluate two on-demand routing protocols, namely Ad hoc On-Demand Distance Vector (AODV) and Dynamic Source Routing (DSR), for wireless ad hoc networks based on the performance of intermediate nodes in delivering data from source to destination and vice versa, in order to compare throughput efficiency at neighboring nodes. To this end, we use the OPNET simulator for a performance comparison of hop-by-hop delivery of data packets in an autonomous system.

  11. Performance limitations for networked control systems with plant uncertainty

    NASA Astrophysics Data System (ADS)

    Chi, Ming; Guan, Zhi-Hong; Cheng, Xin-Ming; Yuan, Fu-Shun

    2016-04-01

    There has recently been significant interest in performance studies of networked control systems with communication constraints, but the existing work mainly assumes that an exact model of the plant is available. The goal of this paper is to investigate the optimal tracking performance of a networked control system in the presence of plant uncertainty. The plant under consideration is assumed to be non-minimum phase and unstable; a two-parameter controller is employed, and the integral square criterion is adopted to measure the tracking error. We formulate the uncertainty using stochastic embedding. An explicit expression for the tracking performance is obtained. The results show that network communication noise and model uncertainty, as well as unstable poles and non-minimum phase zeros, can worsen the tracking performance.

  12. High-performance multicasting schemes in optical packet switched networks

    NASA Astrophysics Data System (ADS)

    Ji, Yuefeng; Liu, Xin; Zhang, Jie; Zhang, Min

    2009-11-01

    Current trends in communications indicate that multicasting is becoming increasingly popular and important in networking applications. Since multicasting can be supported more efficiently in the optical domain, by utilizing the inherent light-splitting capacity of optical devices such as optical splitters, than by copying data in the electronic domain, issues concerning running multicast sessions in all-optical networks have received much attention in recent years. In this paper, different multicasting schemes and their performance in Optical Packet Switched networks are investigated, including the parallel-mode, serial-mode, and hybrid-mode multicasting schemes. Computer simulation results show that, compared with the parallel-mode and serial-mode schemes, the hybrid-mode multicasting scheme is the best way to deliver multicast sessions in Optical Packet Switched networks owing to its superior performance.

  13. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  14. Using AQM to improve TCP performance over wireless networks

    NASA Astrophysics Data System (ADS)

    Li, Victor H.; Liu, Zhi-Qiang; Low, Steven H.

    2002-07-01

    TCP flow control algorithms have been designed for wireline networks where congestion is measured by packet loss due to buffer overflow. However, wireless networks also suffer from significant packet losses due to bit errors and handoffs. TCP responds to all packet losses by invoking congestion control and avoidance algorithms, and this results in degraded end-to-end performance in wireless networks. In this paper, we describe a Wireless Random Exponential Marking (WREM) scheme which effectively improves TCP performance over wireless networks by decoupling loss recovery from congestion control. Moreover, WREM is capable of handling the coexistence of both ECN-Capable and Non-ECN-Capable routers. We present simulation results to show its effectiveness and compatibility.
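
A REM-style exponential marking curve, the family the scheme's name points to, maps a congestion "price" to a marking probability p = 1 − φ^(−price). The sketch below shows only that mapping; the value of φ and the price values are illustrative, and WREM's wireless-specific loss-decoupling logic is not modeled here.

```python
def rem_mark_prob(price, phi=1.001):
    """Random Exponential Marking: a router marks (or drops) packets
    with probability 1 - phi**(-price), where the price grows with
    congestion. phi is an assumed illustrative constant."""
    return 1.0 - phi ** (-price)

# Marking probability rises smoothly from 0 as the congestion price grows
for price in (0, 50, 200):
    print(price, round(rem_mark_prob(price), 4))
```

Because the exponential form makes the end-to-end marking probability depend on the sum of link prices, a TCP sender can infer total path congestion separately from random wireless losses.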

  15. High-performance network and channel based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1992-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called I/O channels. With the dramatic shift toward workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. In this paper, we discuss the underlying technology trends that are leading to high-performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high-performance computing based on network-attached storage.

  16. Performance analysis of a common-mode signal based low-complexity crosstalk cancelation scheme in vectored VDSL

    NASA Astrophysics Data System (ADS)

    Zafaruddin, SM; Prakriya, Shankar; Prasad, Surendra

    2012-12-01

    In this article, we propose a vectored system by using both common mode (CM) and differential mode (DM) signals in upstream VDSL. We first develop a multi-input multi-output (MIMO) CM channel by using the single-pair CM and MIMO DM channels proposed recently, and study the characteristics of the resultant CM-DM channel matrix. We then propose a low-complexity receiver structure in which the CM and DM signals of each twisted-pair (TP) are combined before the application of a MIMO zero forcing (ZF) receiver. We study the capacity of the proposed system, and show that the vectored CM-DM processing provides higher data rates at longer loop-lengths. In the absence of alien crosstalk, application of the ZF receiver on the vectored CM-DM signals yields performance close to the single user bound (SUB). In the presence of alien crosstalk, we show that the vectored CM-DM processing exploits the spatial correlation of CM and DM signals and provides higher data rates than with DM processing only. Simulation results validate the analysis and demonstrate the importance of CM-DM joint processing in vectored VDSL systems.
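
The zero-forcing step itself is compact: with a known channel matrix H, the ZF estimate is the (pseudo-)inverse of H applied to the received vector, followed by symbol slicing. The 4×4 channel, symbols, and noise level below are synthetic stand-ins, not a VDSL CM-DM channel model.

```python
import numpy as np

# Toy zero-forcing (ZF) MIMO receiver on an assumed random channel
rng = np.random.default_rng(7)
n_pairs = 4                                     # illustrative number of pairs
H = rng.standard_normal((n_pairs, n_pairs))     # stand-in crosstalk channel
x = rng.choice([-1.0, 1.0], size=n_pairs)       # transmitted BPSK-like symbols
y = H @ x + 1e-3 * rng.standard_normal(n_pairs) # received vector, low noise
x_hat = np.sign(np.linalg.pinv(H) @ y)          # ZF equalization + slicing
print(np.array_equal(x_hat, x))
```

ZF inverts the crosstalk exactly but amplifies noise on ill-conditioned channels, which is why performance relative to the single-user bound depends on the channel's conditioning.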

  17. Statistical performance evaluation of ECG transmission using wireless networks.

    PubMed

    Shakhatreh, Walid; Gharaibeh, Khaled; Al-Zaben, Awad

    2013-07-01

    This paper presents a simulation of the transmission of biomedical signals (using the ECG signal as an example) over wireless networks. The effects of channel impairments (SNR, path-loss exponent, and path delay) and network impairments (such as packet-loss probability) on the diagnosability of the received ECG signal are investigated. The ECG signal is transmitted through a wireless network system composed of two communication protocols: an 802.15.4 ZigBee protocol and an 802.11b protocol. The performance of the transmission is evaluated using higher-order statistical parameters, such as kurtosis and negative entropy, in addition to common techniques such as PRD, RMS and cross-correlation.
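
Two of the cited evaluation metrics are easy to state directly: the percentage root-mean-square difference (PRD) between the original and received signal, and the (Pearson) kurtosis m4/m2². The sketch below computes both on a synthetic sinusoid with an added alternating distortion; the signal and distortion are illustrative, not ECG data.

```python
import math

def prd(original, received):
    """Percentage root-mean-square difference between two signals."""
    num = sum((o - r) ** 2 for o, r in zip(original, received))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)

def kurtosis(x):
    """Pearson kurtosis m4 / m2**2 (3.0 for a Gaussian, 1.5 for a sine)."""
    n = len(x)
    mu = sum(x) / n
    m2 = sum((v - mu) ** 2 for v in x) / n
    m4 = sum((v - mu) ** 4 for v in x) / n
    return m4 / m2 ** 2

clean = [math.sin(2 * math.pi * t / 100) for t in range(1000)]
noisy = [s + 0.05 * (-1) ** i for i, s in enumerate(clean)]
print(round(prd(clean, noisy), 2), round(kurtosis(clean), 2))  # 7.07 1.5
```

A rising PRD, or a kurtosis drifting away from the clean signal's value, flags that channel or network impairments have degraded diagnosability.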

  18. Architecture and Performance Analysis of General Bio-Molecular Networks

    DTIC Science & Technology

    2012-01-14

    Architecture and Performance Analysis of General Bio-Molecular Networks. Contract/Grant #: FA9550-10-1-0128. Report date: 14-10-2011. The proposed method is expected to be much better, in terms of the running time, for the system with more molecules. Subject terms: stochastic bio-molecular networks.

  19. The Use of Neural Network Technology to Model Swimming Performance

    PubMed Central

    Silva, António José; Costa, Aldo Manuel; Oliveira, Paulo Moura; Reis, Victor Machado; Saavedra, José; Perl, Jurgen; Rouboa, Abel; Marinho, Daniel Almeida

    2007-01-01

    The aims of the present study were: to identify the factors that explain performance in the 200 m individual medley and 400 m front crawl events in young swimmers; to model performance in those events using non-linear mathematical methods, namely artificial neural networks (multilayer perceptrons); and to assess the precision of the neural network models in predicting performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four domains: kinanthropometric evaluation, dry-land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamic, hydrostatic and bioenergetic characteristics), and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between the preponderant variables for each gender and swim performance in the 200 m individual medley and 400 m front crawl events were developed. For this purpose a feed-forward neural network (multilayer perceptron) with three neurons in a single hidden layer was used. The prognostic precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach to the resolution of complex problems such as performance modelling and talent identification in swimming and, possibly, in a wide variety of sports.
    Key points: The non-linear analysis resulting from the use of a feed-forward neural network allowed the development of four performance models. The mean difference between the true and estimated results produced by each of the four neural network models was low. The neural network tool can be a good approach to performance modelling, as an alternative to standard statistical models that presume well-defined distributions and independence among all inputs. The use of neural networks for sports
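
The topology described (a feed-forward multilayer perceptron with three neurons in one hidden layer) can be sketched in plain numpy: a tanh hidden layer trained by full-batch backpropagation on a toy regression task. The data, target function, and hyperparameters are invented; this is not the authors' model, code, or swimmer data.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 2))          # toy "predictor variables"
y = (0.5 * X[:, 0] - 0.3 * X[:, 1])[:, None]   # toy "performance" target

W1 = rng.standard_normal((2, 3)) * 0.5         # input -> 3 hidden units
b1 = np.zeros(3)
W2 = rng.standard_normal((3, 1)) * 0.5         # hidden -> 1 output
b2 = np.zeros(1)
lr = 0.1

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                   # hidden layer (3 neurons)
    out = h @ W2 + b2                          # linear output
    err = out - y
    gW2 = h.T @ err / len(X)                   # backpropagation
    gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1.0 - h ** 2)         # tanh derivative
    gW1 = X.T @ gh / len(X)
    gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float((err ** 2).mean())
print(mse)  # small training error on this toy task
```

The appeal noted in the abstract carries over: nothing in the fit assumes a particular input distribution or independence among predictors.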

  20. Virulence Factors of Geminivirus Interact with MYC2 to Subvert Plant Resistance and Promote Vector Performance

    PubMed Central

    Li, Ran; Weldegergis, Berhane T.; Li, Jie; Jung, Choonkyun; Qu, Jing; Sun, Yanwei; Qian, Hongmei; Tee, ChuanSia; van Loon, Joop J.A.; Dicke, Marcel; Chua, Nam-Hai; Liu, Shu-Sheng

    2014-01-01

    A pathogen may cause infected plants to promote the performance of its transmitting vector, which accelerates the spread of the pathogen. This positive effect of a pathogen on its vector via their shared host plant is termed indirect mutualism. For example, terpene biosynthesis is suppressed in begomovirus-infected plants, leading to reduced plant resistance and enhanced performance of the whiteflies (Bemisia tabaci) that transmit these viruses. Although begomovirus-whitefly mutualism has long been recognized, the underlying mechanism has remained elusive. Here, we identified βC1 of Tomato yellow leaf curl China virus, a monopartite begomovirus, as the viral genetic factor that suppresses plant terpene biosynthesis. βC1 directly interacts with the basic helix-loop-helix transcription factor MYC2 to compromise the activation of MYC2-regulated terpene synthase genes, thereby reducing whitefly resistance. MYC2 associates with the bipartite begomoviral protein BV1, suggesting that MYC2 is an evolutionarily conserved target of begomoviruses for the suppression of terpene-based resistance and the promotion of vector performance. Our findings describe how this viral pathogen regulates host plant metabolism to establish mutualism with its insect vector. PMID:25490915

  1. Runtime Performance and Virtual Network Control Alternatives in VM-Based High-Fidelity Network Simulations

    DTIC Science & Technology

    2012-12-01

    described in detail in (Yoginath, Perumalla and Henz 2012). The MPI benchmarks comprise two scenarios, namely Constant Network Delay (CND) and ... Varying Network Delay (VND). With CND, we evaluate the performance of NSX and CSX scheduler support for time-ordered event execution when the ... identifier of the jth message in the ith run. CND Benchmark Performance. Figure 2: CND benchmark error plots (left); CND benchmark runtime plots

  2. Bearing performance degradation assessment based on time-frequency code features and SOM network

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Tang, Baoping; Han, Yan; Deng, Lei

    2017-04-01

    Bearing performance degradation assessment and prognostics are extremely important in supporting maintenance decisions and guaranteeing the system's reliability. To achieve this goal, this paper proposes a novel feature extraction method for the degradation assessment and prognostics of bearings. Features of time-frequency codes (TFCs) are extracted from the time-frequency distribution using a hybrid procedure based on short-time Fourier transform (STFT) and non-negative matrix factorization (NMF) theory. An alternative way to design the health indicator is investigated by quantifying the similarity between feature vectors using a self-organizing map (SOM) network. On the basis of this idea, a new health indicator called time-frequency code quantification error (TFCQE) is proposed to assess the performance degradation of the bearing. This indicator is constructed from the bearing's real-time behavior and a SOM model trained beforehand with only the TFC vectors under the normal condition. Vibration signals collected from bearing run-to-failure tests are used to validate the developed method. The comparison results demonstrate the superiority of the proposed TFCQE indicator over many other traditional features in terms of feature quality metrics, incipient degradation identification and prediction accuracy.
    Highlights:
    - Time-frequency codes are extracted to reflect the signals' characteristics.
    - The SOM network serves as a tool to quantify the similarity between feature vectors.
    - A new health indicator is proposed to track the whole course of degradation development.
    - The method is useful for extracting degradation features and detecting incipient degradation.
    - The superiority of the proposed method is verified using experimental data.
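The quantification-error idea behind such an indicator can be illustrated with a small sketch: a codebook fitted on normal-condition feature vectors stands in for the trained SOM, and the distance from a new feature vector to its best-matching unit serves as the health indicator. All data, sizes and thresholds here are illustrative, not from the paper:

```python
import numpy as np

# Quantification-error health indicator, sketched: distance from a
# new feature vector to its best-matching unit (BMU) in a codebook
# fitted on normal-condition vectors grows as the bearing degrades.
rng = np.random.default_rng(1)

normal = rng.normal(0.0, 0.1, size=(200, 8))    # normal-condition TFC vectors
codebook = normal[rng.choice(200, size=16, replace=False)]  # SOM-like units

def quantization_error(x, codebook):
    """Distance from x to its best-matching unit in the codebook."""
    return float(np.min(np.linalg.norm(codebook - x, axis=1)))

qe_h = quantization_error(rng.normal(0.0, 0.1, size=8), codebook)  # healthy
qe_d = quantization_error(rng.normal(1.0, 0.1, size=8), codebook)  # degraded

print(f"QE healthy: {qe_h:.3f}, QE degraded: {qe_d:.3f}")
```

A real SOM adds topology-preserving training of the codebook, but the monitoring step reduces to exactly this nearest-unit distance.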

  3. Hospital network performance: a survey of hospital stakeholders' perspectives.

    PubMed

    Bravi, F; Gibertoni, D; Marcon, A; Sicotte, C; Minvielle, E; Rucci, P; Angelastro, A; Carradori, T; Fantini, M P

    2013-02-01

    Hospital networks are an emerging organizational form designed to face the new challenges of public health systems. Although the benefits introduced by network models in terms of rationalization of resources are known, evidence from the literature about stakeholders' perspectives on hospital network performance is scanty. Using the Competing Values Framework of organizational effectiveness and its subsequent adaptation by Minvielle et al., we conducted in 2009 a survey in five hospitals of an Italian network for oncological care to examine and compare the views of internal stakeholders (physicians, nurses and administrative staff) on hospital network performance. 329 questionnaires exploring stakeholders' perspectives were completed, with a response rate of 65.8%. Using exploratory factor analysis of the 66 items of the questionnaire, we identified four factors: Centrality of relationships, Quality of care, Attractiveness/Reputation, and Staff empowerment and protection of workers' rights. 42 items were retained in the analysis. Factor scores proved to be high (mean score > 8 on a 10-point scale), except for Attractiveness/Reputation (mean score 6.79), indicating that stakeholders attach higher importance to relational and health care aspects. Comparison of factor scores among stakeholders did not reveal significant differences, suggesting a broadly shared view of hospital network performance.

  4. UltraSciencenet: High- Performance Network Research Test-Bed

    SciTech Connect

    Rao, Nageswara S; Wing, William R; Poole, Stephen W; Hicks, Susan Elaine; DeNap, Frank A; Carter, Steven M; Wu, Qishi

    2009-04-01

    The high-performance networking requirements for next generation large-scale applications belong to two broad classes: (a) high bandwidths, typically multiples of 10 Gbps, to support bulk data transfers, and (b) stable bandwidths, typically at much lower bandwidths, to support computational steering, remote visualization, and remote control of instrumentation. Current Internet technologies, however, are severely limited in meeting these demands because such bulk bandwidths are available only in the backbone, and stable control channels are hard to realize over shared connections. The UltraScience Net (USN) facilitates the development of such technologies by providing dynamic, cross-country dedicated 10 Gbps channels for large data transfers, and 150 Mbps channels for interactive and control operations. Contributions of the USN project are two-fold: (a) Infrastructure Technologies for Network Experimental Facility: USN developed and/or demonstrated a number of infrastructure technologies needed for a national-scale network experimental facility. Compared to the Internet, USN's data-plane is different in that it can be partitioned into isolated layer-1 or layer-2 connections, and its control-plane is different in the ability of users and applications to set up and tear down channels as needed. Its design required several new components, including a Virtual Private Network infrastructure, a bandwidth and channel scheduler, and a dynamic signaling daemon. The control-plane employs a centralized scheduler to compute the channel allocations and a signaling daemon to generate configuration signals to switches. In a nutshell, USN demonstrated the ability to build and operate a stable national-scale switched network. (b) Structured Network Research Experiments: A number of network research experiments have been conducted on USN that cannot be easily supported over existing network facilities, including test-beds and production networks. It settled an open matter by demonstrating

  5. Input vector optimization of feed-forward neural networks for fitting ab initio potential-energy databases

    NASA Astrophysics Data System (ADS)

    Malshe, M.; Raff, L. M.; Hagan, M.; Bukkapatnam, S.; Komanduri, R.

    2010-05-01

    The variation in the fitting accuracy of neural networks (NNs) when used to fit databases comprising potential energies obtained from ab initio electronic structure calculations is investigated as a function of the number and nature of the elements employed in the input vector to the NN. Ab initio databases for H2O2, HONO, Si5, and H2C=CHBr were employed in the investigations. These systems were chosen so as to include four-, five-, and six-body systems containing first, second, third, and fourth row elements with a wide variety of chemical bonding and whose conformations cover a wide range of structures that occur under high-energy machining conditions and in chemical reactions involving cis-trans isomerizations, six different types of two-center bond ruptures, and two different three-center dissociation reactions. The ab initio databases for these systems were obtained using density functional theory/B3LYP, MP2, and MP4 methods with extended basis sets. A total of 31 input vectors were investigated. In each case, the elements of the input vector were chosen from interatomic distances, inverse powers of the interatomic distance, three-body angles, and dihedral angles. Both redundant and nonredundant input vectors were investigated. The results show that among all the input vectors investigated, the set employed in the Z-matrix specification of the molecular configurations in the electronic structure calculations gave the lowest NN fitting accuracy for both Si5 and vinyl bromide. The underlying reason for this result appears to be the discontinuity present in the dihedral angle for planar geometries. The use of trigonometric functions of the angles as input elements produced significantly improved fitting accuracy as this choice eliminates the discontinuity. The most accurate fitting was obtained when the elements of the input vector were taken to have the form Rij^(-n), where the Rij are the interatomic distances. When the Levenberg-Marquardt procedure was modified

  6. Input vector optimization of feed-forward neural networks for fitting ab initio potential-energy databases.

    PubMed

    Malshe, M; Raff, L M; Hagan, M; Bukkapatnam, S; Komanduri, R

    2010-05-28

    The variation in the fitting accuracy of neural networks (NNs) when used to fit databases comprising potential energies obtained from ab initio electronic structure calculations is investigated as a function of the number and nature of the elements employed in the input vector to the NN. Ab initio databases for H2O2, HONO, Si5, and H2C=CHBr were employed in the investigations. These systems were chosen so as to include four-, five-, and six-body systems containing first, second, third, and fourth row elements with a wide variety of chemical bonding and whose conformations cover a wide range of structures that occur under high-energy machining conditions and in chemical reactions involving cis-trans isomerizations, six different types of two-center bond ruptures, and two different three-center dissociation reactions. The ab initio databases for these systems were obtained using density functional theory/B3LYP, MP2, and MP4 methods with extended basis sets. A total of 31 input vectors were investigated. In each case, the elements of the input vector were chosen from interatomic distances, inverse powers of the interatomic distance, three-body angles, and dihedral angles. Both redundant and nonredundant input vectors were investigated. The results show that among all the input vectors investigated, the set employed in the Z-matrix specification of the molecular configurations in the electronic structure calculations gave the lowest NN fitting accuracy for both Si5 and vinyl bromide. The underlying reason for this result appears to be the discontinuity present in the dihedral angle for planar geometries. The use of trigonometric functions of the angles as input elements produced significantly improved fitting accuracy as this choice eliminates the discontinuity. The most accurate fitting was obtained when the elements of the input vector were taken to have the form Rij^(-n), where the Rij are the interatomic distances. When the Levenberg
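The best-performing input representation reported above, elements of the form Rij^(-n) built from interatomic distances, is straightforward to construct from Cartesian coordinates. The 4-atom geometry and the exponent n below are illustrative, not taken from the paper's databases:

```python
import numpy as np

# Build an NN input vector of pairwise inverse-power distances
# Rij^(-n) from Cartesian coordinates.
def input_vector(coords, n=2):
    """Return all pairwise inverse-power distances Rij^(-n)."""
    coords = np.asarray(coords, dtype=float)
    i, j = np.triu_indices(len(coords), k=1)   # unique atom pairs
    r = np.linalg.norm(coords[i] - coords[j], axis=1)
    return r ** (-n)

# 4 atoms -> 6 pairwise distances; unlike dihedral angles, these
# features stay continuous through planar geometries.
geom = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.5, 1.0, 0.0), (0.5, 1.5, 0.2)]
v = input_vector(geom, n=2)
print(v)
```

This also shows why the representation is redundant for larger systems: the number of pairs grows quadratically with the number of atoms, while only 3N - 6 internal coordinates are independent.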

  7. Efficient resting-state EEG network facilitates motor imagery performance

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Yao, Dezhong; Valdés-Sosa, Pedro A.; Li, Fali; Li, Peiyang; Zhang, Tao; Ma, Teng; Li, Yongjie; Xu, Peng

    2015-12-01

    Objective. Motor imagery-based brain-computer interface (MI-BCI) systems hold promise for motor function rehabilitation and assistance for people with motor impairments. But the ability to operate an MI-BCI varies across subjects, which becomes a substantial problem for practical BCI applications beyond the laboratory. Approach. Several previous studies have demonstrated that individual MI-BCI performance is related to the resting state of the brain. In this study, we further investigate offline MI-BCI performance variations from the perspective of the resting-state electroencephalography (EEG) network. Main results. Spatial topologies and statistical measures of the network have close relationships with MI classification accuracy. Specifically, mean functional connectivity, node degrees, edge strengths, clustering coefficient, local efficiency and global efficiency are positively correlated with MI classification accuracy, whereas the characteristic path length is negatively correlated with MI classification accuracy. The above results indicate that an efficient background EEG network may facilitate MI-BCI performance. Finally, a multiple linear regression model was adopted to predict subjects' MI classification accuracy from the efficiency measures of the resting-state EEG network, resulting in a reliable prediction. Significance. This study reveals the network mechanisms of the MI-BCI and may help to find new strategies for improving MI-BCI performance.
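The global-efficiency measure mentioned above is the mean inverse shortest-path length over node pairs. A minimal sketch on toy unweighted networks follows; a real analysis would use the weighted EEG connectivity matrix rather than these illustrative graphs:

```python
import numpy as np

# Global efficiency = mean of 1/d(i, j) over all node pairs, with
# shortest-path distances d computed by Floyd-Warshall.
def global_efficiency(adj):
    n = len(adj)
    d = np.where(adj > 0, 1.0, np.inf)      # unweighted edge lengths
    np.fill_diagonal(d, 0.0)
    for k in range(n):                      # Floyd-Warshall relaxation
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    inv = 1.0 / d[~np.eye(n, dtype=bool)]   # off-diagonal inverse distances
    return float(inv.mean())

ring = np.zeros((6, 6))
for i in range(6):                          # 6-node ring: long paths
    ring[i, (i + 1) % 6] = ring[(i + 1) % 6, i] = 1

full = np.ones((6, 6)) - np.eye(6)          # fully connected: efficiency 1

ge_ring = global_efficiency(ring)
ge_full = global_efficiency(full)
print(f"ring: {ge_ring:.3f}, fully connected: {ge_full:.3f}")
```

The denser, better-integrated network scores higher, which matches the abstract's finding that higher efficiency goes with better MI classification accuracy.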

  8. Equivalent Vectors

    ERIC Educational Resources Information Center

    Levine, Robert

    2004-01-01

    The cross-product is a mathematical operation performed between two 3-dimensional vectors; the result is a vector orthogonal (perpendicular) to both of them. When learning about this for the first time in Calculus III, the class was taught that if AxB = AxC, it does not necessarily follow that B = C. This seemed baffling. The…
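The non-cancellation property described above is easy to demonstrate numerically: since A×A = 0, every vector C = B + tA gives the same cross product as B. A quick NumPy check:

```python
import numpy as np

# A x B = A x C does not force B = C: any C differing from B by a
# multiple of A yields the same cross product, because A x A = 0.
A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
C = B + 2 * A                      # C differs from B along A

print(np.cross(A, B))              # [0. 0. 1.]
print(np.cross(A, C))              # [0. 0. 1.]  (equal, yet B != C)
```

Geometrically, the cross product only sees the component of B perpendicular to A, so any change parallel to A is invisible to it.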

  9. On a vector space representation in genetic algorithms for sensor scheduling in wireless sensor networks.

    PubMed

    Martins, F V C; Carrano, E G; Wanner, E F; Takahashi, R H C; Mateus, G R; Nakamura, F G

    2014-01-01

    Recent works have raised the hypothesis that assigning a geometry to the decision-variable space of a combinatorial problem could be useful both for providing meaningful descriptions of the fitness landscape and for supporting the systematic construction of evolutionary operators (the geometric operators) that make consistent use of the space's geometric properties in the search for problem optima. This paper introduces some new geometric operators that realize searches along the combinatorial-space versions of the geometric entities descent directions and subspaces. The new geometric operators are stated in the specific context of the wireless sensor network dynamic coverage and connectivity problem (WSN-DCCP). A genetic algorithm (GA) is developed for the WSN-DCCP using the proposed operators and is compared with a formulation based on integer linear programming (ILP) which is solved with exact methods. That ILP formulation adopts a proxy objective function based on the minimization of energy consumption in the network, in order to approximate the objective of network lifetime maximization, and a greedy approach for dealing with the system's dynamics. To the authors' knowledge, the proposed GA is the first algorithm to outperform the ILP formulation in terms of synthesized network lifetime, while also running in much smaller computational times for large instances.

  10. Adaptive Optimization of Aircraft Engine Performance Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Long, Theresa W.

    1995-01-01

    Preliminary results are presented on the development of an adaptive neural network based control algorithm to enhance aircraft engine performance. This work builds upon a previous National Aeronautics and Space Administration (NASA) effort known as Performance Seeking Control (PSC). PSC is an adaptive control algorithm which contains a model of the aircraft's propulsion system which is updated on-line to match the operation of the aircraft's actual propulsion system. Information from the on-line model is used to adapt the control system during flight to allow optimal operation of the aircraft's propulsion system (inlet, engine, and nozzle) to improve aircraft engine performance without compromising reliability or operability. Performance Seeking Control has been shown to yield reductions in fuel flow, increases in thrust, and reductions in engine fan turbine inlet temperature. The neural network based adaptive control, like PSC, will contain a model of the propulsion system which will be used to calculate optimal control commands on-line. Hopes are that it will be able to provide some additional benefits above and beyond those of PSC. The PSC algorithm is computationally intensive, it is valid only at near steady-state flight conditions, and it has no way to adapt or learn on-line. These issues are being addressed in the development of the optimal neural controller. Specialized neural network processing hardware is being developed to run the software, the algorithm will be valid at steady-state and transient conditions, and will take advantage of the on-line learning capability of neural networks. Future plans include testing the neural network software and hardware prototype against an aircraft engine simulation. In this paper, the proposed neural network software and hardware is described and preliminary neural network training results are presented.

  11. A network application for modeling a centrifugal compressor performance map

    NASA Astrophysics Data System (ADS)

    Nikiforov, A.; Popova, D.; Soldatova, K.

    2017-08-01

    The approximation of aerodynamic performance of a centrifugal compressor stage and vaneless diffuser by neural networks is presented. Advantages, difficulties and specific features of the method are described. An example of a neural network and its structure is shown. The performances in terms of efficiency, pressure ratio and work coefficient of 39 model stages within the range of flow coefficient from 0.01 to 0.08 were modeled with mean squared error 1.5 %. In addition, the loss and friction coefficients of vaneless diffusers of relative widths 0.014-0.10 are modeled with mean squared error 2.45 %.

  12. Performance of social network sensors during Hurricane Sandy.

    PubMed

    Kryvasheyeu, Yury; Chen, Haohui; Moro, Esteban; Van Hentenryck, Pascal; Cebrian, Manuel

    2015-01-01

    Information flow during catastrophic events is a critical aspect of disaster management. Modern communication platforms, in particular online social networks, provide an opportunity to study such flow and derive early-warning sensors, thus improving emergency preparedness and response. Performance of the social networks sensor method, based on topological and behavioral properties derived from the "friendship paradox", is studied here for over 50 million Twitter messages posted before, during, and after Hurricane Sandy. We find that differences in users' network centrality effectively translate into moderate awareness advantage (up to 26 hours); and that geo-location of users within or outside of the hurricane-affected area plays a significant role in determining the scale of such an advantage. Emotional response appears to be universal regardless of the position in the network topology, and displays characteristic, easily detectable patterns, opening a possibility to implement a simple "sentiment sensing" technique that can detect and locate disasters.

  13. Performance of Social Network Sensors during Hurricane Sandy

    PubMed Central

    Kryvasheyeu, Yury; Chen, Haohui; Moro, Esteban; Van Hentenryck, Pascal; Cebrian, Manuel

    2015-01-01

    Information flow during catastrophic events is a critical aspect of disaster management. Modern communication platforms, in particular online social networks, provide an opportunity to study such flow and derive early-warning sensors, thus improving emergency preparedness and response. Performance of the social networks sensor method, based on topological and behavioral properties derived from the “friendship paradox”, is studied here for over 50 million Twitter messages posted before, during, and after Hurricane Sandy. We find that differences in users’ network centrality effectively translate into moderate awareness advantage (up to 26 hours); and that geo-location of users within or outside of the hurricane-affected area plays a significant role in determining the scale of such an advantage. Emotional response appears to be universal regardless of the position in the network topology, and displays characteristic, easily detectable patterns, opening a possibility to implement a simple “sentiment sensing” technique that can detect and locate disasters. PMID:25692690
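The "friendship paradox" underlying the sensor method in the two records above says that, on average, your friends have more friends than you do; sampling the friends of random users therefore yields better-connected, earlier-warned sensors. A small sketch on a toy hub-and-spoke network (illustrative, not Twitter data):

```python
import numpy as np

# Friendship paradox on a toy graph: node 0 is a hub ("celebrity"),
# nodes 1..7 are peripheral; mean neighbour degree exceeds mean degree.
adj = np.zeros((8, 8), dtype=int)
for i in range(1, 8):                  # hub connected to everyone
    adj[0, i] = adj[i, 0] = 1
adj[1, 2] = adj[2, 1] = 1              # one extra peripheral tie

deg = adj.sum(axis=1)
mean_degree = float(deg.mean())

# Mean degree of each node's neighbours, averaged over nodes.
neighbour_means = [deg[adj[i] > 0].mean() for i in range(8)]
mean_neighbour_degree = float(np.mean(neighbour_means))

print(mean_degree, mean_neighbour_degree)
```

The gap arises because high-degree nodes are over-represented among neighbours, which is exactly why "friends of random users" make better-placed sensors than random users themselves.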

  14. Asynchronous transfer mode link performance over ground networks

    NASA Technical Reports Server (NTRS)

    Chow, E. T.; Markley, R. W.

    1993-01-01

    The results of an experiment to determine the feasibility of using asynchronous transfer mode (ATM) technology to support advanced spacecraft missions that require high-rate ground communications and, in particular, full-motion video are reported. Potential nodes in such a ground network include Deep Space Network (DSN) antenna stations, the Jet Propulsion Laboratory, and a set of national and international end users. The experiment simulated a lunar microrover, a lunar lander, the DSN ground communications system, and distributed science users. The users were equipped with video-capable workstations. A key feature was an optical fiber link between two high-performance workstations equipped with ATM interfaces. Video was also transmitted through JPL's institutional network to a user 8 km from the experiment. Variations in video quality depending on the networks and computers were observed, and the results are reported.

  15. Full service access networks: experimental realization and performance

    NASA Astrophysics Data System (ADS)

    Faulkner, David W.; Quayle, Alan; Smith, Phillip A.; Clarke, Don; Fisher, Simon; Adams, Richard; Kelly, James; Smee, Dave; Cook, John G.

    1997-10-01

    This paper describes how an experimental full-services access network was constructed at BT Labs and presents views on how its performance could be improved to meet the reliability and traffic loading requirements expected in real applications such as fiber to the business and fiber to the cabinet. The experimental network included an asynchronous transfer mode (ATM) switch, an ATM passive optical network (PON), a very high speed digital subscriber loop (VDSL) customer drop, and an ATM Forum 25 Mbit/s customer network. The design and realization of the VDSL customer drop, the signaling system and the interfaces between the system elements formed a major part of the design and construction work at BT Labs. The ability to cope with varying service demand while achieving the necessary quality of service is an important requirement for roll-out systems. This paper describes how these requirements could be met in the design of future proprietary equipment.

  16. Comparative performance of three experimental hut designs for measuring malaria vector responses to insecticides in Tanzania.

    PubMed

    Massue, Dennis J; Kisinza, William N; Malongo, Bernard B; Mgaya, Charles S; Bradley, John; Moore, Jason D; Tenu, Filemoni F; Moore, Sarah J

    2016-03-15

    Experimental huts are simplified, standardized representations of human habitations that provide model systems to evaluate insecticides used in indoor residual spray (IRS) and long-lasting insecticidal nets (LLINs) to kill disease vectors. Hut volume, construction materials and the size of entry points affect mosquito entry and exposure to insecticides. The performance of three standard experimental hut designs was compared to evaluate insecticide used in LLINs. Field studies were conducted at the World Health Organization Pesticide Evaluation Scheme (WHOPES) testing site in Muheza, Tanzania. Three East African huts, three West African huts, and three Ifakara huts were compared using Olyset® and Permanet 2.0® versus untreated nets as a control. Outcomes measured were mortality, induced exophily (exit rate), blood feeding inhibition and deterrence (entry rate). Data were analysed using linear mixed effect regression and Bland-Altman comparison of paired differences. A total of 613 mosquitoes were collected in 36 nights, of which 13.5% were Anopheles gambiae sensu lato, 21% Anopheles funestus sensu stricto, 38% Mansonia species and 28% Culex species. Ifakara huts caught three times more mosquitoes than the East African and West African huts, while the West African huts caught significantly fewer mosquitoes than the other hut types. Mosquito densities were low, and very little mosquito exit was measured in any of the huts, with no measurable exophily caused by the use of either Olyset or Permanet. When the huts were directly compared, the West African huts measured greater exophily than other huts. As unholed nets were used in the experiments and few mosquitoes were captured, it was not possible to measure differences in feeding success either between treatments or hut types. In each of the hut types there was increased mortality when Permanet or Olyset was present inside the huts compared to the control; however, this did not vary between the hut types. 
Both East African

  17. Neural Network Research on Tilt-Rotor Hover Performance

    NASA Technical Reports Server (NTRS)

    Kozapalli, S.; Warmbrodt, William (Technical Monitor)

    1997-01-01

    Results from a neural network study on XV-15 tilt-rotor hover performance are presented. Two XV-15 test data bases, acquired during separate tests conducted in the National Full-Scale Aerodynamics Complex at NASA Ames, were used in this study. The two XV-15 test data bases used were: the 1997 80- by 120-Foot Wind Tunnel test data base and the 1984 Outdoor Aerodynamic Research Facility (OARF) test data base. The objectives associated with the 80- by 120-Foot Wind Tunnel test data were to conduct data quality checks, obtain neural network representations, and demonstrate sensitivity to test conditions. The objectives associated with the OARF outdoor test data were to obtain a 'zero wind' neural network representation, and formulate and implement a neural network based wind correction procedure. An additional objective, common to both test data bases, was to conduct error comparisons. The conclusions from the present study were as follows: Compared to measured parameters (i.e., collective pitch angle), derived parameters (i.e., thrust coefficient) were found preferable as neural network inputs. For the 80- by 120-Foot Wind Tunnel test data base, the neural network based figure of merit representations did not extrapolate rotor stall. Using the OARF outdoor test database, a neural network based wind correction procedure was formulated and successfully implemented. Based on an error comparison, the present neural network based wind correction procedure was found to be more accurate compared to the existing, momentum-theory-based wind correction procedure. Also, a separate error comparison showed that the 80- by 120-Foot Wind Tunnel hover performance data were comparable to non-wind-corrected outdoor test data.

  18. Effect of traffic self-similarity on network performance

    NASA Astrophysics Data System (ADS)

    Park, Kihong; Kim, Gitae; Crovella, Mark E.

    1997-10-01

    Recent measurements of network traffic have shown that self-similarity is a ubiquitous phenomenon, present in both local area and wide area traffic traces. In previous work, we have shown a simple, robust application-layer causal mechanism of traffic self-similarity, namely, the transfer of files in a network system where the file size distributions are heavy-tailed. In this paper, we study the effect of scale-invariant burstiness on network performance when the functionality of the transport layer and the interaction of traffic sources sharing bounded network resources is incorporated. First, we show that transport layer mechanisms are important factors in translating the application layer causality into link traffic self-similarity. Network performance as captured by throughput, packet loss rate, and packet retransmission rate degrades gradually with increased heavy-tailedness, while queueing delay, response time, and fairness deteriorate more drastically. The degree to which heavy-tailedness affects self-similarity is determined by how well congestion control is able to shape a source traffic into an on-average constant output stream while conserving information. Second, we show that increasing network resources such as link bandwidth and buffer capacity results in a superlinear improvement in performance. When large file transfers occur with nonnegligible probability, the incremental improvement in throughput achieved for large buffer sizes is accompanied by long queueing delays vis-à-vis the case when the file size distribution is not heavy-tailed. Buffer utilization continues to remain at a high level, implying that further improvement in throughput is only achieved at the expense of a disproportionate increase in queueing delay. A similar trade-off relationship exists between queueing delay and packet loss rate, the curvature of the performance curve being highly sensitive to the degree of self-similarity. Third, we investigate the effect of congestion
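The heavy-tailed file-size mechanism described above can be illustrated by sampling Pareto-distributed sizes: with tail index below 2 the variance is infinite, and a small fraction of transfers dominates the load. The tail index and scale here are illustrative, not fitted to any trace:

```python
import numpy as np

# Heavy-tailed file sizes: classical Pareto with tail index alpha < 2
# (infinite variance), so a few huge transfers dominate the traffic.
rng = np.random.default_rng(42)
alpha = 1.2
sizes = rng.pareto(alpha, size=100_000) + 1.0   # Pareto(xm=1) file sizes

top1 = np.sort(sizes)[-1000:]                   # the largest 1% of files
share = float(top1.sum() / sizes.sum())
print(f"top 1% of transfers carry {share:.0%} of the bytes")
```

It is exactly this concentration of bytes in rare, very long transfers that produces the scale-invariant burstiness in the aggregate link traffic.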

  19. A new electromagnetic induction sensor using Vector Network Analyzer technology for accurate characterisation of soil electrical properties

    NASA Astrophysics Data System (ADS)

    André, F.; Lambot, S.; Moghadas, D.; Vereecken, H.

    2009-04-01

    Electromagnetic induction (EMI) has been widely used since the 1970s to retrieve soil physico-chemical properties through the measurement of soil electrical conductivity. Soil electrical conductivity integrates several factors, mainly soil water content, salinity, clay content and temperature, and to a lesser extent, mineralogy, porosity, structure, cation exchange capacity, organic matter and bulk density. EMI has been shown to be useful for a wide range of environmental applications. EMI is non-invasive and individual measurements are almost instantaneous, which makes it possible to characterise large areas with fine spatial and/or temporal resolutions. Nevertheless, current EMI systems present some limitations. First, EMI usually operates at a single or a limited number of fixed frequencies, which limits the information that can be retrieved from the subsurface. In addition, the calibration of existing commercial sensors is generally rather empirical and not accurate, which reduces the reliability of the data. Finally, the data processing techniques that are used to retrieve the soil electrical properties from EMI data often rely on strong simplifying assumptions with respect to wave propagation through the antenna-air-soil system. Performing EMI measurements with Vector Network Analyzer (VNA) technology would overcome part of these limitations, allowing one to work simultaneously at a wide range of frequencies and to readily perform robust calibrations defined as an international standard. On that basis, we have developed a new algorithm for off-ground, zero-offset, frequency-domain EMI based on full-waveform inverse modelling. The EMI forward model is based on a linear system of complex transfer functions describing the loop antenna and its interactions with the soil, and an exact solution of Maxwell's equations for wave propagation in three-dimensional multilayered media. The approach has been validated in laboratory conditions for measurements at different

  20. Investigation into the relationship between the gravity vector and the flow vector to improve performance in two-phase continuous flow biodiesel reactor.

    PubMed

    Unker, S A; Boucher, M B; Hawley, K R; Midgette, A A; Stuart, J D; Parnas, R S

    2010-10-01

    The following study analyzes the performance of a continuous flow biodiesel reactor/separator. The reactor achieves high conversion of vegetable oil triglycerides to biodiesel while simultaneously separating the co-product glycerol. The influence of the flow direction, relative to the gravity vector, on reactor performance was measured. Reactor performance was assessed by both the conversion of vegetable oil triglycerides to biodiesel and the efficiency of separating the co-product glycerol. At slightly elevated temperatures of 40-50 degrees C, an overall feed of 1.2 L/min, a 6:1 molar ratio of methanol to vegetable oil triglycerides, and a 1-1.3 wt.% potassium hydroxide catalyst loading, the reactor converted more than 96% of the pretreated waste vegetable oil to biodiesel. The reactor also separated 36-95% of the glycerol that was produced. Tilting the reactor away from the vertical direction produced a large increase in glycerol separation efficiency and only a small decrease in conversion.

  1. Distribution and larval habitat characterization of Anopheles moucheti, Anopheles nili, and other malaria vectors in river networks of southern Cameroon.

    PubMed

    Antonio-Nkondjio, Christophe; Ndo, Cyrille; Costantini, Carlo; Awono-Ambene, Parfait; Fontenille, Didier; Simard, Frédéric

    2009-12-01

    Despite their importance as malaria vectors, little is known of the bionomics of Anopheles nili and Anopheles moucheti. Larval collections from 24 sites along the dense hydrographic network of southern Cameroon were examined to assess key ecological factors associated with the distribution of these mosquitoes in river networks. Morphological identification of third- and fourth-instar larvae under the microscope showed that 47.6% of the larvae belonged to An. nili and 22.6% to An. moucheti. Five variables were significantly associated with species distribution: the flow regime of the river (lotic or lentic), light exposure (sunny or shady), the presence or absence of vegetation, temperature, and the presence or absence of debris. Canonical correspondence analysis indicated that lotic rivers exposed to light, with vegetation or debris, were the best predictors of An. nili larval abundance, whereas An. moucheti and An. ovengensis were strongly associated with lentic rivers with low temperature and the presence of Pistia. The distribution of An. nili and An. moucheti along river systems across southern Cameroon was highly correlated with environmental variables. An. nili behaves as a generalist species adapted to exploiting a variety of environmental conditions, whereas An. moucheti, Anopheles ovengensis and Anopheles carnevalei appear to be specialist forest mosquitoes.

  2. Integrated healthcare networks' performance: a growth curve modeling approach.

    PubMed

    Wan, Thomas T H; Wang, Bill B L

    2003-05-01

    This study examines the effects of integration on the performance ratings of the top 100 integrated healthcare networks (IHNs) in the United States. A strategic-contingency theory is used to identify the relationship of IHNs' performance to their structural and operational characteristics and integration strategies. To create a database for the panel study, the top 100 IHNs selected by the SMG Marketing Group in 1998 were followed up in 1999 and 2000. The data were merged with the Dorenfest data on information system integration. A growth curve model was developed and validated by the Mplus statistical program. Factors influencing the top 100 IHNs' performance in 1998 and their subsequent rankings in the consecutive years were analyzed. IHNs' initial performance scores were positively influenced by network size, number of affiliated physicians and profit margin, and were negatively associated with average length of stay and technical efficiency. The continuing high performance, judged by maintaining higher performance scores, tended to be enhanced by the use of more managerial or executive decision-support systems. Future studies should include time-varying operational indicators to serve as predictors of network performance.

  3. Naturally occurring singleton residues in AAV capsid impact vector performance and illustrate structural constraints.

    PubMed

    Vandenberghe, L H; Breous, E; Nam, H-J; Gao, G; Xiao, R; Sandhu, A; Johnston, J; Debyser, Z; Agbandje-McKenna, M; Wilson, J M

    2009-12-01

    Vectors based on the adeno-associated virus (AAV) are attractive and versatile vehicles for in vivo gene transfer. The virus capsid is the primary interface with the cell and defines many pharmacological, immunological and molecular properties. Determinants of these interactions are often restricted to a limited number of capsid amino acids. In this study, a portfolio of novel AAV vectors was developed after a structure-function analysis of naturally occurring AAV capsid isolates. Singletons, particular residues on the AAV capsid that were variable in otherwise conserved amino acid positions, were found to affect the vector's ability to be manufactured or to transduce. Data for those residues that mapped to monomer-monomer interface regions on the particle structure suggested a role in particle assembly. Changing singleton residues to the conserved amino acid rescued many isolates that were defective upon initial isolation. This led to the development of an AAV vector portfolio that encompasses six different clades and three other distinct AAV niches. Evaluation of the in vivo gene transfer efficiency of this portfolio after intravenous and intramuscular administration highlighted a clade-specific tropism. These studies further the design and selection of AAV capsids for gene therapy applications.

  4. Performance verification of network function virtualization in software defined optical transport networks

    NASA Astrophysics Data System (ADS)

    Zhao, Yongli; Hu, Liyazhou; Wang, Wei; Li, Yajie; Zhang, Jie

    2017-01-01

    With the continuous opening of resource acquisition and application, a large variety of network hardware appliances are deployed as communication infrastructure. Launching a new network application often implies replacing obsolete devices and providing the space and power to accommodate new ones, which increases energy and capital investment. Network function virtualization (NFV) aims to address these problems by consolidating many types of network equipment onto industry-standard elements such as servers, switches and storage. Many types of IT resources have been deployed to run Virtual Network Functions (vNFs), such as virtual switches and routers. How to deploy NFV in optical transport networks is therefore a problem of great importance. This paper focuses on this problem and gives an implementation architecture for NFV-enabled optical transport networks based on Software Defined Optical Networking (SDON), with the procedure for vNF call and return. In particular, an implementation solution for an NFV-enabled optical transport node is designed, and a parallel processing method for NFV-enabled OTN nodes is proposed. To verify the performance of NFV-enabled SDON, the protocol interaction procedures of control function virtualization and node function virtualization are demonstrated on an SDON testbed. Finally, the benefits and challenges of the parallel processing method for NFV-enabled OTN nodes are simulated and analyzed.
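
    The abstract does not spell out the vNF call/return procedure, so the sketch below only illustrates the parallel-processing idea: a hypothetical vNF (names and packet format invented) applied to a batch of packets through a thread pool, the way an NFV-enabled node might fan work out to parallel resources.

```python
from concurrent.futures import ThreadPoolExecutor

def vnf_firewall(pkt):
    # Hypothetical vNF: drop telnet traffic, pass everything else.
    return pkt if pkt["port"] != 23 else None

def run_parallel(vnf, packets, workers=4):
    # Fan a packet batch out to a pool of workers running one vNF.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return [p for p in pool.map(vnf, packets) if p is not None]

batch = [{"port": 80}, {"port": 23}, {"port": 443}]
passed = run_parallel(vnf_firewall, batch)  # the port-23 packet is dropped
```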

  5. High Performance Computing and Networking for Science--Background Paper.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    The Office of Technology Assessment is conducting an assessment of the effects of new information technologies--including high performance computing, data networking, and mass data archiving--on research and development. This paper offers a view of the issues and their implications for current discussions about Federal supercomputer initiatives…

  6. Competitive Learning Neural Network Ensemble Weighted by Predicted Performance

    ERIC Educational Resources Information Center

    Ye, Qiang

    2010-01-01

    Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic for Neural Network classifiers. Introducing a secondary output unit that receives different…

  7. Student Performance Assessment Using Bayesian Network and Web Portfolios.

    ERIC Educational Resources Information Center

    Liu, Chen-Chung; Chen, Gwo-Dong; Wang, Chin-Yeh; Lu, Ching-Fang

    2002-01-01

    Proposes a novel methodology that employs Bayesian network software to assist teachers in efficiently deriving and utilizing the student model of activity performance from Web portfolios online. This system contains Web portfolios that record in detail students' learning activities, peer interaction, and knowledge progress. (AEF)

  8. The Influence of Social Networks on High School Students' Performance

    ERIC Educational Resources Information Center

    Abu-Shanab, Emad; Al-Tarawneh, Heyam

    2015-01-01

    Social networks are becoming an integral part of people's lives. Students spend much time on social media and are considered the largest category of users of such applications. This study tries to explore the influence of social media use, and especially Facebook, on high school students' performance. The study used the GPA of students in four…

  9. Metrics for evaluating performance and uncertainty of Bayesian network models

    Treesearch

    Bruce G. Marcot

    2012-01-01

    This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...

  12. Comparison of Support Vector Machine, Neural Network, and CART Algorithms for the Land-Cover Classification Using Limited Training Data Points

    EPA Science Inventory

    Support vector machine (SVM) was applied for land-cover characterization using MODIS time-series data. Classification performance was examined with respect to training sample size, sample variability, and landscape homogeneity (purity). The results were compared to two convention...

  14. Performance Evaluation in Network-Based Parallel Computing

    NASA Technical Reports Server (NTRS)

    Dezhgosha, Kamyar

    1996-01-01

    Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require the use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing network of Sun SPARCs with PVM (Parallel Virtual Machine), a software system for linking clusters of machines. Second, a set of three basic applications was selected: a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the restricting factor in performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps), which will allow us to extend our study to newer applications, performance metrics, and configurations.
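
    The speedup metric mentioned above has a standard definition; a small sketch (the timings are invented for illustration):

```python
def speedup(t_serial, t_parallel):
    # Ratio of serial elapsed time to parallel elapsed time.
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    # Fraction of ideal linear speedup actually achieved; the
    # communication latency noted above is what typically erodes it.
    return speedup(t_serial, t_parallel) / n_procs

# Hypothetical timings: a job taking 120 s serially and 40 s on four
# networked workstations achieves a speedup of 3.0 (efficiency 0.75).
```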

  15. Performance analysis of wireless sensor networks in geophysical sensing applications

    NASA Astrophysics Data System (ADS)

    Uligere Narasimhamurthy, Adithya

    Performance is an important criterion to consider before switching from a wired network to a wireless sensing network. Performance is especially important in geophysical sensing, where the quality of the sensing system is measured by the precision of the acquired signal. Can a wireless sensing network maintain the same reliability and quality metrics that a wired system provides? Our work focuses on evaluating the wireless GeoMote sensor motes that were developed by previous computer science graduate students at Mines. Specifically, we conducted a set of experiments, namely WalkAway and Linear Array experiments, to characterize the performance of the wireless motes. The motes were also equipped with the Sticking Heartbeat Aperture Resynchronization Protocol (SHARP), a time synchronization protocol developed by a previous computer science graduate student at Mines. This protocol should automatically synchronize the motes' internal clocks and reduce time synchronization errors. We also collected passive data to evaluate the response of GeoMotes to various frequency components associated with seismic waves. With the data collected from these experiments, we evaluated the performance of the SHARP protocol and compared the performance of our GeoMote wireless system against the industry-standard wired seismograph system (Geometric-Geode). Using arrival time analysis and seismic velocity calculations, we set out to answer the following question: can our wireless sensing system (GeoMotes) perform similarly to a traditional wired system in a realistic scenario?

  17. Network Performance Testing for the BaBar Event Builder

    SciTech Connect

    Pavel, Tomas J

    1998-11-17

    We present an overview of the design of event building in the BABAR Online, based upon TCP/IP and commodity networking technology. BABAR is a high-rate experiment to study CP violation in asymmetric e+e− collisions. In order to validate the event-builder design, an extensive program was undertaken to test the TCP performance delivered by various machine types with both ATM OC-3 and Fast Ethernet networks. The buffering characteristics of several candidate switches were examined and found to be generally adequate for our purposes. We highlight the results of this testing and present some of the more significant findings.

  18. Performance evaluation of TCP implementations in wireless networks

    NASA Astrophysics Data System (ADS)

    ElAarag, Hala; Bassiouni, Mostafa A.

    1999-06-01

    The performance of TCP has been well tuned for traditional networks made up of wired links and stationary hosts. Mobile networks, however, differ from conventional wired computer networks and usually suffer from high bit error rates and frequent disconnections due to handoffs. In this paper, we present simulation results for the performance of various TCP implementations in the presence of a wireless link. To concentrate on the mobility and reliability aspects of the wireless connection, our simulation tests used sufficiently large buffer sizes in the fixed host and the base station of the TCP connection. The results show that the throughput of the TCP connection is largely influenced by the link-up period of the wireless link. By varying the link-up and link-down periods, it is possible to obtain better throughput at a higher disconnection probability. For example, the throughput of TCP Reno with a disconnection probability of 28.6% and a link-up period of 5 is better than the throughput with a disconnection probability of 9% and a link-up period of less than 3. The paper presents timing graphs tracing the movement of packets and acknowledgements between the fixed and mobile hosts. Dropped packets or acknowledgements shown in these graphs are the result of mobile disconnection or wireless bit errors, not buffer congestion. Unlike in wired networks, Reno TCP was found to perform better than Sack in the wireless mobile environment.

  19. Performance and optimization of support vector machines in high-energy physics classification problems

    NASA Astrophysics Data System (ADS)

    Sahin, M. Ö.; Krücker, D.; Melzer-Pellmann, I.-A.

    2016-12-01

    In this paper we promote the use of Support Vector Machines (SVM) as a machine learning tool for searches in high-energy physics. As an example of a new-physics search we discuss the popular case of Supersymmetry at the Large Hadron Collider. We demonstrate that the SVM is a valuable tool and show that an automated discovery-significance-based optimization of the SVM hyper-parameters is a highly efficient way to prepare an SVM for such applications.
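
    A toy sketch of discovery-significance-based hyper-parameter selection. The significance Z = s / sqrt(s + b) is a standard figure of merit in such searches; the grid values and the stand-in counts function below are invented, and in practice s and b would come from applying an SVM trained with each (C, gamma) to validation events.

```python
import math
from itertools import product

def significance(s, b):
    # Simple discovery significance Z = s / sqrt(s + b).
    return s / math.sqrt(s + b)

def toy_counts(C, gamma):
    # Invented stand-in for the signal/background counts selected by an
    # SVM trained with hyper-parameters (C, gamma).
    s = 90.0 * C / (C + 1.0)
    b = 400.0 * gamma
    return s, b

grid = list(product([0.1, 1.0, 10.0], [0.01, 0.1, 1.0]))
best = max(grid, key=lambda p: significance(*toy_counts(*p)))
# best -> (10.0, 0.01): largest signal yield, smallest background
```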

  20. Performance evaluation of a routing algorithm based on Hopfield Neural Network for network-on-chip

    NASA Astrophysics Data System (ADS)

    Esmaelpoor, Jamal; Ghafouri, Abdollah

    2015-12-01

    Network on chip (NoC) has emerged as a solution to overcome the growing complexity and design challenges of systems on chip. A proper routing algorithm is a key issue in NoC design. An appropriate routing method balances load across the network channels and keeps path lengths as short as possible. This work investigates the performance of a routing algorithm based on a Hopfield Neural Network. It uses dynamic programming to provide optimal paths and network monitoring in real time. The aim of this article is to analyse the possibility of using a neural network as a router. The algorithm selects the path with the lowest delay (cost) from source to destination. In other words, the path a message takes from source to destination depends on the network traffic situation at the time and is the fastest one available. The simulation results show that the proposed approach efficiently improves average delay, throughput and network congestion. At the same time, the increase in power consumption is almost negligible.
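
    For reference, the target of such a router (the lowest-delay path under current traffic conditions) can be computed classically with Dijkstra's algorithm; the sketch below does exactly that on an invented four-node topology, as a baseline, not the paper's Hopfield formulation.

```python
import heapq

def lowest_delay_path(links, src, dst):
    # Dijkstra over per-link delays: links maps a node to a list of
    # (neighbour, delay) pairs.
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in links.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Invented topology with current per-link delays
mesh = {"A": [("B", 1), ("C", 4)], "B": [("D", 2)], "C": [("D", 1)], "D": []}
# lowest_delay_path(mesh, "A", "D") -> (["A", "B", "D"], 3)
```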

  1. Satellite panicum mosaic virus coat protein enhances the performance of plant virus gene vectors.

    PubMed

    Everett, Anthany L; Scholthof, Herman B; Scholthof, Karen-Beth G

    2010-01-05

    The coat protein of satellite panicum mosaic virus (SPCP) is known to effectively protect its cognate RNA from deleterious events, and here, we tested its stabilizing potential for heterologous virus-based gene vectors in planta. In support of this, a Potato virus X (PVX) vector carrying the SPMV capsid protein (PVX-SPCP) gene was stable for at least three serial systemic passages through Nicotiana benthamiana. To test the effect of SPCP in trans, PVX-SPCP was co-inoculated onto N. benthamiana together with a Tomato bushy stunt virus (TBSV) vector carrying a green fluorescent protein (GFP) gene that normally does not support systemic GFP expression. In contrast, co-inoculation of TBSV-GFP plus PVX-SPCP resulted in GFP accumulation and concomitant green fluorescent spots in upper, non-inoculated leaves in a temperature-responsive manner. These results suggest that the multifaceted SPMV CP has intriguing effects on virus-host interactions that surface in heterologous systems.

  2. Enhanced memory performance thanks to neural network assortativity

    SciTech Connect

    Franciscis, S. de; Johnson, S.; Torres, J. J.

    2011-03-24

    The behaviour of many complex dynamical systems has been found to depend crucially on the structure of the underlying networks of interactions. An intriguing feature of empirical networks is their assortativity--i.e., the extent to which the degrees of neighbouring nodes are correlated. However, until very recently it was difficult to take this property into account analytically, most work being exclusively numerical. We get round this problem by considering ensembles of equally correlated graphs and apply this novel technique to the case of attractor neural networks. Assortativity turns out to be a key feature for memory performance in these systems - so much so that for sufficiently correlated topologies the critical temperature diverges. We predict that artificial and biological neural systems could significantly enhance their robustness to noise by developing positive correlations.
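
    Degree assortativity as used above is the Pearson correlation between the degrees found at the two ends of each edge; a minimal pure-Python sketch (the example graph is invented):

```python
def degree_assortativity(edges):
    # Pearson correlation of end-point degrees, each undirected edge
    # counted once in each direction so the measure is symmetric.
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mean = sum(xs) / n              # x and y share the same mean here
    cov = sum((x - mean) * (y - mean) for x, y in zip(xs, ys)) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return cov / var

# A star graph is maximally disassortative: the hub only touches leaves.
star = [(0, 1), (0, 2), (0, 3)]
# degree_assortativity(star) -> -1.0
```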

  3. Team Assembly Mechanisms Determine Collaboration Network Structure and Team Performance

    PubMed Central

    Guimerà, Roger; Uzzi, Brian; Spiro, Jarrett; Nunes Amaral, Luís A.

    2007-01-01

    Agents in creative enterprises are embedded in networks that inspire, support, and evaluate their work. Here, we investigate how the mechanisms by which creative teams self-assemble determine the structure of these collaboration networks. We propose a model for the self-assembly of creative teams that has its basis in three parameters: team size, the fraction of newcomers in new productions, and the tendency of incumbents to repeat previous collaborations. The model suggests that the emergence of a large connected community of practitioners can be described as a phase transition. We find that team assembly mechanisms determine both the structure of the collaboration network and team performance for teams derived from both artistic and scientific fields. PMID:15860629

  4. Evaluation of GPFS Connectivity Over High-Performance Networks

    SciTech Connect

    Srinivasan, Jay; Canon, Shane; Andrews, Matthew

    2009-02-17

    We present the results of an evaluation of new features of the latest release of IBM's GPFS filesystem (v3.2). We investigate different ways of connecting to a high-performance GPFS filesystem from a remote cluster using Infiniband (IB) and 10 Gigabit Ethernet. We also examine the performance of the GPFS filesystem with both serial and parallel I/O. Finally, we also present our recommendations for effective ways of utilizing high-bandwidth networks for high-performance I/O to parallel file systems.

  5. On the Performance of TCP Spoofing in Satellite Networks

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph; Allman, Mark

    2001-01-01

    In this paper, we analyze the performance of Transmission Control Protocol (TCP) in a network that consists of both satellite and terrestrial components. One method, proposed by outside research, to improve the performance of data transfers over satellites is to use a performance enhancing proxy often dubbed 'spoofing.' Spoofing involves the transparent splitting of a TCP connection between the source and destination by some entity within the network path. In order to analyze the impact of spoofing, we constructed a simulation suite based around the network simulator ns-2. The simulation reflects a host with a satellite connection to the Internet and allows the option to spoof connections just prior to the satellite. The methodology used in our simulation allows us to analyze spoofing over a large range of file sizes and under various congested conditions, while prior work on this topic has primarily focused on bulk transfers with no congestion. As a result of these simulations, we find that the performance of spoofing is dependent upon a number of conditions.

  6. Integrated System for Performance Monitoring of the ATLAS TDAQ Network

    NASA Astrophysics Data System (ADS)

    Octavian Savu, Dan; Al-Shabibi, Ali; Martin, Brian; Sjoen, Rune; Batraneanu, Silvia Maria; Stancu, Stefan

    2011-12-01

    The ATLAS TDAQ Network consists of three separate networks spanning four levels of the experimental building. Over 200 edge switches and 5 multi-blade chassis routers are used to interconnect 2000 processors, adding up to more than 7000 high speed interfaces. In order to substantially speed-up ad-hoc and post mortem analysis, a scalable, yet flexible, integrated system for monitoring both network statistics and environmental conditions, processor parameters and data taking characteristics was required. For successful up-to-the-minute monitoring, information from many SNMP compliant devices, independent databases and custom APIs was gathered, stored and displayed in an optimal way. Easy navigation and compact aggregation of multiple data sources were the main requirements; characteristics not found in any of the tested products, either open-source or commercial. This paper describes how performance, scalability and display issues were addressed and what challenges the project faced during development and deployment. A full set of modules, including a fast polling SNMP engine, user interfaces using latest web technologies and caching mechanisms, has been designed and developed from scratch. Over the last year the system proved to be stable and reliable, replacing the previous performance monitoring system and extending its capabilities. Currently it is operated using a precision interval of 25 seconds (the industry standard is 300 seconds). Although it was developed in order to address the needs for integrated performance monitoring of the ATLAS TDAQ network, the package can be used for monitoring any network with rigid demands of precision and scalability, exceeding normal industry standards.

  7. Design and implementation of a high performance network security processor

    NASA Astrophysics Data System (ADS)

    Wang, Haixin; Bai, Guoqiang; Chen, Hongyi

    2010-03-01

    The last few years have seen significant progress in the field of application-specific processors. One example is network security processors (NSPs), which perform various cryptographic operations specified by network security protocols and help to offload computation-intensive burdens from network processors (NPs). This article presents a high-performance NSP system architecture implementation intended to accelerate both the internet protocol security (IPSec) and secure socket layer (SSL) protocols, which are widely employed in virtual private network (VPN) and e-commerce applications. The efficient dual one-way pipelined data transfer skeleton and optimised integration scheme of the heterogeneous parallel crypto engine arrays lead to a Gbps-rate NSP that is programmable with domain-specific descriptor-based instructions. The descriptor-based control flow fragments large data packets and distributes them to the crypto engine arrays, which fully utilises the parallel computation resources and improves the overall system data throughput. A prototyping platform for this NSP design is implemented with a Xilinx XC3S5000-based FPGA chip set. Results show that the design gives a peak throughput for the IPSec ESP tunnel mode of 2.85 Gbps, with over 2100 full SSL handshakes per second, at a clock rate of 95 MHz.

  8. Parallel access alignment network with barrel switch implementation for d-ordered vector elements

    NASA Technical Reports Server (NTRS)

    Barnes, George H. (Inventor)

    1980-01-01

    An alignment network between N parallel data input ports and N parallel data outputs includes a first and a second barrel switch. The first barrel switch fed by the N parallel input ports shifts the N outputs thereof and in turn feeds the N-1 input data paths of the second barrel switch according to the relationship x = k^y modulo N, wherein x represents the output data path ordering of the first barrel switch, y represents the input data path ordering of the second barrel switch, and k equals a primitive root of the number N. The zero (0) ordered output data path of the first barrel switch is fed directly to the zero ordered output port. The N-1 output data paths of the second barrel switch are connected to the N output ports in the reverse ordering of the connections between the output data paths of the first barrel switch and the input data paths of the second barrel switch. The second switch is controlled by a value m, which in the preferred embodiment is produced at the output of a ROM addressed by the value d, wherein d represents the incremental spacing or distance between data elements to be accessed from the N input ports, and m is generated therefrom according to the relationship d = k^m modulo N.
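
    The ROM lookup can be sketched directly: pick toy values (say N = 7 with primitive root k = 3, both assumptions for illustration) and precompute the table mapping each element spacing d to the shift amount m satisfying d = k^m modulo N.

```python
N, K = 7, 3  # toy network size and a primitive root of N (assumed values)

# ROM contents: invert d = K^m mod N for every nonzero spacing d.
# Because K is a primitive root, m = 1..N-1 hits each d exactly once.
rom = {pow(K, m, N): m for m in range(1, N)}

# Addressing the ROM with a spacing d yields the barrel-switch shift m,
# e.g. d = 6 -> m = 3, since 3**3 % 7 == 6.
```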

  9. Comparison of Bayesian network and support vector machine models for two-year survival prediction in lung cancer patients treated with radiotherapy

    SciTech Connect

    Jayasurya, K.; Fung, G.; Yu, S.; Dehing-Oberije, C.; De Ruysscher, D.; Hope, A.; De Neve, W.; Lievens, Y.; Lambin, P.; Dekker, A. L. A. J.

    2010-04-15

    Purpose: Classic statistical and machine learning models such as support vector machines (SVMs) can be used to predict cancer outcome, but often perform well only if all the input variables are known, which is unlikely in the medical domain. Bayesian network (BN) models have a natural ability to reason under uncertainty and might handle missing data better. In this study, the authors hypothesize that a BN model can predict two-year survival in non-small cell lung cancer (NSCLC) patients as accurately as an SVM, but will predict survival more accurately when data are missing. Methods: A BN and an SVM model were trained on 322 inoperable NSCLC patients treated with radiotherapy in Maastricht and validated in three independent data sets of 35, 47, and 33 patients from Ghent, Leuven, and Toronto. Missing variables occurred in the data sets, with only 37, 28, and 24 patients having complete data. Results: BN structure and parameter learning identified gross tumor volume (GTV) size, performance status, and number of positive lymph nodes on a PET as prognostic factors for two-year survival. When validated in the full validation sets of Ghent, Leuven, and Toronto, the BN model had an AUC of 0.77, 0.72, and 0.70, respectively. An SVM model based on the same variables had overall worse performance (AUC 0.71, 0.68, and 0.69), especially in the Ghent set, which had the highest percentage of missing values for the important GTV size variable. When only patients with complete data were considered, the BN and SVM models performed more similarly. Conclusions: Within the limitations of this study, the hypothesis is supported that BN models are better at handling missing data than SVM models and are therefore more suitable for the medical domain. Future work should focus on improving BN performance by including more patients, more variables, and more diversity.

  10. Copercolating Networks: An Approach for Realizing High-Performance Transparent Conductors using Multicomponent Nanostructured Networks

    NASA Astrophysics Data System (ADS)

    Das, Suprem R.; Sadeque, Sajia; Jeong, Changwook; Chen, Ruiyi; Alam, Muhammad A.; Janes, David B.

    2016-06-01

    Although transparent conductive oxides such as indium tin oxide (ITO) are widely employed as transparent conducting electrodes (TCEs) for applications such as touch screens and displays, new nanostructured TCEs are of interest for future applications, including emerging transparent and flexible electronics. A number of two-dimensional networks of nanostructured elements have been reported, including metallic nanowire networks consisting of silver nanowires, metallic carbon nanotubes (m-CNTs), copper nanowires or gold nanowires, and metallic mesh structures. In these single-component systems, it has generally been difficult to achieve sheet resistances that are comparable to ITO at a given broadband optical transparency. A relatively new third category of TCEs consisting of networks of 1D-1D and 1D-2D nanocomposites (such as silver nanowires and CNTs, silver nanowires and polycrystalline graphene, silver nanowires and reduced graphene oxide) have demonstrated TCE performance comparable to, or better than, ITO. In such hybrid networks, copercolation between the two components can lead to relatively low sheet resistances at nanowire densities corresponding to high optical transmittance. This review provides an overview of reported hybrid networks, including a comparison of the performance regimes achievable with those of ITO and single-component nanostructured networks. The performance is compared to that expected from bulk thin films and analyzed in terms of the copercolation model. In addition, performance characteristics relevant for flexible and transparent applications are discussed. The new TCEs are promising, but significant work must be done to ensure earth abundance, stability, and reliability so that they can eventually replace traditional ITO-based transparent conductors.

  11. Trypanosoma cruzi reservoir—triatomine vector co-occurrence networks reveal meta-community effects by synanthropic mammals on geographic dispersal

    PubMed Central

    Valiente-Banuet, Leopoldo; Sánchez-Cordero, Víctor; Stephens, Christopher R.

    2017-01-01

    Contemporary patterns of land use and global climate change are modifying regional pools of parasite host species. The impact of host community changes on human disease risk, however, is difficult to assess due to a lack of information about zoonotic parasite host assemblages. We have used a recently developed method to infer parasite-host interactions for Chagas Disease (CD) from vector-host co-occurrence networks. Vector-host networks were constructed to analyze topological characteristics of the network and ecological traits of species’ nodes, which could provide information regarding parasite regional dispersal in Mexico. Twenty-eight triatomine species (vectors) and 396 mammal species (potential hosts) were included using a data-mining approach to develop models to infer most-likely interactions. The final network contained 1,576 links, which were analyzed to calculate centrality, connectivity, and modularity. The model predicted links of independently registered Trypanosoma cruzi hosts, which correlated with the degree of parasite-vector co-occurrence. Wiring patterns differed according to node location, while edge density was greater in Neotropical as compared to Nearctic regions. Vectors with the greatest public health importance (e.g., Triatoma dimidiata, T. barberi, T. pallidipennis, and T. longipennis) did not have stronger links with particular host species, although they had a greater frequency of significant links. In contrast, hosts classified as important based on network properties were synanthropic mammals. The latter were the most common parasite hosts and are likely bridge species between these communities, thereby integrating meta-community scenarios beneficial for long-range parasite dispersal. This was particularly true for rodents, of which >50% of species are synanthropic and more than 20% have been identified as T. cruzi hosts. In addition to predicting potential host species using the co-occurrence networks, they reveal regions with greater

  12. Study of Fe/Cr Magnetic Multilayers and Periodic Arrays of Submicron Magnetic Dots by Vector Network Analyzer Technique

    NASA Astrophysics Data System (ADS)

    Aliev, Farkhad; Francisco Sierra, Juan; Awad, Ahmad; Pryadun, Vladimir; Kakazei, Gleb

    2008-03-01

    A vector network analyzer (VNA) technique up to 8.5 GHz was applied to measure the in-plane dynamic response of Fe/Cr magnetic multilayers and of in-plane magnetized periodic arrays of Permalloy circular magnetic dots. In the antiferromagnetically coupled [Fe/Cr]n multilayers (n = 10, 20, 40), we investigated the field dependence of the acoustic resonance over a wide range of temperatures, from 300 K down to 2 K, both at low magnetic fields and close to the saturation field. FMR studies of the array of FeNi dots with a diameter of 1 micron, an aspect ratio L/R = 0.1, and a centre-to-centre distance varying from 1.2 to 2.5 microns allowed us to resolve multiple FMR resonances as a function of magnetic field. We found the main FMR linewidth to depend on the magnetic history. For magnetic fields below 300 Oe, where the magnetic vortex state forms, we observed the field dependence of the radial modes (fr > 6 GHz) to show minima close to zero magnetic field.

  13. Simulation of groundwater level variations using wavelet combined with neural network, linear regression and support vector machine

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Hadi; Rajaee, Taher

    2017-01-01

    Simulation of groundwater level (GWL) fluctuations is an important task in the management of groundwater resources. In this study, the effect of wavelet analysis on the training of the artificial neural network (ANN), multi-linear regression (MLR) and support vector regression (SVR) approaches was investigated, and the ANN, MLR and SVR models, along with the wavelet-ANN (WNN), wavelet-MLR (WLR) and wavelet-SVR (WSVR) models, were compared in simulating GWL one month ahead. The only variable used to develop the models was the monthly GWL data recorded over a period of 11 years from two wells in the Qom plain, Iran. The results showed that decomposing the GWL time series into several sub-time series substantially improved the training of the models. For both wells 1 and 2, the Meyer and Db5 wavelets produced better results than the other wavelets, which indicates that wavelet types behave similarly in similar case studies. The optimal number of delays was 6 months, which seems to be due to natural phenomena. The best WNN model, using the Meyer mother wavelet with two decomposition levels, simulated one-month-ahead GWL with RMSE values of 0.069 m and 0.154 m for wells 1 and 2, respectively. The RMSE values for the WLR model were 0.058 m and 0.111 m, and for the WSVR model 0.136 m and 0.060 m for wells 1 and 2, respectively.
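
    The preprocessing step described above (decomposing the GWL series into sub-time series before model training) can be sketched as follows. This minimal one-level decomposition uses the Haar wavelet for brevity, whereas the study found Meyer and Db5 best; the lagged sub-series (up to 6 months of delays) would then replace the raw series as inputs to the ANN/MLR/SVR models:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar decomposition of an (even-length) monthly series
    into an approximation (trend) and a detail (fluctuation) sub-series."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse transform; the Haar pair reconstructs the series exactly."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x
```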

  14. On using multiple routing metrics with destination sequenced distance vector protocol for MultiHop wireless ad hoc networks

    NASA Astrophysics Data System (ADS)

    Mehic, M.; Fazio, P.; Voznak, M.; Partila, P.; Komosny, D.; Tovarek, J.; Chmelikova, Z.

    2016-05-01

    A mobile ad hoc network is a collection of mobile nodes which communicate without a fixed backbone or centralized infrastructure. Due to the frequent mobility of nodes, routes connecting two distant nodes may change. Therefore, it is not possible to establish a priori fixed paths for message delivery through the network. Because of its importance, routing is the most studied problem in mobile ad hoc networks. In addition, if Quality of Service (QoS) is demanded, one must guarantee the QoS not only over a single hop but over an entire wireless multi-hop path, which may not be a trivial task. In turn, this requires the propagation of QoS information within the network. The key to supporting QoS reporting is QoS routing, which provides path QoS information at each source. To support QoS for real-time traffic, one needs to know not only the minimum delay on the path to the destination but also the bandwidth available on it. Throughput, end-to-end delay, and routing overhead are therefore the traditional performance metrics used to evaluate routing protocols. To obtain additional information about a link, most quality-link metrics are based on estimating link loss probabilities by broadcasting probe packets. In this paper, we address the problem of including multiple routing metrics in the existing routing packets that are broadcast through the network. We evaluate the efficiency of this approach with a modified version of the DSDV routing protocol in the ns-3 simulator.
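
    As a sketch of the idea, a DSDV table entry can carry extra link-quality metrics alongside the standard destination sequence number and hop count, and route selection can break freshness ties with a composite cost instead of plain hop count. The field names and weights below are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class RouteEntry:
    """Illustrative DSDV table entry extended with link-quality metrics
    carried in the periodic route broadcasts."""
    dest: str
    next_hop: str
    seq_num: int      # destination sequence number (route freshness)
    hop_count: int
    delay_ms: float   # end-to-end delay estimate
    loss_prob: float  # link loss probability from probe packets

def better(a: RouteEntry, b: RouteEntry, w_delay=1.0, w_loss=100.0):
    """DSDV rule: prefer the fresher sequence number; break ties with a
    weighted composite cost over hops, delay, and loss."""
    if a.seq_num != b.seq_num:
        return a if a.seq_num > b.seq_num else b
    cost = lambda r: r.hop_count + w_delay * r.delay_ms + w_loss * r.loss_prob
    return a if cost(a) <= cost(b) else b
```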

  15. Sensor Networking Testbed with IEEE 1451 Compatibility and Network Performance Monitoring

    NASA Technical Reports Server (NTRS)

    Gurkan, Deniz; Yuan, X.; Benhaddou, D.; Figueroa, F.; Morris, Jonathan

    2007-01-01

    The design and implementation of a testbed for testing and verifying IEEE 1451-compatible sensor systems with network performance monitoring is of significant importance. Measuring performance parameters and implementing decision support systems will enhance the understanding of sensor systems with plug-and-play capabilities. This paper presents the design aspects of such a testbed environment, under development at the University of Houston in collaboration with NASA Stennis Space Center - SSST (Smart Sensor System Testbed).

  16. The Algerian Seismic Network: Performance from data quality analysis

    NASA Astrophysics Data System (ADS)

    Yelles, Abdelkarim; Allili, Toufik; Alili, Azouaou

    2013-04-01

    densify the network and to enhance performance of the Algerian Digital Seismic Network.

  17. Design and Performance Analysis of Incremental Networked Predictive Control Systems.

    PubMed

    Pang, Zhong-Hua; Liu, Guo-Ping; Zhou, Donghua

    2016-06-01

    This paper is concerned with the design and performance analysis of networked control systems with network-induced delay, packet disorder, and packet dropout. Based on the incremental form of the plant input-output model and an incremental error feedback control strategy, an incremental networked predictive control (INPC) scheme is proposed to actively compensate for the round-trip time delay resulting from the above communication constraints. The output tracking performance and closed-loop stability of the resulting INPC system are considered for two cases: 1) plant-model match case and 2) plant-model mismatch case. For the former case, the INPC system can achieve the same output tracking performance and closed-loop stability as those of the corresponding local control system. For the latter case, a sufficient condition for the stability of the closed-loop INPC system is derived using the switched system theory. Furthermore, for both cases, the INPC system can achieve a zero steady-state output tracking error for step commands. Finally, both numerical simulations and practical experiments on an Internet-based servo motor system illustrate the effectiveness of the proposed method.

  18. Network Performance Measurements for NASA's Earth Observation System

    NASA Technical Reports Server (NTRS)

    Loiacono, Joe; Gormain, Andy; Smith, Jeff

    2004-01-01

    NASA's Earth Observation System (EOS) Project studies all aspects of planet Earth from space, including climate change and ocean, ice, land, and vegetation characteristics. It consists of about 20 satellite missions over a period of about a decade. Extensive collaboration is used, both with other U.S. agencies (e.g., the National Oceanic and Atmospheric Administration (NOAA), the United States Geological Survey (USGS), and the Department of Defense (DoD)) and with international agencies (e.g., the European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA)), to improve cost effectiveness and obtain otherwise unavailable data. Scientific researchers are located at research institutions worldwide, primarily government research facilities and research universities. The EOS project makes extensive use of networks to support data acquisition, data production, and data distribution. Many of these functions impose requirements on the networks, including throughput and availability. To verify that these requirements are being met, and to be proactive in recognizing problems, NASA conducts ongoing performance measurements. The purpose of this paper is to examine the techniques NASA uses to measure the performance of the networks used by EOSDIS (EOS Data and Information System) and to indicate how this performance information is used.

  19. Simulation Modeling and Performance Evaluation of Space Networks

    NASA Technical Reports Server (NTRS)

    Jennings, Esther H.; Segui, John

    2006-01-01

    In space exploration missions, the coordinated use of spacecraft as communication relays increases the efficiency of the endeavors. To conduct trade-off studies of the performance and resource usage of different communication protocols and network designs, JPL designed a comprehensive extendable tool, the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE). The design and development of MACHETE began in 2000 and is constantly evolving. Currently, MACHETE contains Consultative Committee for Space Data Systems (CCSDS) protocol standards such as Proximity-1, Advanced Orbiting Systems (AOS), Packet Telemetry/Telecommand, Space Communications Protocol Specification (SCPS), and the CCSDS File Delivery Protocol (CFDP). MACHETE uses the Aerospace Corporation's Satellite Orbital Analysis Program (SOAP) to generate the orbital geometry information and contact opportunities. Matlab scripts provide the link characteristics. At the core of MACHETE is a discrete event simulator, QualNet. Delay Tolerant Networking (DTN) is an end-to-end architecture providing communication in and/or through highly stressed networking environments. Stressed networking environments include those with intermittent connectivity, large and/or variable delays, and high bit error rates. To provide its services, the DTN protocols reside at the application layer of the constituent internets, forming a store-and-forward overlay network. The key capabilities of the bundling protocols include custody-based reliability, ability to cope with intermittent connectivity, ability to take advantage of scheduled and opportunistic connectivity, and late binding of names to addresses. In this presentation, we report on the addition of MACHETE models needed to support DTN, namely: the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance. We demonstrate the use of the DTN protocol

  20. Passive and Active Monitoring on a High Performance Research Network.

    SciTech Connect

    Matthews, Warren

    2001-05-01

    The bold network challenges described in ''Internet End-to-end Performance Monitoring for the High Energy and Nuclear Physics Community'', presented at PAM 2000, have been tackled by the intrepid administrators and engineers providing the network services. After less than a year, the BaBar collaboration has collected almost 100 million particle collision events in a database approaching 165 TB (tera = 10^12). Around 20 TB has been exported via the Internet to the BaBar regional center at IN2P3 in Lyon, France, for processing, and around 40 TB of simulated events have been imported to SLAC from Lawrence Livermore National Laboratory (LLNL). An unforeseen challenge has arisen due to recent events and heightened security concerns at DoE-funded labs. New rules and regulations suggest it is only a matter of time before many active performance measurements will no longer be possible between many sites. Yet, at the same time, the importance of understanding every aspect of the network and eradicating packet loss for high-throughput data transfers has become apparent. Work at SLAC to employ passive monitoring using netflow and OC3MON is underway, and techniques to supplement and possibly replace the active measurements are being considered. This paper will detail the special needs and traffic characterization of a remarkable research project, and how the networking hurdles have been resolved (or not!) to achieve the required high data throughput. Results from active and passive measurements will be compared, and methods for achieving high throughput and their effect on the network will be assessed, along with tools that directly measure throughput and applications used to actually transfer data.

  2. Radar target classification method with high accuracy and decision speed performance using MUSIC spectrum vectors and PCA projection

    NASA Astrophysics Data System (ADS)

    Secmen, Mustafa

    2011-10-01

    This paper presents the performance of an electromagnetic target recognition method in the resonance scattering region, which combines the pseudospectrum Multiple Signal Classification (MUSIC) algorithm with the principal component analysis (PCA) technique. The aim of the method is to classify an "unknown" target as one of the "known" targets in an aspect-independent manner. The method initially collects the late-time portion of noise-free time-scattered signals obtained from different reference aspect angles of the known targets. These signals are then used to obtain MUSIC spectra in the real frequency domain, which have super-resolution ability and noise resistance. In the final step, the PCA technique is applied to these spectra to reduce dimensionality and obtain a single feature vector per known target. In the decision stage, a noise-free or noisy scattered signal of an unknown (test) target at an unknown aspect angle is obtained. The MUSIC algorithm is then applied to this test signal, and the resulting test vector is compared with the feature vectors of the known targets one by one. Finally, the highest correlation gives the type of the test target. The method is applied to wire models of airplane targets, and it is shown to tolerate considerable noise levels even though it uses only a few reference aspect angles. Moreover, the runtime of the method for a test target is sufficiently low to make the method suitable for real-time applications.
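
    The decision stage described above (comparing the test vector against one stored feature vector per known target and choosing the highest correlation) can be sketched as follows. This is a generic nearest-correlation classifier operating on precomputed spectrum vectors, not the authors' exact implementation:

```python
import numpy as np

def correlation_score(x, y):
    """Normalized correlation between a test spectrum vector and a
    stored feature vector (both zero-meaned before comparison)."""
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.dot(xc, yc) / (np.linalg.norm(xc) * np.linalg.norm(yc)))

def classify(test_vec, feature_vecs):
    """Return the index of the known target whose feature vector
    correlates most strongly with the test vector."""
    scores = [correlation_score(test_vec, f) for f in feature_vecs]
    return int(np.argmax(scores))
```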

  3. Neural net approach to predictive vector quantization

    NASA Astrophysics Data System (ADS)

    Mohsenian, Nader; Nasrabadi, Nasser M.

    1992-11-01

    A new predictive vector quantization (PVQ) technique, capable of exploiting the nonlinear as well as the linear dependencies that exist between adjacent blocks of pixels, is introduced. Two different classes of neural nets form the components of the PVQ scheme. A multi-layer perceptron is embedded in the predictive component of the compression system; by virtue of the nonlinearity of its processing units, this network can act as a nonlinear vector predictor. The second component of the PVQ scheme vector quantizes (VQ) the residual vector formed by subtracting the output of the perceptron from the original wave-pattern. A Kohonen Self-Organizing Feature Map (KSOFM) was used as a neural network clustering algorithm to design the codebook for the VQ technique. Coding results are presented for monochrome 'still' images.

  4. Coexistence: Threat to the Performance of Heterogeneous Network

    NASA Astrophysics Data System (ADS)

    Sharma, Neetu; Kaur, Amanpreet

    2010-11-01

    Wireless technology is gaining broad acceptance as users opt for the freedom that only wireless networks can provide. Well-accepted wireless communication technologies generally operate in frequency bands that are shared among several users, often using different RF schemes. This is true in particular for WiFi, Bluetooth and, more recently, ZigBee. All three operate in the unlicensed 2.4 GHz band, also known as the ISM band, which has been key to the development of a competitive and innovative market for wireless embedded devices. But, as with any resource held in common, it is crucial that these technologies coexist peacefully to allow each user of the band to fulfill its communication goals. This has led to an increase in wireless devices intended for use in IEEE 802.11 wireless local area networks (WLANs) and wireless personal area networks (WPANs), both of which support operation in the crowded 2.4-GHz industrial, scientific and medical (ISM) band. Despite efforts made by standardization bodies to ensure smooth coexistence, it may occur that communication technologies transmitting, for instance, at very different power levels interfere with each other. In particular, it has been pointed out that ZigBee could potentially experience interference from WiFi traffic, given that while both protocols can transmit on the same channel, WiFi transmissions usually occur at a much higher power level. In this work, we considered a heterogeneous network and analyzed the impact of coexistence between IEEE 802.15.4 and IEEE 802.11b. To evaluate the performance of this network, a measurement and simulation study was conducted in the QualNet network simulator, version 5.0. The model is analyzed for different placement models or topologies, such as Random, Grid, and Uniform. Performance is analyzed on the basis of characteristics such as throughput, average jitter, and average end-to-end delay. Here, the impact of varying the antenna gain and shadowing model for this

  5. Practical Performance Analysis for Multiple Information Fusion Based Scalable Localization System Using Wireless Sensor Networks.

    PubMed

    Zhao, Yubin; Li, Xiaofan; Zhang, Sha; Meng, Tianhui; Zhang, Yiwen

    2016-08-23

    In practical localization system design, researchers need to consider several aspects to make positioning efficient and effective, e.g., the available auxiliary information, sensing devices, equipment deployment and the environment. These practical concerns translate into technical problems, e.g., sequential position state propagation, the target-anchor geometry effect, Non-line-of-sight (NLOS) identification and the related prior information. It is necessary to construct an efficient framework that can exploit multiple sources of available information and guide the system design. In this paper, we propose a scalable method to analyze system performance based on the Cramér-Rao lower bound (CRLB), which can fuse all of the information adaptively. Firstly, we use an abstract function to represent the wireless localization system model. The unknown vector of the CRLB then consists of two parts: the first part is the estimated vector, and the second part is the auxiliary vector, which helps improve the estimation accuracy. Accordingly, the Fisher information matrix is divided into two parts: the state matrix and the auxiliary matrix. Unlike a purely theoretical analysis, our CRLB can serve as a practical fundamental limit for a system that fuses multiple sources of information in a complicated environment, e.g., recursive Bayesian estimation based on the hidden Markov model, the map-matching method, and NLOS identification and mitigation methods. Thus, the theoretical results more closely approach the real case. In addition, our method is more adaptable than other CRLBs when considering more unknown important factors. We use the proposed method to analyze a wireless sensor network-based indoor localization system. The influence of hybrid LOS/NLOS channels, building layout information and the relative height differences between the target and anchors is analyzed. It is demonstrated that our method exploits all of the available information for
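
    The partition of the Fisher information matrix into an estimated (state) block and an auxiliary block can be sketched with the standard Schur-complement reduction: the effective information for the estimated parameters, after accounting for the auxiliary ones, is F_aa - F_ab F_bb^{-1} F_ba, and its inverse equals the corresponding block of the full CRLB. This is a minimal numerical sketch of that identity, not the paper's full framework:

```python
import numpy as np

def effective_fim(F, n_est):
    """Reduce a partitioned Fisher information matrix to the effective
    information for the first n_est (estimated) parameters via the
    Schur complement of the auxiliary block:
        F_eff = F_aa - F_ab @ inv(F_bb) @ F_ba
    The CRLB for the estimated vector is then inv(F_eff)."""
    F_aa = F[:n_est, :n_est]
    F_ab = F[:n_est, n_est:]
    F_ba = F[n_est:, :n_est]
    F_bb = F[n_est:, n_est:]
    return F_aa - F_ab @ np.linalg.inv(F_bb) @ F_ba
```

By the block matrix inversion identity, `inv(effective_fim(F, n))` equals the top-left n-by-n block of `inv(F)`, which is why the auxiliary parameters inflate (never shrink) the bound on the estimated ones.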

  6. Practical Performance Analysis for Multiple Information Fusion Based Scalable Localization System Using Wireless Sensor Networks

    PubMed Central

    Zhao, Yubin; Li, Xiaofan; Zhang, Sha; Meng, Tianhui; Zhang, Yiwen

    2016-01-01

    In practical localization system design, researchers need to consider several aspects to make positioning efficient and effective, e.g., the available auxiliary information, sensing devices, equipment deployment and the environment. These practical concerns translate into technical problems, e.g., sequential position state propagation, the target-anchor geometry effect, Non-line-of-sight (NLOS) identification and the related prior information. It is necessary to construct an efficient framework that can exploit multiple sources of available information and guide the system design. In this paper, we propose a scalable method to analyze system performance based on the Cramér–Rao lower bound (CRLB), which can fuse all of the information adaptively. Firstly, we use an abstract function to represent the wireless localization system model. The unknown vector of the CRLB then consists of two parts: the first part is the estimated vector, and the second part is the auxiliary vector, which helps improve the estimation accuracy. Accordingly, the Fisher information matrix is divided into two parts: the state matrix and the auxiliary matrix. Unlike a purely theoretical analysis, our CRLB can serve as a practical fundamental limit for a system that fuses multiple sources of information in a complicated environment, e.g., recursive Bayesian estimation based on the hidden Markov model, the map-matching method, and NLOS identification and mitigation methods. Thus, the theoretical results more closely approach the real case. In addition, our method is more adaptable than other CRLBs when considering more unknown important factors. We use the proposed method to analyze a wireless sensor network-based indoor localization system. The influence of hybrid LOS/NLOS channels, building layout information and the relative height differences between the target and anchors is analyzed. It is demonstrated that our method exploits all of the available information for

  7. Road safety performance indicators for the interurban road network.

    PubMed

    Yannis, George; Weijermars, Wendy; Gitelman, Victoria; Vis, Martijn; Chaziris, Antonis; Papadimitriou, Eleonora; Azevedo, Carlos Lima

    2013-11-01

    Various road safety performance indicators (SPIs) have been proposed for different road safety research areas, mainly as regards driver behaviour (e.g. seat belt use, alcohol, drugs, etc.) and vehicles (e.g. passive safety); however, no SPIs for road network design have been developed. The objective of this research is the development of an SPI for the road network, to be used as a benchmark for cross-region comparisons. The developed SPI essentially compares the existing road network to the theoretically required one, defined as one which meets some minimum requirements with respect to road safety. This paper presents a theoretical concept for the determination of this SPI as well as a translation of this theory into a practical method. The method is also applied in a number of pilot countries, namely the Netherlands, Portugal, Greece and Israel. The results show that the SPI could be efficiently calculated in all countries, despite some differences in the data sources. In general, the calculated overall SPI scores were realistic and ranged from 81 to 94%, with the exception of Greece, where the SPI was relatively lower (67%). However, the SPI should be considered a first attempt to determine the safety level of the road network. The proposed method has some limitations and could be further improved. The paper presents directions for further research to develop the SPI further.

  8. Social value of high bandwidth networks: creative performance and education.

    PubMed

    Mansell, Robin; Foresta, Don

    2016-03-06

    This paper considers limitations of existing network technologies for distributed theatrical performance in the creative arts and for symmetrical real-time interaction in online learning environments. It examines the experience of a multidisciplinary research consortium that aimed to introduce a solution to latency and other network problems experienced by users in these sectors. The solution builds on the Multicast protocol, Access Grid, an environment supported by very high bandwidth networks. The solution is intended to offer high-quality image and sound, interaction with other network platforms, maximum user control of multipoint transmissions, and open programming tools that are flexible and modifiable for specific uses. A case study is presented drawing upon an extended period of participant observation by the authors. This provides a basis for an examination of the challenges of promoting technological innovation in a multidisciplinary project. We highlight the kinds of technical advances and cultural and organizational changes that would be required to meet demanding quality standards, the way a research consortium planned to engage in experimentation and learning, and factors making it difficult to achieve an open platform that is responsive to the needs of users in the creative arts and education sectors.

  9. Performance evaluation of distributed wavelength assignment in WDM optical networks

    NASA Astrophysics Data System (ADS)

    Hashiguchi, Tomohiro; Wang, Xi; Morikawa, Hiroyuki; Aoyama, Tomonori

    2004-04-01

    In WDM wavelength-routed networks, prior to a data transfer, a call setup procedure is required to reserve a wavelength path between source-destination node pairs. A distributed approach to connection setup can achieve very high speed, while improving the reliability and reducing the implementation cost of the networks. However, along with many advantages, the distributed scheme poses several major challenges in how the management and allocation of wavelengths can be carried out efficiently. In this paper, we apply a distributed wavelength assignment algorithm named priority-based wavelength assignment (PWA), originally proposed for use in burst-switched optical networks, to the problem of reserving wavelengths in path reservation protocols in distributed-control optical networks. Instead of assigning wavelengths randomly, this approach lets each node select the "safest" wavelengths based on its history of wavelength utilization, thus preventing unnecessary future contention. The simulation results presented in this paper show that the proposed protocol can enhance the performance of the system without introducing any apparent drawbacks.
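The "safest wavelength" selection described above can be sketched as a simple argmin over a per-node contention history. The data structure and counts below are hypothetical; the actual PWA bookkeeping in the paper may differ:

```python
# Each node records how many contentions each wavelength has caused in the
# past, and reserves the currently free wavelength with the fewest of them.
def pick_wavelength(free, contention_history):
    """Return the free wavelength with the lowest recorded contention count."""
    return min(free, key=lambda w: contention_history.get(w, 0))

history = {0: 7, 1: 2, 2: 5, 3: 2}         # contentions seen per wavelength
free_now = [0, 2, 3]                        # wavelength 1 is already reserved
print(pick_wavelength(free_now, history))   # 3
```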

  10. A study on the performance comparison of metaheuristic algorithms on the learning of neural networks

    NASA Astrophysics Data System (ADS)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2017-08-01

    The learning or training process of neural networks entails finding the most optimal set of parameters, which includes translation vectors, dilation parameters, synaptic weights, and bias terms. Apart from traditional gradient-descent-based methods, metaheuristic methods can also be used for this learning purpose. Since the inception of the genetic algorithm half a century ago, the last decade witnessed the explosion of a variety of novel metaheuristic algorithms, such as the harmony search algorithm, bat algorithm, and whale optimization algorithm. Despite the proof of the no-free-lunch theorem in the discipline of optimization, a survey of the machine learning literature gives contrasting results. Some researchers report that certain metaheuristic algorithms are superior to others, whereas others argue that different metaheuristic algorithms give comparable performance. As such, this paper aims to investigate whether a certain metaheuristic algorithm will outperform the other algorithms. In this work, three metaheuristic algorithms, namely genetic algorithms, particle swarm optimization, and the harmony search algorithm, are considered. The algorithms are incorporated in the learning of neural networks and their classification results on the benchmark UCI machine learning data sets are compared. It is found that all three metaheuristic algorithms give similar and comparable performance, as captured in the average overall classification accuracy. The results corroborate the findings reported by previous researchers. Several recommendations are given, which include the need for statistical analysis to verify the results and further theoretical work to support the obtained empirical results.

  11. OPTIMAL CONFIGURATION OF A COMMAND AND CONTROL NETWORK: BALANCING PERFORMANCE AND RECONFIGURATION CONSTRAINTS

    SciTech Connect

    L. DOWELL

    1999-08-01

    The optimization of the configuration of communications and control networks is important for assuring the reliability and performance of the networks. This paper presents techniques for determining the optimal configuration for such a network in the presence of communication and connectivity constraints, as well as techniques for reconfiguration to restore connectivity to a data-fusion network following the failure of a network component.

  12. Genetic algorithm optimization in drug design QSAR: Bayesian-regularized genetic neural networks (BRGNN) and genetic algorithm-optimized support vectors machines (GA-SVM).

    PubMed

    Fernandez, Michael; Caballero, Julio; Fernandez, Leyden; Sarai, Akinori

    2011-02-01

    Many articles in "in silico" drug design have implemented genetic algorithms (GA) for feature selection, model optimization, conformational search, or docking studies. Some of these articles described GA applications to quantitative structure-activity relationship (QSAR) modeling in combination with regression and/or classification techniques. We reviewed the implementation of GA in drug design QSAR and specifically its performance in the optimization of robust mathematical models such as Bayesian-regularized artificial neural networks (BRANNs) and support vector machines (SVMs) on different drug design problems. Modeled data sets encompassed ADMET and solubility properties, cancer target inhibitors, acetylcholinesterase inhibitors, HIV-1 protease inhibitors, ion-channel and calcium entry blockers, and antiprotozoan compounds, as well as protein classes, functional, and conformational stability data. The GA-optimized predictors were often more accurate and robust than previously published models on the same data sets and explained more than 65% of data variances in validation experiments. In addition, feature selection over large pools of molecular descriptors provided insights into the structural and atomic properties ruling ligand-target interactions.

  13. A case study using support vector machines, neural networks and logistic regression in a GIS to identify wells contaminated with nitrate-N

    NASA Astrophysics Data System (ADS)

    Dixon, Barnali

    2009-09-01

    Accurate and inexpensive identification of potentially contaminated wells is critical for water resources protection and management. The objectives of this study are to 1) assess the suitability of approximation tools such as neural networks (NN) and support vector machines (SVM) integrated in a geographic information system (GIS) for identifying contaminated wells and 2) use logistic regression and feature selection methods to identify significant variables for transporting contaminants in and through the soil profile to the groundwater. Fourteen GIS-derived soil hydrogeologic and land-use parameters were used as initial inputs in this study. Well water quality data (nitrate-N) from 6,917 wells provided by the Florida Department of Environmental Protection (USA) were used as an output target class. The use of the logistic regression and feature selection methods reduced the number of input variables to nine. Receiver operating characteristic (ROC) curves were used for evaluation of these approximation tools. Results showed superior performance of the NN as compared to the SVM, especially on training data, while testing results were comparable. Feature selection did not improve accuracy; however, it helped increase the sensitivity, or true positive rate (TPR). Thus, a higher TPR was obtainable with fewer variables.
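The ROC quantities used to compare the tools reduce to counts over the confusion matrix. A self-contained sketch with hypothetical labels and predictions (1 = contaminated well), not the study's data:

```python
# True positive rate (sensitivity) and false positive rate from binary
# labels and predictions -- the two coordinates of a point on a ROC curve.
def tpr_fpr(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp / (tp + fn), fp / (fp + tn)

y_true = [1, 1, 1, 0, 0, 0, 1, 0]   # 1 = contaminated well
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
tpr, fpr = tpr_fpr(y_true, y_pred)
print(tpr, fpr)  # 0.75 0.25
```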

  14. On the Classification Performance of TAN and General Bayesian Networks

    NASA Astrophysics Data System (ADS)

    Madden, Michael G.

    Over a decade ago, Friedman et al. introduced the Tree Augmented Naïve Bayes (TAN) classifier, with experiments indicating that it significantly outperformed Naïve Bayes (NB) in terms of classification accuracy, whereas general Bayesian network (GBN) classifiers performed no better than NB. This paper challenges those claims, using a careful experimental analysis to show that GBN classifiers significantly outperform NB on the datasets analyzed, and are comparable to TAN in performance. It is found that the poor performance reported by Friedman et al. is not attributable to the GBN per se, but rather to their use of simple empirical frequencies to estimate GBN parameters, whereas basic parameter smoothing (used in their TAN analyses but not their GBN analyses) improves GBN performance significantly. It is concluded that, while GBN classifiers may have some limitations, they deserve greater attention, particularly in domains where insight into classification decisions, as well as good accuracy, is required.
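The parameter-estimation point is easy to see in miniature: a raw empirical frequency assigns zero probability to any value unseen under a given parent state, which zeroes out every joint probability containing it, while additive (Laplace-style) smoothing avoids this. A sketch with hypothetical counts; `alpha = 1` is an illustrative choice, not necessarily the smoothing used in the paper:

```python
# Conditional probability table entry P(value | parent state) estimated
# with additive (Laplace) smoothing over n_values possible values.
def cpt_estimate(counts, n_values, alpha=1.0):
    total = sum(counts.values())
    return {v: (counts.get(v, 0) + alpha) / (total + alpha * n_values)
            for v in range(n_values)}

counts = {0: 8, 1: 2}          # value 2 never observed for this parent state
p_raw = {v: counts.get(v, 0) / 10 for v in range(3)}   # empirical frequencies
p_smooth = cpt_estimate(counts, n_values=3)

print(p_raw[2])               # 0.0  -- kills any joint probability using it
print(round(p_smooth[2], 3))  # 0.077
```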

  15. Comparative performance of imagicides on Anopheles stephensi, main malaria vector in a malarious area, southern Iran.

    PubMed

    Abai, M R; Mehravaran, A; Vatandoost, H; Oshaghi, M A; Javadian, E; Mashayekhi, M; Mosleminia, A; Piyazak, N; Edallat, H; Mohtarami, F; Jabbari, H; Rafi, F

    2008-12-01

    Jiroft district has a subtropical climate and is prone to seasonal malaria transmission, with an annual parasite index (API) of 4.2 per 1000 in 2006. Anopheles stephensi Liston is the dominant malaria vector. The monitoring of insecticide susceptibility and irritability was conducted using discriminative doses as described by the WHO. IV instar larvae were collected from different larval breeding places, transported to a temporary insectary and fed with Bemax; 2-3-day-old emerged, sugar-fed adults were then used for susceptibility and irritability tests employing WHO methods and kits for organochlorine (OC) and pyrethroid (PY) insecticides. Mortality rates of the field strain of An. stephensi were 91.3 +/- 0.14% and 90 +/- 0.47% for DDT and dieldrin, respectively, at a one-hour exposure time, but the strain was susceptible to all pyrethroids tested. The average number of take-offs per min per adult was 2.09 +/- 0.13 for DDT, 0.581 +/- 0.05 for dieldrin, 1.85 +/- 0.08 for permethrin, 1.87 +/- 0.21 for lambda-cyhalothrin, 1.53 +/- 0.13 for cyfluthrin, and 1.23 +/- 0.1 for deltamethrin. Currently, deltamethrin is being used for indoor residual spraying against malaria vectors in the endemic areas of Iran. The findings revealed that the main malaria vector species is susceptible to all pyrethroids, including deltamethrin, permethrin, cyfluthrin and lambda-cyhalothrin, but is tolerant to DDT and dieldrin. These findings coincide with the results of previous studies carried out during 1957-61 in the same area. Irritability tests with OC and PY insecticides revealed a moderate level of irritability to DDT compared to pyrethroids and dieldrin. Possible cross-resistance between OC and PY insecticides should be taken into consideration in the malaria control programme.

  16. Digitally controlled high-performance dc SQUID readout electronics for a 304-channel vector magnetometer

    NASA Astrophysics Data System (ADS)

    Bechstein, S.; Petsche, F.; Scheiner, M.; Drung, D.; Thiel, F.; Schnabel, A.; Schurig, Th

    2006-06-01

    Recently, we have developed a family of dc superconducting quantum interference device (SQUID) readout electronics for several applications. These electronics comprise a low-noise preamplifier followed by an integrator, and an analog SQUID bias circuit. A highly compact low-power version with a flux-locked loop bandwidth of 0.3 MHz and a white noise level of 1 nV/√Hz was specially designed for a 304-channel low-Tc dc SQUID vector magnetometer, intended to operate in the new Berlin Magnetically Shielded Room (BMSR-2). In order to minimize the space needed to mount the electronics on top of the dewar and to minimize the power consumption, we have integrated four electronics channels on one 3 cm × 10 cm board. Furthermore, we embedded the analog components of these four channels into a digitally controlled system including an in-system programmable microcontroller. Four of these integrated boards were combined into one module with a size of 4 cm × 4 cm × 16 cm. Nineteen of these modules were implemented, resulting in a total power consumption of about 61 W. To initialize the 304 channels and to service the system, we have developed software tools running on a laptop computer. By means of these software tools, the microcontrollers are fed with all required data such as the working points, the characteristic parameters of the sensors (noise, voltage swing), and the sensor position inside the vector magnetometer system. In this paper, the developed electronics, including the software tools, are described, and first results are presented.

  17. Mean-square exponential input-to-state stability of delayed Cohen-Grossberg neural networks with Markovian switching based on vector Lyapunov functions.

    PubMed

    Li, Zhihong; Liu, Lei; Zhu, Quanxin

    2016-12-01

    This paper studies the mean-square exponential input-to-state stability of delayed Cohen-Grossberg neural networks with Markovian switching. By using vector Lyapunov functions and properties of M-matrices, two generalized Halanay inequalities are established. By means of these generalized Halanay inequalities, sufficient conditions are obtained which ensure the exponential input-to-state stability of delayed Cohen-Grossberg neural networks with Markovian switching. Two numerical examples are given to illustrate the efficiency of the derived results.

  18. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection.

    PubMed

    Delaney, Declan T; O'Hare, Gregory M P

    2016-12-01

    No single network solution for Internet of Things (IoT) networks can provide the required level of Quality of Service (QoS) for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs the best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions and aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared and offers a framework to achieve this within IoT networks.
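The selection step the framework performs can be sketched as follows; the solution names, the `pdr` (packet delivery ratio) metric stub, and the linear models are all hypothetical placeholders, not the article's actual performance models:

```python
# Given per-solution performance models (stubbed here as functions of the
# environment), pick the solution predicted to do best on the chosen metric.
def select_solution(models, environment, metric="pdr"):
    predictions = {name: model(environment)[metric]
                   for name, model in models.items()}
    return max(predictions, key=predictions.get)

models = {
    "ctp": lambda env: {"pdr": 0.91 - 0.02 * env["density"]},
    "rpl": lambda env: {"pdr": 0.85 + 0.01 * env["density"]},
}
print(select_solution(models, {"density": 4}))  # rpl
```

In the denser deployment the second model is predicted to deliver more packets, so it is chosen; in a sparse one the ranking flips.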

  19. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection †

    PubMed Central

    Delaney, Declan T.; O’Hare, Gregory M. P.

    2016-01-01

    No single network solution for Internet of Things (IoT) networks can provide the required level of Quality of Service (QoS) for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs the best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions and aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared and offers a framework to achieve this within IoT networks. PMID:27916929

  20. The application of Nonlinear Local Lyapunov Vectors to the Zebiak-Cane Model and their performance in the Ensemble Prediction

    NASA Astrophysics Data System (ADS)

    Hou, Zhaolu; Li, Jianping; Ding, Ruiqiang; Feng, Jie

    2017-04-01

    Nonlinear local Lyapunov vectors (NLLVs) have been developed to indicate orthogonal directions in phase space with different error growth rates. Compared to breeding vectors (BVs), NLLVs can span the fast-growing perturbation subspace efficiently and may capture more components of the analysis errors than BVs in a nonlinear dynamical system. Here, NLLVs are employed in the Zebiak-Cane (ZC) atmosphere-ocean coupled model and represent a nonlinear, finite-time extension of the local Lyapunov vectors of the ZC model. The statistical properties of NLLVs are not very sensitive to the choice of the breeding parameter. However, the non-leading NLLVs have some randomness, which increases the diversity of the NLLVs. Not only the leading NLLV but also the non-leading NLLVs are flow-dependent and related to the background ENSO evolution of the ZC model in terms of spatial structure and error growth rate. The non-leading NLLVs are also instability directions related to the ENSO process in the ZC model. Owing to the non-leading NLLVs, the subspace of the first few NLLVs can describe the analysis error better than that of the same number of BVs in the ZC model. NLLVs are applied as initial ensemble perturbations to the ensemble prediction of ENSO, and their performance is systematically compared to that of the random perturbation (RP) technique and the BV method in a perfect-model environment. The results demonstrate that the RP technique has the worst performance and the NLLV method is the best in the ensemble forecasts. In particular, the NLLV technique can reduce the "spring barrier" for ENSO prediction further than the other ensemble methods.

  1. High-performance, scalable optical network-on-chip architectures

    NASA Astrophysics Data System (ADS)

    Tan, Xianfang

    The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic wired electronic networks-on-chip (NoC) have become a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm to electronic NoC, with the benefits of optical signaling such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures, and the contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed. A method for developing a GWOR of any size is introduced. GWOR is a scalable non-blocking ONoC architecture with a simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. A redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of the GWOR unchanged by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses fewer electronic routers and links than its electronic BFT-based NoC counterpart. It takes the advantages of

  2. Performance characteristics of omnidirectional antennas for spacecraft using NASA networks

    NASA Technical Reports Server (NTRS)

    Hilliard, Lawrence M.

    1987-01-01

    Described are the performance capabilities and critical elements of the shaped omni antenna developed for NASA for space users of NASA networks. The shaped omni is designed to be operated in tandem for virtually omnidirectional coverage and uniform gain free of spacecraft interference. These antennas are ideal for low-gain data requirements and for emergency backup, deployment, and retrieval of higher-gain RF systems. Other omnidirectional antennas that have flown in space are described in the final section. A performance summary for the shaped omni is in the Appendix. This document introduces organizations and projects to shaped omni applications for NASA's space use. Coverage, gain, weight, power, implementation, and other performance information for satisfying a wide range of data requirements are included.

  3. Vector Reflectometry in a Beam Waveguide

    NASA Technical Reports Server (NTRS)

    Eimer, J. R.; Bennett, C. L.; Chuss, D. T.; Wollack, E. J.

    2011-01-01

    We present a one-port calibration technique for characterization of beam waveguide components with a vector network analyzer. This technique involves using a set of known delays to separate the responses of the instrument and the device under test. We demonstrate this technique by measuring the reflected performance of a millimeter-wave variable-delay polarization modulator.

  4. Network and User-Perceived Performance of Web Page Retrievals

    NASA Technical Reports Server (NTRS)

    Kruse, Hans; Allman, Mark; Mallasch, Paul

    1998-01-01

    The development of the HTTP protocol has been driven by the need to improve the network performance of the protocol by allowing the efficient retrieval of multiple parts of a web page without the need for multiple simultaneous TCP connections between a client and a server. We suggest that the retrieval of multiple page elements sequentially over a single TCP connection may result in a degradation of the perceived performance experienced by the user. We attempt to quantify this perceived degradation through the use of a model which combines a web retrieval simulation and an analytical model of TCP operation. Starting with the current HTTP/1.1 specification, we first suggest a client-side heuristic to improve the perceived transfer performance. We show that the perceived speed of the page retrieval can be increased without sacrificing data transfer efficiency. We then propose a new client/server extension to the HTTP/1.1 protocol to allow for the interleaving of page element retrievals. We finally address the issue of the display of advertisements on web pages, and in particular suggest a number of mechanisms which can make efficient use of IP multicast to send advertisements to a number of clients within the same network.

  5. Protein interaction networks at the host-microbe interface in Diaphorina citri, the insect vector of the citrus greening pathogen

    USDA-ARS?s Scientific Manuscript database

    The Asian citrus psyllid (Diaphorina citri) is the insect vector responsible for the worldwide spread of Candidatus Liberibacter asiaticus, the bacterial pathogen associated with citrus greening disease. Developmental changes in the insect vector impact pathogen transmission, such that D. citri tra...

  6. High-Performance, Semi-Interpenetrating Polymer Network

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H.; Lowther, Sharon E.; Smith, Janice Y.; Cannon, Michelle S.; Whitehead, Fred M.; Ely, Robert M.

    1992-01-01

    High-performance polymer made by new synthesis in which one or more easy-to-process, but brittle, thermosetting polyimides combined with one or more tough, but difficult-to-process, linear thermoplastics to yield semi-interpenetrating polymer network (semi-IPN) having combination of easy processability and high tolerance to damage. Two commercially available resins combined to form tough, semi-IPN called "LaRC-RP49." Displays improvements in toughness and resistance to microcracking. LaRC-RP49 has potential as high-temperature matrix resin, adhesive, and molding resin. Useful in aerospace, automotive, and electronic industries.

  7. High performance packet switches for broadband satellite networks

    NASA Astrophysics Data System (ADS)

    Xu, Liang; Luo, Fengguang; Luo, Zhixiang

    2005-11-01

    Buffered crossbar switches are now becoming very attractive for high-performance packet switches. An architecture that combines the VOQ architecture and internal buffers can eliminate head-of-line (HOL) blocking problems and reduce output contention. These architectural advantages and the internal distributed arbitration suit broadband satellite networks very well. We propose a new scheduling scheme named Rate Durative (RD), in which a VOQ is served continuously while its priority level is taken into account under certain rules. Our scheme was shown to handle traffic more efficiently than previous schemes. In addition, this scheduling scheme also supports QoS very well.

  8. Deploying optical performance monitoring in TeliaSonera's network

    NASA Astrophysics Data System (ADS)

    Svensson, Torbjorn K.; Karlsson, Per-Olov E.

    2004-09-01

    This paper reports on the first steps taken by TeliaSonera towards deploying optical performance monitoring (OPM) in the company's transport network, in order to assure increasingly reliable communications on the physical layer. The big leap, a world-wide deployment of OPM, still awaits a breakthrough. Very obvious benefits from using OPM are required in order to change this stalemate. Reasons may be the anaemic economy of many telecom operators, shareholders' pushing for short-term payback, and reluctance to add complexity and to integrate a system management. Technically, legacy digital systems already have a proven ability of monitoring, so adding OPM to the dense wavelength division multiplexed (DWDM) systems in operation should be judged with care. Duly installed, today's DWDM systems do their job well, owing to rigorous rules for link design and a prosperous power budget, a power management inherent to the system, and reliable supplier support. So what may bring this stalemate to an end? A growing number of applications of OPM, for enhancing network operation and maintenance and enabling new customer services, will most certainly bring momentum to a change. The first employment of OPM in TeliaSonera's network is launched this year, 2004. Preparedness for future OPM-dependent services and transport technologies will thereby be ensured.

  9. Supporting Proactive Application Event Notification to Improve Sensor Network Performance

    NASA Astrophysics Data System (ADS)

    Merlin, Christophe J.; Heinzelman, Wendi B.

    As wireless sensor networks gain in popularity, many deployments are posing new challenges due to their diverse topologies and resource constraints. Previous work has shown the advantage of adapting protocols based on current network conditions (e.g., link status, neighbor status), in order to provide the best service in data transport. Protocols can similarly benefit from adaptation based on current application conditions. In particular, if proactively informed of the status of active queries in the network, protocols can adjust their behavior accordingly. In this paper, we propose a novel approach to provide such proactive application event notification to all interested protocols in the stack. Specifically, we use the existing interfaces and event signaling structure provided by the X-Lisa (Cross-layer Information Sharing Architecture) protocol architecture, augmenting this architecture with a Middleware Interpreter for managing application queries and performing event notification. Using this approach, we observe gains in Quality of Service of up to 40% in packet delivery ratios and a 75% decrease in packet delivery delay for the tested scenario.

  10. Observed and predicted performance of the global IMS infrasound network

    NASA Astrophysics Data System (ADS)

    Le Pichon, A.; Ceranna, L.; Landes, M.

    2012-04-01

    The International Monitoring System (IMS) infrasound network is being deployed to monitor compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT). Global-scale analyses of data recorded by this network indicate that the detection capability exhibits strong spatio-temporal variations. Previous studies estimated radiated acoustic source energy from remote infrasound observations using empirical yield-scaling relations, which account for the along-path stratospheric winds. Although the empirical wind correction reduces the variance in the explosive energy versus pressure relationship, large error remains in the yield estimates. Numerical modeling techniques are now widely employed to investigate the role of different factors describing atmospheric infrasound sources and propagation. Here we develop a theoretical attenuation relation from a large set of numerical simulations using the Parabolic Equation method. This relation accounts for the effects of the source frequency; geometrical spreading and dissipation; and realistic atmospheric specifications on the pressure wave attenuation. Compared with previous studies, the derived attenuation relation incorporates a more realistic physical description of infrasound propagation. By incorporating real ambient noise information at the receivers, we obtain the minimum detectable source amplitude in the frequency band of interest for detecting explosions. Empirical relations between the source spectrum and explosion yield are used to infer detection thresholds in tons of TNT equivalent. In the context of future verification of the CTBT, the obtained attenuation relation provides a more realistic picture of the spatio-temporal variability of the IMS network performance. The attenuation relation could also be used in the design and maintenance of an arbitrary infrasound monitoring network.

  11. Performance evaluation for epileptic electroencephalogram (EEG) detection by using Neyman-Pearson criteria and a support vector machine

    NASA Astrophysics Data System (ADS)

    Wang, Chun-mei; Zhang, Chong-ming; Zou, Jun-zhong; Zhang, Jian

    2012-02-01

    The diagnosis of several neurological disorders is based on the detection of typical pathological patterns in electroencephalograms (EEGs). This is a time-consuming task requiring significant training and experience, so considerable effort has been devoted to developing automatic detection techniques, which might help not only in accelerating this process but also in avoiding disagreement among readers of the same record. In this work, Neyman-Pearson criteria and a support vector machine (SVM) are applied to detecting epileptic EEGs. Decision making is performed in two stages: feature extraction, by computing the wavelet coefficients and the approximate entropy (ApEn), and detection, by using the Neyman-Pearson criteria and an SVM. The detection performance of the proposed method is then evaluated. Simulation results demonstrate that the wavelet coefficients and the ApEn are features that represent the EEG signals well. Compared with the Neyman-Pearson criteria, an SVM applied to these features achieved higher detection accuracies.
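
    Of the two features, ApEn has a compact definition and can be sketched directly in NumPy; the wavelet coefficients would come from a wavelet library (e.g. PyWavelets) and are omitted here. The embedding dimension m and tolerance factor below are conventional defaults, not necessarily the paper's settings.

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy (ApEn) of a 1-D signal.

    m        : embedding dimension
    r_factor : tolerance as a fraction of the signal's standard deviation
    Lower ApEn indicates a more regular, predictable signal.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = r_factor * x.std()

    def phi(m):
        # All length-m templates of the signal.
        templates = np.array([x[i:i + m] for i in range(N - m + 1)])
        # Chebyshev distance between every pair of templates.
        dist = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        # Fraction of templates within tolerance r of each template.
        C = np.mean(dist <= r, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

# A regular sine is more predictable than white noise, so its ApEn is lower.
t = np.linspace(0, 4 * np.pi, 300)
rng = np.random.default_rng(0)
apen_sine = approximate_entropy(np.sin(t))
apen_noise = approximate_entropy(rng.standard_normal(300))
print(f"ApEn sine: {apen_sine:.3f}, ApEn noise: {apen_noise:.3f}")
```

    In a detector of this kind, ApEn values computed over sliding EEG windows would be fed, together with wavelet features, to the SVM or Neyman-Pearson stage.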

  12. A neural network approach for fast, automated quantification of DIR performance.

    PubMed

    Neylon, John; Min, Yugang; Low, Daniel A; Santhanam, Anand

    2017-08-01

    A critical step in the adaptive radiotherapy (ART) workflow is deformably registering the simulation CT with the daily or weekly volumetric imaging. Quantifying the deformable image registration accuracy under these circumstances is a complex task due to the lack of known ground-truth landmark correspondences between the source data and target data. Generating landmarks manually (using experts) is time-consuming, and limited by image quality and observer variability. While image similarity metrics (ISM) may be used as an alternative approach to quantify the registration error, there is a need to characterize the ISM values by developing a nonlinear cost function and translating them to physical distance measures in order to enable fast, quantitative comparison of registration performance. In this paper, we present a proof-of-concept methodology for automated quantification of DIR performance. A nonlinear cost function was developed as a combination of ISM values, governed by the following two expectations for an accurate registration: (a) the deformed data obtained from transforming the simulation CT data with the deformation vector field (DVF) should match the target image data with near-perfect similarity, and (b) the similarity between the simulation CT and the deformed data should match the similarity between the simulation CT and the target image data. A deep neural network (DNN) was developed that translated the cost function values to an actual physical distance measure. To train the neural network, patient-specific biomechanical models of the head-and-neck anatomy were employed. The biomechanical model anatomy was systematically deformed to represent changes in patient posture and physiological regression. Volumetric source and target images with known ground-truth deformation vector fields were then generated, representing the daily or weekly imaging data. 
Annotated data was then fed through a supervised machine learning process, iteratively optimizing a nonlinear

  13. The challenges of archiving networked-based multimedia performances (Performance cryogenics)

    NASA Astrophysics Data System (ADS)

    Cohen, Elizabeth; Cooperstock, Jeremy; Kyriakakis, Chris

    2002-11-01

    Music archives and libraries have cultural preservation at the core of their charters. New forms of art often race ahead of the preservation infrastructure. The ability to stream multiple synchronized, ultra-low-latency streams of audio and video across a continent for a distributed interactive performance, such as music and dance with high-definition video and multichannel audio, raises a series of challenges for the architects of digital libraries and those responsible for cultural preservation. The archiving of such performances presents numerous challenges that go beyond simply recording each stream. Case studies of storage and subsequent retrieval issues for Internet2 collaborative performances are discussed. The development of shared reality and immersive environments raises questions such as: What constitutes an archived performance that occurs across a network (in multiple spaces over time)? What are the families of metadata necessary to reconstruct this virtual world in another venue or era? For example, if the network exhibited changes in latency, the performers most likely adapted; in a future recreation, the latency will most likely be completely different. We discuss the parameters of immersive environment acquisition and rendering, network architectures, software architecture, musical/choreographic scores, and environmental acoustics that must be considered to address this problem.

  14. Support vector machine-an alternative to artificial neuron network for water quality forecasting in an agricultural nonpoint source polluted river?

    PubMed

    Liu, Mei; Lu, Jun

    2014-09-01

    Water quality forecasting in agricultural drainage river basins is difficult because of the complicated nonpoint source (NPS) pollution transport processes and river self-purification processes involved in highly nonlinear problems. Artificial neural network (ANN) and support vector machine (SVM) models were developed to predict total nitrogen (TN) and total phosphorus (TP) concentrations at any location of a river polluted by agricultural NPS pollution in eastern China. River flow, water temperature, flow travel time, rainfall, dissolved oxygen, and upstream TN or TP concentrations were selected as initial inputs of the two models. Monthly, bimonthly, and trimonthly datasets were used to train the two models, respectively, and the same monthly dataset, which had not been used for training, was chosen to test the models in order to compare their generalization performance. Trial-and-error analysis and genetic algorithms (GA) were employed to optimize the parameters of the ANN and SVM models, respectively. The results indicated that the proposed SVM models showed better generalization ability by avoiding overtraining and by optimizing fewer parameters based on the structural risk minimization (SRM) principle. Furthermore, both the TN and TP SVM models trained on trimonthly datasets achieved greater forecasting accuracy than the corresponding ANN models. Thus, SVM models offer a powerful alternative: an efficient and economical tool to accurately predict water quality with low risk. The sensitivity analyses of the two models indicated that decreasing upstream input concentrations during the dry season, and NPS emission along the reach during the average or flood season, should be an effective way to improve Changle River water quality. If the necessary water quality and hydrology data, even only trimonthly data, are available, the SVM methodology developed here can easily be applied to other NPS-polluted rivers.
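
    The SVM regression setup described above can be sketched with scikit-learn; the feature matrix, target relationship, and kernel parameters below are illustrative stand-ins (in the study, the parameters were tuned by a genetic algorithm), not values from the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical feature matrix: river flow, water temperature, travel time,
# rainfall, dissolved oxygen, upstream TN concentration (one row per sample).
rng = np.random.default_rng(42)
X = rng.random((60, 6))
# Synthetic target: TN concentration driven mostly by upstream TN and flow.
y = 2.0 * X[:, 5] + 0.5 * X[:, 0] + 0.05 * rng.standard_normal(60)

# RBF-kernel support vector regression with fixed, plausible hyperparameters.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:48], y[:48])            # training subset
r = np.corrcoef(model.predict(X[48:]), y[48:])[0, 1]
print(f"held-out correlation: {r:.2f}")
```

    The held-out correlation plays the role of the generalization comparison in the abstract: the same unseen subset would be scored for both the SVM and an ANN.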

  15. Evaluating the High Risk Groups for Suicide: A Comparison of Logistic Regression, Support Vector Machine, Decision Tree and Artificial Neural Network

    PubMed Central

    AMINI, Payam; AHMADINIA, Hasan; POOROLAJAL, Jalal; MOQADDASI AMIRI, Mohammad

    2016-01-01

    Background: We aimed to assess the high-risk group for suicide using different classification methods including logistic regression (LR), decision tree (DT), artificial neural network (ANN), and support vector machine (SVM). Methods: We used the dataset of a study conducted to predict risk factors of completed suicide in Hamadan Province, in the west of Iran, in 2010. To evaluate the high-risk groups for suicide, LR, SVM, DT and ANN were performed. The applied methods were compared using sensitivity, specificity, positive predictive value, negative predictive value, accuracy, and the area under the curve. The Cochran Q test was applied to check differences in proportion among methods. To assess the association between the observed and predicted values, the φ coefficient, contingency coefficient, and Kendall tau-b were calculated. Results: Gender, age, and job were the most important risk factors for fatal suicide attempts in common across the four methods. The SVM method showed the highest accuracy, 0.68 and 0.67 for the training and testing samples, respectively. This method also yielded the highest specificity (0.67 for the training and 0.68 for the testing sample) and the highest sensitivity for the training sample (0.85), but the lowest sensitivity for the testing sample (0.53). The Cochran Q test revealed differences between proportions across methods (P<0.001). For the association between SVM predictions and observed values, the φ coefficient, contingency coefficient, and Kendall tau-b were 0.239, 0.232 and 0.239, respectively. Conclusion: SVM had the best performance in classifying fatal suicide attempts compared with DT, LR and ANN. PMID:27957463
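
    The comparison criteria listed in the Methods all reduce to confusion-matrix counts; a minimal sketch with toy binary labels (not the study's data) follows.

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV, NPV, and accuracy from binary labels,
    i.e., the criteria used to compare LR, DT, ANN, and SVM."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / len(y_true),
    }

# Toy example: 4 true positives cases among 10, with 3 caught and 2 false alarms.
m = classification_metrics([1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
                           [1, 1, 1, 0, 0, 0, 0, 0, 1, 1])
print(m)   # sensitivity 0.75, specificity ~0.67, accuracy 0.70
```

    Each candidate classifier would be scored this way on both the training and testing samples before applying the Cochran Q and association tests.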

  16. A Generic Framework of Performance Measurement in Networked Enterprises

    NASA Astrophysics Data System (ADS)

    Kim, Duk-Hyun; Kim, Cheolhan

    Performance measurement (PM) is essential for managing networked enterprises (NEs) because it greatly affects the effectiveness of collaboration among members of an NE. PM in NEs requires somewhat different approaches from PM in a single enterprise because of the heterogeneity, dynamism, and complexity of NEs. This paper introduces a generic framework of PM in NEs (we call it NEPM) based on the Balanced Scorecard (BSC) approach. In NEPM, key performance indicators and the cause-and-effect relationships among them are defined in a generic strategy map. NEPM can be applied to various types of NEs after specializing the KPIs and the relationships among them. The effectiveness of NEPM is shown through a case study of several Korean NEs.

  17. The Deep Space Network: Noise temperature concepts, measurements, and performance

    NASA Technical Reports Server (NTRS)

    Stelzried, C. T.

    1982-01-01

    The use of higher operational frequencies is being investigated for improved performance of the Deep Space Network. Noise temperature and noise figure concepts are used to describe the noise performance of these receiving systems. The ultimate sensitivity of a linear receiving system is limited by the thermal noise of the source and the quantum noise of the receiver amplifier. The atmosphere, antenna, and receiver amplifier of an Earth station receiving system are analyzed separately and as a system. Performance evaluation and error analysis techniques are investigated. System noise temperature and antenna gain parameters are combined to give an overall system figure of merit G/T. Radiometers are used to perform radio "star" antenna and system sensitivity calibrations. These are analyzed and the performance of several types compared to an idealized total power radiometer. The theory of radiative transfer is applicable to the analysis of transmission medium loss. A power series solution in terms of the transmission medium loss is given for the noise temperature contribution.
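
    The figure of merit G/T combines antenna gain and system noise temperature as stated above; a one-line computation using the standard decibel form, with illustrative numbers (not historical DSN values):

```python
import math

def figure_of_merit_db(gain_dbi, system_noise_temp_k):
    """Receiving-system figure of merit G/T in dB/K:
    antenna gain (dBi) minus the system noise temperature expressed in dB-K."""
    return gain_dbi - 10.0 * math.log10(system_noise_temp_k)

# Illustrative values for a large ground station: 70 dBi gain, 25 K noise temp.
gt = figure_of_merit_db(70.0, 25.0)
print(f"G/T = {gt:.2f} dB/K")   # 70 - 10*log10(25) = 56.02 dB/K
```

    A higher G/T means a more sensitive station, which is why lowering the system noise temperature is as valuable as raising antenna gain.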

  18. [Pre-warning model of bacterial foodborne illness based on performance of principal component analysis combined with support vector machine].

    PubMed

    Duan, Hejun; Shao, Bing

    2010-11-01

    Based on historical data on bacterial foodborne illness, a scoring system for the pre-warning model was established in this study according to rated harm factors, allowing an effective predictive model to be built for the analysis of foodborne illness accidents. To extract the useful information, principal component analysis was performed on the normalized raw data to reduce its dimension. The data were then split randomly: 70% served as the training set for a support vector machine regression model, which was used to predict the remaining 30%. Reducing the dimension by selecting the optimal principal components improved both the calibration and the efficiency. The combination of principal component analysis (PCA) and support vector machine (SVM) provided reliable results in the pre-warning model, especially for high-dimensional data with limited sample populations, and achieved 80% accuracy with optimized parameters. The pre-warning model of bacterial foodborne illness can assess poisoning accidents and provides a scientific basis for reducing the incidence of bacterial food poisoning.
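
    The PCA-then-SVM pipeline with a random 70/30 split can be sketched with scikit-learn; the data here are synthetic stand-ins for normalized incident scores, and the number of components and SVR parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in: 100 incidents scored on 12 rated harm factors that are
# driven by 3 latent factors, with a risk score to predict.
rng = np.random.default_rng(7)
Z = rng.standard_normal((100, 3))                # latent factor scores
A = rng.standard_normal((3, 12))                 # factor loadings
X = Z @ A + 0.1 * rng.standard_normal((100, 12))
y = Z[:, 0] + 0.1 * rng.standard_normal(100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=7)

# PCA keeps the leading components (assumed here to capture the latent
# factors), then SVM regression is fit on the reduced data.
model = make_pipeline(StandardScaler(), PCA(n_components=3),
                      SVR(kernel="rbf", C=10.0))
model.fit(X_tr, y_tr)
score = model.score(X_te, y_te)      # R^2 on the held-out 30%
print(f"held-out R^2: {score:.2f}")
```

    Reducing 12 correlated inputs to a few components before the SVM step is what keeps the model workable when the sample population is limited.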

  19. A simplified method of performance indicators development for epidemiological surveillance networks--application to the RESAPATH surveillance network.

    PubMed

    Sorbe, A; Chazel, M; Gay, E; Haenni, M; Madec, J-Y; Hendrikx, P

    2011-06-01

    Developing and calculating performance indicators allows the operation of an epidemiological surveillance network to be followed continuously. This is an internal evaluation method, implemented by the coordinators in collaboration with all actors of the network; its purpose is to detect weak points in order to optimize management. A method for developing performance indicators for epidemiological surveillance networks was devised in 2004 and has been applied to several networks. Its implementation requires a thorough description of the network environment and all its activities in order to define priority indicators. Since this method is considered complex, our objective was to develop a simplified approach and apply it to an epidemiological surveillance network. We applied the initial method to a theoretical network model to obtain a list of generic indicators that can be adapted to any surveillance network. We obtained a list of 25 generic performance indicators, intended to be reformulated and described according to the specificities of each network. It was used to develop performance indicators for RESAPATH, an epidemiological surveillance network for antimicrobial resistance in pathogenic bacteria of animal origin in France. This application allowed us to validate the simplified method, its value in terms of practical implementation, and its level of user acceptance. Its ease of use and speed of application compared with the initial method argue in favor of its use on a broader scale. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  20. Determination of electric-field, magnetic-field, and electric-current distributions of infrared optical antennas: a near-field optical vector network analyzer.

    PubMed

    Olmon, Robert L; Rang, Matthias; Krenz, Peter M; Lail, Brian A; Saraf, Laxmikant V; Boreman, Glenn D; Raschke, Markus B

    2010-10-15

    In addition to the electric field E(r), the associated magnetic field H(r) and current density J(r) characterize any electromagnetic device, providing insight into antenna coupling and mutual impedance. We demonstrate the optical analogue of the radio frequency vector network analyzer implemented in interferometric homodyne scattering-type scanning near-field optical microscopy for obtaining E(r), H(r), and J(r). The approach is generally applicable and demonstrated for the case of a linear coupled-dipole antenna in the midinfrared spectral region. The determination of the underlying 3D vector electric near-field distribution E(r) with nanometer spatial resolution and full phase and amplitude information is enabled by the design of probe tips with selectivity with respect to E(∥) and E(⊥) fabricated by focused ion-beam milling and nano-chemical-vapor-deposition methods.

  1. Spiking neural networks on high performance computer clusters

    NASA Astrophysics Data System (ADS)

    Chen, Chong; Taha, Tarek M.

    2011-09-01

    In this paper we examine the acceleration of two spiking neural network models on three clusters of multicore processors representing three categories of processors: x86, STI Cell, and NVIDIA GPGPUs. The x86 cluster utilized consists of 352 dual-core AMD Opterons, the Cell cluster consists of 320 Sony PlayStation 3s, and the GPGPU cluster contains 32 NVIDIA Tesla S1070 systems. The results indicate that the GPGPU platform dominates in performance compared with the Cell and x86 platforms examined. From a cost perspective, however, the GPGPU is more expensive in terms of neuron/s throughput. If the cost of GPGPUs goes down in the future, this platform will become very cost effective for these models.

  2. An easily fabricated high performance ionic polymer based sensor network

    NASA Astrophysics Data System (ADS)

    Zhu, Zicai; Wang, Yanjie; Hu, Xiaopin; Sun, Xiaofei; Chang, Longfei; Lu, Pin

    2016-08-01

    Ionic polymer materials can generate an electrical potential from ion migration under an external force. For traditional ionic polymer metal composite sensors, the output voltage is very small (a few millivolts), and the fabrication process is complex and time-consuming. This letter presents an ionic polymer based network of pressure sensors which is easily and quickly constructed, and which can generate high voltage. A 3 × 3 sensor array was prepared by casting Nafion solution directly over copper wires. Under applied pressure, two different levels of voltage response were observed among the nine nodes in the array. For the group producing the higher level, peak voltages reached as high as 25 mV. Computational stress analysis revealed the physical origin of the different responses. High voltages resulting from the stress concentration and asymmetric structure can be further utilized to modify subsequent designs to improve the performance of similar sensors.

  3. Two-Dimensional Confined Jet Thrust Vector Control: Operating Mechanisms and Performance

    DTIC Science & Technology

    1989-03-01

    Approved for public release; distribution unlimited. Preface: In this thesis, I continued the... exceptionally high quality test articles, also with impossible deadlines. ...the von Karman Institute, Dr. M. Carbonaro provided me with theoretical and... Schlieren photographs and video tapes were used to study flow separation and internal shock structures. Nozzle performance parameters were determined for

  4. Enabling Secure High-Performance Wireless Ad Hoc Networking

    DTIC Science & Technology

    2003-05-29

    ...extremely vulnerable to this attack. For example, OLSR and TBRPF use HELLO packets for neighbor detection, so if an attacker tunnels through a wormhole to... security. In particular, when an attacker is present in the network, a protocol that provides security against such an attacker should provide better... better than an insecure protocol when the network is under attack. As a network experiences attack, a secure network routing protocol may continue to

  5. Measuring Human Performance in a Mobile Ad Hoc Network (MANET)

    DTIC Science & Technology

    2010-06-01

    ...routing in a wireless mesh network test bed. Journal on Wireless Communications and Networking, Vol 2007, Article ID 86510. Ikeda, M., L. Barolli, M... by wireless radios with limited bandwidth and no fixed infrastructure support. Instead of fixed network nodes, the MANET nodes will be dynamic... 2006. Evaluation of packet latency in single and multi-hop WiFi wireless networks. In Proceedings of SPIE: Wireless sensing and processing,

  6. Assessing the performance of ultrafast vector flow imaging in the neonatal heart via multiphysics modeling and in-vitro experiments.

    PubMed

    Van Cauwenberge, Joris; Lovstakken, Lasse; Fadnes, Solveig; Rodriguez-Molares, Alfonso; Vierendeels, Jan; Segers, Patrick; Swillens, Abigail

    2016-08-01

    Ultrafast vector flow imaging would benefit newborn patients with congenital heart disorders, but still requires thorough validation before translation to clinical practice. This study investigates 2D speckle tracking of intraventricular blood flow in neonates when transmitting diverging waves at ultrafast frame rate. Computational and in-vitro studies enabled us to quantify the performance and identify artefacts related to the flow and the imaging sequence. First, synthetic ultrasound images of a neonate's left ventricular flow pattern were obtained with the ultrasound simulator Field II by propagating point scatterers according to 3D intraventricular flow fields obtained with computational fluid dynamics (CFD). Non-compounded diverging waves (opening angle of 60°) were transmitted at a pulse repetition frequency of 9 kHz. Speckle tracking of the B-mode data provided 2D flow estimates at 180 Hz, which were compared to the CFD flow field. We demonstrated that the diastolic inflow jet showed a strong bias in the lateral velocity estimates at the edges of the jet, as confirmed by additional in-vitro tests on a jet flow phantom. Further, speckle tracking performance was highly dependent on the cardiac phase with low flows (< 5 cm/s), high spatial flow gradients and out-of-plane flow as deteriorating factors. Despite the observed artefacts, a good overall performance of 2D speckle tracking was obtained with a median magnitude underestimation and angular deviation of respectively 28% and 13.5° during systole, and 16% and 10.5° during diastole.
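
    At its core, the 2D speckle tracking evaluated above is block matching between consecutive B-mode frames: find the displacement that maximizes the similarity of a speckle patch. A minimal sketch with synthetic speckle (not Field II data, and using plain normalized cross-correlation rather than the study's exact estimator):

```python
import numpy as np

def block_match(frame0, frame1, y, x, k=8, search=4):
    """Estimate the (dy, dx) displacement of the speckle block centered at
    (y, x) in frame0 by maximizing normalized cross-correlation with frame1
    over a +/-search pixel window."""
    ref = frame0[y - k:y + k, x - k:x + k].ravel()
    ref = (ref - ref.mean()) / ref.std()
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame1[y + dy - k:y + dy + k, x + dx - k:x + dx + k].ravel()
            cand = (cand - cand.mean()) / cand.std()
            score = np.mean(ref * cand)        # normalized cross-correlation
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# Synthetic speckle image shifted by (2, -1) pixels between frames.
rng = np.random.default_rng(1)
f0 = rng.random((64, 64))
f1 = np.roll(f0, shift=(2, -1), axis=(0, 1))
shift = block_match(f0, f1, 32, 32)
print(shift)   # -> (2, -1)
```

    Dividing the estimated displacement by the inter-frame interval gives the velocity estimate; lateral estimates degrade at jet edges because the speckle decorrelates across strong flow gradients, as the study observes.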

  7. Assessing the Performance of Ultrafast Vector Flow Imaging in the Neonatal Heart via Multiphysics Modeling and In Vitro Experiments.

    PubMed

    Van Cauwenberge, Joris; Lovstakken, Lasse; Fadnes, Solveig; Rodriguez-Morales, Alfonso; Vierendeels, Jan; Segers, Patrick; Swillens, Abigail

    2016-11-01

    Ultrafast vector flow imaging would benefit newborn patients with congenital heart disorders, but still requires thorough validation before translation to clinical practice. This paper investigates 2-D speckle tracking (ST) of intraventricular blood flow in neonates when transmitting diverging waves at ultrafast frame rate. Computational and in vitro studies enabled us to quantify the performance and identify artifacts related to the flow and the imaging sequence. First, synthetic ultrasound images of a neonate's left ventricular flow pattern were obtained with the ultrasound simulator Field II by propagating point scatterers according to 3-D intraventricular flow fields obtained with computational fluid dynamics (CFD). Noncompounded diverging waves (opening angle of 60°) were transmitted at a pulse repetition frequency of 9 kHz. ST of the B-mode data provided 2-D flow estimates at 180 Hz, which were compared with the CFD flow field. We demonstrated that the diastolic inflow jet showed a strong bias in the lateral velocity estimates at the edges of the jet, as confirmed by additional in vitro tests on a jet flow phantom. Furthermore, ST performance was highly dependent on the cardiac phase with low flows (<5 cm/s), high spatial flow gradients, and out-of-plane flow as deteriorating factors. Despite the observed artifacts, a good overall performance of 2-D ST was obtained with a median magnitude underestimation and angular deviation of, respectively, 28% and 13.5° during systole and 16% and 10.5° during diastole.

  8. Social Networks Use, Loneliness and Academic Performance among University Students

    ERIC Educational Resources Information Center

    Stankovska, Gordana; Angelkovska, Slagana; Grncarovska, Svetlana Pandiloska

    2016-01-01

    The world is extensively changed by Social Networks Sites (SNSs) on the Internet. A large number of children and adolescents in the world have access to the internet and are exposed to the internet at a very early age. Most of them use the Social Networks Sites with the purpose of exchanging academic activities and developing a social network all…

  9. Direct Fabrication of 3D Metallic Networks and Their Performance.

    PubMed

    Ron, Racheli; Gachet, David; Rechav, Katya; Salomon, Adi

    2017-02-01

    Fabrication of macroscopic nanoporous metallic networks is challenging because it demands fine structure at the nanoscale over a large scale. A technique to form pure, scalable networks is introduced. The networked metals ("Netals") exhibit a strong interaction with light and a large fraction of hot-electron generation. These hot electrons are available to drive photocatalytic processes.

  10. Mean Throughput: A Method for Analyzing and Comparing Computer Network Performances

    PubMed Central

    Dwyer, Samuel J.; Cox, Glendon G.; Templeton, Arch W.; Cook, Larry T.; Hensley, Kenneth L.; Johnson, Joy A.; Anderson, William H.; Bramble, John M.

    1986-01-01

    Computer networks for managing and transmitting digitally formatted radiographic images are being developed by industrial firms and academic research groups. The ability to measure and compare the performance of these networks is absolutely essential when proposing network operational protocols. Mean throughput analysis is an excellent method for predicting and documenting a network's performance. Mean throughput measurements for digital image networks are analogous to the use of modulation transfer function measurements of radiographic systems. This paper describes mean throughput. The mean throughput for the interactive diagnostic display stations on the digital network in our department is presented.
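
    One plausible reading of mean throughput, total bits delivered divided by total elapsed time over a batch of image transfers, can be sketched as follows; the sizes and times are made-up illustrative values, not the paper's measurements.

```python
# Each tuple: (image size in megabits, transfer time in seconds).
transfers = [
    (8.0, 2.0),
    (8.0, 2.5),
    (16.0, 3.5),
]

# Mean throughput over the whole batch: total bits over total time.
mean_throughput = (sum(bits for bits, _ in transfers)
                   / sum(t for _, t in transfers))
print(f"mean throughput: {mean_throughput:.2f} Mbit/s")   # 32/8 = 4.00 Mbit/s
```

    Repeating such batches under varying load is what lets mean throughput serve as a comparable figure of merit across competing network protocols.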

  11. Relative performance of indoor vector control interventions in the Ifakara and the West African experimental huts.

    PubMed

    Oumbouke, Welbeck A; Fongnikin, Augustin; Soukou, Koffi B; Moore, Sarah J; N'Guessan, Raphael

    2017-09-19

    West African and Ifakara experimental huts are used to evaluate indoor mosquito control interventions, including spatial repellents and insecticides. The two hut types differ in size and design, so a side-by-side comparison was performed to investigate the performance of indoor interventions in the two hut designs using standard entomological outcomes: relative indoor mosquito density (deterrence), exophily (induced exit), blood-feeding and mortality of mosquitoes. Metofluthrin mosquito coils (0.00625% and 0.0097%) and Olyset® Net vs control nets (untreated, deliberately holed net) were evaluated against pyrethroid-resistant Culex quinquefasciatus in Benin. Four experimental huts were used: two West African hut designs and two Ifakara hut designs. Treatments were rotated among the huts every four nights until each treatment was tested in each hut 52 times. Volunteers rotated between huts nightly. The Ifakara huts caught a median of 37 Culex quinquefasciatus/night, while the West African huts captured a median of 8/night (rate ratio 3.37, 95% CI: 2.30-4.94, P < 0.0001), and this difference in mosquito entry was similar for Olyset® Net and more pronounced for spatial repellents. Exophily was greater in the Ifakara huts with > 4-fold higher mosquito exit relative to the West African huts (odds ratio 4.18, 95% CI: 3.18-5.51, P < 0.0001), regardless of treatment. While blood-feeding rates were significantly higher in the West African huts, mortality appeared significantly lower for all treatments. The Ifakara hut captured more Cx. quinquefasciatus that could more easily exit into windows and eave traps after failing to blood-feed, compared to the West African hut. The higher mortality rates recorded in the Ifakara huts could be attributable to the greater proportions of Culex mosquitoes exiting and probably dying from starvation, relative to the situation in the West African huts.

  12. On-sky Performance Analysis of the Vector Apodizing Phase Plate Coronagraph on MagAO/Clio2

    NASA Astrophysics Data System (ADS)

    Otten, Gilles P. P. L.; Snik, Frans; Kenworthy, Matthew A.; Keller, Christoph U.; Males, Jared R.; Morzinski, Katie M.; Close, Laird M.; Codona, Johanan L.; Hinz, Philip M.; Hornburg, Kathryn J.; Brickson, Leandra L.; Escuti, Michael J.

    2017-01-01

    We report on the performance of a vector apodizing phase plate coronagraph that operates over a wavelength range of 2–5 μm and is installed in MagAO/Clio2 at the 6.5 m Magellan Clay telescope at Las Campanas Observatory, Chile. The coronagraph manipulates the phase in the pupil to produce three beams yielding two coronagraphic point-spread functions (PSFs) and one faint leakage PSF. The phase pattern is imposed through the inherently achromatic geometric phase, enabled by liquid crystal technology and polarization techniques. The coronagraphic optic is manufactured using a direct-write technique for precise control of the liquid crystal pattern and multitwist retarders for achromatization. By integrating a linear phase ramp to the coronagraphic phase pattern, two separated coronagraphic PSFs are created with a single pupil-plane optic, which makes it robust and easy to install in existing telescopes. The two coronagraphic PSFs contain a 180° dark hole on each side of a star, and these complementary copies of the star are used to correct the seeing halo close to the star. To characterize the coronagraph, we collected a data set of a bright (mL = 0–1) nearby star with ∼1.5 hr of observing time. By rotating and optimally scaling one PSF and subtracting it from the other PSF, we see a contrast improvement by 1.46 magnitudes at 3.5 λ/D. With regular angular differential imaging at 3.9 μm, the MagAO vector apodizing phase plate coronagraph delivers a 5σ Δmag contrast of 8.3 (= 10^-3.3) at 2 λ/D and 12.2 (= 10^-4.8) at 3.5 λ/D.
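
    The parenthetical contrast ratios in the reported numbers follow from the standard astronomical magnitude relation, flux ratio = 10^(-Δmag/2.5):

```python
def dmag_to_contrast(delta_mag):
    """Convert a magnitude difference to a flux (contrast) ratio."""
    return 10.0 ** (-delta_mag / 2.5)

# The reported 5-sigma contrasts: delta-mag 8.3 at 2 lambda/D
# and 12.2 at 3.5 lambda/D.
print(f"{dmag_to_contrast(8.3):.1e}")    # ~10^-3.3
print(f"{dmag_to_contrast(12.2):.1e}")   # ~10^-4.9
```
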

  13. Preliminary performance of a vertical-attitude takeoff and landing, supersonic cruise aircraft concept having thrust vectoring integrated into the flight control system

    NASA Technical Reports Server (NTRS)

    Robins, A. W.; Beissner, F. L., Jr.; Domack, C. S.; Swanson, E. E.

    1985-01-01

    A performance study was made of a vertical-attitude takeoff and landing (VATOL), supersonic cruise aircraft concept having thrust vectoring integrated into the flight control system. The characteristics considered were aerodynamics, weight, balance, and performance. Preliminary results indicate that high levels of supersonic aerodynamic performance can be achieved. Further, with the assumption of an advanced (1985 technology readiness) low bypass ratio turbofan engine and advanced structures, excellent mission performance capability is indicated.

  14. Clinical Performance of a New Biomimetic Double Network Material

    PubMed Central

    Dirxen, Christine; Blunck, Uwe; Preissner, Saskia

    2013-01-01

    Background: Ceramic materials have advanced rapidly in recent years. However, the focus has been on the hardness and strength of restorative materials, resulting in high antagonistic tooth wear, which is critical for patients with bruxism. Objectives: The purpose of this study was to evaluate the clinical performance of the new double hybrid material for non-invasive treatment approaches. Material and Methods: The new approach of the material tested was to modify ceramics to create a biomimetic material with physical properties similar to dentin and enamel, while remaining as strong as conventional ceramics. Results: The produced crowns had a thickness ranging from 0.5 to 1.5 mm. To evaluate the clinical performance and durability of the crowns, the patient was examined half a year later. The crowns were still intact and the soft tissues appeared healthy, and this was achieved without any loss of tooth structure. Conclusions: The material can be milled to thin layers but is still strong enough to prevent cracks, which are stopped by the interpenetrating polymer within the network. Depending on the clinical situation, minimally invasive up to non-invasive restorations can be milled. Clinical Relevance: Dentistry aims at preserving tooth structure. Patients suffering from loss of tooth structure (dental erosion, Amelogenesis imperfecta) and even young patients could benefit from minimally invasive crowns. Due to a Vickers hardness between that of dentin and enamel, antagonistic tooth wear is very low. This might be interesting for treating patients with bruxism. PMID:24167534

  15. Performance of a laser microsatellite network with an optical preamplifier.

    PubMed

    Arnon, Shlomi

    2005-04-01

    Laser satellite communication (LSC) uses free space as a propagation medium for various applications, such as intersatellite communication or satellite networking. An LSC system includes a laser transmitter and an optical receiver. For communication to occur, the line of sight of the transmitter and the receiver must be aligned. However, mechanical vibration and electronic noise in the control system reduce alignment between the transmitter laser beam and the receiver field of view (FOV), which results in pointing errors. The outcome of pointing errors is fading of the received signal, which leads to impaired link performance. An LSC system is considered in which the optical preamplifier is incorporated into the receiver, and a bit error probability (BEP) model is derived that takes into account the statistics of the pointing error as well as the optical amplifier and communication system parameters. The model and the numerical calculation results indicate that random pointing errors of sigma(chi)2G > 0.05 penalize communication performance dramatically for all combinations of optical amplifier gains and noise figures that were calculated.
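The fading mechanism described here can be illustrated with a crude Monte Carlo sketch. This is not the paper's BEP model: it assumes only a Gaussian far-field beam, in which two-axis Gaussian pointing jitter attenuates the received intensity roughly as exp(-2θ²/θ_div²), to show how jitter variance drives mean signal loss.

```python
import math
import random

random.seed(0)

def mean_fade(sigma, theta_div=1.0, trials=100_000):
    """Mean received-intensity factor under Gaussian pointing jitter.

    sigma: per-axis pointing-error standard deviation (in units of theta_div).
    Uses the simplified Gaussian-beam attenuation exp(-2*theta^2/theta_div^2).
    """
    total = 0.0
    for _ in range(trials):
        tx, ty = random.gauss(0, sigma), random.gauss(0, sigma)
        total += math.exp(-2 * (tx * tx + ty * ty) / theta_div ** 2)
    return total / trials

print(mean_fade(0.1), mean_fade(0.5))  # more jitter, lower mean received signal
```

For this toy model the mean factor has the closed form 1/(1 + 4σ²), so σ = 0.5 should give roughly 0.5, consistent with the abstract's point that larger normalized pointing-error variance sharply penalizes the link.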

  16. Performance Improvement in Geographic Routing for Vehicular Ad Hoc Networks

    PubMed Central

    Kaiwartya, Omprakash; Kumar, Sushil; Lobiyal, D. K.; Abdullah, Abdul Hanan; Hassan, Ahmed Nazar

    2014-01-01

    Geographic routing is one of the most investigated themes by researchers for reliable and efficient dissemination of information in Vehicular Ad Hoc Networks (VANETs). Recently, different Geographic Distance Routing (GEDIR) protocols have been suggested in the literature. These protocols focus on reducing the forwarding region towards the destination to select the Next Hop Vehicles (NHV). Most of these protocols suffer from elevated one-hop link disconnection, high end-to-end delay and low throughput even at normal vehicle speed in high vehicle density environments. This paper proposes a Geographic Distance Routing protocol based on Segment vehicle, Link quality and Degree of connectivity (SLD-GEDIR). The protocol selects a reliable NHV using the criteria of segment vehicles, one-hop link quality and degree of connectivity. The proposed protocol has been simulated in NS-2 and its performance has been compared with the state-of-the-art protocols: P-GEDIR, J-GEDIR and V-GEDIR. The empirical results clearly reveal that SLD-GEDIR has lower link disconnection and end-to-end delay, and higher throughput as compared to the state-of-the-art protocols. It should be noted that the performance of the proposed protocol is preserved irrespective of vehicle density and speed. PMID:25429415
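A toy sketch of the greedy geographic next-hop idea underlying GEDIR-style protocols. The positions, the scoring formula, and the weights on link quality and degree are purely illustrative inventions, not the SLD-GEDIR metric from the paper:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_hop(current, dest, neighbors):
    """Pick a next-hop vehicle that makes progress toward the destination.

    neighbors: {vehicle_id: (position, link_quality in [0,1], degree)}
    The weighting of link quality and degree below is hypothetical.
    """
    best, best_score = None, float("inf")
    for vid, (pos, lq, deg) in neighbors.items():
        if dist(pos, dest) >= dist(current, dest):
            continue  # only consider vehicles in the forwarding region
        # smaller remaining distance is better; good links and higher
        # connectivity discount the score (illustrative weights)
        score = dist(pos, dest) / (1 + 0.5 * lq + 0.1 * deg)
        if score < best_score:
            best, best_score = vid, score
    return best

neighbors = {"v1": ((50, 10), 0.9, 4), "v2": ((70, 0), 0.4, 2), "v3": ((10, 5), 0.8, 5)}
print(next_hop((0, 0), (100, 0), neighbors))
```

Plain greedy GEDIR would use distance alone; the extra terms mimic how SLD-GEDIR trades pure geographic progress against link reliability.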

  17. Wireless Body Area Network (WBAN) design techniques and performance evaluation.

    PubMed

    Khan, Jamil Yusuf; Yuce, Mehmet R; Bulger, Garrick; Harding, Benjamin

    2012-06-01

    In recent years interest in the application of Wireless Body Area Networks (WBANs) for patient monitoring applications has grown significantly. A WBAN can be used to develop patient monitoring systems which offer flexibility to medical staff and mobility to patients. Patient monitoring could involve a range of activities including data collection from various body sensors for storage and diagnosis, transmitting data to remote medical databases, and controlling medical appliances. Also, WBANs could operate in an interconnected mode to enable remote patient monitoring using telehealth/e-health applications. A WBAN can also be used to monitor athletes' performance and assist them in training activities. For such applications it is very important that a WBAN collects and transmits data reliably and in a timely manner to a monitoring entity. In order to address these issues, this paper presents WBAN design techniques for medical applications. We examine the WBAN design issues with particular emphasis on the design of MAC protocols and the power consumption profiles of WBANs. Some simulation results are presented to further illustrate the performance of various WBAN design techniques.

  18. Performance evaluation of NASA/KSC CAD/CAE graphics local area network

    NASA Technical Reports Server (NTRS)

    Zobrist, George

    1988-01-01

    The objective of this study was to evaluate the performance of the existing CAD/CAE graphics network at NASA/KSC. This evaluation will also aid in projecting planned expansions, such as the Space Station project, onto the existing CAD/CAE network. The objectives were achieved by collecting packet traffic on the various integrated sub-networks. This included items such as the total number of packets on the various subnetworks, source/destination of packets, percent utilization of network capacity, peak traffic rates, and packet size distribution. The NASA/KSC LAN was stressed to determine the usable bandwidth of the Ethernet network, and an average design station workload was used to project the increased traffic on the existing network and the planned T1 link. This performance evaluation of the network will aid the NASA/KSC network managers in planning for the integration of future workload requirements into the existing network.

  19. Is functional integration of resting state brain networks an unspecific biomarker for working memory performance?

    PubMed

    Alavash, Mohsen; Doebler, Philipp; Holling, Heinz; Thiel, Christiane M; Gießing, Carsten

    2015-03-01

    Is there one optimal topology of functional brain networks at rest from which our cognitive performance would profit? Previous studies suggest that functional integration of resting state brain networks is an important biomarker for cognitive performance. However, it is still unknown whether higher network integration is an unspecific predictor for good cognitive performance or, alternatively, whether specific network organization during rest predicts only specific cognitive abilities. Here, we investigated the relationship between network integration at rest and cognitive performance using two tasks that measured different aspects of working memory; one task assessed visual-spatial and the other numerical working memory. Network clustering, modularity and efficiency were computed to capture network integration on different levels of network organization, and to statistically compare their correlations with the performance in each working memory test. The results revealed that each working memory aspect profits from a different resting state topology, and the tests showed significantly different correlations with each of the measures of network integration. While higher global network integration and modularity predicted significantly better performance in visual-spatial working memory, both measures showed no significant correlation with numerical working memory performance. In contrast, numerical working memory was superior in subjects with highly clustered brain networks, predominantly in the intraparietal sulcus, a core brain region of the working memory network. Our findings suggest that a specific balance between local and global functional integration of resting state brain networks facilitates special aspects of cognitive performance. In the context of working memory, while visual-spatial performance is facilitated by globally integrated functional resting state brain networks, numerical working memory profits from increased capacities for local processing.
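Two of the graph-level integration measures named above, average clustering and global efficiency, can be computed directly from an adjacency structure. A minimal pure-Python sketch on a toy graph (the study's networks come from fMRI correlations; this graph is illustrative only):

```python
from collections import deque

# Tiny undirected "functional network" as an adjacency dict (illustrative).
adj = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1, 3},
    3: {0, 2, 4},
    4: {3},
}

def clustering(adj):
    """Average local clustering coefficient (local integration)."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        # count edges among the neighbors of v
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        total += 2 * links / (k * (k - 1))
    return total / len(adj)

def global_efficiency(adj):
    """Mean inverse shortest-path length over all node pairs (global integration)."""
    n = len(adj)
    eff = 0.0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:  # breadth-first search from s
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        eff += sum(1 / d for d in dist.values() if d > 0)
    return eff / (n * (n - 1))

print(clustering(adj), global_efficiency(adj))
```

Modularity, the third measure, additionally requires a community partition and is typically computed with a library such as networkx rather than by hand.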

  20. Effects of Infection by Trypanosoma cruzi and Trypanosoma rangeli on the Reproductive Performance of the Vector Rhodnius prolixus

    PubMed Central

    Fellet, Maria Raquel; Lorenzo, Marcelo Gustavo; Elliot, Simon Luke; Carrasco, David; Guarneri, Alessandra Aparecida

    2014-01-01

    The insect Rhodnius prolixus is responsible for the transmission of Trypanosoma cruzi, which is the etiological agent of Chagas disease in areas of Central and South America. Besides this, it can be infected by other trypanosomes such as Trypanosoma rangeli. The effects of these parasites on vectors are poorly understood and are often controversial, so here we focussed on possible negative effects of these parasites on the reproductive performance of R. prolixus, specifically comparing infected and uninfected couples. While T. cruzi infection did not delay the pre-oviposition time of infected couples at either temperature tested (25 and 30°C), it did, at 25°C, increase the e-value in the second reproductive cycle, as well as hatching rates. Meanwhile, at 30°C, T. cruzi infection decreased the e-value of insects during the first cycle and also the fertility of older insects. When couples were instead infected with T. rangeli, pre-oviposition time was delayed, while reductions in the e-value and hatching rate were observed in the second and third cycles. We conclude that both T. cruzi and T. rangeli can impair reproductive performance of R. prolixus, although for T. cruzi, this is dependent on rearing temperature and insect age. We discuss these reproductive costs in terms of potential consequences on triatomine behavior and survival. PMID:25136800

  1. Amino acid sequence autocorrelation vectors and Bayesian-regularized genetic neural networks for modeling protein conformational stability: gene V protein mutants.

    PubMed

    Fernández, Leyden; Caballero, Julio; Abreu, José Ignacio; Fernández, Michael

    2007-06-01

    Development of novel computational approaches for modeling protein properties from their primary structure is the main goal in applied proteomics. In this work, we reported the extension of the autocorrelation vector formalism to amino acid sequences for encoding protein structural information for modeling purposes. Amino acid sequence autocorrelation (AASA) vectors were calculated by measuring the autocorrelations at sequence lags ranging from 1 to 15 on the protein primary structure for 48 amino acid/residue properties selected from the AAindex database. A total of 720 AASA descriptors were tested for building predictive models of the thermal unfolding Gibbs free energy change (ΔΔG) of gene V protein upon mutation. In this sense, ensembles of Bayesian-regularized genetic neural networks (BRGNNs) were used for obtaining an optimum nonlinear model for the conformational stability. The ensemble predictor described about 88% and 66% of the variance of the data in the training and test sets, respectively. Furthermore, the optimum AASA vector subset not only helped to successfully model unfolding stability but also distributed wild-type and mutant gene V proteins well on a stability self-organized map (SOM) when used for unsupervised training of competitive neurons.
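The AASA construction can be sketched as follows. The exact autocorrelation formula and property scales used in the paper may differ; here a simple lag-product average and Kyte-Doolittle hydrophobicity values for ten residues stand in for one of the 48 AAindex properties (48 properties x 15 lags = 720 descriptors):

```python
# Kyte-Doolittle hydrophobicity for a subset of residues (standard values).
hydrophobicity = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
                  "G": -0.4, "L": 3.8, "K": -3.9, "S": -0.8, "V": 4.2}

def aasa(seq, prop, max_lag=15):
    """Autocorrelation of one residue property along the sequence.

    For each lag l in 1..max_lag, averages p(i)*p(i+l) over the sequence,
    a Moreau-Broto-style form; the paper's exact formula may differ.
    """
    p = [prop[a] for a in seq]
    n = len(p)
    return [sum(p[i] * p[i + l] for i in range(n - l)) / (n - l)
            for l in range(1, max_lag + 1)]

vec = aasa("ARNDCGLKSV" * 3, hydrophobicity)
print(len(vec))  # 15 descriptors for this single property
```

Concatenating such 15-lag vectors over all 48 properties yields the 720-dimensional descriptor set that feeds the BRGNN ensembles.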

  2. Amino Acid Sequence Autocorrelation vectors and ensembles of Bayesian-Regularized Genetic Neural Networks for prediction of conformational stability of human lysozyme mutants.

    PubMed

    Caballero, Julio; Fernández, Leyden; Abreu, José Ignacio; Fernández, Michael

    2006-01-01

    Development of novel computational approaches for modeling protein properties from their primary structure is a main goal in applied proteomics. In this work, we reported the extension of the autocorrelation vector formalism to amino acid sequences for encoding protein structural information for modeling purposes. Amino Acid Sequence Autocorrelation (AASA) vectors were calculated by measuring the autocorrelations at sequence lags ranging from 1 to 15 on the protein primary structure for 48 amino acid/residue properties selected from the AAindex database. A total of 720 AASA descriptors were tested for building predictive models of the thermal unfolding Gibbs free energy change of human lysozyme mutants. In this sense, ensembles of Bayesian-Regularized Genetic Neural Networks (BRGNNs) were used for obtaining an optimum nonlinear model for the conformational stability. The ensemble predictor described about 88% and 68% of the variance of the data in the training and test sets, respectively. Furthermore, the optimum AASA vector subset was shown not only to successfully model unfolding thermal stability but also to distribute wild-type and mutant lysozymes on a stability Self-organized Map (SOM) when used for unsupervised training of competitive neurons.

  3. Disentangling Vector-Borne Transmission Networks: A Universal DNA Barcoding Method to Identify Vertebrate Hosts from Arthropod Bloodmeals

    PubMed Central

    Alcaide, Miguel; Rico, Ciro; Ruiz, Santiago; Soriguer, Ramón; Muñoz, Joaquín; Figuerola, Jordi

    2009-01-01

    Emerging infectious diseases represent a challenge for global economies and public health. About one fourth of recent pandemics have originated from the spread of vector-borne pathogens. In this sense, the advent of modern molecular techniques has enhanced our capabilities to understand vector-host interactions and disease ecology. However, host identification protocols have profited little from international DNA barcoding initiatives and/or have focused exclusively on a limited array of vector species. Therefore, ascertaining the potential afforded by DNA barcoding tools in other vector-host systems of human and veterinary importance would represent a major advance in tracking pathogen life cycles and hosts. Here, we show the applicability of a novel and efficient molecular method for the identification of the vertebrate host's DNA contained in the midgut of blood-feeding arthropods. To this end, we designed a eukaryote-universal forward primer and a vertebrate-specific reverse primer to selectively amplify 758 base pairs (bp) of the vertebrate mitochondrial Cytochrome c Oxidase Subunit I (COI) gene. Our method was validated using both extensive sequence surveys from the public domain and Polymerase Chain Reaction (PCR) experiments carried out on specimens from different Classes of vertebrates (Mammalia, Aves, Reptilia and Amphibia) and invertebrate ectoparasites (Arachnida and Insecta). The analysis of mosquito, culicoid, phlebotomine, sucking bug, and tick bloodmeals revealed up to 40 vertebrate hosts, including 23 avian, 16 mammalian and one reptilian species. Importantly, the inspection and analysis of direct sequencing electropherograms also assisted in resolving mixed bloodmeals. We therefore provide a universal and high-throughput diagnostic tool for the study of the ecology of haematophagous invertebrates in relation to their vertebrate hosts. Such information is crucial to support the efficient management of initiatives aimed at reducing

  4. Models of logistic regression analysis, support vector machine, and back-propagation neural network based on serum tumor markers in colorectal cancer diagnosis.

    PubMed

    Zhang, B; Liang, X L; Gao, H Y; Ye, L S; Wang, Y G

    2016-05-13

    We evaluated the application of three machine learning algorithms, including logistic regression, support vector machine and back-propagation neural network, for diagnosing congenital heart disease and colorectal cancer. By inspecting related serum tumor marker levels in colorectal cancer patients and healthy subjects, early diagnosis models for colorectal cancer were built using the three machine learning algorithms to assess their corresponding diagnostic values. Except for serum alpha-fetoprotein, the levels of 11 other serum markers of patients in the colorectal cancer group were higher than those in the benign colorectal disease group (P < 0.05). The results of logistic regression analysis indicated that individual detection of serum carcinoembryonic antigen, CA199, CA242, CA125, and CA153, as well as their combined detection, was effective for diagnosing colorectal cancer. Combined detection had the better diagnostic effect, with a sensitivity of 94.2% and a specificity of 97.7%. Combining serum carcinoembryonic antigen, CA199, CA242, CA125, and CA153, support vector machine and back-propagation neural network diagnosis models were built with diagnostic accuracies of 82 and 75%, sensitivities of 85 and 80%, and specificities of 80 and 70%, respectively. Colorectal cancer diagnosis models based on the three machine learning algorithms showed high diagnostic value and can help obtain evidence for the early diagnosis of colorectal cancer.
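The three model families compared here can be set up in a few lines with scikit-learn. This sketch uses synthetic data in place of the serum tumor markers, so the numbers it prints are not the study's results:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for 12 serum-marker features, two classes.
X, y = make_classification(n_samples=400, n_features=12, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("svm", SVC()),
                    ("bp-net", MLPClassifier(max_iter=2000, random_state=0))]:
    pred = model.fit(Xtr, ytr).predict(Xte)
    tn, fp, fn, tp = confusion_matrix(yte, pred).ravel()
    # sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)
    print(f"{name}: sensitivity={tp / (tp + fn):.2f}, "
          f"specificity={tn / (tn + fp):.2f}")
```

Reporting sensitivity and specificity from the confusion matrix, as above, matches how the abstract summarizes each model's diagnostic value.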

  5. Improving Department of Defense Global Distribution Performance Through Network Analysis

    DTIC Science & Technology

    2016-06-01


  6. Battery Performance Modelling and Simulation: a Neural Network Based Approach

    NASA Astrophysics Data System (ADS)

    Ottavianelli, Giuseppe; Donati, Alessandro

    2002-01-01

    This project developed against the background of ongoing research within the Control Technology Unit (TOS-OSC) of the Special Projects Division at the European Space Operations Centre (ESOC) of the European Space Agency. The purpose of this research is to develop and validate an Artificial Neural Network (ANN) tool able to model, simulate and predict the Cluster II battery system's performance degradation. (The Cluster II mission consists of four spacecraft flying in tetrahedral formation, aimed at observing and studying the interaction between the sun and the earth by passing in and out of our planet's magnetic field.) This prototype tool, named BAPER and developed with a commercial neural network toolbox, could be used to support short- and medium-term mission planning in order to improve and maximise battery lifetime, determining the best future charge/discharge cycles for the batteries given their present states, in view of a Cluster II mission extension. This study focuses on the five silver-cadmium batteries onboard Tango, the fourth Cluster II satellite, but time constraints have so far allowed an assessment of only the first battery. In their most basic form, ANNs are hyper-dimensional curve fits for non-linear data. With their remarkable ability to derive meaning from complicated or imprecise historical data, ANNs can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. ANNs learn by example, which is why they can be described as inductive, or data-based, models for the simulation of input/target mappings. A trained ANN can be thought of as an "expert" in the category of information it has been given to analyse, and this expert can then be used, as in this project, to provide projections for new situations of interest and answer "what if" questions. The most appropriate algorithm, in terms of training speed and memory storage requirements, is clearly the Levenberg

  7. Applying a social network analysis (SNA) approach to understanding radiologists' performance in reading mammograms

    NASA Astrophysics Data System (ADS)

    Tavakoli Taba, Seyedamir; Hossain, Liaquat; Heard, Robert; Brennan, Patrick; Lee, Warwick; Lewis, Sarah

    2017-03-01

    Rationale and objectives: Observer performance has been widely studied by examining the characteristics of individuals. From a systems perspective, however, understanding the system's output requires studying the interactions between observers. This research describes a mixed-methods approach that applies social network analysis (SNA) together with the more traditional approach of examining personal/individual characteristics in understanding observer performance in mammography. Materials and Methods: Using social network theories and measures to understand observer performance, we designed a social networks survey instrument for collecting personal and network data about observers involved in mammography performance studies. We present the results of a study by our group in which 31 Australian breast radiologists reviewed 60 mammographic cases (20 abnormal and 40 normal) and then completed an online questionnaire about their social networks and personal characteristics. A jackknife free-response operating characteristic (JAFROC) method was used to measure the performance of the radiologists. JAFROC was tested against various personal and network measures to verify the theoretical model. Results: The results from this study suggest a strong association between social networks and observer performance for Australian radiologists. Network factors accounted for 48% of the variance in observer performance, compared to 15.5% for personal characteristics, in this study group. Conclusion: This study suggests a strong new direction for research into improving observer performance. Future studies of observer performance should consider the influence of social networks as part of their research paradigm, with equal or greater vigour than the traditional constructs of personal characteristics.

  8. A Bayesian Network Approach to Modeling Learning Progressions and Task Performance. CRESST Report 776

    ERIC Educational Resources Information Center

    West, Patti; Rutstein, Daisy Wise; Mislevy, Robert J.; Liu, Junhui; Choi, Younyoung; Levy, Roy; Crawford, Aaron; DiCerbo, Kristen E.; Chappel, Kristina; Behrens, John T.

    2010-01-01

    A major issue in the study of learning progressions (LPs) is linking student performance on assessment tasks to the progressions. This report describes the challenges faced in making this linkage using Bayesian networks to model LPs in the field of computer networking. The ideas are illustrated with exemplar Bayesian networks built on Cisco…

  9. OPTIMAL CONFIGURATION OF A COMMAND AND CONTROL NETWORK: BALANCING PERFORMANCE AND RECONFIGURATION CONSTRAINTS

    SciTech Connect

    L. DOWELL

    1999-07-01

    The optimization of the configuration of communications and control networks is important for assuring the reliability and performance of the networks. This paper presents techniques for determining the optimal configuration for such a network in the presence of communication and connectivity constraints.

  10. Static internal performance of a thrust vectoring and reversing two-dimensional convergent-divergent nozzle with an aft flap

    NASA Technical Reports Server (NTRS)

    Re, R. J.; Leavitt, L. D.

    1986-01-01

    The static internal performance of a multifunction nozzle having some of the geometric characteristics of both two-dimensional convergent-divergent and single expansion ramp nozzles has been investigated in the static-test facility of the Langley 16-Foot Transonic Tunnel. The internal expansion portion of the nozzle consisted of two symmetrical flat surfaces of equal length, and the external expansion portion of the nozzle consisted of a single aft flap. The aft flap could be varied in angle independently of the upper internal expansion surface to which it was attached. The effects of internal expansion ratio, nozzle thrust-vector angle (-30 deg to 30 deg), aft flap shape, aft flap angle, and sidewall containment were determined for dry and afterburning power settings. In addition, a partial afterburning power setting nozzle, a fully deployed thrust reverser, and four vertical takeoff or landing nozzle configurations were investigated. Nozzle pressure ratio was varied up to 10 for the dry power nozzles and 7 for the afterburning power nozzles.

  11. Index Sets and Vectorization

    SciTech Connect

    Keasler, J A

    2012-03-27

    Vectorization is data parallelism (SIMD, SIMT, etc.): an extension of the ISA enabling the same instruction to be performed on multiple data items simultaneously. Many, if not most, CPUs support vectorization in some form. Vectorization is difficult to enable but can yield large efficiency gains. Extra programmer effort is required because: (1) not all algorithms can be vectorized (regular algorithm structure and fine-grain parallelism must be used); (2) most CPUs have data alignment restrictions for load/store operations (obey them or risk incorrect code); (3) special directives are often needed to enable vectorization; and (4) vector instructions are architecture-specific. Vectorization is the best way to optimize for power and performance due to reduced clock cycles. When data is organized properly, a vector load instruction (e.g., movaps) can replace several 'normal' load instructions (e.g., movsd). Vector operations can potentially have a smaller footprint in the instruction cache when fewer instructions need to be executed. Hybrid index sets insulate users from architecture-specific details. We have applied hybrid index sets to achieve optimal vectorization, and this concept can be extended to handle other programming models.
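The loop-versus-vector trade-off can be illustrated in NumPy, whose array operations play the same role for the interpreter that SIMD instructions such as movaps play for the CPU (the hybrid index-set machinery itself is implementation-specific and not shown):

```python
import numpy as np

n = 100_000
a = np.arange(n, dtype=np.float64)
b = np.arange(n, dtype=np.float64)

# Scalar-style loop: one operation issued per element.
out_loop = np.empty_like(a)
for i in range(n):
    out_loop[i] = a[i] + b[i]

# Vectorized: the same work expressed as a single data-parallel operation,
# which NumPy executes in compiled code (and the hardware can execute with
# vector loads/stores when alignment and layout permit).
out_vec = a + b

assert np.array_equal(out_loop, out_vec)
```

The two forms compute identical results; the vectorized form is both shorter and dramatically faster, which is the efficiency gain the abstract describes.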

  12. The Current State of Human Performance Technology: A Citation Network Analysis of "Performance Improvement Quarterly," 1988-2010

    ERIC Educational Resources Information Center

    Cho, Yonjoo; Jo, Sung Jun; Park, Sunyoung; Kang, Ingu; Chen, Zengguan

    2011-01-01

    This study conducted a citation network analysis (CNA) of human performance technology (HPT) to examine its current state of the field. Previous reviews of the field have used traditional research methods, such as content analysis, survey, Delphi, and citation analysis. The distinctive features of CNA come from using a social network analysis…

  13. Support vector machine to predict diesel engine performance and emission parameters fueled with nano-particles additive to diesel fuel

    NASA Astrophysics Data System (ADS)

    Ghanbari, M.; Najafi, G.; Ghobadian, B.; Mamat, R.; Noor, M. M.; Moosavian, A.

    2015-12-01

    This paper studies the use of an adaptive Support Vector Machine (SVM) to predict the performance parameters and exhaust emissions of a diesel engine operating on nanodiesel blended fuels. In order to predict the engine parameters, the whole experimental data set was randomly divided into training and testing data. For SVM modelling, different values of the radial basis function (RBF) kernel width and penalty parameter (C) were considered and the optimum values were then found. The results demonstrate that SVM is capable of predicting the diesel engine performance and emissions. In the experimental step, carbon nanotubes (CNT) (40, 80 and 120 ppm) and silver nanoparticles (40, 80 and 120 ppm) were prepared and added as additives to the diesel fuel. A six-cylinder, four-stroke diesel engine was fuelled with these new blended fuels and operated at different engine speeds. Experimental test results indicated that adding nanoparticles to diesel fuel increased diesel engine power and torque output. For nanodiesel it was found that the brake specific fuel consumption (bsfc) was decreased compared to neat diesel fuel. The results showed that with increasing nanoparticle concentration (from 40 to 120 ppm) in diesel fuel, CO2 emission increased. CO emission with nanoparticle fuels was significantly lower than with pure diesel fuel. UHC emission decreased with silver nanodiesel blends but increased with fuels containing CNT nanoparticles. The trend of NOx emission was the inverse of the UHC emission: with nanoparticles added to the blended fuels, NOx increased compared to neat diesel fuel. The tests revealed that silver and CNT nanoparticles can be used as additives in diesel fuel to improve complete combustion of the fuel and reduce the exhaust emissions significantly.
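The SVM modelling step, an RBF kernel whose width (gamma) and penalty parameter (C) are chosen by search, can be sketched with scikit-learn. The data and feature names below are synthetic stand-ins, not the engine measurements:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Two made-up normalized inputs (e.g. engine speed, additive concentration)
# and a smooth fake response standing in for torque.
X = rng.uniform(0, 1, size=(120, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.05, 120)

# Search over the RBF width (gamma) and penalty (C), as the abstract describes.
grid = GridSearchCV(SVR(kernel="rbf"),
                    {"C": [1, 10, 100], "gamma": [0.1, 1, 10]},
                    cv=5)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

A separate regressor would be fitted per output (power, torque, bsfc, each emission species), each with its own optimal (C, gamma) pair.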

  14. The Global Seismographic Network (GSN): Challenges and Methods for Maintaining High Quality Network Performance

    NASA Astrophysics Data System (ADS)

    Hafner, Katrin; Davis, Peter; Wilson, David; Sumy, Danielle; Woodward, Bob

    2016-04-01

    The Global Seismographic Network (GSN) is a 152 station, globally-distributed, permanent network of state-of-the-art seismological and geophysical sensors. The GSN has been operating for over 20 years via an ongoing successful partnership between IRIS, the USGS, the University of California at San Diego, NSF and numerous host institutions worldwide. The central design goal of the GSN may be summarized as "to record with full fidelity and bandwidth all seismic signals above the Earth noise, accompanied by some efforts to reduce Earth noise by deployment strategies". While many of the technical design goals have been met, we continue to strive for higher data quality with a combination of new sensors and improved installation techniques designed to achieve the lowest noise possible under existing site conditions. Data from the GSN are used not only for research, but on a daily basis as part of the operational missions of the USGS NEIC, NOAA tsunami warning centers, the Comprehensive Nuclear-Test-Ban-Treaty Organization as well as other organizations. In the recent period of very tight funding budgets, the primary challenges for the GSN include maintaining these operational capabilities while simultaneously developing and replacing the primary sensors, maintaining high quality data and repairing station infrastructure. Aging of GSN equipment and station infrastructure has resulted in renewed emphasis on developing, evaluating and implementing quality control tools such as MUSTANG and DQA to maintain the high data quality from the GSN stations. These tools allow the network operators to routinely monitor and analyze waveform data to detect and track problems and develop action plans as issues are found. We will present summary data quality metrics for the GSN as obtained via these quality assurance tools. In recent years, the GSN has standardized dataloggers to the Quanterra Q330HR data acquisition system at all but three stations resulting in significantly improved

  15. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    PubMed Central

    2011-01-01

    Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning methods, like Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve, and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (median (Me) = 0.76) and an area under the ROC curve of Me = 0.90. However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with a high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with an acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). 
The remaining classifiers showed

  16. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests.

    PubMed

    Maroco, João; Silva, Dina; Rodrigues, Ana; Guerreiro, Manuela; Santana, Isabel; de Mendonça, Alexandre

    2011-08-17

Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures for Mild Cognitive Impairment (MCI), but presently has limited value in predicting progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven non-parametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press's Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Press's Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (Median (Me) = 0.76) and area under the ROC curve (Me = 0.90). However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed overall
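The metrics compared above (overall accuracy, sensitivity, specificity and Press's Q) can all be computed directly from a confusion matrix. A minimal sketch, with illustrative counts rather than the study's data:

```python
# Classification metrics for a binary (dementia / no-dementia) confusion matrix.
# Press's Q tests whether a classifier beats chance:
#   Q = (N - n*K)^2 / (N*(K - 1)), chi-square distributed with 1 df.
# The counts below are assumed for illustration only.

def metrics(tp, fn, fp, tn, k=2):
    n_total = tp + fn + fp + tn       # N: total cases
    n_correct = tp + tn               # n: correctly classified cases
    accuracy = n_correct / n_total
    sensitivity = tp / (tp + fn)      # true-positive rate
    specificity = tn / (tn + fp)      # true-negative rate
    press_q = (n_total - n_correct * k) ** 2 / (n_total * (k - 1))
    return accuracy, sensitivity, specificity, press_q

acc, sens, spec, q = metrics(tp=32, fn=18, fp=0, tn=50)
```

Values of Q above the chi-square critical value (3.84 for 1 df at p < 0.05) indicate better-than-chance classification, which is the criterion the abstract reports all ten classifiers passing.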

  17. Performance evaluation of a burst-mode EDFA in an optical packet and circuit integrated network.

    PubMed

    Shiraiwa, Masaki; Awaji, Yoshinari; Furukawa, Hideaki; Shinada, Satoshi; Puttnam, Benjamin J; Wada, Naoya

    2013-12-30

We experimentally investigate the performance of a burst-mode EDFA in an optical packet and circuit integrated system. In such networks, packets and light paths can be dynamically assigned to the same fibers, resulting in gain transients in EDFAs throughout the network that can limit network performance. Here, we compare the performance of a 'burst-mode' EDFA (BM-EDFA), employing transient suppression techniques and optical feedback, with conventional EDFAs, EDFAs using automatic gain control, and previous BM-EDFA implementations. We first measure gain transients and other impairments in a simplified set-up before making frame error-rate measurements in a network demonstration.

  18. Fault tolerant high-performance PACS network design and implementation

    NASA Astrophysics Data System (ADS)

    Chimiak, William J.; Boehme, Johannes M.

    1998-07-01

The Wake Forest University School of Medicine and the Wake Forest University/Baptist Medical Center (WFUBMC) are implementing a second generation PACS. The first generation PACS provided helpful information about the functional and temporal requirements of the system. It highlighted the importance of image retrieval speed, system availability, RIS/HIS integration, the ability to rapidly view images on any PACS workstation, network bandwidth, equipment redundancy, and the ability for the system to evolve using standards-based components. This paper deals with the network design and implementation of the PACS. The physical layout of the hospital areas served by the PACS, the choice of network equipment, and installation issues encountered are addressed. Efforts to optimize fault tolerance are discussed. The PACS network is a gigabit, mixed-media network based on LAN emulation over ATM (LANE), with a rapid migration from LANE to Multiple Protocols Over ATM (MPOA) planned. Two fault-tolerant backbone ATM switches serve to distribute network accesses with two load-balancing 622 megabit per second (Mbps) OC-12 interconnections. The switch was sized to be upgradable to provide a 2.5 Gbps OC-48 interconnection with an OC-12 interconnection as a load-balancing backup. Modalities connect with legacy network interface cards to a switched-ethernet device. This device has two 155 Mbps OC-3 load-balancing uplinks to each of the backbone ATM switches of the PACS. This provides a fault-tolerant logical connection to the modality servers, which pass verified DICOM images to the PACS servers and the proper PACS diagnostic workstations. Where fiber pulls were prohibitively expensive, edge ATM switches were installed with an OC-12 uplink to a backbone ATM switch. The PACS and database servers are fault-tolerant, hot-swappable Sun Enterprise Servers with an OC-12 connection to a backbone ATM switch and a fast-ethernet connection to a back-up network. The workstations come with 10

  19. Multicast Performance Analysis for High-Speed Torus Networks

    DTIC Science & Technology

    2002-01-01

    wormhole routing. Lin and Ni [13] were the first to introduce and investigate the path-based multicasting approach. Subsequently, path-based...proposed as a solution to the multicast communication problem for generic, wormhole -routed, direct unidirectional and bi- directional torus networks...More details about path-based multicast algorithms for wormhole -routed networks can be found in the survey of Li and McKinley [15]. Tree- based

  20. Performance of Wireless Networks Subject to Constraints and Failures

    DTIC Science & Technology

    2008-01-01

    in the Graduate College of the University of Illinois at Urbana-Champaign, 2008 Urbana, Illinois Doctoral Committee: Professor Nitin H. Vaidya, Chair...to my advisor Prof. Nitin Vaidya. As his student, I have had the freedom to seek my trajectory, while always having access to his advice. My frequent...computing and networking, pages 216–230. ACM Press, 2004. [5] Vartika Bhandari and Nitin H. Vaidya. On reliable broadcast in a radio network. In PODC ’05

  1. Modeling and Performance Evaluation of Backoff Misbehaving Nodes in CSMA/CA Networks

    DTIC Science & Technology

    2012-08-01

    range of backoff misbehaviors on network performance in CSMA/CA-based wireless networks. 15. SUBJECT TERMS 16. SECURITY CLASSIFICATION OF: 17...Layer Misbehavior in Wireless Networks," ACM Trans. Information and Systems Security, vol. 11, no. 4, pp. 19:1-19:28, July 2008. [10] S. Choi, K...Park, and C. kwon Kim, "On the Performance Characteristics of WLANs: Revisited," Proc. ACM SIGMETRICS Int'l Conf. Measurement and Modeling of Computer

  2. The Moderating Effect of Psychological Empowerment on the Relationship between Network Centrality and Individual Job Performance

    DTIC Science & Technology

    2012-03-22

    T. R., Holtom, B. C., Lee, T. W., Sablynski, C. J., & Erez, M. (2001). Why People Stay: Using Job Embeddedness to Predict Voluntary Turnover. Academy...THE MODERATING EFFECT OF PSYCHOLOGICAL EMPOWERMENT ON THE RELATIONSHIP BETWEEN NETWORK CENTRALITY AND INDIVIDUAL JOB PERFORMANCE...BETWEEN NETWORK CENTRALITY AND INDIVIDUAL JOB PERFORMANCE THESIS Presented to the Faculty Department of Systems and Engineering

  3. Performance analysis of an iSCSI-based unified storage network.

    PubMed

    Fu, Xiang-lin; Zhang, Kun; Xie, Chang-sheng

    2004-01-01

In this paper, we introduce a novel storage architecture, the "Unified Storage Network" (USN), which merges NAS (Network Attached Storage) and SAN (Storage Area Network): it provides file I/O services like NAS devices and block I/O services like a SAN. To overcome the drawbacks of FC, we employ iSCSI to implement the USN. To evaluate whether iSCSI is the more suitable choice for implementing the USN, we analyze the iSCSI protocol and compare it with the FC protocol across several components of a network protocol that impact network performance. From this analysis and comparison, we conclude that iSCSI is more suitable than FC for implementing the storage network under wide-area network conditions. Finally, we carefully designed two groups of experiments.

  4. Performance optimisation through EPT-WBC in mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Agarwal, Ratish; Gupta, Roopam; Motwani, Mahesh

    2016-03-01

    Mobile ad hoc networks are self-organised, infrastructure-less networks in which each mobile host works as a router to provide connectivity within the network. Nodes out of reach to each other can communicate with the help of intermediate routers (nodes). Routing protocols are the rules which determine the way in which these routing activities are to be performed. In cluster-based architecture, some selected nodes (clusterheads) are identified to bear the extra burden of network activities like routing. Selection of clusterheads is a critical issue which significantly affects the performance of the network. This paper proposes an enhanced performance and trusted weight-based clustering approach in which a number of performance factors such as trust, load balancing, energy consumption, mobility and battery power are considered for the selection of clusterheads. Moreover, the performance of the proposed scheme is compared with other existing approaches to demonstrate the effectiveness of the work.
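A weight-based clusterhead election of the kind described can be sketched as a scalar score per node combining the listed factors; the factor weights and node values below are illustrative assumptions, not the paper's parameters:

```python
# Hypothetical weight-based clusterhead election: each node gets a combined
# weight from trust, load, energy consumption, mobility and battery level;
# the node with the lowest weight is elected. All numbers are assumed.

def combined_weight(node, w=(0.3, 0.2, 0.2, 0.2, 0.1)):
    # High trust and battery are desirable, so they enter inverted;
    # low load, energy use and mobility are desirable, so they enter directly.
    return (w[0] * (1 - node["trust"])
            + w[1] * node["load"]
            + w[2] * node["energy_used"]
            + w[3] * node["mobility"]
            + w[4] * (1 - node["battery"]))

def elect_clusterhead(nodes):
    return min(nodes, key=combined_weight)["id"]

nodes = [
    {"id": "A", "trust": 0.9, "load": 0.2, "energy_used": 0.3, "mobility": 0.1, "battery": 0.8},
    {"id": "B", "trust": 0.5, "load": 0.6, "energy_used": 0.7, "mobility": 0.5, "battery": 0.4},
]
head = elect_clusterhead(nodes)
```

The same structure accommodates other factor sets or weightings; only the scoring function changes.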

  5. The performance evaluation of a new neural network based traffic management scheme for a satellite communication network

    NASA Technical Reports Server (NTRS)

    Ansari, Nirwan; Liu, Dequan

    1991-01-01

    A neural-network-based traffic management scheme for a satellite communication network is described. The scheme consists of two levels of management. The front end of the scheme is a derivation of Kohonen's self-organization model to configure maps for the satellite communication network dynamically. The model consists of three stages. The first stage is the pattern recognition task, in which an exemplar map that best meets the current network requirements is selected. The second stage is the analysis of the discrepancy between the chosen exemplar map and the state of the network, and the adaptive modification of the chosen exemplar map to conform closely to the network requirement (input data pattern) by means of Kohonen's self-organization. On the basis of certain performance criteria, whether a new map is generated to replace the original chosen map is decided in the third stage. A state-dependent routing algorithm, which arranges the incoming call to some proper path, is used to make the network more efficient and to lower the call block rate. Simulation results demonstrate that the scheme, which combines self-organization and the state-dependent routing mechanism, provides better performance in terms of call block rate than schemes that only have either the self-organization mechanism or the routing mechanism.
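The Kohonen self-organization step at the heart of such a scheme (find the best-matching unit, then pull it and its neighbours toward the input) can be sketched in one dimension; the learning rate, radius and weights are illustrative, not the paper's configuration:

```python
# One Kohonen self-organization update on a 1-D map of unit weights.
# All parameter values here are assumed for illustration.

def som_update(weights, x, lr=0.5, radius=1):
    # best-matching unit: the unit whose weight is closest to the input
    bmu = min(range(len(weights)), key=lambda i: abs(weights[i] - x))
    # pull the BMU and its neighbours within `radius` toward the input
    for i in range(len(weights)):
        if abs(i - bmu) <= radius:
            weights[i] += lr * (x - weights[i])
    return bmu

w = [0.0, 5.0, 10.0]
bmu = som_update(w, 6.0)   # unit 1 wins; units 0, 1, 2 all move toward 6.0
```

Repeated over many inputs with decaying `lr` and `radius`, the map organizes itself to reflect the input distribution, which is what lets an exemplar map be adapted toward the current network state.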

  7. Impact of Network Activity Levels on the Performance of Passive Network Service Dependency Discovery

    SciTech Connect

    Carroll, Thomas E.; Chikkagoudar, Satish; Arthur-Durett, Kristine M.

    2015-11-02

    Network services often do not operate alone, but instead, depend on other services distributed throughout a network to correctly function. If a service fails, is disrupted, or degraded, it is likely to impair other services. The web of dependencies can be surprisingly complex---especially within a large enterprise network---and evolve with time. Acquiring, maintaining, and understanding dependency knowledge is critical for many network management and cyber defense activities. While automation can improve situation awareness for network operators and cyber practitioners, poor detection accuracy reduces their confidence and can complicate their roles. In this paper we rigorously study the effects of network activity levels on the detection accuracy of passive network-based service dependency discovery methods. The accuracy of all except for one method was inversely proportional to network activity levels. Our proposed cross correlation method was particularly robust to the influence of network activity. The proposed experimental treatment will further advance a more scientific evaluation of methods and provide the ability to determine their operational boundaries.
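The cross-correlation idea (a dependent service's events echo another's at a small, consistent lag) can be sketched with binary event series; this is a simplified illustration, not the authors' implementation:

```python
# Lagged cross-correlation of two binary event time series. A strong peak at
# some positive lag suggests service B depends on service A. The series and
# lag range below are illustrative assumptions.

def cross_correlation(x, y, lag):
    # correlate x against y shifted `lag` samples into the future
    n = len(x) - lag
    return sum(x[i] * y[i + lag] for i in range(n))

service_a = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]   # requests observed on A
service_b = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0]   # B fires one step after A
best_lag = max(range(3), key=lambda l: cross_correlation(service_a, service_b, l))
```

In practice the peak would be compared against a noise threshold, since background traffic (the "network activity levels" studied above) adds spurious correlations at all lags.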

  8. Geophysical Inversion of Thermospheric Wind Observations by a Multistatic Network of Imaging Fabry-Perot Spectrometers Across Alaska, to Produce 2-Dimensional Maps of Three-Component Wind Vectors at 250 km Altitude.

    NASA Astrophysics Data System (ADS)

    Elliott, J.; Conde, M.

    2016-12-01

Near-simultaneous measurements of line-of-sight (LOS) wind speed at 250 km altitude are collected from a network of three Scanning Doppler Imagers (SDIs) distributed across Alaska. Each instrument views the sky down to 70 degrees zenith angle, corresponding to a geographic region roughly 1000 km in diameter. Because there is considerable overlap of the three fields of view, it is possible to invert the line-of-sight component measurements to produce extended two-dimensional maps of the three-component vector wind field. Existing algorithms for this inversion employ either bistatic triangulation or multistatic basis function fitting. Bistatic triangulation resolves finer structure in the actual thermospheric wind field at 250 km altitude than basis functions can capture, but it is sensitive to noise and artifacting, and is only feasible over limited geographic regions where the bistatic geometry is favorable. We have therefore developed a self-tuned geophysical inversion technique using gradient and curvature penalties, to resolve the finest possible structure over the widest possible geographic region. L1/L2 regularization in combination with L1/L2 normalization is explored, along with variance-based cost additions to the objective functions involved. A Moore-Penrose Truncated Singular Value Decomposition (TSVD) provides a frequency-independent pseudo-inversion technique. Synthetic wind fields are generated with added Gaussian noise. Simple analytic functions (sines, hyperbolic tangents, Gaussian distributions, and constants) are used to generate model wind fields with and without large-scale wind shears. These fields are sampled into their single line-of-sight components, as would be observed by our actual instruments, and the algorithm's ability to reconstruct the original (known) three-component vector wind fields from these samples is then analyzed.
Finally, we reconstruct wind fields using real data from our Alaskan SDI network and compare them to other data
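The simplest case of such an inversion, bistatic triangulation, recovers a horizontal wind vector from two line-of-sight projections along known unit vectors; a minimal sketch with an assumed wind (the full technique adds regularization and TSVD to handle noise and poor geometry):

```python
import math

# Recover a 2-D horizontal wind (u, v) from two LOS speed measurements,
# each being the projection of the wind onto a known unit vector.
# The wind and viewing geometry below are assumed for illustration.

def solve_wind(los_dirs, los_speeds):
    # Solve the 2x2 linear system A w = s, rows of A being LOS unit vectors.
    (a, b), (c, d) = los_dirs
    s1, s2 = los_speeds
    det = a * d - b * c          # near-zero det = unfavorable bistatic geometry
    u = (s1 * d - b * s2) / det
    v = (a * s2 - s1 * c) / det
    return u, v

true_wind = (100.0, -50.0)       # m/s, east and north components (assumed)
dirs = [(1.0, 0.0), (math.cos(math.radians(60)), math.sin(math.radians(60)))]
speeds = [d[0] * true_wind[0] + d[1] * true_wind[1] for d in dirs]
u, v = solve_wind(dirs, speeds)
```

When `det` approaches zero the two viewing directions are nearly parallel and the solution amplifies noise, which is exactly the geometric limitation that motivates the regularized multistatic inversion described above.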

  9. An efficient learning algorithm for improving generalization performance of radial basis function neural networks.

    PubMed

    Wang, Z O; Zhu, T

    2000-01-01

This paper presents an efficient recursive learning algorithm for improving the generalization performance of radial basis function (RBF) neural networks. The approach combines rival penalized competitive learning (RPCL) [Xu, L., Krzyzak, A. & Oja, E. (1993). Rival penalized competitive learning for clustering analysis, RBF net and curve detection. IEEE Transactions on Neural Networks, 4, 636-649] and regularized least squares (RLS) to provide an efficient and powerful procedure for constructing a minimal RBF network that generalizes very well. The RPCL selects the number of hidden units of the network and adjusts the centers, while the RLS constructs the parsimonious network and estimates the connection weights. For the RLS we derived a simple recursive algorithm which needs no matrix calculation, and so largely reduces the computational cost. This combined algorithm significantly enhances the generalization performance and the real-time capability of RBF networks. Simulation results on three different problems demonstrate much better generalization performance of the present algorithm over other existing similar algorithms.
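The least-squares stage can be illustrated by fitting the output weights of fixed Gaussian basis functions; this batch normal-equations sketch is a simplification of the paper's recursive RLS, with assumed centers and data:

```python
import math

# Fit the output weights of a 2-unit Gaussian RBF network by least squares.
# Centers, width and training data are assumed; data are generated from known
# weights so the fit can be checked exactly.

def rbf(x, c, width=1.0):
    return math.exp(-((x - c) ** 2) / (2 * width ** 2))

centers = [0.0, 2.0]
true_w = [2.0, 1.0]
xs = [-1.0, 0.0, 0.5, 1.0, 2.0, 3.0]
ys = [sum(w * rbf(x, c) for w, c in zip(true_w, centers)) for x in xs]

# Normal equations (A^T A) w = A^T y for the 2-column design matrix,
# solved in closed form for the 2x2 case.
a11 = sum(rbf(x, centers[0]) ** 2 for x in xs)
a12 = sum(rbf(x, centers[0]) * rbf(x, centers[1]) for x in xs)
a22 = sum(rbf(x, centers[1]) ** 2 for x in xs)
b1 = sum(rbf(x, centers[0]) * y for x, y in zip(xs, ys))
b2 = sum(rbf(x, centers[1]) * y for x, y in zip(xs, ys))
det = a11 * a22 - a12 * a12
w1 = (b1 * a22 - a12 * b2) / det
w2 = (a11 * b2 - b1 * a12) / det
```

The paper's contribution is doing this recursively, one sample at a time and without matrix inversion, while RPCL simultaneously decides how many centers to keep.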

  10. Performance Evolution of IEEE 802.11b Wireless Local Area Network

    NASA Astrophysics Data System (ADS)

    Malik, Deepak; Singhal, Ankur

    2011-12-01

A wireless network can be employed to connect a wired network to wireless clients. Wireless local area networks (WLANs) are more bandwidth-limited than wired networks because they rely on an inexpensive, but error-prone, physical medium (air). Hence it is important to evaluate their performance. This paper presents a study of the IEEE 802.11b wireless LAN (WLAN). The performance evaluation is presented via a series of tests with different parameters such as data rate, number of nodes and physical characteristics. The quality-of-service parameters chosen are throughput, media access delay and dropped data packets. The simulation results show that an IEEE 802.11b WLAN can support up to 60 clients with modest throughput. Finally, the results are compared to evaluate the performance of wireless local area networks.

  11. DISCRETE EVENT SIMULATION OF OPTICAL SWITCH MATRIX PERFORMANCE IN COMPUTER NETWORKS

    SciTech Connect

    Imam, Neena; Poole, Stephen W

    2013-01-01

In this paper, we present the application of a Discrete Event Simulator (DES) for performance modeling of optical switching devices in computer networks. Network simulators are valuable tools in situations where one cannot investigate the system directly. This situation may arise if the system under study does not exist yet or the cost of studying the system directly is prohibitive. Most available network simulators are based on the paradigm of discrete-event-based simulation. As computer networks become increasingly larger and more complex, sophisticated DES tool chains have become available for both commercial and academic research. Some well-known simulators are NS2, NS3, OPNET, and OMNEST. For this research, we have applied OMNEST for the purpose of simulating multi-wavelength performance of optical switch matrices in computer interconnection networks. Our results suggest that the application of DES to computer interconnection networks provides valuable insight into device performance and aids in topology and system optimization.
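The discrete-event paradigm these simulators share can be sketched with a priority queue of timestamped events; here a single-server switch port with a fixed service time, an illustrative toy rather than anything OMNEST-specific:

```python
import heapq

# Minimal discrete-event simulation: packets arrive at given times and are
# served one at a time with a fixed service time. Events are processed in
# timestamp order from a heap. Arrival times and service time are assumed.

def simulate(arrivals, service_time):
    events = [(t, "arrival", i) for i, t in enumerate(arrivals)]
    heapq.heapify(events)
    free_at = 0.0       # time at which the server next becomes free
    departures = {}
    while events:
        t, kind, i = heapq.heappop(events)
        if kind == "arrival":
            start = max(t, free_at)          # wait if the server is busy
            free_at = start + service_time
            heapq.heappush(events, (free_at, "departure", i))
        else:
            departures[i] = t                # record when packet i left
    return departures

dep = simulate([0.0, 0.1, 0.5], service_time=0.3)
```

Real simulators add random arrival processes, multiple servers and queues, and statistics collection, but the loop (pop the earliest event, update state, schedule future events) is the same.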

  12. Testing the Feasibility of a Low-Cost Network Performance Measurement Infrastructure

    SciTech Connect

    Chevalier, Scott; Schopf, Jennifer M.; Miller, Kenneth; Zurawski, Jason

    2016-07-01

Today's science collaborations depend on reliable, high-performance networks, but monitoring the end-to-end performance of a network can be costly and difficult. The most accurate approaches involve deploying measurement equipment in many locations, which can be both expensive and difficult to manage due to immobile or complicated assets. The perfSONAR framework facilitates network measurement, making management of the tests more reasonable. Traditional deployments have used over-provisioned servers, which can be expensive to deploy and maintain. As scientific network uses proliferate, there is a desire to instrument more facets of a network to better understand trends. This work explores low-cost alternatives to assist with network measurement. Benefits include the ability to deploy more resources quickly, and reduced capital and operating expenditures. Finally, we present candidate platforms and a testing scenario that evaluated the relative merits of four types of small form factor equipment to deliver accurate performance measurements.

  13. International network for capacity building for the control of emerging viral vector-borne zoonotic diseases: ARBO-ZOONET.

    PubMed

    Ahmed, J; Bouloy, M; Ergonul, O; Fooks, Ar; Paweska, J; Chevalier, V; Drosten, C; Moormann, R; Tordo, N; Vatansever, Z; Calistri, P; Estrada-Pena, A; Mirazimi, A; Unger, H; Yin, H; Seitzer, U

    2009-03-26

    Arboviruses are arthropod-borne viruses, which include West Nile fever virus (WNFV), a mosquito-borne virus, Rift Valley fever virus (RVFV), a mosquito-borne virus, and Crimean-Congo haemorrhagic fever virus (CCHFV), a tick-borne virus. These arthropod-borne viruses can cause disease in different domestic and wild animals and in humans, posing a threat to public health because of their epidemic and zoonotic potential. In recent decades, the geographical distribution of these diseases has expanded. Outbreaks of WNF have already occurred in Europe, especially in the Mediterranean basin. Moreover, CCHF is endemic in many European countries and serious outbreaks have occurred, particularly in the Balkans, Turkey and Southern Federal Districts of Russia. In 2000, RVF was reported for the first time outside the African continent, with cases being confirmed in Saudi Arabia and Yemen. This spread was probably caused by ruminant trade and highlights that there is a threat of expansion of the virus into other parts of Asia and Europe. In the light of global warming and globalisation of trade and travel, public interest in emerging zoonotic diseases has increased. This is especially evident regarding the geographical spread of vector-borne diseases. A multi-disciplinary approach is now imperative, and groups need to collaborate in an integrated manner that includes vector control, vaccination programmes, improved therapy strategies, diagnostic tools and surveillance, public awareness, capacity building and improvement of infrastructure in endemic regions.

  14. Image Coding Based on Address Vector Quantization.

    NASA Astrophysics Data System (ADS)

    Feng, Yushu

Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; only the index is sent to the channel. Reconstruction of the image is done using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network for codebook design. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. To overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme but at about 1/2 to 1/3 the bit rate of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed.
In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self-Organizing
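The basic VQ encode/decode cycle described above (nearest-codeword search, index transmission, table-lookup reconstruction) can be sketched as follows; the codebook and input vector are illustrative:

```python
# Plain vector quantization: transmit only the index of the nearest codeword,
# reconstruct by table lookup. Codebook and input are assumed toy values.

def encode(vector, codebook):
    # index of the codeword with minimum squared Euclidean distance
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: d2(vector, codebook[i]))

def decode(index, codebook):
    return codebook[index]   # table lookup: the index is the address

codebook = [(0, 0), (10, 10), (20, 0)]
idx = encode((9, 11), codebook)
reconstructed = decode(idx, codebook)
```

Address VQ builds on this by exploiting correlation between the indices (addresses) of neighboring image blocks, so frequently co-occurring index patterns cost fewer bits than independent indices would.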

  15. Distinguishing Parkinson's disease from atypical parkinsonian syndromes using PET data and a computer system based on support vector machines and Bayesian networks.

    PubMed

    Segovia, Fermín; Illán, Ignacio A; Górriz, Juan M; Ramírez, Javier; Rominger, Axel; Levin, Johannes

    2015-01-01

Differentiating between Parkinson's disease (PD) and atypical parkinsonian syndromes (APS) is still a challenge, especially at early stages when the patients show similar symptoms. In recent years, several computer systems have been proposed in order to improve the diagnosis of PD, but their accuracy is still limited. In this work we demonstrate a fully automatic computer system to assist the diagnosis of PD using (18)F-DMFP PET data. First, a few regions of interest are selected by means of a two-sample t-test. The accuracy of the selected regions in separating PD from APS patients is then computed using a support vector machine classifier. The accuracy values are finally used to train a Bayesian network that can predict the class of new, unseen data. This methodology was evaluated using a database with 87 neuroimages, achieving accuracy rates over 78%. A fair comparison with other similar approaches is also provided.
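The region-selection step can be sketched with a pooled two-sample t statistic per region; the region names, intensity values and threshold below are illustrative, not the study's data:

```python
import statistics

# Two-sample t statistic (pooled variance) per brain region; regions whose
# group difference exceeds a threshold are kept for the classifier stage.
# All values and the threshold are assumed for illustration.

def t_statistic(a, b):
    na, nb = len(a), len(b)
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    sa2, sb2 = statistics.variance(a), statistics.variance(b)
    sp2 = ((na - 1) * sa2 + (nb - 1) * sb2) / (na + nb - 2)  # pooled variance
    return (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5

pd_group  = {"putamen": [4.1, 4.3, 4.0, 4.2], "cortex": [2.0, 2.1, 1.9, 2.0]}
aps_group = {"putamen": [3.0, 3.1, 2.9, 3.2], "cortex": [2.0, 2.2, 1.8, 2.1]}
selected = [r for r in pd_group
            if abs(t_statistic(pd_group[r], aps_group[r])) > 3.0]
```

In the pipeline above, each selected region's SVM accuracy then becomes an input feature for the Bayesian network that makes the final PD/APS call.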

  16. Vector network analyzer measurement of the amplitude of an electrically excited surface acoustic wave and validation by X-ray diffraction

    NASA Astrophysics Data System (ADS)

    Camara, I. S.; Croset, B.; Largeau, L.; Rovillain, P.; Thevenard, L.; Duquesne, J.-Y.

    2017-01-01

Surface acoustic waves are used in magnetism to initiate magnetization switching, in microfluidics to control fluids and particles in lab-on-a-chip devices, and in quantum systems like two-dimensional electron gases, quantum dots, photonic cavities, and single carrier transport systems. For all these applications, an easy-to-use tool is needed to measure the acoustic wave amplitude precisely, in order to understand the underlying physics and/or to optimize the device used to generate the acoustic waves. We present here a method to determine experimentally the amplitude of surface acoustic waves propagating on Gallium Arsenide generated by an interdigitated transducer. It relies on Vector Network Analyzer measurements of S parameters and modeling using the Coupling-Of-Modes theory. The displacements obtained are in excellent agreement with those measured by a very different method based on X-ray diffraction measurements.

  17. Performance evaluation of a holographic optical neural network system

    NASA Astrophysics Data System (ADS)

    Lu, Thomas T.; Kostrzewski, Andrew A.; Chou, Hung; Wu, Shudong; Lin, Freddie S.

    1993-02-01

One of the most outstanding properties of artificial neural networks is their capability for massive interconnection and parallel processing. Recently, specialized electronic neural network processors and VLSI neural chips have been introduced to the commercial market. The number of parallel channels they can handle is limited by the limited number of parallel interconnections achievable with one-dimensional (1-D) electronic wires. High-resolution pattern recognition problems may require a large number of neurons for parallel processing of the image. The holographic optical neural network (HONN), based on high-resolution volume holographic materials, is capable of providing 3-D massive parallel interconnection of tens of thousands of neurons. A HONN with 3600 neurons, contained in a portable briefcase, has been developed. Rotation-shift-scale-invariant pattern recognition operations have been demonstrated with this system. System parameters, such as signal-to-noise ratio, dynamic range, and processing speed, will be discussed.

  18. Issues in performing a network meta-analysis.

    PubMed

    Senn, Stephen; Gavini, Francois; Magrez, David; Scheen, André

    2013-04-01

The example of the analysis of a collection of trials in diabetes, consisting of a sparsely connected network of 10 treatments, is used to make some points about approaches to analysis. In particular, various graphical and tabular presentations, both of the network and of the results, are provided, and the connection to the literature of incomplete blocks is made. It is clear from this example that it is inappropriate to treat the main effect of trial as random, and the implications of this for analysis are discussed. It is also argued that the generalisation from a classic random-effect meta-analysis to one applied to a network usually involves strong assumptions about the variance components involved. Despite this, it is concluded that such an analysis can be a useful way of exploring a set of trials.

  19. Blocking performance approximation in flexi-grid networks

    NASA Astrophysics Data System (ADS)

    Ge, Fei; Tan, Liansheng

    2016-12-01

The blocking probability of path requests is an important issue in flexible-bandwidth optical communications. In this paper, we propose a method for approximating the blocking probability of path requests in flexi-grid networks. It models the bundled neighboring-carrier allocation with a group of birth-death processes and provides a theoretical analysis of the blocking probability under variable-bandwidth traffic. The numerical results show the effect of traffic parameters on the blocking probability of path requests. In simulations, we use the first-fit algorithm in network nodes to allocate neighboring carriers to path requests, and verify the approximation results.
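A birth-death blocking model of this kind reduces, for a single pool of identical carriers, to the Erlang B form: the stationary probability that all carriers are occupied when a request arrives. A minimal sketch with an assumed offered load (the paper's model generalizes this to bundled neighboring carriers and variable bandwidths):

```python
# Blocking probability from the stationary distribution of a birth-death
# process: arrivals at rate lambda, per-carrier service rate mu,
# offered_load = lambda / mu, at most `carriers` occupied at once (Erlang B).
# The load and carrier count below are assumed for illustration.

def blocking_probability(offered_load, carriers):
    # unnormalized stationary probabilities p_k = offered_load^k / k!
    probs = [1.0]
    for k in range(1, carriers + 1):
        probs.append(probs[-1] * offered_load / k)
    total = sum(probs)
    return probs[-1] / total   # probability the system is full

b = blocking_probability(offered_load=2.0, carriers=4)
```

The recursive form `probs[-1] * offered_load / k` avoids computing factorials directly, which keeps the calculation stable for large carrier counts.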

  20. An Implementation of Traffic Monitoring for UNIX Network Performance Management

    DTIC Science & Technology

    1993-03-01

    [OCR-garbled C source excerpt: linked-list handling of traffic-statistics report node records, e.g. display_long_term_statistics_report(head_node_rec_dltsr, tail_node_rec_dltsr) and head/tail node pointer updates]

  1. microRNA as a Potential Vector for the Propagation of Robustness in Protein Expression and Oscillatory Dynamics within a ceRNA Network

    PubMed Central

    Gérard, Claude; Novák, Béla

    2013-01-01

    microRNAs (miRNAs) are small noncoding RNAs that are important post-transcriptional regulators of gene expression. miRNAs can induce thresholds in protein synthesis. Such thresholds in protein output can also be achieved by oligomerization of transcription factors (TF) for the control of gene expression. First, we propose a minimal model for protein expression regulated by miRNA and by oligomerization of TF. We show that miRNA and oligomerization of TF generate a buffer, which increases the robustness of protein output towards molecular noise as well as towards random variation of kinetic parameters. Next, we extend the model by considering that the same miRNA can bind to multiple messenger RNAs, which accounts for the dynamics of a minimal competing endogenous RNA (ceRNA) network. The model shows that, through common miRNA regulation, TF can control the expression of all proteins formed by the ceRNA network, even if it drives the expression of only one gene in the network. The model further suggests that the threshold in protein synthesis mediated by the oligomerization of TF can be propagated to the other genes, which can increase the robustness of the expression of all genes in such a ceRNA network. Furthermore, we show that a miRNA could increase the time delay of a “Goodwin-like” oscillator model, which may favor the occurrence of oscillations of large amplitude. This result predicts important roles of miRNAs in the control of the molecular mechanisms leading to the emergence of biological rhythms. Moreover, a model for the latter oscillator embedded in a ceRNA network indicates that the oscillatory behavior can be propagated, via the shared miRNA, to all proteins formed by such a ceRNA network. Thus, by means of computational models, we show that miRNAs could act as vectors allowing the propagation of robustness in protein synthesis as well as oscillatory behaviors within ceRNA networks. PMID:24376695
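    The threshold behavior described (cooperative TF binding plus miRNA titration of mRNA) can be caricatured in a few lines. This is a toy steady-state formula, not the paper's ODE model; every parameter is an assumption, and protein output is taken as simply proportional to free mRNA.

```python
def steady_state_output(tf_conc, mirna_total, hill_n=4, K=1.0,
                        k_transcription=1.0):
    """Toy steady state: transcription is a Hill function of TF
    concentration (oligomerization => cooperativity hill_n), and miRNA
    titrates mRNA away below a threshold (strong-binding approximation).
    Returns free mRNA, taken as proportional to protein output."""
    mrna_made = (k_transcription * tf_conc ** hill_n
                 / (K ** hill_n + tf_conc ** hill_n))
    free_mrna = max(0.0, mrna_made - mirna_total)   # titration threshold
    return free_mrna
```

Below the titration point the output is zero; above it, output rises with TF, which is the qualitative threshold the abstract describes.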

  2. Performance Impacts of Lower-Layer Cryptographic Methods in Mobile Wireless Ad Hoc Networks

    SciTech Connect

    VAN LEEUWEN, BRIAN P.; TORGERSON, MARK D.

    2002-10-01

    In high consequence systems, all layers of the protocol stack need security features. If network and data-link layer control messages are not secured, a network may be open to adversarial manipulation. The open nature of the wireless channel makes mobile wireless ad hoc networks (MANETs) especially vulnerable to control plane manipulation. The objective of this research is to investigate MANET performance issues when cryptographic processing delays are applied at the data-link layer. The results of analysis are combined with modeling and simulation experiments to show that network performance in MANETs is highly sensitive to the cryptographic overhead.

  3. Short-term forecasting of electric loads using nonlinear autoregressive artificial neural networks with exogenous vector inputs

    DOE PAGES

    Buitrago, Jaime; Asfour, Shihab

    2017-01-01

    Short-term load forecasting is crucial for the operations planning of an electrical grid. Forecasting the next 24 h of electrical load in a grid allows operators to plan and optimize their resources. The purpose of this study is to develop a more accurate short-term load forecasting method utilizing non-linear autoregressive artificial neural networks (ANN) with exogenous multi-variable input (NARX). The proposed implementation of the network is new: the neural network is trained in open-loop using actual load and weather data, and then, the network is placed in closed-loop to generate a forecast using the predicted load as the feedback input. Unlike the existing short-term load forecasting methods using ANNs, the proposed method uses its own output as the input in order to improve the accuracy, thus effectively implementing a feedback loop for the load, making it less dependent on external data. Using the proposed framework, mean absolute percent errors in the forecast in the order of 1% have been achieved, which is a 30% improvement on the average error using feedforward ANNs, ARMAX and state space methods, which can result in large savings by avoiding commissioning of unnecessary power plants. Finally, the New England electrical load data are used to train and validate the forecast prediction.
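    The open-loop/closed-loop split described above can be illustrated with a deliberately simplified stand-in: a linear autoregression fitted by least squares in place of the NARX neural network, with each prediction fed back as the next input for multi-step forecasting. This is a sketch of the idea only; the paper's actual model is an ANN with exogenous weather inputs.

```python
def fit_ar(series, lags=3):
    """Open-loop 'training': least-squares fit of
    y[t] = a[0]*y[t-1] + ... + a[lags-1]*y[t-lags] + b
    using actual past values as inputs. Normal equations are solved by
    Gaussian elimination with partial pivoting."""
    rows = [[series[t - k - 1] for k in range(lags)] + [1.0]
            for t in range(lags, len(series))]
    y = series[lags:]
    n = lags + 1
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    aty = [sum(r[i] * yt for r, yt in zip(rows, y)) for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(ata[r][i]))
        ata[i], ata[p] = ata[p], ata[i]
        aty[i], aty[p] = aty[p], aty[i]
        for r in range(i + 1, n):
            f = ata[r][i] / ata[i][i]
            for c in range(i, n):
                ata[r][c] -= f * ata[i][c]
            aty[r] -= f * aty[i]
    w = [0.0] * n
    for i in reversed(range(n)):
        w[i] = (aty[i] - sum(ata[i][c] * w[c]
                             for c in range(i + 1, n))) / ata[i][i]
    return w

def forecast(series, w, steps, lags=3):
    """Closed-loop: each prediction is appended to the history and fed
    back as an input for the next step."""
    hist = list(series)
    out = []
    for _ in range(steps):
        yhat = sum(w[k] * hist[-k - 1] for k in range(lags)) + w[-1]
        out.append(yhat)
        hist.append(yhat)
    return out
```

On noise-free periodic data the closed-loop forecast continues the series exactly; on real load data the feedback of predicted values is what makes multi-step forecasting possible without future observations.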

  4. Short-term forecasting of electric loads using nonlinear autoregressive artificial neural networks with exogenous vector inputs

    SciTech Connect

    Buitrago, Jaime; Asfour, Shihab

    2017-01-01

    Short-term load forecasting is crucial for the operations planning of an electrical grid. Forecasting the next 24 h of electrical load in a grid allows operators to plan and optimize their resources. The purpose of this study is to develop a more accurate short-term load forecasting method utilizing non-linear autoregressive artificial neural networks (ANN) with exogenous multi-variable input (NARX). The proposed implementation of the network is new: the neural network is trained in open-loop using actual load and weather data, and then, the network is placed in closed-loop to generate a forecast using the predicted load as the feedback input. Unlike the existing short-term load forecasting methods using ANNs, the proposed method uses its own output as the input in order to improve the accuracy, thus effectively implementing a feedback loop for the load, making it less dependent on external data. Using the proposed framework, mean absolute percent errors in the forecast in the order of 1% have been achieved, which is a 30% improvement on the average error using feedforward ANNs, ARMAX and state space methods, which can result in large savings by avoiding commissioning of unnecessary power plants. Finally, the New England electrical load data are used to train and validate the forecast prediction.

  5. Statistical modelling of networked human-automation performance using working memory capacity.

    PubMed

    Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja

    2014-01-01

    This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models.
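    Of the three statistical approaches compared, Gaussian process regression can be sketched compactly; the toy below computes only the GP posterior mean with an RBF kernel and a small noise term. It illustrates the general technique, not the authors' model, and all parameters are assumptions.

```python
import math

def rbf(a, b, ell=1.0):
    """Squared-exponential (RBF) kernel on scalars."""
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def gp_predict(xs, ys, xstar, ell=1.0, noise=1e-6):
    """Posterior mean of a zero-mean GP: m(x*) = k*^T (K + noise*I)^-1 y,
    with the linear system solved by Gaussian elimination."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], ell) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    y = list(ys)
    for i in range(n):                   # elimination with partial pivoting
        p = max(range(i, n), key=lambda r: abs(K[r][i]))
        K[i], K[p] = K[p], K[i]
        y[i], y[p] = y[p], y[i]
        for r in range(i + 1, n):
            f = K[r][i] / K[i][i]
            for c in range(i, n):
                K[r][c] -= f * K[i][c]
            y[r] -= f * y[i]
    alpha = [0.0] * n
    for i in reversed(range(n)):
        alpha[i] = (y[i] - sum(K[i][c] * alpha[c]
                               for c in range(i + 1, n))) / K[i][i]
    return sum(rbf(xstar, xs[j], ell) * alpha[j] for j in range(n))
```

In the study's setting, `xs` would be observed traits such as WM capacity and `ys` measured performance; the GP then supports robust prediction beyond the exact experimental conditions.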

  6. Measurements-based performance evaluation of 3G wireless networks supporting m-health services

    NASA Astrophysics Data System (ADS)

    Wac, Katarzyna E.; Bults, Richard; van Halteren, Aart; Konstantas, Dimitri; Nicola, Victor F.

    2004-12-01

    The emergence of 3G networks gives rise to new mobile services in many different areas of our daily life. Examples of demanding mobile services are mobile-healthcare (i.e. m-health) services allowing the continuous monitoring of a patient"s vital signs. However, a prerequisite for the successful deployment of m-health services are appropriate performance characteristics of transport services offered by an underlying wireless network (e.g. 3G). In this direction, the EU MobiHealth project targeted the evaluation of 3G networks and their ability to support demanding m-health services. The project developed and trialled a patient monitoring system, evaluating at the same time the network"s performance. This paper presents measurements based performance evaluation methodology developed and applied to assess network performance from an end-user perspective. In addition, it presents the (selected) speed-related evaluation (best-case scenario) results collected during the project. Our measurements show the dynamicity in the performance of 3G networks and phenomena negatively influencing this performance. Based on the evaluation results, we conclude that in-spite of certain shortcomings of existing 3G networks, they are suitable to support a significant set of m-health services. A set of recommendations provide a road map for both operators and service developers for design and deployment of m-health services.

  7. Visualizing weighted networks: a performance comparison of adjacency matrices versus node-link diagrams

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Osesina, O. Isaac; Bartley, Cecilia; Tudoreanu, M. Eduard; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    Ensuring the proper and effective ways to visualize network data is important for many areas of academia, applied sciences, the military, and the public. Fields such as social network analysis, genetics, biochemistry, intelligence, cybersecurity, neural network modeling, transit systems, communications, etc. often deal with large, complex network datasets that can be difficult to interact with, study, and use. There have been surprisingly few human factors performance studies on the relative effectiveness of different graph drawings or network diagram techniques to convey information to a viewer. This is particularly true for weighted networks which include the strength of connections between nodes, not just information about which nodes are linked to other nodes. We describe a human factors study in which participants performed four separate network analysis tasks (finding a direct link between given nodes, finding an interconnected node between given nodes, estimating link strengths, and estimating the most densely interconnected nodes) on two different network visualizations: an adjacency matrix with a heat-map versus a node-link diagram. The results should help shed light on effective methods of visualizing network data for some representative analysis tasks, with the ultimate goal of improving usability and performance for viewers of network data displays.
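    For concreteness, the four analysis tasks map naturally onto simple operations on a weighted adjacency matrix; a minimal sketch (illustrative only, not the study's software):

```python
def adjacency(n, edges):
    """Weighted adjacency matrix (undirected) from (u, v, weight) triples."""
    A = [[0.0] * n for _ in range(n)]
    for u, v, w in edges:
        A[u][v] = A[v][u] = w
    return A

def direct_link(A, u, v):
    """Task 1: is there a direct link between u and v?"""
    return A[u][v] > 0

def common_neighbors(A, u, v):
    """Task 2: nodes interconnecting u and v."""
    return [k for k in range(len(A)) if A[u][k] > 0 and A[v][k] > 0]

def densest_node(A):
    """Task 4: node with the largest total link strength (weighted degree)."""
    return max(range(len(A)), key=lambda i: sum(A[i]))
```

In the adjacency-matrix visualization these operations correspond to reading a cell, scanning a row pair, and scanning row totals of the heat-map; in a node-link diagram the viewer performs them by following drawn edges.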

  8. Static and transient performance prediction for CFB boilers using a Bayesian-Gaussian Neural Network

    NASA Astrophysics Data System (ADS)

    Ye, Haiwen; Ni, Weidou

    1997-06-01

    A Bayesian-Gaussian Neural Network (BGNN) is put forward in this paper to predict the static and transient performance of Circulating Fluidized Bed (CFB) boilers. The advantages of this network over Back-Propagation Neural Networks (BPNNs), namely easier determination of topology, a simpler and less time-consuming training process, and a self-organizing ability, make this network more practical for on-line performance prediction of complicated processes. Simulation shows that this network is comparable to BPNNs in predicting the performance of CFB boilers. Good and practical on-line performance predictions are essential for operation guidance and model predictive control of CFB boilers, which are under research by the authors.

  9. Network based high performance concurrent computing. Progress report, [FY 1991]

    SciTech Connect

    Sunderam, V.S.

    1991-12-31

    The overall objectives of this project are to investigate research issues pertaining to programming tools and efficiency issues in network based concurrent computing systems. The basis for these efforts is the PVM project that evolved during my visits to Oak Ridge Laboratories under the DOE Faculty Research Participation program; I continue to collaborate with researchers at Oak Ridge on some portions of the project.

  10. Social Networks and Performance in Distributed Learning Communities

    ERIC Educational Resources Information Center

    Cadima, Rita; Ojeda, Jordi; Monguet, Josep M.

    2012-01-01

    Social networks play an essential role in learning environments as a key channel for knowledge sharing and students' support. In distributed learning communities, knowledge sharing does not occur as spontaneously as when a working group shares the same physical space; knowledge sharing depends even more on student informal connections. In this…

  11. Improving Stochastic Communication Network Performance: Reliability vs. Throughput

    DTIC Science & Technology

    1991-12-01

    approach was only successful in computing a reliability value for Network A, and even then required on the order of hours to compute. The factoring...Algorithm for Sum of Disjoint Products," IEEE Transactions on Reliability, Vol. R-36, No. 4: 445-453 (October 1987). 18. Page, L. B. and J. E. Perry

  12. Performance Evaluation and Control of Distributed Computer Communication Networks.

    DTIC Science & Technology

    1985-09-01

    Pazos-Rangel "Bandwidth Allocation and Routing in ISDN’s," IEEE Communications Magazine , February 1984. Abstract The goal of communications network design...December 1982. [28] M. Gerla and R. Pazos, "Bandwidth Allocation and Routing in ISDN’s," IEEE Communications Magazine , February 1984. [29] R. Pazos

  13. Social Networks and Performance in Distributed Learning Communities

    ERIC Educational Resources Information Center

    Cadima, Rita; Ojeda, Jordi; Monguet, Josep M.

    2012-01-01

    Social networks play an essential role in learning environments as a key channel for knowledge sharing and students' support. In distributed learning communities, knowledge sharing does not occur as spontaneously as when a working group shares the same physical space; knowledge sharing depends even more on student informal connections. In this…

  14. Algorithms for Performance, Dependability, and Performability Evaluation using Stochastic Activity Networks

    NASA Technical Reports Server (NTRS)

    Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.

    1997-01-01

    Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling with Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state-space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that can tolerate such large state spaces and that do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on the fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally efficient technique for Gauss-Seidel-based solvers that avoids the need for generating rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
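    The idea of generating matrix rows on demand inside a Gauss-Seidel sweep can be sketched as follows. This is a generic illustration, not the authors' modified adaptive Gauss-Seidel, and the diagonally dominant tridiagonal example matrix is an assumption.

```python
def gauss_seidel_on_the_fly(row, n, b, sweeps=100):
    """Gauss-Seidel iteration for A x = b where row(i) returns row i of A
    as a sparse dict {j: a_ij}; the matrix is produced on demand and
    never stored in full."""
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            r = row(i)
            s = sum(a * x[j] for j, a in r.items() if j != i)
            x[i] = (b[i] - s) / r[i]        # immediate (in-place) update
    return x

def tridiag_row(i, n=5):
    """Example on-the-fly row generator: a diagonally dominant
    tridiagonal matrix (stands in for a transition-rate-matrix row)."""
    r = {i: 4.0}
    if i > 0:
        r[i - 1] = -1.0
    if i < n - 1:
        r[i + 1] = -1.0
    return r
```

Only one row lives in memory at a time, which is the property that makes very large state spaces tractable, at the price of regenerating rows every sweep.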

  15. Cortical thickness in frontoparietal and cingulo-opercular networks predicts executive function performance in older adults.

    PubMed

    Schmidt, Erica L; Burge, Wesley; Visscher, Kristina M; Ross, Lesley A

    2016-03-01

    This study examined the relationship between cortical thickness in executive control networks and neuropsychological measures of executive function. Forty-one community-dwelling older adults completed an MRI scan and a neuropsychological battery including 5 measures of executive function. Factor analysis of executive function measures revealed 2 distinct factors: (a) Complex Attention Control (CAC), comprised of tasks that required immediate response to stimuli and involved subtle performance feedback; and (b) Sustained Executive Control (SEC), comprised of tasks that involved maintenance and manipulation of information over time. Neural networks of interest were the frontoparietal network (F-P) and cingulo-opercular network (C-O), which have previously been hypothesized to relate to different components of executive function, based on functional MRI studies, but not neuropsychological factors. Linear regression models revealed that greater cortical thickness in the F-P network, but not the C-O network, predicted better performance on the CAC factor, whereas greater cortical thickness in the C-O network, but not the F-P network, predicted better performance on the SEC factor. The relationship between cortical thickness and performance on executive function measures was characterized by a double dissociation between the thickness of cortical regions hypothesized to be involved in executive control and distinct executive processes. Results indicate that fundamentally different executive processes may be predicted by cortical thickness in distinct brain networks.

  16. Support vector machines

    NASA Technical Reports Server (NTRS)

    Garay, Michael J.; Mazzoni, Dominic; Davies, Roger; Wagstaff, Kiri

    2004-01-01

    Support Vector Machines (SVMs) are a type of supervised learning algorithm, other examples of which are Artificial Neural Networks (ANNs), Decision Trees, and Naive Bayesian Classifiers. Supervised learning algorithms are used to classify objects labeled by a 'supervisor', typically a human 'expert'.
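    A minimal illustration of the supervised-learning idea: a linear SVM trained by sub-gradient descent on the regularized hinge loss. This is a generic textbook sketch, not tied to the authors' work; the data and hyperparameters are made up.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.05, seed=0):
    """Sub-gradient descent on
    (1/n) sum_i max(0, 1 - y_i (w.x_i + b)) + lam * |w|^2,
    with labels y_i in {-1, +1} supplied by the 'supervisor'."""
    rng = random.Random(seed)
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            margin = y[i] * (sum(wk * xk for wk, xk in zip(w, X[i])) + b)
            gw = [2 * lam * wk for wk in w]      # regularization gradient
            gb = 0.0
            if margin < 1:                       # hinge is active
                gw = [g - y[i] * xk for g, xk in zip(gw, X[i])]
                gb = -y[i]
            w = [wk - lr * g for wk, g in zip(w, gw)]
            b -= lr * gb
    return w, b

def predict(w, b, x):
    """Classify by the sign of the learned decision function."""
    return 1 if sum(wk * xk for wk, xk in zip(w, x)) + b >= 0 else -1
```

A full SVM would maximize the margin exactly (and optionally use kernels); the sub-gradient version above captures the same decision-boundary idea in a few lines.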

  17. Support vector machines

    NASA Technical Reports Server (NTRS)

    Garay, Michael J.; Mazzoni, Dominic; Davies, Roger; Wagstaff, Kiri

    2004-01-01

    Support Vector Machines (SVMs) are a type of supervised learning algorithm, other examples of which are Artificial Neural Networks (ANNs), Decision Trees, and Naive Bayesian Classifiers. Supervised learning algorithms are used to classify objects labeled by a 'supervisor', typically a human 'expert'.

  18. Singular Vectors' Subtle Secrets

    ERIC Educational Resources Information Center

    James, David; Lachance, Michael; Remski, Joan

    2011-01-01

    Social scientists use adjacency tables to discover influence networks within and among groups. Building on work by Moler and Morrison, we use ordered pairs from the components of the first and second singular vectors of adjacency matrices as tools to distinguish these groups and to identify particularly strong or weak individuals.
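    For a small symmetric adjacency matrix, the first two singular vectors coincide with eigenvectors (up to sign) and can be computed by power iteration with deflation. The sketch below recovers the two-group structure of a toy network from the signs of the second vector; it is illustrative only, not the authors' method.

```python
import random

def top_two_singular_vectors(A, iters=1000, seed=3):
    """First two singular vectors of a symmetric adjacency matrix via
    power iteration plus deflation (for symmetric A these coincide with
    eigenvectors, up to sign)."""
    n = len(A)
    rng = random.Random(seed)
    B = [row[:] for row in A]
    vecs = []
    for _ in range(2):
        v = [rng.random() + 0.1 for _ in range(n)]   # generic start vector
        for _ in range(iters):
            w = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
            s = sum(x * x for x in w) ** 0.5
            v = [x / s for x in w]
        lam = sum(v[i] * sum(B[i][j] * v[j] for j in range(n))
                  for i in range(n))                 # Rayleigh quotient
        vecs.append(v)
        # deflate: remove the found component before the next iteration
        B = [[B[i][j] - lam * v[i] * v[j] for j in range(n)]
             for i in range(n)]
    return vecs
```

Plotting the ordered pairs (v1[i], v2[i]) per node then separates the groups, as in the article: nodes in the same group share the sign of the second component.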

  19. Singular Vectors' Subtle Secrets

    ERIC Educational Resources Information Center

    James, David; Lachance, Michael; Remski, Joan

    2011-01-01

    Social scientists use adjacency tables to discover influence networks within and among groups. Building on work by Moler and Morrison, we use ordered pairs from the components of the first and second singular vectors of adjacency matrices as tools to distinguish these groups and to identify particularly strong or weak individuals.

  20. Support vector regression model of wastewater bioreactor performance using microbial community diversity indices: effect of stress and bioaugmentation.

    PubMed

    Seshan, Hari; Goyal, Manish K; Falk, Michael W; Wuertz, Stefan

    2014-04-15

    The relationship between microbial community structure and function has been examined in detail in natural and engineered environments, but little work has been done on using microbial community information to predict function. We processed microbial community and operational data from controlled experiments with bench-scale bioreactor systems to predict reactor process performance. Four membrane-operated sequencing batch reactors treating synthetic wastewater were operated in two experiments to test the effects of (i) the toxic compound 3-chloroaniline (3-CA) and (ii) bioaugmentation targeting 3-CA degradation, on the sludge microbial community in the reactors. In the first experiment, two reactors were treated with 3-CA and two reactors were operated as controls without 3-CA input. In the second experiment, all four reactors were additionally bioaugmented with a Pseudomonas putida strain carrying a plasmid with a portion of the pathway for 3-CA degradation. Molecular data were generated from terminal restriction fragment length polymorphism (T-RFLP) analysis targeting the 16S rRNA and amoA genes from the sludge community. The electropherograms resulting from these T-RFs were used to calculate diversity indices - community richness, dynamics and evenness - for the domain Bacteria as well as for ammonia-oxidizing bacteria in each reactor over time. These diversity indices were then used to train and test a support vector regression (SVR) model to predict reactor performance based on input microbial community indices and operational data. Considering the diversity indices over time and across replicate reactors as discrete values, it was found that, although bioaugmentation with a bacterial strain harboring a subset of genes involved in the degradation of 3-CA did not bring about 3-CA degradation, it significantly affected the community as measured through all three diversity indices in both the general bacterial community and the ammonia-oxidizer community (
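    The community diversity indices used as model inputs (richness, Shannon diversity, evenness) can be computed from an abundance profile with standard formulas, sketched below; the paper's exact index definitions may differ.

```python
import math

def diversity_indices(abundances):
    """Richness S, Shannon diversity H = -sum(p_i ln p_i), and Pielou
    evenness H / ln(S), from a profile of (e.g. T-RF) peak abundances."""
    present = [a for a in abundances if a > 0]
    total = sum(present)
    richness = len(present)
    h = -sum((a / total) * math.log(a / total) for a in present)
    evenness = h / math.log(richness) if richness > 1 else 0.0
    return richness, h, evenness
```

Indices like these, computed per reactor and time point, would form the feature vectors fed to the support vector regression model.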

  1. Acquisition in a World of Joint Capabilities: Methods for Understanding Cross-Organizational Network Performance

    DTIC Science & Technology

    2016-04-30

    Acquisition in a World of Joint Capabilities: Methods for Understanding Cross-Organizational Network Performance. Mary Brown, Professor, UNCC; Zachary Mohr, Assistant Professor, UNCC. Published April 30, 2016. Mary M. Brown is a Professor at the University of North Carolina at Charlotte [marbrown@uncc.edu]; Zachary Mohr is an Assistant Professor at

  2. A high-performance feedback neural network for solving convex nonlinear programming problems.

    PubMed

    Leung, Yee; Chen, Kai-Zhou; Gao, Xing-Bao

    2003-01-01

    Based on a new idea of successive approximation, this paper proposes a high-performance feedback neural network model for solving convex nonlinear programming problems. Differing from existing neural network optimization models, no dual variables, penalty parameters, or Lagrange multipliers are involved in the proposed network. It has the least number of state variables and is very simple in structure. In particular, the proposed network has better asymptotic stability. For an arbitrarily given initial point, the trajectory of the network converges to an optimal solution of the convex nonlinear programming problem under no more than the standard assumptions. In addition, the network can also solve linear programming and convex quadratic programming problems, and the new idea of a feedback network may be used to solve other optimization problems. Feasibility and efficiency are also substantiated by simulation examples.
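    The paper's network dynamics are not reproduced here, but the underlying optimization can be illustrated with a discrete-time projected-gradient sketch for a nonnegatively constrained convex quadratic program; this is a generic stand-in with made-up problem data, not the proposed feedback network.

```python
def projected_gradient_qp(Q, c, lr=0.05, iters=2000):
    """Minimize 0.5 x'Qx + c'x subject to x >= 0 by projected gradient
    descent: a simple discrete-time analogue of a feedback optimization
    network whose state converges to the optimum."""
    n = len(c)
    x = [0.0] * n
    for _ in range(iters):
        g = [sum(Q[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
        x = [max(0.0, xi - lr * gi) for xi, gi in zip(x, g)]  # project onto x>=0
    return x
```

Like the network described in the abstract, the iteration uses only the primal state (no dual variables or penalty parameters) and its trajectory converges to an optimal solution for convex Q.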

  3. Performance Modeling of Network-Attached Storage Device Based Hierarchical Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Menasce, Daniel A.; Pentakalos, Odysseas I.

    1995-01-01

    Network attached storage devices improve I/O performance by separating control and data paths and eliminating host intervention during the data transfer phase. Devices are attached to both a high speed network for data transfer and to a slower network for control messages. Hierarchical mass storage systems use disks to cache the most recently used files and a combination of robotic and manually mounted tapes to store the bulk of the files in the file system. This paper shows how queuing network models can be used to assess the performance of hierarchical mass storage systems that use network attached storage devices as opposed to host attached storage devices. Simulation was used to validate the model. The analytic model presented here can be used, among other things, to evaluate the protocols involved in I/O over network attached devices.
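    Closed queueing network models of this kind are often evaluated with exact Mean Value Analysis (MVA); a minimal single-class sketch follows. It is a generic algorithm illustration, not the authors' model of the storage hierarchy, and the service demands are placeholders.

```python
def mva(demands, n_customers):
    """Exact Mean Value Analysis for a closed, single-class, product-form
    queueing network: demands[k] is the total service demand at station k
    (e.g. disk cache, tape robot, network). Returns (throughput,
    per-station residence times) for n_customers >= 1."""
    q = [0.0] * len(demands)                              # mean queue lengths
    x, r = 0.0, list(demands)
    for n in range(1, n_customers + 1):
        r = [d * (1 + qk) for d, qk in zip(demands, q)]   # residence times
        x = n / sum(r)                                    # throughput
        q = [x * rk for rk in r]                          # Little's law
    return x, r
```

Swapping a host-attached demand profile for a network-attached one in `demands` shows how offloading the data path changes throughput and residence times, which is the style of comparison the paper makes.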

  4. A performance study of unmanned aerial vehicle-based sensor networks under cyber attack

    NASA Astrophysics Data System (ADS)

    Puchaty, Ethan M.

    In UAV-based sensor networks, an emerging area of interest is the performance of these networks under cyber attack. This study seeks to evaluate the performance trade-offs from a System-of-Systems (SoS) perspective between various UAV communications architecture options in the context of two missions: tracking ballistic missiles and tracking insurgents. An agent-based discrete event simulation is used to model a sensor communication network consisting of UAVs, military communications satellites, ground relay stations, and a mission control center. Network susceptibility to cyber attack is modeled with probabilistic failures and induced data variability, with performance metrics focusing on information availability, latency, and trustworthiness. Results demonstrated that using UAVs as routers increased network availability with a minimal latency penalty, and that communications satellite networks were best for long-distance operations. Redundancy in the number of links between communication nodes helped mitigate cyber-caused link failures and added robustness in cases of induced data variability by an adversary. However, when failures were not independent, redundancy and UAV routing were detrimental in some cases to network performance. Sensitivity studies indicated that long cyber-caused downtimes and increasing failure dependencies resulted in build-ups of failures and caused significant degradations in network performance.
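    The effect of link redundancy on availability under independent cyber-caused failures can be illustrated with a tiny Monte Carlo sketch; this is not the study's agent-based simulation, and all probabilities are made up.

```python
import random

def path_available(link_up_prob, n_hops, redundancy, rng):
    """A hop succeeds if any of its `redundancy` parallel links is up;
    the end-to-end path needs every hop to succeed."""
    for _ in range(n_hops):
        if not any(rng.random() < link_up_prob for _ in range(redundancy)):
            return False
    return True

def availability(link_up_prob=0.9, n_hops=3, redundancy=2,
                 trials=20000, seed=7):
    """Monte Carlo estimate of end-to-end path availability."""
    rng = random.Random(seed)
    ok = sum(path_available(link_up_prob, n_hops, redundancy, rng)
             for _ in range(trials))
    return ok / trials
```

With independent failures the analytic availability is (1 - (1 - p)^r)^n, which the estimate approaches; the paper's finding is that this benefit of redundancy erodes once failures become dependent.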

  5. Moving Large Data Sets Over High-Performance Long Distance Networks

    SciTech Connect

    Hodson, Stephen W; Poole, Stephen W; Ruwart, Thomas; Settlemyer, Bradley W

    2011-04-01

    In this project we look at the performance characteristics of three tools used to move large data sets over dedicated long distance networking infrastructure. Although performance studies of wide area networks have been a frequent topic of interest, performance analyses have tended to focus on network latency characteristics and peak throughput using network traffic generators. In this study we instead perform an end-to-end long distance networking analysis that includes reading large data sets from a source file system and committing large data sets to a destination file system. An evaluation of end-to-end data movement is also an evaluation of the system configurations employed and the tools used to move the data. For this paper, we have built several storage platforms and connected them with a high performance long distance network configuration. We use these systems to analyze the capabilities of three data movement tools: BBcp, GridFTP, and XDD. Our studies demonstrate that existing data movement tools do not provide efficient performance levels or exercise the storage devices in their highest performance modes. We describe the device information required to achieve high levels of I/O performance and discuss how this data is applicable in use cases beyond data movement performance.

  6. G-NetMon: a GPU-accelerated network performance monitoring system

    SciTech Connect

    Wu, Wenji; DeMar, Phil; Holmgren, Don; Singh, Amitoj; /Fermilab

    2011-06-01

    At Fermilab, we have prototyped a GPU-accelerated network performance monitoring system, called G-NetMon, to support large-scale scientific collaborations. In this work, we explore new opportunities in network traffic monitoring and analysis with GPUs. Our system exploits the data parallelism that exists within network flow data to provide fast analysis of bulk data movement between Fermilab and collaboration sites. Experiments demonstrate that our G-NetMon can rapidly detect sub-optimal bulk data movements.

  7. Analysis of NASA communications (Nascom) II network protocols and performance

    NASA Technical Reports Server (NTRS)

    Omidyar, Guy C.; Butler, Thomas E.

    1991-01-01

    The NASA Communications (Nascom) Division of the Mission Operations and Data Systems Directorate is to undertake a major initiative to develop the Nascom II (NII) network to achieve its long-range service objectives for operational data transport to support the Space Station Freedom Program, the Earth Observing System, and other projects. NII is the Nascom ground communications network being developed to accommodate the operational traffic of the mid-1990s and beyond. The authors describe various baseline protocol architectures based on current and evolving technologies. They address the internetworking issues suggested for reliable transfer of data over heterogeneous segments. They also describe the NII architecture, topology, system components, and services. A comparative evaluation of the current and evolving technologies was made, and suggestions for further study are described. It is shown that the direction of the NII configuration and the subsystem component design will clearly depend on the advances made in the area of broadband integrated services.

  8. Statistical Analysis of Wireless Networks: Predicting Performance in Multiple Environments

    DTIC Science & Technology

    2006-06-01

    Internet Protocol TOC Tactical Operations Center SOSUS Sound Surveillance System FHL Fort Hunter Liggett LOS Line of Sight NMS Network Management...Fort Hunter Liggett ( FHL ), located approximately twenty miles west of Highway 101 near King City, CA, proved to be the best test location near the...COASTS 2006 topology. Figure 5 shows the complete setup of the proposed topology at FHL (less one aerial payload) as seen in the Mesh Dynamics

  9. Performance analysis of Integrated Communication and Control System networks

    NASA Technical Reports Server (NTRS)

    Halevi, Y.; Ray, A.

    1990-01-01

    This paper presents statistical analysis of delays in Integrated Communication and Control System (ICCS) networks that are based on asynchronous time-division multiplexing. The models are obtained in closed form for analyzing control systems with randomly varying delays. The results of this research are applicable to ICCS design for complex dynamical processes like advanced aircraft and spacecraft, autonomous manufacturing plants, and chemical and processing plants.

  10. Communications Performance of an Undersea Acoustic Wide-Area Network

    DTIC Science & Technology

    2006-03-01

Seaweb technology. PEO IWS sponsored the Seaweb 2004 experiment analyzed in my thesis. The SSC San Diego Fellowship Program sponsored my research...layer protocol requiring the undersea vehicle to initiate all communications. As Seaweb advances technologically, the ability to maintain network-layer...racom buoys used in the Seaweb 2004 experiment incorporate FreeWave radio technology as well as Iridium satellite communication technology. The

  11. Performance Evaluation and Control of Distributed Computer Communication Networks.

    DTIC Science & Technology

    1984-09-01

in ISDN's," IEEE Communications Magazine, Feb. 1984. [29] R.A. Pazos-Rangel, "Evaluation and Design of Integrated Packet Switching and Circuit...Communications Magazine, February 1984. Abstract The goal of communications network design is to satisfy user requirements with the minimum amount of...investigations have been reported in references (1) and (2) below. References (1) M. Gerla, R. Pazos-Rangel, "Bandwidth Allocation and Routing in ISDN's," IEEE

  13. Performance of IEEE 1588 in Large-Scale Networks

    DTIC Science & Technology

    2010-11-01

commercially available Syn1588 network cards from Oregano Systems. They not only feature an IEEE 1588 hardware timestamper, but also a 1 PPS output...of the measurements, which are done by default once a second, can be removed considering the short-term stability of the oscillator. As the Oregano...(PTTI) Meeting 76 ACKNOWLEDGMENTS The authors wish to thank Julien Ridoux, the University of Melbourne, Oregano Systems, and Meinberg for

  14. Traffic Dimensioning and Performance Modeling of 4G LTE Networks

    ERIC Educational Resources Information Center

    Ouyang, Ye

    2011-01-01

    Rapid changes in mobile techniques have always been evolutionary, and the deployment of 4G Long Term Evolution (LTE) networks will be the same. It will be another transition from Third Generation (3G) to Fourth Generation (4G) over a period of several years, as is the case still with the transition from Second Generation (2G) to 3G. As a result,…

  15. Performance Analysis on the Coexistence of Multiple Cognitive Radio Networks

    DTIC Science & Technology

    2015-05-28

...was built based on the hybrid priority dynamic policy. In [11], a three-state sensing model was proposed to detect the PU active and idle states as...networks is studied by investigating the effects of different parameters on the throughput of CRN1. Unless otherwise stated, the following practical values

  17. Optimizing performance of hybrid FSO/RF networks in realistic dynamic scenarios

    NASA Astrophysics Data System (ADS)

    Llorca, Jaime; Desai, Aniket; Baskaran, Eswaran; Milner, Stuart; Davis, Christopher

    2005-08-01

Hybrid Free Space Optical (FSO) and Radio Frequency (RF) networks promise highly available wireless broadband connectivity and quality of service (QoS), particularly suitable for emerging network applications involving extremely high data rate transmissions such as high quality video-on-demand and real-time surveillance. FSO links are prone to atmospheric obscuration (fog, clouds, snow, etc.) and are difficult to align over long distances due to the use of narrow laser beams and the effect of atmospheric turbulence. These problems can be mitigated by using adjunct directional RF links, which provide backup connectivity. In this paper, methodologies for modeling and simulation of hybrid FSO/RF networks are described. Individual link propagation models are derived using scattering theory, as well as experimental measurements. MATLAB is used to generate realistic atmospheric obscuration scenarios, including moving cloud layers at different altitudes. These scenarios are then imported into a network simulator (OPNET) to emulate mobile hybrid FSO/RF networks. This framework allows accurate analysis of the effects of node mobility, atmospheric obscuration and traffic demands on network performance, and precise evaluation of topology reconfiguration algorithms as they react to dynamic changes in the network. Results show how topology reconfiguration algorithms, together with enhancements to TCP/IP protocols which reduce the network response time, enable the network to rapidly detect and act upon link state changes in highly dynamic environments, ensuring optimized network performance and availability.
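As a hedged sketch of the fallback idea only (not the authors' propagation models, which are derived from scattering theory and measurements), a link-budget check can decide when to switch from FSO to the adjunct RF link. The transmit power, receiver sensitivity, and attenuation values below are invented.

```python
def fso_link_margin_db(tx_power_dbm, sensitivity_dbm, alpha_db_per_km, range_km):
    """Link margin of an FSO hop under atmospheric attenuation alpha (dB/km);
    geometric and pointing losses are folded into the power budget here."""
    return tx_power_dbm - sensitivity_dbm - alpha_db_per_km * range_km

def choose_link(alpha_db_per_km, range_km):
    """Fall back to the lower-rate RF link when the FSO margin goes negative."""
    if fso_link_margin_db(10.0, -30.0, alpha_db_per_km, range_km) > 0:
        return "FSO"
    return "RF"

assert choose_link(alpha_db_per_km=0.2, range_km=2.0) == "FSO"   # clear air
assert choose_link(alpha_db_per_km=100.0, range_km=2.0) == "RF"  # dense fog
```

A topology reconfiguration algorithm would apply a decision like this per link as obscuration scenarios evolve.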

  18. High performance interconnection between high data rate networks

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Maly, K.; Overstreet, C. M.; Zhang, L.; Sun, W.

    1992-01-01

The bridge/gateway system needed to interconnect a wide range of computer networks to support a wide range of user quality-of-service requirements is discussed. The bridge/gateway must handle a wide range of message types, including synchronous and asynchronous traffic, large, bursty messages, short, self-contained messages, time critical messages, etc. It is shown that messages can be classified into three basic classes: synchronous messages and large and small asynchronous messages. The first two require call setup so that packet identification, buffer handling, etc. can be supported in the bridge/gateway. Identification enables resequencing across differing packet sizes. The third class is for messages which do not require call setup. Resequencing hardware designed to handle two types of resequencing problems is presented. The first is for a virtual parallel circuit which can scramble channel bytes. The second system is effective in handling both synchronous and asynchronous traffic between networks with highly differing packet sizes and data rates. The other two major needs for the bridge/gateway are congestion and error control. A dynamic, lossless congestion control scheme which can easily support effective error correction is presented. Results indicate that the congestion control scheme provides close to optimal capacity under congested conditions. Under conditions where errors may develop due to intervening networks that are not lossless, intermediate error recovery and correction takes one-third less time than equivalent end-to-end error correction under similar conditions.
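A minimal sketch of the resequencing idea (not the authors' hardware design): buffer packets that arrive out of order, for example over a virtual parallel circuit, and release them in sequence. The packet representation is assumed.

```python
import heapq

def resequence(packets):
    """Release packets in sequence order even if they arrive scrambled.
    `packets` is an iterable of (sequence_number, payload) pairs."""
    heap = []
    next_seq = 0
    out = []
    for seq, payload in packets:
        heapq.heappush(heap, (seq, payload))
        # Drain every packet that is now in order.
        while heap and heap[0][0] == next_seq:
            out.append(heapq.heappop(heap)[1])
            next_seq += 1
    return out

arrivals = [(2, "c"), (0, "a"), (1, "b"), (3, "d")]
assert resequence(arrivals) == ["a", "b", "c", "d"]
```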

  19. Communication, opponents, and clan performance in online games: a social network approach.

    PubMed

    Lee, Hong Joo; Choi, Jaewon; Kim, Jong Woo; Park, Sung Joo; Gloor, Peter

    2013-12-01

    Online gamers form clans voluntarily to play together and to discuss their real and virtual lives. Although these clans have diverse goals, they seek to increase their rank in the game community by winning more battles. Communications among clan members and battles with other clans may influence the performance of a clan. In this study, we compared the effects of communication structure inside a clan, and battle networks among clans, with the performance of the clans. We collected battle histories, posts, and comments on clan pages from a Korean online game, and measured social network indices for communication and battle networks. Communication structures in terms of density and group degree centralization index had no significant association with clan performance. However, the centrality of clans in the battle network was positively related to the performance of the clan. If a clan had many battle opponents, the performance of the clan improved.
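The social network indices named above have standard definitions. A small pure-Python sketch of density and Freeman group degree centralization, assuming an undirected edge list rather than the authors' data:

```python
def density(n, edges):
    """Undirected network density: realized ties over possible ties."""
    return 2 * len(edges) / (n * (n - 1))

def degree_centralization(n, edges):
    """Freeman group degree centralization: how star-like the network is
    (1.0 for a star, 0.0 for a ring or complete graph)."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    dmax = max(deg)
    return sum(dmax - d for d in deg) / ((n - 1) * (n - 2))

star = [(0, i) for i in range(1, 5)]  # one member tied to all others
assert density(5, star) == 0.4
assert degree_centralization(5, star) == 1.0
```

Degree centrality of a clan in the battle network is, analogously, just its number of battle opponents.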

  20. Communication, Opponents, and Clan Performance in Online Games: A Social Network Approach

    PubMed Central

    Lee, Hong Joo; Choi, Jaewon; Park, Sung Joo; Gloor, Peter

    2013-01-01

    Abstract Online gamers form clans voluntarily to play together and to discuss their real and virtual lives. Although these clans have diverse goals, they seek to increase their rank in the game community by winning more battles. Communications among clan members and battles with other clans may influence the performance of a clan. In this study, we compared the effects of communication structure inside a clan, and battle networks among clans, with the performance of the clans. We collected battle histories, posts, and comments on clan pages from a Korean online game, and measured social network indices for communication and battle networks. Communication structures in terms of density and group degree centralization index had no significant association with clan performance. However, the centrality of clans in the battle network was positively related to the performance of the clan. If a clan had many battle opponents, the performance of the clan improved. PMID:23745617

  1. Enhancing End-to-End Performance of Information Services Over Ka-Band Global Satellite Networks

    NASA Technical Reports Server (NTRS)

    Bhasin, Kul B.; Glover, Daniel R.; Ivancic, William D.; vonDeak, Thomas C.

    1997-01-01

The Internet has been growing at a rapid rate as the key medium for information services such as e-mail, WWW, and multimedia; however, its global reach is limited. Ka-band communication satellite networks are being developed to increase the accessibility of information services via the Internet on a global scale. There is a need to assess satellite networks in their ability to provide these services and to interconnect seamlessly with existing and proposed terrestrial telecommunication networks. In this paper, the significant issues and requirements in providing end-to-end high performance for the delivery of information services over satellite networks are identified, based on the various layers of the OSI reference model. Key experiments have been performed to evaluate the performance of digital video and Internet traffic over satellite-like testbeds. The results of early developments in ATM and TCP protocols over satellite networks are summarized.
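One reason end-to-end TCP performance suffers over GEO satellite links is the large bandwidth-delay product. A back-of-envelope calculation with illustrative numbers (not figures from the paper):

```python
def bandwidth_delay_product_bytes(rate_bps, rtt_s):
    """Bytes a TCP connection must keep in flight to fill the pipe;
    GEO satellite round-trip times make this large."""
    return rate_bps * rtt_s / 8

# GEO hop: roughly 540 ms round trip at 1.5 Mbit/s (illustrative values)
bdp = bandwidth_delay_product_bytes(1_500_000, 0.54)
# Exceeds the classic 65,535-byte TCP window, so window scaling is needed
assert bdp > 65_535
```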

  2. Measurements-based performance evaluation of 3G wireless networks supporting m-health services

    NASA Astrophysics Data System (ADS)

    Wac, Katarzyna E.; Bults, Richard; van Halteren, Aart; Konstantas, Dimitri; Nicola, Victor F.

    2005-01-01

The emergence of 3G networks gives rise to new mobile services in many different areas of our daily life. Examples of demanding mobile services are mobile-healthcare (i.e., m-health) services allowing the continuous monitoring of a patient's vital signs. However, a prerequisite for the successful deployment of m-health services is appropriate performance characteristics of the transport services offered by an underlying wireless network (e.g., 3G). In this direction, the EU MobiHealth project targeted the evaluation of 3G networks and their ability to support demanding m-health services. The project developed and trialled a patient monitoring system, evaluating at the same time the network's performance. This paper presents the measurement-based performance evaluation methodology developed and applied to assess network performance from an end-user perspective. In addition, it presents the (selected) speed-related evaluation (best-case scenario) results collected during the project. Our measurements show the dynamic nature of 3G network performance and the phenomena that negatively influence it. Based on the evaluation results, we conclude that in spite of certain shortcomings of existing 3G networks, they are suitable to support a significant set of m-health services. A set of recommendations provides a road map for both operators and service developers for the design and deployment of m-health services.

  3. Functional Connectivity in Multiple Cortical Networks Is Associated with Performance Across Cognitive Domains in Older Adults

    PubMed Central

    Shaw, Emily E.; Schultz, Aaron P.; Sperling, Reisa A.

    2015-01-01

    Abstract Intrinsic functional connectivity MRI has become a widely used tool for measuring integrity in large-scale cortical networks. This study examined multiple cortical networks using Template-Based Rotation (TBR), a method that applies a priori network and nuisance component templates defined from an independent dataset to test datasets of interest. A priori templates were applied to a test dataset of 276 older adults (ages 65–90) from the Harvard Aging Brain Study to examine the relationship between multiple large-scale cortical networks and cognition. Factor scores derived from neuropsychological tests represented processing speed, executive function, and episodic memory. Resting-state BOLD data were acquired in two 6-min acquisitions on a 3-Tesla scanner and processed with TBR to extract individual-level metrics of network connectivity in multiple cortical networks. All results controlled for data quality metrics, including motion. Connectivity in multiple large-scale cortical networks was positively related to all cognitive domains, with a composite measure of general connectivity positively associated with general cognitive performance. Controlling for the correlations between networks, the frontoparietal control network (FPCN) and executive function demonstrated the only significant association, suggesting specificity in this relationship. Further analyses found that the FPCN mediated the relationships of the other networks with cognition, suggesting that this network may play a central role in understanding individual variation in cognition during aging. PMID:25827242

  4. Performance Analysis of Non-saturated IEEE 802.11 DCF Networks

    NASA Astrophysics Data System (ADS)

    Zhai, Linbo; Zhang, Xiaomin; Xie, Gang

    This letter presents a model with queueing theory to analyze the performance of non-saturated IEEE 802.11 DCF networks. We use the closed queueing network model and derive an approximate representation of throughput which can reveal the relationship between the throughput and the total offered load under finite traffic load conditions. The accuracy of the model is verified by extensive simulations.
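The letter's own closed queueing-network model is not reproduced here; as a generic illustration of how throughput tracks offered load under finite traffic and then saturates, the carried throughput of an M/M/1/K queue can stand in:

```python
def mm1k_throughput(lam, mu, K):
    """Carried throughput of an M/M/1/K queue: offered load minus losses.
    A generic stand-in, not the letter's 802.11 DCF model."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        p_block = 1.0 / (K + 1)
    else:
        p_block = (1 - rho) * rho**K / (1 - rho**(K + 1))
    return lam * (1 - p_block)
```

Below saturation the curve is nearly linear in the offered load; near capacity it flattens, which is the qualitative relationship the letter derives for non-saturated DCF networks.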

  5. Social Networks, Communication Styles, and Learning Performance in a CSCL Community

    ERIC Educational Resources Information Center

    Cho, Hichang; Gay, Geri; Davidson, Barry; Ingraffea, Anthony

    2007-01-01

    The aim of this study is to empirically investigate the relationships between communication styles, social networks, and learning performance in a computer-supported collaborative learning (CSCL) community. Using social network analysis (SNA) and longitudinal survey data, we analyzed how 31 distributed learners developed collaborative learning…

  7. Social Networks and Students' Performance in Secondary Schools: Lessons from an Open Learning Centre, Kenya

    ERIC Educational Resources Information Center

    Muhingi, Wilkins Ndege; Mutavi, Teresia; Kokonya, Donald; Simiyu, Violet Nekesa; Musungu, Ben; Obondo, Anne; Kuria, Mary Wangari

    2015-01-01

    Given the known positive and negative effects of uncontrolled social networking among secondary school students worldwide, it is necessary to establish the relationship between social network sites and academic performances among secondary school students. This study, therefore, aimed at establishing the relationship between secondary school…

  8. Performance benefits and limitations of a camera network

    NASA Astrophysics Data System (ADS)

    Carr, Peter; Thomas, Paul J.; Hornsey, Richard

    2005-06-01

Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 10^7-10^8 'pixels' per image. At the other extreme are insect eyes where the field of view is segmented into 10^3-10^5 images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 10^5 imagers, each with about 10^4-10^5 pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~10^9 cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.
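The resolution-segmentation parameter space sketched above can be indexed by aggregate pixel count. A trivial bookkeeping illustration using the figures quoted in the text:

```python
def total_pixels(num_imagers, pixels_per_imager):
    """Aggregate pixel count of a vision system: its position along one
    axis of the resolution-segmentation parameter space."""
    return num_imagers * pixels_per_imager

mammal = total_pixels(2, 10**7)          # two eyes, ~10^7 pixels each
insect = total_pixels(10**4, 1)          # ~10^4 one-pixel facets
sensor_web = total_pixels(10**5, 10**4)  # the useful system proposed above
assert sensor_web > mammal > insect
```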

  9. Information Fusion and Performance Modeling with Distributed Sensor Networks

    DTIC Science & Technology

    2010-11-01

Ec,Ed) = Σ_I P(X | I, Ec) P(I | E). (20) SUN & CHANG: MESSAGE PASSING FOR HYBRID BNS: REPRESENTATION, PROPAGATION, AND INTEGRATION 1531 Fig. 6. GHM-2...experiments. One is shown in Fig. 4 as mentioned in Section III-A, called GHM-1. GHM-1 has one loop in each network segment, respectively (partitioned by...the interface node K). Another experiment model is shown in Fig. 6, called GHM-2. GHM-2 has multiple loops in the continuous segment. For GHM-1, we

  10. Low Temperature Performance of High-Speed Neural Network Circuits

    NASA Technical Reports Server (NTRS)

    Duong, T.; Tran, M.; Daud, T.; Thakoor, A.

    1995-01-01

Artificial neural networks, derived from their biological counterparts, offer a new and enabling computing paradigm especially suitable for tasks such as image and signal processing with feature classification/object recognition, global optimization, and adaptive control. When implemented in fully parallel electronic hardware, they offer orders-of-magnitude speed advantages. The basic building blocks of the new architecture are the processing elements, called neurons, implemented as nonlinear operational amplifiers with a sigmoidal transfer function, interconnected through weighted connections, called synapses, implemented using circuitry for weight storage and multiply functions in an analog, digital, or hybrid scheme.

  12. Performance of the Birmingham Solar-Oscillations Network (BiSON)

    NASA Astrophysics Data System (ADS)

    Hale, S. J.; Howe, R.; Chaplin, W. J.; Davies, G. R.; Elsworth, Y. P.

    2016-01-01

The Birmingham Solar-Oscillations Network (BiSON) has been operating with a full complement of six stations since 1992. Over 20 years later, we look back on the network's history. The meta-data from the sites have been analysed to assess performance in terms of site insolation, with a brief look at the challenges that have been encountered over the years. We explain how the international community can gain easy access to the ever-growing dataset produced by the network, and finally look to the future of the network and the potential impact of nearly 25 years of technology miniaturisation.
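A hedged sketch of one way to quantify network fill from site meta-data (not BiSON's actual analysis): merge the stations' observing intervals and compute the fraction of time at least one site was collecting data.

```python
def network_duty_cycle(intervals, span):
    """Fraction of `span` covered by at least one station's observing
    intervals, each given as (start, end) in the same time units."""
    merged = []
    for s, e in sorted(intervals):
        if merged and s <= merged[-1][1]:
            # Overlaps the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return sum(e - s for s, e in merged) / span

# Two sites with overlapping daylight windows over a 24-hour span
assert network_duty_cycle([(6, 18), (14, 24)], 24) == 0.75
```

Spreading six sites in longitude is what pushes this fraction toward continuous coverage of the Sun.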

  13. A Space Vector Modulation Scheme for Matrix Converter that Gives Top Priority to the Improvement of the Output Control Performance

    NASA Astrophysics Data System (ADS)

    Tadano, Yugo; Hamada, Shizunori; Urushibata, Shota; Nomura, Masakatsu; Sato, Yukihiko; Ishida, Muneaki

This paper proposes a novel conversion scheme of switching patterns for three-phase to three-phase matrix converters. The conventional virtual indirect conversion method is equivalent to the PWM technique of an ordinary PWM rectifier/inverter, offering the advantage that no complicated specialized control is required. On the other hand, 6 of the 27 switching patterns cannot be used in this method, because the inputs and outputs are always connected by way of the virtual DC link, which is composed of two lines. This paper therefore defines direct space vectors that can express all 27 switching patterns, and utilizes the geometric relationship of the direct space vectors so that all switching patterns can be converted from other arbitrary vectors. This conversion scheme also allows the duty factors to be converted with a simple calculation by utilizing the duties of the virtual indirect conversion approach. In particular, the above-mentioned 6 switching patterns that have been restricted can be fully utilized to reduce output voltage harmonics, switching losses, and common-mode voltages. The validity of the proposed conversion method is verified by experimental results compared with a conventional virtual indirect method.
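The 27 switching patterns, and the 6 excluded by virtual indirect modulation, follow from simple enumeration: each of the three output phases connects to exactly one of the three input phases, and the 6 "rotating" patterns are those connecting all three outputs to distinct inputs.

```python
from itertools import product

# All 3^3 = 27 switching states of a 3x3 matrix converter: each output
# phase (A, B, C) connects to exactly one input phase (a, b, c).
states = list(product("abc", repeat=3))

zero = [s for s in states if len(set(s)) == 1]        # all outputs on one input
stationary = [s for s in states if len(set(s)) == 2]  # two outputs share an input
rotating = [s for s in states if len(set(s)) == 3]    # all inputs used: the 6
                                                      # unusable via a virtual DC link

assert len(states) == 27
assert (len(zero), len(stationary), len(rotating)) == (3, 18, 6)
```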

  14. Analysis on the performance dependency of channel models in a wireless peer-to-peer network

    NASA Astrophysics Data System (ADS)

    Wang, Yupeng; Liu, Tianlong; Yu, Zelong; Li, Yufeng

    2017-08-01

In order to reduce the simulation complexity and time of peer-to-peer networks such as ad hoc networks, most simulations use only the simplified Free Space model or Two Ray Ground model to approximate the attenuation of wireless transmission, without considering the dependency between system performance and channel models. In this paper, the effects of channel models on wireless peer-to-peer network performance are analyzed in more detail, using conventional routing and medium access control algorithms to find the system's performance sensitivity to different channel models. Through computer simulation using Network Simulator 2 (ns-2), we found that some aspects of system performance are sensitive only to large-scale fading effects, while others are not.
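The two simplified channel models named above have well-known forms; these are the standard Friis and Two Ray Ground equations as used in ns-2, with illustrative parameter values:

```python
import math

def friis_rx_power(pt, gt, gr, wavelength, d):
    """Free Space (Friis) model: received power falls off as 1/d^2."""
    return pt * gt * gr * wavelength**2 / ((4 * math.pi * d) ** 2)

def two_ray_rx_power(pt, gt, gr, ht, hr, d):
    """Two Ray Ground model: 1/d^4 beyond the crossover distance."""
    return pt * gt * gr * ht**2 * hr**2 / d**4

# ns-2 switches between the models at the crossover distance
# dc = 4*pi*ht*hr/lambda, where the two predictions coincide.
wl, ht, hr = 0.125, 1.5, 1.5  # ~2.4 GHz carrier, typical antenna heights (m)
dc = 4 * math.pi * ht * hr / wl
a = friis_rx_power(1, 1, 1, wl, dc)
b = two_ray_rx_power(1, 1, 1, ht, hr, dc)
assert abs(a - b) / a < 1e-9
```

The much steeper 1/d^4 roll-off beyond the crossover is exactly the kind of large-scale effect the paper finds some performance metrics sensitive to.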

  15. Can artificial neural networks provide an "expert's" view of medical students performances on computer based simulations?

    PubMed Central

    Stevens, R. H.; Najafi, K.

    1992-01-01

Artificial neural networks were trained to recognize the test-selection patterns of students' successful solutions to seven immunology computer-based simulations. When a new student's test selections were presented to the trained neural network, their problem solutions were correctly classified as successful or non-successful > 90% of the time. Examination of the neural network's output weights after each test selection revealed a progressive increase for the relevant problem, suggesting that a successful solution was represented by the neural network as the accumulation of relevant tests. Unsuccessful problem solutions revealed two patterns of student performance. The first pattern was characterized by low neural network output weights for all seven problems, reflecting extensive searching and a lack of recognition of relevant information. In the second pattern, the output weights from the neural network were biased towards one of the remaining six incorrect problems, suggesting that the student misrepresented the current problem as an instance of a previous problem. PMID:1482863
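As a toy stand-in for the trained networks described above (the abstract does not specify the architecture), a perceptron can classify a binary vector of test selections as a successful or unsuccessful solution. The data and feature layout are invented.

```python
import random

def train_perceptron(samples, labels, epochs=100, lr=0.1, seed=1):
    """Train a single-layer perceptron on binary test-selection vectors;
    label 1 = successful solution, 0 = unsuccessful."""
    rng = random.Random(seed)
    n = len(samples[0])
    w = [rng.uniform(-0.1, 0.1) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Successful solutions select the relevant tests (first three features).
X = [[1, 1, 1, 0, 0], [1, 1, 0, 0, 0], [0, 0, 1, 1, 1], [0, 0, 0, 1, 1]]
y = [1, 1, 0, 0]
w, b = train_perceptron(X, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in X]
assert preds == y
```

The learned weights concentrate on the relevant tests, loosely mirroring the paper's observation that successful solutions look like an accumulation of relevant tests.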

  16. A comprehensive approach for evaluating network performance in surface and borehole seismic monitoring

    NASA Astrophysics Data System (ADS)

    Stabile, T. A.; Iannaccone, G.; Zollo, A.; Lomax, A.; Ferulano, M. F.; Vetri, M. L. V.; Barzaghi, L. P.

    2013-02-01

The accurate determination of locations and magnitudes of seismic events in a monitored region is important for many scientific, industrial and military studies and applications; for these purposes a wide variety of seismic networks are deployed throughout the world. It is crucial to know the performance of these networks not only in detecting and locating seismic events of different sizes throughout a specified source region, but also in terms of their location errors as a function of magnitude and source location. In this framework, we have developed a method for evaluating network performance in surface and borehole seismic monitoring. For a specified network geometry, station characteristics and a target monitoring volume, the method determines the lowest magnitude of events that the seismic network is able to detect (Mwdetect) and locate (Mwloc), and estimates the expected location and origin time errors for a specified magnitude. Many of the features related to the seismic signal recorded at a single station are considered in this methodology, including characteristics of the seismic source, the instrument response, the ambient noise level, wave propagation in a layered, anelastic medium and uncertainties on waveform measures and the velocity model. We applied this method to two different network types: a local earthquake monitoring network, the Irpinia Seismic Network (ISNet), installed along the Campania-Lucania Apennine chain in Southern Italy, and a hypothetical borehole network for monitoring microfractures induced during the hydrocarbon extraction process in an oil field. The method we present may be used to aid in enhancing existing networks and/or understanding their capabilities, such as for the ISNet case study, or to optimally design the network geometry in specific target regions, as for the borehole network example.
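The paper's methodology accounts for source characteristics, instrument response, noise, and anelastic propagation; as a heavily simplified sketch of the idea of a per-station detection threshold, here is a generic Hutton-Boore-style local-magnitude inversion, which is not the authors' model:

```python
import math

def min_detectable_mag(noise_amp, dist_km, snr=3.0):
    """Smallest magnitude visible at one station: invert a generic
    local-magnitude relation M = log10(A) + 1.11*log10(r/100)
    + 0.00189*(r-100) + 3.0 at the amplitude snr * noise."""
    a_min = snr * noise_amp
    return (math.log10(a_min) + 1.11 * math.log10(dist_km / 100)
            + 0.00189 * (dist_km - 100) + 3.0)

def network_mw_detect(noise_amps, dists_km, min_stations=4):
    """The network detects an event once at least `min_stations` stations
    see it: take the min_stations-th smallest per-station threshold."""
    mags = sorted(min_detectable_mag(n, d)
                  for n, d in zip(noise_amps, dists_km))
    return mags[min_stations - 1]
```

Closer or quieter stations lower the single-station threshold; the network-wide Mwdetect is set by the quorum of stations required for a location.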

  17. Altered small-world brain networks in schizophrenia patients during working memory performance.

    PubMed

    He, Hao; Sui, Jing; Yu, Qingbao; Turner, Jessica A; Ho, Beng-Choon; Sponheim, Scott R; Manoach, Dara S; Clark, Vincent P; Calhoun, Vince D

    2012-01-01

Impairment of working memory (WM) performance in schizophrenia patients (SZ) is well-established. Compared to healthy controls (HC), SZ patients show aberrant blood oxygen level dependent (BOLD) activations and disrupted functional connectivity during WM performance. In this study, we examined the small-world network metrics computed from functional magnetic resonance imaging (fMRI) data collected as 35 HC and 35 SZ performed a Sternberg Item Recognition Paradigm (SIRP) at three WM load levels. Functional connectivity networks were built by calculating the partial correlation on preprocessed time courses of BOLD signal between task-related brain regions of interest (ROIs) defined by group independent component analysis (ICA). The networks were then thresholded within the small-world regime, resulting in undirected binarized small-world networks at different working memory loads. Our results showed: 1) at the medium WM load level, the networks in SZ showed a lower clustering coefficient and less local efficiency compared with HC; 2) in SZ, most network measures altered significantly as the WM load level increased from low to medium and from medium to high, while the network metrics were relatively stable in HC at different WM loads; and 3) the altered structure at medium WM load in SZ was related to their performance during the task, with longer reaction time related to lower clustering coefficient and lower local efficiency. These findings suggest that brain connectivity in patients with SZ was more diffuse and less strongly linked locally in the functional network at an intermediate WM load when compared to HC. SZ show distinctly inefficient and variable network structures in response to WM load increases, compared to the stable, highly clustered network topologies in HC.
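The clustering coefficient used in such analyses has a standard definition. A small pure-Python sketch on a toy undirected graph (not the fMRI-derived networks):

```python
from itertools import combinations

def clustering_coefficient(adj):
    """Mean local clustering of an undirected graph given as a dict
    node -> set of neighbours: the fraction of each node's neighbour
    pairs that are themselves connected, averaged over all nodes."""
    total, n = 0.0, len(adj)
    for v, nbrs in adj.items():
        if len(nbrs) < 2:
            continue  # degree-0/1 nodes contribute 0
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += links / (len(nbrs) * (len(nbrs) - 1) / 2)
    return total / n

# A triangle (nodes 0-1-2) with a tail node 3 attached to node 2
triangle_plus_tail = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
assert round(clustering_coefficient(triangle_plus_tail), 3) == 0.583
```

Lower values of this metric in SZ at medium WM load are the "less locally clustered" finding reported above.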

  18. Long-Term Effects of Attentional Performance on Functional Brain Network Topology

    PubMed Central

    Breckel, Thomas P. K.; Thiel, Christiane M.; Bullmore, Edward T.; Zalesky, Andrew; Patel, Ameera X.; Giessing, Carsten

    2013-01-01

    Individuals differ in their cognitive resilience. Less resilient people demonstrate a greater tendency to vigilance decrements within sustained attention tasks. We hypothesized that a period of sustained attention is followed by prolonged changes in the organization of “resting state” brain networks and that individual differences in cognitive resilience are related to differences in post-task network reorganization. We compared the topological and spatial properties of brain networks as derived from functional MRI data (N = 20) recorded for 6 mins before and 12 mins after the performance of an attentional task. Furthermore we analysed changes in brain topology during task performance and during the switches between rest and task conditions. The cognitive resilience of each individual was quantified as the rate of increase in response latencies over the 32-minute time course of the attentional paradigm. On average, functional networks measured immediately post-task demonstrated significant and prolonged changes in network organization compared to pre-task networks with higher connectivity strength, more clustering, less efficiency, and shorter distance connections. Individual differences in cognitive resilience were significantly correlated with differences in the degree of recovery of some network parameters. Changes in network measures were still present in less resilient individuals in the second half of the post-task period (i.e. 6–12 mins after task completion), while resilient individuals already demonstrated significant reductions of functional connectivity and clustering towards pre-task levels. During task performance brain topology became more integrated with less clustering and higher global efficiency, but linearly decreased with ongoing time-on-task. We conclude that sustained attentional task performance has prolonged, “hang-over” effects on the organization of post-task resting-state brain networks; and that more cognitively resilient
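The resilience index described above, the rate of increase in response latencies over time-on-task, can be sketched as an ordinary least-squares slope against trial index (the reaction-time data below are invented):

```python
def resilience_slope(reaction_times_ms):
    """OLS slope of response latency over trial index: the larger the
    slope, the stronger the vigilance decrement (less resilient)."""
    n = len(reaction_times_ms)
    mx = (n - 1) / 2
    my = sum(reaction_times_ms) / n
    num = sum((x - mx) * (y - my)
              for x, y in enumerate(reaction_times_ms))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

steady = [400, 402, 398, 401, 399, 400]     # resilient: flat latencies
drifting = [400, 420, 445, 460, 480, 505]   # less resilient: rising latencies
assert abs(resilience_slope(steady)) < 2
assert resilience_slope(drifting) > 15
```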

  19. Consensus group sessions: a useful method to reconcile stakeholders’ perspectives about network performance evaluation

    PubMed Central

    Lamontagne, Marie-Eve; Swaine, Bonnie R; Lavoie, André; Champagne, François; Marcotte, Anne-Claire

    2010-01-01

    Background Having a common vision among network stakeholders is an important ingredient in developing a performance evaluation process. Consensus methods may be a viable means to reconcile the perceptions of different stakeholders about the dimensions to include in a performance evaluation framework. Objectives To determine whether individual organizations within traumatic brain injury (TBI) networks differ in perceptions about the importance of performance dimensions for the evaluation of TBI networks and to explore the extent to which group consensus sessions could reconcile these perceptions. Methods We used TRIAGE, a consensus technique that combines individual and group data collection phases to explore the perceptions of network stakeholders and to reach a consensus within structured group discussions. Results One hundred and thirty-nine professionals from 43 organizations within eight TBI networks participated in the individual data collection; 62 professionals from these same organizations contributed to the group data collection. The extent of consensus based on questionnaire results (i.e. the individual data collection) was low; however, 100% agreement was obtained for each network during the consensus group sessions. The median importance scores and mean ranks attributed to the dimensions by individuals compared to groups did not differ greatly. Group discussions were found useful for understanding the reasons motivating the scoring, for resolving differences among participants, and for harmonizing their values. Conclusion Group discussions, as part of a consensus technique, appear to be a useful process to reconcile diverging perceptions of network performance among stakeholders. PMID:21289996

  20. Mapping the social network: tracking lice in a wild primate (Microcebus rufus) population to infer social contacts and vector potential

    PubMed Central

    2012-01-01

    previously unseen parasite movement between lemurs, but also allowed us to infer social interactions between them. As lice are known pathogen vectors, our method also allowed us to identify the lemurs most likely to facilitate louse-mediated epidemics. Our approach demonstrates the potential to uncover otherwise inaccessible parasite-host, and host social interaction data in any trappable species parasitized by sucking lice. PMID:22449178

  1. On the Performance of the Underwater Acoustic Sensor Networks

    DTIC Science & Technology

    2015-05-01

    UWSN has many constraints, mainly due to limited capacity, propagation loss, and power limitation, since in the underwater environment solar energy...

  2. Hybrid Neural-Network: Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics Developed and Demonstrated

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2002-01-01

    As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.
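
    The genetic-algorithm half of such a hybrid architecture can be illustrated with a minimal, generic sketch. The residual function, population size, and rates below are invented for illustration and are unrelated to NASA's engine model:

```python
import random

def genetic_minimize(residual, n_genes, pop_size=40, generations=60, seed=0):
    """Minimal real-coded genetic algorithm: elitism, tournament selection,
    uniform crossover and Gaussian mutation, minimizing `residual`."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(n_genes)]
           for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if residual(a) < residual(b) else b

    for _ in range(generations):
        elite = min(pop, key=residual)           # keep the best found so far
        new_pop = [list(elite)]
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            child = [g1 if rng.random() < 0.5 else g2
                     for g1, g2 in zip(p1, p2)]  # uniform crossover
            if rng.random() < 0.2:               # mutate one gene
                i = rng.randrange(n_genes)
                child[i] += rng.gauss(0.0, 0.1)
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=residual)
```

    In a fault-isolation setting, `residual` would score how well a candidate fault hypothesis explains the observed sensor-suite signature; here any smooth function works.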

  3. Study on multiple-hops performance of MOOC sequences-based optical labels for OPS networks

    NASA Astrophysics Data System (ADS)

    Zhang, Chongfu; Qiu, Kun; Ma, Chunli

    2009-11-01

    In this paper, we use a new analytical method, which assumes independent multiple optical orthogonal codes, to derive the probability function of MOOCS-OPS networks, discuss their performance characteristics for a variety of parameters, and compare some characteristics of systems employing single optical orthogonal code-based or multiple optical orthogonal codes sequences-based optical labels. The performance of the system is also calculated, and our results verify that the method is effective. Additionally, it is found that the performance of MOOCS-OPS networks is worse than that of single optical orthogonal code-based optical labels for optical packet switching (SOOC-OPS); however, MOOCS-OPS networks can greatly enlarge the scalability of optical packet switching networks.

  4. Team Sports Performance Analysed Through the Lens of Social Network Theory: Implications for Research and Practice.

    PubMed

    Ribeiro, João; Silva, Pedro; Duarte, Ricardo; Davids, Keith; Garganta, Júlio

    2017-02-15

    This paper discusses how social network analyses and graph theory can be implemented in team sports performance analyses to evaluate individual (micro) and collective (macro) performance data, and how to use this information for designing practice tasks. Moreover, we briefly outline possible limitations of social network studies and provide suggestions for future research. Instead of cataloguing discrete events or player actions, it has been argued that researchers need to consider the synergistic interpersonal processes emerging between teammates in competitive performance environments. Theoretical assumptions on team coordination prompted the emergence of innovative, theoretically driven methods for assessing collective team sport behaviours. Here, we contribute to this theoretical and practical debate by re-conceptualising sports teams as complex social networks. From this perspective, players are viewed as network nodes, connected through relevant information variables (e.g. a ball-passing action), sustaining complex patterns of interaction between teammates (e.g. a ball-passing network). Specialised tools and metrics related to graph theory could be applied to evaluate structural and topological properties of interpersonal interactions of teammates, complementing more traditional analysis methods. This innovative methodology moves beyond the use of common notation analysis methods, providing a richer understanding of the complexity of interpersonal interactions sustaining collective team sports performance. The proposed approach provides practical applications for coaches, performance analysts, practitioners and researchers by establishing social network analyses as a useful approach for capturing the emergent properties of interactions between players in sports teams.

  5. Cloning vector

    DOEpatents

    Guilfoyle, R.A.; Smith, L.M.

    1994-12-27

    A vector comprising a filamentous phage sequence containing a first copy of filamentous phage gene X and other sequences necessary for the phage to propagate is disclosed. The vector also contains a second copy of filamentous phage gene X downstream from a promoter capable of promoting transcription in a bacterial host. In a preferred form of the present invention, the filamentous phage is M13 and the vector additionally includes a restriction endonuclease site located in such a manner as to substantially inactivate the second gene X when a DNA sequence is inserted into the restriction site. 2 figures.

  6. Cloning vector

    DOEpatents

    Guilfoyle, Richard A.; Smith, Lloyd M.

    1994-01-01

    A vector comprising a filamentous phage sequence containing a first copy of filamentous phage gene X and other sequences necessary for the phage to propagate is disclosed. The vector also contains a second copy of filamentous phage gene X downstream from a promoter capable of promoting transcription in a bacterial host. In a preferred form of the present invention, the filamentous phage is M13 and the vector additionally includes a restriction endonuclease site located in such a manner as to substantially inactivate the second gene X when a DNA sequence is inserted into the restriction site.

  7. Analysis of Latency Performance of Bluetooth Low Energy (BLE) Networks

    PubMed Central

    Cho, Keuchul; Park, Woojin; Hong, Moonki; Park, Gisu; Cho, Wooseong; Seo, Jihoon; Han, Kijun

    2015-01-01

    Bluetooth Low Energy (BLE) is a short-range wireless communication technology aiming at low-cost and low-power communication. The performance evaluation of classical Bluetooth device discovery has been intensively studied using analytical modeling and simulative methods, but these techniques are not applicable to BLE, since BLE has a fundamental change in the design of the discovery mechanism, including the usage of three advertising channels. Recently, several works have analyzed the topic of BLE device discovery, but these studies are still far from thorough. It is thus necessary to develop a new, accurate model for the BLE discovery process. In particular, the wide range of parameter settings introduces lots of potential for BLE devices to customize their discovery performance. This motivates our study of modeling the BLE discovery process and performing intensive simulation. This paper is focused on building an analytical model to investigate the discovery probability, as well as the expected discovery latency, which are then validated via extensive experiments. Our analysis considers both continuous and discontinuous scanning modes. We analyze the sensitivity of these performance metrics to parameter settings to quantitatively examine to what extent parameters influence the performance metric of the discovery processes. PMID:25545266
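
    To illustrate the kind of discovery model discussed above, here is a deliberately simplified Monte Carlo sketch for a single advertising channel. Real BLE cycles three advertising channels, and the parameter values below are illustrative, not taken from the paper:

```python
import random

def discovery_latency(adv_interval, scan_interval, scan_window,
                      adv_delay_max=0.01, trials=2000, seed=1):
    """Monte Carlo estimate of mean discovery latency (seconds) on a single
    advertising channel. The scanner listens for scan_window at the start of
    every scan_interval; the advertiser transmits every adv_interval plus a
    random delay in [0, adv_delay_max] (mimicking the spec's advDelay).
    Discovery = first advertising packet falling inside a scan window."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        offset = rng.uniform(0, scan_interval)  # random phase between devices
        t = 0.0
        while True:
            t += adv_interval + rng.uniform(0, adv_delay_max)
            phase = (t + offset) % scan_interval
            if phase < scan_window:
                total += t
                break
    return total / trials
```

    With continuous scanning (scan_window equal to scan_interval) the latency collapses to roughly one advertising interval; duty-cycled scanning raises it, which is the trade-off the paper's sensitivity analysis quantifies.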

  8. Analysis of latency performance of bluetooth low energy (BLE) networks.

    PubMed

    Cho, Keuchul; Park, Woojin; Hong, Moonki; Park, Gisu; Cho, Wooseong; Seo, Jihoon; Han, Kijun

    2014-12-23

    Bluetooth Low Energy (BLE) is a short-range wireless communication technology aiming at low-cost and low-power communication. The performance evaluation of classical Bluetooth device discovery has been intensively studied using analytical modeling and simulative methods, but these techniques are not applicable to BLE, since BLE has a fundamental change in the design of the discovery mechanism, including the usage of three advertising channels. Recently, several works have analyzed the topic of BLE device discovery, but these studies are still far from thorough. It is thus necessary to develop a new, accurate model for the BLE discovery process. In particular, the wide range of parameter settings introduces lots of potential for BLE devices to customize their discovery performance. This motivates our study of modeling the BLE discovery process and performing intensive simulation. This paper is focused on building an analytical model to investigate the discovery probability, as well as the expected discovery latency, which are then validated via extensive experiments. Our analysis considers both continuous and discontinuous scanning modes. We analyze the sensitivity of these performance metrics to parameter settings to quantitatively examine to what extent parameters influence the performance metric of the discovery processes.

  9. Long-running telemedicine networks delivering humanitarian services: experience, performance and scientific output.

    PubMed

    Wootton, Richard; Geissbuhler, Antoine; Jethwani, Kamal; Kovarik, Carrie; Person, Donald A; Vladzymyrskyy, Anton; Zanaboni, Paolo; Zolfo, Maria

    2012-05-01

    To summarize the experience, performance and scientific output of long-running telemedicine networks delivering humanitarian services. Nine long-running networks--those operating for five years or more--were identified and seven provided detailed information about their activities, including performance and scientific output. Information was extracted from peer-reviewed papers describing the networks' study design, effectiveness, quality, economics, provision of access to care and sustainability. The strength of the evidence was scored as none, poor, average or good. The seven networks had been operating for a median of 11 years (range: 5-15). All networks provided clinical tele-consultations for humanitarian purposes using store-and-forward methods and five were also involved in some form of education. The smallest network had 15 experts and the largest had more than 500. The clinical caseload was 50 to 500 cases a year. A total of 59 papers had been published by the networks, and 44 were listed in Medline. Based on study design, the strength of the evidence was generally poor by conventional standards (e.g. 29 papers described non-controlled clinical series). Over half of the papers provided evidence of sustainability and improved access to care. Uncertain funding was a common risk factor. Improved collaboration between networks could help attenuate the lack of resources reported by some networks and improve sustainability. Although the evidence base is weak, the networks appear to offer sustainable and clinically useful services. These findings may interest decision-makers in developing countries considering starting, supporting or joining similar telemedicine networks.

  10. Network Characteristics of Successful Performance in Association Football. A Study on the UEFA Champions League

    PubMed Central

    Pina, Tiago J.; Paulo, Ana; Araújo, Duarte

    2017-01-01

    The synergistic interaction between teammates in association football has properties that can be captured by Social Network Analysis (SNA). The analysis of networks formed by team players passing a ball in a match shows that team success is correlated with high network density and clustering coefficient, as well as with reduced network centralization. However, oversimplification needs to be avoided, as network metrics from events associated with success should not be weighted equally with those from events that are not. In the present study, we investigated whether network density, clustering coefficient and centralization can predict successful or unsuccessful team performance. We analyzed 12 games of the Group Stage of UEFA Champions League 2015/2016 Group C by using public records from TV broadcasts. Notational analyses were performed to categorize attacking sequences as successful or unsuccessful, and to collect data on the ball-passing networks. The network metrics were then computed. A hierarchical logistic-regression model was used to predict the successfulness of the offensive plays from network density, clustering coefficient and centralization, after controlling for the effect of total passes on successfulness of offensive plays. Results confirmed the independent effect of network metrics. Density, but not clustering coefficient or centralization, was a significant predictor of the successfulness of offensive plays. We found a negative relation between density and successfulness of offensive plays. However, reduced density was associated with a higher number of offensive plays, albeit mostly unsuccessful. Conversely, high density was associated with a lower number of successful offensive plays (SOPs), but also with overall fewer offensive plays and “ball possession losses” before the attacking team entered the finishing zone. Independent SNA of team performance is important to minimize the limitations of oversimplifying effective team synergies.
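
    Network density, the significant predictor found here, is straightforward to compute from a list of passes. The sketch below uses the standard directed-graph definition and hypothetical position labels; the study's software may differ in detail:

```python
def passing_network_density(passes, players):
    """Density of a directed ball-passing network: the number of distinct
    passer->receiver links observed, divided by the n*(n-1) possible links.

    passes: list of (passer, receiver) tuples; players: list of node labels."""
    links = {(a, b) for a, b in passes if a != b}
    n = len(players)
    return len(links) / (n * (n - 1))
```

    For four players with four distinct directed links, the density is 4/12; repeated passes along the same link do not raise it, which is why density captures breadth of interaction rather than volume.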

  11. A Method for Integrating Thrust-Vectoring and Actuated Forebody Strakes with Conventional Aerodynamic Controls on a High-Performance Fighter Airplane

    NASA Technical Reports Server (NTRS)

    Lallman, Frederick J.; Davidson, John B.; Murphy, Patrick C.

    1998-01-01

    A method, called pseudo controls, of integrating several airplane controls to achieve cooperative operation is presented. The method eliminates conflicting control motions, minimizes the number of feedback control gains, and reduces the complication of feedback gain schedules. The method is applied to the lateral/directional controls of a modified high-performance airplane. The airplane has a conventional set of aerodynamic controls, an experimental set of thrust-vectoring controls, and an experimental set of actuated forebody strakes. The experimental controls give the airplane additional control power for enhanced stability and maneuvering capabilities while flying over an expanded envelope, especially at high angles of attack. The flight controls are scheduled to generate independent body-axis control moments. These control moments are coordinated to produce stability-axis angular accelerations. Inertial coupling moments are compensated. Thrust-vectoring controls are engaged according to their effectiveness relative to that of the aerodynamic controls. Vane-relief logic removes steady and slowly varying commands from the thrust-vectoring controls to alleviate heating of the thrust turning devices. The actuated forebody strakes are engaged at high angles of attack. This report presents the forward-loop elements of a flight control system that positions the flight controls according to the desired stability-axis accelerations. This report does not include the generation of the required angular acceleration commands by means of pilot controls or the feedback of sensed airplane motions.
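
    One common way to realize the behaviour described (thrust vectoring engaged only when it adds control power beyond the aerodynamic surfaces) is daisy-chain allocation. The sketch below uses invented limits and is a simplification, not the report's control law:

```python
def allocate_moment(demand, aero_limit, tv_limit):
    """Split a commanded control moment between aerodynamic surfaces and
    thrust vectoring: the aerodynamic surfaces take as much of the demand
    as they can, and thrust vectoring covers the remainder, up to its own
    limit. All quantities are in the same (normalized) moment units."""
    aero = max(-aero_limit, min(aero_limit, demand))
    residual = demand - aero
    tv = max(-tv_limit, min(tv_limit, residual))
    return aero, tv
```

    Keeping the thrust-vectoring channel idle whenever the aerodynamic surfaces suffice mirrors the report's vane-relief goal of limiting heating of the thrust turning devices.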

  12. Static thrust-vectoring performance of nonaxisymmetric convergent-divergent nozzles with post-exit yaw vanes. M.S. Thesis - George Washington Univ., Aug. 1988

    NASA Technical Reports Server (NTRS)

    Foley, Robert J.; Pendergraft, Odis C., Jr.

    1991-01-01

    A static (wind-off) test was conducted in the Static Test Facility of the 16-ft transonic tunnel to determine the performance and turning effectiveness of post-exit yaw vanes installed on two-dimensional convergent-divergent nozzles. One nozzle design that was previously tested was used as a baseline, simulating dry power and afterburning power nozzles at both 0 and 20 degree pitch vectoring conditions. Vanes were installed on these four nozzle configurations to study the effects of vane deflection angle, longitudinal and lateral location, size, and camber. All vanes were hinged at the nozzle sidewall exit, and in addition, some were also hinged at the vane quarter chord (double-hinged). The vane concepts tested generally produced yaw thrust vectoring angles much smaller than the geometric vane angles, with resultant thrust losses of up to 8 percent. When the nozzles were pitch vectored, yawing effectiveness decreased as the vanes were moved downstream. Thrust penalties and yawing effectiveness both decreased rapidly as the vanes were moved outboard (laterally). Increases in vane length and height increased both yawing effectiveness and thrust ratio losses, while vane camber and double-hinged vanes increased resultant yaw angles by 50 to 100 percent.

  13. High-performance parallel interface to synchronous optical network gateway

    DOEpatents

    St. John, W.B.; DuBois, D.H.

    1996-12-03

    Disclosed is a system of sending and receiving gateways interconnects high speed data interfaces, e.g., HIPPI interfaces, through fiber optic links, e.g., a SONET network. An electronic stripe distributor distributes bytes of data from a first interface at the sending gateway onto parallel fiber optics of the fiber optic link to form transmitted data. An electronic stripe collector receives the transmitted data on the parallel fiber optics and reforms the data into a format effective for input to a second interface at the receiving gateway. Preferably, an error correcting syndrome is constructed at the sending gateway and sent with a data frame so that transmission errors can be detected and corrected on a real-time basis. Since the high speed data interface operates faster than any of the fiber optic links, the transmission rate must be adapted to match the available number of fiber optic links, so the sending and receiving gateways monitor the availability of fiber links and adjust the data throughput accordingly. In another aspect, the receiving gateway must have sufficient available buffer capacity to accept an incoming data frame. A credit-based flow control system provides for continuously updating the sending gateway on the available buffer capacity at the receiving gateway. 7 figs.
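
    The credit-based flow control idea in the patent can be sketched minimally as follows; the class and method names are invented for illustration and do not come from the patent:

```python
class CreditLink:
    """Credit-based flow control sketch: the receiver grants one credit per
    free buffer slot, and the sender may transmit only while it holds
    credits, so the receiver's buffer can never overflow."""

    def __init__(self, buffer_slots):
        self.capacity = buffer_slots
        self.credits = buffer_slots   # initial grant covers the whole buffer
        self.buffer = []

    def send(self, frame):
        """Sender side: transmit a frame if a credit is available."""
        if self.credits == 0:
            return False              # no credits: sender must wait
        self.credits -= 1
        self.buffer.append(frame)     # frame lands in the receiver's buffer
        return True

    def consume(self):
        """Receiver side: drain one frame and return a credit to the sender."""
        frame = self.buffer.pop(0)
        self.credits += 1
        return frame
```

    Because every in-flight frame is backed by a previously granted credit, the invariant len(buffer) <= capacity holds without any reactive back-pressure signalling.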

  14. High-performance parallel interface to synchronous optical network gateway

    DOEpatents

    St. John, Wallace B.; DuBois, David H.

    1996-01-01

    A system of sending and receiving gateways interconnects high speed data interfaces, e.g., HIPPI interfaces, through fiber optic links, e.g., a SONET network. An electronic stripe distributor distributes bytes of data from a first interface at the sending gateway onto parallel fiber optics of the fiber optic link to form transmitted data. An electronic stripe collector receives the transmitted data on the parallel fiber optics and reforms the data into a format effective for input to a second interface at the receiving gateway. Preferably, an error correcting syndrome is constructed at the sending gateway and sent with a data frame so that transmission errors can be detected and corrected on a real-time basis. Since the high speed data interface operates faster than any of the fiber optic links, the transmission rate must be adapted to match the available number of fiber optic links, so the sending and receiving gateways monitor the availability of fiber links and adjust the data throughput accordingly. In another aspect, the receiving gateway must have sufficient available buffer capacity to accept an incoming data frame. A credit-based flow control system provides for continuously updating the sending gateway on the available buffer capacity at the receiving gateway.

  15. A comparison of back propagation and Generalized Regression Neural Networks performance in neutron spectrometry.

    PubMed

    Martínez-Blanco, Ma Del Rosario; Ornelas-Vargas, Gerardo; Solís-Sánchez, Luis Octavio; Castañeda-Miranada, Rodrigo; Vega-Carrillo, Héctor René; Celaya-Padilla, José M; Garza-Veloz, Idalia; Martínez-Fierro, Margarita; Ortiz-Rodríguez, José Manuel

    2016-11-01

    The process of unfolding the neutron energy spectrum has been a subject of research for many years. Monte Carlo, iterative methods, Bayesian theory, and the principle of maximum entropy are some of the methods used. The drawbacks associated with traditional unfolding procedures have motivated the research of complementary approaches. Back Propagation Neural Networks (BPNN) have been applied with success in the neutron spectrometry and dosimetry domains; however, the structure and learning parameters are factors that strongly affect network performance. In the ANN domain, the Generalized Regression Neural Network (GRNN) is one of the simplest neural networks in terms of network architecture and learning algorithm. The learning is instantaneous, requiring no time for training. Unlike a BPNN, a GRNN is formed instantly with just a one-pass training on the development data. In the network development phase, the only hurdle is to optimize the hyper-parameter sigma, which governs the smoothness of the network. The aim of this work was to compare the performance of BPNN and GRNN in the solution of the neutron spectrometry problem. From the results obtained, it can be observed that, despite very similar results, GRNN performs better than BPNN. Copyright © 2016 Elsevier Ltd. All rights reserved.
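
    The GRNN's one-pass character follows directly from its definition as a kernel-weighted average of the training targets. A minimal sketch (Euclidean distance, a single sigma, invented function name; real spectrometry inputs would be count-rate vectors):

```python
import math

def grnn_predict(train_x, train_y, query, sigma):
    """Generalized Regression Neural Network prediction: a Gaussian-kernel
    weighted average of the training targets. 'Training' is just storing
    the data; sigma is the single smoothing hyper-parameter."""
    weights = [math.exp(-sum((xi - qi) ** 2 for xi, qi in zip(xv, query))
                        / (2.0 * sigma ** 2))
               for xv in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)
```

    A small sigma makes the network interpolate (predictions near a training point reproduce its target), while a large sigma smooths predictions toward the overall mean, which is exactly the trade-off the abstract calls "governing the smoothness of the network".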

  16. Social learning strategies modify the effect of network structure on group performance

    PubMed Central

    Barkoczi, Daniel; Galesic, Mirta

    2016-01-01

    The structure of communication networks is an important determinant of the capacity of teams, organizations and societies to solve policy, business and science problems. Yet, previous studies reached contradictory results about the relationship between network structure and performance, finding support for the superiority of both well-connected efficient and poorly connected inefficient network structures. Here we argue that understanding how communication networks affect group performance requires taking into consideration the social learning strategies of individual team members. We show that efficient networks outperform inefficient networks when individuals rely on conformity by copying the most frequent solution among their contacts. However, inefficient networks are superior when individuals follow the best member by copying the group member with the highest payoff. In addition, groups relying on conformity based on a small sample of others excel at complex tasks, while groups following the best member achieve greatest performance for simple tasks. Our findings reconcile contradictory results in the literature and have broad implications for the study of social learning across disciplines. PMID:27713417
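
    The two social learning strategies contrasted above can be sketched as simple imitation dynamics on a fixed network. This is a deliberately stripped-down sketch: it omits the paper's individual exploration step and task landscapes, and the payoff function and network below are invented for illustration:

```python
import random
from collections import Counter

def simulate(payoff, neighbours, strategy, steps=200, seed=0):
    """Imitation dynamics on a fixed network. At each step a random agent
    inspects its contacts and either copies their most frequent solution
    ('conformity') or the solution of the highest-payoff contact
    ('best-member'), adopting it only if it pays at least as much as its
    current solution. neighbours: dict node -> set of contact nodes."""
    rng = random.Random(seed)
    solutions = {a: rng.randrange(10) for a in neighbours}
    for _ in range(steps):
        agent = rng.choice(sorted(neighbours))
        contacts = sorted(neighbours[agent])
        if strategy == "conformity":
            candidate = Counter(solutions[c] for c in contacts).most_common(1)[0][0]
        else:  # "best-member"
            candidate = solutions[max(contacts, key=lambda c: payoff(solutions[c]))]
        if payoff(candidate) >= payoff(solutions[agent]):
            solutions[agent] = candidate
    return solutions
```

    On a fully connected (efficient) network, best-member copying spreads the top solution to everyone quickly; the paper's point is that this speed helps on simple tasks but causes premature convergence on complex ones.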

  17. Social learning strategies modify the effect of network structure on group performance

    NASA Astrophysics Data System (ADS)

    Barkoczi, Daniel; Galesic, Mirta

    2016-10-01

    The structure of communication networks is an important determinant of the capacity of teams, organizations and societies to solve policy, business and science problems. Yet, previous studies reached contradictory results about the relationship between network structure and performance, finding support for the superiority of both well-connected efficient and poorly connected inefficient network structures. Here we argue that understanding how communication networks affect group performance requires taking into consideration the social learning strategies of individual team members. We show that efficient networks outperform inefficient networks when individuals rely on conformity by copying the most frequent solution among their contacts. However, inefficient networks are superior when individuals follow the best member by copying the group member with the highest payoff. In addition, groups relying on conformity based on a small sample of others excel at complex tasks, while groups following the best member achieve greatest performance for simple tasks. Our findings reconcile contradictory results in the literature and have broad implications for the study of social learning across disciplines.

  18. Social learning strategies modify the effect of network structure on group performance.

    PubMed

    Barkoczi, Daniel; Galesic, Mirta

    2016-10-07

    The structure of communication networks is an important determinant of the capacity of teams, organizations and societies to solve policy, business and science problems. Yet, previous studies reached contradictory results about the relationship between network structure and performance, finding support for the superiority of both well-connected efficient and poorly connected inefficient network structures. Here we argue that understanding how communication networks affect group performance requires taking into consideration the social learning strategies of individual team members. We show that efficient networks outperform inefficient networks when individuals rely on conformity by copying the most frequent solution among their contacts. However, inefficient networks are superior when individuals follow the best member by copying the group member with the highest payoff. In addition, groups relying on conformity based on a small sample of others excel at complex tasks, while groups following the best member achieve greatest performance for simple tasks. Our findings reconcile contradictory results in the literature and have broad implications for the study of social learning across disciplines.

  19. Delays and user performance in human-computer-network interaction tasks.

    PubMed

    Caldwell, Barrett S; Wang, Enlie

    2009-12-01

    This article describes a series of studies conducted to examine factors affecting user perceptions, responses, and tolerance for network-based computer delays affecting distributed human-computer-network interaction (HCNI) tasks. HCNI tasks, even with increasing computing and network bandwidth capabilities, are still affected by human perceptions of delay and appropriate waiting times for information flow latencies. Six laboratory studies were conducted with university participants in China (Preliminary Experiments 1 through 3) and the United States (Experiments 4 through 6) to examine users' perceptions of elapsed time, the effect of perceived network task performance partners on delay tolerance, and expectations of appropriate delays based on task, situation, and network conditions. Results across the six experiments indicate that users' delay tolerance and estimated delay were affected by multiple task and expectation factors, including task complexity and importance, situation urgency and time availability, file size, and network bandwidth capacity. Results also suggest a range of user strategies for incorporating delay tolerance in task planning and performance. HCNI user experience is influenced by combinations of task requirements, constraints, and understandings of system performance; tolerance is a nonlinear function of time constraint ratios or decay. Appropriate user interface tools providing delay feedback information can help modify user expectations and delay tolerance. These tools are especially valuable when delay conditions exceed a few seconds or when task constraints and system demands are high. Interface designs for HCNI tasks should consider assistant-style presentations of delay feedback, information freshness, and network characteristics. Assistants should also gather awareness of user time constraints.

  20. Dynamic Social Networks in High Performance Football Coaching

    ERIC Educational Resources Information Center

    Occhino, Joseph; Mallett, Cliff; Rynne, Steven

    2013-01-01

    Background: Sports coaching is largely a social activity where engagement with athletes and support staff can enhance the experiences for all involved. This paper examines how high performance football coaches develop knowledge through their interactions with others within a social learning theory framework. Purpose: The key purpose of this study…

  1. Implementation and Performance Evaluation Using the Fuzzy Network Balanced Scorecard

    ERIC Educational Resources Information Center

    Tseng, Ming-Lang

    2010-01-01

    The balanced scorecard (BSC) is a multi-criteria evaluation concept that highlights the importance of performance measurement. However, although there is an abundance of literature on the BSC framework, there is a scarcity of literature regarding how the framework with dependence and interactive relationships should be properly implemented in…

  4. Support vector regression and artificial neural network models for stability indicating analysis of mebeverine hydrochloride and sulpiride mixtures in pharmaceutical preparation: A comparative study

    NASA Astrophysics Data System (ADS)

    Naguib, Ibrahim A.; Darwish, Hany W.

    2012-02-01

    A comparison between support vector regression (SVR) and artificial neural network (ANN) multivariate regression methods is established, outlining the underlying algorithm of each and comparing their inherent advantages and limitations. In this paper we compare SVR to ANN both with and without a variable-selection procedure (a genetic algorithm, GA). To ground the comparison, the methods are applied to the stability-indicating quantitative analysis of binary mixtures of mebeverine hydrochloride and sulpiride, as a case study, in the presence of their reported impurities and degradation products (six components in total) in raw materials and a pharmaceutical dosage form, using UV spectral data. A six-factor, five-level experimental design was established, yielding a training set of 25 mixtures containing different ratios of the interfering species. An independent test set of 5 mixtures was used to validate the prediction ability of the suggested models. The proposed methods (linear SVR without GA, and linear GA-ANN) were successfully applied to the analysis of pharmaceutical tablets containing mebeverine hydrochloride and sulpiride. The results illustrate the problem of nonlinearity and how models such as SVR and ANN can handle it, and demonstrate the ability of these multivariate calibration models to deconvolute the highly overlapped UV spectra of the six-component mixtures, using inexpensive and easy-to-handle instruments such as the UV spectrophotometer.
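    As a rough illustration of the calibration workflow described above (25 training mixtures, 5 validation mixtures, 6 overlapping components), the sketch below builds synthetic spectra and fits a linear multivariate calibration. The spectra, wavelengths, and noise levels are invented, and truncated least squares stands in for the paper's linear SVR and GA-ANN models:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for the UV problem: 6 overlapping Gaussian
    # "component spectra" sampled at 25 wavelengths.
    wl = np.linspace(210, 330, 25)
    centers = np.array([225, 243, 261, 279, 297, 315])
    pure = np.exp(-((wl[None, :] - centers[:, None]) / 10.0) ** 2)  # (6, 25)

    # 25 training mixtures and 5 independent test mixtures of the 6 species.
    C_train = rng.uniform(0.2, 1.0, size=(25, 6))
    C_test = rng.uniform(0.2, 1.0, size=(5, 6))
    A_train = C_train @ pure + 0.002 * rng.normal(size=(25, 25))
    A_test = C_test @ pure + 0.002 * rng.normal(size=(5, 25))

    # Linear multivariate calibration: concentrations from spectra.
    # rcond truncates noise-dominated singular values (simple regularization).
    B, *_ = np.linalg.lstsq(A_train, C_train, rcond=0.02)
    C_pred = A_test @ B

    rmsep = float(np.sqrt(np.mean((C_pred - C_test) ** 2)))
    print(round(rmsep, 4))
    ```

    The point of the sketch is the workflow (design the training set, fit, validate on held-out mixtures), not the particular regressor.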

  5. Analogies between the measurement of acoustic impedance via the reaction on the source method and the automatic microwave vector network analyzer technique

    NASA Astrophysics Data System (ADS)

    McLean, James; Sutton, Robert; Post, John

    2003-10-01

    One useful method of acoustic impedance measurement involves the measurement of the electrical impedance ``looking into'' the electrical port of a reciprocal electroacoustic transducer. This reaction on the source method greatly facilitates the measurement of acoustic impedance by borrowing highly refined techniques to measure electrical impedance. It is also well suited for in situ acoustic impedance measurements. In order to accurately determine acoustic impedance from the measured electrical impedance, the characteristics of the transducer must be accurately known, i.e., the characteristics of the transducer must be ``removed'' completely from the data. The measurement of acoustic impedance via the measurement of the reaction on the source is analogous to modern microwave measurements made with an automatic vector network analyzer. The action of the analyzer is described as de-embedding the desired data (such as acoustic impedance) from the raw data. Such measurements are fundamentally substitution measurements in that the transducer's characteristics are determined by measuring a set of reference standards. The reaction on the source method is extended to take advantage of improvements in microwave measurement techniques which allow calibration via imperfect standard loads. This removes one of the principal weaknesses of the method in that the requirement of high-quality reference standards is relaxed.
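    The de-embedding idea in this abstract can be sketched numerically. Assuming a standard one-port bilinear error model with terms e00, e11, and e10e01 (conventional VNA notation, not taken from the paper), measuring three known standards gives a linear system for the error terms, after which any raw measurement can be corrected:

    ```python
    import numpy as np

    # Hypothetical one-port error network (the "transducer" in the analogy):
    e00, e11, e10e01 = 0.05 + 0.02j, 0.10 - 0.03j, 0.9 + 0.1j
    delta = e00 * e11 - e10e01

    def measure(gamma):
        """Raw reflection seen at the instrument for a true load gamma."""
        return (e00 - delta * gamma) / (1 - e11 * gamma)

    # Three reference standards (ideal short/open/load here; the extension
    # in the abstract relaxes this to imperfect but characterized standards).
    stds = np.array([-1.0, 1.0, 0.0])
    meas = np.array([measure(g) for g in stds])

    # Gm = e00 + (Gm*G)*e11 - G*delta  ->  linear in (e00, e11, delta)
    A = np.column_stack([np.ones(3), meas * stds, -stds])
    c00, c11, cdelta = np.linalg.solve(A, meas)

    # De-embed an unknown device from its raw measurement:
    g_true = 0.3 - 0.4j
    g_raw = measure(g_true)
    g_deembedded = (g_raw - c00) / (g_raw * c11 - cdelta)
    print(g_deembedded)
    ```

    This is the substitution-measurement principle the abstract describes: the transducer's (here, error network's) characteristics are determined from reference standards and then removed from the data.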

  6. Comparison of Free Space Measurement Using a Vector Network Analyzer and Low-Cost-Type THz-TDS Measurement Methods Between 75 and 325 GHz

    NASA Astrophysics Data System (ADS)

    Ozturk, Turgut; Morikawa, Osamu; Ünal, İlhami; Uluer, İhsan

    2017-10-01

    The specifications of two measurement systems, free-space measurement using a vector network analyzer and low-cost terahertz time-domain spectroscopy (THz-TDS) using a multimode laser diode, were compared in the millimeter/sub-THz frequency region. The comparison considered accuracy, cost, measurement time, and calculation time, among other factors. Four samples (Rexolite, RO3003, Ultralam 3850HT-design, and L1000HF) were selected, and the acquired data were used to compute the complex permittivity of the measured materials. The results extracted by free-space measurement agreed well with those obtained by low-cost THz-TDS, demonstrating that free-space measurement works successfully as a method of material characterization in the sub-THz region. Furthermore, free-space measurement proved suitable for measurements over a narrow frequency range, whereas low-cost THz-TDS offers not only low cost but also measurement capability over a wide frequency range.

  7. Neural Network for Visual Search Classification

    DTIC Science & Technology

    2007-11-02

    A neural network is used to perform visual search classification. The network consists of a learning vector quantization (LVQ) network and a single-layer perceptron. Its objective is to classify human visual search patterns into predetermined classes; the classes signify the different search strategies used by individuals to scan the same target pattern. The input search patterns are quantified with respect to an ideal search pattern determined by the user. A supervised learning rule…

  8. Vector quantization

    NASA Technical Reports Server (NTRS)

    Gray, Robert M.

    1989-01-01

    During the past ten years Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments made on the state-of-the-art and current research efforts.
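    A minimal sketch of the basic codebook design the survey covers, the generalized Lloyd (LBG/k-means) algorithm, on invented 2-D training data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy training set: 2-D vectors drawn around four cluster centers.
    centers = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
    data = np.concatenate([rng.normal(c, 0.08, size=(200, 2)) for c in centers])

    def nearest(x, codebook):
        """Index of the closest codeword for each input vector."""
        d = np.linalg.norm(x[:, None, :] - codebook[None, :, :], axis=2)
        return d.argmin(axis=1)

    # Generalized Lloyd iteration: nearest-neighbor partition, centroid update.
    K = 4
    codebook = data[rng.choice(len(data), K, replace=False)].copy()
    for _ in range(25):
        idx = nearest(data, codebook)
        for k in range(K):
            members = data[idx == k]
            if len(members):            # guard against an empty cell
                codebook[k] = members.mean(axis=0)

    # Quantize: each vector is replaced by a 2-bit codeword index.
    idx = nearest(data, codebook)
    mse = float(np.mean(np.sum((data - codebook[idx]) ** 2, axis=1)))
    print(round(mse, 4))
    ```

    Each training vector is thereby represented by log2(K) = 2 bits plus the shared codebook, which is the source of VQ's compression at low bit rates.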

  9. System for Automated Calibration of Vector Modulators

    NASA Technical Reports Server (NTRS)

    Lux, James; Boas, Amy; Li, Samuel

    2009-01-01

    Vector modulators are used to impose baseband modulation on RF signals, but non-ideal behavior limits their overall performance. This non-ideal behavior is compensated using data collected by an automated test system, driven by a LabVIEW program, that systematically applies thousands of control-signal values to the device under test and collects RF measurement data. The technology innovation automates several steps in the process. First, the automated test system, using computer-controlled digital-to-analog converters (DACs) and a computer-controlled vector network analyzer (VNA), systematically applies different I and Q signals (which represent the complex number by which the RF signal is multiplied) to the vector modulator under test (VMUT) while measuring the RF performance, specifically gain and phase. The test system uses LabVIEW software to control the test equipment, collect the data, and write it to a file. The input to the LabVIEW program is either user input for systematic variation or a file containing specific test values to be fed to the VMUT. The output file contains both the control signals and the measured data. The second step is to post-process the file to determine the correction functions as needed. The result of the entire process is a tabular representation that allows translation of a desired I/Q value into the analog control signals required to produce a particular RF behavior. In some applications, corrected performance is needed only for a limited range: if the vector modulator is being used as a phase shifter, only I and Q values representing points on a circle need correction, not the entire plane. This innovation has been used to calibrate 2-GHz MMIC (monolithic microwave integrated circuit) vector modulators in the High EIRP Cluster Array project (EIRP is high effective isotropic radiated power). These calibrations were then used to create…
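    A hedged sketch of the table-based correction: the `modulator` model below (gain imbalance, offsets, quadrature skew) is hypothetical, not the measured MMIC behavior, but the two steps (sweep the control values and record the complex response, then invert the table by nearest-neighbor lookup) mirror the process described above:

    ```python
    import numpy as np

    # Hypothetical non-ideal vector modulator: gain imbalance, DC offsets,
    # and a small quadrature skew (stand-ins for measured VNA behavior).
    def modulator(i_ctl, q_ctl):
        i_eff = 0.95 * i_ctl + 0.02
        q_eff = 1.05 * q_ctl - 0.03
        skew = np.exp(1j * 0.05)          # quadrature error in radians
        return i_eff + 1j * q_eff * skew

    # Step 1: sweep the control DACs and record the measured response.
    grid = np.linspace(-1, 1, 201)
    I, Q = np.meshgrid(grid, grid)
    measured = modulator(I, Q)

    # Step 2: invert the table; for a desired complex value, pick the
    # control pair whose recorded response is closest.
    def lookup(desired):
        k = np.abs(measured - desired).argmin()
        return I.flat[k], Q.flat[k]

    target = 0.5 + 0.5j
    i_ctl, q_ctl = lookup(target)
    achieved = modulator(i_ctl, q_ctl)
    print(abs(achieved - target))
    ```

    For the phase-shifter case mentioned in the abstract, the sweep and lookup would be restricted to points on a circle of constant magnitude rather than the full I/Q plane.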

  10. Performance Analysis of MIMO Relay Network via Propagation Measurement in L-Shaped Corridor Environment

    NASA Astrophysics Data System (ADS)

    Lertwiram, Namzilp; Tran, Gia Khanh; Mizutani, Keiichi; Sakaguchi, Kei; Araki, Kiyomichi

    Placing relays can address the shadowing problem between a transmitter (Tx) and a receiver (Rx), and the Multiple-Input Multiple-Output (MIMO) technique has been introduced to improve wireless link capacity. The MIMO technique can be applied in a relay network to enhance system performance. However, the efficiency of relaying schemes and of relay placement has not been well investigated in experiment-based studies. This paper presents a propagation measurement campaign for a MIMO two-hop relay network in the 5-GHz band in an L-shaped corridor environment with various relay locations. Furthermore, it proposes a Relay Placement Estimation (RPE) scheme to identify the optimum relay location, i.e., the point at which network performance is highest. Analysis of channel capacity shows that relaying is beneficial over direct transmission in strong shadowing environments but ineffective in non-shadowing environments. In addition, the optimum relay location estimated with the RPE scheme agrees with the location where the network achieves the highest capacity. Finally, the capacity analysis shows that two-way MIMO relaying employing network coding performs best, while the cooperative relaying scheme is not effective because shadowing weakens the signal strength of the direct link.
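    The capacity comparisons in such studies typically rest on the standard MIMO capacity formula C = log2 det(I + (SNR/Nt) H Hᴴ) with equal power per transmit antenna. A minimal sketch with an invented 2x2 Rayleigh channel (not the measured corridor data):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def mimo_capacity(H, snr_linear):
        """MIMO link capacity in bits/s/Hz, equal power across Tx antennas."""
        nr, nt = H.shape
        M = np.eye(nr) + (snr_linear / nt) * H @ H.conj().T
        return float(np.real(np.log2(np.linalg.det(M))))

    snr = 10 ** (20 / 10)  # 20 dB, an assumed operating point

    # i.i.d. Rayleigh 2x2 channel, a toy stand-in for a measured channel matrix.
    H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    c_mimo = mimo_capacity(H, snr)
    c_siso = float(np.log2(1 + snr * abs(H[0, 0]) ** 2))
    print(round(c_mimo, 2), round(c_siso, 2))
    ```

    In a measurement campaign like the one above, H would come from channel sounding at each candidate relay position, and the same formula would score direct, relayed, and network-coded links.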

  11. The simulation of cropping pattern to improve the performance of irrigation network in Cau irrigation area

    NASA Astrophysics Data System (ADS)

    Wahyuningsih, Retno; Rintis Hadiani, RR; Sobriyah

    2017-01-01

    The Cau irrigation area, located in Madiun district, East Java Province, irrigates 1,232 ha of land covering the Cau primary channel irrigation network and the Wungu and Grape secondary channel irrigation networks. The problems in the Cau irrigation area are limited availability of water, especially during the dry season (planting seasons II and III), and non-compliance with cropping patterns. The irrigation system performance of the Cau irrigation area needs to be evaluated to establish how well the system performs, particularly with respect to planting productivity. The improvement of irrigation network performance through cropping-pattern optimization is based on increasing the fulfillment of water necessity (the k factor), the realized planting area, and rice productivity. The research method analyzes secondary data based on the Regulation of the Ministry of Public Works and State Minister for Public Housing Number 12/PRT/M/2015. The analysis of water necessity fulfillment (k factor) uses the Public Work Plan Criteria Method. The performance level for the planting productivity aspect is 87.10% in the existing condition, 93.90% for alternative 1, and 96.90% for alternative 2. This means that the performance of the irrigation network, from the productivity aspect, increases by 6.80% for alternative 1 and 9.80% for alternative 2.

  12. INCITE: Edge-based Traffic Processing and Inference for High-Performance Networks

    SciTech Connect

    Baraniuk, Richard G.; Feng, Wu-chun; Cottrell, Les; Knightly, Edward; Nowak, Robert; Riedi, Rolf

    2005-06-20

    The INCITE (InterNet Control and Inference Tools at the Edge) Project developed on-line tools to characterize and map host and network performance as a function of space, time, application, protocol, and service. In addition to their utility for troubleshooting problems, these tools will enable a new breed of applications and operating systems that are network-aware and resource-aware. Building on the foundation provided by our recent leading-edge research on network measurement, multifractal signal analysis, multiscale random fields, and quality of service, our effort consisted of three closely integrated research thrusts that directly attack several key networking challenges of DOE's SciDAC program: Thrust 1, multiscale traffic analysis and modeling techniques; Thrust 2, inference and control algorithms for network paths, links, and routers; and Thrust 3, data collection tools.

  13. The Impact of Network Performance on Warfighter Effectiveness

    DTIC Science & Technology

    2006-01-01

    necessarily make greater gains in an operation than the choice of appropriate tactics.” Organization of This Report The remainder of this report is...Likelihood Blue Objective Achieved 34% 39% 38% A new model was fit using the logit function (see Appendix A) so that the performance metric is now the...again reached conclusions that were similar to what this report also found, i.e., “the force with improved situational awareness can only take

  14. Overhead-Performance Tradeoffs in Distributed Wireless Networks

    DTIC Science & Technology

    2015-06-26

    comparison, we have included a trendline for the rate distortion function plus a bit. Key Publications & Abstracts • B. D. Boyle, J. M. Walsh, and S. Weber...IID. • Jie Ren, Bradford Boyle, Gwanmo Ku, Steven Weber, John MacLaren Walsh, Overhead Performance Tradeoffs A Resource Allocation Perspective, IEEE...Bradford D. Boyle, Jie Ren, John MacLaren Walsh, and Steven Weber, Interactive Scalar Quantization for Distributed Extremization, IEEE Trans. Signal

  15. Balancing Mission Requirement for Networked Autonomous Rotorcrafts Performing Video Reconnaissance

    DTIC Science & Technology

    2009-08-01

    controls position. In contrast, we propose a multi-input multi-output (MIMO) control strategy. Using LQR design to solve for a constant gain matrix...cameras. Future efforts will seek to improve performance by modifying the control law, possibly adding time varying tasks shaped by trajectory planning...navigation, and control ; geolocation; environment mapping; target tracking and surveillance; etc. A historical problem in image-based estimation and control

  16. Assessing Performance Tradeoffs in Undersea Distributed Sensor Networks

    DTIC Science & Technology

    2006-09-01

    Papoulis, Probability, Random Variables, and Stochastic Processes, Third Edition, McGraw-Hill, Boston, MA, 1991. [7] N. Srinivas and K. Deb...level detection performance by the terms probability of successful search PSS and probability of false search PFS. Successful search is an important...expressions for probability of successful search and probability of false search for modeling the track-before-detect process. We then describe a numerical

  17. Long-running telemedicine networks delivering humanitarian services: experience, performance and scientific output

    PubMed Central

    Geissbuhler, Antoine; Jethwani, Kamal; Kovarik, Carrie; Person, Donald A; Vladzymyrskyy, Anton; Zanaboni, Paolo; Zolfo, Maria

    2012-01-01

    Abstract Objective To summarize the experience, performance and scientific output of long-running telemedicine networks delivering humanitarian services. Methods Nine long-running networks – those operating for five years or more – were identified and seven provided detailed information about their activities, including performance and scientific output. Information was extracted from peer-reviewed papers describing the networks’ study design, effectiveness, quality, economics, provision of access to care and sustainability. The strength of the evidence was scored as none, poor, average or good. Findings The seven networks had been operating for a median of 11 years (range: 5–15). All networks provided clinical tele-consultations for humanitarian purposes using store-and-forward methods and five were also involved in some form of education. The smallest network had 15 experts and the largest had more than 500. The clinical caseload was 50 to 500 cases a year. A total of 59 papers had been published by the networks, and 44 were listed in Medline. Based on study design, the strength of the evidence was generally poor by conventional standards (e.g. 29 papers described non-controlled clinical series). Over half of the papers provided evidence of sustainability and improved access to care. Uncertain funding was a common risk factor. Conclusion Improved collaboration between networks could help attenuate the lack of resources reported by some networks and improve sustainability. Although the evidence base is weak, the networks appear to offer sustainable and clinically useful services. These findings may interest decision-makers in developing countries considering starting, supporting or joining similar telemedicine networks. PMID:22589567

  18. Performance Analysis of Network Model to Identify Healthy and Cancerous Colon Genes.

    PubMed

    Roy, Tanusree; Barman, Soma

    2016-03-01

    Modeling of cancerous and healthy Homo sapiens colon genes using an electrical network is proposed to study their behavior. In this paper, the individual amino acid models are designed using the hydropathy index of each amino acid side chain. The phase and magnitude responses of genes are examined to screen cancerous genes from healthy ones. The performance of the proposed modeling technique is judged using various measurement metrics such as accuracy, sensitivity, and specificity. The network model's performance increases with frequency, which is analyzed using the receiver operating characteristic curve. The accuracy of the model was tested on colon genes and reached a maximum of 97% at a frequency of 10 MHz.
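    The performance metrics named above all follow directly from the confusion matrix; a small self-contained sketch with hypothetical screening labels (1 = cancerous):

    ```python
    import numpy as np

    def confusion_metrics(y_true, y_pred):
        """Accuracy, sensitivity, specificity from binary labels."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
        tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
        fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
        fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
        return {
            "accuracy": (tp + tn) / len(y_true),
            "sensitivity": tp / (tp + fn),   # true-positive rate
            "specificity": tn / (tn + fp),   # true-negative rate
        }

    # Invented screening results for 10 genes (not data from the paper):
    m = confusion_metrics([1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
                          [1, 1, 1, 1, 0, 0, 0, 0, 0, 1])
    print(m)
    ```

    Sweeping a decision threshold and plotting sensitivity against (1 - specificity) yields the receiver operating characteristic curve used in the abstract.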

  19. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    NASA Astrophysics Data System (ADS)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high-performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists and policy planners with useful bibliometric results for an assessment of research activities. To achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. We used journal articles from Elsevier's Scopus database covering the period 2004-2013 and ranked authors in the field by the number of papers published during those ten years. Finally, we drew the co-authorship network for the 45 top authors and their co-authors, and described some features of the network in relation to author rank. Suggestions for further studies are discussed.
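    A toy sketch of the co-authorship construction: invented author lists stand in for the Scopus records, edge weights count joint papers, and authors are ranked by paper count as in the abstract:

    ```python
    from itertools import combinations
    from collections import Counter

    # Hypothetical article records (author lists), standing in for Scopus data.
    papers = [
        ["Kim", "Lee", "Park"],
        ["Kim", "Lee"],
        ["Park", "Choi"],
        ["Kim", "Choi", "Lee"],
    ]

    # Weighted co-authorship graph: edge weight = number of joint papers.
    edges = Counter()
    for authors in papers:
        for a, b in combinations(sorted(set(authors)), 2):
            edges[(a, b)] += 1

    # Author rank by paper count (each author counted once per paper).
    paper_count = Counter(a for p in papers for a in sorted(set(p)))
    rank = [a for a, _ in paper_count.most_common()]
    print(rank, edges[("Kim", "Lee")])
    ```

    On real data the edge list would feed a graph library for centrality and component analysis; the counting itself is no more than the above.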

  20. Performance analysis and comparison of PTOP and LANE for IP transmission over ATM networks

    NASA Astrophysics Data System (ADS)

    Zubairi, Junaid A.; Al-Irhayim, Sufyan; Al-Khateeb, Wajdi; Wajdi, Yahya

    1998-12-01

    Due to its traffic control and performance assurance characteristics, ATM is employed as the core network on most campuses. However, the bulk of the workstations remain on Ethernet, generating IP traffic that passes through ATM using special schemes such as PTOP or LANE. In such a network, performance suffers from the extra overheads of repeated conversions between cells and packets and of managing virtual circuits. The aim of this paper is to compare the performance of PTOP and LANE in passing IP traffic under various conditions. This study helps in understanding the performance issues in these environments in order to define end-to-end quality of service for Ethernet-ATM networks.