Science.gov

Sample records for performance vector network

  1. Vector Network Analysis

    Energy Science and Technology Software Center (ESTSC)

    1997-10-20

    Vector network analyzers are a convenient way to measure scattering parameters of a variety of microwave devices. However, these instruments, unlike oscilloscopes for example, require a relatively high degree of user knowledge and expertise. Due to the complexity of the instrument and of the calibration process, there are many ways in which an incorrect measurement may be produced. The Microwave Project, which is part of Sandia National Laboratories Primary Standards Laboratory, routinely uses check standards to verify that the network analyzer is operating properly. In the past, these measurements were recorded manually and, sometimes, interpretation of the results was problematic. To aid our measurement assurance process, a software program was developed to automatically measure a check standard and compare the new measurements with a historical database of measurements of the same device. The program acquires new measurement data from selected check standards, plots the new data against the mean and standard deviation of prior data for the same check standard, and updates the database files for the check standard. The program is entirely menu-driven, requiring little additional work by the user.
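    The comparison against historical data that this program performs can be sketched as a simple control-chart test. The function below, the two-standard-deviation limit, and the sample readings are illustrative assumptions, not details taken from the Sandia program.

```python
import statistics

def check_standard_ok(history, new_value, k=2.0):
    """Pass if a new check-standard reading falls within k standard
    deviations of the mean of the historical measurements."""
    mean = statistics.mean(history)
    std = statistics.stdev(history)
    return abs(new_value - mean) <= k * std

# Hypothetical prior readings of a check standard
history = [0.501, 0.499, 0.502, 0.498, 0.500]
in_limits = check_standard_ok(history, 0.5005)
out_of_limits = check_standard_ok(history, 0.520)
```

    A production version would also append the new reading to the database, as the abstract describes.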

  2. Vector Encoding in Biochemical Networks

    NASA Astrophysics Data System (ADS)

    Potter, Garrett; Sun, Bo

    Encoding of environmental cues via biochemical signaling pathways is of vital importance in the transmission of information for cells in a network. The current literature assumes that a single cell state is used to encode information; however, recent research suggests that the optimal strategy utilizes a vector of cell states sampled at various time points. To elucidate the optimal sampling strategy for vector encoding, we take an information-theoretic approach and determine the mutual information of the calcium signaling dynamics obtained from fibroblast cells perturbed with different concentrations of ATP. Specifically, we analyze the sampling strategies under the cases of fixed and non-fixed vector dimension as well as the efficiency of these strategies. Our results show that sampling with greater frequency is optimal in the case of non-fixed vector dimension but that, in general, a lower sampling frequency is best from both a fixed vector dimension and an efficiency standpoint. Further, we find that a simple modified Ornstein-Uhlenbeck process qualitatively captures many of our experimental results, suggesting that sampling in biochemical networks is based on a few basic components.
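    An Ornstein-Uhlenbeck process of the kind invoked above can be simulated with the standard Euler-Maruyama scheme. This is a minimal sketch of the plain, unmodified OU process; all parameter values are placeholders, not the authors' fitted model.

```python
import random

def simulate_ou(theta=1.0, mu=0.0, sigma=0.3, x0=1.0, dt=0.01, steps=1000, seed=42):
    """Euler-Maruyama simulation of dx = theta*(mu - x)*dt + sigma*dW,
    a mean-reverting process relaxing from x0 toward mu."""
    random.seed(seed)
    x = x0
    path = [x]
    for _ in range(steps):
        x += theta * (mu - x) * dt + sigma * (dt ** 0.5) * random.gauss(0, 1)
        path.append(x)
    return path

path = simulate_ou()
# A "vector encoding" samples the trajectory at several time points:
samples = [path[i] for i in (100, 500, 1000)]
```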

  3. Applying knowledge engineering and representation methods to improve support vector machine and multivariate probabilistic neural network CAD performance

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Anderson, Frances; Smith, Tom; Fahlbusch, Stephen; Choma, Robert; Wong, Lut

    2005-04-01

    Achieving consistent and correct database cases is crucial to the correct evaluation of any computer-assisted diagnostic (CAD) paradigm. This paper describes the application of artificial intelligence (AI), knowledge engineering (KE) and knowledge representation (KR) to a data set of ~2500 cases from six separate hospitals, with the objective of removing or reducing inconsistent outlier data. Several support vector machine (SVM) kernels were used to measure the diagnostic performance of the original and a "cleaned" data set. Specifically, KE and KR principles were applied to the two data sets, which were re-examined with respect to the environment and agents. One data set was found to contain 25 non-characterizable sets; the other contained 180 non-characterizable sets. CAD system performance was measured with both the original and "cleaned" data sets using two SVM kernels as well as a multivariate probabilistic neural network (PNN). Results demonstrated: (i) a 10% average improvement in overall Az and (ii) approximately a 50% average improvement in partial Az.

  4. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential Vector Quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in better performance in the presence of channel bit errors than methods that use variable-length codes.
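    Frequency-Sensitive Competitive Learning biases winner selection by how often each codeword has already won, so that all codewords are eventually used. The sketch below is a minimal version of that update rule; the learning rate and the exact count weighting are illustrative assumptions.

```python
def fscl_update(codebook, counts, x, lr=0.1):
    """One Frequency-Sensitive Competitive Learning step: the winner is
    the codeword with the smallest count-weighted squared distance to x
    (rarely used codewords stay competitive); only the winner moves."""
    def d2(c):
        return sum((ci - xi) ** 2 for ci, xi in zip(c, x))
    i = min(range(len(codebook)), key=lambda j: counts[j] * d2(codebook[j]))
    counts[i] += 1
    codebook[i] = [ci + lr * (xi - ci) for ci, xi in zip(codebook[i], x)]
    return i

codebook = [[0.0, 0.0], [1.0, 1.0]]
counts = [1, 1]
winner = fscl_update(codebook, counts, [0.9, 1.0])
```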

  5. Ranked Retrieval with Semantic Networks and Vector Spaces.

    ERIC Educational Resources Information Center

    Kulyukin, Vladimir A.; Settle, Amber

    2001-01-01

    Discussion of semantic networks and ranked retrieval focuses on two models, the semantic network model with spreading activation and the vector space model with dot product. Suggests a formal method to analyze the two models in terms of their relative performance in the same universe of objects. (Author/LRW)
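    The vector-space half of this comparison ranks objects by the dot product of the query vector with each object's vector. A minimal sketch, with hypothetical term-weight vectors:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def rank_documents(query, docs):
    """Rank documents by dot-product similarity with the query vector,
    highest score first."""
    scored = [(name, dot(query, vec)) for name, vec in docs.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)

docs = {"d1": [1, 0, 2], "d2": [0, 3, 1], "d3": [2, 2, 0]}
query = [1, 1, 0]
ranking = rank_documents(query, docs)
```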

  6. Distributed Estimation for Vector Signal in Linear Coherent Sensor Networks

    NASA Astrophysics Data System (ADS)

    Wu, Chien-Hsien; Lin, Ching-An

    We introduce the distributed estimation of a random vector signal in wireless sensor networks that follow a coherent multiple-access channel model. We adopt the linear minimum mean squared error fusion rule. The problem of interest is to design linear coding matrices for the sensors in the network so as to minimize the mean squared error of the estimated vector signal under a total power constraint. We show that the problem can be formulated as a convex optimization problem, and we obtain closed-form expressions for the coding matrices. Numerical results are used to illustrate the performance of the proposed method.
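    The linear MMSE rule adopted here has a simple closed form. As a minimal illustration only (scalar, single observation, not the paper's vector and coding-matrix setting), the estimator of a zero-mean signal x from y = x + n with uncorrelated zero-mean noise n is:

```python
def lmmse_estimate(y, signal_var, noise_var):
    """LMMSE estimate of zero-mean x from y = x + n:
    x_hat = g * y, with gain g = var(x) / (var(x) + var(n))."""
    g = signal_var / (signal_var + noise_var)
    return g * y

def lmmse_mse(signal_var, noise_var):
    """Mean squared error achieved by that gain:
    var(x) * var(n) / (var(x) + var(n))."""
    return signal_var * noise_var / (signal_var + noise_var)

# Equal signal and noise variance: the estimate shrinks y halfway toward 0
estimate = lmmse_estimate(2.0, 1.0, 1.0)
```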

  7. Performance evaluation of vector-machine architectures

    SciTech Connect

    Tang, Ju-ho.

    1989-01-01

    Vector machines are well known for their high peak performance, but the delivered performance varies greatly over different workloads and depends strongly on compiler optimizations. Recently it has been claimed that several horizontal superscalar architectures, e.g., VLIW and polycyclic architectures, provide a more balanced performance across a wider range of scientific workloads than do vector machines. The purpose of this research is to study the performance of register-register vector processors, such as Cray supercomputers, as a function of their architectural features, scheduling schemes, compiler optimization capabilities, and program parameters. The results of this study also provide a base for comparing vector machines with horizontal superscalar machines. An evaluation methodology, based on timing parameters, bottlenecks, and run-time bounds, is developed. Cray-1 performance is degraded by the multiple memory loads of index-misaligned vectors and the inability of the Cray Fortran Compiler (CFT) to produce code that hits all the chain slot times. The impact of chaining and two instruction scheduling schemes on one-memory-port vector supercomputers, illustrated by the Cray-1 and Cray-2, is studied. The lack of instruction chaining on the Cray-2 requires a different instruction scheduling scheme from that of the Cray-1. Situations are characterized in which simple vector scheduling can generate code that fully utilizes one functional unit for machines with chaining. Even without chaining, polycyclic scheduling guarantees full utilization of one functional unit, after an initial transient, for loops with acyclic dependence graphs.

  8. A calibration free vector network analyzer

    NASA Astrophysics Data System (ADS)

    Kothari, Arpit

    Recently, two novel single-port, phase-shifter based vector network analyzer (VNA) systems were developed and tested at X-band (8.2--12.4 GHz) and Ka-band (26.4--40 GHz), respectively. These systems operate by electronically moving the standing wave pattern, set up in a waveguide, over a Schottky detector and sampling the standing wave voltage for several phase shift values. Once this system is fully characterized, all parameters in the system become known and hence, theoretically, no other correction (or calibration) should be required to obtain the reflection coefficient, (Gamma), of an unknown load. This makes this type of VNA "calibration free," which is a significant advantage over other types of VNAs. To this end, a VNA system based on this design methodology was developed at X-band using several design improvements (compared to the previous designs) with the aim of demonstrating this "calibration-free" feature. It was found that when a commercial VNA (HP8510C) is used as the source and the detector, the system works as expected. However, when a simple detector is used (Schottky diode, log detector, etc.), obtaining the correct Gamma still requires the customary three-load calibration. With the aim of exploring the cause, a detailed sensitivity analysis of prominent error sources was performed. Extensive measurements were made with different detection techniques, including the use of a spectrum analyzer as a power detector. The system was even tested for electromagnetic compatibility (EMC), which may have contributed to this issue. Although the desired results could not be obtained using the proposed standing-wave-power measuring devices, such as the Schottky diode, the principle of the "calibration-free" VNA was shown to hold.

  9. Performance of vector sensors in noise

    NASA Astrophysics Data System (ADS)

    Cox, Henry; Baggeroer, Arthur

    2003-10-01

    Vector sensors are supergain devices that can provide ``array gain'' against ocean noise with a point sensor. As supergain devices, they have increased sensitivity to nonacoustic noise components. This paper reviews and summarizes the processing gain that is achievable in various noise fields. Comparisons are made with an omnidirectional sensor and with the correlation of a pair of closely spaced omnidirectional sensors. Total processing gain, consisting of both spatial and temporal gain, is considered so that a proper analysis and interpretation of multiplicative processing can be made. The performance of ``intensity sensors'' (pressure times velocity), obtained by multiplying the omnidirectional component with a co-located dipole, is also considered. A misinterpretation that is common in the literature concerning the performance of intensity sensors is discussed. The adaptive cardioid processing of vector sensors is also reviewed.

  10. Distributed Signal Decorrelation and Detection in Multi View Camera Networks Using the Vector Sparse Matrix Transform.

    PubMed

    Bachega, Leonardo R; Hariharan, Srikanth; Bouman, Charles A; Shroff, Ness B

    2015-12-01

    This paper introduces the vector sparse matrix transform (vector SMT), a new decorrelating transform suitable for performing distributed processing of high-dimensional signals in sensor networks. We assume that each sensor in the network encodes its measurements into vector outputs instead of scalar ones. The proposed transform decorrelates a sequence of pairs of vector outputs, until these vectors are decorrelated. In our experiments, we simulate distributed anomaly detection by a network of cameras, monitoring a spatial region. Each camera records an image of the monitored environment from its particular viewpoint and outputs a vector encoding the image. Our results, with both artificial and real data, show that the proposed vector SMT transform effectively decorrelates image measurements from the multiple cameras in the network while maintaining low overall communication energy consumption. Since it enables joint processing of the multiple vector outputs, our method provides significant improvements to anomaly detection accuracy when compared with the baseline case when the images are processed independently. PMID:26415179
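    The pairwise-decorrelation step underlying an SMT-style transform can be illustrated with a single Givens rotation chosen to zero the sample cross-moment of two signals. This is a minimal sketch of the idea only, not the paper's vector-SMT design; the sample values are hypothetical.

```python
import math

def decorrelate_pair(samples):
    """One butterfly step of an SMT-style transform: rotate two zero-mean
    signals by the Givens angle that zeroes their sample cross-moment."""
    n = len(samples)
    c00 = sum(x * x for x, _ in samples) / n
    c11 = sum(y * y for _, y in samples) / n
    c01 = sum(x * y for x, y in samples) / n
    theta = 0.5 * math.atan2(2.0 * c01, c00 - c11)
    cs, sn = math.cos(theta), math.sin(theta)
    return [(cs * x + sn * y, -sn * x + cs * y) for x, y in samples]

# Strongly correlated pair of signals
samples = [(1.0, 0.9), (-1.0, -1.1), (2.0, 1.8), (-2.0, -1.7)]
rotated = decorrelate_pair(samples)
```

    The full transform repeats such rotations over a sequence of signal pairs until the set is decorrelated.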

  11. NASF transposition network: A computing network for unscrambling p-ordered vectors

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1979-01-01

    The viewpoints of design, programming, and application of the transposition network (TN) are presented. The TN is a programmable combinational logic network that connects 521 memory modules to 512 processors. The unscrambling of p-ordered vectors to 1-ordered vectors in one cycle is described. The TN design is based upon the concept of cyclic groups from abstract algebra and of primitive roots and indices from number theory. The programming of the TN is very simple, requiring only 20 bits: 10 bits for offset control and 10 bits for barrel switch shift control. This simple control is executed by the control unit (CU), not the processors. Any memory access by a processor must be coordinated with the CU and wait for all other processors to come to a synchronization point. These wait and synchronization events can degrade the performance of a computation. The TN's applications are multidimensional data manipulation, matrix processing, and data sorting; it can also perform a perfect shuffle. Unlike other more complicated and powerful permutation networks, the TN cannot unscramble non-p-ordered vectors in one cycle, if that is possible at all.
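    The p-ordered storage scheme can be illustrated as an address mapping: with a prime module count M and stride p not a multiple of M, the addresses of all elements are distinct, so the whole vector moves in one conflict-free cycle. The sketch below uses a toy M = 13 in place of the 521 modules and is an illustration of the idea, not the exact TN hardware indexing.

```python
def gather_p_ordered(memory, p, n, offset=0):
    """Fetch an n-element p-ordered vector: element i of the vector
    lives at module (offset + p*i) mod M. With M prime and p not a
    multiple of M, all n addresses are distinct (no module conflicts)."""
    M = len(memory)
    return [memory[(offset + p * i) % M] for i in range(n)]

# Toy example: scatter a vector with stride p, then gather it back
M, p = 13, 5
memory = [None] * M
for i in range(M):
    memory[(p * i) % M] = i
vector = gather_p_ordered(memory, p, M)
```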

  12. A Distributed Support Vector Machine Learning Over Wireless Sensor Networks.

    PubMed

    Kim, Woojin; Stanković, Milos S; Johansson, Karl H; Kim, H Jin

    2015-11-01

    This paper is about fully-distributed support vector machine (SVM) learning over wireless sensor networks. With the concept of the geometric SVM, we propose to gossip the set of extreme points of the convex hull of local data set with neighboring nodes. It has the advantages of a simple communication mechanism and finite-time convergence to a common global solution. Furthermore, we analyze the scalability with respect to the amount of exchanged information and convergence time, with a specific emphasis on the small-world phenomenon. First, with the proposed naive convex hull algorithm, the message length remains bounded as the number of nodes increases. Second, by utilizing a small-world network, we have an opportunity to drastically improve the convergence performance with only a small increase in power consumption. These properties offer a great advantage when dealing with a large-scale network. Simulation and experimental results support the feasibility and effectiveness of the proposed gossip-based process and the analysis. PMID:26470063
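    The "extreme points of the convex hull" that nodes gossip can be computed and merged with a standard hull routine. The sketch below works in two dimensions for clarity (the geometric SVM operates in the feature-space dimension of the data); the merge step is what a single gossip exchange would keep and forward.

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def gossip_step(local_hull, neighbor_hull):
    """One gossip exchange: merge the two hulls and keep only the
    extreme points of the union."""
    return convex_hull(local_hull + neighbor_hull)
```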

  13. Performance of the butterfly processor-memory interconnection in a vector environment

    NASA Astrophysics Data System (ADS)

    Brooks, E. D., III

    1985-02-01

    A fundamental hurdle impeding the development of large-N common memory multiprocessors is the performance limitation in the switch connecting the processors to the memory modules. Multistage networks currently considered for this connection have a memory latency which grows like α log2 N. For scientific computing, it is natural to look for a multiprocessor architecture that will enable the use of vector operations to mask memory latency. The problem to be overcome here is the chaotic behavior introduced by conflicts occurring in the switch. The performance of the butterfly or indirect binary n-cube network in a vector processing environment is examined. A simple modification of the standard 2 × 2 switch node used in such networks which adaptively removes chaotic behavior during a vector operation is described.
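    The log2 N latency growth comes from the number of switch stages a request traverses. A sketch of the standard destination-tag routing rule for a butterfly follows; this shows only the stage count and path selection, not the conflict behavior the paper analyzes.

```python
def butterfly_route(dst, n_stages):
    """Destination-tag routing in an indirect binary n-cube (butterfly):
    at stage k the packet leaves on the output port equal to bit k of
    the destination address, so every path crosses log2(N) switches."""
    return [(dst >> k) & 1 for k in range(n_stages)]

# Route to memory module 5 (binary 101) in an 8-module, 3-stage network
ports = butterfly_route(5, 3)
```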

  14. High Performance Network Monitoring

    SciTech Connect

    Martinez, Jesse E

    2012-08-10

    Network monitoring requires substantial data and error analysis to overcome issues with clusters. Zenoss and Splunk help to monitor system log messages that report issues about the clusters to monitoring services. The Infiniband infrastructure on a number of clusters was upgraded to ibmon2, which requires different filters to report errors to system administrators. The focus for this summer was to: (1) implement ibmon2 filters on monitoring boxes to report system errors to system administrators using Zenoss and Splunk; (2) modify and improve scripts for monitoring and administrative usage; (3) learn more about networks, including services and maintenance for high-performance computing systems; and (4) gain real-life experience working with professionals in real-world situations. Filters were created to account for clusters running ibmon2 v1.0.0-1; ten filters are currently implemented for ibmon2 using Python. The filters look for port-counter thresholds; above certain counts, they report errors to on-call system administrators and modify the grid to show the local host with the issue.

  15. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  16. Improving neural network performance on SIMD architectures

    NASA Astrophysics Data System (ADS)

    Limonova, Elena; Ilin, Dmitry; Nikolaev, Dmitry

    2015-12-01

    Neural network calculations for image recognition problems can be very time consuming. In this paper we propose three methods of increasing neural network performance on SIMD architectures. The use of SIMD extensions is a way to speed up neural network processing that is available on a number of modern CPUs. In our experiments, we use ARM NEON as the example SIMD architecture. The first method uses a half-float data type for matrix computations. The second method uses a fixed-point data type for the same purpose. The third method considers a vectorized implementation of the activation functions. For each method we set up a series of experiments for convolutional and fully connected networks designed for image recognition tasks.
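    The fixed-point method can be illustrated with a quantized dot product, the inner loop of the matrix computations that integer SIMD instructions (such as ARM NEON's) accelerate. The scale factor and values below are illustrative assumptions.

```python
def quantize(vec, scale=128):
    """Round floats to integers at a fixed scale (8-bit style range)."""
    return [max(-128, min(127, round(v * scale))) for v in vec]

def fixed_point_dot(a_q, b_q, scale=128):
    """Integer multiply-accumulate with a single rescale at the end --
    the pattern that maps onto SIMD integer instructions."""
    return sum(x * y for x, y in zip(a_q, b_q)) / (scale * scale)

a, b = [0.5, -0.25, 0.75], [0.125, 0.5, -0.5]
approx = fixed_point_dot(quantize(a), quantize(b))
exact = sum(x * y for x, y in zip(a, b))
```

    For these exactly representable inputs the quantized result matches the floating-point one; in general the quantization introduces a small, bounded error.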

  17. Internal performance characteristics of thrust-vectored axisymmetric ejector nozzles

    NASA Technical Reports Server (NTRS)

    Lamb, Milton

    1995-01-01

    A series of thrust-vectored axisymmetric ejector nozzles were designed and experimentally tested for internal performance and pumping characteristics at the Langley Research Center. This study indicated that discontinuities in the performance occurred at low primary nozzle pressure ratios and that these discontinuities were mitigated by decreasing the expansion area ratio. The addition of secondary flow increased the performance of the nozzles. The mid-to-high range of secondary flow provided the most overall improvements, and the greatest improvements were seen for the largest ejector area ratio. Thrust vectoring the ejector nozzles caused a reduction in performance and discharge coefficient. With or without secondary flow, the vectored ejector nozzles produced thrust vector angles that were equivalent to or greater than the geometric turning angle. With or without secondary flow, spacing ratio (ejector passage symmetry) had little effect on performance (gross thrust ratio), discharge coefficient, or thrust vector angle. For the unvectored ejectors, a small amount of secondary flow was sufficient to reduce the pressure levels on the shroud to provide cooling, but for the vectored ejector nozzles, a larger amount of secondary air was required to reduce the pressure levels to provide cooling.

  18. Modeling and performance analysis of GPS vector tracking algorithms

    NASA Astrophysics Data System (ADS)

    Lashley, Matthew

    This dissertation provides a detailed analysis of GPS vector tracking algorithms and the advantages they have over traditional receiver architectures. Standard GPS receivers use a decentralized architecture that separates the tasks of signal tracking and position/velocity estimation. Vector tracking algorithms combine the two tasks into a single algorithm. The signals from the various satellites are processed collectively through a Kalman filter. The advantages of vector tracking over traditional, scalar tracking methods are thoroughly investigated. A method for making a valid comparison between vector and scalar tracking loops is developed. This technique avoids the ambiguities encountered when attempting to make a valid comparison between tracking loops (which are characterized by noise bandwidths and loop order) and the Kalman filters (which are characterized by process and measurement noise covariance matrices) that are used by vector tracking algorithms. The improvement in performance offered by vector tracking is calculated in multiple different scenarios. Rule of thumb analysis techniques for scalar Frequency Lock Loops (FLL) are extended to the vector tracking case. The analysis tools provide a simple method for analyzing the performance of vector tracking loops. The analysis tools are verified using Monte Carlo simulations. Monte Carlo simulations are also used to study the effects of carrier to noise power density (C/N0) ratio estimation and the advantage offered by vector tracking over scalar tracking. The improvement from vector tracking ranges from 2.4 to 6.2 dB in various scenarios. The difference in the performance of the three vector tracking architectures is analyzed. The effects of using a federated architecture with and without information sharing between the receiver's channels are studied. A combination of covariance analysis and Monte Carlo simulation is used to analyze the performance of the three algorithms. The federated algorithm without
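    The Kalman filter at the core of vector tracking blends a prediction with each new measurement, weighted by their uncertainties. The dissertation's filter is multivariate; the sketch below reduces it to a scalar random-walk state simply to show the predict/correct structure, and all parameter values are illustrative.

```python
def kalman_1d(measurements, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state: predict (variance
    grows by process noise q), then correct with each measurement of
    variance r, weighting by the Kalman gain."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                  # predict: uncertainty grows
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # correct with the innovation z - x
        p = (1 - k) * p            # posterior variance shrinks
        estimates.append(x)
    return estimates

est = kalman_1d([1.0] * 50)        # constant measurement stream
```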

  19. A feedforward artificial neural network based on quantum effect vector-matrix multipliers.

    PubMed

    Levy, H J; McGill, T C

    1993-01-01

    The vector-matrix multiplier is the engine of many artificial neural network implementations because it can simulate the way in which neurons collect weighted input signals from a dendritic arbor. A new technology for building analog weighting elements that is theoretically capable of densities and speeds far beyond anything that conventional VLSI in silicon could ever offer is presented. To illustrate the feasibility of such a technology, a small three-layer feedforward prototype network with five binary neurons and six tri-state synapses was built and used to perform all of the fundamental logic functions: XOR, AND, OR, and NOT. PMID:18267745

  20. Biologically relevant neural network architectures for support vector machines.

    PubMed

    Jändel, Magnus

    2014-01-01

    Neural network architectures that implement support vector machines (SVM) are investigated for the purpose of modeling perceptual one-shot learning in biological organisms. A family of SVM algorithms including variants of maximum margin, 1-norm, 2-norm and ν-SVM is considered. SVM training rules adapted for neural computation are derived. It is found that competitive queuing memory (CQM) is ideal for storing and retrieving support vectors. Several different CQM-based neural architectures are examined for each SVM algorithm. Although most of the sixty-four scanned architectures are unconvincing for biological modeling, four feasible candidates are found. The seemingly complex learning rule of a full ν-SVM implementation finds a particularly simple and natural implementation in bisymmetric architectures. Since CQM-like neural structures are thought to encode skilled action sequences and bisymmetry is ubiquitous in motor systems, it is speculated that trainable pattern recognition in low-level perception has evolved as an internalized motor programme. PMID:24126252

  1. Optical vector network analyzer based on amplitude-phase modulation

    NASA Astrophysics Data System (ADS)

    Morozov, Oleg G.; Morozov, Gennady A.; Nureev, Ilnur I.; Kasimova, Dilyara I.; Zastela, Mikhail Y.; Gavrilov, Pavel V.; Makarov, Igor A.; Purtov, Vadim A.

    2016-03-01

    The article describes the principles of optical vector network analyzer (OVNA) design for fiber Bragg grating (FBG) characterization based on amplitude-phase modulation of the optical carrier, which allow us to improve the measurement accuracy of the amplitude and phase parameters of the elements under test. Unlike existing OVNAs based on single-sideband and unbalanced double-sideband amplitude modulation, the ratio of the two side components of the probing radiation is used for analysis of the amplitude and phase parameters of the tested elements, and the radiation of the optical carrier is suppressed, or the latter is used as a local oscillator. The suggested OVNA is designed for research on narrow band-stop elements (π-phase-shift FBGs) and wide band-pass elements (linearly chirped FBGs).

  2. Performance evaluation of the SX-6 vector architecture for scientific computations

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; Van der Wijngaart, Rob

    2005-01-01

    The growing gap between sustained and peak performance for scientific applications is a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to reduce this gap for many computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX-6 vector processor, and compares it against the cache-based IBM Power3 and Power4 superscalar architectures, across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines many low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks. Finally, we evaluate the performance of several scientific computing codes. Overall results demonstrate that the SX-6 achieves high performance on a large fraction of our application suite and often significantly outperforms the cache-based architectures. However, certain classes of applications are not easily amenable to vectorization and would require extensive algorithm and implementation reengineering to utilize the SX-6 effectively.

  3. Folksonomical P2P File Sharing Networks Using Vectorized KANSEI Information as Search Tags

    NASA Astrophysics Data System (ADS)

    Ohnishi, Kei; Yoshida, Kaori; Oie, Yuji

    We present the concept of folksonomical peer-to-peer (P2P) file sharing networks that allow participants (peers) to freely assign structured search tags to files. These networks are similar to folksonomies in the present Web from the point of view that users assign search tags to information distributed over a network. As a concrete example, we consider an unstructured P2P network using vectorized Kansei (human sensitivity) information as structured search tags for file search. Vectorized Kansei information as search tags indicates what participants feel about their files and is assigned by each participant to each of their files. A search query also has the same form of search tags and indicates what participants want to feel about files that they will eventually obtain. A method that enables file search using vectorized Kansei information is the Kansei query-forwarding method, which probabilistically propagates a search query to peers that are likely to hold more files having search tags that are similar to the query. The similarity between the search query and the search tags is measured in terms of their dot product. The simulation experiments examine whether the Kansei query-forwarding method can provide equal search performance for all peers in a network in which only the Kansei information and the tendency with respect to file collection differ among the peers. The simulation results show that the Kansei query-forwarding method and a random-walk-based query-forwarding method, used for comparison, work effectively in different situations and are complementary. Furthermore, the Kansei query-forwarding method is shown, through simulations, to be superior or equal to the random-walk-based one in terms of search speed.
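    A probabilistic forwarding rule consistent with the description above can be sketched as follows: the query goes to each neighbor with probability proportional to the summed dot-product similarity of that neighbor's file tags with the query. The clipping of negative similarities and the uniform fallback are illustrative assumptions, not details from the paper.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def forwarding_probabilities(query, neighbor_tags):
    """Forward probability for each neighbor, proportional to the summed
    dot-product similarity between the Kansei query vector and the tag
    vectors of the files that neighbor holds."""
    scores = {n: sum(max(dot(query, t), 0.0) for t in tags)
              for n, tags in neighbor_tags.items()}
    total = sum(scores.values())
    if total == 0:                       # no similar neighbor: random walk
        return {n: 1.0 / len(scores) for n in scores}
    return {n: s / total for n, s in scores.items()}

query = [1.0, 0.0]                       # desired Kansei impression
neighbors = {"A": [[1.0, 0.0], [1.0, 0.0]], "B": [[0.0, 1.0]]}
probs = forwarding_probabilities(query, neighbors)
```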

  4. Vector Symbolic Spiking Neural Network Model of Hippocampal Subarea CA1 Novelty Detection Functionality.

    PubMed

    Agerskov, Claus

    2016-04-01

    A neural network model is presented of novelty detection in the CA1 subdomain of the hippocampal formation from the perspective of information flow. This computational model is restricted on several levels by both anatomical information about hippocampal circuitry and behavioral data from studies done in rats. Several studies report that the CA1 area broadcasts a generalized novelty signal in response to changes in the environment. Using the neural engineering framework developed by Eliasmith et al., a spiking neural network architecture is created that is able to compare high-dimensional vectors, symbolizing semantic information, according to the semantic pointer hypothesis. This model then computes the similarity between the vectors, as both direct inputs and a recalled memory from a long-term memory network by performing the dot-product operation in a novelty neural network architecture. The developed CA1 model agrees with available neuroanatomical data, as well as the presented behavioral data, and so it is a biologically realistic model of novelty detection in the hippocampus, which can provide a feasible explanation for experimentally observed dynamics. PMID:26890351
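    The dot-product comparison at the heart of this model can be sketched directly. The paper compares semantic-pointer vectors; for unit-length vectors the normalized dot product below coincides with the raw one, and the 0.8 threshold is an illustrative assumption rather than a parameter from the model.

```python
import math

def cosine(u, v):
    """Normalized dot product of two vectors."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(x * y for x, y in zip(u, v)) / (nu * nv)

def is_novel(current, recalled, threshold=0.8):
    """Novelty signal: low similarity between the current input vector
    and the recalled memory vector marks the input as novel."""
    return cosine(current, recalled) < threshold
```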

  5. Biasing vector network analyzers using variable frequency and amplitude signals

    NASA Astrophysics Data System (ADS)

    Nobles, J. E.; Zagorodnii, V.; Hutchison, A.; Celinski, Z.

    2016-08-01

    We report the development of a test setup designed to provide a variable frequency biasing signal to a vector network analyzer (VNA). The test setup is currently used for the testing of liquid crystal (LC) based devices in the microwave region. The use of an AC bias for LC based devices minimizes the negative effects associated with ionic impurities in the media encountered with DC biasing. The test setup utilizes bias tees on the VNA test station to inject the bias signal. The square wave biasing signal is variable from 0.5 to 36.0 V peak-to-peak (VPP) with a frequency range of DC to 10 kHz. The test setup protects the VNA from transient processes, voltage spikes, and high-frequency leakage. Additionally, the signals to the VNA are fused to ½ amp and clipped to a maximum of 36 VPP based on bias tee limitations. This setup allows us to measure S-parameters as a function of both the voltage and the frequency of the applied bias signal.

  6. Biasing vector network analyzers using variable frequency and amplitude signals.

    PubMed

    Nobles, J E; Zagorodnii, V; Hutchison, A; Celinski, Z

    2016-08-01

    We report the development of a test setup designed to provide a variable frequency biasing signal to a vector network analyzer (VNA). The test setup is currently used for the testing of liquid crystal (LC) based devices in the microwave region. The use of an AC bias for LC based devices minimizes the negative effects associated with ionic impurities in the media encountered with DC biasing. The test setup utilizes bias tees on the VNA test station to inject the bias signal. The square wave biasing signal is variable from 0.5 to 36.0 V peak-to-peak (VPP) with a frequency range of DC to 10 kHz. The test setup protects the VNA from transient processes, voltage spikes, and high-frequency leakage. Additionally, the signals to the VNA are fused to ½ amp and clipped to a maximum of 36 VPP based on bias tee limitations. This setup allows us to measure S-parameters as a function of both the voltage and the frequency of the applied bias signal. PMID:27587141

  7. Monthly evaporation forecasting using artificial neural networks and support vector machines

    NASA Astrophysics Data System (ADS)

    Tezel, Gulay; Buyukyildiz, Meral

    2016-04-01

    Evaporation is one of the most important components of the hydrological cycle, but is relatively difficult to estimate, due to its complexity, as it can be influenced by numerous factors. Estimation of evaporation is important for the design of reservoirs, especially in arid and semi-arid areas. Artificial neural network methods and support vector machines (SVM) are frequently utilized to estimate evaporation and other hydrological variables. In this study, usability of artificial neural networks (ANNs) (multilayer perceptron (MLP) and radial basis function network (RBFN)) and ɛ-support vector regression (SVR) artificial intelligence methods was investigated to estimate monthly pan evaporation. For this aim, temperature, relative humidity, wind speed, and precipitation data for the period 1972 to 2005 from Beysehir meteorology station were used as input variables while pan evaporation values were used as output. The Romanenko and Meyer methods were also considered for comparison. The results were compared with observed class A pan evaporation data. In the MLP method, four different training algorithms, gradient descent with momentum and adaptive learning rule backpropagation (GDX), Levenberg-Marquardt (LVM), scaled conjugate gradient (SCG), and resilient backpropagation (RBP), were used. The ɛ-SVR variant was used as the SVR model. The models were designed via 10-fold cross-validation (CV); algorithm performance was assessed via mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R^2). According to the performance criteria, the ANN algorithms and ɛ-SVR had similar results. The ANN and ɛ-SVR methods were found to perform better than the Romanenko and Meyer methods. Consequently, the best performance using the test data was obtained using SCG(4,2,2,1) with R^2 = 0.905.
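
    The three performance criteria used in the study (MAE, RMSE, R^2) are standard and easy to state; a minimal sketch with illustrative numbers, not the paper's data:

```python
import math

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    """Root mean square error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r2(obs, pred):
    """Coefficient of determination."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

observed = [2.1, 3.4, 5.0, 4.2]   # illustrative pan evaporation values
predicted = [2.0, 3.6, 4.8, 4.5]  # illustrative model output
scores = (mae(observed, predicted), rmse(observed, predicted), r2(observed, predicted))
```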

  8. Performance of Ultra-Scale Applications on Leading Vector and Scalar HPC Platforms

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan Carter; Shalf,John; Simon, Horst; Ethier, Stephane; Parks, David; Kitawaki, Shigemune; Tsuda, Yoshinori; Sato, Tetsuya

    2005-01-01

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end capability and capacity computers, primarily because of their generality, scalability, and cost effectiveness. However, the constant degradation of superscalar sustained performance has become a well-known problem in the scientific computing community. This trend has been widely attributed to the use of superscalar-based commodity components whose architectural designs offer a balance between memory performance, network capability, and execution rate that is poorly matched to the requirements of large-scale numerical computations. The recent development of massively parallel vector systems offers the potential to widen the performance gap for many important classes of algorithms. In this study we examine four diverse scientific applications with the potential to run at ultrascale, from the areas of plasma physics, material science, astrophysics, and magnetic fusion. We compare performance between the vector-based Earth Simulator (ES) and Cray X1 and leading superscalar-based platforms: the IBM Power3/4 and the SGI Altix. Results demonstrate that the ES vector systems achieve excellent performance on our application suite - the highest of any architecture tested to date.

  9. Maximizing sparse matrix vector product performance in MIMD computers

    SciTech Connect

    McLay, R.T.; Kohli, H.S.; Swift, S.L.; Carey, G.F.

    1994-12-31

    A considerable component of the computational effort involved in conjugate gradient solution of structured sparse matrix systems is expended during the Matrix-Vector Product (MVP), and hence it is the focus of most efforts at improving performance. Such efforts are hindered on MIMD machines due to constraints on memory, cache, and speed of memory-CPU data transfer. This paper describes a strategy for maximizing the performance of the local computations associated with the MVP. The method focuses on single-stride memory access and the efficient use of cache by pre-loading it with data that is re-used, while bypassing it for other data. The algorithm is designed to behave optimally for varying grid sizes and numbers of unknowns per gridpoint. Results from an assembly language implementation of the strategy on the iPSC/860 show a significant improvement over the performance using FORTRAN.
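
    The single-stride idea maps naturally onto compressed sparse row (CSR) storage, where each row's nonzeros sit contiguously in memory; a plain-Python sketch of the MVP kernel (the paper's implementation is iPSC/860 assembly, not this):

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product in CSR storage: each row's nonzeros are
    stored contiguously, so the inner loop walks values/col_idx with unit
    stride; the gather from x is the only indirect access."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# 2x2 matrix [[4, 0], [1, 3]] applied to [1, 2]
y = csr_matvec([4.0, 1.0, 3.0], [0, 0, 1], [0, 1, 3], [1.0, 2.0])
```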

  10. Locally connected neural network with improved feature vector

    NASA Technical Reports Server (NTRS)

    Thomas, Tyson (Inventor)

    2004-01-01

    A pattern recognizer which uses neuromorphs with a fixed amount of energy that is distributed among the elements. The distribution of the energy is used to form a histogram which is used as a feature vector.

  11. Double Virus Vector Infection to the Prefrontal Network of the Macaque Brain

    PubMed Central

    Tanaka, Shingo; Koizumi, Masashi; Kikusui, Takefumi; Ichihara, Nobutsune; Kato, Shigeki; Kobayashi, Kazuto; Sakagami, Masamichi

    2015-01-01

    To precisely understand how higher cognitive functions are implemented in the prefrontal network of the brain, optogenetic and pharmacogenetic methods to manipulate the signal transmission of a specific neural pathway are required. The application of these methods, however, has been mostly restricted to animals other than the primate, which is the best animal model to investigate higher cognitive functions. In this study, we used a double viral vector infection method in the prefrontal network of the macaque brain. This enabled us to express specific constructs into specific neurons that constitute a target pathway without use of germline genetic manipulation. The double-infection technique utilizes two different virus vectors in two monosynaptically connected areas. One is a vector which can locally infect cell bodies of projection neurons (local vector) and the other can retrogradely infect from axon terminals of the same projection neurons (retrograde vector). The retrograde vector incorporates the sequence which encodes Cre recombinase and the local vector incorporates the “Cre-On” FLEX double-floxed sequence in which a reporter protein (mCherry) was encoded. mCherry thus came to be expressed only in doubly infected projection neurons with these vectors. We applied this method to two macaque monkeys and targeted two different pathways in the prefrontal network: The pathway from the lateral prefrontal cortex to the caudate nucleus and the pathway from the lateral prefrontal cortex to the frontal eye field. As a result, mCherry-positive cells were observed in the lateral prefrontal cortex in all of the four injected hemispheres, indicating that the double virus vector transfection is workable in the prefrontal network of the macaque brain. PMID:26193102

  12. Distributed Vector Estimation for Power- and Bandwidth-Constrained Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Sani, Alireza; Vosoughi, Azadeh

    2016-08-01

    We consider distributed estimation of a Gaussian vector with a linear observation model in an inhomogeneous wireless sensor network, where a fusion center (FC) reconstructs the unknown vector, using a linear estimator. Sensors employ uniform multi-bit quantizers and binary PSK modulation, and communicate with the FC over orthogonal power- and bandwidth-constrained wireless channels. We study transmit power and quantization rate (measured in bits per sensor) allocation schemes that minimize mean-square error (MSE). In particular, we derive two closed-form upper bounds on the MSE, in terms of the optimization parameters, and propose coupled and decoupled resource allocation schemes that minimize these bounds. We show that the bounds are good approximations of the simulated MSE and that the performance of the proposed schemes approaches the clairvoyant centralized estimation when total transmit power or bandwidth is very large. We study how the power and rate allocation depend on sensors' observation qualities and channel gains, as well as on total transmit power and bandwidth constraints. Our simulations corroborate our analytical results and illustrate the superior performance of the proposed algorithms.
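
    A uniform multi-bit quantizer of the kind the sensors employ can be sketched as follows; names and the dynamic range are illustrative, and the step-size relation (quantization noise variance about step^2/12) is what ties rate to MSE in such schemes:

```python
def uniform_quantize(x, bits, lo=-1.0, hi=1.0):
    """Uniform quantizer with 2**bits cells over [lo, hi]; each observation
    is reported to the fusion center as its cell midpoint."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    clipped = min(max(x, lo), hi)
    idx = min(int((clipped - lo) / step), levels - 1)
    return lo + (idx + 0.5) * step

q = uniform_quantize(0.3, bits=3)   # 8 levels over [-1, 1], step 0.25
step = (1.0 - (-1.0)) / 2 ** 3
noise_var = step ** 2 / 12          # approximate per-sensor quantization MSE term
```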

  13. Intercomparison of Terahertz Dielectric Measurements Using Vector Network Analyzer and Time-Domain Spectrometer

    NASA Astrophysics Data System (ADS)

    Naftaly, Mira; Shoaib, Nosherwan; Stokes, Daniel; Ridler, Nick M.

    2016-02-01

    We describe a method for direct intercomparison of terahertz permittivities at 200 GHz obtained by a Vector Network Analyzer and a Time-Domain Spectrometer, whereby both instruments operate in their customary configurations, i.e., the VNA in waveguide and TDS in free-space. The method employs material that can be inserted into a waveguide for VNA measurements or contained in a cell for TDS measurements. The intercomparison experiments were performed using two materials: petroleum jelly and a mixture of petroleum jelly with carbon powder. The obtained values of complex permittivities were similar within the measurement uncertainty. An intercomparison between VNA and TDS measurements is of importance because the two modalities are customarily employed separately and require different approaches. Since material permittivities can and have been measured using either platform, it is necessary to ascertain that the obtained data is similar in both cases.

  14. Intercomparison of Terahertz Dielectric Measurements Using Vector Network Analyzer and Time-Domain Spectrometer

    NASA Astrophysics Data System (ADS)

    Naftaly, Mira; Shoaib, Nosherwan; Stokes, Daniel; Ridler, Nick M.

    2016-07-01

    We describe a method for direct intercomparison of terahertz permittivities at 200 GHz obtained by a Vector Network Analyzer and a Time-Domain Spectrometer, whereby both instruments operate in their customary configurations, i.e., the VNA in waveguide and TDS in free-space. The method employs material that can be inserted into a waveguide for VNA measurements or contained in a cell for TDS measurements. The intercomparison experiments were performed using two materials: petroleum jelly and a mixture of petroleum jelly with carbon powder. The obtained values of complex permittivities were similar within the measurement uncertainty. An intercomparison between VNA and TDS measurements is of importance because the two modalities are customarily employed separately and require different approaches. Since material permittivities can and have been measured using either platform, it is necessary to ascertain that the obtained data is similar in both cases.

  15. Diagnosing Anomalous Network Performance with Confidence

    SciTech Connect

    Settlemyer, Bradley W; Hodson, Stephen W; Kuehn, Jeffery A; Poole, Stephen W

    2011-04-01

    Variability in network performance is a major obstacle in effectively analyzing the throughput of modern high performance computer systems. High performance interconnection networks offer excellent best-case network latencies; however, highly parallel applications running on parallel machines typically require consistently high levels of performance to adequately leverage the massive amounts of available computing power. Performance analysts have usually quantified network performance using traditional summary statistics that assume the observational data is sampled from a normal distribution. In our examinations of network performance, we have found this method of analysis often provides too little data to understand anomalous network performance. Our tool, Confidence, instead uses an empirically derived probability distribution to characterize network performance. In this paper we describe several instances where the Confidence toolkit allowed us to understand and diagnose network performance anomalies that we could not adequately explore with the simple summary statistics provided by traditional measurement tools. In particular, we examine a multi-modal performance scenario encountered with an Infiniband interconnection network and we explore the performance repeatability on the custom Cray SeaStar2 interconnection network after a set of software and driver updates.
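
    The abstract's point, that mean and standard deviation hide multi-modal behavior while an empirical distribution exposes it, can be illustrated with a toy latency sample (illustrative numbers, not the Confidence toolkit itself):

```python
def percentile(data, q):
    """Empirical percentile by linear interpolation between order statistics."""
    s = sorted(data)
    pos = (len(s) - 1) * q / 100.0
    lo, hi = int(pos), min(int(pos) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (pos - lo)

# Bimodal latencies (e.g. two contended routes): the mean lands near neither
# mode, but the empirical 10th/90th percentiles expose both clusters.
latencies = [1.0, 1.1, 1.05, 0.95, 5.0, 5.1, 4.9, 5.05]
mean = sum(latencies) / len(latencies)
p10, p90 = percentile(latencies, 10), percentile(latencies, 90)
```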

  16. HYBRID NEURAL NETWORK AND SUPPORT VECTOR MACHINE METHOD FOR OPTIMIZATION

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2005-01-01

    System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.

  17. Hybrid Neural Network and Support Vector Machine Method for Optimization

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2007-01-01

    System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.

  18. Data Access Performance Through Parallelization and Vectored Access: Some Results

    SciTech Connect

    Furano, Fabrizio; Hanushevsky, Andrew; /SLAC

    2011-11-10

    High Energy Physics data processing and analysis applications typically deal with the problem of accessing and processing data at high speed. Recent studies, development, and test work have shown that the latencies due to data access can often be hidden by parallelizing them with the data processing, thus allowing applications to process remote data with a high level of efficiency. Techniques and algorithms able to reach this result have been implemented in the client side of the Scalla/xrootd system, and in this contribution we describe the results of some tests done in order to compare their performance and characteristics. These techniques, if used together with multiple-stream data access, can also be effective in dealing efficiently and transparently with data repositories accessible via a Wide Area Network.

  19. Radio to microwave dielectric characterisation of constitutive electromagnetic soil properties using vector network analyses

    NASA Astrophysics Data System (ADS)

    Schwing, M.; Wagner, N.; Karlovsek, J.; Chen, Z.; Williams, D. J.; Scheuermann, A.

    2016-04-01

    The knowledge of constitutive broadband electromagnetic (EM) properties of porous media such as soils and rocks is essential in the theoretical and numerical modeling of EM wave propagation in the subsurface. This paper presents an experimental and numerical study on the performance of EM measuring instruments for broadband EM waves in the radio to microwave frequency range. 3-D numerical calculations of a specific sensor were carried out using the Ansys HFSS (high frequency structural simulator) to further evaluate the probe performance. In addition, six different sensors of varying design, application purpose, and operational frequency range were tested on different calibration liquids and a sample of fine-grained soil over a frequency range of 1 MHz-40 GHz using four vector network analysers. The resulting dielectric spectrum of the soil was analysed and interpreted using a 3-term Cole-Cole model under consideration of a direct current conductivity contribution. Comparison of sensor performances on calibration materials and fine-grained soils showed consistency in the measured dielectric spectra at a frequency range from 100 MHz-2 GHz. By combining open-ended coaxial line and coaxial transmission line measurements, the observable frequency window could be extended to a truly broad frequency range of 1 MHz-40 GHz.

  20. Retroviral vector performance in defined chromosomal loci of modular packaging cell lines.

    PubMed

    Gama-Norton, L; Herrmann, S; Schucht, R; Coroadinha, A S; Löw, R; Alves, P M; Bartholomae, C C; Schmidt, M; Baum, C; Schambach, A; Hauser, H; Wirth, D

    2010-08-01

    The improvement of safety and titer of retroviral vectors produced in standard retroviral packaging cell lines is hampered because production relies on uncontrollable vector integration events. The influences of chromosomal surroundings make it difficult to dissect the performance of a specific vector from the chromosomal surroundings of the respective integration site. Taking advantage of a technology that relies on the use of packaging cell lines with predefined integration sites, we have systematically evaluated the performance of several retroviral vectors. In two previously established modular packaging cell lines (Flp293A and 293 FLEX) with single, defined chromosomal integration sites, retroviral vectors were integrated by means of Flp-mediated site-specific recombination. Vectors that are distinguished by different long terminal repeat promoters were introduced in either the sense or reverse orientation. The results show that the promoter, viral vector orientation, and integration site are the main determinants of the titer. Furthermore, we exploited the viral production systems to evaluate read-through activity. Read-through is thought to be caused by inefficient termination of vector transcription and is inherent to the nature of retroviral vectors. We assessed the frequency of transduction of sequences flanking the retroviral vectors from both integration sites. The approach presented here provides a platform for systematic design and evaluation of the efficiency and safety of retroviral vectors optimized for a given producer cell line. PMID:20222806

  1. Improved input representation for enhancement of neural network performance

    SciTech Connect

    Aldrich, C.H.; An, Z.G.; Lee, K.; Lee, Y.C.

    1987-01-01

    The performance of an associative memory network depends significantly on the representation of the data. For example, it has already been recognized that bipolar representation of neurons with -1 and +1 states outperforms neurons with on and off states of +1 and 0, respectively. This paper will show that a simple modification of the pattern vector to have zero bias provides an even more significant increase in the performance of an associative memory network. The higher order algorithm is used for the numerical simulation studies of this paper. To the lowest order this algorithm reduces to the Hopfield model for auto-associative memory and to the bidirectional associative memory (BAM) model for hetero-associative memory, respectively. 16 refs., 4 figs., 1 tab.
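
    The zero-bias modification amounts to subtracting each pattern's mean before forming the outer-product weights; a minimal sketch of the lowest-order (Hopfield) case with an illustrative pattern:

```python
def zero_bias(pattern):
    """Shift a pattern vector so its components sum to zero."""
    m = sum(pattern) / len(pattern)
    return [p - m for p in pattern]

def hopfield_weights(patterns):
    """Lowest-order auto-associative (Hopfield) weights from outer products,
    with zero self-connections."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

raw = [1, 1, 1, -1]        # biased bipolar pattern: components sum to 2
unbiased = zero_bias(raw)  # components now sum to 0
w = hopfield_weights([unbiased])
```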

  2. Effects of internal yaw-vectoring devices on the static performance of a pitch-vectoring nonaxisymmetric convergent-divergent nozzle

    NASA Technical Reports Server (NTRS)

    Asbury, Scott C.

    1993-01-01

    An investigation was conducted in the static test facility of the Langley 16-Foot Transonic Tunnel to evaluate the internal performance of a nonaxisymmetric convergent-divergent nozzle designed to have simultaneous pitch and yaw thrust vectoring capability. This concept utilized divergent flap deflection for thrust vectoring in the pitch plane and flow-turning deflectors installed within the divergent flaps for yaw thrust vectoring. Modifications consisting of reducing the sidewall length and deflecting the sidewall outboard were investigated as means to increase yaw-vectoring performance. This investigation studied the effects of multiaxis (pitch and yaw) thrust vectoring on nozzle internal performance characteristics. All tests were conducted with no external flow, and nozzle pressure ratio was varied from 2.0 to approximately 13.0. The results indicate that this nozzle concept can successfully generate multiaxis thrust vectoring. Deflection of the divergent flaps produced resultant pitch vector angles that, although dependent on nozzle pressure ratio, were nearly equal to the geometric pitch vector angle. Losses in resultant thrust due to pitch vectoring were small or negligible. The yaw deflectors produced resultant yaw vector angles up to 21 degrees that were controllable by varying yaw deflector rotation. However, yaw deflector rotation resulted in significant losses in thrust ratios and, in some cases, nozzle discharge coefficient. Either of the sidewall modifications generally reduced these losses and increased maximum resultant yaw vector angle. During multiaxis (simultaneous pitch and yaw) thrust vectoring, little or no cross coupling between the thrust vectoring processes was observed.
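
    In such static tests the resultant vector angles are recovered from the measured force components; a hedged sketch of that reduction (illustrative force values, and the report's exact data reduction may differ):

```python
import math

def resultant_angles(fa, fn, fs):
    """Resultant pitch and yaw thrust-vector angles in degrees from axial (fa),
    normal (fn), and side (fs) force components."""
    pitch = math.degrees(math.atan2(fn, fa))
    yaw = math.degrees(math.atan2(fs, fa))
    return pitch, yaw

# illustrative components: a normal force ~36% and side force ~38% of axial
pitch, yaw = resultant_angles(fa=100.0, fn=36.4, fs=38.4)
```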

  3. Student Collaborative Networks and Academic Performance

    NASA Astrophysics Data System (ADS)

    Schmidt, David; Bridgeman, Ariel; Kohl, Patrick

    2013-04-01

    Undergraduate physics students commonly collaborate with one another on homework assignments, especially in more challenging courses. However, there currently exists a dearth of empirical research directly comparing the structure of students' collaborative networks to their academic performances in lower and upper division physics courses. We investigate such networks and associated performances through a mandated collaboration reporting system in two sophomore level and three junior level physics courses during the Fall 2012 and Spring 2013 semesters. We employ social network analysis to quantify the structure and time evolution of networks involving approximately 140 students. Analysis includes analytical and numerical assignments in addition to homework and exam scores. Preliminary results are discussed.

  4. Target localization in wireless sensor networks using online semi-supervised support vector regression.

    PubMed

    Yoo, Jaehyun; Kim, H Jin

    2015-01-01

    Machine learning has been successfully used for target localization in wireless sensor networks (WSNs) due to its accurate and robust estimation against highly nonlinear and noisy sensor measurements. For efficient and adaptive learning, this paper introduces online semi-supervised support vector regression (OSS-SVR). The first advantage of the proposed algorithm is that, based on a semi-supervised learning framework, it can reduce the requirement on the amount of labeled training data while maintaining accurate estimation. Second, with an extension to online learning, the proposed OSS-SVR automatically tracks changes of the system to be learned, such as varied noise characteristics. We compare the proposed algorithm with semi-supervised manifold learning, an online Gaussian process and online semi-supervised colocalization. The algorithms are evaluated for estimating the unknown location of a mobile robot in a WSN. The experimental results show that the proposed algorithm is more accurate under the smaller amount of labeled training data and is robust to varying noise. Moreover, the suggested algorithm performs fast computation, maintaining the best localization performance in comparison with the other methods. PMID:26024420

  5. The interplay of vaccination and vector control on small dengue networks.

    PubMed

    Hendron, Ross-William S; Bonsall, Michael B

    2016-10-21

    Dengue fever is a major public health issue affecting billions of people in over 100 countries across the globe. This challenge is growing as the invasive mosquito vectors, Aedes aegypti and Aedes albopictus, expand their distributions and increase their population sizes. Hence there is an increasing need to devise effective control methods that can contain dengue outbreaks. Here we construct an epidemiological model for virus transmission between vectors and hosts on a network of host populations distributed among city and town patches, and investigate disease control through vaccination and vector control using variants of the sterile insect technique (SIT). Analysis of the basic reproductive number and simulations indicate that host movement across this small network influences the severity of epidemics. Both vaccination and vector control strategies are investigated as methods of disease containment and our results indicate that these controls can be made more effective with mixed-strategy solutions. We predict that reduced lethality through poor SIT methods or imperfectly efficacious vaccines will impact efforts to control disease spread. In particular, weakly efficacious vaccination strategies against multiple virus serotype diversity may be counterproductive to disease control efforts. Even so, failings of one method may be mitigated by supplementing it with an alternative control strategy. Generally, our network approach encourages decision making to consider connected populations, to emphasise that successful control methods must effectively suppress dengue epidemics at this landscape scale. PMID:27457093

  6. Epidemic spreading and global stability of an SIS model with an infective vector on complex networks

    NASA Astrophysics Data System (ADS)

    Kang, Huiyan; Fu, Xinchu

    2015-10-01

    In this paper, we present a new SIS model with delay on scale-free networks. The model is suitable to describe some epidemics which are not only transmitted by a vector but also spread between individuals by direct contact. In view of the biological relevance and real spreading process, we introduce a delay to denote the average incubation period of the disease in a vector. By mathematical analysis, we obtain the epidemic threshold and prove the global stability of equilibria. The simulations show that the delay affects the epidemic spreading. Finally, we investigate and compare two major immunization strategies, uniform immunization and targeted immunization.
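
    A homogeneous-mixing toy version of such a model (a simplified sketch, not the paper's scale-free network formulation) shows how the incubation delay enters: vectors infected now only become infective after a lag tau.

```python
def simulate_sis_vector(beta_hv, beta_vh, gamma, mu, tau, t_end, dt=0.01):
    """Mean-field SIS host-vector dynamics with a fixed incubation delay tau
    in the vector. i = infected host fraction, v = infective vector fraction.
    Forward-Euler integration with a FIFO buffer holding incubating infections."""
    i, v = 0.01, 0.0
    lag = [0.0] * int(round(tau / dt))       # pipeline of incubating infections
    for _ in range(int(round(t_end / dt))):
        lag.append(beta_hv * i * (1.0 - v))  # vectors infected now ...
        dv = lag.pop(0) - mu * v             # ... become infective after tau
        di = beta_vh * v * (1.0 - i) - gamma * i
        i += dt * di
        v += dt * dv
    return i, v

# parameters with beta_hv*beta_vh/(gamma*mu) > 1, so the disease persists
i_end, v_end = simulate_sis_vector(0.5, 0.5, 0.2, 0.3, tau=0.5, t_end=50.0)
```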

  7. Artificial astrocytes improve neural network performance.

    PubMed

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NNs with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  8. Artificial Astrocytes Improve Neural Network Performance

    PubMed Central

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NNs with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  9. Ultimate conductivity performance in metallic nanowire networks

    NASA Astrophysics Data System (ADS)

    Gomes da Rocha, Claudia; Manning, Hugh G.; O'Callaghan, Colin; Ritter, Carlos; Bellew, Allen T.; Boland, John J.; Ferreira, Mauro S.

    2015-07-01

    In this work, we introduce a combined experimental and computational approach to describe the conductivity of metallic nanowire networks. Due to their highly disordered nature, these materials are typically described by simplified models in which network junctions control the overall conductivity. Here, we introduce a combined experimental and simulation approach that involves a wire-by-wire junction-by-junction simulation of an actual network. Rather than dealing with computer-generated networks, we use a computational approach that captures the precise spatial distribution of wires from an SEM analysis of a real network. In this way, we fully account for all geometric aspects of the network, i.e. for the properties of the junctions and wire segments. Our model predicts characteristic junction resistances that are smaller than those found by earlier simplified models. The model outputs characteristic values that depend on the detailed connectivity of the network, which can be used to compare the performance of different networks and to predict the optimum performance of any network and its scope for improvement.
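
    A wire-by-wire, junction-by-junction description like the one above reduces to solving Kirchhoff's laws on a weighted graph; a minimal numpy sketch on an illustrative topology (not the paper's SEM-derived networks):

```python
import numpy as np

def network_resistance(n_nodes, edges, src, sink):
    """Effective resistance between src and sink of a resistor network,
    obtained by solving node voltages from the weighted graph Laplacian."""
    L = np.zeros((n_nodes, n_nodes))
    for a, b, r in edges:             # each junction/segment as a resistor r
        g = 1.0 / r
        L[a, a] += g
        L[b, b] += g
        L[a, b] -= g
        L[b, a] -= g
    # inject 1 A at src, extract at sink; ground the sink node
    current = np.zeros(n_nodes)
    current[src], current[sink] = 1.0, -1.0
    keep = [k for k in range(n_nodes) if k != sink]
    v = np.zeros(n_nodes)
    v[keep] = np.linalg.solve(L[np.ix_(keep, keep)], current[keep])
    return v[src] - v[sink]           # R_eff = V / I with I = 1 A

# two 1-ohm segments in series (nodes 0-1-2): effective resistance 2 ohms
r_series = network_resistance(3, [(0, 1, 1.0), (1, 2, 1.0)], 0, 2)
```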

  10. A Performance Management Initiative for Local Health Department Vector Control Programs

    PubMed Central

    Gerding, Justin; Kirshy, Micaela; Moran, John W.; Bialek, Ron; Lamers, Vanessa; Sarisky, John

    2016-01-01

    Local health department (LHD) vector control programs have experienced reductions in funding and capacity. Acknowledging this situation and its potential effect on the ability to respond to vector-borne diseases, the U.S. Centers for Disease Control and Prevention and the Public Health Foundation partnered on a performance management initiative for LHD vector control programs. The initiative involved 14 programs that conducted a performance assessment using the Environmental Public Health Performance Standards. The programs, assisted by quality improvement (QI) experts, used the assessment results to prioritize improvement areas that were addressed with QI projects intended to increase effectiveness and efficiency in the delivery of services such as responding to mosquito complaints and educating the public about vector-borne disease prevention. This article describes the initiative as a process LHD vector control programs may adapt to meet their performance management needs. This study also reviews aggregate performance assessment results and QI projects, which may reveal common aspects of LHD vector control program performance and priority improvement areas. LHD vector control programs interested in performance assessment and improvement may benefit from engaging in an approach similar to this performance management initiative. PMID:27429555

  11. High Performance Networks for High Impact Science

    SciTech Connect

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  12. Pipelining performance of structured dataflow networks

    SciTech Connect

    Tonge, F.M.

    1983-01-01

    A particular approach to specifying procedure interconnection and allocation is presented. The major result is that, within stated assumptions, networks constructed using a small set of structured process connectives can achieve at least as good throughput (pipelining performance) as arbitrarily interconnected networks. 20 references.

  13. Static internal performance of a single expansion ramp nozzle with multiaxis thrust vectoring capability

    NASA Technical Reports Server (NTRS)

    Capone, Francis J.; Schirmer, Alberto W.

    1993-01-01

    An investigation was conducted at static conditions in order to determine the internal performance characteristics of a multiaxis thrust vectoring single expansion ramp nozzle. Yaw vectoring was achieved by deflecting yaw flaps in the nozzle sidewall into the nozzle exhaust flow. In order to eliminate any physical interference between the variable angle yaw flap deflected into the exhaust flow and the nozzle upper ramp and lower flap which were deflected for pitch vectoring, the downstream corners of both the nozzle ramp and lower flap were cut off to allow for up to 30 deg of yaw vectoring. The effects of nozzle upper ramp and lower flap cutout, yaw flap hinge line location and hinge inclination angle, sidewall containment, geometric pitch vector angle, and geometric yaw vector angle were studied. This investigation was conducted in the static-test facility of the Langley 16-Foot Transonic Tunnel at nozzle pressure ratios up to 8.0.

  14. Internal performance of two nozzles utilizing gimbal concepts for thrust vectoring

    NASA Technical Reports Server (NTRS)

    Berrier, Bobby L.; Taylor, John G.

    1990-01-01

    The internal performance of an axisymmetric convergent-divergent nozzle and a nonaxisymmetric convergent-divergent nozzle, both of which utilized a gimbal type mechanism for thrust vectoring was evaluated in the Static Test Facility of the Langley 16-Foot Transonic Tunnel. The nonaxisymmetric nozzle used the gimbal concept for yaw thrust vectoring only; pitch thrust vectoring was accomplished by simultaneous deflection of the upper and lower divergent flaps. The model geometric parameters investigated were pitch vector angle for the axisymmetric nozzle and pitch vector angle, yaw vector angle, nozzle throat aspect ratio, and nozzle expansion ratio for the nonaxisymmetric nozzle. All tests were conducted with no external flow, and nozzle pressure ratio was varied from 2.0 to approximately 12.0.

  15. Spatial Variance in Resting fMRI Networks of Schizophrenia Patients: An Independent Vector Analysis.

    PubMed

    Gopal, Shruti; Miller, Robyn L; Michael, Andrew; Adali, Tulay; Cetin, Mustafa; Rachakonda, Srinivas; Bustillo, Juan R; Cahill, Nathan; Baum, Stefi A; Calhoun, Vince D

    2016-01-01

    Spatial variability in resting functional MRI (fMRI) brain networks has not been well studied in schizophrenia, a disease known for both neurodevelopmental and widespread anatomic changes. Motivated by abundant evidence of neuroanatomical variability from previous studies of schizophrenia, we draw upon a relatively new approach called independent vector analysis (IVA) to assess this variability in resting fMRI networks. IVA is a blind-source separation algorithm, which segregates fMRI data into temporally coherent but spatially independent networks and has been shown to be especially good at capturing spatial variability among subjects in the extracted networks. We introduce several new ways to quantify differences in variability of IVA-derived networks between schizophrenia patients (SZs = 82) and healthy controls (HCs = 89). Voxelwise amplitude analyses showed significant group differences in the spatial maps of auditory cortex, the basal ganglia, the sensorimotor network, and visual cortex. Tests for differences (HC-SZ) in the spatial variability maps suggest that, at rest, SZs exhibit more activity within externally focused sensory and integrative networks and less activity in the default mode network thought to be related to internal reflection. Additionally, tests for difference of variance between groups further emphasize that SZs exhibit greater network variability. These results, consistent with our prediction of increased spatial variability within SZs, enhance our understanding of the disease and suggest that it is not just the amplitude of connectivity that is different in schizophrenia, but also the consistency in spatial connectivity patterns across subjects. PMID:26106217

  16. Enhancing neural-network performance via assortativity.

    PubMed

    de Franciscis, Sebastiano; Johnson, Samuel; Torres, Joaquín J

    2011-03-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations--assortativity--on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information. PMID:21517565
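
    The degree-degree correlation studied here can be computed as the Pearson correlation of the degrees at the two ends of every edge; a minimal NumPy sketch on a toy star graph (an illustration, not the authors' code):

```python
import numpy as np

def degree_assortativity(edges):
    """Pearson correlation of the degrees at the two ends of each edge.

    `edges` is a list of undirected (u, v) pairs; the coefficient is negative
    for disassortative (hub-to-leaf) wiring, positive for assortative wiring.
    """
    # Count node degrees.
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # Each undirected edge contributes both (du, dv) and (dv, du).
    x = np.array([deg[u] for u, v in edges] + [deg[v] for u, v in edges], float)
    y = np.array([deg[v] for u, v in edges] + [deg[u] for u, v in edges], float)
    return float(np.corrcoef(x, y)[0, 1])

# A star graph is maximally disassortative: every edge links the hub to a leaf.
star = [(0, i) for i in range(1, 6)]
print(degree_assortativity(star))  # -1.0
```

    A hub-centred star gives exactly -1; the assortative networks the authors favour for noise robustness would score positive.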

  17. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node ''brain'' with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  18. Enhancing neural-network performance via assortativity

    SciTech Connect

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-03-15

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.

  19. Measurements by a Vector Network Analyzer at 325 to 508 GHz

    NASA Technical Reports Server (NTRS)

    Fung, King Man; Samoska, Lorene; Chattopadhyay, Goutam; Gaier, Todd; Kangaslahti, Pekka; Pukala, David; Lau, Yuenie; Oleson, Charles; Denning, Anthony

    2008-01-01

    Recent experiments were performed in which return loss and insertion loss of waveguide test assemblies in the frequency range from 325 to 508 GHz were measured by use of a swept-frequency two-port vector network analyzer (VNA) test set. The experiments were part of a continuing effort to develop means of characterizing passive and active electronic components and systems operating at ever increasing frequencies. The waveguide test assemblies comprised WR-2.2 end sections collinear with WR-3.3 middle sections. The test set, assembled from commercially available components, included a 50-GHz VNA scattering- parameter test set and external signal synthesizers, augmented with recently developed frequency extenders, and further augmented with attenuators and amplifiers as needed to adjust radiofrequency and intermediate-frequency power levels between the aforementioned components. The tests included line-reflect-line calibration procedures, using WR-2.2 waveguide shims as the "line" standards and waveguide flange short circuits as the "reflect" standards. Calibrated dynamic ranges greater than about 20 dB for return loss and 35 dB for insertion loss were achieved. The measurement data of the test assemblies were found to substantially agree with results of computational simulations.

  20. Belief network algorithms: A study of performance

    SciTech Connect

    Jitnah, N.

    1996-12-31

    This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.

  1. Evaluation of Raman spectra of human brain tumor tissue using the learning vector quantization neural network

    NASA Astrophysics Data System (ADS)

    Liu, Tuo; Chen, Changshui; Shi, Xingzhe; Liu, Chengyong

    2016-05-01

    The Raman spectra of tissue from 20 brain tumor patients were recorded using a confocal microlaser Raman spectroscope with 785 nm excitation in vitro. A total of 133 spectra were investigated. Spectral peaks from normal white matter tissue and tumor tissue were analyzed. Algorithms such as principal component analysis, linear discriminant analysis, and the support vector machine are commonly used to analyze spectral data. However, in this study, we employed the learning vector quantization (LVQ) neural network, which is typically used for pattern recognition. By applying the proposed method, a normal diagnosis accuracy of 85.7% and a glioma diagnosis accuracy of 89.5% were achieved. The LVQ neural network is a recent approach to mining Raman spectra information. Moreover, it is fast and convenient, does not require a spectral peak counterpart, and achieves a relatively high accuracy. It can be used in brain tumor prognostics and in helping to optimize the cutting margins of gliomas.
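
    LVQ rarely appears in mainstream ML libraries, so as a hedged illustration of the classifier family used here (not the authors' spectral pipeline), below is a minimal LVQ1 sketch in NumPy on synthetic two-class data; every name and the toy data are made up for the example:

```python
import numpy as np

def lvq1_train(X, y, n_protos_per_class=1, lr=0.1, epochs=30, seed=0):
    """LVQ1: each class holds prototypes; the nearest prototype is pulled
    toward correctly-labelled samples and pushed away from mislabelled ones."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), n_protos_per_class, replace=False)
        protos.append(X[idx])
        labels.append(np.full(n_protos_per_class, c))
    P, L = np.vstack(protos).astype(float), np.concatenate(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(((P - X[i]) ** 2).sum(axis=1))  # winning prototype
            step = lr * (X[i] - P[j])
            P[j] += step if L[j] == y[i] else -step
    return P, L

def lvq_predict(P, L, X):
    d = ((X[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)
    return L[np.argmin(d, axis=1)]

# Two well-separated synthetic "spectral feature" clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (40, 2)), rng.normal(2.0, 0.3, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
P, L = lvq1_train(X, y)
print((lvq_predict(P, L, X) == y).mean())  # near 1.0 on separable data
```

    The prototype pull/push update is what distinguishes LVQ from plain nearest-centroid classification.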

  2. Estimating computer communication network performance using network simulations

    SciTech Connect

    Garcia, A.B.

    1985-01-01

    A generalized queuing model simulation of store-and-forward computer communication networks is developed and implemented using Simulation Language for Alternative Modeling (SLAM). A baseline simulation model is validated by comparison with published analytic models. The baseline model is expanded to include an ACK/NAK data link protocol, four-level message precedence, finite queues, and a response traffic scenario. Network performance, as indicated by average message delay and message throughput, is estimated using the simulation model.
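
    The validation step described, checking a simulated queueing model against a published analytic result, can be illustrated with a minimal single-node M/M/1 sketch in plain Python (SLAM is its own simulation language; this is only an analogy, not the author's model):

```python
import random

def mm1_mean_delay(lam, mu, n_msgs=200_000, seed=42):
    """Event-driven M/M/1 queue: Poisson arrivals (rate lam), exponential
    service (rate mu); returns the average message delay (wait + service)."""
    rng = random.Random(seed)
    t_arrive = 0.0          # arrival time of the current message
    t_free = 0.0            # time at which the server next becomes free
    total_delay = 0.0
    for _ in range(n_msgs):
        t_arrive += rng.expovariate(lam)
        start = max(t_arrive, t_free)          # wait if the server is busy
        t_free = start + rng.expovariate(mu)   # service completion time
        total_delay += t_free - t_arrive
    return total_delay / n_msgs

lam, mu = 0.5, 1.0
sim = mm1_mean_delay(lam, mu)
analytic = 1.0 / (mu - lam)   # M/M/1 mean sojourn time
print(sim, analytic)          # the simulated estimate converges to 2.0
```

    For λ = 0.5 and μ = 1.0 the analytic mean sojourn time is 1/(μ − λ) = 2.0, which the simulated estimate approaches as the message count grows.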

  3. Speech recognition method based on genetic vector quantization and BP neural network

    NASA Astrophysics Data System (ADS)

    Gao, Li'ai; Li, Lihua; Zhou, Jian; Zhao, Qiuxia

    2009-07-01

    Vector quantization is one of the popular codebook design methods for speech recognition at present. In codebook design, the traditional LBG algorithm converges quickly, but it easily settles into a local optimum and is sensitive to the initial codebook. Because the genetic algorithm is capable of finding a global optimum, this paper proposes a hybrid clustering method, GA-L, that combines the genetic algorithm with the LBG algorithm to improve the codebook, and then uses a genetic neural network for speech recognition, thereby searching for a globally optimal codebook over the training vector space. The experiments show that the neural-network identification method based on the genetic algorithm escapes local maxima and the restrictions of the initial codebook and outperforms the standard genetic algorithm and the BP neural network algorithm on various data sources. On the same GA-VQ codebook, the genetic BP neural network achieves a higher recognition rate and unique application advantages over the general BP neural network, winning on both time and efficiency.
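
    For reference, the LBG step that the GA-L hybrid builds on is a split-and-refine codebook design; a minimal NumPy sketch on synthetic 2-D feature vectors (illustrative only; the genetic layer and real speech features are omitted):

```python
import numpy as np

def lbg_codebook(X, size=4, eps=1e-3):
    """LBG: start from the global centroid, repeatedly split each codeword
    by a +/- eps perturbation, then refine with nearest-neighbour (Lloyd)
    updates. Prone to local optima, the weakness the GA hybrid targets."""
    C = X.mean(axis=0, keepdims=True)
    while len(C) < size:
        C = np.vstack([C * (1 + eps), C * (1 - eps)])  # split every codeword
        for _ in range(20):                            # Lloyd refinement
            d = ((X[:, None] - C[None]) ** 2).sum(-1)
            near = d.argmin(1)
            C = np.array([X[near == k].mean(0) if (near == k).any() else C[k]
                          for k in range(len(C))])
    return C

# Four synthetic 2-D "feature vector" clusters along a line.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(c, 0.1, (50, 2))
               for c in ((0.0, 0.0), (2.0, 2.0), (4.0, 4.0), (6.0, 6.0))])
C = lbg_codebook(X, size=4)
print(np.round(C[np.argsort(C[:, 0])], 2))  # codewords land near the cluster centres
```

    A GA wrapper would evolve a population of such codebooks and keep the lowest-distortion individuals, trading extra evaluations for escape from local optima.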

  4. Diversity Performance Analysis on Multiple HAP Networks

    PubMed Central

    Dong, Feihong; Li, Min; Gong, Xiangwu; Li, Hongjun; Gao, Fengyue

    2015-01-01

    One of the main design challenges in wireless sensor networks (WSNs) is achieving a high-data-rate transmission for individual sensor devices. The high altitude platform (HAP) is an important communication relay platform for WSNs and next-generation wireless networks. Multiple-input multiple-output (MIMO) techniques provide the diversity and multiplexing gain, which can improve the network performance effectively. In this paper, a virtual MIMO (V-MIMO) model is proposed by networking multiple HAPs with the concept of multiple assets in view (MAV). In a shadowed Rician fading channel, the diversity performance is investigated. The probability density function (PDF) and cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) are derived. In addition, the average symbol error rate (ASER) with BPSK and QPSK is given for the V-MIMO model. The system capacity is studied for both perfect channel state information (CSI) and unknown CSI individually. The ergodic capacity with various SNR and Rician factors for different network configurations is also analyzed. The simulation results validate the effectiveness of the performance analysis. It is shown that the performance of the HAPs network in WSNs can be significantly improved by utilizing the MAV to achieve overlapping coverage, with the help of the V-MIMO techniques. PMID:26134102
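
    As a simplified illustration of how diversity combining drives down the average symbol error rate (using i.i.d. Rayleigh fading rather than the paper's shadowed Rician channel, and Monte-Carlo averaging rather than closed-form PDFs), a sketch in plain Python:

```python
import math
import random

def bpsk_ser_rayleigh(snr_lin, branches=1, trials=200_000, seed=7):
    """Average BPSK symbol error rate with L-branch maximal-ratio combining
    over i.i.d. Rayleigh fading, by averaging the conditional error
    probability Q(sqrt(2 * SNR)) over random channel gains."""
    rng = random.Random(seed)
    q = lambda v: 0.5 * math.erfc(v / math.sqrt(2.0))  # Gaussian tail Q(v)
    total = 0.0
    for _ in range(trials):
        # |h|^2 of a unit-power Rayleigh branch is exponential(1); MRC adds branches.
        gain = sum(-math.log(1.0 - rng.random()) for _ in range(branches))
        total += q(math.sqrt(2.0 * snr_lin * gain))
    return total / trials

pe1 = bpsk_ser_rayleigh(10.0, branches=1)   # single antenna
pe2 = bpsk_ser_rayleigh(10.0, branches=2)   # two-branch diversity
print(pe1, pe2)  # the two-branch error rate is roughly an order of magnitude lower
```

    The same averaging principle underlies the paper's ASER derivation, with the exponential gain distribution replaced by the shadowed Rician one.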

  5. Diversity Performance Analysis on Multiple HAP Networks.

    PubMed

    Dong, Feihong; Li, Min; Gong, Xiangwu; Li, Hongjun; Gao, Fengyue

    2015-01-01

    One of the main design challenges in wireless sensor networks (WSNs) is achieving a high-data-rate transmission for individual sensor devices. The high altitude platform (HAP) is an important communication relay platform for WSNs and next-generation wireless networks. Multiple-input multiple-output (MIMO) techniques provide the diversity and multiplexing gain, which can improve the network performance effectively. In this paper, a virtual MIMO (V-MIMO) model is proposed by networking multiple HAPs with the concept of multiple assets in view (MAV). In a shadowed Rician fading channel, the diversity performance is investigated. The probability density function (PDF) and cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) are derived. In addition, the average symbol error rate (ASER) with BPSK and QPSK is given for the V-MIMO model. The system capacity is studied for both perfect channel state information (CSI) and unknown CSI individually. The ergodic capacity with various SNR and Rician factors for different network configurations is also analyzed. The simulation results validate the effectiveness of the performance analysis. It is shown that the performance of the HAPs network in WSNs can be significantly improved by utilizing the MAV to achieve overlapping coverage, with the help of the V-MIMO techniques. PMID:26134102

  6. WDM backbone network with guaranteed performance planning

    NASA Astrophysics Data System (ADS)

    Liang, Peng; Sheng, Wang; Zhong, Xusi; Li, Lemin

    2005-11-01

    Wavelength-division multiplexing (WDM), which allows a single fibre to carry multiple signals simultaneously, has been widely used to increase link capacity and is a promising technology for backbone transport networks. But designing such WDM backbone networks is hard for two reasons: the uncertainty of future traffic demand, and the difficulty of planning backup resources for failure conditions. As a result, an enormous amount of link capacity has to be provisioned for the network. Recently, a new approach called the Valiant Load-Balanced Scheme (VLBS) has been proposed to design WDM backbone networks. A network planned by the Valiant Load-Balanced Scheme is insensitive to the traffic and continues to guarantee performance under a user-defined number of link or node failures. In this paper, the Valiant Load-Balanced Scheme (VLBS) for backbone network planning has been studied and a new Valiant Load-Balanced Scheme has been proposed. Compared with the earlier work, the new Valiant Load-Balanced Scheme is much more general and can be used to compute the link capacity of both homogeneous and heterogeneous networks; we abbreviate this general Valiant Load-Balanced Scheme as GVLBS. After a brief description of the VLBS, we give a detailed derivation of the GVLBS. The central idea of the derivation is to transform the heterogeneous network into a homogeneous one and take advantage of the VLBS to obtain the GVLBS. This transformation process is described, and the derivation and analysis of the GVLBS for link capacity under normal and failure conditions is also given. The numerical results show that the GVLBS can compute the minimum link capacity required for a heterogeneous backbone network under different conditions (normal or failure).

  7. Performance characterization of a broadband vector Apodizing Phase Plate coronagraph.

    PubMed

    Otten, Gilles P P L; Snik, Frans; Kenworthy, Matthew A; Miskiewicz, Matthew N; Escuti, Michael J

    2014-12-01

    One of the main challenges for the direct imaging of planets around nearby stars is the suppression of the diffracted halo from the primary star. Coronagraphs are angular filters that suppress this diffracted halo. The Apodizing Phase Plate coronagraph modifies the pupil-plane phase with an anti-symmetric pattern to suppress diffraction over a 180 degree region from 2 to 7 λ/D and achieves a mean raw contrast of 10(-4) in this area, independent of the tip-tilt stability of the system. Current APP coronagraphs implemented using classical phase techniques are limited in bandwidth and suppression region geometry (i.e. only on one side of the star). In this paper, we introduce the vector-APP (vAPP) whose phase pattern is implemented through the vector phase imposed by the orientation of patterned liquid crystals. Beam-splitting according to circular polarization states produces two, complementary PSFs with dark holes on either side. We have developed a prototype vAPP that consists of a stack of three twisting liquid crystal layers to yield a bandwidth of 500 to 900 nm. We characterize the properties of this device using reconstructions of the pupil-plane pattern, and of the ensuing PSF structures. By imaging the pupil between crossed and parallel polarizers we reconstruct the fast axis pattern, transmission, and retardance of the vAPP, and use this as input for a PSF model. This model includes aberrations of the laboratory set-up, and matches the measured PSF, which shows a raw contrast of 10(-3.8) between 2 and 7 λ/D in a 135 degree wedge. The vAPP coronagraph is relatively easy to manufacture and can be implemented together with a broadband quarter-wave plate and Wollaston prism in a pupil wheel in high-contrast imaging instruments. 
The liquid crystal patterning technique permits the application of extreme phase patterns with deeper contrasts inside the dark holes, and the multilayer liquid crystal achromatization technique enables unprecedented spectral bandwidths.

  8. Performance analysis of distributed symmetric sparse matrix vector multiplication algorithm for multi-core architectures

    SciTech Connect

    Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; Maris, Pieter; Vary, James P.

    2015-07-14

    Sparse matrix vector multiply (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. In this paper, our target systems are high-end multi-core architectures, and we use a message passing interface (MPI) + open multiprocessing (OpenMP) hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of the distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topology. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.
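
    The symmetry exploitation described here stores only one triangle of the matrix and lets each off-diagonal entry contribute to two output components; a serial NumPy sketch of that kernel (the MPI+OpenMP overlap machinery is beyond a snippet):

```python
import numpy as np

def spmv_symmetric(rows, cols, vals, x, n):
    """y = A @ x for symmetric A stored as its lower triangle in COO form.

    Each stored off-diagonal entry a_ij (i > j) contributes to both y_i and
    y_j, roughly halving memory traffic versus a full-matrix SpMV.
    """
    y = np.zeros(n)
    for i, j, a in zip(rows, cols, vals):
        y[i] += a * x[j]
        if i != j:                # mirror the off-diagonal entry
            y[j] += a * x[i]
    return y

# Lower triangle of a small symmetric test matrix.
rows = [0, 1, 1, 2, 2]
cols = [0, 0, 1, 1, 2]
vals = [4.0, 1.0, 3.0, 2.0, 5.0]
x = np.array([1.0, 2.0, 3.0])
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 2.0],
              [0.0, 2.0, 5.0]])
print(spmv_symmetric(rows, cols, vals, x, 3))  # matches A @ x
```

    The scatter into y[j] is also what makes the distributed version hard to parallelize naively: each stored entry writes to two rows, which may live on different processes.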

  9. Performance analysis of distributed symmetric sparse matrix vector multiplication algorithm for multi-core architectures

    DOE PAGESBeta

    Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; Maris, Pieter; Vary, James P.

    2015-07-14

    Sparse matrix vector multiply (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. In this paper, our target systems are high-end multi-core architectures, and we use a message passing interface (MPI) + open multiprocessing (OpenMP) hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of the distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topology. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.

  10. Health professional networks as a vector for improving healthcare quality and safety: a systematic review

    PubMed Central

    Ranmuthugala, Geetha; Plumb, Jennifer; Georgiou, Andrew; Westbrook, Johanna I; Braithwaite, Jeffrey

    2011-01-01

    Background While there is a considerable corpus of theoretical and empirical literature on networks within and outside of the health sector, multiple research questions are yet to be answered. Objective To conduct a systematic review of studies of professionals' network structures, identifying factors associated with network effectiveness and sustainability, particularly in relation to quality of care and patient safety. Methods The authors searched MEDLINE, CINAHL, EMBASE, Web of Science and Business Source Premier from January 1995 to December 2009. Results A majority of the 26 unique studies identified used social network analysis to examine structural relationships in networks: structural relationships within and between networks, health professionals and their social context, health collaboratives and partnerships, and knowledge sharing networks. Key aspects of networks explored were administrative and clinical exchanges, network performance, integration, stability and influences on the quality of healthcare. More recent studies show that cohesive and collaborative health professional networks can facilitate the coordination of care and contribute to improving quality and safety of care. Structural network vulnerabilities include cliques, professional and gender homophily, and over-reliance on central agencies or individuals. Conclusions Effective professional networks employ natural structural network features (eg, bridges, brokers, density, centrality, degrees of separation, social capital, trust) in producing collaboratively oriented healthcare. This requires efficient transmission of information and social and professional interaction within and across networks. For those using networks to improve care, recurring success factors are understanding your network's characteristics, attending to its functioning and investing time in facilitating its improvement. Despite this, there is no guarantee that time spent on networks will necessarily improve patient

  11. Predictable nonwandering localization of covariant Lyapunov vectors and cluster synchronization in scale-free networks of chaotic maps.

    PubMed

    Kuptsov, Pavel V; Kuptsova, Anna V

    2014-09-01

    Covariant Lyapunov vectors for scale-free networks of Hénon maps are highly localized. We revealed two mechanisms of the localization related to full and phase cluster synchronization of network nodes. In both cases the localization nodes remain unaltered in the course of the dynamics, i.e., the localization is nonwandering. Moreover, this is predictable: The localization nodes are found to have specific dynamical and topological properties and they can be found without computing the covariant vectors. This is an example of explicit relations between the system topology, its phase-space dynamics, and the associated tangent-space dynamics of covariant Lyapunov vectors. PMID:25314498

  12. Static performance of five twin-engine nonaxisymmetric nozzles with vectoring and reversing capability

    NASA Technical Reports Server (NTRS)

    Capone, F. J.

    1978-01-01

    A transonic tunnel test was performed to determine the static performance of five twin-engine nonaxisymmetric nozzles and a baseline axisymmetric nozzle at three nozzle power settings. Static thrust-vectoring and thrust-reversing performance were also determined. Nonaxisymmetric-nozzle concepts included two-dimensional convergent-divergent nozzles, wedge nozzles, and a nozzle with a single external-expansion ramp. All nonaxisymmetric nozzles had essentially the same static performance as the axisymmetric nozzle. Effective thrust vectoring and reversing were also achieved.

  13. Selected Performance Measurements of the F-15 Active Axisymmetric Thrust-vectoring Nozzle

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Sims, Robert L.

    1998-01-01

    Flight tests recently completed at the NASA Dryden Flight Research Center evaluated performance of a hydromechanically vectored axisymmetric nozzle onboard the F-15 ACTIVE. A flight-test technique whereby strain gages installed onto engine mounts provided for the direct measurement of thrust and vector forces has proven to be extremely valuable. Flow turning and thrust efficiency, as well as nozzle static pressure distributions were measured and analyzed. This report presents results from testing at an altitude of 30,000 ft and a speed of Mach 0.9. Flow turning and thrust efficiency were found to be significantly different than predicted, and moreover, varied substantially with power setting and pitch vector angle. Results of an in-flight comparison of the direct thrust measurement technique and an engine simulation fell within the expected uncertainty bands. Overall nozzle performance at this flight condition demonstrated the F100-PW-229 thrust-vectoring nozzles to be highly capable and efficient.

  14. Selected Performance Measurements of the F-15 ACTIVE Axisymmetric Thrust-Vectoring Nozzle

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Sims, Robert L.

    1999-01-01

    Flight tests recently completed at the NASA Dryden Flight Research Center evaluated performance of a hydromechanically vectored axisymmetric nozzle onboard the F-15 ACTIVE. A flight-test technique whereby strain gages installed onto engine mounts provided for the direct measurement of thrust and vector forces has proven to be extremely valuable. Flow turning and thrust efficiency, as well as nozzle static pressure distributions were measured and analyzed. This report presents results from testing at an altitude of 30,000 ft and a speed of Mach 0.9. Flow turning and thrust efficiency were found to be significantly different than predicted, and moreover, varied substantially with power setting and pitch vector angle. Results of an in-flight comparison of the direct thrust measurement technique and an engine simulation fell within the expected uncertainty bands. Overall nozzle performance at this flight condition demonstrated the F100-PW-229 thrust-vectoring nozzles to be highly capable and efficient.

  15. Monthly river flow forecasting using artificial neural network and support vector regression models coupled with wavelet transform

    NASA Astrophysics Data System (ADS)

    Kalteh, Aman Mohammad

    2013-04-01

    Reliable and accurate forecasts of river flow are needed in many water resources planning, design, development, operation and maintenance activities. In this study, the relative accuracy of artificial neural network (ANN) and support vector regression (SVR) models coupled with wavelet transform in monthly river flow forecasting is investigated, and compared to regular ANN and SVR models, respectively. The relative performance of regular ANN and SVR models is also compared to each other. For this, monthly river flow data of Kharjegil and Ponel stations in Northern Iran are used. The comparison of the results reveals that both ANN and SVR models coupled with wavelet transform are able to provide more accurate forecasting results than the regular ANN and SVR models. However, it is found that SVR models coupled with wavelet transform provide better forecasting results than ANN models coupled with wavelet transform. The results also indicate that regular SVR models perform slightly better than regular ANN models.
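
    Wavelet coupling of this kind decomposes the flow series into approximation and detail subseries before the ANN/SVR stage; a minimal single-level Haar decomposition and its exact reconstruction in NumPy (the regression stage and the authors' actual wavelet choice are omitted):

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform: a smooth approximation subseries plus a
    detail subseries, each half the length of the (even-length) input."""
    x = np.asarray(x, float)
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Invert the one-level Haar transform exactly."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    return np.stack([even, odd], axis=1).ravel()

flow = np.array([12.0, 14.0, 30.0, 28.0, 9.0, 11.0, 25.0, 27.0])  # toy monthly flows
a, d = haar_dwt(flow)
print(np.allclose(haar_idwt(a, d), flow))  # True: the decomposition is lossless
```

    In the coupled models, a regressor is fit to each subseries (or to their concatenated features) and the sub-forecasts are recombined, letting the smooth and noisy components be modelled separately.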

  16. Static internal performance of an axisymmetric nozzle with multiaxis thrust-vectoring capability

    NASA Technical Reports Server (NTRS)

    Carson, George T., Jr.; Capone, Francis J.

    1991-01-01

    An investigation was conducted in the static test facility of the Langley 16-Foot Transonic Tunnel to determine the internal performance characteristics of a multiaxis thrust-vectoring axisymmetric nozzle. Thrust vectoring for this nozzle was achieved by deflecting only the divergent section of the nozzle. The effects of nozzle power setting and divergent flap length were studied at nozzle deflection angles of 0° to 30° and nozzle pressure ratios up to 8.0.

  17. Geometrically nonlinear design sensitivity analysis on parallel-vector high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, Majdi A.; Nguyen, Duc T.

    1993-01-01

    Parallel-vector solution strategies for the generation and assembly of element matrices, solution of the resulting system of linear equations, calculation of the unbalanced loads, displacements, and stresses, and design sensitivity analysis (DSA) are all incorporated into the Newton-Raphson (NR) procedure for nonlinear finite element analysis and DSA. Numerical results are included to show the performance of the proposed method for structural analysis and DSA in a parallel-vector computer environment.

  18. Performance Analysis of IIUM Wireless Campus Network

    NASA Astrophysics Data System (ADS)

    Abd Latif, Suhaimi; Masud, Mosharrof H.; Anwar, Farhat

    2013-12-01

    International Islamic University Malaysia (IIUM) is one of the leading universities in the world in terms of quality of education, achieved in part by providing numerous facilities, including wireless services, to every enrolled student. The quality of this wireless service is controlled and monitored by the Information Technology Division (ITD), an ISO-certified unit of the university. This paper investigates the constraints of the IIUM wireless campus network. It evaluates the performance of the network in terms of delay, throughput, and jitter. The QualNet 5.2 simulator was employed to measure these performance metrics. The observations from the simulation results could help ITD improve its wireless services further.

  19. Genetic algorithm-support vector regression for high reliability SHM system based on FBG sensor network

    NASA Astrophysics Data System (ADS)

    Zhang, XiaoLi; Liang, DaKai; Zeng, Jie; Asundi, Anand

    2012-02-01

    Structural health monitoring (SHM) based on fiber Bragg grating (FBG) sensor networks has attracted considerable attention in recent years. However, FBG sensor networks are typically embedded in or glued to the structure in simple series or parallel arrangements; in this case, if an optical fiber or fiber node fails, the sensors beyond the failure point can no longer be read. Therefore, to improve the survivability of FBG-based sensor systems in SHM, it is necessary to build a highly reliable FBG sensor network for SHM engineering applications. In this study, a model-reconstruction soft-computing recognition algorithm based on genetic algorithm-support vector regression (GA-SVR) is proposed to achieve this reliability. Furthermore, an 8-point FBG sensor system was tested in an aircraft wing box. Predicting the position of external loading damage is an important task for an SHM system; as an example, different failure modes were selected to demonstrate the survivability of the FBG-based sensor network. The results are compared with a non-reconstructed model based on GA-SVR in each failure mode. Results show that the proposed model-reconstruction algorithm based on GA-SVR maintains prediction precision when some sensors in the SHM system fail; a highly reliable sensor network for the SHM system is thus facilitated without introducing extra components or noise.

  20. On-wafer vector network analyzer measurements in the 220-325 GHz frequency band

    NASA Technical Reports Server (NTRS)

    Fung, King Man Andy; Dawson, D.; Samoska, L.; Lee, K.; Oleson, C.; Boll, G.

    2006-01-01

    We report on a full two-port on-wafer vector network analyzer test set for the 220-325 GHz (WR3) frequency band. The test set utilizes Oleson Microwave Labs frequency extenders with the Agilent 8510C network analyzer. Two-port on-wafer measurements are made with GGB Industries coplanar waveguide (CPW) probes. With this test set we have measured the WR3-band S-parameters of amplifiers on-wafer, as well as the characteristics of the CPW wafer probes. Results for a three-stage InP HEMT amplifier show 10 dB of gain at 235 GHz [1], and a single-stage amplifier shows 2.9 dB of gain at 231 GHz. The approximate upper limit of loss per CPW probe ranges from 3.0 to 4.8 dB across the WR3 frequency band.

  1. Static performance investigation of a skewed-throat multiaxis thrust-vectoring nozzle concept

    NASA Technical Reports Server (NTRS)

    Wing, David J.

    1994-01-01

    The static performance of a jet exhaust nozzle which achieves multiaxis thrust vectoring by physically skewing the geometric throat has been characterized in the static test facility of the 16-Foot Transonic Tunnel at NASA Langley Research Center. The nozzle has an asymmetric internal geometry defined by four surfaces: a convergent-divergent upper surface with its ridge perpendicular to the nozzle centerline, a convergent-divergent lower surface with its ridge skewed relative to the nozzle centerline, an outwardly deflected sidewall, and a straight sidewall. The primary goal of the concept is to provide efficient yaw thrust vectoring by forcing the sonic plane (nozzle throat) to form at a yaw angle defined by the skewed ridge of the lower surface contour. A secondary goal is to provide multiaxis thrust vectoring by combining the skewed-throat yaw-vectoring concept with upper and lower pitch flap deflections. The geometric parameters varied in this investigation included lower surface ridge skew angle, nozzle expansion ratio (divergence angle), aspect ratio, pitch flap deflection angle, and sidewall deflection angle. Nozzle pressure ratio was varied from 2 to a high of 11.5 for some configurations. The results of the investigation indicate that efficient, substantial multiaxis thrust vectoring was achieved by the skewed-throat nozzle concept. However, certain control surface deflections destabilized the internal flow field, which resulted in substantial shifts in the position and orientation of the sonic plane and had an adverse effect on thrust-vectoring and weight flow characteristics. By increasing the expansion ratio, the location of the sonic plane was stabilized. The asymmetric design resulted in interdependent pitch and yaw thrust vectoring as well as nonzero thrust-vector angles with undeflected control surfaces. By skewing the ridges of both the upper and lower surface contours, the interdependency between pitch and yaw thrust vectoring may be eliminated.

  2. Initial Flight Test Evaluation of the F-15 ACTIVE Axisymmetric Vectoring Nozzle Performance

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Hathaway, Ross; Ferguson, Michael D.

    1998-01-01

    A full-envelope database of thrust-vectoring axisymmetric nozzle performance for the Pratt & Whitney Pitch/Yaw Balance Beam Nozzle (P/YBBN) is being developed using the F-15 Advanced Control Technology for Integrated Vehicles (ACTIVE) aircraft. At this time, flight research has been completed for steady-state pitch vector angles up to 20° at an altitude of 30,000 ft, from low power settings to maximum afterburner power. The nozzle performance database includes vector forces, internal nozzle pressures, and temperatures, all of which can be used for regression-analysis modeling. The database was used to substantiate a set of nozzle performance data from wind tunnel testing and computational fluid dynamics analyses. Findings from initial flight research at Mach 0.9 and 1.2 are presented in this paper. The results show that vector efficiency is strongly influenced by power setting. A significant discrepancy between predicted and measured nozzle performance was discovered during vectoring.

  3. Analysis of a general SIS model with infective vectors on the complex networks

    NASA Astrophysics Data System (ADS)

    Juang, Jonq; Liang, Yu-Hao

    2015-11-01

    A general SIS model with infective vectors on complex networks is studied in this paper. In particular, the model considers the linear combination of three possible routes of disease propagation between infected and susceptible individuals, as well as two possible transmission types which describe how the susceptible vectors attack the infected individuals. A new technique based on the basic reproduction matrix is introduced to obtain the following results. First, necessary and sufficient conditions are obtained for the global stability of the model through a unified approach. As a result, we are able to produce the exact basic reproduction number and the precise epidemic thresholds with respect to the three spreading strengths, the curing strength, and the immunization strength all at once. Second, the monotonicity of the basic reproduction number and the above-mentioned epidemic thresholds with respect to all other parameters can be rigorously characterized. Finally, we are able to compare the effectiveness of various immunization strategies under the assumption that the number of persons getting vaccinated is the same for all strategies. In particular, we prove that in scale-free networks, both targeted and acquaintance immunizations are more effective than uniform and active immunizations, and that active immunization is the least effective strategy among the four. We are also able to determine how the vaccine should be used, at a minimum, to control the outbreak of the disease.
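
    A toy numerical companion (matrix entries are made-up, not from the paper): in models of this kind the epidemic threshold is governed by the spectral radius of the basic reproduction matrix, with the infection able to invade when that radius exceeds 1. Power iteration estimates it for a nonnegative matrix.

```python
# Power iteration for the dominant eigenvalue of a nonnegative matrix;
# for a next-generation matrix K, this dominant eigenvalue is R0.

def spectral_radius(K, iters=200):
    n = len(K)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(c) for c in w)
        v = [c / lam for c in w]
    return lam

# Hypothetical 2x2 matrix coupling individuals and vectors;
# its eigenvalues are 1.1 and -0.5, so R0 = 1.1 (outbreak).
K = [[0.4, 0.9],
     [0.7, 0.2]]
R0 = spectral_radius(K)
print("R0 =", round(R0, 3), "-> outbreak" if R0 > 1 else "-> dies out")
```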

  4. Online learning vector quantization: a harmonic competition approach based on conservation network.

    PubMed

    Wang, J H; Sun, W D

    1999-01-01

    This paper presents a self-creating neural network in which a conservation principle is incorporated with the competitive learning algorithm to harmonize equi-probable and equi-distortion criteria. Each node is associated with a measure of vitality which is updated after each input presentation. The total amount of vitality in the network at any time is 1, hence the name conservation. Competitive learning based on the vitality conservation principle is near-optimum, in the sense that the problem of trapping in a local minimum is alleviated by adding perturbations to the learning rate during node generation processes. Combined with a procedure that redistributes the learning-rate variables after generation and removal of nodes, the competitive conservation strategy provides a novel approach to the problem of harmonizing equi-error and equi-probable criteria. The training process is smooth and incremental; it not only achieves the biologically plausible learning property, but also facilitates systematic derivation of training parameters. Comparison studies on learning vector quantization involving stationary and nonstationary, structured and nonstructured inputs demonstrate that the proposed network outperforms other competitive networks in terms of quantization error, learning speed, and codeword search efficiency. PMID:18252343
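
    A toy sketch of the conservation idea only (the update rule is simplified and is not the paper's exact algorithm): each node carries a vitality score, the winner of each competition gains vitality, and all scores are renormalized so the network total is always exactly 1.

```python
import random

def nearest(codebook, x):
    """Index of the codeword closest to scalar input x."""
    return min(range(len(codebook)), key=lambda i: abs(codebook[i] - x))

def train_step(codebook, vitality, x, lr=0.1, gain=0.05):
    w = nearest(codebook, x)
    codebook[w] += lr * (x - codebook[w])   # move winner toward input
    vitality[w] += gain                     # winner gains vitality...
    total = sum(vitality)
    for i in range(len(vitality)):          # ...then renormalize so the
        vitality[i] /= total                #    total is conserved at 1

random.seed(0)
codebook = [0.2, 0.5, 0.8]
vitality = [1/3, 1/3, 1/3]
for _ in range(1000):
    train_step(codebook, vitality, random.random())
assert abs(sum(vitality) - 1.0) < 1e-9      # total vitality conserved
```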

  5. Performance of TCP variants over LTE network

    NASA Astrophysics Data System (ADS)

    Nor, Shahrudin Awang; Maulana, Ade Novia

    2016-08-01

    One implementation of a wireless network is based on the mobile broadband technology Long Term Evolution (LTE). LTE offers a variety of advantages, especially in terms of access speed, capacity, architectural simplicity, and ease of implementation, as well as a broad choice of user equipment (UE) types that can establish access. The majority of Internet connections in the world use TCP (Transmission Control Protocol) because of TCP's reliability in transmitting packets across the network. TCP's reliability lies in its ability to control congestion. TCP was originally designed for wired media, but LTE connects through a wireless medium that is less stable than wired media. A wide variety of TCP variants have been developed to produce better performance than their predecessors. In this study, we simulate the performance of TCP NewReno and TCP Vegas using network simulator version 2 (ns2), analyzing throughput, packet loss, and end-to-end delay. The simulation results show that the throughput of TCP NewReno is slightly higher than that of TCP Vegas, while TCP Vegas gives significantly better end-to-end delay and packet loss.

  6. Performance Evaluation Modeling of Network Sensors

    NASA Technical Reports Server (NTRS)

    Clare, Loren P.; Jennings, Esther H.; Gao, Jay L.

    2003-01-01

    Substantial benefits are promised by operating many spatially separated sensors collectively. Such systems are envisioned to consist of sensor nodes that are connected by a communications network. A simulation tool is being developed to evaluate the performance of networked sensor systems, incorporating such metrics as target detection probabilities, false alarm rates, and classification confusion probabilities. The tool will be used to determine configuration impacts associated with such aspects as spatial laydown, mixture of different types of sensors (acoustic, seismic, imaging, magnetic, RF, etc.), and fusion architecture. The QualNet discrete-event simulation environment serves as the underlying basis for model development and execution. This platform is recognized for its capabilities in efficiently simulating networking among mobile entities that communicate via wireless media. We are extending QualNet's communications modeling constructs to capture the sensing aspects of multi-target sensing (analogous to multiple-access communications), unimodal multi-sensing (broadcast), and multi-modal sensing (multiple channels and correlated transmissions). Methods are also being developed for modeling the sensor signal sources (transmitters), signal propagation through the media, and sensors (receivers) in a manner consistent with the discrete-event paradigm needed for performance determination of sensor network systems. This work is supported under the Microsensors Technical Area of the Army Research Laboratory (ARL) Advanced Sensors Collaborative Technology Alliance.

  7. USING MULTITAIL NETWORKS IN HIGH PERFORMANCE CLUSTERS

    SciTech Connect

    S. COLL; E. FRACHTEMBERG; F. PETRINI; A. HOISIE; L. GURVITS

    2001-03-01

    Using multiple independent networks (also known as rails) is an emerging technique to overcome bandwidth limitations and enhance the fault tolerance of current high-performance clusters. We present and analyze various avenues for exploiting multiple rails. Different rail access policies are presented and compared, including static and dynamic allocation schemes. An analytical lower bound on the number of networks required for static rail allocation is shown. We also present an extensive experimental comparison of the behavior of various allocation schemes in terms of bandwidth and latency. Striping messages over multiple rails can substantially reduce network latency, depending on average message size, network load, and allocation scheme. The methods compared include static rail allocation, round-robin rail allocation, dynamic allocation based on local knowledge, and an allocation that reserves both end-points of a message before sending it. The latter is shown to perform better than the other methods at higher loads: up to 49% better than local-knowledge allocation and 37% better than round-robin allocation. This allocation scheme also shows lower latency and saturates at higher loads (for sufficiently large messages). Most importantly, the proposed allocation scheme scales well with the number of rails and message sizes.
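
    A minimal sketch of the simplest policy compared above, round-robin striping (chunk size and message length are made-up numbers): a message is split into chunks and the chunks are dealt to the rails in turn, so per-rail load stays nearly balanced.

```python
def stripe_round_robin(message_bytes, n_rails, chunk=4096):
    """Assign message chunks to rails round-robin; return per-rail bytes."""
    sizes = [chunk] * (message_bytes // chunk)
    if message_bytes % chunk:
        sizes.append(message_bytes % chunk)     # trailing partial chunk
    load = [0] * n_rails
    for i, c in enumerate(sizes):
        load[i % n_rails] += c                  # round-robin assignment
    return load

load = stripe_round_robin(100_000, 4)
assert sum(load) == 100_000                     # no bytes lost
print("per-rail bytes:", load)                  # nearly balanced
```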

  8. The design of a broadband ocean acoustic laboratory: detailed examination of vector sensor performance

    NASA Astrophysics Data System (ADS)

    Carpenter, Robert; Silvia, Manuel; Cray, Benjamin A.

    2006-05-01

    Acoustic vector sensors measure the acoustic pressure and three orthogonal components of the acoustic particle acceleration at a single point in space. These sensors, and arrays composed of them, have a number of advantages over traditional hydrophone arrays. This includes full azimuth/elevation angle estimation, even with a single sensor. It is of interest to see how in-water vector sensor performance matches theoretical bounds. A series of experiments designed to characterize the performance of vector sensors operating in shallow water was conducted to assess sensor mounting techniques, and evaluate the sensor's ability to measure bearing and elevation angles to a source as a function of waveform characteristics and signal-to-noise ratio.
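
    A toy sketch of why one vector sensor suffices for full azimuth/elevation estimation (synthetic plane wave, unit amplitudes; not the experiment's processing): time-averaged products of pressure with the three particle-motion components form an intensity-like vector that points along the arrival direction.

```python
import math

def doa_from_intensity(p, vx, vy, vz):
    """Azimuth/elevation (degrees) from pressure-velocity products."""
    ix = sum(a * b for a, b in zip(p, vx))
    iy = sum(a * b for a, b in zip(p, vy))
    iz = sum(a * b for a, b in zip(p, vz))
    az = math.degrees(math.atan2(iy, ix))
    el = math.degrees(math.atan2(iz, math.hypot(ix, iy)))
    return az, el

# Synthesize a plane wave arriving from azimuth 40 deg, elevation 15 deg.
az0, el0 = math.radians(40), math.radians(15)
p = [math.cos(2 * math.pi * 5 * 0.01 * k) for k in range(1000)]
vx = [math.cos(el0) * math.cos(az0) * s for s in p]
vy = [math.cos(el0) * math.sin(az0) * s for s in p]
vz = [math.sin(el0) * s for s in p]
az, el = doa_from_intensity(p, vx, vy, vz)
assert abs(az - 40.0) < 1e-9 and abs(el - 15.0) < 1e-9
```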

  9. The Helioseismic and Magnetic Imager (HMI) Vector Magnetic Field Pipeline: Overview and Performance

    NASA Astrophysics Data System (ADS)

    Hoeksema, J. Todd; Liu, Yang; Hayashi, Keiji; Sun, Xudong; Schou, Jesper; Couvidat, Sebastien; Norton, Aimee; Bobra, Monica; Centeno, Rebecca; Leka, K. D.; Barnes, Graham; Turmon, Michael

    2014-09-01

    The Helioseismic and Magnetic Imager (HMI) began near-continuous full-disk solar measurements on 1 May 2010 from the Solar Dynamics Observatory (SDO). An automated processing pipeline keeps pace with observations to produce observable quantities, including the photospheric vector magnetic field, from sequences of filtergrams. The basic vector-field frame list cadence is 135 seconds, but to reduce noise the filtergrams are combined to derive data products every 720 seconds. The primary 720 s observables were released in mid-2010, including Stokes polarization parameters measured at six wavelengths, as well as intensity, Doppler velocity, and the line-of-sight magnetic field. More advanced products, including the full vector magnetic field, are now available. Automatically identified HMI Active Region Patches (HARPs) track the location and shape of magnetic regions throughout their lifetime. The vector field is computed using the Very Fast Inversion of the Stokes Vector (VFISV) code optimized for the HMI pipeline; the remaining 180∘ azimuth ambiguity is resolved with the Minimum Energy (ME0) code. The Milne-Eddington inversion is performed on all full-disk HMI observations. The disambiguation, until recently run only on HARP regions, is now implemented for the full disk. Vector and scalar quantities in the patches are used to derive active region indices potentially useful for forecasting; the data maps and indices are collected in the SHARP data series, hmi.sharp_720s. Definitive SHARP processing is completed only after the region rotates off the visible disk; quick-look products are produced in near real time. Patches are provided in both CCD and heliographic coordinates. HMI provides continuous coverage of the vector field, but has modest spatial, spectral, and temporal resolution. Coupled with limitations of the analysis and interpretation techniques, effects of the orbital velocity, and instrument performance, the resulting measurements have a certain dynamic

  10. Scheduling and performance limits of networks with constantly changing topology

    SciTech Connect

    Tassiulas, L.

    1997-01-01

    A communication network with time-varying topology is considered. The network consists of M receivers and N transmitters that may access in principle every receiver. An underlying network state process with Markovian statistics is considered, that reflects the physical characteristics of the network affecting the link service capacity. The transmissions are scheduled dynamically, based on information about the link capacities and the backlog in the network. The region of achievable throughputs is characterized. A transmission scheduling policy is proposed, that utilizes current topology state information and achieves all throughput vectors achievable by any anticipative policy. The changing topology model applies to networks of Low Earth Orbit (LEO) satellites, meteor-burst communication networks and networks with mobile users. © 1997 American Institute of Physics.
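
    A toy sketch (capacities and backlogs are made-up) of the throughput-optimal idea behind such backlog-aware policies: in each slot, given the current topology state, serve the transmitter-to-receiver assignment that maximizes total backlog-weighted service rate ("max weight"). Brute force over assignments is used here purely for clarity.

```python
from itertools import permutations

def max_weight_schedule(backlog, capacity):
    """One-receiver-per-transmitter assignment maximizing sum backlog*rate."""
    n = len(backlog)
    best, best_w = None, -1.0
    for perm in permutations(range(n)):       # rx assigned to each tx
        w = sum(backlog[i] * capacity[i][perm[i]] for i in range(n))
        if w > best_w:
            best, best_w = perm, w
    return best

backlog = [5, 1, 3]               # queued packets at each transmitter
capacity = [[2, 0, 1],            # capacity[i][j]: rate from tx i to rx j
            [1, 2, 0],            # (the current topology state)
            [0, 1, 2]]
schedule = max_weight_schedule(backlog, capacity)
print("serve tx i -> rx schedule[i]:", schedule)
```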

  11. Inference of nonlinear gene regulatory networks through optimized ensemble of support vector regression and dynamic Bayesian networks.

    PubMed

    Akutekwe, Arinze; Seker, Huseyin

    2015-08-01

    Comprehensive understanding of gene regulatory networks (GRNs) is a major challenge in systems biology. Most methods for modeling and inferring the dynamics of GRNs, such as those based on state space models, vector autoregressive models, and the G1DBN algorithm, assume linear dependencies among genes. However, this strong assumption does not truly represent the time-course relationships across genes, which are inherently nonlinear. Nonlinear modeling methods such as the S-systems and causal structure identification (CSI) have been proposed, but are known to be statistically inefficient and analytically intractable in high dimensions. To overcome these limitations, we propose an optimized ensemble approach based on support vector regression (SVR) and dynamic Bayesian networks (DBNs). The method, called SVR-DBN, uses nonlinear kernels of the SVR to infer the temporal relationships among genes within the DBN framework. The two-stage ensemble is further improved by SVR parameter optimization using particle swarm optimization. Results on eight in silico-generated datasets and two real-world datasets from Drosophila melanogaster and Escherichia coli show that our method outperformed the G1DBN algorithm by a total average accuracy of 12%. We further applied our method to model the time-course relationships of ovarian carcinoma. From our results, four hub genes were discovered. Stratified analysis further showed that the expression levels of the Prostate differentiation factor and BTG family member 2 genes were significantly increased by the cisplatin and oxaliplatin platinum drugs, while the expression levels of the Polo-like kinase and Cyclin B1 genes were both decreased by the platinum drugs. These hub genes might be potential biomarkers for ovarian carcinoma. PMID:26738192

  12. Functional performance requirements for seismic network upgrade

    SciTech Connect

    Lee, R.C.

    1991-08-18

    The SRL seismic network, established in 1976, was developed to monitor site and regional seismic activity with any potential to impact safety or reduce the containment capability of existing and planned structures and systems at the SRS; to report seismic activity relevant to emergency preparedness, including rapid assessments of earthquake location and magnitude; and to estimate potential on-site and off-site damage to facilities and lifelines for mitigation measures. All of these tasks require SRL seismologists to rapidly analyze large amounts of seismic data. The current seismic network upgrade, the subject of this Functional Performance Requirements Document, is necessary to improve system reliability and resolution. The upgrade provides equipment for the analysis of the network seismic data and replaces outdated equipment. The digital network upgrade is configured for field-station and laboratory digital processing systems. It consists of the purchase and installation of seismic sensors, data telemetry digital upgrades, a dedicated Seismic Data Processing (SDP) system (already in the procurement stage), and a Seismic Signal Analysis (SSA) system. The field station and telephone telemetry upgrades include equipment for three remote station upgrades: seismic amplifiers, voltage-controlled oscillators, pulse calibrators, weather protection (including lightning protection) systems, seismometers, and miscellaneous other parts. The central receiving and recording station upgrades will include discriminators, a helicorder amplifier, an Omega timing system, strong-motion instruments, wide-band velocity sensors, and other miscellaneous equipment.

  13. Static internal performance of single expansion-ramp nozzles with thrust vectoring and reversing

    NASA Technical Reports Server (NTRS)

    Re, R. J.; Berrier, B. L.

    1982-01-01

    The effects of geometric design parameters on the internal performance of nonaxisymmetric single expansion-ramp nozzles were investigated at nozzle pressure ratios up to approximately 10. Forward-flight (cruise), vectored-thrust, and reversed-thrust nozzle operating modes were investigated.

  14. Modeling and Performance Simulation of the Mass Storage Network Environment

    NASA Technical Reports Server (NTRS)

    Kim, Chan M.; Sang, Janche

    2000-01-01

    This paper describes the application of modeling and simulation in evaluating and predicting the performance of the mass storage network environment. Network traffic is generated to mimic the realistic pattern of file transfer, electronic mail, and web browsing. The behavior and performance of the mass storage network and a typical client-server Local Area Network (LAN) are investigated by modeling and simulation. Performance characteristics in throughput and delay demonstrate the important role of modeling and simulation in network engineering and capacity planning.
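
    A standard queueing sketch of the throughput-delay tradeoff that such simulations expose (a textbook M/M/1 model, not the paper's network model): mean time in the system is W = 1 / (mu - lambda), which blows up as offered load rho = lambda/mu approaches capacity.

```python
def mm1_delay(lam, mu):
    """Mean time in an M/M/1 system (seconds) for arrival rate lam < mu."""
    if lam >= mu:
        raise ValueError("unstable: arrival rate >= service rate")
    return 1.0 / (mu - lam)

mu = 100.0                        # service rate, packets/s (hypothetical)
for lam in (50.0, 90.0, 99.0):    # offered arrival rates
    print(f"rho={lam/mu:.2f}  mean delay={mm1_delay(lam, mu)*1000:.0f} ms")
```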

  15. Parallel-vector unsymmetric Eigen-Solver on high performance computers

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Jiangning, Qin

    1993-01-01

    The popular QR algorithm for solving for all eigenvalues of an unsymmetric matrix is reviewed. Among the basic components of the QR algorithm, it was concluded from this study that the reduction of an unsymmetric matrix to Hessenberg form (before applying the QR algorithm itself) can be done effectively by exploiting the vector speed and multiple processors offered by modern high-performance computers. Numerical examples from several test cases indicate that the proposed parallel-vector algorithm for converting a given unsymmetric matrix to Hessenberg form offers computational advantages over the existing algorithm. The time saving obtained by the proposed method increases as the problem size increases.
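
    A hedged sketch of the serial building block discussed above, not the paper's parallel-vector implementation: Householder reduction of a matrix to upper Hessenberg form, the standard preprocessing step before the QR eigenvalue iteration. The similarity transform preserves the eigenvalues (and hence the trace).

```python
import math

def hessenberg(A):
    """Return an upper Hessenberg matrix similar to square matrix A."""
    n = len(A)
    H = [row[:] for row in A]
    for k in range(n - 2):
        # Householder vector zeroing column k below the first subdiagonal.
        x = [H[i][k] for i in range(k + 1, n)]
        norm = math.sqrt(sum(v * v for v in x))
        if norm == 0.0:
            continue
        alpha = -norm if x[0] >= 0 else norm
        v = x[:]
        v[0] -= alpha
        vv = sum(u * u for u in v)
        # Similarity transform H <- P H P with P = I - 2 v v^T / (v^T v).
        for j in range(n):                       # left: update rows k+1..n-1
            s = 2 * sum(v[i] * H[k + 1 + i][j] for i in range(len(v))) / vv
            for i in range(len(v)):
                H[k + 1 + i][j] -= s * v[i]
        for i in range(n):                       # right: update cols k+1..n-1
            s = 2 * sum(H[i][k + 1 + j] * v[j] for j in range(len(v))) / vv
            for j in range(len(v)):
                H[i][k + 1 + j] -= s * v[j]
    return H

A = [[4.0, 1.0, 2.0, 3.0],
     [1.0, 3.0, 0.0, 1.0],
     [2.0, 0.0, 2.0, 1.0],
     [3.0, 1.0, 1.0, 5.0]]
H = hessenberg(A)
# Below-subdiagonal entries vanish; trace (sum of eigenvalues) is preserved.
assert all(abs(H[i][j]) < 1e-9 for i in range(4) for j in range(4) if i > j + 1)
assert abs(sum(H[i][i] for i in range(4)) - 14.0) < 1e-9
```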

  16. Improving the performance of tensor matrix vector multiplication in quantum chemistry codes.

    SciTech Connect

    Gropp, W. D.; Kaushik, D. K.; Minkoff, M.; Smith, B. F.

    2008-05-08

    Cumulative reaction probability (CRP) calculations provide a viable computational approach to estimate reaction rate coefficients. However, in order to give meaningful results these calculations should be done in many dimensions (ten to fifteen). This makes CRP codes memory intensive. For this reason, these codes use iterative methods to solve the linear systems, where a good fraction of the execution time is spent on matrix-vector multiplication. In this paper, we discuss the tensor product form of applying the system operator on a vector. This approach shows much better performance and provides huge savings in memory as compared to the explicit sparse representation of the system matrix.
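
    A small sketch of the tensor-product trick in two dimensions (toy matrices; real CRP operators are much larger and higher-dimensional): apply (A kron B) to a vector without ever forming the Kronecker matrix by reshaping the vector into a matrix X and computing A X B^T, cutting both work and memory dramatically.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(r) for r in zip(*M)]

def kron_matvec(A, B, x):
    """Compute (A kron B) @ x without forming the Kronecker matrix."""
    n, m = len(A), len(B)
    X = [x[k * m:(k + 1) * m] for k in range(n)]   # reshape x to n x m
    Y = matmul(matmul(A, X), transpose(B))         # A @ X @ B^T
    return [Y[i][j] for i in range(n) for j in range(m)]

def kron(A, B):
    """Explicit Kronecker product, used here only to verify the trick."""
    n, m = len(A), len(B)
    return [[A[i][k] * B[j][l] for k in range(n) for l in range(m)]
            for i in range(n) for j in range(m)]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0, 1.0], [1.0, 1.0]]
x = [1.0, 2.0, 3.0, 4.0]
slow = [sum(r[c] * x[c] for c in range(4)) for r in kron(A, B)]
fast = kron_matvec(A, B, x)
assert all(abs(u - v) < 1e-12 for u, v in zip(slow, fast))
```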

  17. Condition classification of small reciprocating compressor for refrigerators using artificial neural networks and support vector machines

    NASA Astrophysics Data System (ADS)

    Yang, Bo-Suk; Hwang, Won-Woo; Kim, Dong-Jo; Chit Tan, Andy

    2005-03-01

    The need to increase machine reliability and decrease production loss due to faulty products in highly automated lines requires accurate and reliable fault classification techniques. Wavelet transform and statistical methods are used to extract salient features from raw noise and vibration signals. The wavelet transform decomposes the raw time-waveform signals into two respective parts in the time and frequency domains. With the wavelet transform, prominent features can be obtained more easily than from time-waveform analysis. This paper focuses on the development of an advanced signal classifier for small reciprocating refrigerator compressors using noise and vibration signals. Three classifiers, self-organising feature map, learning vector quantisation, and support vector machine (SVM), are applied in training and testing for feature extraction, and the classification accuracies of the techniques are compared to determine the optimum fault classifier. The classification technique selected for detecting faulty reciprocating refrigerator compressors involves artificial neural networks and SVMs. The results confirm that the classification technique can differentiate faulty compressors from healthy ones with high flexibility and reliability.

  18. Artificial neural network simulation of battery performance

    SciTech Connect

    O`Gorman, C.C.; Ingersoll, D.; Jungst, R.G.; Paez, T.L.

    1998-12-31

    Although they appear deceptively simple, batteries embody a complex set of interacting physical and chemical processes. While the discrete engineering characteristics of a battery, such as the physical dimensions of the individual components, are relatively straightforward to define explicitly, the myriad chemical and physical processes, including their interactions, are much more difficult to represent accurately. Within this category are the diffusive and solubility characteristics of individual species; the reaction kinetics and mechanisms of primary chemical species as well as intermediates; and the growth and morphology characteristics of reaction products as influenced by environmental and operational use profiles. For this reason, development of analytical models that can consistently predict the performance of a battery has been only partially successful, even though significant resources have been applied to this problem. As an alternative approach, the authors have begun development of a non-phenomenological model for battery systems based on artificial neural networks. Both recurrent and non-recurrent forms of these networks have been successfully used to develop accurate representations of battery behavior. The connectionist normalized linear spline (CNLS) network has been implemented with a self-organizing layer to model a battery system with the generalized radial basis function net. Concurrently, efforts are under way to use the feedforward back-propagation network to map the "state" of a battery system. Because of the complexity of battery systems, accurate representation of the input and output parameters has proven to be very important. This paper describes these initial feasibility studies as well as the current models and makes comparisons between predicted and actual performance.

  19. Measured performances on vectorization and multitasking with a Monte Carlo code for neutron transport problems

    NASA Astrophysics Data System (ADS)

    Chauvet, Yves

    1985-07-01

    This paper summarizes two improvements to a real production code using vectorization and multitasking techniques. After a short description of the Monte Carlo algorithms employed in our neutron transport problems, we briefly describe the work done to obtain a vector code. Vectorization principles are presented, and measured performances on the CRAY 1S, CYBER 205, and CRAY X-MP are compared in terms of vector lengths. The second part of this work is an adaptation to multitasking on the CRAY X-MP using exclusively the standard multitasking tools available with FORTRAN under the COS 1.13 system. Two examples are presented. The goal of the first is to measure the overhead inherent to multitasking when tasks become too small and to define a granularity threshold, that is, a minimum size for a task. With the second example, we propose a method that is very X-MP oriented, in order to get the best speedup factor on such a computer. In conclusion, we show that Monte Carlo algorithms are very well suited to future vector and parallel computers.
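
    A toy model (all numbers hypothetical) of the granularity threshold the first example measures: with a fixed per-task overhead, splitting a job across p processors only pays off once each task's useful work is large relative to that overhead.

```python
def speedup(work, p, overhead):
    """Amdahl-style ratio: serial time over parallel time with overhead."""
    return work / (work / p + overhead)

work, p = 1.0, 4                      # one unit of work, four processors
for overhead in (0.0, 0.05, 0.5):     # per-task overhead, same units
    print(f"overhead={overhead}: speedup={speedup(work, p, overhead):.2f}x")
```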

  20. Static Thrust and Vectoring Performance of a Spherical Convergent Flap Nozzle with a Nonrectangular Divergent Duct

    NASA Technical Reports Server (NTRS)

    Wing, David J.

    1998-01-01

    The static internal performance of a multiaxis-thrust-vectoring, spherical convergent flap (SCF) nozzle with a non-rectangular divergent duct was obtained in the model preparation area of the Langley 16-Foot Transonic Tunnel. Duct cross sections of hexagonal and bowtie shapes were tested. Additional geometric parameters included throat area (power setting), pitch flap deflection angle, and yaw gimbal angle. Nozzle pressure ratio was varied from 2 to 12 for dry power configurations and from 2 to 6 for afterburning power configurations. Approximately a 1-percent loss in thrust efficiency from SCF nozzles with a rectangular divergent duct was incurred as a result of internal oblique shocks in the flow field. The internal oblique shocks were the result of cross flow generated by the vee-shaped geometric throat. The hexagonal and bowtie nozzles had mirror-imaged flow fields and therefore similar thrust performance. Thrust vectoring was not hampered by the three-dimensional internal geometry of the nozzles. Flow visualization indicates pitch thrust-vector angles larger than 10 deg may be achievable with minimal adverse effect on or a possible gain in resultant thrust efficiency as compared with the performance at a pitch thrust-vector angle of 10 deg.

  1. Scientific Application Performance on Leading Scalar and Vector Supercomputing Platforms

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Ethier, Stephane

    2007-01-01

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors used to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on conventional supercomputers has become a major concern in high performance computing, requiring significantly larger systems and greater application scalability than peak performance implies in order to achieve desired performance. The latest generation of custom-built parallel vector systems has the potential to address this issue for numerical algorithms with sufficient regularity in their computational structure. In this work we explore applications drawn from four areas: magnetic fusion (GTC), plasma physics (LBMHD3D), astrophysics (Cactus), and material science (PARATEC). We compare the performance of the vector-based Cray X1, X1E, Earth Simulator, and NEC SX-8 with that of three leading commodity-based superscalar platforms utilizing the IBM Power3, Intel Itanium2, and AMD Opteron processors. Our work makes several significant contributions: a new data-decomposition scheme for GTC that (for the first time) enables a breakthrough of the Teraflop barrier; the introduction of a new three-dimensional Lattice Boltzmann magneto-hydrodynamic implementation used to study the onset evolution of plasma turbulence that achieves over 26 Tflop/s on 4800 ES processors; the highest per-processor performance (by far) achieved by the full-production version of the Cactus ADM-BSSN application; and the largest PARATEC cell-size atomistic simulation to date. Overall, results show that the vector architectures attain unprecedented aggregate performance across our application suite, demonstrating the tremendous potential of modern parallel vector systems.

  2. Advancements and performance of iterative methods in industrial applications codes on CRAY parallel/vector supercomputers

    SciTech Connect

    Poole, G.; Heroux, M.

    1994-12-31

    This paper focuses on recent work in two widely used industrial applications codes with iterative methods. The ANSYS program, a general purpose finite element code widely used in structural analysis applications, has now added an iterative solver option. Some results are given from real applications comparing performance with the traditional parallel/vector frontal solver used in ANSYS. Discussion of the applicability of iterative solvers as general purpose solvers includes the topics of robustness, memory requirements, and CPU performance. The FIDAP program is a widely used CFD code which uses iterative solvers routinely. A brief description of the preconditioners used and some performance enhancements for CRAY parallel/vector systems is given. The solution of large-scale applications in structures and CFD includes examples from industry problems solved on CRAY systems.

  3. Curriculum Assessment Using Artificial Neural Network and Support Vector Machine Modeling Approaches: A Case Study. IR Applications. Volume 29

    ERIC Educational Resources Information Center

    Chen, Chau-Kuang

    2010-01-01

    Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…

  4. Evaluation models for soil nutrient based on support vector machine and artificial neural networks.

    PubMed

    Li, Hao; Leng, Weijia; Zhou, Yibing; Chen, Fudi; Xiu, Zhilong; Yang, Dazuo

    2014-01-01

    Soil nutrient is an important aspect that contributes to soil fertility and environmental effects. Traditional approaches to soil nutrient evaluation are quite hard to operate, posing great difficulties in practical applications. In this paper, we present a series of comprehensive evaluation models for soil nutrient using support vector machine (SVM), multiple linear regression (MLR), and artificial neural networks (ANNs). We took the content of organic matter, total nitrogen, alkali-hydrolysable nitrogen, rapidly available phosphorus, and rapidly available potassium as independent variables, while the evaluation level of soil nutrient content was taken as the dependent variable. Results show that the average prediction accuracies of the SVM models are 77.87% and 83.00%, respectively, while the general regression neural network (GRNN) model's average prediction accuracy is 92.86%, indicating that SVM and GRNN models can be used effectively to assess levels of soil nutrient with suitable dependent variables. In practical applications, both SVM and GRNN models can be used for determining levels of soil nutrient. PMID:25548781
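    The GRNN the abstract favors is essentially Nadaraya-Watson kernel regression. A minimal sketch on synthetic five-indicator data (the real soil dataset is not reproduced here; the smoothing parameter and toy labels are assumptions):

```python
import numpy as np

# Hedged GRNN sketch: predict an evaluation level as a Gaussian-weighted
# average of training targets (pattern layer -> summation layer).
def grnn_predict(X_train, y_train, X_new, sigma=0.1):
    # Pairwise squared distances between new and training samples.
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))      # pattern-layer activations
    return (w @ y_train) / w.sum(1)         # normalized weighted average

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (60, 5))        # five nutrient indicators, scaled to [0, 1]
y = (X.mean(1) > 0.5).astype(float)   # toy "evaluation level" label

pred = grnn_predict(X, y, X)
accuracy = np.mean((pred > 0.5) == (y > 0.5))
```

    With a small bandwidth each sample's own pattern unit dominates, so in-sample accuracy is high; a held-out split would give a fairer estimate.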

  5. Performance of wireless sensor networks under random node failures

    SciTech Connect

    Bradonjic, Milan; Hagberg, Aric; Feng, Pan

    2011-01-28

    Networks are essential to the function of a modern society, and the consequences of damage to a network can be large. Assessing the performance of a damaged network is an important step in network recovery and network design. Connectivity, distance between nodes, and alternative routes are some of the key indicators of network performance. In this paper, a random geometric graph (RGG) is used with two types of node failure: uniform failure and localized failure. Since network performance is multi-faceted and assessment can be time constrained, we introduce four measures, all computable in polynomial time, to estimate the performance of a damaged RGG. Simulation experiments are conducted to investigate the deterioration of networks through a period of time. With the empirical results, the performance measures are analyzed and compared to provide understanding of different failure scenarios in an RGG.
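    The setup above can be sketched directly, under assumed parameters (node count, connection radius, and failure count are illustrative, not the paper's): build an RGG in the unit square, apply uniform random node failure, and use the giant-component fraction as a simple connectivity-style performance proxy.

```python
import numpy as np

# Hedged sketch of an RGG under uniform node failure. Largest connected
# component is found with a small union-find over surviving nodes.
def largest_component_fraction(pos, r, alive):
    idx = np.flatnonzero(alive)
    parent = {i: i for i in idx}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for a in range(len(idx)):
        for b in range(a + 1, len(idx)):
            i, j = idx[a], idx[b]
            if np.hypot(*(pos[i] - pos[j])) <= r:
                parent[find(i)] = find(j)   # union the two components
    sizes = {}
    for i in idx:
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / len(pos) if sizes else 0.0

rng = np.random.default_rng(7)
pos = rng.uniform(0, 1, (150, 2))   # nodes uniform in the unit square
r = 0.15                            # connection radius (assumed)
alive = np.ones(150, dtype=bool)
intact = largest_component_fraction(pos, r, alive)

alive[rng.choice(150, 50, replace=False)] = False   # uniform failure of 50 nodes
damaged = largest_component_fraction(pos, r, alive)
```

    Removing nodes can only shrink components, so `damaged` never exceeds `intact`; localized failure would instead knock out all nodes within a disc.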

  6. Quantifying performance limitations of Kalman filters in state vector estimation problems

    NASA Astrophysics Data System (ADS)

    Bageshwar, Vibhor Lal

    In certain applications, the performance objectives of a Kalman filter (KF) are to compute unbiased, minimum-variance estimates of a state mean vector governed by a stochastic system. The KF can be considered a model-based algorithm used to recursively estimate the state mean vector and state covariance matrix. The general objective of this thesis is to investigate the performance limitations of the KF in three state vector estimation applications. Stochastic observability is a property of a system and refers to the existence of a filter for which the errors of the estimated state mean vector have bounded variance. In the first application, we derive a test to assess the stochastic observability of a KF implemented for discrete linear time-varying systems consisting of known, deterministic parameters. This class of systems includes discrete nonlinear systems linearized about the true state vector trajectory. We demonstrate the utility of the stochastic observability test using an aided INS problem. Attitude determination systems consist of a sensor set, a stochastic system, and a filter to estimate attitude. In the second application, we design an inertially aided (IA) vector matching algorithm (VMA) architecture for estimating a spacecraft's attitude. The sensor set includes rate gyros and a three-axis magnetometer (TAM). The VMA is a filtering algorithm that solves Wahba's problem. The VMA is then extended by incorporating dynamic and sensor models to formulate the IA VMA architecture. We evaluate the performance of the IA VMA architectures by using an extended KF to blend post-processed spaceflight data. Model predictive control (MPC) algorithms achieve offset-free control by augmenting the nominal system model with a disturbance model. In the third application, we consider an offset-free MPC framework that includes an output integrator disturbance model and a KF to estimate the state and disturbance vectors. Using root locus techniques, we identify sufficient
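    The bounded-variance notion behind stochastic observability can be seen in the simplest case. A hedged sketch (scalar random-walk state with assumed noise variances, not the thesis's aided-INS or attitude filters): the KF's error covariance settles to a bounded steady state.

```python
import numpy as np

# Minimal scalar Kalman filter: x_{k+1} = x_k + w_k, z_k = x_k + v_k.
rng = np.random.default_rng(3)
q, r_meas = 0.01, 0.25        # process / measurement noise variances (assumed)
x_true, x_hat, P = 0.0, 0.0, 1.0
P_history = []
for _ in range(200):
    x_true += rng.normal(0, np.sqrt(q))              # random-walk dynamics
    z = x_true + rng.normal(0, np.sqrt(r_meas))      # noisy measurement
    P += q                                           # time update of covariance
    K = P / (P + r_meas)                             # Kalman gain
    x_hat += K * (z - x_hat)                         # measurement update
    P = (1 - K) * P                                  # covariance update
    P_history.append(P)
```

    The covariance recursion is independent of the data, and `P_history` decays from its initial value to a small positive steady state, exactly the bounded-variance behavior a stochastically observable system guarantees.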

  7. Improving Memory Subsystem Performance Using ViVA: Virtual Vector Architecture

    SciTech Connect

    Gebis, Joseph; Oliker, Leonid; Shalf, John; Williams, Samuel; Yelick, Katherine

    2009-01-12

    The disparity between microprocessor clock frequencies and memory latency is a primary reason why many demanding applications run well below peak achievable performance. Software-controlled scratchpad memories, such as the Cell local store, attempt to ameliorate this discrepancy by enabling precise control over memory movement; however, scratchpad technology confronts the programmer and compiler with an unfamiliar and difficult programming model. In this work, we present the Virtual Vector Architecture (ViVA), which combines the memory semantics of vector computers with a software-controlled scratchpad memory in order to provide a more effective and practical approach to latency hiding. ViVA requires minimal changes to the core design and could thus be easily integrated with conventional processor cores. To validate our approach, we implemented ViVA on the Mambo cycle-accurate full system simulator, which was carefully calibrated to match the performance of our underlying PowerPC Apple G5 architecture. Results show that ViVA is able to deliver significant performance benefits over scalar techniques for a variety of memory access patterns as well as two important memory-bound compact kernels, corner turn and sparse matrix-vector multiplication, achieving 2x-13x improvement compared to the scalar version. Overall, our preliminary ViVA exploration points to a promising approach for improving application performance on leading microprocessors with minimal design and complexity costs, in a power efficient manner.

  8. Support vector machine based training of multilayer feedforward neural networks as optimized by particle swarm algorithm: application in QSAR studies of bioactivity of organic compounds.

    PubMed

    Lin, Wei-Qi; Jiang, Jian-Hui; Zhou, Yan-Ping; Wu, Hai-Long; Shen, Guo-Li; Yu, Ru-Qin

    2007-01-30

    Multilayer feedforward neural networks (MLFNNs) are important modeling techniques widely used in QSAR studies for their ability to represent nonlinear relationships between descriptors and activity. However, the problems of overfitting and premature convergence to local optima still pose great challenges in the practice of MLFNNs. To circumvent these problems, a support vector machine (SVM) based training algorithm for MLFNNs has been developed with the incorporation of particle swarm optimization (PSO). The introduction of the SVM based training mechanism imparts the developed algorithm with inherent capacity for combating the overfitting problem. Moreover, with the implementation of PSO for searching the optimal network weights, the SVM based learning algorithm shows relatively high efficiency in converging to the optima. The proposed algorithm has been evaluated using the Hansch data set. Application to QSAR studies of the activity of COX-2 inhibitors is also demonstrated. The results reveal that this technique provides superior performance to backpropagation (BP) and PSO training neural networks. PMID:17186488

  9. Online monitoring and control of particle size in the grinding process using least square support vector regression and resilient back propagation neural network.

    PubMed

    Pani, Ajaya Kumar; Mohanta, Hare Krishna

    2015-05-01

    Particle size soft sensing in cement mills would be largely helpful in maintaining desired cement fineness or Blaine. Despite the growing use of vertical roller mills (VRM) for clinker grinding, very little research work is available on VRM modeling. This article reports the design of three types of feedforward neural network models and a least square support vector regression (LS-SVR) model of a VRM for online monitoring of cement fineness, based on mill data collected from a cement plant. In the data pre-processing step, a comparative study of various outlier detection algorithms was performed. Subsequently, for model development, the advantage of algorithm-based data splitting over random selection is presented. The training data set obtained by use of the Kennard-Stone maximal intra-distance criterion (CADEX algorithm) was used for development of LS-SVR, back propagation neural network, radial basis function neural network, and generalized regression neural network models. Simulation results show that the resilient back propagation model performs better than the RBF network, regression network, and LS-SVR model. Model implementation has been done on the SIMULINK platform, showing online detection of abnormal data and real-time estimation of cement Blaine from knowledge of the input variables. Finally, a closed-loop study shows how the model can be effectively utilized for maintaining cement fineness at the desired value. PMID:25528293

  10. Performance monitoring for coherent DP-QPSK systems based on stokes vectors analysis

    NASA Astrophysics Data System (ADS)

    Louchet, Hadrien; Koltchanov, Igor; Richter, André

    2010-12-01

    We show how to accurately estimate the Jones matrix of the transmission line by analyzing the Stokes vectors of DP-QPSK signals. This method can be used to perform in-situ PMD measurement in dual-polarization QPSK systems, and in addition to the constant modulus algorithm (CMA) to mitigate polarization-induced impairments. The applicability of this method to other modulation formats is discussed.
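    The underlying algebra can be sketched without the authors' estimator: converting a Jones vector of the dual-polarization field into its Stokes vector, the representation the paper analyzes. Example field values and the sign convention for S3 are assumptions (conventions vary in the literature).

```python
import numpy as np

# Hedged sketch: Stokes parameters from a Jones vector (Ex, Ey).
def stokes(jones):
    ex, ey = jones
    s0 = abs(ex) ** 2 + abs(ey) ** 2          # total power
    s1 = abs(ex) ** 2 - abs(ey) ** 2          # horizontal/vertical balance
    s2 = 2 * (ex * np.conj(ey)).real          # +/-45 degree balance
    s3 = -2 * (ex * np.conj(ey)).imag         # circular balance (one convention)
    return np.array([s0, s1, s2, s3])

# A QPSK-like symbol on each polarization (assumed example values).
j = np.array([1 + 1j, 1 - 1j]) / 2
s = stokes(j)
```

    For a fully polarized field, the Stokes vector lies on the Poincaré sphere: S0 equals the norm of (S1, S2, S3), which is what makes rotations of the sphere traceable back to the line's Jones matrix.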

  11. Performance of an integrated network model

    PubMed Central

    Lehmann, François; Dunn, David; Beaulieu, Marie-Dominique; Brophy, James

    2016-01-01

    Objective To evaluate the changes in accessibility, patients’ care experiences, and quality-of-care indicators following a clinic’s transformation into a fully integrated network clinic. Design Mixed-methods study. Setting Verdun, Que. Participants Data on all patient visits were used, in addition to 2 distinct patient cohorts: 134 patients with chronic illness (ie, diabetes, arteriosclerotic heart disease, or both); and 450 women between the ages of 20 and 70 years. Main outcome measures Accessibility was measured by the number of walk-in visits, scheduled visits, and new patient enrolments. With the first cohort, patients’ care experiences were measured using validated serial questionnaires; and quality-of-care indicators were measured using biologic data. With the second cohort, quality of preventive care was measured using the number of Papanicolaou tests performed as a surrogate marker. Results Despite a negligible increase in the number of physicians, there was an increase in accessibility after the clinic’s transition to an integrated network model. During the first 4 years of operation, the number of scheduled visits more than doubled, nonscheduled visits (walk-in visits) increased by 29%, and enrolment of vulnerable patients (those with chronic illnesses) at the clinic remained high. Patient satisfaction with doctors was rated very highly at all points of time that were evaluated. While the number of Pap tests done did not increase with time, the proportion of patients meeting hemoglobin A1c and low-density lipoprotein guideline target levels increased, as did the number of patients tested for microalbuminuria. Conclusion Transformation to an integrated network model of care led to increased efficiency and enhanced accessibility with no negative effects on the doctor-patient relationship. Improvements in biologic data also suggested better quality of care. PMID:27521410

  12. Performance Characteristics of an Adaptive Mesh Refinement Calculation on Scalar and Vector Platforms

    SciTech Connect

    Welcome, Michael; Rendleman, Charles; Oliker, Leonid; Biswas, Rupak

    2006-01-31

    Adaptive mesh refinement (AMR) is a powerful technique that reduces the resources necessary to solve otherwise intractable problems in computational science. The AMR strategy solves the problem on a relatively coarse grid, and dynamically refines it in regions requiring higher resolution. However, AMR codes tend to be far more complicated than their uniform grid counterparts due to the software infrastructure necessary to dynamically manage the hierarchical grid framework. Despite this complexity, it is generally believed that future multi-scale applications will increasingly rely on adaptive methods to study problems at unprecedented scale and resolution. Recently, a new generation of parallel-vector architectures have become available that promise to achieve extremely high sustained performance for a wide range of applications, and are the foundation of many leadership-class computing systems worldwide. It is therefore imperative to understand the tradeoffs between conventional scalar and parallel-vector platforms for solving AMR-based calculations. In this paper, we examine the HyperCLaw AMR framework to compare and contrast performance on the Cray X1E, IBM Power3 and Power5, and SGI Altix. To the best of our knowledge, this is the first work that investigates and characterizes the performance of an AMR calculation on modern parallel-vector systems.

  13. Non-metallic coating thickness prediction using artificial neural network and support vector machine with time resolved thermography

    NASA Astrophysics Data System (ADS)

    Wang, Hongjin; Hsieh, Sheng-Jen; Peng, Bo; Zhou, Xunfei

    2016-07-01

    A method without requirements on knowledge of the thermal properties of coatings or substrates would be of interest in industrial applications. Supervised machine learning regressions may provide a possible solution to this problem. This paper compares the performance of two regression models (artificial neural networks (ANN) and support vector machines for regression (SVM)) with respect to coating thickness estimates made from surface temperature increments collected via time resolved thermography. We describe the SVM's role in coating thickness prediction. Non-dimensional analyses are conducted to illustrate the effects of coating thickness and various other factors on surface temperature increments; it is theoretically possible to correlate coating thickness with surface temperature increment. Based on the analyses, the laser power is selected in such a way that, during heating, the temperature increment is high enough to resolve the coating thickness variance but low enough to avoid surface melting. Sixty-one paint-coated samples with coating thicknesses varying from 63.5 μm to 571 μm are used to train the models. Hyper-parameters of the models are optimized by 10-fold cross validation. Another 28 sets of data are then collected to test the performance of the two models. The study shows that SVM can provide reliable predictions of unknown data, due to its deterministic characteristics, and it works well when used for a small input data group. The SVM model generates more accurate coating thickness estimates than the ANN model.

  14. Connections between inversion, kriging, wiener filters, support vector machines, and neural networks.

    NASA Astrophysics Data System (ADS)

    Kuzma, H. A.; Kappler, K. A.; Rector, J. W.

    2006-12-01

    Kriging, Wiener filters, support vector machines (SVMs), neural networks, and linear and non-linear inversion are methods for predicting the values of one set of variables given the values of another. They can all be used to estimate a set of model parameters from measured data, given that a physical relationship exists between models and data. However, since the methods were developed in different fields, the mathematics used to describe them tends to obscure rather than highlight the links between them. In this poster, we diagram the methods and clarify their connections in hopes that practitioners of one method will be able to understand and learn from the insights developed in another. At the heart of all of the methods is a set of coefficients that must be found by minimizing an objective function. The solution to the objective function can be found either by inverting a matrix or by searching through a space of possible answers. We distinguish between direct inversion, in which the desired coefficients are those of the model itself, and indirect inversion, in which examples of models and data are used to estimate the coefficients of an inverse process that, once discovered, can be used to compute new models from new data. Kriging was developed in geostatistics. The model is usually a rock property (such as gold concentration) and the data is a sample location (x,y,z). The desired coefficients are a set of weights used to predict the concentration of a sample taken at a new location based on a variogram. The variogram is computed by averaging across a given set of known samples and is manually adjusted to reflect prior knowledge. Wiener filters were developed in signal processing to predict the values of one time series from measurements of another. A Wiener filter can be derived from kriging by replacing variograms with correlations. Support vector machines are an offshoot of statistical learning theory. They can be written as a form of kriging in which
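    The shared "coefficients from a linear system" structure the poster describes can be shown with simple kriging, assuming a hypothetical exponential covariance and toy 1-D data (not the poster's examples): the predictor's weights come from inverting a matrix of pairwise covariances.

```python
import numpy as np

# Hedged simple-kriging sketch with an assumed exponential covariance model.
def exp_cov(h, sill=1.0, corr_len=0.5):
    return sill * np.exp(-np.abs(h) / corr_len)

x = np.array([0.0, 0.3, 0.7, 1.0])       # sample locations along a transect
y = np.sin(2 * np.pi * x)                # observed values at those locations
x0 = 0.5                                 # prediction location

C = exp_cov(x[:, None] - x[None, :])     # covariance among known samples
c0 = exp_cov(x - x0)                     # covariance to the new location
w = np.linalg.solve(C, c0)               # kriging weights: the "inversion" step
y0 = w @ y                               # predicted value at x0
```

    Replacing the covariance model with a correlation function recovers the Wiener filter form; replacing it with a kernel evaluated on training examples gives the SVM-like dual form, which is the connection the poster draws.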

  15. Communication Network Patterns and Employee Performance with New Technology.

    ERIC Educational Resources Information Center

    Papa, Michael J.

    1990-01-01

    Investigates the relationship between employee performance, new technology, employee communication network variables (activity, size, diversity, and integrativeness), and productivity at two corporate offices. Reports significant positive relationships between three of the network variables and employee productivity with new technology. Discusses…

  16. Static internal performance including thrust vectoring and reversing of two-dimensional convergent-divergent nozzles

    NASA Technical Reports Server (NTRS)

    Re, R. J.; Leavitt, L. D.

    1984-01-01

    The effects of geometric design parameters on two dimensional convergent-divergent nozzles were investigated at nozzle pressure ratios up to 12 in the static test facility. Forward flight (dry and afterburning power settings), vectored-thrust (afterburning power setting), and reverse-thrust (dry power setting) nozzles were investigated. The nozzles had thrust vector angles from 0 deg to 20.26 deg, throat aspect ratios of 3.696 to 7.612, throat radii from sharp to 2.738 cm, expansion ratios from 1.089 to 1.797, and various sidewall lengths. The results indicate that unvectored two dimensional convergent-divergent nozzles have static internal performance comparable to axisymmetric nozzles with similar expansion ratios.

  17. Optimization of ion-exchange protein separations using a vector quantizing neural network.

    PubMed

    Klein, E J; Rivera, S L; Porter, J E

    2000-01-01

    In this work, a previously proposed methodology for the optimization of analytical scale protein separations using ion-exchange chromatography is subjected to two challenging case studies. The optimization methodology uses a Doehlert shell design for design of experiments and a novel criteria function to rank chromatograms in order of desirability. This chromatographic optimization function (COF) accounts for the separation between neighboring peaks, the total number of peaks eluted, and total analysis time. The COF is penalized when undesirable peak geometries (i.e., skewed and/or shouldered peaks) are present, as determined by a vector quantizing neural network. Results of the COF analysis are fit to a quadratic response model, which is optimized with respect to the optimization variables using an advanced Nelder and Mead simplex algorithm. The optimization methodology is tested on two case study sample mixtures, the first of which is composed of equal parts of lysozyme, conalbumin, bovine serum albumin, and transferrin, and the second of which contains equal parts of conalbumin, bovine serum albumin, transferrin, beta-lactoglobulin, insulin, and alpha-chymotrypsinogen A. Mobile-phase pH and gradient length are optimized to achieve baseline resolution of all solutes for both case studies in acceptably short analysis times, thus demonstrating the usefulness of the empirical optimization methodology. PMID:10835256

  18. Efficient modeling of vector hysteresis using a novel Hopfield neural network implementation of Stoner–Wohlfarth-like operators

    PubMed Central

    Adly, Amr A.; Abd-El-Hafiz, Salwa K.

    2012-01-01

    Incorporation of hysteresis models in electromagnetic analysis approaches is indispensable to accurate field computation in complex magnetic media. Throughout those computations, vector nature and computational efficiency of such models become especially crucial when sophisticated geometries requiring massive sub-region discretization are involved. Recently, an efficient vector Preisach-type hysteresis model constructed from only two scalar models having orthogonally coupled elementary operators has been proposed. This paper presents a novel Hopfield neural network approach for the implementation of Stoner–Wohlfarth-like operators that could lead to a significant enhancement in the computational efficiency of the aforementioned model. Advantages of this approach stem from the non-rectangular nature of these operators that substantially minimizes the number of operators needed to achieve an accurate vector hysteresis model. Details of the proposed approach, its identification and experimental testing are presented in the paper. PMID:25685446

  19. Support Vector Machine and Artificial Neural Network Models for the Classification of Grapevine Varieties Using a Portable NIR Spectrophotometer.

    PubMed

    Gutiérrez, Salvador; Tardaguila, Javier; Fernández-Novales, Juan; Diago, María P

    2015-01-01

    The identification of different grapevine varieties, currently conducted using visual ampelometry, DNA analysis, and, very recently, hyperspectral analysis under laboratory conditions, is an issue of great importance in the wine industry. This work presents support vector machine and artificial neural network modelling for grapevine varietal classification from in-field leaf spectroscopy. Modelling was attempted at two scales: site-specific and global. Spectral measurements were obtained in the near-infrared (NIR) spectral range between 1600 and 2400 nm under field conditions in a non-destructive way using a portable spectrophotometer. For the site-specific approach, spectra were collected from the adaxial side of 400 individual leaves of 20 grapevine (Vitis vinifera L.) varieties one week after veraison. For the global model, two additional sets of spectra were collected one week before harvest from two different vineyards in another vintage, each one consisting of 48 measurements from individual leaves of six varieties. Several combinations of spectral scatter correction and smoothing filtering were studied. For the training of the models, support vector machines and artificial neural networks were employed, using the pre-processed spectra as input and the varieties as the classes of the models. The results from the pre-processing study showed that there was no influence whether using scatter correction or not. Also, a second-degree derivative with a window size of 5 Savitzky-Golay filtering yielded the highest outcomes. For the site-specific model, with 20 classes, the best classifiers yielded an overall score of 87.25% of correctly classified samples. These results were compared under the same conditions with a model trained using partial least squares discriminant analysis, which showed worse performance in every case. For the global model, a 6-class dataset involving samples from three different vineyards, two years and leaves
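    The winning pre-processing step (second derivative, window-5 Savitzky-Golay) can be sketched directly as windowed polynomial fits on a synthetic "spectrum" (the paper's NIR data is not reproduced; the polynomial order is an assumption, as the abstract does not state it):

```python
import numpy as np

# Hedged sketch of Savitzky-Golay second-derivative filtering: fit a local
# polynomial in each window and take its second derivative at the center.
def savgol_second_deriv(y, window=5, polyorder=3):
    half = window // 2
    out = np.full_like(y, np.nan, dtype=float)
    t = np.arange(-half, half + 1)
    for i in range(half, len(y) - half):
        coeffs = np.polyfit(t, y[i - half:i + half + 1], polyorder)
        # For p(t) = c0*t^3 + c1*t^2 + c2*t + c3, p''(0) = 2 * c1.
        out[i] = 2 * coeffs[polyorder - 2]
    return out

wavelengths = np.linspace(1600, 2400, 81)        # NIR range from the abstract
spectrum = (wavelengths / 1000.0) ** 2           # toy quadratic baseline
d2 = savgol_second_deriv(spectrum)
```

    On the toy quadratic baseline the second derivative is constant in the interior, illustrating how this filter suppresses smooth baselines while retaining curvature features for the classifier.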

  20. Performance Evaluation of Lattice-Boltzmann Magnetohydrodynamics Simulations on Modern Parallel Vector Systems

    SciTech Connect

    Carter, Jonathan; Oliker, Leonid

    2006-01-09

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors used to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on such platforms has become a major concern in high performance computing. The latest generation of custom-built parallel vector systems has the potential to address this concern for numerical algorithms with sufficient regularity in their computational structure. In this work, we explore two- and three-dimensional implementations of a lattice-Boltzmann magnetohydrodynamics (MHD) physics application on some of today's most powerful supercomputing platforms. Results compare performance between the vector-based Cray X1, Earth Simulator, and newly released NEC SX-8, and the commodity-based superscalar platforms of the IBM Power3, Intel Itanium2, and AMD Opteron. Overall results show that the SX-8 attains unprecedented aggregate performance across our evaluated applications.

  1. Analysis of complex network performance and heuristic node removal strategies

    NASA Astrophysics Data System (ADS)

    Jahanpour, Ehsan; Chen, Xin

    2013-12-01

    Removing important nodes from complex networks is a great challenge in fighting against criminal organizations and preventing disease outbreaks. Six network performance metrics, including four new metrics, are applied to quantify networks' diffusion speed, diffusion scale, homogeneity, and diameter. In order to efficiently identify nodes whose removal maximally destroys a network, i.e., minimizes network performance, ten structured heuristic node removal strategies are designed using different node centrality metrics, including degree, betweenness, reciprocal closeness, complement-derived closeness, and eigenvector centrality. These strategies are applied to remove nodes from the September 11, 2001 hijackers' network, and their performance is compared to that of a random strategy, which removes randomly selected nodes, and the locally optimal solution (LOS), which removes nodes to minimize network performance at each step. The computational complexity of the 11 strategies and LOS is also analyzed. Results show that the node removal strategies using degree and betweenness centralities are more efficient than the other strategies.
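    The degree-centrality heuristic the abstract finds efficient can be sketched on a toy hub-and-spoke graph (assumed structure, not the hijackers' network data): remove the highest-degree node and measure the largest connected component before and after.

```python
from collections import deque

# Hedged sketch: degree-based node removal on a toy two-hub graph.
# Node 0 links to nodes 1-5 and to hub 6; hub 6 links to nodes 7-9.
edges = [(0, i) for i in range(1, 6)] + [(6, i) for i in range(7, 10)] + [(0, 6)]

def largest_component(edges, removed=frozenset()):
    adj = {}
    for a, b in edges:
        if a in removed or b in removed:
            continue
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, best = set(), 0
    for start in adj:                      # BFS from each unseen node
        if start in seen:
            continue
        q, comp = deque([start]), 0
        seen.add(start)
        while q:
            u = q.popleft(); comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v); q.append(v)
        best = max(best, comp)
    return best

degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
hub = max(degree, key=degree.get)          # degree-centrality target
before = largest_component(edges)
after = largest_component(edges, {hub})
```

    Deleting the single highest-degree node shatters the giant component, which is the "maximally destroys the network" effect the heuristic strategies are designed to approximate cheaply.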

  2. Improving the performance of physiologic hot flash measures with support vector machines.

    PubMed

    Thurston, Rebecca C; Matthews, Karen A; Hernandez, Javier; De La Torre, Fernando

    2009-03-01

    Hot flashes are experienced by over 70% of menopausal women. Criteria to classify hot flashes from physiologic signals show variable performance. The primary aim was to compare conventional criteria to Support Vector Machines (SVMs), an advanced machine learning method, to classify hot flashes from sternal skin conductance. Thirty women with ≥4 hot flashes/day underwent laboratory hot flash testing with skin conductance measurement. Hot flashes were quantified with conventional (≥2 micromho, 30 s) and SVM methods. Conventional methods had poor sensitivity (sensitivity=0.41, specificity=1, positive predictive value (PPV)=0.94, negative predictive value (NPV)=0.85) in classifying hot flashes, with poorest performance among women with high body mass index or anxiety. SVM models showed improved performance (sensitivity=0.89, specificity=0.96, PPV=0.85, NPV=0.96). SVM may improve the performance of skin conductance measures of hot flashes. PMID:19170952
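
    The four figures reported for each classifier come from a standard confusion-matrix computation, sketched below. The counts are hypothetical, chosen only to illustrate the arithmetic; the abstract does not give the study's raw confusion matrix.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix
    counts -- the four metrics the abstract reports for both the
    conventional criteria and the SVM models."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for illustration only (not the study's data)
m = diagnostic_metrics(tp=89, fp=4, tn=96, fn=11)
print(round(m["sensitivity"], 2), round(m["specificity"], 2))  # 0.89 0.96
```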

  3. Wireless Local Area Network Performance Inside Aircraft Passenger Cabins

    NASA Technical Reports Server (NTRS)

    Whetten, Frank L.; Soroker, Andrew; Whetten, Dennis A.; Whetten, Frank L.; Beggs, John H.

    2005-01-01

    An examination of IEEE 802.11 wireless network performance within an aircraft fuselage is performed. This examination measured the propagated RF power along the length of the fuselage, and the associated network performance: the link speed, total throughput, and packet losses and errors. A total of four airplanes, one single-aisle and three twin-aisle, were tested with 802.11a, 802.11b, and 802.11g networks.

  4. Multiple-error-correcting codes for improving the performance of optical matrix-vector processors.

    PubMed

    Neifeld, M A

    1995-04-01

    I examine the use of Reed-Solomon multiple-error-correcting codes for enhancing the performance of optical matrix-vector processors. An optimal code rate of 0.75 is found, and n = 127 block-length codes are seen to increase the optical matrix dimension achievable by a factor of 2.0 for a required system bit-error rate of 10^-15. The optimal codes required for various matrix dimensions are determined. I show that single code word implementations are more efficient than those utilizing multiple code words. PMID:19859320
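
    The relationship between block length, code rate, and correctable errors can be sketched with the standard Reed-Solomon bound t = (n - k)/2. This is a parameter computation only, not a model of the optical processor itself.

```python
def rs_parameters(n, rate):
    """Given block length n and a target code rate, return the message
    length k and the number t of symbol errors a Reed-Solomon code can
    correct, using the standard bound t = (n - k) // 2."""
    k = round(n * rate)
    t = (n - k) // 2
    return k, t

# The n = 127 block length and 0.75 optimal rate reported in the abstract
k, t = rs_parameters(127, 0.75)
print(k, t)  # 95 16
```

With these parameters a single code word carries 95 data symbols and tolerates up to 16 corrupted symbols per block.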

  5. Performance and human factors results from thrust vectoring investigations in simulated air combat

    NASA Technical Reports Server (NTRS)

    Pennington, J. E.; Meintel, A. J., Jr.

    1980-01-01

    In support of research related to advanced fighter technology, the Langley Differential Maneuvering Simulator (DMS) has been used to investigate the effects of advanced aerodynamic concepts, parametric changes in performance parameters, and advanced flight control systems on the combat capability of fighter airplanes. At least five studies were related to thrust vectoring and/or inflight thrust reversing. The aircraft simulated ranged from F-4 class to F-15 class, and included the AV-8 Harrier. This paper presents an overview of these studies including the assumptions involved, trends of results, and human factors considerations that were found.

  6. Static internal performance of a two-dimensional convergent nozzle with thrust-vectoring capability up to 60 deg

    NASA Technical Reports Server (NTRS)

    Leavitt, L. D.

    1985-01-01

    An investigation was conducted at wind-off conditions in the static-test facility of the Langley 16-Foot Transonic Tunnel to determine the internal performance characteristics of a two-dimensional convergent nozzle with a thrust-vectoring capability up to 60 deg. Vectoring was accomplished by a downward rotation of a hinged upper convergent flap and a corresponding rotation of a center-pivoted lower convergent flap. The effects of geometric thrust-vector angle and upper-rotating-flap geometry on internal nozzle performance characteristics were investigated. Nozzle pressure ratio was varied from 1.0 (jet off) to approximately 5.0.

  7. Performance characteristics of a one-third-scale, vectorable ventral nozzle for SSTOVL aircraft

    NASA Technical Reports Server (NTRS)

    Esker, Barbara S.; Mcardle, Jack G.

    1990-01-01

    Several proposed configurations for supersonic short takeoff, vertical landing aircraft will require one or more ventral nozzles for lift and pitch control. The swivel nozzle is one possible ventral nozzle configuration. A swivel nozzle (approximately one-third scale) was built and tested on a generic model tailpipe. This nozzle was capable of vectoring the flow up to ±23 deg from the vertical position. Steady-state performance data were obtained at pressure ratios to 4.5, and pitot-pressure surveys of the nozzle exit plane were made. Two configurations were tested: the swivel nozzle with a square contour of the leading edge of the ventral duct inlet, and the same nozzle with a round leading edge contour. The swivel nozzle showed good performance overall, and the round-leading edge configuration showed an improvement in performance over the square-leading edge configuration.

  8. A novel application classification and its impact on network performance

    NASA Astrophysics Data System (ADS)

    Zhang, Shuo; Huang, Ning; Sun, Xiaolei; Zhang, Yue

    2016-07-01

    Network traffic is believed to have a significant impact on network performance and is the result of the application operation on networks. The majority of current network performance analyses are based on the premise that traffic is transmitted along the shortest path, which is too simple to reflect a real traffic process. The real traffic process is related to the network application process characteristics, involving realistic user behavior. In this paper, applications are first divided into three categories according to realistic application process characteristics: random applications, customized applications and routine applications. Then, numerical simulations are carried out to analyze the effect of different applications on network performance. The main results show that (i) network efficiency for the BA scale-free network is less than for the ER random network when a similar single application is loaded on the network; (ii) customized applications have the greatest effect on network efficiency when mixed multiple applications are loaded on a BA network.
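
    Assuming the abstract's "network efficiency" is the usual global efficiency (the average inverse shortest-path length over all node pairs), it can be computed with plain breadth-first search. The star graph below is a small worked example, not one of the paper's BA/ER topologies.

```python
from collections import deque

def global_efficiency(adj):
    """Global efficiency E = (1 / (N(N-1))) * sum over ordered pairs
    u != v of 1 / d(u, v); unreachable pairs contribute 0."""
    nodes = list(adj)
    total = 0.0
    for src in nodes:
        dist = {src: 0}          # BFS shortest hop counts from src
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1 / d for n, d in dist.items() if n != src)
    n = len(nodes)
    return total / (n * (n - 1))

# Star graph: hub 0 linked to 1..4; 8 ordered pairs at distance 1,
# 12 at distance 2, so E = (8*1 + 12*0.5) / 20 = 0.7
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(global_efficiency(star))  # 0.7
```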

  9. Improving matrix-vector product performance and multi-level preconditioning for the parallel PCG package

    SciTech Connect

    McLay, R.T.; Carey, G.F.

    1996-12-31

    In this study we consider parallel solution of sparse linear systems arising from discretized PDEs. As part of our continuing work on our parallel PCG Solver package, we have made improvements in two areas. The first is improving the performance of the matrix-vector product. Here on regular finite-difference grids, we are able to use the cache memory more efficiently for smaller domains or where there are multiple degrees of freedom. The second problem of interest in the present work is the construction of preconditioners in the context of the parallel PCG solver we are developing. Here the problem is partitioned over a set of processor subdomains and the matrix-vector product for PCG is carried out in parallel for overlapping grid subblocks. For problems of scaled speedup, the actual rate of convergence of the unpreconditioned system deteriorates as the mesh is refined. Multigrid and subdomain strategies provide a logical approach to resolving the problem. We consider the parallel trade-offs between communication and computation and provide a complexity analysis of a representative algorithm. Some preliminary calculations using the parallel package and comparisons with other preconditioners are provided together with parallel performance results.
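
    A minimal, unpreconditioned conjugate-gradient sketch shows where the matrix-vector product sits inside each PCG iteration; it is the dominant kernel the paper optimizes. The dense 3x3 toy system below stands in for the paper's large sparse finite-difference matrices.

```python
def matvec(A, x):
    """The kernel optimized in the paper: y = A x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Unpreconditioned CG for a symmetric positive-definite system;
    one matvec per iteration, plus a handful of vector updates."""
    x = [0.0] * len(b)
    r = b[:]                 # residual r = b - A*0
    p = r[:]                 # initial search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Small SPD system (1D Poisson-like stencil); exact solution is [1, 1, 1]
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = conjugate_gradient(A, b)
print([round(v, 6) for v in x])  # [1.0, 1.0, 1.0]
```

A preconditioned variant would replace `r` with `M_inv(r)` in the direction updates; the paper's contribution is how to build and apply that `M` across processor subdomains.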

  10. Manipulation of Host Quality and Defense by a Plant Virus Improves Performance of Whitefly Vectors.

    PubMed

    Su, Qi; Preisser, Evan L; Zhou, Xiao Mao; Xie, Wen; Liu, Bai Ming; Wang, Shao Li; Wu, Qing Jun; Zhang, You Jun

    2015-02-01

    Pathogen-mediated interactions between insect vectors and their host plants can affect herbivore fitness and the epidemiology of plant diseases. While the role of plant quality and defense in mediating these tripartite interactions has been recognized, there are many ecologically and economically important cases where the nature of the interaction has yet to be characterized. The Bemisia tabaci (Gennadius) cryptic species Mediterranean (MED) is an important vector of tomato yellow leaf curl virus (TYLCV), and performs better on virus-infected tomato than on uninfected controls. We assessed the impact of TYLCV infection on plant quality and defense, and the direct impact of TYLCV infection on MED feeding. We found that although TYLCV infection has a minimal direct impact on MED, the virus alters the nutritional content of leaf tissue and phloem sap in a manner beneficial to MED. TYLCV infection also suppresses herbivore-induced production of plant defensive enzymes and callose deposition. The strongly positive net effect of TYLCV on MED is consistent with previously reported patterns of whitefly behavior and performance, and provides a foundation for further exploration of the molecular mechanisms responsible for these effects and the evolutionary processes that shape them. PMID:26470098

  11. Network interface unit design options performance analysis

    NASA Technical Reports Server (NTRS)

    Miller, Frank W.

    1991-01-01

    An analysis is presented of three design options for the Space Station Freedom (SSF) onboard Data Management System (DMS) Network Interface Unit (NIU). The NIU provides the interface from the Fiber Distributed Data Interface (FDDI) local area network (LAN) to the DMS processing elements. The FDDI LAN provides the primary means for command and control and low and medium rate telemetry data transfers on board the SSF. The results of this analysis provide the basis for the implementation of the NIU.

  12. On the MAC/network/energy performance evaluation of Wireless Sensor Networks: Contrasting MPH, AODV, DSR and ZTR routing protocols.

    PubMed

    Del-Valle-Soto, Carolina; Mex-Perera, Carlos; Orozco-Lugo, Aldo; Lara, Mauricio; Galván-Tejada, Giselle M; Olmedo, Oscar

    2014-01-01

    Wireless Sensor Networks deliver valuable information for long periods, so it is desirable to have optimum performance, reduced delays, low overhead, and reliable delivery of information. In this work, proposed metrics that influence energy consumption are used for a performance comparison among our proposed routing protocol, called Multi-Parent Hierarchical (MPH), and the well-known protocols for sensor networks, Ad hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR), and Zigbee Tree Routing (ZTR), all of them working with the IEEE 802.15.4 MAC layer. Results show how some communication metrics affect performance, throughput, reliability and energy consumption. It can be concluded that MPH is an efficient protocol, since it reaches the best performance against the other three protocols under evaluation, with a 19.3% reduction of packet retransmissions, a 26.9% decrease of overhead, and a 41.2% improvement in the capacity of the protocol for recovering the topology from failures with respect to the AODV protocol. We implemented and tested MPH in a real network of 99 nodes during ten days and analyzed parameters such as number of hops, connectivity and delay, in order to validate our simulator and obtain reliable results. Moreover, an energy model of the CC2530 chip is proposed and used for simulations of the four aforementioned protocols, showing that MPH has a 15.9% reduction of energy consumption with respect to AODV, 13.7% versus DSR, and 5% against ZTR. PMID:25474377

  13. On the MAC/Network/Energy Performance Evaluation of Wireless Sensor Networks: Contrasting MPH, AODV, DSR and ZTR Routing Protocols

    PubMed Central

    Del-Valle-Soto, Carolina; Mex-Perera, Carlos; Orozco-Lugo, Aldo; Lara, Mauricio; Galván-Tejada, Giselle M.; Olmedo, Oscar

    2014-01-01

    Wireless Sensor Networks deliver valuable information for long periods, so it is desirable to have optimum performance, reduced delays, low overhead, and reliable delivery of information. In this work, proposed metrics that influence energy consumption are used for a performance comparison among our proposed routing protocol, called Multi-Parent Hierarchical (MPH), and the well-known protocols for sensor networks, Ad hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR), and Zigbee Tree Routing (ZTR), all of them working with the IEEE 802.15.4 MAC layer. Results show how some communication metrics affect performance, throughput, reliability and energy consumption. It can be concluded that MPH is an efficient protocol, since it reaches the best performance against the other three protocols under evaluation, with a 19.3% reduction of packet retransmissions, a 26.9% decrease of overhead, and a 41.2% improvement in the capacity of the protocol for recovering the topology from failures with respect to the AODV protocol. We implemented and tested MPH in a real network of 99 nodes during ten days and analyzed parameters such as number of hops, connectivity and delay, in order to validate our simulator and obtain reliable results. Moreover, an energy model of the CC2530 chip is proposed and used for simulations of the four aforementioned protocols, showing that MPH has a 15.9% reduction of energy consumption with respect to AODV, 13.7% versus DSR, and 5% against ZTR. PMID:25474377

  14. Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks.

    PubMed

    Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo

    2014-04-21

    Software defined networking (SDN) has become the focus in the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. Optical transport network is also regarded as an important application scenario for SDN, which is adopted as the enabling technology of data communication networks (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN based DCN for large scale optical networks, which is very important for the technology selection in the future optical network deployment, has not been evaluated up to now. In this paper we have built a large scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time have been demonstrated under various network environments, such as with different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as a proof point for future network deployment. PMID:24787842
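
    The abstract does not state how blocking probability was modeled, so as a hedged illustration only: the classical Erlang-B formula is one standard way to estimate blocking when lightpath requests compete for a fixed number of channels, and its stable recurrence is a few lines of code.

```python
def erlang_b(traffic, servers):
    """Erlang-B blocking probability via the numerically stable
    recurrence B(0) = 1, B(m) = a*B(m-1) / (m + a*B(m-1)),
    where a is the offered traffic in Erlangs."""
    b = 1.0
    for m in range(1, servers + 1):
        b = traffic * b / (m + traffic * b)
    return b

# Illustrative numbers (not from the testbed): 10 Erlangs of lightpath
# requests offered to 12 wavelength channels on a link
blocking = erlang_b(10.0, 12)
print(round(blocking, 4))
```

Real flexi-grid blocking depends on spectrum contiguity and routing, so measured values like those in the paper can differ substantially from this single-link model.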

  15. Using high-performance networks to enable computational aerosciences applications

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1992-01-01

    One component of the U.S. Federal High Performance Computing and Communications Program (HPCCP) is the establishment of a gigabit network to provide a communications infrastructure for researchers across the nation. This gigabit network will provide new services and capabilities, in addition to increased bandwidth, to enable future applications. An understanding of these applications is necessary to guide the development of the gigabit network and other high-performance networks of the future. In this paper we focus on computational aerosciences applications run remotely using the Numerical Aerodynamic Simulation (NAS) facility located at NASA Ames Research Center. We characterize these applications in terms of network-related parameters and relate user experiences that reveal limitations imposed by the current wide-area networking infrastructure. Then we investigate how the development of a nationwide gigabit network would enable users of the NAS facility to work in new, more productive ways.

  16. Performance Evaluation of Plasma and Astrophysics Applications on Modern Parallel Vector Systems

    SciTech Connect

    Carter, Jonathan; Oliker, Leonid; Shalf, John

    2005-10-28

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on such platforms has become a major concern in high-performance computing. The latest generation of custom-built parallel vector systems has the potential to address this concern for numerical algorithms with sufficient regularity in their computational structure. In this work, we explore two- and three-dimensional implementations of a plasma physics application, as well as a leading astrophysics package, on some of today's most powerful supercomputing platforms. Results compare performance between the vector-based Cray X1, Earth Simulator, and newly released NEC SX-8, and the commodity-based superscalar platforms of the IBM Power3, Intel Itanium2, and AMD Opteron. Overall results show that the SX-8 attains unprecedented aggregate performance across our evaluated applications.

  17. Support Vector Machine and Artificial Neural Network Models for the Classification of Grapevine Varieties Using a Portable NIR Spectrophotometer

    PubMed Central

    Gutiérrez, Salvador; Tardaguila, Javier; Fernández-Novales, Juan; Diago, María P.

    2015-01-01

    The identification of different grapevine varieties, currently addressed using visual ampelometry, DNA analysis and, very recently, hyperspectral analysis under laboratory conditions, is an issue of great importance in the wine industry. This work presents support vector machine and artificial neural network modelling for grapevine varietal classification from in-field leaf spectroscopy. Modelling was attempted at two scales: site-specific and global. Spectral measurements were obtained in the near-infrared (NIR) spectral range between 1600 and 2400 nm under field conditions in a non-destructive way using a portable spectrophotometer. For the site-specific approach, spectra were collected from the adaxial side of 400 individual leaves of 20 grapevine (Vitis vinifera L.) varieties one week after veraison. For the global model, two additional sets of spectra were collected one week before harvest from two different vineyards in another vintage, each consisting of 48 measurements from individual leaves of six varieties. Several combinations of spectral scatter correction and smoothing filtering were studied. For the training of the models, support vector machines and artificial neural networks were employed, using the pre-processed spectra as input and the varieties as the classes of the models. The results from the pre-processing study showed that scatter correction had no influence on performance, and that second-degree-derivative Savitzky-Golay filtering with a window size of 5 yielded the best outcomes. For the site-specific model, with 20 classes, the best classifiers yielded an overall score of 87.25% of correctly classified samples. These results were compared under the same conditions with a model trained using partial least squares discriminant analysis, which showed a worse performance in every case. For the global model, a 6-class dataset involving samples from three different vineyards, two years and leaves
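
    The second-derivative Savitzky-Golay pre-processing step can be sketched with the classic closed-form five-point weights (2, -1, -2, -1, 2)/7 for a quadratic fit. A real pipeline would use a library routine such as SciPy's `savgol_filter`; the spectra here are replaced by a toy quadratic, on which the second-derivative estimate is exactly constant.

```python
def savgol_second_derivative(y):
    """Savitzky-Golay second derivative, window length 5, quadratic fit.
    The closed-form convolution weights for this case are
    (2, -1, -2, -1, 2) / 7; the two samples at each end are left
    untouched for simplicity."""
    w = (2, -1, -2, -1, 2)
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(c * y[i + j - 2] for j, c in enumerate(w)) / 7.0
    return out

# On a pure quadratic y = x**2 the second derivative is constant (2),
# and the quadratic SG fit recovers it exactly at interior points
y = [float(x * x) for x in range(8)]
d2 = savgol_second_derivative(y)
print(d2[2:6])  # [2.0, 2.0, 2.0, 2.0]
```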

  18. Building and measuring a high performance network architecture

    SciTech Connect

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck; Dugan, Jon; Wheeler, David; Wing, William R; Nickless, William; Goddard, Gregory; Corbato, Steven; Love, E. Paul; Daspit, Paul; Edwards, Hal; Mercer, Linden; Koester, David; Decina, Basil; Dart, Eli; Paul Reisinger, Paul; Kurihara, Riki; Zekauskas, Matthew J; Plesset, Eric; Wulf, Julie; Luce, Douglas; Rogers, James; Duncan, Rex; Mauth, Jeffery

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest performance networks in the world. At SC2000, large-scale and complex local and wide area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. This testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high performance networking technologies and to accumulate measurements that will give insights into the networks of the future.

  19. Performance Statistics of the DWD Ceilometer Network

    NASA Astrophysics Data System (ADS)

    Wagner, Frank; Mattis, Ina; Flentje, Harald; Thomas, Werner

    2015-04-01

    The DWD ceilometer network was created in 2008. In the following years, more and more ceilometers of type CHM15k (manufacturer Jenoptik) were installed with the aim of observing atmospheric aerosol particles. Now, 58 ceilometers are in continuous operation. The presentation addresses, on the one hand, the statistical behavior of several instrumental parameters related to measurement performance; some problems are discussed, along with recommendations on which parameters should be monitored for unattended automated operation. On the other hand, the presentation gives a statistical analysis of several measured quantities. Differences between geographic locations (e.g. north versus south, mountainous versus flat terrain) are investigated. For instance, the occurrence of fog in lowlands is associated with the overall meteorological situation, whereas mountain stations such as Hohenpeissenberg are often within a cumulus cloud, which appears as fog in the measurements. The longest time series of data were acquired at Lindenberg, where the ceilometer was installed in 2008. By the end of 2008 the number of installed ceilometers had increased to 28, and by the end of 2009 already 42 instruments were measuring. In 2011 the ceilometers were upgraded to the so-called Nimbus instruments, which have enhanced capabilities for coping with and correcting short-term instrumental fluctuations (e.g. detector sensitivity). About 30% of all ceilometer measurements were made under clear skies and hence can be used without limitations for aerosol particle observations. Multiple cloud layers could be detected in only about 23% of all cases with clouds. This is caused either by the presence of only one cloud layer or by the fact that the ceilometer laser beam could not see through the lowest cloud and hence was blind to further cloud layers. Three cloud layers could be detected in only 5% of all cases with clouds. 
Considering only cases without clouds the diurnal cycle for

  20. Performance Enhancement for a GPS Vector-Tracking Loop Utilizing an Adaptive Iterated Extended Kalman Filter

    PubMed Central

    Chen, Xiyuan; Wang, Xiying; Xu, Yuan

    2014-01-01

    This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system that has model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error, and based on this, a modified iterated extended Kalman filter (IEKF) named the adaptive iterated extended Kalman filter (AIEKF) is proposed. A vector-tracking GPS receiver utilizing the AIEKF is implemented to evaluate the performance of the proposed method. Through road tests, it is shown that the proposed method has an obvious accuracy advantage over the IEKF and the adaptive extended Kalman filter (AEKF) in position determination. The results show that the proposed method is effective in reducing the root-mean-square error (RMSE) of position (including longitude, latitude and altitude). Compared with the EKF, the position RMSE values of the AIEKF are reduced by about 45.1%, 40.9% and 54.6% in the east, north and up directions, respectively. Compared with the IEKF, the position RMSE values of the AIEKF are reduced by about 25.7%, 19.3% and 35.7% in the east, north and up directions, respectively. Compared with the AEKF, the position RMSE values of the AIEKF are reduced by about 21.6%, 15.5% and 30.7% in the east, north and up directions, respectively. PMID:25502124
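
    The percentage reductions quoted are plain RMSE comparisons between filters. The sketch below shows the computation on short hypothetical per-epoch error sequences; the actual road-test data are not given in the abstract.

```python
import math

def rmse(errors):
    """Root-mean-square error of a sequence of position errors."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def rmse_reduction(baseline, improved):
    """Percentage reduction in RMSE, the figure of merit the abstract
    reports when comparing the AIEKF against the EKF/IEKF/AEKF."""
    return 100.0 * (baseline - improved) / baseline

# Hypothetical per-epoch east-position errors in metres (illustration only)
ekf_err = [2.0, -1.5, 1.0, -2.5]
aiekf_err = [1.0, -0.8, 0.6, -1.2]
reduction = rmse_reduction(rmse(ekf_err), rmse(aiekf_err))
print(round(reduction, 1))
```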

  1. Urban Heat Island Growth Modeling Using Artificial Neural Networks and Support Vector Regression: A case study of Tehran, Iran

    NASA Astrophysics Data System (ADS)

    Sherafati, Sh. A.; Saradjian, M. R.; Niazmardi, S.

    2013-09-01

    Numerous investigations on the Urban Heat Island (UHI) show that land cover change is the main factor increasing Land Surface Temperature (LST) in urban areas. Therefore, to achieve a model which is able to simulate UHI growth, urban expansion should be considered first. Considerable research on urban expansion modeling has been done based on cellular automata (CA). Accordingly, the objective of this paper is to implement a CA method for trend detection of Tehran UHI spatiotemporal growth based on urban sprawl parameters (such as distance to nearest road, Digital Elevation Model (DEM), slope and aspect ratios). It should be mentioned that UHI growth modeling may have more complexities in comparison with urban expansion, since the amount of each pixel's temperature should be investigated instead of its state (urban and non-urban areas). The most challenging part of a CA model is the definition of transfer rules. Here, two methods have been used to find appropriate transfer rules: Artificial Neural Networks (ANN) and Support Vector Regression (SVR). The reason for choosing these approaches is that artificial neural networks and support vector regression have significant abilities to handle the complications of such a spatial analysis in comparison with other methods like genetic or swarm intelligence. In this paper, the UHI change trend is discussed between 1984 and 2007. For this purpose, urban sprawl parameters in 1984 were calculated and added to the retrieved LST of this year. In order to obtain LST, Thematic Mapper (TM) and Enhanced Thematic Mapper (ETM+) night-time images were exploited. The reason for using night-time images is that the UHI phenomenon is more obvious during night hours. After that, multilayer feed-forward neural networks and support vector regression were used separately to find the relationship between these data and the retrieved LST in 2007. Since the transfer rules might not be the same in different regions, the satellite image of the city has

  2. Topology design and performance analysis of an integrated communication network

    NASA Technical Reports Server (NTRS)

    Li, V. O. K.; Lam, Y. F.; Hou, T. C.; Yuen, J. H.

    1985-01-01

    A research study on the topology design and performance analysis for the Space Station Information System (SSIS) network is conducted. It begins with a survey of existing research efforts in network topology design. Then a new approach for topology design is presented. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. The algorithm for generating subsets is described in detail, and various aspects of the overall design procedure are discussed. Two more efficient versions of this algorithm (applicable in specific situations) are also given. Next, two important aspects of network performance analysis, network reliability and message delays, are discussed. A new model is introduced to study the reliability of a network with dependent failures. For message delays, a collection of formulas from existing research results is given to compute or estimate the delays of messages in a communication network without making the independence assumption. The design algorithm coded in PASCAL is included as an appendix.
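
    The core idea, enumerating component subsets in increasing order of total cost until the first feasible design is found, can be sketched with a lazy heap-based generator. This is an illustrative reconstruction of the technique, not the paper's PASCAL implementation; component costs below are made up.

```python
import heapq

def subsets_by_cost(costs):
    """Lazily yield (total_cost, component_indices) for every non-empty
    subset, in nondecreasing order of total cost.  Each popped subset
    spawns at most two successors: one extending it with the next
    cheapest component, one replacing its last component with that
    component.  The caller checks feasibility and stops at the first
    acceptable design, which is then cost-optimal."""
    order = sorted(range(len(costs)), key=lambda i: costs[i])
    c = [costs[i] for i in order]
    heap = [(c[0], (0,))]
    while heap:
        cost, subset = heapq.heappop(heap)
        yield cost, tuple(order[i] for i in subset)
        last = subset[-1]
        if last + 1 < len(c):
            # extend with the next cheapest remaining component
            heapq.heappush(heap, (cost + c[last + 1], subset + (last + 1,)))
            # replace the last component with the next cheapest one
            heapq.heappush(heap, (cost - c[last] + c[last + 1],
                                  subset[:-1] + (last + 1,)))

# Hypothetical component costs; subset totals come out 1, 3, 4, 4, 5, 7, 8
costs = [4.0, 1.0, 3.0]
gen = subsets_by_cost(costs)
first = [next(gen) for _ in range(4)]
print([cost for cost, _ in first])  # [1.0, 3.0, 4.0, 4.0]
```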

  3. Vector performance analysis of three supercomputers - Cray-2, Cray Y-MP, and ETA10-Q

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod A.

    1989-01-01

    Results are presented of a series of experiments to study the single-processor performance of three supercomputers: Cray-2, Cray Y-MP, and ETA10-Q. The main objective of this study is to determine the impact of certain architectural features on the performance of modern supercomputers. Features such as clock period, memory links, memory organization, multiple functional units, and chaining are considered. A simple performance model is used to examine the impact of these features on the performance of a set of basic operations. The results of implementing this set on these machines for three vector lengths and three memory strides are presented and compared. For unit stride operations, the Cray Y-MP outperformed the Cray-2 by as much as three times and the ETA10-Q by as much as four times for these operations. Moreover, unlike the Cray-2 and ETA10-Q, even-numbered strides do not cause a major performance degradation on the Cray Y-MP. Two numerical algorithms are also used for comparison. For three problem sizes of both algorithms, the Cray Y-MP outperformed the Cray-2 by 43 percent to 68 percent and the ETA10-Q by four to eight times.
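
    The even-stride degradation mentioned here is conventionally explained by memory-bank conflicts on interleaved memories: a stride sharing a factor with the bank count concentrates accesses on few banks. The sketch below counts how many banks a strided sweep touches under a simplified interleaving model; it is not a timing of the actual machines, and the bank count is illustrative.

```python
from math import gcd

def banks_touched(num_banks, stride, accesses):
    """Under simple interleaving, element i of a stride-s sweep lands in
    bank (i * s) % num_banks, so at most num_banks // gcd(num_banks, s)
    distinct banks are ever used; fewer banks means more serialized,
    conflict-limited accesses."""
    return len({(i * stride) % num_banks for i in range(accesses)})

# Illustrative 16-bank memory: unit stride spreads over all banks,
# an even stride of 8 collapses onto just 2 of them
print(banks_touched(16, 1, 64), banks_touched(16, 8, 64))  # 16 2
```

An odd stride (coprime to the bank count) still touches every bank, which is consistent with even strides being singled out in the abstract.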

  4. Network based high performance concurrent computing

    SciTech Connect

    Sunderam, V.S.

    1991-01-01

    The overall objectives of this project are to investigate research issues pertaining to programming tools and efficiency issues in network based concurrent computing systems. The basis for these efforts is the PVM project that evolved during my visits to Oak Ridge Laboratories under the DOE Faculty Research Participation program; I continue to collaborate with researchers at Oak Ridge on some portions of the project.

  5. Comparison of Two Machine Learning Regression Approaches (Multivariate Relevance Vector Machine and Artificial Neural Network) Coupled with Wavelet Decomposition to Forecast Monthly Streamflow in Peru

    NASA Astrophysics Data System (ADS)

    Ticlavilca, A. M.; Maslova, I.; McKee, M.

    2011-12-01

    This research presents a modeling approach that incorporates wavelet-based analysis techniques used in statistical signal processing and multivariate machine learning regression to forecast monthly streamflow in Peru. Two machine learning regression approaches, Multivariate Relevance Vector Machine and Artificial Neural Network, are compared in terms of performance and robustness. The inputs of the model utilize information of streamflow and Pacific sea surface temperature (SST). The monthly Pacific SST data (from 1950 to 2010) are obtained from the NOAA Climate Prediction Center website. The inputs are decomposed into meaningful components formulated in terms of wavelet multiresolution analysis (MRA). The outputs are the forecasts of streamflow two, four and six months ahead simultaneously. The proposed hybrid modeling approach of wavelet decomposition and machine learning regression can capture sufficient information at meaningful temporal scales to improve the performance of the streamflow forecasts in Peru. A bootstrap analysis is used to explore the robustness of the hybrid modeling approaches.
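
    A wavelet multiresolution decomposition of the kind described can be sketched with the simplest (Haar) wavelet: each level splits the series into a smooth approximation and a detail component, and the components become regression inputs. The study would use library routines and a proper wavelet family, so treat this as an illustrative stand-in on a made-up series.

```python
import math

def haar_step(x):
    """One level of an orthonormal Haar split: scaled pairwise sums give
    the approximation, scaled pairwise differences give the detail."""
    s = 1 / math.sqrt(2)
    approx = [s * (a + b) for a, b in zip(x[0::2], x[1::2])]
    detail = [s * (a - b) for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_mra(x, levels):
    """Decompose a series into [detail_1, ..., detail_L, approx_L];
    in the paper's setup each such component (of streamflow and SST
    inputs) would feed the MVRVM or ANN regression."""
    coeffs = []
    for _ in range(levels):
        x, d = haar_step(x)
        coeffs.append(d)
    coeffs.append(x)
    return coeffs

# Toy monthly-streamflow-like series of length 8, two decomposition levels
series = [5.0, 7.0, 6.0, 8.0, 9.0, 5.0, 4.0, 6.0]
parts = haar_mra(series, 2)
print([len(p) for p in parts])  # [4, 2, 2]
```

Because the split is orthonormal, the decomposition preserves the total energy of the series, so no information is lost before the regression stage.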

  6. A parallel-vector algorithm for rapid structural analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.

    1990-01-01

    A fast, accurate Choleski method for the solution of symmetric systems of linear equations is presented. This direct method is based on a variable-band storage scheme and takes advantage of column heights to reduce the number of operations in the Choleski factorization. The method employs parallel computation in the outermost DO-loop and vector computation via the loop-unrolling technique in the innermost DO-loop. The method avoids computations with zeros outside the column heights and, as an option, zeros inside the band. The close relationship between the Choleski and Gauss elimination methods is examined. The minor changes required to convert the Choleski code to a Gauss code to solve non-positive-definite symmetric systems of equations are identified. The results for two large-scale structural analyses performed on supercomputers demonstrate the accuracy and speed of the method.
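    A variable-band (skyline) Choleski factorization of the kind described can be sketched as follows. The column-height bookkeeping and dense storage here are illustrative simplifications, not the paper's parallel-vector implementation:

```python
import numpy as np

def skyline_cholesky(A):
    """Choleski factorization L @ L.T = A that skips computation above
    each column's "height" (first nonzero row), mimicking a variable-band
    storage scheme. Fill-in stays within the profile, so entries above the
    column heights are never touched."""
    n = A.shape[0]
    # height[j]: row index of the first nonzero entry in column j
    height = [next(i for i in range(n) if A[i, j] != 0 or i == j)
              for j in range(n)]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        for i in range(j, n):
            # the inner product only spans the overlap of the two profiles
            lo = max(height[i], height[j])
            s = A[i, j] - L[i, lo:j] @ L[j, lo:j]
            L[i, j] = np.sqrt(s) if i == j else s / L[j, j]
    return L
```

    For a banded matrix, the `lo` bound is what avoids the multiplications with zeros that a naive dense factorization would perform.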

  7. End-to-end network/application performance troubleshooting methodology

    SciTech Connect

    Wu, Wenji; Bobyshev, Andrey; Bowden, Mark; Crawford, Matt; Demar, Phil; Grigaliunas, Vyto; Grigoriev, Maxim; Petravick, Don; /Fermilab

    2007-09-01

    The computing models for HEP experiments are globally distributed and grid-based. Obstacles to good network performance arise from many causes and can be a major impediment to the success of the computing models for HEP experiments. Factors that affect overall network/application performance exist on the hosts themselves (application software, operating system, hardware), in the local area networks that support the end systems, and within the wide area networks. Since the computer and network systems are globally distributed, it can be very difficult to locate and identify the factors that are hurting application performance. In this paper, we present an end-to-end network/application performance troubleshooting methodology developed and in use at Fermilab. The core of our approach is to narrow down the problem scope with a divide and conquer strategy. The overall complex problem is split into two distinct sub-problems: host diagnosis and tuning, and network path analysis. After satisfactorily evaluating, and if necessary resolving, each sub-problem, we conduct end-to-end performance analysis and diagnosis. The paper will discuss tools we use as part of the methodology. The long term objective of the effort is to enable site administrators and end users to conduct much of the troubleshooting themselves, before (or instead of) calling upon network and operating system 'wizards,' who are always in short supply.

  8. A performance data network for solar process heat systems

    SciTech Connect

    Barker, G.; Hale, M.J.

    1996-03-01

    A solar process heat (SPH) data network has been developed to access remote-site performance data from operational solar heat systems. Each SPH system in the data network is outfitted with monitoring equipment and a datalogger. The datalogger is accessed via modem from the data network computer at the National Renewable Energy Laboratory (NREL). The dataloggers collect both ten-minute and hourly data and download it to the data network every 24 hours for archiving, processing, and plotting. The system data collected includes energy delivered (fluid temperatures and flow rates) and site meteorological conditions, such as solar insolation and ambient temperature. The SPH performance data network was created for collecting performance data from SPH systems that are serving in industrial applications or from systems using technologies that show promise for industrial applications. The network will be used to identify areas of SPH technology needing further development, to correlate computer models with actual performance, and to improve the credibility of SPH technology. The SPH data network also provides a centralized bank of user-friendly performance data that will give prospective SPH users an indication of how actual systems perform. There are currently three systems being monitored and archived under the SPH data network: two are parabolic trough systems and the third is a flat-plate system. The two trough systems both heat water for prisons; the hot water is used for personal hygiene, kitchen operations, and laundry. The flat-plate system heats water for meat processing at a slaughterhouse. We plan to connect another parabolic trough system to the network during the first months of 1996. We continue to look for good examples of systems using other types of collector technologies and systems serving new applications (such as absorption chilling) to include in the SPH performance data network.

  9. Optimal Beamforming and Performance Analysis of Wireless Relay Networks with Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    Ouyang, Jian; Lin, Min

    2015-03-01

    In this paper, we investigate a wireless communication system employing a multi-antenna unmanned aerial vehicle (UAV) as the relay to improve the connectivity between the base station (BS) and the receive node (RN), where the BS-UAV link undergoes the correlated Rician fading while the UAV-RN link follows the correlated Rayleigh fading with large scale path loss. By assuming that the amplify-and-forward (AF) protocol is adopted at UAV, we first propose an optimal beamforming (BF) scheme to maximize the mutual information of the UAV-assisted dual-hop relay network, by calculating the BF weight vectors and the power allocation coefficient. Then, we derive the analytical expressions for the outage probability (OP) and the ergodic capacity (EC) of the relay network to evaluate the system performance conveniently. Finally, computer simulation results are provided to demonstrate the validity and efficiency of the proposed scheme as well as the performance analysis.
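    The per-hop weight calculation can be illustrated with simple maximum-ratio (matched-filter) beamforming weights. This is a simplified stand-in for the paper's optimal BF scheme, which jointly optimizes the weight vectors and a power allocation coefficient over correlated fading channels:

```python
import numpy as np

def mrc_mrt_weights(h1, h2):
    """Receive (MRC) and transmit (MRT) beamforming weights for an
    amplify-and-forward relay with channel vectors h1 (BS->UAV) and
    h2 (UAV->RN). Matched-filter weights maximize each hop's SNR
    under a unit-norm constraint (Cauchy-Schwarz)."""
    wr = h1.conj() / np.linalg.norm(h1)  # combine the received signal
    wt = h2.conj() / np.linalg.norm(h2)  # steer the forwarded signal
    return wr, wt
```

    With these weights the effective hop gains become the channel norms, which is what makes closed-form outage and ergodic-capacity analysis tractable in this class of relay models.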

  10. Performance of a Regional Aeronautical Telecommunications Network

    NASA Technical Reports Server (NTRS)

    Bretmersky, Steven C.; Ripamonti, Claudio; Konangi, Vijay K.; Kerczewski, Robert J.

    2001-01-01

    This paper reports the findings of the simulation of the ATN (Aeronautical Telecommunications Network) for three typical average-sized U.S. airports and their associated air traffic patterns. The models of the protocols were designed to achieve the same functionality and meet the ATN specifications. The focus of this project is on the subnetwork and routing aspects of the simulation. To maintain continuous communication between aircraft and ground facilities, a model based on mobile IP is used. The results indicate that continuous communication is indeed possible. The network can support two applications of significance in the immediate future: FTP and HTTP traffic. Results from this simulation demonstrate the feasibility of developing the ATN concept for AC/ATM (Advanced Communications for Air Traffic Management).

  11. Evaluation of delay performance in valiant load-balancing network

    NASA Astrophysics Data System (ADS)

    Yu, Yingdi; Jin, Yaohui; Cheng, Hong; Gao, Yu; Sun, Weiqiang; Guo, Wei; Hu, Weisheng

    2007-11-01

    Network traffic grows in an unpredictable way, which forces network operators to over-provision their backbone networks to meet increasing demand. Allowing for new users, applications, and unexpected failures, utilization is typically below 30% [1]. There are two methods aimed at solving this problem. The first is to adjust link capacity as traffic varies; in an optical network, however, this requires a rapid signaling scheme and large buffers. The second is to use the statistical multiplexing function of IP routers connected point-to-point by optical links to counteract the effects of traffic variation [2], but the routing mechanism becomes much more complex and introduces more overhead into the backbone network. To exploit the potential of the network and reduce its overhead, Valiant Load-balancing has been proposed for backbone networks to enhance utilization and simplify the routing process. Raising network utilization and improving throughput inevitably influence end-to-end delay, yet the delay behavior of load-balancing has received little study. In the work presented in this paper, we study the delay performance of a Valiant Load-balancing network, isolating the queuing delay for modeling and detailed analysis. We design the architecture of a switch with load-balancing capability for our simulation and experiment, and analyze the relationship between switch architecture and delay performance.
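    The load-spreading idea behind Valiant Load-balancing can be sketched for a full mesh of N nodes. The uniform two-hop splitting below is the textbook scheme referenced as [2], not the switch architecture studied in the paper:

```python
import numpy as np

def vlb_link_loads(demand):
    """Valiant load balancing on a full mesh of N nodes: every flow is
    split evenly over N two-hop paths (one via each intermediate node),
    so link (i, j) carries 1/N of all traffic leaving node i plus 1/N of
    all traffic entering node j, regardless of the traffic matrix."""
    N = demand.shape[0]
    out_tot = demand.sum(axis=1)  # traffic originating at each node
    in_tot = demand.sum(axis=0)   # traffic terminating at each node
    load = (out_tot[:, None] + in_tot[None, :]) / N
    np.fill_diagonal(load, 0.0)
    return load
```

    If every node sources and sinks at most r units of traffic, no link load exceeds 2r/N, which is what lets the backbone be provisioned independently of the (unpredictable) traffic matrix.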

  12. Applying neural networks to optimize instrumentation performance

    SciTech Connect

    Start, S.E.; Peters, G.G.

    1995-06-01

    Well-calibrated instrumentation is essential in providing meaningful information about the status of a plant. Signals from plant instrumentation frequently have inherent non-linearities, may be affected by environmental conditions, and can therefore cause calibration difficulties for the people who maintain them. Two neural network approaches are described in this paper for improving the accuracy of a non-linear, temperature-sensitive level probe used in Experimental Breeder Reactor II (EBR-II) that was difficult to calibrate.

  13. Flood damage assessment performed based on Support Vector Machines combined with Landsat TM imagery and GIS

    NASA Astrophysics Data System (ADS)

    Alouene, Y.; Petropoulos, G. P.; Kalogrias, A.; Papanikolaou, F.

    2012-04-01

    Floods are a water-related natural disaster that threaten many aspects of human life, causing property damage, economic losses, and in some instances even the loss of human lives. Accurate and cost-effective assessment of flood damage is essential to both scientists and policy makers, from mitigation to mapping damage extent to the rehabilitation of affected areas. Remote sensing, often combined with Geographical Information Systems (GIS), has shown very promising potential for performing flood damage assessment rapidly and cost-effectively, particularly in remote, otherwise inaccessible locations. Progress in remote sensing during the last twenty years or so has resulted in the development of a large number of image processing techniques suitable for use with a range of remote sensing data in flood damage assessment. Supervised image classification is regarded as one of the most widely used approaches for this purpose. Yet the use of recently developed classification algorithms such as the machine learning-based Support Vector Machines (SVMs) classifier has not been adequately investigated for this purpose. The objective of our work has been to quantitatively evaluate the ability of SVMs combined with Landsat TM multispectral imagery to assess damage from a flood that occurred in a Mediterranean region. A further objective has been to examine whether the inclusion in SVMs of additional spectral information beyond the original TM bands can improve flooded-area extraction accuracy. As a case study we use the river Evros flood of 2010 in the north of Greece, for which TM imagery before and shortly after the flooding was available. Assessment of the flooded area is performed in a GIS environment on the basis of classification accuracy assessment metrics as well as comparisons versus a vector

  14. Static performance of nonaxisymmetric nozzles with yaw thrust-vectoring vanes

    NASA Technical Reports Server (NTRS)

    Mason, Mary L.; Berrier, Bobby L.

    1988-01-01

    A static test was conducted in the static test facility of the Langley 16 ft Transonic Tunnel to evaluate the effects of post exit vane vectoring on nonaxisymmetric nozzles. Three baseline nozzles were tested: an unvectored two dimensional convergent nozzle, an unvectored two dimensional convergent-divergent nozzle, and a pitch vectored two dimensional convergent-divergent nozzle. Each nozzle geometry was tested with 3 exit aspect ratios (exit width divided by exit height) of 1.5, 2.5 and 4.0. Two post exit yaw vanes were externally mounted on the nozzle sidewalls at the nozzle exit to generate yaw thrust vectoring. Vane deflection angle (0, -20 and -30 deg), vane planform and vane curvature were varied during the test. Results indicate that the post exit vane concept produced resultant yaw vector angles which were always smaller than the geometric yaw vector angle. Losses in resultant thrust ratio increased with the magnitude of resultant yaw vector angle. The widest post exit vane produced the largest degree of flow turning, but vane curvature had little effect on thrust vectoring. Pitch vectoring was independent of yaw vectoring, indicating that multiaxis thrust vectoring is feasible for the nozzle concepts tested.

  15. Performance analysis of electronic code division multiple access based virtual private networks over passive optical networks

    NASA Astrophysics Data System (ADS)

    Nadarajah, Nishaanthan; Nirmalathas, Ampalavanapillai

    2008-03-01

    A solution for implementing multiple secure virtual private networks over a passive optical network using electronic code division multiple access is proposed and experimentally demonstrated. The multiple virtual private networking capability is experimentally demonstrated with 40 Mb/s data multiplexed with a 640 Mb/s electronic code that is unique to each of the virtual private networks in the passive optical network, and transmission of the electronically coded data is carried out using Fabry-Perot laser diodes. A theoretical scalability analysis for electronic code division multiple access based virtual private networks over a passive optical network is also carried out to identify the performance limits of the scheme. Several sources of receiver noise, such as optical beat interference and multiple access interference, are considered under different system operating parameters, such as transmitted optical power, spectral width of the broadband optical source, and processing gain, to study the scalability of the network.

  16. Towards a Social Networks Model for Online Learning & Performance

    ERIC Educational Resources Information Center

    Chung, Kon Shing Kenneth; Paredes, Walter Christian

    2015-01-01

    In this study, we develop a theoretical model to investigate the association between social network properties, "content richness" (CR) in academic learning discourse, and performance. CR is the extent to which one contributes content that is meaningful, insightful and constructive to aid learning and by social network properties we…

  17. IBM SP high-performance networking with a GRF.

    SciTech Connect

    Navarro, J.P.

    1999-05-27

    Increasing use of highly distributed applications, demand for faster data exchange, and highly parallel applications can push the limits of conventional external networking for IBM SP sites. In technical computing applications we have observed a growing use of a pipeline of hosts and networks collaborating to collect, process, and visualize large amounts of realtime data. The GRF, a high-performance IP switch from Ascend and IBM, is the first backbone network switch to offer a media card that can directly connect to an SP Switch. This enables switch attached hosts in an SP complex to communicate at near SP Switch speeds with other GRF attached hosts and networks.

  18. Improving performance in a contracted physician network.

    PubMed

    Smith, A L; Epstein, A L

    1999-01-01

    Health care organizations face significant performance challenges. Achieving desired results requires the highest level of partnership with independent physicians. Tufts Health Plan invited medical directors of its affiliated groups to participate in a leadership development process to improve clinical, service, and business performance. The design included performance review, gap analysis, priority setting, improvement work plans, and defining the optimum practice culture. Medical directors practiced core leadership capabilities, including building a shared context, getting physician buy-in, and managing outliers. The peer learning environment has been sustained in redesigned medical directors' meetings. There has been significant performance improvement in several practices and enhanced relations between the health plan and medical directors. PMID:10788102

  19. The performance analysis of linux networking - packet receiving

    SciTech Connect

    Wu, Wenji; Crawford, Matt; Bowden, Mark; /Fermilab

    2006-11-01

    The computing models for High-Energy Physics experiments are becoming ever more globally distributed and grid-based, both for technical reasons (e.g., to place computational and data resources near each other and the demand) and for strategic reasons (e.g., to leverage equipment investments). To support such computing models, the network and end systems, computing and storage, face unprecedented challenges. One of the biggest challenges is to transfer scientific data sets--now in the multi-petabyte (10{sup 15} bytes) range and expected to grow to exabytes within a decade--reliably and efficiently among facilities and computation centers scattered around the world. Both the network and end systems should be able to provide the capabilities to support high bandwidth, sustained, end-to-end data transmission. Recent trends in technology are showing that although the raw transmission speeds used in networks are increasing rapidly, the rate of advancement of microprocessor technology has slowed down. Therefore, network protocol-processing overheads have risen sharply in comparison with the time spent in packet transmission, resulting in degraded throughput for networked applications. More and more, it is the network end system, instead of the network, that is responsible for degraded performance of network applications. In this paper, the Linux system's packet receive process is studied from NIC to application. We develop a mathematical model to characterize the Linux packet receiving process. Key factors that affect Linux systems network performance are analyzed.

  20. Performance analysis of local area networks

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.; Hall, Mary Grace

    1990-01-01

    A simulation of the TCP/IP protocol running on a CSMA/CD data link layer is described. The simulation was implemented in the Simula language, an object-oriented discrete-event language. It allows the user to set the number of stations at run time, as well as some station parameters: the interrupt time and the DMA transfer rate for each station. In addition, the user may configure the network at run time with stations of differing characteristics. Two station types are available, and the parameters of both types are read from input files at run time. The parameters include the DMA transfer rate, interrupt time, data rate, average message size, maximum frame size, and the average interarrival time of messages per station. The information collected for the network is the throughput and the mean delay per packet. For each station, the number of messages attempted and the number of messages successfully transmitted are collected, in addition to the throughput and mean packet delay per station.
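    A minimal discrete-event sketch in the same spirit might look like the following. Collisions and backoff are omitted (the shared channel is served FIFO), so this is far simpler than the Simula model described, but it collects the same two network-level statistics:

```python
import random

def simulate_shared_link(n_stations, arrival_rate, service_time,
                         sim_time, seed=1):
    """Toy event-driven model: each station generates Poisson packet
    arrivals; all packets share one link served FIFO. Returns network
    throughput (packets per unit time) and mean packet delay."""
    rng = random.Random(seed)
    arrivals = []
    for _ in range(n_stations):
        t = 0.0
        while True:
            t += rng.expovariate(arrival_rate)
            if t > sim_time:
                break
            arrivals.append(t)
    arrivals.sort()
    done, delays = 0.0, []
    for t in arrivals:
        start = max(t, done)         # wait for the link to free up
        done = start + service_time  # transmission finishes
        delays.append(done - t)      # queueing + transmission delay
    throughput = len(delays) / sim_time
    mean_delay = sum(delays) / len(delays) if delays else 0.0
    return throughput, mean_delay
```

    Per-station counters (attempts vs. successes) and the CSMA/CD contention logic would be layered on top of this skeleton.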

  1. Optical performance monitoring for the next generation optical communication networks

    NASA Astrophysics Data System (ADS)

    Pan, Zhongqi; Yu, Changyuan; Willner, Alan E.

    2010-01-01

    Today's optical networks function in a fairly static fashion and are built to operate within well-defined specifications. This scenario is quite challenging for next-generation high-capacity systems, since network paths are not static and channel-degrading effects can change with temperature, component drift, aging, fiber plant maintenance, and many other factors. Moreover, we are far from being able to simply "plug-and-play" an optical node into an existing network in such a way that the network itself can allocate resources to ensure error-free transmission. Optical performance monitoring could potentially enable higher stability, reconfigurability, and flexibility in a self-managed optical network. This paper describes the specific fiber impairments that a future intelligent optical network might want to monitor, as well as some promising monitoring techniques.

  2. Leveraging Structure to Improve Classification Performance in Sparsely Labeled Networks

    SciTech Connect

    Gallagher, B; Eliassi-Rad, T

    2007-10-22

    We address the problem of classification in a partially labeled network (a.k.a. within-network classification), with an emphasis on tasks in which we have very few labeled instances to start with. Recent work has demonstrated the utility of collective classification (i.e., simultaneous inference over the class labels of related instances) in this general problem setting. However, the performance of collective classification algorithms can be adversely affected by the sparseness of labels in real-world networks. We show that on several real-world data sets, collective classification appears to offer little advantage in general and hurts performance in the worst cases. In this paper, we explore a complementary approach to within-network classification that takes advantage of network structure. Our approach is motivated by the observation that real-world networks often provide a great deal more structural information than attribute information (e.g., class labels). Through experiments on supervised and semi-supervised classifiers of network data, we demonstrate that a small number of structural features can lead to consistent and sometimes dramatic improvements in classification performance. We also examine the relative utility of individual structural features and show that, in many cases, it is a combination of both local and global network structure that is most informative.

  3. Neural network submodel as an abstraction tool: relating network performance to combat outcome

    NASA Astrophysics Data System (ADS)

    Jablunovsky, Greg; Dorman, Clark; Yaworsky, Paul S.

    2000-06-01

    Simulation of Command and Control (C2) networks has historically emphasized individual system performance with little architectural context or credible linkage to `bottom-line' measures of combat outcomes. Renewed interest in modeling C2 effects and relationships stems from emerging network-intensive operational concepts. This demands improved methods to span the analytical hierarchy between C2 system performance models and theater-level models. Neural network technology offers a modeling approach that can abstract the essential behavior of higher resolution C2 models within a campaign simulation. The proposed methodology uses off-line learning of the relationships between network state and campaign-impacting performance of a complex C2 architecture and then approximation of that performance as a time-varying parameter in an aggregated simulation. Ultimately, this abstraction tool offers an increased fidelity of C2 system simulation that captures dynamic network dependencies within a campaign context.

  4. Challenges for high-performance networking for exascale computing.

    SciTech Connect

    Barrett, Brian W.; Hemmert, K. Scott; Underwood, Keith Douglas; Brightwell, Ronald Brian

    2010-05-01

    Achieving the next three orders of magnitude performance increase to move from petascale to exascale computing will require significant advancements in several fundamental areas. Recent studies have outlined many of the hardware and software challenges that will need to be addressed. In this paper, we examine these challenges with respect to high-performance networking. We describe the repercussions of anticipated changes to computing and networking hardware and discuss the impact that alternative parallel programming models will have on the network software stack. We also present some ideas on possible approaches that address some of these challenges.

  5. Performance enhancement of OSPF protocol in the private network

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Lu, Yang; Lin, Xiaokang

    2005-11-01

    The private network serves as an information exchange platform to support the integrated services via microwave channels and accordingly selects the open shortest path first (OSPF) as the IP routing protocol. But the existing OSPF can't fit the private network very well for its special characteristics. This paper presents our modifications to the standard protocol in such aspects as the single-area scheme, link state advertisement (LSA) types and formats, OSPF packet formats, important state machines, setting of protocol parameters and link flap damping. Finally simulations are performed in various scenarios and the results indicate that our modifications can enhance the OSPF performance in the private network effectively.

  6. Optimal sampling in network performance evaluation

    SciTech Connect

    Fedorov, V.; Flanagan, D.; Batsell, S.

    1998-11-01

    Unlike many other experiments, in meteorology and seismology for instance, monitoring measurements on communication networks are cheap and fast. Even the simplest measurement tools, which are usually some interrogating programs, can provide a huge amount of data at almost no expense. The problem is not decreasing the cost of measurements, but rather reducing the amount of stored data and the measurement and analysis time. The authors address the approach that is based on the covariances between the measurements for various sites. The corresponding covariance matrix can be constructed either theoretically under some assumptions about the observed random processes, or can be estimated from some preliminary experiments. The authors compare the proposed algorithm with heuristic procedures that are used in other monitoring problems.
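    One way to act on such a covariance matrix is a greedy design that repeatedly measures the site with the largest remaining conditional variance. This is an illustrative covariance-based heuristic in the spirit of the abstract, not necessarily the authors' algorithm:

```python
import numpy as np

def greedy_site_selection(C, k):
    """Given the covariance matrix C of measurements across candidate
    monitoring sites, greedily pick k sites: at each step choose the
    site with the largest variance conditional on the sites already
    selected (a Schur-complement update), so redundant, highly
    correlated sites are skipped."""
    chosen = []
    cond = C.astype(float).copy()
    for _ in range(k):
        j = int(np.argmax(np.diag(cond)))
        chosen.append(j)
        # condition the remaining covariances on the new measurement;
        # the chosen site's residual variance drops to zero
        cj = cond[:, j].copy()
        cond -= np.outer(cj, cj) / cond[j, j]
    return chosen
```

    Because each selected site's conditional variance becomes zero, the procedure never re-selects a site, and it naturally favors sites that add the most new information.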

  7. Investigation of road network features and safety performance.

    PubMed

    Wang, Xuesong; Wu, Xingwei; Abdel-Aty, Mohamed; Tremont, Paul J

    2013-07-01

    The analysis of road network designs can provide useful information to transportation planners as they seek to improve the safety of road networks. The objectives of this study were to compare and define the effective road network indices and to analyze the relationship between road network structure and traffic safety at the level of the Traffic Analysis Zone (TAZ). One problem in comparing different road networks is establishing criteria that can be used to scale networks in terms of their structures. Based on data from Orange and Hillsborough Counties in Florida, road network structural properties within TAZs were scaled using 3 indices: Closeness Centrality, Betweenness Centrality, and Meshedness Coefficient. The Meshedness Coefficient performed best in capturing the structural features of the road network. Bayesian Conditional Autoregressive (CAR) models were developed to assess the safety of various network configurations as measured by total crashes, crashes on state roads, and crashes on local roads. The models' results showed that crash frequencies on local roads were closely related to factors within the TAZs (e.g., zonal network structure, TAZ population), while crash frequencies on state roads were closely related to the road and traffic features of state roads. For the safety effects of different networks, the Grid type was associated with the highest frequency of crashes, followed by the Mixed type, the Loops & Lollipops type, and the Sparse type. This study shows that it is possible to develop a quantitative scale for structural properties of a road network, and to use that scale to calculate the relationships between network structural properties and safety. PMID:23584537
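    The Meshedness Coefficient that the study found most effective has a standard closed form for connected planar graphs: M = (e - v + 1) / (2v - 5), the ratio of independent loops present to the maximum possible, ranging from 0 for a tree to 1 for a maximally meshed planar network. A minimal sketch (assuming the zonal road network is modeled as a connected planar graph with v >= 3):

```python
def meshedness(num_nodes, num_edges):
    """Meshedness Coefficient for a connected planar graph:
    (e - v + 1) independent loops out of a maximum of (2v - 5)."""
    v, e = num_nodes, num_edges
    return (e - v + 1) / (2 * v - 5)
```

    A Sparse (tree-like) zone scores near 0, while a dense Grid-type zone scores much higher, matching the structural ordering the models relate to crash frequency.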

  8. Performance Analysis of a NASA Integrated Network Array

    NASA Technical Reports Server (NTRS)

    Nessel, James A.

    2012-01-01

    The Space Communications and Navigation (SCaN) Program is planning to integrate its individual networks into a unified network which will function as a single entity to provide services to user missions. This integrated network architecture is expected to provide SCaN customers with the capabilities to seamlessly use any of the available SCaN assets to support their missions to efficiently meet the collective needs of Agency missions. One potential optimal application of these assets, based on this envisioned architecture, is that of arraying across existing networks to significantly enhance data rates and/or link availabilities. As such, this document provides an analysis of the transmit and receive performance of a proposed SCaN inter-network antenna array. From the study, it is determined that a fully integrated inter-network array does not provide any significant advantage over an intra-network array, one in which the assets of an individual network are arrayed for enhanced performance. Therefore, it is the recommendation of this study that NASA proceed with an arraying concept, with a fundamental focus on a network-centric arraying.

  9. Development of task network models of human performance in microgravity

    NASA Technical Reports Server (NTRS)

    Diaz, Manuel F.; Adam, Susan

    1992-01-01

    This paper discusses the utility of task-network modeling for quantifying human performance variability in microgravity. The data are gathered for: (1) improving current methodologies for assessing human performance and workload in the operational space environment; (2) developing tools for assessing alternative system designs; and (3) developing an integrated set of methodologies for the evaluation of performance degradation during extended duration spaceflight. The evaluation entailed an analysis of the Remote Manipulator System payload-grapple task performed on many shuttle missions. Task-network modeling can be used as a tool for assessing and enhancing human performance in man-machine systems, particularly for modeling long-duration manned spaceflight. Task-network modeling can be directed toward improving system efficiency by increasing the understanding of basic capabilities of the human component in the system and the factors that influence these capabilities.

  10. High-Performance Satellite/Terrestrial-Network Gateway

    NASA Technical Reports Server (NTRS)

    Beering, David R.

    2005-01-01

    A gateway has been developed to enable digital communication between (1) the high-rate receiving equipment at NASA's White Sands complex and (2) a standard terrestrial digital communication network at data rates up to 622 Mb/s. The design of this gateway can also be adapted for use in commercial Earth/satellite and digital communication networks, and in terrestrial digital communication networks that include wireless subnetworks. Gateway as used here signifies an electronic circuit that serves as an interface between two electronic communication networks so that a computer (or other terminal) on one network can communicate with a terminal on the other network. The connection between this gateway and the high-rate receiving equipment is made via a synchronous serial data interface at the emitter-coupled-logic (ECL) level. The connection between this gateway and a standard asynchronous transfer mode (ATM) terrestrial communication network is made via a standard user network interface with a synchronous optical network (SONET) connector. The gateway contains circuitry that performs the conversion between the ECL and SONET interfaces. The data rate of the SONET interface can be either 155.52 or 622.08 Mb/s. The gateway derives its clock signal from a satellite modem in the high-rate receiving equipment and, hence, is agile in the sense that it adapts to the data rate of the serial interface.

  11. Urban traffic-network performance: flow theory and simulation experiments

    SciTech Connect

    Williams, J.C.

    1986-01-01

    Performance models for urban street networks were developed to describe the response of a traffic network to given travel-demand levels. The three basic traffic flow variables, speed, flow, and concentration, are defined at the network level, and three model systems are proposed. Each system consists of a series of interrelated, consistent functions between the three basic traffic-flow variables as well as the fraction of stopped vehicles in the network. These models are subsequently compared with the results of microscopic simulation of a small test network. The sensitivity of one of the model systems to a variety of network features was also explored. Three categories of features were considered, with the specific features tested listed in parentheses: network topology (block length and street width), traffic control (traffic signal coordination), and traffic characteristics (level of inter-vehicular interaction). Finally, a fundamental issue concerning the estimation of two network-level parameters (from a nonlinear relation in the two-fluid theory) was examined. The principal concern was that of comparability of these parameters when estimated with information from a single vehicle (or small group of vehicles), as done in conjunction with previous field studies, and when estimated with network-level information (i.e., all the vehicles), as is possible with simulation.
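    The two network-level parameters referred to above are those of the Herman-Prigogine two-fluid theory, in which the running time per unit distance Tr relates to the total trip time per unit distance T by Tr = Tm^(1/(n+1)) * T^(n/(n+1)), so log Tr is linear in log T. A minimal sketch of estimating n and Tm by a log-log fit, using synthetic data with known parameters rather than field or simulation data:

```python
import numpy as np

# Two-fluid theory relation: Tr = Tm**(1/(n+1)) * T**(n/(n+1)),
# so log(Tr) = log(Tm)/(n+1) + (n/(n+1)) * log(T) and a linear fit
# on the logs recovers the two network-level parameters (n, Tm).
def fit_two_fluid(T, Tr):
    slope, intercept = np.polyfit(np.log(T), np.log(Tr), 1)
    n = slope / (1.0 - slope)            # slope = n/(n+1)
    Tm = np.exp(intercept * (n + 1.0))   # intercept = log(Tm)/(n+1)
    return n, Tm

# Synthetic check with known parameters (illustrative values only).
n_true, Tm_true = 1.5, 2.0
T = np.linspace(3.0, 10.0, 50)
Tr = Tm_true ** (1 / (n_true + 1)) * T ** (n_true / (n_true + 1))
n_est, Tm_est = fit_two_fluid(T, Tr)
print(round(float(n_est), 3), round(float(Tm_est), 3))
```

With exact synthetic data the fit recovers the generating parameters; with single-vehicle probe data versus all-vehicle simulation data, as discussed above, the estimates can differ.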

  12. Body Area Networks performance analysis using UWB.

    PubMed

    Fatehy, Mohammed; Kohno, Ryuji

    2013-01-01

    The successful realization of a Wireless Body Area Network (WBAN) using Ultra Wideband (UWB) technology supports different medical and consumer electronics (CE) applications, but stands in need of an innovative solution to meet the differing requirements of these applications. Previously, we proposed the use of adaptive processing gain (PG) to fulfill the different QoS requirements of these WBAN applications. In this paper, the interference occurring between two different BANs in a UWB-based system is analyzed in terms of the acceptable ratio of overlap between the BANs' PG while providing the required QoS for each BAN. The first BAN is employed for a healthcare device (e.g. EEG, ECG) and uses a relatively long spreading sequence; the second is customized for an entertainment application (e.g. wireless headset, wireless game pad) and is assigned a shorter spreading code. Considering bandwidth utilization and the difference in the employed spreading sequences, the acceptable overlap ratio between these BANs should fall between 0.05 and 0.5 in order to optimize the spreading sequences used while satisfying the required QoS for these applications. PMID:24109913
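    The processing gain of a direct-sequence link grows with spreading-sequence length, which is why the healthcare BAN (longer code) tolerates more interference than the entertainment BAN (shorter code). A minimal sketch with hypothetical code lengths (the record does not state specific values):

```python
import math

# Processing gain of a direct-sequence spread-spectrum link, in dB,
# as a function of spreading-sequence length (chips per symbol).
# The code lengths below are hypothetical, for illustration only.
def processing_gain_db(chips_per_symbol: int) -> float:
    return 10.0 * math.log10(chips_per_symbol)

# Longer code (healthcare BAN) vs shorter code (entertainment BAN).
print(round(processing_gain_db(64), 1), round(processing_gain_db(8), 1))
```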

  13. Performance evaluation of reactive and proactive routing protocol in IEEE 802.11 ad hoc network

    NASA Astrophysics Data System (ADS)

    Hamma, Salima; Cizeron, Eddy; Issaka, Hafiz; Guédon, Jean-Pierre

    2006-10-01

    Wireless technology based on the IEEE 802.11 standard is widely deployed. This technology is used to support multiple types of communication services (data, voice, image) with different QoS requirements. A MANET (Mobile Ad hoc NETwork) does not require a fixed infrastructure; mobile nodes communicate through multihop paths. The wireless communication medium has variable and unpredictable characteristics, and node mobility creates a continuously changing communication topology in which paths break and new ones form dynamically. The routing table of each router in an ad hoc network must therefore be kept up-to-date. MANETs use Distance Vector or Link State algorithms, which ensure that the route to every host is always known. However, this approach must take into account the specific characteristics of ad hoc networks: dynamic topologies, limited bandwidth, energy constraints, limited physical security, ... Two main categories of routing protocols are studied in this paper: proactive protocols (e.g. Optimised Link State Routing - OLSR) and reactive protocols (e.g. Ad hoc On Demand Distance Vector - AODV, Dynamic Source Routing - DSR). Proactive protocols are based on periodic exchanges that update the routing tables for all possible destinations, even if no traffic goes through them. Reactive protocols are based on on-demand route discoveries that update routing tables only for destinations that have traffic going through them. The present paper focuses on the study and performance evaluation of these categories using NS2 simulations, considering both qualitative and quantitative criteria. The qualitative criteria concern distributed operation, loop-freedom, security, and sleep-period operation. The quantitative criteria used to assess the performance of the routing protocols include end-to-end data delay, jitter, packet delivery ratio, routing load, and activity distribution. A comparative study is presented, taking a number of networking contexts into consideration, and the results show
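    The quantitative criteria listed above (end-to-end delay, jitter, packet delivery ratio) can be computed directly from send/receive traces. A minimal sketch, assuming a simplified trace format rather than the actual NS2 trace format used in such studies:

```python
# Quantitative routing metrics from a simple packet trace. Each entry is
# (packet_id, send_time, recv_time), with recv_time = None for a dropped
# packet. This trace format is a hypothetical simplification.
def routing_metrics(trace):
    delivered = [(s, r) for _, s, r in trace if r is not None]
    pdr = len(delivered) / len(trace)              # packet delivery ratio
    delays = [r - s for s, r in delivered]         # end-to-end delays
    avg_delay = sum(delays) / len(delays)
    # Jitter as the mean absolute difference between consecutive delays.
    jitter = sum(abs(b - a) for a, b in zip(delays, delays[1:])) / max(len(delays) - 1, 1)
    return pdr, avg_delay, jitter

trace = [(1, 0.0, 0.10), (2, 0.1, 0.25), (3, 0.2, None), (4, 0.3, 0.42)]
pdr, d, j = routing_metrics(trace)
print(pdr, round(d, 3), round(j, 3))
```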

  14. Performance evaluation of transport protocols for networked haptic collaboration

    NASA Astrophysics Data System (ADS)

    Lee, Seokhee; Moon, Sungtae; Kim, JongWon

    2006-10-01

    In this paper, we present two sets of transport-related experimental results for networked haptic CVEs (collaborative virtual environments). The first set of experiments evaluates the performance changes, in terms of QoE (quality of experience), of haptic-based CVEs under different network settings. The evaluation results are then used to define the minimum networking requirements for CVEs with a force-feedback haptic interface. The second set of experiments verifies whether existing haptics-specialized transport protocols can satisfy the networking QoE requirements for networked haptic CVEs. The results are used to suggest design guidelines for an effective transport protocol for these highly interactive (i.e., extremely low-latency, with processing cycles of up to 1 kHz) haptic CVEs over the delay-crippled Internet.

  15. Diversity improves performance in excitable networks

    PubMed Central

    Copelli, Mauro; Roberts, James A.

    2016-01-01

    As few real systems comprise indistinguishable units, diversity is a hallmark of nature. Diversity among interacting units shapes properties of collective behavior such as synchronization and information transmission. However, the benefits of diversity on information processing at the edge of a phase transition, ordinarily assumed to emerge from identical elements, remain largely unexplored. Analyzing a general model of excitable systems with heterogeneous excitability, we find that diversity can greatly enhance optimal performance (by two orders of magnitude) when distinguishing incoming inputs. Heterogeneous systems possess a subset of specialized elements whose capability greatly exceeds that of the nonspecialized elements. We also find that diversity can yield multiple percolation, with performance optimized at tricriticality. Our results are robust in specific and more realistic neuronal systems comprising a combination of excitatory and inhibitory units, and indicate that diversity-induced amplification can be harnessed by neuronal systems for evaluating stimulus intensities. PMID:27168961
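    A toy illustration of the core idea above, that heterogeneous excitability lets a network discriminate more input intensities than identical units can: here a unit fires when the stimulus reaches its threshold, and the network readout is the firing fraction. This threshold model is an illustrative assumption, not the excitable-network model analyzed in the paper:

```python
# A unit fires if the stimulus reaches its threshold; the network readout
# is the fraction of units that fire. Toy model for illustration only.
def response(stimuli, thresholds):
    return [sum(s >= t for t in thresholds) / len(thresholds) for s in stimuli]

stimuli = [0.1 * k for k in range(1, 11)]        # ten input intensities
homogeneous = [0.5] * 10                         # identical units
heterogeneous = [0.1 * k for k in range(1, 11)]  # spread-out thresholds

# Diversity yields many distinct response levels; identical units yield
# an all-or-nothing readout with only two.
print(len(set(response(stimuli, homogeneous))),
      len(set(response(stimuli, heterogeneous))))
```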

  16. Performance characteristics of a variable-area vane nozzle for vectoring an ASTOVL exhaust jet up to 45 deg

    NASA Technical Reports Server (NTRS)

    Mcardle, Jack G.; Esker, Barbara S.

    1993-01-01

    Many conceptual designs for advanced short-takeoff, vertical landing (ASTOVL) aircraft need exhaust nozzles that can vector the jet to provide forces and moments for controlling the aircraft's movement or attitude in flight near the ground. A type of nozzle that can both vector the jet and vary the jet flow area is called a vane nozzle. Basically, the nozzle consists of parallel, spaced-apart flow passages formed by pairs of vanes (vanesets) that can be rotated on axes perpendicular to the flow. Two important features of this type of nozzle are the abilities to vector the jet rearward up to 45 degrees and to produce less harsh pressure and velocity footprints during vertical landing than does an equivalent single jet. A one-third-scale model of a generic vane nozzle was tested with unheated air at the NASA Lewis Research Center's Powered Lift Facility. The model had three parallel flow passages. Each passage was formed by a vaneset consisting of a long and a short vane. The longer vanes controlled the jet vector angle, and the shorter vanes controlled the flow area. Nozzle performance for three nominal flow areas (basic and plus or minus 21 percent of basic area), each at nominal jet vector angles from -20 deg (forward of vertical) to +45 deg (rearward of vertical), is presented. The tests were made with the nozzle mounted on a model tailpipe with a blind flange on the end to simulate a closed cruise nozzle, at tailpipe-to-ambient pressure ratios from 1.8 to 4.0. Also included are jet wake data, single-vaneset vector performance for long/short and equal-length vane designs, and pumping capability. The pumping capability arises from the subambient pressure developed in the cavities between the vanesets, which could be used to aspirate flow from a source such as the engine compartment. Some of the performance characteristics are compared with characteristics of a single-jet nozzle previously reported.

  17. Calibration Performance and Capabilities of the New Compact Ocean Wind Vector Radiometer System

    NASA Astrophysics Data System (ADS)

    Brown, S. T.; Focardi, P.; Kitiyakara, A.; Maiwald, F.; Montes, O.; Padmanabhan, S.; Redick, R.; Russell, D.; Wincentsen, J.

    2014-12-01

    The paper describes the performance and capabilities of a new satellite conically imaging microwave radiometer system, the Compact Ocean Wind Vector Radiometer (COWVR), being built by the Jet Propulsion Laboratory (JPL) for an Air Force demonstration mission. COWVR is an 18-34 GHz fully polarimetric radiometer designed to provide measurements of ocean vector winds with an accuracy that meets or exceeds that provided by WindSat, but using a simpler design which has both calibration and cost advantages. Heritage conical radiometer systems, such as WindSat, AMSR, GMI, or SSMI(S), all have a similar overall architecture and have exhibited significant intra-channel and inter-sensor calibration biases, due in part to the relative independence of the radiometers between the different polarizations and frequencies in the system. The COWVR system uses a broadband compact hybrid combining architecture and Electronic Polarization Basis Rotation to minimize the number of free calibration parameters between polarizations and frequencies, as well as providing a definitive calibration reference from the modulation of the mean polarized signal from the Earth. This second calibration advantage arises because the sensor modulates the incoming polarized signal at the input antenna aperture in a known way, based only on the instrument geometry, which forces relative calibration consistency between the polarimetric channels of the sensor and provides a gain and offset calibration that is independent of a model or other ancillary data source; such dependence has typically been a weakness in the calibration and inter-calibration of heritage microwave sensors. This paper will give a description of the COWVR instrument and an overview of the technology demonstration mission. We will discuss the overall calibration approach for this system, its advantages over existing systems, and how many of the calibration issues that impact existing satellite radiometers can be eliminated in future operational systems based on

  18. Performance limitations for networked control systems with plant uncertainty

    NASA Astrophysics Data System (ADS)

    Chi, Ming; Guan, Zhi-Hong; Cheng, Xin-Ming; Yuan, Fu-Shun

    2016-04-01

    There has recently been significant interest in performance studies of networked control systems with communication constraints, but existing work mainly assumes that an exact model of the plant is available. The goal of this paper is to investigate the optimal tracking performance of networked control systems in the presence of plant uncertainty. The plant under consideration is assumed to be non-minimum phase and unstable; a two-parameter controller is employed, and the integral square criterion is adopted to measure the tracking error. We formulate the uncertainty using stochastic embedding. An explicit expression for the tracking performance is obtained. The results show that network communication noise and model uncertainty, as well as unstable poles and non-minimum phase zeros, can worsen the tracking performance.
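    The integral square criterion can be checked numerically for a toy case: a stable first-order loop tracking a unit step has error e(t) = exp(-a*t), so J = ∫ e(t)² dt = 1/(2a). A minimal sketch with an illustrative value of a (not a plant from the paper):

```python
import math

# Integral-square tracking error J = ∫ e(t)^2 dt for a toy first-order
# loop tracking a unit step, where e(t) = exp(-a*t), so J = 1/(2a).
# The value of a is illustrative only.
a = 2.0
dt = 1e-4
steps = 200_000  # 20 s horizon; the tail exp(-4*20) is negligible
J = sum(math.exp(-a * k * dt) ** 2 * dt for k in range(steps))
print(round(J, 3), 1 / (2 * a))
```

The left Riemann sum overshoots the exact value 1/(2a) by about dt/2, which is negligible at this step size.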

  19. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  20. Multimedia application performance on a WiMAX network

    NASA Astrophysics Data System (ADS)

    Halepovic, Emir; Ghaderi, Majid; Williamson, Carey

    2009-01-01

    In this paper, we use experimental measurements to study the performance of multimedia applications over a commercial IEEE 802.16 WiMAX network. Voice-over-IP (VoIP) and video streaming applications are tested. We observe that the WiMAX-based network solidly supports VoIP. The voice quality degradation compared to high-speed Ethernet is only moderate, despite higher packet loss and network delays. Despite different characteristics of the uplink and the downlink, call quality is comparable for both directions. On-demand video streaming performs well using UDP. Smooth playback of high-quality video/audio clips at aggregate rates exceeding 700 Kbps is achieved about 63% of the time, with low-quality playback periods observed only 7% of the time. Our results show that WiMAX networks can adequately support currently popular multimedia Internet applications.

  1. Portals 4 network API definition and performance measurement

    SciTech Connect

    Brightwell, R. B.

    2012-03-01

    Portals is a low-level network programming interface for distributed memory massively parallel computing systems designed by Sandia, UNM, and Intel. Portals has been designed to provide high message rates and to provide the flexibility to support a variety of higher-level communication paradigms. This project developed and analyzed an implementation of Portals using shared memory in order to measure and understand the impact of using general-purpose compute cores to handle network protocol processing functions. The goal of this study was to evaluate an approach to high-performance networking software design and hardware support that would enable important DOE modeling and simulation applications to perform well and to provide valuable input to Intel so they can make informed decisions about future network software and hardware products that impact DOE applications.

  2. Static internal performance of single-expansion-ramp nozzles with thrust-vectoring capability up to 60 deg

    NASA Technical Reports Server (NTRS)

    Berrier, B. L.; Leavitt, L. D.

    1984-01-01

    An investigation has been conducted at static conditions (wind off) in the static-test facility of the Langley 16-Foot Transonic Tunnel. The effects of geometric thrust-vector angle, sidewall containment, ramp curvature, lower-flap lip angle, and ramp length on the internal performance of nonaxisymmetric single-expansion-ramp nozzles were investigated. Geometric thrust-vector angle was varied from -20 deg. to 60 deg., and nozzle pressure ratio was varied from 1.0 (jet off) to approximately 10.0.

  3. Semi-Supervised Multimodal Relevance Vector Regression Improves Cognitive Performance Estimation from Imaging and Biological Biomarkers

    PubMed Central

    Cheng, Bo; Chen, Songcan; Kaufer, Daniel I.

    2013-01-01

    Accurate estimation of cognitive scores for patients can help track the progress of neurological diseases. In this paper, we present a novel semi-supervised multimodal relevance vector regression (SM-RVR) method for predicting clinical scores of neurological diseases from multimodal imaging and biological biomarkers, to help evaluate pathological stage and predict progression of diseases, e.g., Alzheimer’s disease (AD). Unlike most existing methods, we predict clinical scores from multimodal (imaging and biological) biomarkers, including MRI, FDG-PET, and CSF. Considering that the clinical scores of mild cognitive impairment (MCI) subjects are often less stable compared to those of AD and normal control (NC) subjects due to the heterogeneity of MCI, we use only the multimodal data of MCI subjects, but no corresponding clinical scores, to train a semi-supervised model for enhancing the estimation of clinical scores for AD and NC subjects. We also develop a new strategy for selecting the most informative MCI subjects. We evaluate the performance of our approach on 202 subjects with all three modalities of data (MRI, FDG-PET and CSF) from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. The experimental results show that our SM-RVR method achieves a root-mean-square error (RMSE) of 1.91 and a correlation coefficient (CORR) of 0.80 for estimating the MMSE scores, and an RMSE of 4.45 and a CORR of 0.78 for estimating the ADAS-Cog scores, demonstrating very promising performance in AD studies. PMID:23504659

  4. Semi-supervised multimodal relevance vector regression improves cognitive performance estimation from imaging and biological biomarkers.

    PubMed

    Cheng, Bo; Zhang, Daoqiang; Chen, Songcan; Kaufer, Daniel I; Shen, Dinggang

    2013-07-01

    Accurate estimation of cognitive scores for patients can help track the progress of neurological diseases. In this paper, we present a novel semi-supervised multimodal relevance vector regression (SM-RVR) method for predicting clinical scores of neurological diseases from multimodal imaging and biological biomarkers, to help evaluate pathological stage and predict progression of diseases, e.g., Alzheimer's disease (AD). Unlike most existing methods, we predict clinical scores from multimodal (imaging and biological) biomarkers, including MRI, FDG-PET, and CSF. Considering that the clinical scores of mild cognitive impairment (MCI) subjects are often less stable compared to those of AD and normal control (NC) subjects due to the heterogeneity of MCI, we use only the multimodal data of MCI subjects, but no corresponding clinical scores, to train a semi-supervised model for enhancing the estimation of clinical scores for AD and NC subjects. We also develop a new strategy for selecting the most informative MCI subjects. We evaluate the performance of our approach on 202 subjects with all three modalities of data (MRI, FDG-PET and CSF) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The experimental results show that our SM-RVR method achieves a root-mean-square error (RMSE) of 1.91 and a correlation coefficient (CORR) of 0.80 for estimating the MMSE scores, and an RMSE of 4.45 and a CORR of 0.78 for estimating the ADAS-Cog scores, demonstrating very promising performance in AD studies. PMID:23504659
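    The two evaluation metrics used above, RMSE and the Pearson correlation coefficient (CORR), can be sketched as follows; the scores below are hypothetical toy values, not ADNI data:

```python
import math

# RMSE and Pearson correlation, the two metrics used to evaluate the
# clinical-score estimates (illustrative implementation, not the
# authors' code; the scores are hypothetical).
def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def corr(y_true, y_pred):
    n = len(y_true)
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in y_true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in y_pred))
    return cov / (st * sp)

true_scores = [28.0, 24.0, 20.0, 26.0, 22.0]  # toy MMSE-like scores
pred_scores = [27.0, 25.0, 21.0, 27.0, 21.0]
print(round(rmse(true_scores, pred_scores), 3),
      round(corr(true_scores, pred_scores), 3))
```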

  5. Hospital network performance: a survey of hospital stakeholders' perspectives.

    PubMed

    Bravi, F; Gibertoni, D; Marcon, A; Sicotte, C; Minvielle, E; Rucci, P; Angelastro, A; Carradori, T; Fantini, M P

    2013-02-01

    Hospital networks are an emerging organizational form designed to face the new challenges of public health systems. Although the benefits introduced by network models in terms of rationalization of resources are known, evidence about stakeholders' perspectives on hospital network performance from the literature is scant. Using the Competing Values Framework of organizational effectiveness and its subsequent adaptation by Minvielle et al., we conducted in 2009 a survey in five hospitals of an Italian network for oncological care to examine and compare the views on hospital network performance of internal stakeholders (physicians, nurses and the administrative staff). 329 questionnaires exploring stakeholders' perspectives were completed, with a response rate of 65.8%. Using exploratory factor analysis of the 66 items of the questionnaire, we identified 4 factors, i.e. Centrality of relationships, Quality of care, Attractiveness/Reputation and Staff empowerment and Protection of workers' rights. 42 items were retained in the analysis. Factor scores proved to be high (mean score > 8 on a 10-point scale), except for Attractiveness/Reputation (mean score 6.79), indicating that stakeholders attach a higher importance to relational and health care aspects. Comparison of factor scores among stakeholders did not reveal significant differences, suggesting a broadly shared view on hospital network performance. PMID:23201189

  6. Performance analysis of a common-mode signal based low-complexity crosstalk cancelation scheme in vectored VDSL

    NASA Astrophysics Data System (ADS)

    Zafaruddin, SM; Prakriya, Shankar; Prasad, Surendra

    2012-12-01

    In this article, we propose a vectored system that uses both common mode (CM) and differential mode (DM) signals in upstream VDSL. We first develop a multi-input multi-output (MIMO) CM channel by using the single-pair CM and MIMO DM channels proposed recently, and study the characteristics of the resultant CM-DM channel matrix. We then propose a low-complexity receiver structure in which the CM and DM signals of each twisted pair (TP) are combined before the application of a MIMO zero-forcing (ZF) receiver. We study the capacity of the proposed system and show that vectored CM-DM processing provides higher data rates at longer loop lengths. In the absence of alien crosstalk, application of the ZF receiver to the vectored CM-DM signals yields performance close to the single-user bound (SUB). In the presence of alien crosstalk, we show that vectored CM-DM processing exploits the spatial correlation of the CM and DM signals and provides higher data rates than DM processing alone. Simulation results validate the analysis and demonstrate the importance of joint CM-DM processing in vectored VDSL systems.
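    The ZF step on the combined signals amounts to applying a pseudo-inverse to a tall channel matrix. The stacking and channel values below are illustrative random numbers, not the measured CM-DM VDSL channel of the paper:

```python
import numpy as np

# Minimal zero-forcing (ZF) receiver sketch: stack the received DM and CM
# samples of all pairs into y = H x (noiseless here) and invert the tall
# channel matrix with its pseudo-inverse. Dimensions and channel entries
# are illustrative, not the measured VDSL CM-DM channel.
rng = np.random.default_rng(0)
n_users = 4                    # one transmitted symbol per twisted pair
n_obs = 8                      # a DM and a CM observation per pair
H = rng.normal(size=(n_obs, n_users))      # tall CM-DM channel matrix
x = rng.choice([-1.0, 1.0], size=n_users)  # BPSK-like transmitted symbols
y = H @ x                                  # noiseless received vector
x_hat = np.linalg.pinv(H) @ y              # ZF estimate
print(np.allclose(x_hat, x))
```

With noise, ZF amplifies it by the inverse channel, which is why the paper's gains come from the extra (CM) observations improving the channel's conditioning.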

  7. Virulence Factors of Geminivirus Interact with MYC2 to Subvert Plant Resistance and Promote Vector Performance

    PubMed Central

    Li, Ran; Weldegergis, Berhane T.; Li, Jie; Jung, Choonkyun; Qu, Jing; Sun, Yanwei; Qian, Hongmei; Tee, ChuanSia; van Loon, Joop J.A.; Dicke, Marcel; Chua, Nam-Hai; Liu, Shu-Sheng

    2014-01-01

    A pathogen may cause infected plants to promote the performance of its transmitting vector, which accelerates the spread of the pathogen. This positive effect of a pathogen on its vector via their shared host plant is termed indirect mutualism. For example, terpene biosynthesis is suppressed in begomovirus-infected plants, leading to reduced plant resistance and enhanced performance of the whiteflies (Bemisia tabaci) that transmit these viruses. Although begomovirus-whitefly mutualism has been known, the underlying mechanism is still elusive. Here, we identified βC1 of Tomato yellow leaf curl China virus, a monopartite begomovirus, as the viral genetic factor that suppresses plant terpene biosynthesis. βC1 directly interacts with the basic helix-loop-helix transcription factor MYC2 to compromise the activation of MYC2-regulated terpene synthase genes, thereby reducing whitefly resistance. MYC2 associates with the bipartite begomoviral protein BV1, suggesting that MYC2 is an evolutionarily conserved target of begomoviruses for the suppression of terpene-based resistance and the promotion of vector performance. Our findings describe how this viral pathogen regulates host plant metabolism to establish mutualism with its insect vector. PMID:25490915

  8. UltraSciencenet: High- Performance Network Research Test-Bed

    SciTech Connect

    Rao, Nageswara S; Wing, William R; Poole, Stephen W; Hicks, Susan Elaine; DeNap, Frank A; Carter, Steven M; Wu, Qishi

    2009-04-01

    The high-performance networking requirements for next generation large-scale applications belong to two broad classes: (a) high bandwidths, typically multiples of 10Gbps, to support bulk data transfers, and (b) stable bandwidths, typically at much lower bandwidths, to support computational steering, remote visualization, and remote control of instrumentation. Current Internet technologies, however, are severely limited in meeting these demands because such bulk bandwidths are available only in the backbone, and stable control channels are hard to realize over shared connections. The UltraScience Net (USN) facilitates the development of such technologies by providing dynamic, cross-country dedicated 10Gbps channels for large data transfers, and 150 Mbps channels for interactive and control operations. Contributions of the USN project are two-fold: (a) Infrastructure Technologies for Network Experimental Facility: USN developed and/or demonstrated a number of infrastructure technologies needed for a national-scale network experimental facility. Compared to the Internet, USN's data-plane is different in that it can be partitioned into isolated layer-1 or layer-2 connections, and its control-plane is different in the ability of users and applications to setup and tear down channels as needed. Its design required several new components including a Virtual Private Network infrastructure, a bandwidth and channel scheduler, and a dynamic signaling daemon. The control-plane employs a centralized scheduler to compute the channel allocations and a signaling daemon to generate configuration signals to switches. In a nutshell, USN demonstrated the ability to build and operate a stable national-scale switched network. (b) Structured Network Research Experiments: A number of network research experiments have been conducted on USN that cannot be easily supported over existing network facilities, including test-beds and production networks. It settled an open matter by demonstrating

  9. A dityrosine network mediated by dual oxidase and peroxidase influences the persistence of Lyme disease pathogens within the vector.

    PubMed

    Yang, Xiuli; Smith, Alexis A; Williams, Mark S; Pal, Utpal

    2014-05-01

    Ixodes scapularis ticks transmit a wide array of human and animal pathogens including Borrelia burgdorferi; however, how tick immune components influence the persistence of invading pathogens remains unknown. As originally demonstrated in Caenorhabditis elegans and later in Anopheles gambiae, we show here that an acellular gut barrier, resulting from the tyrosine cross-linking of the extracellular matrix, also exists in I. scapularis ticks. This dityrosine network (DTN) is dependent upon a dual oxidase (Duox), which is a member of the NADPH oxidase family. The Ixodes genome encodes for a single Duox and at least 16 potential peroxidase proteins, one of which, annotated as ISCW017368, together with Duox has been found to be indispensable for DTN formation. This barrier influences pathogen survival in the gut, as an impaired DTN in Duox knockdown or in specific peroxidase knockdown ticks results in reduced levels of B. burgdorferi persistence within ticks. Absence of complete DTN formation in knockdown ticks leads to the activation of specific tick innate immune pathway genes that potentially resulted in the reduction of spirochete levels. Together, these results highlighted the evolution of the DTN in a diverse set of arthropod vectors, including ticks, and its role in protecting invading pathogens like B. burgdorferi. Further understanding of the molecular basis of tick innate immune responses, vector-pathogen interaction, and their contributions in microbial persistence may help the development of new targets for disrupting the pathogen life cycle. PMID:24662290

  10. Sub-terahertz spectroscopy of magnetic resonance in BiFeO3 using a vector network analyzer

    NASA Astrophysics Data System (ADS)

    Caspers, Christian; Gandhi, Varun P.; Magrez, Arnaud; de Rijk, Emile; Ansermet, Jean-Philippe

    2016-06-01

    Detection of sub-THz spin cycloid resonances (SCRs) of stoichiometric BiFeO3 (BFO) was demonstrated using a vector network analyzer. Continuous wave absorption spectroscopy is possible, thanks to heterodyning and electronic sweep control using frequency extenders for frequencies from 480 to 760 GHz. High frequency resolution reveals SCR absorption peaks with a frequency precision in the ppm regime. Three distinct SCR features of BFO were observed and identified as Ψ1 and Φ2 modes, which are out-of-plane and in-plane modes of the spin cycloid, respectively. A spin reorientation transition at 200 K is evident in the frequency vs temperature study. The global minimum in linewidth for both Ψ modes at 140 K is ascribed to the critical slowing down of spin fluctuations.

  11. Input vector optimization of feed-forward neural networks for fitting ab initio potential-energy databases.

    PubMed

    Malshe, M; Raff, L M; Hagan, M; Bukkapatnam, S; Komanduri, R

    2010-05-28

    The variation in the fitting accuracy of neural networks (NNs) when used to fit databases comprising potential energies obtained from ab initio electronic structure calculations is investigated as a function of the number and nature of the elements employed in the input vector to the NN. Ab initio databases for H(2)O(2), HONO, Si(5), and H(2)C=CHBr were employed in the investigations. These systems were chosen so as to include four-, five-, and six-body systems containing first, second, third, and fourth row elements with a wide variety of chemical bonding and whose conformations cover a wide range of structures that occur under high-energy machining conditions and in chemical reactions involving cis-trans isomerizations, six different types of two-center bond ruptures, and two different three-center dissociation reactions. The ab initio databases for these systems were obtained using density functional theory/B3LYP, MP2, and MP4 methods with extended basis sets. A total of 31 input vectors were investigated. In each case, the elements of the input vector were chosen from interatomic distances, inverse powers of the interatomic distance, three-body angles, and dihedral angles. Both redundant and nonredundant input vectors were investigated. The results show that among all the input vectors investigated, the set employed in the Z-matrix specification of the molecular configurations in the electronic structure calculations gave the lowest NN fitting accuracy for both Si(5) and vinyl bromide. The underlying reason for this result appears to be the discontinuity present in the dihedral angle for planar geometries. The use of trigonometric functions of the angles as input elements produced significantly improved fitting accuracy as this choice eliminates the discontinuity. The most accurate fitting was obtained when the elements of the input vector were taken to have the form R(ij)^(-n), where the R(ij) are the interatomic distances. When the Levenberg

  12. Input vector optimization of feed-forward neural networks for fitting ab initio potential-energy databases

    NASA Astrophysics Data System (ADS)

    Malshe, M.; Raff, L. M.; Hagan, M.; Bukkapatnam, S.; Komanduri, R.

    2010-05-01

    The variation in the fitting accuracy of neural networks (NNs) when used to fit databases comprising potential energies obtained from ab initio electronic structure calculations is investigated as a function of the number and nature of the elements employed in the input vector to the NN. Ab initio databases for H2O2, HONO, Si5, and H2C=CHBr were employed in the investigations. These systems were chosen so as to include four-, five-, and six-body systems containing first, second, third, and fourth row elements with a wide variety of chemical bonding and whose conformations cover a wide range of structures that occur under high-energy machining conditions and in chemical reactions involving cis-trans isomerizations, six different types of two-center bond ruptures, and two different three-center dissociation reactions. The ab initio databases for these systems were obtained using density functional theory/B3LYP, MP2, and MP4 methods with extended basis sets. A total of 31 input vectors were investigated. In each case, the elements of the input vector were chosen from interatomic distances, inverse powers of the interatomic distance, three-body angles, and dihedral angles. Both redundant and nonredundant input vectors were investigated. The results show that among all the input vectors investigated, the set employed in the Z-matrix specification of the molecular configurations in the electronic structure calculations gave the lowest NN fitting accuracy for both Si5 and vinyl bromide. The underlying reason for this result appears to be the discontinuity present in the dihedral angle for planar geometries. The use of trigonometric functions of the angles as input elements produced significantly improved fitting accuracy as this choice eliminates the discontinuity. The most accurate fitting was obtained when the elements of the input vector were taken to have the form Rij^(-n), where the Rij are the interatomic distances. When the Levenberg-Marquardt procedure was modified
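    The best-performing input representation above, Rij^(-n), is straightforward to compute from Cartesian coordinates. A minimal sketch in pure Python (the coordinates and exponent below are illustrative, not taken from the paper's databases):

```python
from itertools import combinations
from math import dist

def inverse_distance_features(coords, n=2):
    """Build an NN input vector of Rij^-n for all atom pairs i < j.

    Inverse powers of the distance keep every element smooth and bounded
    as atoms separate, avoiding the dihedral-angle discontinuity that
    degrades Z-matrix-based input vectors at planar geometries.
    """
    return [dist(a, b) ** -n for a, b in combinations(coords, 2)]

# Four illustrative atoms (arbitrary units): a 4-body system yields
# C(4,2) = 6 pairwise features.
coords = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.5, 0.0), (0.0, 0.0, 2.0)]
features = inverse_distance_features(coords, n=2)
```

    For an N-atom system this produces N(N-1)/2 features, a redundant but discontinuity-free description of the configuration.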

  13. Network latency and operator performance in teleradiology applications.

    PubMed

    Stahl, J N; Tellis, W; Huang, H K

    2000-08-01

    Teleradiology applications often use an interactive conferencing mode with remote control mouse pointers. When a telephone is used for voice communication, latencies of the data network can create a temporal discrepancy between the position of the mouse pointer and the verbal communication. To assess the effects of this dissociation, we examined the performance of 5 test persons carrying out simple teleradiology tasks under varying simulated network conditions. When the network latency exceeded 400 milliseconds, the performance of the test persons dropped, and an increasing number of errors were made. This effect was the same for constant latencies, which can occur on the network path, and for variable delays caused by the Nagle algorithm, an internal buffering scheme used by the TCP/IP protocol. Because the Nagle algorithm used in typical TCP/IP implementations causes a latency of about 300 milliseconds even before a data packet is sent, any additional latency in the network of 100 milliseconds or more will result in a decreased operator performance in teleradiology applications. These conditions frequently occur on the public Internet or on overseas connections. For optimal performance, the authors recommend bypassing the Nagle algorithm in teleradiology applications. PMID:15359750
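    The recommendation to bypass the Nagle algorithm maps directly onto the standard sockets API via the TCP_NODELAY option. A minimal sketch using Python's standard library (not code from the paper):

```python
import socket

def make_low_latency_socket():
    """Create a TCP socket with the Nagle algorithm disabled.

    With TCP_NODELAY set, small writes (such as mouse-pointer updates in
    a teleradiology conference) are sent immediately instead of being
    buffered until an ACK arrives or a full segment accumulates.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sock

sock = make_low_latency_socket()
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
```

    Disabling Nagle trades some bandwidth efficiency (more small packets) for the low per-message latency that interactive conferencing requires.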

  14. Efficient resting-state EEG network facilitates motor imagery performance

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Yao, Dezhong; Valdés-Sosa, Pedro A.; Li, Fali; Li, Peiyang; Zhang, Tao; Ma, Teng; Li, Yongjie; Xu, Peng

    2015-12-01

    Objective. Motor imagery-based brain-computer interface (MI-BCI) systems hold promise in motor function rehabilitation and assistance for people with motor function impairments. But the ability to operate an MI-BCI varies across subjects, which becomes a substantial problem for practical BCI applications beyond the laboratory. Approach. Several previous studies have demonstrated that individual MI-BCI performance is related to the resting state of the brain. In this study, we further investigate offline MI-BCI performance variations from the perspective of the resting-state electroencephalography (EEG) network. Main results. Spatial topologies and statistical measures of the network have close relationships with MI classification accuracy. Specifically, mean functional connectivity, node degrees, edge strengths, clustering coefficient, local efficiency and global efficiency are positively correlated with MI classification accuracy, whereas the characteristic path length is negatively correlated with MI classification accuracy. The above results indicate that an efficient background EEG network may facilitate MI-BCI performance. Finally, a multiple linear regression model was adopted to predict subjects’ MI classification accuracy based on the efficiency measures of the resting-state EEG network, resulting in a reliable prediction. Significance. This study reveals the network mechanisms of the MI-BCI and may help to find new strategies for improving MI-BCI performance.

  15. Energy efficient mechanisms for high-performance Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Alsaify, Baha'adnan

    2009-12-01

    Due to recent advances in microelectronics, the development of low-cost, small, and energy-efficient devices became possible. Those advances led to the birth of Wireless Sensor Networks (WSNs). WSNs consist of a large set of sensor nodes equipped with communication capabilities, scattered over the area to be monitored. Researchers focus on several aspects of WSNs, including the quality of service the WSNs provide (data delivery delay, accuracy of data, etc.), the scalability of the network to contain thousands of sensor nodes (the terms node and sensor node are used interchangeably), the robustness of the network (allowing the network to work even if a certain percentage of nodes fails), and making the energy consumption in the network as low as possible to prolong the network's lifetime. In this thesis, we present an approach that can be applied to the sensing devices scattered in an area for sensor networks. This work uses the well-known approach of wake-up scheduling to extend the network's lifespan. We designed a scheduling algorithm that reduces the upper bound on the delay that reported data will experience, while at the same time keeping the advantages offered by wake-up scheduling -- the reduction in energy consumption, which leads to an increase in the network's lifetime. The wake-up scheduling is based on the location of a node relative to its neighbors and its distance from the Base Station (the terms Base Station and sink are used interchangeably). We apply the proposed method to a set of simulated nodes using the "ONE Simulator". We test the performance of this approach against three other approaches -- the Direct Routing technique, the well-known LEACH algorithm, and a multi-parent scheduling algorithm. We demonstrate a good improvement in the network's quality of service and a reduction in the consumed energy.
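    The idea of tying a node's wake-up slot to its distance from the Base Station can be sketched as a staggered schedule: nodes farther from the sink wake one slot before their parents, so a report generated anywhere reaches the sink within one frame. This is a hypothetical illustration of the general technique, not the thesis's actual algorithm:

```python
def wakeup_slot(hop_distance, max_hops):
    """Assign a wake-up slot so data flows inward without waiting.

    Nodes farthest from the sink wake first; each parent wakes in the
    next slot, so a packet forwarded hop by hop traverses the whole
    route within a single frame of max_hops slots. This bounds the
    worst-case reporting delay at max_hops slots, while nodes sleep
    for the rest of the frame to save energy.
    """
    return max_hops - hop_distance

max_hops = 4
# Slots for nodes at hop distances 1..4 from the sink.
schedule = {h: wakeup_slot(h, max_hops) for h in range(1, max_hops + 1)}
```

    With a naive unsynchronized duty cycle, a packet could wait up to a full sleep period at every hop; aligning slots to hop distance is what removes that per-hop penalty.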

  16. Adaptive Optimization of Aircraft Engine Performance Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Long, Theresa W.

    1995-01-01

    Preliminary results are presented on the development of an adaptive neural network based control algorithm to enhance aircraft engine performance. This work builds upon a previous National Aeronautics and Space Administration (NASA) effort known as Performance Seeking Control (PSC). PSC is an adaptive control algorithm which contains a model of the aircraft's propulsion system that is updated on-line to match the operation of the aircraft's actual propulsion system. Information from the on-line model is used to adapt the control system during flight, allowing optimal operation of the aircraft's propulsion system (inlet, engine, and nozzle) to improve aircraft engine performance without compromising reliability or operability. Performance Seeking Control has been shown to yield reductions in fuel flow, increases in thrust, and reductions in engine fan turbine inlet temperature. The neural network based adaptive control, like PSC, will contain a model of the propulsion system which will be used to calculate optimal control commands on-line, and it is expected to provide additional benefits beyond those of PSC. The PSC algorithm is computationally intensive, is valid only at near steady-state flight conditions, and has no way to adapt or learn on-line. These issues are being addressed in the development of the optimal neural controller: specialized neural network processing hardware is being developed to run the software, the algorithm will be valid at steady-state and transient conditions, and it will take advantage of the on-line learning capability of neural networks. Future plans include testing the neural network software and hardware prototype against an aircraft engine simulation. In this paper, the proposed neural network software and hardware are described and preliminary neural network training results are presented.

  17. Equivalent Vectors

    ERIC Educational Resources Information Center

    Levine, Robert

    2004-01-01

    The cross-product is a mathematical operation that is performed between two 3-dimensional vectors. The result is a vector that is orthogonal, or perpendicular, to both of them. Students learning about this for the first time in Calculus III are taught that if AxB = AxC, it does not necessarily follow that B = C. This seemed baffling. The…

  18. On a vector space representation in genetic algorithms for sensor scheduling in wireless sensor networks.

    PubMed

    Martins, F V C; Carrano, E G; Wanner, E F; Takahashi, R H C; Mateus, G R; Nakamura, F G

    2014-01-01

    Recent works raised the hypothesis that the assignment of a geometry to the decision-variable space of a combinatorial problem could be useful both for providing meaningful descriptions of the fitness landscape and for supporting the systematic construction of evolutionary operators (the geometric operators) that make consistent use of the space's geometric properties in the search for problem optima. This paper introduces some new geometric operators that realize searches along the combinatorial-space analogues of the geometric entities descent directions and subspaces. The new geometric operators are stated in the specific context of the wireless sensor network dynamic coverage and connectivity problem (WSN-DCCP). A genetic algorithm (GA) is developed for the WSN-DCCP using the proposed operators and is compared with a formulation based on integer linear programming (ILP) which is solved with exact methods. That ILP formulation adopts a proxy objective function based on the minimization of energy consumption in the network, in order to approximate the objective of network lifetime maximization, and a greedy approach for dealing with the system's dynamics. To the authors' knowledge, the proposed GA is the first algorithm to outperform the ILP formulation in the lifetime of the synthesized networks, while also running in much smaller computational times for large instances. PMID:24102647

  19. Performance of social network sensors during Hurricane Sandy.

    PubMed

    Kryvasheyeu, Yury; Chen, Haohui; Moro, Esteban; Van Hentenryck, Pascal; Cebrian, Manuel

    2015-01-01

    Information flow during catastrophic events is a critical aspect of disaster management. Modern communication platforms, in particular online social networks, provide an opportunity to study such flow and derive early-warning sensors, thus improving emergency preparedness and response. Performance of the social networks sensor method, based on topological and behavioral properties derived from the "friendship paradox", is studied here for over 50 million Twitter messages posted before, during, and after Hurricane Sandy. We find that differences in users' network centrality effectively translate into moderate awareness advantage (up to 26 hours); and that geo-location of users within or outside of the hurricane-affected area plays a significant role in determining the scale of such an advantage. Emotional response appears to be universal regardless of the position in the network topology, and displays characteristic, easily detectable patterns, opening a possibility to implement a simple "sentiment sensing" technique that can detect and locate disasters. PMID:25692690
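    The "friendship paradox" underlying the sensor method states that a user's friends have, on average, more friends than the user does. This can be verified deterministically on a toy graph (illustrative only, not the Twitter data):

```python
from statistics import mean

# Toy undirected friendship graph: one hub (node 0) plus low-degree users.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2)]
nodes = sorted({n for e in edges for n in e})
degree = {n: sum(n in e for e in edges) for n in nodes}

# Average degree over users vs. average degree of a randomly chosen
# friend (degree averaged over edge endpoints). The friendship paradox
# guarantees friend_avg >= user_avg, which is why monitoring the
# "friend" group yields earlier warnings than monitoring random users.
user_avg = mean(degree.values())
friend_avg = mean(degree[n] for e in edges for n in e)
```

    Averaging over edge endpoints weights each user by their degree, so high-degree (better-connected, earlier-informed) users dominate the friend sample.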

  20. Performance of Social Network Sensors during Hurricane Sandy

    PubMed Central

    Kryvasheyeu, Yury; Chen, Haohui; Moro, Esteban; Van Hentenryck, Pascal; Cebrian, Manuel

    2015-01-01

    Information flow during catastrophic events is a critical aspect of disaster management. Modern communication platforms, in particular online social networks, provide an opportunity to study such flow and derive early-warning sensors, thus improving emergency preparedness and response. Performance of the social networks sensor method, based on topological and behavioral properties derived from the “friendship paradox”, is studied here for over 50 million Twitter messages posted before, during, and after Hurricane Sandy. We find that differences in users’ network centrality effectively translate into moderate awareness advantage (up to 26 hours); and that geo-location of users within or outside of the hurricane-affected area plays a significant role in determining the scale of such an advantage. Emotional response appears to be universal regardless of the position in the network topology, and displays characteristic, easily detectable patterns, opening a possibility to implement a simple “sentiment sensing” technique that can detect and locate disasters. PMID:25692690

  1. Asynchronous transfer mode link performance over ground networks

    NASA Technical Reports Server (NTRS)

    Chow, E. T.; Markley, R. W.

    1993-01-01

    The results of an experiment to determine the feasibility of using asynchronous transfer mode (ATM) technology to support advanced spacecraft missions that require high-rate ground communications and, in particular, full-motion video are reported. Potential nodes in such a ground network include Deep Space Network (DSN) antenna stations, the Jet Propulsion Laboratory, and a set of national and international end users. The experiment simulated a lunar microrover, a lunar lander, the DSN ground communications system, and distributed science users. The users were equipped with video-capable workstations. A key feature was an optical fiber link between two high-performance workstations equipped with ATM interfaces. Video was also transmitted through JPL's institutional network to a user 8 km from the experiment. Variations in video quality depending on the networks and computers were observed, and the results are reported.

  2. Investigation of Natural Draft Cooling Tower Performance Using Neural Network

    NASA Astrophysics Data System (ADS)

    Mahdi, Qasim S.; Saleh, Saad M.; Khalaf, Basima S.

    In the present work, the Artificial Neural Network (ANN) technique is used to investigate the performance of a Natural Draft Wet Cooling Tower (NDWCT). Many factors affect the range, approach, pressure drop, and effectiveness of the cooling tower: fill type, water flow rate, air flow rate, inlet water temperature, wet-bulb temperature of air, and nozzle hole diameter. Experimental data covering the effects of these factors are used to train the network using the Back Propagation (BP) algorithm. The network includes seven input variables (Twi, hfill, mw, Taiwb, Taidb, vlow, vup) and five output variables (ma, Taowb, Two, Δp, ɛ), while the hidden layer differs for each case. Network results were compared with experimental results, and good agreement was observed between the experimental and theoretical results.
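    The seven-input, five-output architecture described above can be sketched as a single-hidden-layer feed-forward pass. A pure-Python sketch with random placeholder weights (not the trained cooling-tower model; the hidden-layer size is an assumption):

```python
import math
import random

def forward(x, w_hidden, w_out):
    """One forward pass of a fully connected net with sigmoid units.

    x: input vector (e.g. the seven tower variables Twi, hfill, mw,
    Taiwb, Taidb, vlow, vup); returns one value per output variable
    (ma, Taowb, Two, pressure drop, effectiveness). BP training would
    then adjust w_hidden and w_out to minimize the output error.
    """
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sig(sum(wi * xi for wi, xi in zip(row, x))) for row in w_hidden]
    return [sig(sum(wi * hi for wi, hi in zip(row, hidden))) for row in w_out]

random.seed(0)
n_in, n_hidden, n_out = 7, 10, 5  # hidden size is illustrative
w_hidden = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
w_out = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
outputs = forward([0.5] * n_in, w_hidden, w_out)
```

    In practice the physical inputs and outputs would be normalized to the sigmoid's useful range before training.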

  3. Hardware Efficient and High-Performance Networks for Parallel Computers.

    NASA Astrophysics Data System (ADS)

    Chien, Minze Vincent

    High performance interconnection networks are the key to high utilization and throughput in large-scale parallel processing systems. Since many interconnection problems in parallel processing, such as concentration, permutation, and broadcast problems, can be cast as sorting problems, this dissertation considers the problem of sorting on a new model, called an adaptive sorting network. It presents four adaptive binary sorters, the first two of which are ordinary combinational circuits while the last two exploit time-multiplexing and pipelining techniques. These sorter constructions demonstrate that any sequence of n bits can be sorted in O(log^2 n) bit-level delay, using O(n) constant-fanin gates. This improves the cost complexity of Batcher's binary sorters by a factor of O(log^2 n) while matching their sorting time. It is further shown that any sequence of n numbers can be sorted on the same model in O(log^2 n) comparator-level delay using O(n log n log log n) comparators. The adaptive binary sorter constructions lead to new O(n) bit-level cost concentrators and superconcentrators with O(log^2 n) bit-level delay. Their employment in recently constructed permutation and generalized connectors leads to permutation and generalized connection networks with O(n log n) bit-level cost and O(log^3 n) bit-level delay. These results provide the least bit-level cost for such networks with competitive delays. Finally, the dissertation considers a key issue in the implementation of interconnection networks, namely, the pin constraint. Current VLSI technologies can house a large number of switches in a single chip, but the mere fact that one chip cannot have too many pins precludes the possibility of implementing a large connection network on a single chip. The dissertation presents techniques for partitioning connection networks into identical modules of switches in such a way that each module is contained in a single chip with an arbitrarily specified number of pins, and that the cost of

  4. SIMD machine using cube connected cycles network architecture for vector processing

    SciTech Connect

    Wagner, R.A.; Poirier, C.J.

    1986-11-04

    This patent describes a single instruction multiple data processor comprising: processing elements, interconnected in a Cube Connected Cycles network design and using interprocessor communication links which carry one bit at a time in both directions simultaneously; controller means for controlling processor elements, which feeds each of the processor elements identical local memory addresses, identical switching control bits, identical Boolean function selection codes, and distinct activation control bits, depending on each processor's position in the Cube Connected Cycles network in a prescribed fashion; and input/output devices connected to the network by switching devices; wherein each of the processing elements comprises: two single-bit accumulator registers (A, B); two Boolean function generator units, each of which computes any one of 2^8 possible Boolean functions of three Boolean variables as specified by Boolean function codes sent two at a time by the controller to each of the processing elements; and switching circuit means, controlled by the controller, which select the three inputs to the logic function generators.
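    The Boolean function generator above — any of the 2^8 functions of three inputs, selected by an 8-bit code — is simply a truth-table lookup: bit i of the code gives the output for input combination i. A minimal sketch (the bit-ordering convention below is an assumption, not taken from the patent):

```python
def boolean_generator(code, a, b, c):
    """Evaluate one of the 2^8 Boolean functions of three variables.

    The 8-bit `code` is a truth table: the bit at position (a<<2 | b<<1 | c)
    is the function's value for that input combination, mirroring how the
    patent's controller broadcasts function-selection codes to every
    processing element.
    """
    index = (a << 2) | (b << 1) | c
    return (code >> index) & 1

AND3 = 0b10000000  # output 1 only for a=b=c=1 (index 7)
MAJ  = 0b11101000  # majority of the three inputs (indices 3, 5, 6, 7)
and_111 = boolean_generator(AND3, 1, 1, 1)
and_110 = boolean_generator(AND3, 1, 1, 0)
maj_110 = boolean_generator(MAJ, 1, 1, 0)
```

    Because the code is just data, the controller can reprogram every processing element's logic each cycle by broadcasting a new 8-bit value.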

  5. Distribution and larval habitat characterization of Anopheles moucheti, Anopheles nili, and other malaria vectors in river networks of southern Cameroon.

    PubMed

    Antonio-Nkondjio, Christophe; Ndo, Cyrille; Costantini, Carlo; Awono-Ambene, Parfait; Fontenille, Didier; Simard, Frédéric

    2009-12-01

    Despite their importance as malaria vectors, little is known of the bionomics of Anopheles nili and Anopheles moucheti. Larval collections from 24 sites situated along the dense hydrographic network of southern Cameroon were examined to assess key ecological factors associated with these mosquitoes' distribution in river networks. Morphological identification of the third- and fourth-instar larvae by microscopy revealed that 47.6% of the larvae belonged to An. nili and 22.6% to An. moucheti. Five variables were significantly associated with species distribution: the pace of flow of the river (lotic or lentic), light exposure (sunny or shady), vegetation (presence or absence), temperature, and the presence or absence of debris. Canonical correspondence analysis showed that lotic rivers exposed to light, with vegetation or debris, were the best predictors of An. nili larval abundance, whereas An. moucheti and An. ovengensis were highly associated with lentic rivers, low temperature, and the presence of Pistia. The distribution of An. nili and An. moucheti along river systems across southern Cameroon was highly correlated with environmental variables. The distribution of An. nili conforms to that of a generalist species adapted to exploiting a variety of environmental conditions, whereas An. moucheti, Anopheles ovengensis, and Anopheles carnevalei appeared to be specialist forest mosquitoes. PMID:19682965

  6. Impact of sensor installation techniques on seismic network performance

    NASA Astrophysics Data System (ADS)

    Bainbridge, Geoffrey; Laporte, Michael; Baturan, Dario; Greig, Wesley

    2015-04-01

    The magnitude of completeness (Mc) of a seismic network is determined by a number of factors, including station density, the self-noise and passband of the sensor used, the ambient noise environment, and the sensor installation method and depth. Installation techniques involving depth are of particular importance because of their impact on overall monitoring network deployment costs. We present a case study which evaluates the performance of Trillium Compact Posthole seismometers installed using different methods and depths, and assesses the impact on seismic network operation in terms of the average magnitude of completeness over the target area of interest in various monitoring applications. We evaluate three sensor installation methods: direct burial in soil at 0.5 m depth, a 5 m screwpile, and a 15 m cemented-casing borehole, at sites chosen to represent high, medium, and low ambient noise environments. In all cases, noise performance improves with depth, with noise suppression generally more prominent at higher frequencies but with significant variation from site to site. When extended to overall network performance, the observed noise suppression results in an improved (decreased) target-area average Mc. However, the extent of the improvement with depth varies significantly and can be negligible. The increased cost associated with installation at depth uses funds that could be applied to the deployment of additional stations. Using network modelling tools, we compare the improvement in magnitude of completeness and location accuracy associated with increasing installation depth to that associated with an increased number of stations. The appropriate strategy is chosen on a case-by-case basis, driven by network-specific performance requirements, deployment constraints, and site noise conditions.

  7. A new integrated approach for characterizing the soil electromagnetic properties and detecting landmines using a hand-held vector network analyzer

    NASA Astrophysics Data System (ADS)

    Lopera, Olga; Lambot, Sebastien; Slob, Evert; Vanclooster, Marnik; Macq, Benoit; Milisavljevic, Nada

    2006-05-01

    The application of ground-penetrating radar (GPR) in humanitarian demining presents two major challenges: (1) the development of affordable and practical systems to detect metallic and non-metallic antipersonnel (AP) landmines under different conditions, and (2) the development of accurate soil characterization techniques to evaluate the effects of soil properties and determine the performance of these GPR-based systems. In this paper, we present a new integrated approach for characterizing the electromagnetic (EM) properties of mine-affected soils and detecting landmines using a low-cost hand-held vector network analyzer (VNA) connected to a highly directive antenna. Soil characterization is carried out using the radar-antenna-subsurface model of Lambot et al. [1] and full-wave inversion of the radar signal focused in the time domain on the surface reflection. This methodology is integrated with background subtraction (BS) and migration to enhance landmine detection. Numerical and laboratory experiments are performed to show the effect of the soil EM properties on the detectability of the landmines and how the proposed approach can improve GPR performance.

  8. Comparison of Support Vector Machine, Neural Network, and CART Algorithms for the Land-Cover Classification Using Limited Training Data Points

    EPA Science Inventory

    Support vector machine (SVM) was applied for land-cover characterization using MODIS time-series data. Classification performance was examined with respect to training sample size, sample variability, and landscape homogeneity (purity). The results were compared to two convention...

  9. Competitive Learning Neural Network Ensemble Weighted by Predicted Performance

    ERIC Educational Resources Information Center

    Ye, Qiang

    2010-01-01

    Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic for Neural Network classifiers. Introducing a secondary output unit that receives different…

  10. Public Management and Educational Performance: The Impact of Managerial Networking.

    ERIC Educational Resources Information Center

    Meier, Kenneth J.; O'Toole, Laurence J., Jr.

    2003-01-01

    A 5-year performance analysis of managers in more than 500 school districts used a nonlinear, interactive, contingent model of management. Empirical support was found for key elements of the network-management portion of the model. Results showed that public management matters in policy implementation, but its impact is often nonlinear. (Contains…

  11. USING MULTIRAIL NETWORKS IN HIGH-PERFORMANCE CLUSTERS

    SciTech Connect

    Coll, S.; Fratchtenberg, E.; Petrini, F.; Hoisie, A.; Gurvits, L.

    2001-01-01

    Using multiple independent networks (also known as rails) is an emerging technique to overcome bandwidth limitations and enhance the fault tolerance of current high-performance clusters. We present an extensive experimental comparison of the behavior of various allocation schemes in terms of bandwidth and latency. We show that striping messages over multiple rails can substantially reduce network latency, depending on average message size, network load, and allocation scheme. The compared methods include a basic round-robin rail allocation, a local-dynamic allocation based on local knowledge, and a dynamic rail allocation that reserves both communication endpoints of a message before sending it. The last method is shown to perform better than the others at higher loads: up to 49% better than local-knowledge allocation and 37% better than round-robin allocation. This allocation scheme also shows lower latency and saturates at higher loads (for sufficiently large messages). Most importantly, the proposed allocation scheme scales well with the number of rails and message sizes. In addition, we propose a hybrid algorithm that combines the benefits of the local-dynamic allocation for short messages with those of the dynamic algorithm for large messages. Keywords: Communication Protocols, High-Performance Interconnection Networks, Performance Evaluation, Routing, Communication Libraries, Parallel Architectures.
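    The two simplest schemes compared above, round-robin and local-dynamic (least-loaded) rail allocation, can be sketched in a few lines. The interfaces below are hypothetical illustrations, not the paper's implementation:

```python
class RoundRobinRails:
    """Cycle through rails in order, regardless of load."""
    def __init__(self, n_rails):
        self.n_rails = n_rails
        self.next_rail = 0
    def allocate(self):
        rail = self.next_rail
        self.next_rail = (self.next_rail + 1) % self.n_rails
        return rail

class LocalDynamicRails:
    """Pick the rail with the fewest locally pending messages."""
    def __init__(self, n_rails):
        self.pending = [0] * n_rails
    def allocate(self):
        rail = self.pending.index(min(self.pending))
        self.pending[rail] += 1
        return rail
    def complete(self, rail):
        self.pending[rail] -= 1

rr = RoundRobinRails(3)
rr_order = [rr.allocate() for _ in range(5)]

ld = LocalDynamicRails(3)
first, second = ld.allocate(), ld.allocate()
ld.complete(first)          # first message finishes
third = ld.allocate()       # its rail is least loaded again, so reused
```

    The fully dynamic scheme in the paper goes one step further, reserving the receiver's endpoint as well before committing a message to a rail.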

  12. High Performance Computing and Networking for Science--Background Paper.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    The Office of Technology Assessment is conducting an assessment of the effects of new information technologies--including high performance computing, data networking, and mass data archiving--on research and development. This paper offers a view of the issues and their implications for current discussions about Federal supercomputer initiatives…

  13. Optical performance monitoring (OPM) in next-generation optical networks

    NASA Astrophysics Data System (ADS)

    Neuhauser, Richard E.

    2002-09-01

    DWDM transmission is the enabling technology currently pushing transmission bandwidths in core networks towards the multi-Tb/s regime, with unregenerated transmission distances of several thousand km. Such systems represent the basic platform for transparent DWDM networks, enabling both the transport of client signals with different data formats and bit rates (e.g. SDH/SONET, IP over WDM, Gigabit Ethernet, etc.) and dynamic provisioning of optical wavelength channels. Optical Performance Monitoring (OPM) will be one of the key elements providing the capabilities of link set-up/control, fault localization, protection/restoration, and path supervision for stable network operation, and it will become a major differentiator in next-generation networks. Currently, signal quality is usually characterized by DWDM power levels, spectrum-interpolated Optical Signal-to-Noise Ratio (OSNR), and channel wavelengths. On the other hand, there is an urgent need for new OPM technologies and strategies providing solutions for in-channel OSNR, signal quality measurement, fault localization, and fault identification. Innovative research and product activities include polarization nulling, electrical and optical amplitude sampling, BER estimation, electrical spectrum analysis, and pilot-tone technologies. This presentation focuses on reviewing the requirements and solution concepts in current and next-generation networks with respect to Optical Performance Monitoring.
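    The central OPM quantity, OSNR, is a power ratio expressed in decibels. A minimal sketch of the conversion (the power values are illustrative, not measurements):

```python
from math import log10

def osnr_db(signal_mw, noise_mw):
    """Optical signal-to-noise ratio in dB from powers in milliwatts.

    In-channel OSNR is the quality metric that spectrum interpolation
    struggles to estimate in transparent DWDM networks, motivating
    techniques such as polarization nulling.
    """
    return 10 * log10(signal_mw / noise_mw)

# A signal 1000x stronger than the in-band noise floor -> 30 dB.
value = osnr_db(1.0, 0.001)
```

    Measured OSNR is normally referenced to a fixed noise bandwidth (commonly 0.1 nm), a detail omitted from this sketch.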

  14. Performance analysis of wireless sensor networks in geophysical sensing applications

    NASA Astrophysics Data System (ADS)

    Uligere Narasimhamurthy, Adithya

    Performance is an important criterion to consider before switching from a wired network to a wireless sensing network, especially in geophysical sensing, where the quality of the sensing system is measured by the precision of the acquired signal. Can a wireless sensing network maintain the same reliability and quality metrics that a wired system provides? Our work focuses on evaluating the wireless GeoMote sensor motes that were developed by previous computer science graduate students at Mines. Specifically, we conducted a set of experiments, namely WalkAway and Linear Array experiments, to characterize the performance of the wireless motes. The motes were also equipped with the Sticking Heartbeat Aperture Resynchronization Protocol (SHARP), a time synchronization protocol developed by a previous computer science graduate student at Mines. This protocol automatically synchronizes the motes' internal clocks and reduces time synchronization errors. We also collected passive data to evaluate the response of GeoMotes to the various frequency components associated with seismic waves. With the data collected from these experiments, we evaluated the performance of the SHARP protocol and compared the performance of our GeoMote wireless system against an industry-standard wired seismograph system (Geometrics Geode). Using arrival-time analysis and seismic velocity calculations, we set out to answer the following question: can our wireless sensing system (GeoMotes) perform similarly to a traditional wired system in a realistic scenario?

  15. Performance Evaluation in Network-Based Parallel Computing

    NASA Technical Reports Server (NTRS)

    Dezhgosha, Kamyar

    1996-01-01

    Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing SUNSPARCs' network with PVM (Parallel Virtual Machine), which is a software system for linking clusters of machines. Second, a set of three basic applications was selected. The applications consist of a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the restricting factor for performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps), which will allow us to extend our study to newer applications, performance metrics, and configurations.

  16. Performance of Neural Networks Methods In Intrusion Detection

    SciTech Connect

    Dao, V N; Vemuri, R

    2001-07-09

    By accurately profiling the users via their unique attributes, it is possible to view the intrusion detection problem as a classification of authorized users and intruders. This paper demonstrates that artificial neural network (ANN) techniques can be used to solve this classification problem. Furthermore, the paper compares the performance of three neural network methods in classifying authorized users and intruders using synthetically generated data. The three methods are gradient descent back propagation (BP) with momentum, conjugate gradient BP, and quasi-Newton BP.
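As a rough illustration of the first of these methods, the sketch below trains a toy logistic classifier with gradient descent plus momentum. The paper's actual ANN architecture, features, and data are not specified here; the two-feature data set and all hyperparameters are purely illustrative.

```python
import math

def train_logistic_momentum(data, labels, lr=0.5, beta=0.9, epochs=200):
    """Gradient descent with momentum on a 2-feature logistic classifier.
    The velocity terms accumulate past gradients, smoothing the descent."""
    w, b = [0.0, 0.0], 0.0
    vw, vb = [0.0, 0.0], 0.0
    for _ in range(epochs):
        gw, gb = [0.0, 0.0], 0.0
        for x, y in zip(data, labels):
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y                      # cross-entropy gradient w.r.t. z
            gw[0] += err * x[0]
            gw[1] += err * x[1]
            gb += err
        n = len(data)
        # momentum update: velocity = beta * velocity - lr * mean gradient
        vw = [beta * v - lr * g / n for v, g in zip(vw, gw)]
        vb = beta * vb - lr * gb / n
        w = [wi + vi for wi, vi in zip(w, vw)]
        b += vb
    return w, b

# toy "authorized user (0) vs intruder (1)" data, linearly separable
data = [(0.0, 0.1), (0.2, 0.0), (1.0, 0.9), (0.8, 1.0)]
labels = [0, 0, 1, 1]
w, b = train_logistic_momentum(data, labels)
classify = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

The conjugate gradient and quasi-Newton variants compared in the paper replace the velocity update with search directions derived from curvature information, but the surrounding training loop is the same.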

  17. Network Performance Testing for the BaBar Event Builder

    SciTech Connect

    Pavel, Tomas J

    1998-11-17

    We present an overview of the design of event building in the BABAR Online, based upon TCP/IP and commodity networking technology. BABAR is a high-rate experiment to study CP violation in asymmetric e+e- collisions. In order to validate the event-builder design, an extensive program was undertaken to test the TCP performance delivered by various machine types with both ATM OC-3 and Fast Ethernet networks. The buffering characteristics of several candidate switches were examined and found to be generally adequate for our purposes. We highlight the results of this testing and present some of the more significant findings.

  18. Performance evaluation of a routing algorithm based on Hopfield Neural Network for network-on-chip

    NASA Astrophysics Data System (ADS)

    Esmaelpoor, Jamal; Ghafouri, Abdollah

    2015-12-01

    Network on chip (NoC) has emerged as a solution to overcome the growing complexity and design challenges of systems on chip. A proper routing algorithm is a key issue in NoC design: an appropriate routing method balances load across the network channels and keeps path lengths as short as possible. This survey investigates the performance of a routing algorithm based on a Hopfield Neural Network, a dynamic-programming approach that provides optimal paths and network monitoring in real time. The aim of this article is to analyse the possibility of using a neural network as a router. The algorithm selects the path with the lowest delay (cost) from source to destination; in other words, the path a message takes from source to destination depends on the network traffic situation at the time, and it is the fastest one. The simulation results show that the proposed approach efficiently improves average delay, throughput and network congestion, while the increase in power consumption is almost negligible.

  19. Team Assembly Mechanisms Determine Collaboration Network Structure and Team Performance

    PubMed Central

    Guimerà, Roger; Uzzi, Brian; Spiro, Jarrett; Nunes Amaral, Luís A.

    2007-01-01

    Agents in creative enterprises are embedded in networks that inspire, support, and evaluate their work. Here, we investigate how the mechanisms by which creative teams self-assemble determine the structure of these collaboration networks. We propose a model for the self-assembly of creative teams that has its basis in three parameters: team size, the fraction of newcomers in new productions, and the tendency of incumbents to repeat previous collaborations. The model suggests that the emergence of a large connected community of practitioners can be described as a phase transition. We find that team assembly mechanisms determine both the structure of the collaboration network and team performance for teams derived from both artistic and scientific fields. PMID:15860629
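The three-parameter self-assembly model can be sketched as a simulation. The precise selection rules and the default parameter values below are assumptions based only on the abstract's description (team size `m`, newcomer fraction governed by `p`, repeat-collaboration tendency `q`); the published model's exact mechanics may differ.

```python
import random

def assemble_teams(n_teams, m=3, p=0.5, q=0.4, seed=1):
    """Sketch of team self-assembly: each slot is filled by an incumbent
    with probability p; an incumbent is drawn from past collaborators of
    current members with probability q, otherwise at random; with
    probability 1-p a newcomer joins. Returns the agent count and the
    collaboration edge set that the process grows."""
    random.seed(seed)
    agents = 0               # number of agents seen so far
    collaborators = {}       # agent -> set of past collaborators
    edges = set()
    for _ in range(n_teams):
        team = []
        for _ in range(m):
            pick = None
            if agents and random.random() < p:          # incumbent slot
                pool = set()
                if team and random.random() < q:        # prefer repeats
                    for a in team:
                        pool |= collaborators.get(a, set())
                    pool -= set(team)
                if pool:
                    pick = random.choice(sorted(pool))
                else:
                    pick = random.randrange(agents)     # random incumbent
            if pick is None:                            # newcomer slot
                pick = agents
                agents += 1
            if pick not in team:
                team.append(pick)
        for i, a in enumerate(team):                    # record collaborations
            for b in team[i + 1:]:
                edges.add((min(a, b), max(a, b)))
                collaborators.setdefault(a, set()).add(b)
                collaborators.setdefault(b, set()).add(a)
    return agents, edges
```

Sweeping `p` and `q` in such a simulation is how one would probe the phase transition the abstract describes: high `p` and `q` keep the network dense and connected, while low values fragment it.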

  20. Enhanced memory performance thanks to neural network assortativity

    SciTech Connect

    Franciscis, S. de; Johnson, S.; Torres, J. J.

    2011-03-24

    The behaviour of many complex dynamical systems has been found to depend crucially on the structure of the underlying networks of interactions. An intriguing feature of empirical networks is their assortativity--i.e., the extent to which the degrees of neighbouring nodes are correlated. However, until very recently it was difficult to take this property into account analytically, most work being exclusively numerical. We get round this problem by considering ensembles of equally correlated graphs and apply this novel technique to the case of attractor neural networks. Assortativity turns out to be a key feature for memory performance in these systems - so much so that for sufficiently correlated topologies the critical temperature diverges. We predict that artificial and biological neural systems could significantly enhance their robustness to noise by developing positive correlations.

  1. Statistical performance evaluation of ECG transmission using wireless networks.

    PubMed

    Shakhatreh, Walid; Gharaibeh, Khaled; Al-Zaben, Awad

    2013-07-01

    This paper presents a simulation of the transmission of biomedical signals (using the ECG signal as an example) over wireless networks. The effects of channel impairments, including SNR, path-loss exponent and path delay, and of network impairments such as packet loss probability, on the diagnosability of the received ECG signal are investigated. The ECG signal is transmitted through a wireless network system composed of two communication protocols: an 802.15.4 ZigBee protocol and an 802.11b protocol. The performance of the transmission is evaluated using higher-order statistics parameters such as kurtosis and negative entropy, in addition to common techniques such as PRD, RMS and cross-correlation. PMID:23777301
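The conventional metrics mentioned can be computed directly from the transmitted and received traces. A minimal sketch follows; note that the exact PRD variant used in the paper is an assumption (some definitions subtract the signal mean before normalizing), and the sample signal is fabricated for illustration.

```python
import math

def prd(original, received):
    """Percentage root-mean-square difference between two ECG traces
    (un-normalized variant: no mean subtraction)."""
    num = sum((x - y) ** 2 for x, y in zip(original, received))
    den = sum(x ** 2 for x in original)
    return 100.0 * math.sqrt(num / den)

def cross_correlation(original, received):
    """Zero-lag normalized cross-correlation of the two traces."""
    mx = sum(original) / len(original)
    my = sum(received) / len(received)
    num = sum((x - mx) * (y - my) for x, y in zip(original, received))
    den = math.sqrt(sum((x - mx) ** 2 for x in original) *
                    sum((y - my) ** 2 for y in received))
    return num / den

sig = [0.0, 1.0, 0.2, -0.5, 0.0, 0.8]      # toy ECG-like samples
noisy = [s + 0.05 for s in sig]            # e.g., channel-induced distortion
```

A low PRD and a cross-correlation near 1.0 indicate that the received signal remains diagnosable despite channel and network impairments.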

  2. Communications performance of an undersea acoustic large-area network

    NASA Astrophysics Data System (ADS)

    Kriewaldt, Hannah A.; Rice, Joseph A.

    2005-04-01

    The U.S. Navy is developing Seaweb acoustic networking capability for integrating undersea systems. Seaweb architectures generally involve a wide-area network of fixed nodes consistent with future distributed autonomous sensors on the seafloor. Mobile nodes including autonomous undersea vehicles (AUVs) and submarines operate in the context of the grid by using the fixed nodes as both navigation reference points and communication access points. In October and November 2004, Theater Anti-Submarine Warfare Exercise (TASWEX04) showcased Seaweb in its first fleet appearance. This paper evaluates the TASWEX04 Seaweb performance in support of networked communications between a submarine and a surface ship. Considerations include physical-layer dependencies on the 9-14 kHz acoustic channel, such as refraction, wind-induced ambient noise, and submarine aspect angle. [Work supported by SSC San Diego.]

  3. Performance evaluation of a FPGA implementation of a digital rotation support vector machine

    NASA Astrophysics Data System (ADS)

    Lamela, Horacio; Gimeno, Jesús; Jiménez, Matías; Ruiz, Marta

    2008-04-01

    In this paper we provide a simple and fast hardware implementation for a Support Vector Machine (SVM). By using the CORDIC algorithm and implementing a 2-based exponential kernel that allows us to simplify operations, we overcome the problems caused by too many internal multiplications found in the classification process, both while applying the Kernel formula and later on multiplying by the weights. We show a simple example of classification with the algorithm and analyze the classification speed and accuracy.
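The kernel expansion behind the classification step can be sketched in software. The exact kernel form below, K(x, z) = 2^(-gamma * ||x - z||^2), and all numeric values are assumptions chosen to show why a base-2 exponential suits shift-based FPGA arithmetic; they are not taken from the paper.

```python
def kernel2(x, z, gamma=1.0):
    """2-based exponential kernel, K = 2**(-gamma * ||x - z||**2).
    A base-2 exponent maps naturally onto shift logic in hardware;
    the exact form used on the FPGA is an assumption here."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, z))
    return 2.0 ** (-gamma * d2)

def svm_decision(x, support_vectors, alphas, labels, bias):
    """Kernel expansion of the SVM decision function:
    sign(sum_i alpha_i * y_i * K(sv_i, x) + b)."""
    s = sum(a * y * kernel2(sv, x)
            for sv, a, y in zip(support_vectors, alphas, labels))
    return 1 if s + bias >= 0 else -1

# hypothetical trained parameters for a two-support-vector toy problem
svs = [(0.0, 0.0), (1.0, 1.0)]
alphas = [1.0, 1.0]
labels = [-1, 1]
```

In the hardware described by the abstract, both the squared distance and the base-2 exponentiation would be computed by CORDIC/shift units rather than floating-point multiplies, avoiding the multiplication bottleneck the authors target.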

  4. On the Performance of TCP Spoofing in Satellite Networks

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph; Allman, Mark

    2001-01-01

    In this paper, we analyze the performance of Transmission Control Protocol (TCP) in a network that consists of both satellite and terrestrial components. One method, proposed by outside research, to improve the performance of data transfers over satellites is to use a performance enhancing proxy often dubbed 'spoofing.' Spoofing involves the transparent splitting of a TCP connection between the source and destination by some entity within the network path. In order to analyze the impact of spoofing, we constructed a simulation suite based around the network simulator ns-2. The simulation reflects a host with a satellite connection to the Internet and allows the option to spoof connections just prior to the satellite. The methodology used in our simulation allows us to analyze spoofing over a large range of file sizes and under various congested conditions, while prior work on this topic has primarily focused on bulk transfers with no congestion. As a result of these simulations, we find that the performance of spoofing is dependent upon a number of conditions.

  5. Integrated System for Performance Monitoring of the ATLAS TDAQ Network

    NASA Astrophysics Data System (ADS)

    Octavian Savu, Dan; Al-Shabibi, Ali; Martin, Brian; Sjoen, Rune; Batraneanu, Silvia Maria; Stancu, Stefan

    2011-12-01

    The ATLAS TDAQ Network consists of three separate networks spanning four levels of the experimental building. Over 200 edge switches and 5 multi-blade chassis routers are used to interconnect 2000 processors, adding up to more than 7000 high-speed interfaces. In order to substantially speed up ad-hoc and post-mortem analysis, a scalable, yet flexible, integrated system for monitoring both network statistics and environmental conditions, processor parameters and data-taking characteristics was required. For successful up-to-the-minute monitoring, information from many SNMP-compliant devices, independent databases and custom APIs was gathered, stored and displayed in an optimal way. Easy navigation and compact aggregation of multiple data sources were the main requirements; characteristics not found in any of the tested products, either open-source or commercial. This paper describes how performance, scalability and display issues were addressed and what challenges the project faced during development and deployment. A full set of modules, including a fast polling SNMP engine, user interfaces using the latest web technologies and caching mechanisms, has been designed and developed from scratch. Over the last year the system proved to be stable and reliable, replacing the previous performance monitoring system and extending its capabilities. Currently it is operated using a precision interval of 25 seconds (the industry standard is 300 seconds). Although it was developed to address the needs for integrated performance monitoring of the ATLAS TDAQ network, the package can be used for monitoring any network with rigid demands of precision and scalability exceeding normal industry standards.

  6. Design and implementation of a high performance network security processor

    NASA Astrophysics Data System (ADS)

    Wang, Haixin; Bai, Guoqiang; Chen, Hongyi

    2010-03-01

    The last few years have seen many significant progresses in the field of application-specific processors. One example is network security processors (NSPs) that perform various cryptographic operations specified by network security protocols and help to offload the computation-intensive burdens from network processors (NPs). This article presents a high performance NSP system architecture implementation intended for both internet protocol security (IPSec) and secure socket layer (SSL) protocol acceleration, which are widely employed in virtual private network (VPN) and e-commerce applications. The efficient dual one-way pipelined data transfer skeleton and optimised integration scheme of the heterogeneous parallel crypto engine arrays lead to a Gbps-rate NSP, which is programmable with domain-specific descriptor-based instructions. The descriptor-based control flow fragments large data packets and distributes them to the crypto engine arrays, which fully utilises the parallel computation resources and improves the overall system data throughput. A prototyping platform for this NSP design is implemented with a Xilinx XC3S5000 based FPGA chip set. Results show that the design gives a peak throughput for the IPSec ESP tunnel mode of 2.85 Gbps with over 2100 full SSL handshakes per second at a clock rate of 95 MHz.

  7. Parallel access alignment network with barrel switch implementation for d-ordered vector elements

    NASA Technical Reports Server (NTRS)

    Barnes, George H. (Inventor)

    1980-01-01

    An alignment network between N parallel data input ports and N parallel data output ports includes a first and a second barrel switch. The first barrel switch, fed by the N parallel input ports, shifts its N outputs and in turn feeds the N-1 input data paths of the second barrel switch according to the relationship x = k^y modulo N, wherein x represents the output data path ordering of the first barrel switch, y represents the input data path ordering of the second barrel switch, and k equals a primitive root of the number N. The zero (0) ordered output data path of the first barrel switch is fed directly to the zero-ordered output port. The N-1 output data paths of the second barrel switch are connected to the N output ports in the reverse ordering of the connections between the output data paths of the first barrel switch and the input data paths of the second barrel switch. The second switch is controlled by a value m, which in the preferred embodiment is produced at the output of a ROM addressed by the value d, wherein d represents the incremental spacing or distance between data elements to be accessed from the N input ports, and m is generated therefrom according to the relationship d = k^m modulo N.
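The index relations in this record can be computed directly. The sketch below (with N = 5 and primitive root k = 2 as illustrative values, not taken from the patent) builds the first switch's output ordering and the d-to-m lookup that the patent realizes as a ROM:

```python
def primitive_root_powers(k, N):
    """Output ordering of the first barrel switch: x = k**y mod N
    for y = 1 .. N-1 (path 0 bypasses the switches directly)."""
    return [pow(k, y, N) for y in range(1, N)]

def rom_table(k, N):
    """The d -> m lookup (a ROM in the patent): solves d = k**m mod N
    by tabulating the discrete logarithm base k."""
    return {pow(k, m, N): m for m in range(1, N)}

# Illustrative instance: N = 5 is prime, and k = 2 is a primitive root mod 5,
# so the powers of k sweep every nonzero residue exactly once.
N, k = 5, 2
ordering = primitive_root_powers(k, N)   # [2, 4, 3, 1]
rom = rom_table(k, N)
```

Because k is a primitive root, the powers k^y mod N visit every nonzero residue exactly once, which is what makes the wiring a valid permutation and the ROM lookup well defined for every access distance d that is coprime to N.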

  8. Comparison of Bayesian network and support vector machine models for two-year survival prediction in lung cancer patients treated with radiotherapy

    SciTech Connect

    Jayasurya, K.; Fung, G.; Yu, S.; Dehing-Oberije, C.; De Ruysscher, D.; Hope, A.; De Neve, W.; Lievens, Y.; Lambin, P.; Dekker, A. L. A. J.

    2010-04-15

    Purpose: Classic statistical and machine learning models such as support vector machines (SVMs) can be used to predict cancer outcome, but often only perform well if all the input variables are known, which is unlikely in the medical domain. Bayesian network (BN) models have a natural ability to reason under uncertainty and might handle missing data better. In this study, the authors hypothesize that a BN model can predict two-year survival in non-small cell lung cancer (NSCLC) patients as accurately as SVM, but will predict survival more accurately when data are missing. Methods: A BN and SVM model were trained on 322 inoperable NSCLC patients treated with radiotherapy from Maastricht and validated in three independent data sets of 35, 47, and 33 patients from Ghent, Leuven, and Toronto. Missing variables occurred in these data sets, with only 37, 28, and 24 patients having a complete data set. Results: The BN model structure and parameter learning identified gross tumor volume size, performance status, and number of positive lymph nodes on a PET as prognostic factors for two-year survival. When validated in the full validation sets of Ghent, Leuven, and Toronto, the BN model had an AUC of 0.77, 0.72, and 0.70, respectively. A SVM model based on the same variables had an overall worse performance (AUC 0.71, 0.68, and 0.69), especially in the Ghent set, which had the highest percentage of missing values for the important GTV size variable. When only patients with complete data sets were considered, the BN and SVM models performed more alike. Conclusions: Within the limitations of this study, the hypothesis is supported that BN models are better at handling missing data than SVM models and are therefore more suitable for the medical domain. Future work should focus on improving the BN performance by including more patients, more variables, and more diversity.

  9. Copercolating Networks: An Approach for Realizing High-Performance Transparent Conductors using Multicomponent Nanostructured Networks

    NASA Astrophysics Data System (ADS)

    Das, Suprem R.; Sadeque, Sajia; Jeong, Changwook; Chen, Ruiyi; Alam, Muhammad A.; Janes, David B.

    2016-06-01

    Although transparent conductive oxides such as indium tin oxide (ITO) are widely employed as transparent conducting electrodes (TCEs) for applications such as touch screens and displays, new nanostructured TCEs are of interest for future applications, including emerging transparent and flexible electronics. A number of two-dimensional networks of nanostructured elements have been reported, including metallic nanowire networks consisting of silver nanowires, metallic carbon nanotubes (m-CNTs), copper nanowires or gold nanowires, and metallic mesh structures. In these single-component systems, it has generally been difficult to achieve sheet resistances that are comparable to ITO at a given broadband optical transparency. A relatively new third category of TCEs consists of networks of 1D-1D and 1D-2D nanocomposites (such as silver nanowires and CNTs, silver nanowires and polycrystalline graphene, or silver nanowires and reduced graphene oxide), which have demonstrated TCE performance comparable to, or better than, ITO. In such hybrid networks, copercolation between the two components can lead to relatively low sheet resistances at nanowire densities corresponding to high optical transmittance. This review provides an overview of reported hybrid networks, including a comparison of the performance regimes achievable with those of ITO and single-component nanostructured networks. The performance is compared to that expected from bulk thin films and analyzed in terms of the copercolation model. In addition, performance characteristics relevant for flexible and transparent applications are discussed. The new TCEs are promising, but significant work must be done to ensure earth abundance, stability, and reliability so that they can eventually replace traditional ITO-based transparent conductors.

  10. Sensor Networking Testbed with IEEE 1451 Compatibility and Network Performance Monitoring

    NASA Technical Reports Server (NTRS)

    Gurkan, Deniz; Yuan, X.; Benhaddou, D.; Figueroa, F.; Morris, Jonathan

    2007-01-01

    Design and implementation of a testbed for testing and verifying IEEE 1451-compatible sensor systems with network performance monitoring is of significant importance. The performance parameters measurement as well as decision support systems implementation will enhance the understanding of sensor systems with plug-and-play capabilities. The paper will present the design aspects for such a testbed environment under development at University of Houston in collaboration with NASA Stennis Space Center - SSST (Smart Sensor System Testbed).

  11. On using multiple routing metrics with destination sequenced distance vector protocol for MultiHop wireless ad hoc networks

    NASA Astrophysics Data System (ADS)

    Mehic, M.; Fazio, P.; Voznak, M.; Partila, P.; Komosny, D.; Tovarek, J.; Chmelikova, Z.

    2016-05-01

    A mobile ad hoc network is a collection of mobile nodes which communicate without a fixed backbone or centralized infrastructure. Due to the frequent mobility of nodes, routes connecting two distant nodes may change. Therefore, it is not possible to establish a priori fixed paths for message delivery through the network. Because of its importance, routing is the most studied problem in mobile ad hoc networks. In addition, if Quality of Service (QoS) is demanded, one must guarantee the QoS not only over a single hop but over an entire wireless multi-hop path, which may not be a trivial task. In turn, this requires the propagation of QoS information within the network. The key to the support of QoS reporting is QoS routing, which provides path QoS information at each source. To support QoS for real-time traffic one needs to know not only the minimum delay on the path to the destination but also the bandwidth available on it. Throughput, end-to-end delay, and routing overhead are the traditional performance metrics used to evaluate the performance of a routing protocol. To obtain additional information about a link, most link-quality metrics are based on estimating link loss probabilities by broadcasting probe packets. In this paper, we address the problem of including multiple routing metrics in existing routing packets that are broadcast through the network. We evaluate the efficiency of this approach with a modified version of the DSDV routing protocol in the ns-3 simulator.
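As an illustration of a probe-derived link metric of the kind the abstract describes, the sketch below computes an ETX-style cost from forward/reverse probe loss probabilities and runs Dijkstra over the resulting link costs. ETX is a stand-in here; the specific metrics combined in the paper are not named in the abstract, and the toy topology is invented.

```python
import heapq

def etx(p_fwd_loss, p_rev_loss):
    """Expected transmission count derived from probe-measured loss
    probabilities in each direction (both probe and ack must survive)."""
    return 1.0 / ((1.0 - p_fwd_loss) * (1.0 - p_rev_loss))

def best_path(graph, src, dst):
    """Dijkstra over additive link costs; graph[u] = {v: cost}."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, c in graph[u].items():
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:                    # walk predecessors back to src
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# two clean hops beat one lossy hop once ETX is the metric
g = {"A": {"B": etx(0.5, 0.5), "C": etx(0.1, 0.1)},
     "C": {"B": etx(0.1, 0.1)},
     "B": {}}
```

With hop count as the metric the direct A-B link would win; with the loss-aware metric the two-hop route through C does, which is exactly the behavioral difference a multi-metric DSDV evaluation is after.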

  12. The Algerian Seismic Network: Performance from data quality analysis

    NASA Astrophysics Data System (ADS)

    Yelles, Abdelkarim; Allili, Toufik; Alili, Azouaou

    2013-04-01

    densify the network and to enhance performance of the Algerian Digital Seismic Network.

  13. Design and Performance Analysis of Incremental Networked Predictive Control Systems.

    PubMed

    Pang, Zhong-Hua; Liu, Guo-Ping; Zhou, Donghua

    2016-06-01

    This paper is concerned with the design and performance analysis of networked control systems with network-induced delay, packet disorder, and packet dropout. Based on the incremental form of the plant input-output model and an incremental error feedback control strategy, an incremental networked predictive control (INPC) scheme is proposed to actively compensate for the round-trip time delay resulting from the above communication constraints. The output tracking performance and closed-loop stability of the resulting INPC system are considered for two cases: 1) plant-model match case and 2) plant-model mismatch case. For the former case, the INPC system can achieve the same output tracking performance and closed-loop stability as those of the corresponding local control system. For the latter case, a sufficient condition for the stability of the closed-loop INPC system is derived using the switched system theory. Furthermore, for both cases, the INPC system can achieve a zero steady-state output tracking error for step commands. Finally, both numerical simulations and practical experiments on an Internet-based servo motor system illustrate the effectiveness of the proposed method. PMID:26186798
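A minimal sketch of the delay-compensation idea follows: a first-order plant, a measurement delay only, and deadbeat incremental control. This is a deliberate simplification of the INPC scheme for the plant-model match case, not the paper's design; the plant coefficients and delay are illustrative.

```python
def simulate_inpc(a=0.8, b=1.0, r=1.0, delay=3, steps=40):
    """The controller sees y delayed by `delay` steps, forward-predicts
    the plant model to the current step using the known control history,
    then applies an incremental (deadbeat, for this first-order model)
    correction toward the setpoint r."""
    y = [0.0]
    u_hist = []
    u_prev = 0.0
    for k in range(steps):
        if k >= delay:
            y_pred = y[k - delay]
            for j in range(k - delay, k):   # replay known inputs on the model
                y_pred = a * y_pred + b * u_hist[j]
        else:
            y_pred = 0.0                    # no measurement yet; assume rest
        u = (r - a * y_pred) / b            # deadbeat: drive y[k+1] to r
        du = u - u_prev                     # incremental form of the input
        u_prev = u_prev + du                # actuator applies the increment
        u_hist.append(u_prev)
        y.append(a * y[k] + b * u_hist[k])  # plant evolves with applied input
    return y
```

Because the model matches the plant, the prediction is exact once `k >= delay` and the output settles at the setpoint with zero steady-state error, mirroring the plant-model match result in the abstract; under model mismatch the prediction error would persist and stability would need the switched-system analysis the paper develops.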

  14. Network Performance Measurements for NASA's Earth Observation System

    NASA Technical Reports Server (NTRS)

    Loiacono, Joe; Gormain, Andy; Smith, Jeff

    2004-01-01

    NASA's Earth Observation System (EOS) Project studies all aspects of planet Earth from space, including climate change, and ocean, ice, land, and vegetation characteristics. It consists of about 20 satellite missions over a period of about a decade. Extensive collaboration is used, both with other U.S. agencies (e.g., National Oceanic and Atmospheric Administration (NOAA), United States Geological Survey (USGS), Department of Defense (DoD)) and international agencies (e.g., European Space Agency (ESA), Japan Aerospace Exploration Agency (JAXA)), to improve cost effectiveness and obtain otherwise unavailable data. Scientific researchers are located at research institutions worldwide, primarily government research facilities and research universities. The EOS project makes extensive use of networks to support data acquisition, data production, and data distribution. Many of these functions impose requirements on the networks, including throughput and availability. In order to verify that these requirements are being met, and to be proactive in recognizing problems, NASA conducts ongoing performance measurements. The purpose of this paper is to examine techniques used by NASA to measure the performance of the networks used by EOSDIS (EOS Data and Information System) and to indicate how this performance information is used.

  15. Enhancement of Network Performance through Integration of Borehole Stations

    NASA Astrophysics Data System (ADS)

    Korger, Edith; Plenkers, Katrin; Clinton, John; Kraft, Toni; Diehl, Tobias; Husen, Stephan; Schnellmann, Michael

    2014-05-01

    In order to improve the detection and characterisation of weak seismic events across northern Switzerland/southern Germany, the Swiss Digital Seismic Network has installed 10 new seismic stations during 2012 and 2013. The newly densified network was funded within a 10-year project by NAGRA and is expected to monitor seismicity with a magnitude of completeness Mc (ML) below 1.3 and provide high-quality locations for all these events. The goal of this project is the monitoring of areas surrounding potential nuclear waste repositories, in order to gain a thorough understanding of the seismotectonic processes and a consequent evaluation of the seismic hazard in the region. Northern Switzerland lies in a molasse basin and is densely populated. Therefore it is a major challenge in this region to find stations with noise characteristics low enough to meet the monitoring requirements. The new stations include three borehole sites equipped with 1 Hz Lennartz LE3D-BH velocity sensors (depths between 120 and 160 m), which are at critical locations for the new network but in areas where the ambient noise at the surface is too high for conventional surface stations. At each borehole, a strong-motion seismometer is also installed at the surface. By placing the seismometers at depth, the ambient noise level is significantly lowered, which means detection of smaller local and larger regional events is enhanced. We present here a comparison of the performance of each of the three borehole stations, reflecting on the improvement in noise compared to surface installations at these sites, as well as with other conventional surface stations within the network. We also demonstrate the benefits in operational network performance, in terms of earthquakes detected and located, which arise from installing borehole stations with lower background noise.

  16. Simulation Modeling and Performance Evaluation of Space Networks

    NASA Technical Reports Server (NTRS)

    Jennings, Esther H.; Segui, John

    2006-01-01

    In space exploration missions, the coordinated use of spacecraft as communication relays increases the efficiency of the endeavors. To conduct trade-off studies of the performance and resource usage of different communication protocols and network designs, JPL designed a comprehensive extendable tool, the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE). The design and development of MACHETE began in 2000 and is constantly evolving. Currently, MACHETE contains Consultative Committee for Space Data Systems (CCSDS) protocol standards such as Proximity-1, Advanced Orbiting Systems (AOS), Packet Telemetry/Telecommand, Space Communications Protocol Specification (SCPS), and the CCSDS File Delivery Protocol (CFDP). MACHETE uses the Aerospace Corporation's Satellite Orbital Analysis Program (SOAP) to generate the orbital geometry information and contact opportunities. Matlab scripts provide the link characteristics. At the core of MACHETE is a discrete event simulator, QualNet. Delay Tolerant Networking (DTN) is an end-to-end architecture providing communication in and/or through highly stressed networking environments. Stressed networking environments include those with intermittent connectivity, large and/or variable delays, and high bit error rates. To provide its services, the DTN protocols reside at the application layer of the constituent internets, forming a store-and-forward overlay network. The key capabilities of the bundling protocols include custody-based reliability, ability to cope with intermittent connectivity, ability to take advantage of scheduled and opportunistic connectivity, and late binding of names to addresses. In this presentation, we report on the addition of MACHETE models needed to support DTN, namely the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance and demonstrate the use of the DTN protocol.

  17. Coexistence: Threat to the Performance of Heterogeneous Network

    NASA Astrophysics Data System (ADS)

    Sharma, Neetu; Kaur, Amanpreet

    2010-11-01

    Wireless technology is gaining broad acceptance as users opt for the freedom that only wireless networks can provide. Well-accepted wireless communication technologies generally operate in frequency bands that are shared among several users, often using different RF schemes. This is true in particular for WiFi, Bluetooth, and more recently ZigBee. All three operate in the unlicensed 2.4 GHz band, also known as the ISM band, which has been key to the development of a competitive and innovative market for wireless embedded devices. But, as with any resource held in common, it is crucial that those technologies coexist peacefully to allow each user of the band to fulfill its communication goals. This has led to an increase in wireless devices intended for use in IEEE 802.11 wireless local area networks (WLANs) and wireless personal area networks (WPANs), both of which support operation in the crowded 2.4 GHz industrial, scientific and medical (ISM) band. Despite efforts made by standardization bodies to ensure smooth coexistence, it may occur that communication technologies transmitting, for instance, at very different power levels interfere with each other. In particular, it has been pointed out that ZigBee could potentially experience interference from WiFi traffic given that, while both protocols can transmit on the same channel, WiFi transmissions usually occur at a much higher power level. In this work, we considered a heterogeneous network and analyzed the impact of coexistence between IEEE 802.15.4 and IEEE 802.11b. To evaluate the performance of this network, measurement and simulation studies are conducted and developed in the QualNet Network simulator, version 5.0. The model is analyzed for different placement models or topologies, such as Random, Grid and Uniform. Performance is analyzed on the basis of characteristics such as throughput, average jitter and average end-to-end delay. Here, the impact of varying different antenna gain & shadowing model for this

  18. Practical Performance Analysis for Multiple Information Fusion Based Scalable Localization System Using Wireless Sensor Networks.

    PubMed

    Zhao, Yubin; Li, Xiaofan; Zhang, Sha; Meng, Tianhui; Zhang, Yiwen

    2016-01-01

    In practical localization system design, researchers need to consider several aspects to make positioning efficient and effective, e.g., the available auxiliary information, sensing devices, equipment deployment and the environment. These practical concerns then translate into technical problems, e.g., sequential position state propagation, the target-anchor geometry effect, non-line-of-sight (NLOS) identification and the related prior information. It is necessary to construct an efficient framework that can exploit multiple sources of available information and guide the system design. In this paper, we propose a scalable method to analyze system performance based on the Cramér-Rao lower bound (CRLB), which can fuse all of the information adaptively. First, we use an abstract function to represent all wireless localization system models. The unknown vector of the CRLB then consists of two parts: the first part is the estimated vector, and the second part is the auxiliary vector, which helps improve the estimation accuracy. Accordingly, the Fisher information matrix is divided into two parts: the state matrix and the auxiliary matrix. Unlike a purely theoretical analysis, our CRLB can serve as a practical fundamental limit for a system that fuses multiple sources of information in a complicated environment, e.g., recursive Bayesian estimation based on the hidden Markov model, the map matching method, and NLOS identification and mitigation methods. Thus, the theoretical results more closely approach the real case. In addition, our method is more adaptable than other CRLBs when more unknown important factors are considered. We use the proposed method to analyze a wireless sensor network-based indoor localization system. The influence of hybrid LOS/NLOS channels, building layout information and the relative height differences between the target and anchors is analyzed. It is demonstrated that our method exploits all of the available information for
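
    The partition of the Fisher information matrix into a state block and an auxiliary block described above corresponds to the standard Schur-complement form of the CRLB for the estimated sub-vector. A minimal numpy sketch with toy 2-D values (not the paper's actual model):

```python
import numpy as np

# CRLB for the estimated (state) vector when the FIM is partitioned into
# a state block J_ss, an auxiliary block J_aa and a cross block J_sa.
# All numbers are toy values for illustration only.
J_ss = np.array([[4.0, 0.5], [0.5, 3.0]])   # state (e.g. position) block
J_aa = np.array([[2.0]])                     # auxiliary block
J_sa = np.array([[0.2], [0.1]])              # cross-coupling block

# Effective FIM for the state: Schur complement of J_aa in the full FIM.
J_eff = J_ss - J_sa @ np.linalg.inv(J_aa) @ J_sa.T
crlb = np.linalg.inv(J_eff)                  # covariance lower bound

# With no coupling (J_sa = 0) the bound reduces to inv(J_ss); coupling
# to unknown auxiliary parameters can only loosen the bound.
```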

  19. Road safety performance indicators for the interurban road network.

    PubMed

    Yannis, George; Weijermars, Wendy; Gitelman, Victoria; Vis, Martijn; Chaziris, Antonis; Papadimitriou, Eleonora; Azevedo, Carlos Lima

    2013-11-01

    Various road safety performance indicators (SPIs) have been proposed for different road safety research areas, mainly regarding driver behaviour (e.g. seat belt use, alcohol, drugs, etc.) and vehicles (e.g. passive safety); however, no SPIs for the road network and its design have been developed. The objective of this research is the development of an SPI for the road network, to be used as a benchmark for cross-region comparisons. The developed SPI essentially compares the existing road network to the theoretically required one, defined as one which meets some minimum requirements with respect to road safety. This paper presents a theoretical concept for the determination of this SPI as well as a translation of this theory into a practical method. The method is also applied in a number of pilot countries, namely the Netherlands, Portugal, Greece and Israel. The results show that the SPI could be efficiently calculated in all countries, despite some differences in the data sources. In general, the calculated overall SPI scores were realistic and ranged from 81 to 94%, with the exception of Greece, where the SPI was relatively lower (67%). However, the SPI should be considered a first attempt to determine the safety level of the road network. The proposed method has some limitations and could be further improved; the paper presents directions for further research to develop the SPI. PMID:23268762
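
    At its core, an SPI of this kind is a ratio: the share of the theoretically required road network that is actually provided at (at least) the required safety standard. A schematic sketch of that ratio, with hypothetical lengths per road category (the real method's category definitions and weighting are more involved):

```python
# Schematic road-network SPI: the share of the theoretically required
# network that exists at an adequate safety standard, in percent.
# Lengths (km) per road category are hypothetical illustration data.

required = {"motorway": 500, "rural": 2000, "urban": 800}   # required km
existing = {"motorway": 450, "rural": 1700, "urban": 800}   # adequate km

def spi_score(existing, required):
    # Cap each category at its requirement so surplus roads elsewhere
    # cannot compensate for a shortfall in another category.
    provided = sum(min(existing.get(k, 0), v) for k, v in required.items())
    return 100.0 * provided / sum(required.values())

score = spi_score(existing, required)   # percentage score
```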

  20. Social value of high bandwidth networks: creative performance and education.

    PubMed

    Mansell, Robin; Foresta, Don

    2016-03-01

    This paper considers the limitations of existing network technologies for distributed theatrical performance in the creative arts and for symmetrical real-time interaction in online learning environments. It examines the experience of a multidisciplinary research consortium that aimed to introduce a solution to latency and other network problems experienced by users in these sectors. The solution builds on the Multicast protocol and Access Grid, an environment supported by very high bandwidth networks. The solution is intended to offer high-quality image and sound, interaction with other network platforms, maximum user control of multipoint transmissions, and open programming tools that are flexible and modifiable for specific uses. A case study is presented, drawing upon an extended period of participant observation by the authors. This provides a basis for an examination of the challenges of promoting technological innovation in a multidisciplinary project. We highlight the kinds of technical advances and cultural and organizational changes that would be required to meet demanding quality standards, the way a research consortium planned to engage in experimentation and learning, and factors making it difficult to achieve an open platform that is responsive to the needs of users in the creative arts and education sectors. PMID:26809576

  1. Network DEA: an application to analysis of academic performance

    NASA Astrophysics Data System (ADS)

    Saniee Monfared, Mohammad Ali; Safi, Mahsa

    2013-05-01

    As governmental subsidies to universities have been declining in recent years, sustaining excellence in academic performance and making more efficient use of resources have become important issues for university stakeholders. Assessing academic performance and the utilization of resources requires two things, which we consider in this paper: a capable methodology and a set of good performance indicators. We propose a set of performance indicators to enable efficiency analysis of academic activities and apply a novel network DEA structure to account for subfunctional efficiencies, such as teaching quality and research productivity, as well as the overall efficiency. We tested our approach on the efficiency analysis of academic colleges at Alzahra University in Iran.

  2. Performance analysis of reactive congestion control for ATM networks

    NASA Astrophysics Data System (ADS)

    Kawahara, Kenji; Oie, Yuji; Murata, Masayuki; Miyahara, Hideo

    1995-05-01

    In ATM networks, preventive congestion control is widely recognized as an efficient means of avoiding congestion, and it is implemented through a combination of connection admission control and usage parameter control. However, congestion may still occur because of unpredictable statistical fluctuations of traffic sources, even when preventive control is performed in the network. In this paper, we study another kind of congestion control, i.e., reactive congestion control, in which each source adapts its cell-emission rate to the traffic load at the switching node (or at the multiplexer). Our intention is that, by incorporating such a congestion control method into ATM networks, more efficient congestion control can be established. We develop an analytical model and carry out an approximate analysis of reactive congestion control algorithms. Numerical results show that the reactive congestion control algorithms are very effective in avoiding congestion and in achieving statistical gain. Furthermore, the binary congestion control algorithm with a pushout mechanism is shown to provide the best performance among the reactive congestion control algorithms treated here.
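
    The binary reactive scheme described above, in which each source raises or lowers its cell rate according to one-bit congestion feedback from the switch, can be illustrated with a toy increase/decrease loop. This is a deliberately simplified simulation with made-up parameters, not the paper's analytical model:

```python
# Toy binary-feedback reactive rate control: the source increases its
# cell rate additively while the switch queue is below a threshold, and
# cuts it multiplicatively when the congestion bit is set. All
# parameters are illustrative, not taken from the paper.

def simulate(steps=200, capacity=10.0, threshold=50.0,
             incr=0.5, decr=0.7):
    rate, queue, rates = 1.0, 0.0, []
    for _ in range(steps):
        queue = max(0.0, queue + rate - capacity)  # cells queued at switch
        congested = queue > threshold              # one-bit feedback
        rate = rate * decr if congested else rate + incr
        rates.append(rate)
    return rates

rates = simulate()
# The rate overshoots capacity, is throttled back, and oscillates
# around the service rate instead of diverging.
```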

  3. Improving stochastic communication network performance: Reliability vs throughput

    NASA Astrophysics Data System (ADS)

    Jansen, Leonard J.

    1991-12-01

    This research investigated the measurement and improvement of two performance parameters, expected flow and reliability, for stochastic communication networks. There were three objectives. The first was to measure the reliability of large stochastic networks. This was accomplished through an investigation of the current methodologies in the literature, with a subsequent selection and application of a factoring program developed by Page and Perry. The second objective was to develop a reliability improvement model, given that a closed-form mathematical reliability expression did not exist. This was accomplished by recasting a heuristic by Jain and Gopal as a linear improvement model. Finally, the third objective was to examine the trade-off between maximizing expected flow and maximizing reliability. This was accomplished by generating bounds for the efficient frontier in a modified multicriteria optimization approach. Using the methodologies formulated in this research, both expected flow and reliability can be measured and subsequently improved, providing insight into the operational capabilities of stochastic communication networks.
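
    The reliability being measured here is the probability that a source-terminal pair stays connected when links fail independently. A brute-force state-enumeration sketch shows what is computed; it is exponential in the number of links, which is exactly why factoring methods such as Page and Perry's are needed for large networks:

```python
from itertools import combinations

def connected(nodes, up_edges, s, t):
    """Graph search over the surviving links only."""
    adj = {n: set() for n in nodes}
    for u, v in up_edges:
        adj[u].add(v); adj[v].add(u)
    seen, stack = {s}, [s]
    while stack:
        for m in adj[stack.pop()]:
            if m not in seen:
                seen.add(m); stack.append(m)
    return t in seen

def two_terminal_reliability(nodes, links, s, t):
    """links: dict {(u, v): p_up}. Sums P(state) over every subset of
    up links in which s and t are connected (exponential; toy only)."""
    edges, total = list(links), 0.0
    for r in range(len(edges) + 1):
        for up in combinations(edges, r):
            p = 1.0
            for e in edges:
                p *= links[e] if e in up else 1.0 - links[e]
            if connected(nodes, up, s, t):
                total += p
    return total

# Triangle: direct link s-t (p=0.8) in parallel with path s-a-t (0.9 each).
# Exact value: 0.8 + 0.2 * 0.9 * 0.9 = 0.962.
r = two_terminal_reliability(
    {"s", "a", "t"},
    {("s", "a"): 0.9, ("a", "t"): 0.9, ("s", "t"): 0.8},
    "s", "t")
```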

  4. Performance evaluation of cellular phone network based portable ECG device.

    PubMed

    Hong, Joo-Hyun; Cha, Eun-Jong; Lee, Tae-Soo

    2008-01-01

    In this study, a cellular phone network-based portable ECG device was developed, and three experiments were performed to evaluate its accuracy, reliability, operability and applicability during daily life. First, ECG signals were measured using the developed device and a Biopac device (the reference device) while sitting and while marking time, and the signals were compared to verify the accuracy of the R-R intervals. Second, reliable data transmission to a remote server was verified for two types of simulated emergency events using a patient simulator. Third, during daily life with five types of motion, the accuracy of data transmission to the remote server was verified for two types of event occurrence. By acquiring and comparing the subjects' biomedical and motion signals, the accuracy, reliability, operability and applicability of the developed device during daily life were verified. Therefore, a cellular phone network-based portable ECG device can monitor patients in an unobtrusive manner. PMID:19162767
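
    The R-R interval comparison in the first experiment amounts to differencing successive R-peak times from each device and comparing the resulting interval series. A minimal sketch with hypothetical R-peak timestamps (not data from the study):

```python
# Comparing R-R intervals from two recordings. R-peak times (seconds)
# are hypothetical; the study compared its device against a Biopac
# reference.

def rr_intervals(peaks):
    """Successive differences of R-peak times give the R-R intervals."""
    return [b - a for a, b in zip(peaks, peaks[1:])]

dev = [0.00, 0.81, 1.62, 2.40]   # developed device R-peak times
ref = [0.00, 0.80, 1.61, 2.40]   # reference device R-peak times

rr_dev = rr_intervals(dev)
rr_ref = rr_intervals(ref)
mean_abs_err = (sum(abs(a - b) for a, b in zip(rr_dev, rr_ref))
                / len(rr_dev))
```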

  5. Digitally controlled high-performance dc SQUID readout electronics for a 304-channel vector magnetometer

    NASA Astrophysics Data System (ADS)

    Bechstein, S.; Petsche, F.; Scheiner, M.; Drung, D.; Thiel, F.; Schnabel, A.; Schurig, Th

    2006-06-01

    Recently, we have developed a family of dc superconducting quantum interference device (SQUID) readout electronics for several applications. These electronics comprise a low-noise preamplifier followed by an integrator, and an analog SQUID bias circuit. A highly compact low-power version with a flux-locked loop bandwidth of 0.3 MHz and a white noise level of 1 nV/√Hz was specially designed for a 304-channel low-Tc dc SQUID vector magnetometer, intended to operate in the new Berlin Magnetically Shielded Room (BMSR-2). In order to minimize the space needed to mount the electronics on top of the dewar and to minimize the power consumption, we integrated four electronics channels on one 3 cm × 10 cm board. Furthermore, we embedded the analog components of these four channels into a digitally controlled system including an in-system programmable microcontroller. Four of these integrated boards were combined into one module with a size of 4 cm × 4 cm × 16 cm. Nineteen of these modules were implemented, resulting in a total power consumption of about 61 W. To initialize the 304 channels and to service the system, we developed software tools running on a laptop computer. By means of these tools the microcontrollers are fed with all required data, such as the working points, the characteristic parameters of the sensors (noise, voltage swing), and the sensor position inside the vector magnetometer system. In this paper, the developed electronics, including the software tools, are described and first results are presented.

  6. High-performance, scalable optical network-on-chip architectures

    NASA Astrophysics Data System (ADS)

    Tan, Xianfang

    The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, called a Chip Multiprocessor (CMP) or Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic-wired electronic networks-on-chip (NoC) have become a bottleneck because of unbearable latency in data transmission and extremely high on-chip energy consumption. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm to electronic NoC, with the benefits of optical signaling such as extremely high bandwidth, negligible latency and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures, and its contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed, along with a method for developing a GWOR of any size. GWOR is a scalable non-blocking ONoC architecture with a simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. A redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of the GWOR unchanged by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed, in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses fewer electronic routers and links than its electronic BFT-based NoC counterpart. It takes the advantages of

  7. High-Performance Tools: Nevada's Experiences Growing Network Capability

    NASA Astrophysics Data System (ADS)

    Biasi, G.; Smith, K. D.; Slater, D.; Preston, L.; Tibuleac, I.

    2007-05-01

    Like most regional seismic networks, the Nevada Seismic Network relies on a combination of software components to perform its mission. Core components for automatic network operation are from Antelope, a real-time environmental monitoring software system from Boulder Real-Time Technologies (BRTT). We configured the detector for multiple filtering bands, generally to distinguish local, regional, and teleseismic phases. The associator can use all or a subset of detections for each location grid. Presently we use detailed grids in the Reno-Carson City, Las Vegas, and Yucca Mountain areas, a large regional grid and a teleseismic grid, with a configurable order of precedence among solutions. Incorporating USArray stations into the network was straightforward. Locations for local events are available in 30-60 seconds, and relocations are computed every 20 seconds. Testing indicates that relocations could be computed every few seconds or less if desired on a modest Sun server. Successive locations may be kept in the database, or criteria applied to select a single preferred location. New code developed by BRTT, partially in response to an NSL request, automatically launches a gradient-based relocator to refine locations and depths. Locations are forwarded to QDDS and other notification mechanisms. We also use Antelope tools for earthquake picking and analysis and for database viewing and maintenance. We have found the programming interfaces supplied with Antelope instrumental as we work toward ANSS system performance requirements. For example, the Perl language interface to the real-time Object Ring Buffer (ORB) was used to reduce the time to produce ShakeMaps to the present value of ~3 minutes. Hypoinverse was incorporated into a real-time system with Perl ORB access tools. Using the Antelope PHP interface, we now have off-site review capabilities for events and ShakeMaps from hand-held internet devices. PHP and Perl tools were used to develop a remote capability, now

  8. Performance characteristics of omnidirectional antennas for spacecraft using NASA networks

    NASA Technical Reports Server (NTRS)

    Hilliard, Lawrence M.

    1987-01-01

    Described are the performance capabilities and critical elements of the shaped omni antenna developed for NASA for space users of NASA networks. The shaped omni is designed to be operated in tandem for virtually omnidirectional coverage and uniform gain free of spacecraft interference. These antennas are ideal for low-gain data requirements and for emergency backup, deployment, and retrieval of higher gain RF systems. Other omnidirectional antennas that have flown in space are described in the final section. A performance summary for the shaped omni is given in the Appendix. This document introduces organizations and projects to shaped omni applications for NASA's space use. Coverage, gain, weight, power, implementation and other performance information for satisfying a wide range of data requirements are included.

  9. A simulation study of TCP performance in ATM networks

    SciTech Connect

    Chien Fang; Chen, Helen; Hutchins, J.

    1994-08-01

    This paper presents a simulation study of TCP performance over congested ATM local area networks. We simulated a variety of schemes for congestion control for ATM LANs, including a simple cell-drop, a credit-based flow control scheme that back-pressures individual VC`s, and two selective cell-drop schemes. Our simulation results for congested ATM LANs show the following: (1) TCP performance is poor under simple cell-drop, (2) the selective cell-drop schemes increase effective link utilization and result in higher TCP throughputs than the simple cell-drop scheme, and (3) the credit-based flow control scheme eliminates cell loss and achieves maximum performance and effective link utilization.

  10. Introducing Vectors.

    ERIC Educational Resources Information Center

    Roche, John

    1997-01-01

    Suggests an approach to teaching vectors that promotes active learning through challenging questions addressed to the class, as opposed to subtle explanations. Promotes introducing vector graphics with concrete examples, beginning with an explanation of the displacement vector. Also discusses artificial vectors, vector algebra, and unit vectors.…
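
    The displacement vector suggested above as a starting point lends itself to a concrete numerical check: two legs of a walk added head-to-tail, with the magnitude and unit vector of the resultant. A minimal numpy illustration (not from the article itself):

```python
import numpy as np

# Displacement vectors add head-to-tail: walk 3 m east, then 4 m north.
leg1 = np.array([3.0, 0.0])   # 3 m east
leg2 = np.array([0.0, 4.0])   # 4 m north
total = leg1 + leg2           # resultant displacement

magnitude = np.linalg.norm(total)   # 5.0 m (the 3-4-5 right triangle)
unit = total / magnitude            # unit vector along the resultant
```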