Science.gov

Sample records for performance vector network

  1. Vector Network Analysis

    Energy Science and Technology Software Center (ESTSC)

    1997-10-20

    Vector network analyzers are a convenient way to measure scattering parameters of a variety of microwave devices. However, these instruments, unlike oscilloscopes for example, require a relatively high degree of user knowledge and expertise. Due to the complexity of the instrument and of the calibration process, there are many ways in which an incorrect measurement may be produced. The Microwave Project, which is part of Sandia National Laboratories Primary Standards Laboratory, routinely uses check standards to verify that the network analyzer is operating properly. In the past, these measurements were recorded manually and, sometimes, interpretation of the results was problematic. To aid our measurement assurance process, a software program was developed to automatically measure a check standard and compare the new measurements with a historical database of measurements of the same device. The program acquires new measurement data from selected check standards, plots the new data against the mean and standard deviation of prior data for the same check standard, and updates the database files for the check standard. The program is entirely menu-driven, requiring little additional work by the user.
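
    The comparison step described above can be sketched in a few lines. The sketch below is not the Sandia program; it only illustrates checking a new check-standard sweep against the per-frequency mean and standard deviation of prior sweeps, with all data and thresholds synthetic and hypothetical.

        # Sketch: compare a new check-standard sweep against historical statistics.
        # All data here are synthetic; the n_sigma threshold is illustrative only.
        import numpy as np

        def check_standard_ok(new_s11_db, history_db, n_sigma=3.0):
            """Flag frequency points where the new sweep leaves the historical band."""
            mean = history_db.mean(axis=0)          # per-frequency mean of prior runs
            std = history_db.std(axis=0, ddof=1)    # per-frequency standard deviation
            return np.abs(new_s11_db - mean) > n_sigma * std

        # Example with synthetic data: 50 prior sweeps, 201 frequency points.
        rng = np.random.default_rng(0)
        history = -20.0 + 0.1 * rng.standard_normal((50, 201))
        new_sweep = history.mean(axis=0) + 0.05 * rng.standard_normal(201)
        flags = check_standard_ok(new_sweep, history)
        print(f"{flags.sum()} of {flags.size} points outside the 3-sigma band")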

  2. Vector Encoding in Biochemical Networks

    NASA Astrophysics Data System (ADS)

    Potter, Garrett; Sun, Bo

    Encoding of environmental cues via biochemical signaling pathways is of vital importance in the transmission of information for cells in a network. The current literature assumes that a single cell state is used to encode information; however, recent research suggests that the optimal strategy utilizes a vector of cell states sampled at various time points. To elucidate the optimal sampling strategy for vector encoding, we take an information theoretic approach and determine the mutual information of the calcium signaling dynamics obtained from fibroblast cells perturbed with different concentrations of ATP. Specifically, we analyze the sampling strategies under the cases of fixed and non-fixed vector dimension as well as the efficiency of these strategies. Our results show that sampling with greater frequency is optimal in the case of non-fixed vector dimension but that, in general, a lower sampling frequency is best from both a fixed vector dimension and an efficiency standpoint. Further, we find that a simple modified Ornstein-Uhlenbeck process qualitatively captures many of our experimental results, suggesting that sampling in biochemical networks is based on a few basic components.
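
    A toy version of this idea, not the authors' analysis pipeline, can be sketched as follows: simulate an Ornstein-Uhlenbeck response whose mean depends on a discrete stimulus, sample the trace at a few time points, and compute a plug-in mutual information estimate between the stimulus and the discretized response vector. All parameter values are illustrative assumptions.

        # Toy sketch of vector encoding: an Ornstein-Uhlenbeck response whose mean
        # depends on the stimulus, sampled at several time points and discretized.
        import numpy as np
        from collections import Counter

        def mutual_information(stimuli, codes):
            """Plug-in MI estimate (bits) between discrete stimuli and response codes."""
            n = len(stimuli)
            p_s = Counter(stimuli); p_c = Counter(codes); p_sc = Counter(zip(stimuli, codes))
            mi = 0.0
            for (s, c), n_sc in p_sc.items():
                p_joint = n_sc / n
                mi += p_joint * np.log2(p_joint / ((p_s[s] / n) * (p_c[c] / n)))
            return mi

        rng = np.random.default_rng(1)
        levels = np.array([0.0, 1.0, 2.0, 3.0])            # stimulus "concentrations"
        stim = rng.integers(0, 4, size=5000)
        dt, tau, sigma, n_steps = 0.05, 1.0, 0.5, 100
        x = np.zeros(len(stim))
        traces = np.zeros((len(stim), n_steps))
        for t in range(n_steps):                            # OU relaxation toward stimulus-set mean
            x += dt * (levels[stim] - x) / tau + sigma * np.sqrt(dt) * rng.standard_normal(len(stim))
            traces[:, t] = x

        samples = traces[:, [30, 60, 90]]                   # vector of states at three time points
        bins = np.digitize(samples, np.linspace(-1, 4, 6))  # coarse discretization per sample
        codes = [tuple(row) for row in bins]
        print("MI(stimulus; 3-point vector) ~", round(mutual_information(stim, codes), 3), "bits")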

  3. Applying knowledge engineering and representation methods to improve support vector machine and multivariate probabilistic neural network CAD performance

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Anderson, Frances; Smith, Tom; Fahlbusch, Stephen; Choma, Robert; Wong, Lut

    2005-04-01

    Achieving consistent and correct database cases is crucial to the correct evaluation of any computer-assisted diagnostic (CAD) paradigm. This paper describes the application of artificial intelligence (AI), knowledge engineering (KE) and knowledge representation (KR) to a data set of ~2500 cases from six separate hospitals, with the objective of removing or reducing inconsistent outlier data. Several support vector machine (SVM) kernels were used to measure diagnostic performance of the original and a "cleaned" data set. Specifically, KE and KR principles were applied to the two data sets, which were re-examined with respect to the environment and agents. One data set was found to contain 25 non-characterizable sets; the other contained 180 non-characterizable sets. CAD system performance was measured with both the original and "cleaned" data sets using two SVM kernels as well as a multivariate probabilistic neural network (PNN). Results demonstrated: (i) a 10% average improvement in overall Az and (ii) approximately a 50% average improvement in partial Az.
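
    The hospital data and the actual cleaning procedure are not available here, but the evaluation protocol (ROC area, Az, for different SVM kernels on an "original" versus a "cleaned" set) can be sketched generically. Everything below, including the stand-in "cleaning" rule, is a hypothetical illustration, not the authors' method.

        # Sketch: compare SVM kernels on an "original" vs. "cleaned" data set using
        # cross-validated ROC area (Az). Synthetic data stands in for the real cases.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def mean_az(X, y, kernel):
            clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
            return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

        X, y = make_classification(n_samples=2500, n_features=10, flip_y=0.15, random_state=0)
        # Stand-in "cleaning": drop 10% of cases by an arbitrary rule (purely illustrative).
        keep = np.argsort(np.abs(X).sum(axis=1))[: int(0.9 * len(y))]
        for kernel in ("rbf", "poly"):
            print(kernel, "original Az:", round(mean_az(X, y, kernel), 3),
                  "cleaned Az:", round(mean_az(X[keep], y[keep], kernel), 3))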

  4. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential Vector Quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in better robustness to channel bit errors than methods that use variable-length codes.
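
    The hardware realization is the point of the paper, but the codebook-training rule itself is simple to sketch in software. The sketch below shows frequency-sensitive competitive learning as it is commonly described: the winning codeword is chosen by a distance scaled by how often each codeword has already won, which keeps all codewords in use. Learning rate, codebook size, and data are illustrative assumptions.

        # Sketch of frequency-sensitive competitive learning (FSCL) for a VQ codebook.
        import numpy as np

        def train_fscl(vectors, n_codes=16, lr=0.05, epochs=5, seed=0):
            rng = np.random.default_rng(seed)
            codebook = vectors[rng.choice(len(vectors), n_codes, replace=False)].copy()
            counts = np.ones(n_codes)
            for _ in range(epochs):
                for x in vectors[rng.permutation(len(vectors))]:
                    dist = np.sum((codebook - x) ** 2, axis=1) * counts   # usage-weighted distance
                    w = int(np.argmin(dist))
                    codebook[w] += lr * (x - codebook[w])                 # move winner toward input
                    counts[w] += 1
            return codebook

        blocks = np.random.default_rng(1).random((2000, 16))   # stand-in for 4x4 image blocks
        codebook = train_fscl(blocks)
        print("codebook shape:", codebook.shape)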

  5. Ranked Retrieval with Semantic Networks and Vector Spaces.

    ERIC Educational Resources Information Center

    Kulyukin, Vladimir A.; Settle, Amber

    2001-01-01

    Discussion of semantic networks and ranked retrieval focuses on two models, the semantic network model with spreading activation and the vector space model with dot product. Suggests a formal method to analyze the two models in terms of their relative performance in the same universe of objects. (Author/LRW)
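
    The vector-space side of this comparison is easy to make concrete. The following minimal sketch, with a made-up vocabulary and term weights, ranks documents by their dot product with a query vector; the spreading-activation model is not reproduced here.

        # Minimal vector-space ranking by dot product over a shared term vocabulary.
        import numpy as np

        vocab = ["vector", "network", "semantic", "retrieval"]
        docs = {
            "d1": np.array([2.0, 1.0, 0.0, 1.0]),   # term weights per document
            "d2": np.array([0.0, 1.0, 3.0, 1.0]),
        }
        query = np.array([1.0, 0.0, 1.0, 1.0])
        ranking = sorted(docs, key=lambda d: float(docs[d] @ query), reverse=True)
        print(ranking)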

  6. Distributed Estimation for Vector Signal in Linear Coherent Sensor Networks

    NASA Astrophysics Data System (ADS)

    Wu, Chien-Hsien; Lin, Ching-An

    We introduce the distributed estimation of a random vector signal in wireless sensor networks that follow the coherent multiple-access channel model. We adopt the linear minimum mean squared error fusion rule. The problem of interest is to design linear coding matrices for the sensors in the network so as to minimize the mean squared error of the estimated vector signal under a total power constraint. We show that the problem can be formulated as a convex optimization problem, and we obtain closed-form expressions for the coding matrices. Numerical results are used to illustrate the performance of the proposed method.
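
    The coding-matrix optimization is the paper's contribution and is not reproduced here; the fusion rule itself, however, is the standard linear MMSE estimator, which for a zero-mean model y = Hx + n has the closed form x_hat = C_xy C_yy^-1 y. A minimal numerical sketch, with arbitrary dimensions and covariances:

        # Linear MMSE fusion sketch for zero-mean y = H x + n.
        import numpy as np

        rng = np.random.default_rng(0)
        n_x, n_y = 3, 8
        Cx = np.eye(n_x)                               # prior covariance of the signal
        H = rng.standard_normal((n_y, n_x))            # stacked sensor coding/observation matrix
        Cn = 0.1 * np.eye(n_y)                         # noise covariance

        Cxy = Cx @ H.T
        Cyy = H @ Cx @ H.T + Cn
        W = Cxy @ np.linalg.inv(Cyy)                   # LMMSE fusion matrix

        x = rng.multivariate_normal(np.zeros(n_x), Cx)
        y = H @ x + rng.multivariate_normal(np.zeros(n_y), Cn)
        x_hat = W @ y
        print("squared error of this draw:", float(np.mean((x_hat - x) ** 2)))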

  7. Performance evaluation of vector-machine architectures

    SciTech Connect

    Tang, Ju-ho.

    1989-01-01

    Vector machines are well known for their high peak performance, but the delivered performance varies greatly over different workloads and depends strongly on compiler optimizations. Recently it has been claimed that several horizontal superscalar architectures, e.g., VLIW and polycyclic architectures, provide a more balanced performance across a wider range of scientific workloads than do vector machines. The purpose of this research is to study the performance of register-register vector processors, such as Cray supercomputers, as a function of their architectural features, scheduling schemes, compiler optimization capabilities, and program parameters. The results of this study also provide a base for comparing vector machines with horizontal superscalar machines. An evaluation methodology, based on timing parameters, bottlenecks, and run-time bounds, is developed. Cray-1 performance is degraded by the multiple memory loads of index-misaligned vectors and the inability of the Cray Fortran Compiler (CFT) to produce code that hits all the chain slot times. The impact of chaining and two instruction scheduling schemes on one-memory-port vector supercomputers, illustrated by the Cray-1 and Cray-2, is studied. The lack of instruction chaining on the Cray-2 requires a different instruction scheduling scheme from that of the Cray-1. Situations are characterized in which simple vector scheduling can generate code that fully utilizes one functional unit for machines with chaining. Even without chaining, polycyclic scheduling guarantees full utilization of one functional unit, after an initial transient, for loops with acyclic dependence graphs.

  8. A calibration free vector network analyzer

    NASA Astrophysics Data System (ADS)

    Kothari, Arpit

    Recently, two novel single-port, phase-shifter based vector network analyzer (VNA) systems were developed and tested at X-band (8.2-12.4 GHz) and Ka-band (26.4-40 GHz), respectively. These systems operate by electronically moving the standing wave pattern, set up in a waveguide, over a Schottky detector and sampling the standing wave voltage for several phase shift values. Once this system is fully characterized, all parameters in the system become known; hence, theoretically, no other correction (or calibration) should be required to obtain the reflection coefficient, Gamma, of an unknown load. This makes this type of VNA "calibration free," which is a significant advantage over other types of VNAs. To this end, a VNA system based on this design methodology was developed at X-band using several design improvements (compared to the previous designs) with the aim of demonstrating this "calibration-free" feature. It was found that when a commercial VNA (HP8510C) is used as the source and the detector, the system works as expected. However, when a simple detector is used (Schottky diode, log detector, etc.), obtaining the correct Gamma still requires the customary three-load calibration. With the aim of exploring the cause, a detailed sensitivity analysis of prominent error sources was performed. Extensive measurements were made with different detection techniques, including the use of a spectrum analyzer as a power detector. The system was even tested for electromagnetic compatibility (EMC) effects that may have contributed to this issue. Although the desired results could not be obtained using the proposed standing-wave power-measuring devices such as the Schottky diode, the principle of the "calibration-free" VNA was shown to hold.

  9. Performance of vector sensors in noise

    NASA Astrophysics Data System (ADS)

    Cox, Henry; Baggeroer, Arthur

    2003-10-01

    Vector sensors are supergain devices that can provide "array gain" against ocean noise with a point sensor. As supergain devices they have increased sensitivity to nonacoustic noise components. This paper reviews and summarizes the processing gain that is achievable in various noise fields. Comparisons are made with an omnidirectional sensor and with the correlation of a pair of closely spaced omnidirectional sensors. Total processing gain, consisting of both spatial and temporal gain, is considered so that a proper analysis and interpretation of multiplicative processing can be made. The performance of "intensity sensors" (pressure times velocity), obtained by multiplying the omnidirectional component with a co-located dipole, is also considered. A misinterpretation that is common in the literature concerning the performance of intensity sensors is discussed. The adaptive cardioid processing of vector sensors is also reviewed.

  10. Distributed Signal Decorrelation and Detection in Multi View Camera Networks Using the Vector Sparse Matrix Transform.

    PubMed

    Bachega, Leonardo R; Hariharan, Srikanth; Bouman, Charles A; Shroff, Ness B

    2015-12-01

    This paper introduces the vector sparse matrix transform (vector SMT), a new decorrelating transform suitable for performing distributed processing of high-dimensional signals in sensor networks. We assume that each sensor in the network encodes its measurements into vector outputs instead of scalar ones. The proposed transform applies a sequence of pairwise decorrelating operations to the vector outputs until the full set of vectors is decorrelated. In our experiments, we simulate distributed anomaly detection by a network of cameras monitoring a spatial region. Each camera records an image of the monitored environment from its particular viewpoint and outputs a vector encoding the image. Our results, with both artificial and real data, show that the proposed vector SMT effectively decorrelates image measurements from the multiple cameras in the network while maintaining low overall communication energy consumption. Since it enables joint processing of the multiple vector outputs, our method provides significant improvements in anomaly detection accuracy compared with the baseline case in which the images are processed independently. PMID:26415179
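
    The full vector SMT has its own design criteria and ordering of operations, which are not reproduced here; the sketch below only illustrates the core building block, decorrelating one pair of signals with a single Givens rotation chosen from their sample covariance.

        # Sketch of the SMT building block: one Givens rotation that zeroes the
        # sample covariance between a pair of signals (the full vector SMT applies
        # many such rotations in a designed order).
        import numpy as np

        def decorrelate_pair(X, i, j):
            """Rotate columns i and j of X so their sample covariance becomes zero."""
            C = np.cov(X[:, [i, j]], rowvar=False)
            theta = 0.5 * np.arctan2(2.0 * C[0, 1], C[0, 0] - C[1, 1])
            c, s = np.cos(theta), np.sin(theta)
            G = np.array([[c, -s], [s, c]])
            X = X.copy()
            X[:, [i, j]] = X[:, [i, j]] @ G
            return X

        rng = np.random.default_rng(0)
        raw = rng.standard_normal((1000, 2)) @ np.array([[1.0, 0.8], [0.0, 0.6]])  # correlated pair
        out = decorrelate_pair(raw, 0, 1)
        print("covariance before:", round(np.cov(raw, rowvar=False)[0, 1], 3),
              "after:", round(np.cov(out, rowvar=False)[0, 1], 6))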

  11. NASF transposition network: A computing network for unscrambling p-ordered vectors

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1979-01-01

    The design, programming, and application viewpoints of the transposition network (TN) are presented. The TN is a programmable combinational logic network that connects 521 memory modules to 512 processors. The unscrambling of p-ordered vectors to 1-ordered vectors in one cycle is described. The TN design is based upon the concept of cyclic groups from abstract algebra and primitive roots and indices from number theory. The programming of the TN is very simple, requiring only 20 bits: 10 bits for offset control and 10 bits for barrel switch shift control. This simple control is executed by the control unit (CU), not the processors. Any memory access by a processor must be coordinated with the CU and wait for all other processors to come to a synchronization point. These wait and synchronization events can degrade the performance of a computation. The TN is applicable to multidimensional data manipulation, matrix processing, and data sorting, and can also perform a perfect shuffle. Unlike other more complicated and powerful permutation networks, the TN cannot, in general, unscramble non-p-ordered vectors in one cycle.
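
    A toy illustration, not the NASF hardware or its primitive-root index scheme, of why a prime number of memory modules helps: with M prime, a stride-p access pattern sends M consecutive elements to M distinct modules, and the modular inverse of the stride recovers the original ordering.

        # Toy illustration of stride-p placement and modular-inverse unscrambling.
        M = 521                                    # prime number of memory modules
        p = 7                                      # access stride (p-ordered vector)

        modules = [(i * p) % M for i in range(M)]  # module hit by element i
        assert len(set(modules)) == M              # no two elements collide in a module

        p_inv = pow(p, -1, M)                      # modular inverse of the stride (Python 3.8+)
        recovered = [(m * p_inv) % M for m in modules]
        assert recovered == list(range(M))         # element index recovered from module index
        print("stride", p, "unscrambled with inverse", p_inv)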

  12. A Distributed Support Vector Machine Learning Over Wireless Sensor Networks.

    PubMed

    Kim, Woojin; Stanković, Milos S; Johansson, Karl H; Kim, H Jin

    2015-11-01

    This paper is about fully distributed support vector machine (SVM) learning over wireless sensor networks. Using the concept of the geometric SVM, we propose to gossip the set of extreme points of the convex hull of the local data set with neighboring nodes. This approach has the advantages of a simple communication mechanism and finite-time convergence to a common global solution. Furthermore, we analyze the scalability with respect to the amount of exchanged information and convergence time, with a specific emphasis on the small-world phenomenon. First, with the proposed naive convex hull algorithm, the message length remains bounded as the number of nodes increases. Second, by utilizing a small-world network, we have an opportunity to drastically improve the convergence performance with only a small increase in power consumption. These properties offer a great advantage when dealing with a large-scale network. Simulation and experimental results support the feasibility and effectiveness of the proposed gossip-based process and the analysis. PMID:26470063
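
    A simplified sketch of the local step only (the geometric SVM itself works per class and with its own convergence logic, which is not reproduced here): each node reduces its data to the extreme points of its convex hull, and a gossip exchange merges two such sets and re-reduces them.

        # Sketch of the local reduction and one gossip merge in 2-D.
        import numpy as np
        from scipy.spatial import ConvexHull

        def extreme_points(points):
            """Return the convex hull vertices of a 2-D point set."""
            return points[ConvexHull(points).vertices]

        def gossip_merge(local, received):
            """One gossip exchange: union the two sets and re-reduce to extreme points."""
            return extreme_points(np.vstack([local, received]))

        rng = np.random.default_rng(0)
        node_a = rng.standard_normal((200, 2))
        node_b = rng.standard_normal((200, 2)) + np.array([1.0, 0.0])
        merged = gossip_merge(extreme_points(node_a), extreme_points(node_b))
        print("node A keeps", len(extreme_points(node_a)), "points; merged set has", len(merged))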

  13. Performance of the butterfly processor-memory interconnection in a vector environment

    NASA Astrophysics Data System (ADS)

    Brooks, E. D., III

    1985-02-01

    A fundamental hurdle impeding the development of large-N common-memory multiprocessors is the performance limitation in the switch connecting the processors to the memory modules. Multistage networks currently considered for this connection have a memory latency which grows like α log2 N. For scientific computing, it is natural to look for a multiprocessor architecture that will enable the use of vector operations to mask memory latency. The problem to be overcome here is the chaotic behavior introduced by conflicts occurring in the switch. The performance of the butterfly or indirect binary n-cube network in a vector processing environment is examined. A simple modification of the standard 2x2 switch node used in such networks, which adaptively removes chaotic behavior during a vector operation, is described.

  14. High Performance Network Monitoring

    SciTech Connect

    Martinez, Jesse E

    2012-08-10

    Network monitoring requires substantial data and error analysis to overcome issues with clusters. Zenoss and Splunk help to monitor system log messages that report issues about the clusters to monitoring services. The Infiniband infrastructure on a number of clusters was upgraded to ibmon2, which requires different filters to report errors to system administrators. The focus for this summer is to: (1) implement ibmon2 filters on monitoring boxes to report system errors to system administrators using Zenoss and Splunk; (2) modify and improve scripts for monitoring and administrative usage; (3) learn more about networks, including services and maintenance for high performance computing systems; and (4) gain life experience working with professionals in real-world situations. Filters were created to account for clusters running ibmon2 v1.0.0-1; ten filters are currently implemented for ibmon2 using Python. The filters look for thresholds on port counters: above certain counts, they report errors to on-call system administrators and modify the grid to show the local host with the issue.
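
    ibmon2 is an internal tool and its filter API is not documented here; the sketch below only illustrates the kind of port-counter threshold filter described, in Python, with every counter name and threshold value being a hypothetical stand-in.

        # Illustrative port-counter threshold filter (all names and limits hypothetical).
        THRESHOLDS = {"SymbolErrorCounter": 10, "LinkDownedCounter": 1, "PortRcvErrors": 25}

        def check_port_counters(counters, thresholds=THRESHOLDS):
            """Return alert strings for counters that exceed their thresholds."""
            alerts = []
            for name, limit in thresholds.items():
                value = counters.get(name, 0)
                if value > limit:
                    alerts.append(f"{name}={value} exceeds limit {limit}")
            return alerts

        sample = {"SymbolErrorCounter": 42, "LinkDownedCounter": 0, "PortRcvErrors": 3}
        for alert in check_port_counters(sample):
            print("ALERT:", alert)   # a real filter would forward this to Zenoss/Splunk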

  15. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  16. Improving neural network performance on SIMD architectures

    NASA Astrophysics Data System (ADS)

    Limonova, Elena; Ilin, Dmitry; Nikolaev, Dmitry

    2015-12-01

    Neural network calculations for image recognition problems can be very time consuming. In this paper we propose three methods of increasing neural network performance on SIMD architectures. The use of SIMD extensions is a way to speed up neural network processing that is available for a number of modern CPUs. In our experiments, we use ARM NEON as the example SIMD architecture. The first method deals with the half-float data type for matrix computations. The second method uses a fixed-point data type for the same purpose. The third method considers a vectorized implementation of the activation functions. For each method we set up a series of experiments on convolutional and fully connected networks designed for image recognition tasks.
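
    The paper's speedups come from SIMD intrinsics such as ARM NEON; a platform-independent sketch can still show the second idea, fixed-point arithmetic: quantize weights and inputs to int16, accumulate in a wider integer type, then rescale. The Q8.8 format and error check below are illustrative choices, not the paper's implementation.

        # Platform-independent sketch of the fixed-point (Q8.8-style) matrix-vector product.
        import numpy as np

        FRAC_BITS = 8
        SCALE = 1 << FRAC_BITS

        def to_fixed(x):
            return np.round(x * SCALE).astype(np.int16)

        def fixed_matvec(W_fx, x_fx):
            acc = W_fx.astype(np.int32) @ x_fx.astype(np.int32)   # wide accumulator
            return acc.astype(np.float64) / (SCALE * SCALE)        # back to real units

        rng = np.random.default_rng(0)
        W = rng.uniform(-1, 1, (64, 128))
        x = rng.uniform(-1, 1, 128)
        approx = fixed_matvec(to_fixed(W), to_fixed(x))
        print("max abs error vs float:", float(np.max(np.abs(approx - W @ x))))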

  17. Internal performance characteristics of thrust-vectored axisymmetric ejector nozzles

    NASA Technical Reports Server (NTRS)

    Lamb, Milton

    1995-01-01

    A series of thrust-vectored axisymmetric ejector nozzles were designed and experimentally tested for internal performance and pumping characteristics at the Langley Research Center. This study indicated that discontinuities in the performance occurred at low primary nozzle pressure ratios and that these discontinuities were mitigated by decreasing the expansion area ratio. The addition of secondary flow increased the performance of the nozzles. The mid-to-high range of secondary flow provided the most overall improvement, and the greatest improvements were seen for the largest ejector area ratio. Thrust vectoring the ejector nozzles caused a reduction in performance and discharge coefficient. With or without secondary flow, the vectored ejector nozzles produced thrust vector angles that were equivalent to or greater than the geometric turning angle. With or without secondary flow, spacing ratio (ejector passage symmetry) had little effect on performance (gross thrust ratio), discharge coefficient, or thrust vector angle. For the unvectored ejectors, a small amount of secondary flow was sufficient to reduce the pressure levels on the shroud to provide cooling, but for the vectored ejector nozzles, a larger amount of secondary air was required to reduce the pressure levels enough to provide cooling.

  18. Modeling and performance analysis of GPS vector tracking algorithms

    NASA Astrophysics Data System (ADS)

    Lashley, Matthew

    This dissertation provides a detailed analysis of GPS vector tracking algorithms and the advantages they have over traditional receiver architectures. Standard GPS receivers use a decentralized architecture that separates the tasks of signal tracking and position/velocity estimation. Vector tracking algorithms combine the two tasks into a single algorithm. The signals from the various satellites are processed collectively through a Kalman filter. The advantages of vector tracking over traditional, scalar tracking methods are thoroughly investigated. A method for making a valid comparison between vector and scalar tracking loops is developed. This technique avoids the ambiguities encountered when attempting to make a valid comparison between tracking loops (which are characterized by noise bandwidths and loop order) and the Kalman filters (which are characterized by process and measurement noise covariance matrices) that are used by vector tracking algorithms. The improvement in performance offered by vector tracking is calculated in multiple different scenarios. Rule of thumb analysis techniques for scalar Frequency Lock Loops (FLL) are extended to the vector tracking case. The analysis tools provide a simple method for analyzing the performance of vector tracking loops. The analysis tools are verified using Monte Carlo simulations. Monte Carlo simulations are also used to study the effects of carrier to noise power density (C/N0) ratio estimation and the advantage offered by vector tracking over scalar tracking. The improvement from vector tracking ranges from 2.4 to 6.2 dB in various scenarios. The difference in the performance of the three vector tracking architectures is analyzed. The effects of using a federated architecture with and without information sharing between the receiver's channels are studied. A combination of covariance analysis and Monte Carlo simulation is used to analyze the performance of the three algorithms. The federated algorithm without

  19. A feedforward artificial neural network based on quantum effect vector-matrix multipliers.

    PubMed

    Levy, H J; McGill, T C

    1993-01-01

    The vector-matrix multiplier is the engine of many artificial neural network implementations because it can simulate the way in which neurons collect weighted input signals from a dendritic arbor. A new technology for building analog weighting elements that is theoretically capable of densities and speeds far beyond anything that conventional VLSI in silicon could ever offer is presented. To illustrate the feasibility of such a technology, a small three-layer feedforward prototype network with five binary neurons and six tri-state synapses was built and used to perform all of the fundamental logic functions: XOR, AND, OR, and NOT. PMID:18267745

  20. Biologically relevant neural network architectures for support vector machines.

    PubMed

    Jändel, Magnus

    2014-01-01

    Neural network architectures that implement support vector machines (SVM) are investigated for the purpose of modeling perceptual one-shot learning in biological organisms. A family of SVM algorithms including variants of maximum margin, 1-norm, 2-norm and ν-SVM is considered. SVM training rules adapted for neural computation are derived. It is found that competitive queuing memory (CQM) is ideal for storing and retrieving support vectors. Several different CQM-based neural architectures are examined for each SVM algorithm. Although most of the sixty-four scanned architectures are unconvincing for biological modeling, four feasible candidates are found. The seemingly complex learning rule of a full ν-SVM implementation finds a particularly simple and natural implementation in bisymmetric architectures. Since CQM-like neural structures are thought to encode skilled action sequences and bisymmetry is ubiquitous in motor systems, it is speculated that trainable pattern recognition in low-level perception has evolved as an internalized motor programme. PMID:24126252

  1. Optical vector network analyzer based on amplitude-phase modulation

    NASA Astrophysics Data System (ADS)

    Morozov, Oleg G.; Morozov, Gennady A.; Nureev, Ilnur I.; Kasimova, Dilyara I.; Zastela, Mikhail Y.; Gavrilov, Pavel V.; Makarov, Igor A.; Purtov, Vadim A.

    2016-03-01

    The article describes the principles of optical vector network analyzer (OVNA) design for fiber Bragg grating (FBG) characterization based on amplitude-phase modulation of the optical carrier, which allows us to improve the measurement accuracy of the amplitude and phase parameters of the elements under test. Unlike existing OVNAs based on single-sideband and unbalanced double-sideband amplitude modulation, the ratio of the two side components of the probing radiation is used for the analysis of the amplitude and phase parameters of the tested elements, and the radiation of the optical carrier is either suppressed or used as a local oscillator. The suggested OVNA is designed for research on narrow band-stop elements (π-phase-shift FBGs) and wide band-pass elements (linearly chirped FBGs).

  2. Performance evaluation of the SX-6 vector architecture forscientific computations

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; Van der Wijngaart, Rob

    2005-01-01

    The growing gap between sustained and peak performance for scientific applications is a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to reduce this gap for many computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX-6 vector processor and compares it against the cache-based IBM Power3 and Power4 superscalar architectures, across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines many low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks. Finally, we evaluate the performance of several scientific computing codes. Overall results demonstrate that the SX-6 achieves high performance on a large fraction of our application suite and often significantly outperforms the cache-based architectures. However, certain classes of applications are not easily amenable to vectorization and would require extensive algorithm and implementation reengineering to utilize the SX-6 effectively.

  3. Folksonomical P2P File Sharing Networks Using Vectorized KANSEI Information as Search Tags

    NASA Astrophysics Data System (ADS)

    Ohnishi, Kei; Yoshida, Kaori; Oie, Yuji

    We present the concept of folksonomical peer-to-peer (P2P) file sharing networks that allow participants (peers) to freely assign structured search tags to files. These networks are similar to folksonomies in the present Web in that users assign search tags to information distributed over a network. As a concrete example, we consider an unstructured P2P network using vectorized Kansei (human sensitivity) information as structured search tags for file search. Vectorized Kansei information as search tags indicates what participants feel about their files and is assigned by the participant to each of their files. A search query has the same form of search tags and indicates what participants want to feel about files that they will eventually obtain. A method that enables file search using vectorized Kansei information is the Kansei query-forwarding method, which probabilistically propagates a search query to peers that are likely to hold more files having search tags that are similar to the query. The similarity between the search query and the search tags is measured in terms of their dot product. The simulation experiments examine whether the Kansei query-forwarding method can provide equal search performance for all peers in a network in which only the Kansei information and the tendency with respect to file collection differ among the peers. The simulation results show that the Kansei query-forwarding method and a random-walk-based query-forwarding method, used for comparison, work effectively in different situations and are complementary. Furthermore, the Kansei query-forwarding method is shown, through simulations, to be superior or equal to the random-walk-based method in terms of search speed.
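
    A minimal sketch of the forwarding rule only, under assumptions that differ from the paper's simulator (dot-product scores are clipped at zero and normalized into forwarding probabilities; all peers, tags, and queries below are made up):

        # Sketch of dot-product-based probabilistic query forwarding.
        import numpy as np

        def choose_next_peer(query, neighbor_tags, rng):
            """neighbor_tags: dict peer_id -> array of tag vectors held by that peer."""
            scores = {}
            for peer, tags in neighbor_tags.items():
                sims = tags @ query                       # dot product with each file's tags
                scores[peer] = max(float(sims.max()), 0.0)
            total = sum(scores.values())
            if total == 0.0:                              # no similar neighbor: pick uniformly
                return rng.choice(list(scores))
            peers = list(scores)
            probs = np.array([scores[p] for p in peers]) / total
            return peers[rng.choice(len(peers), p=probs)]

        rng = np.random.default_rng(0)
        query = np.array([0.9, 0.1, 0.4])                 # "what the requester wants to feel"
        neighbors = {p: rng.random((5, 3)) for p in ("peerA", "peerB", "peerC")}
        print("forward to:", choose_next_peer(query, neighbors, rng))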

  4. Vector Symbolic Spiking Neural Network Model of Hippocampal Subarea CA1 Novelty Detection Functionality.

    PubMed

    Agerskov, Claus

    2016-04-01

    A neural network model of novelty detection in the CA1 subdomain of the hippocampal formation is presented from the perspective of information flow. This computational model is constrained on several levels by both anatomical information about hippocampal circuitry and behavioral data from studies done in rats. Several studies report that the CA1 area broadcasts a generalized novelty signal in response to changes in the environment. Using the neural engineering framework developed by Eliasmith et al., a spiking neural network architecture is created that is able to compare high-dimensional vectors, symbolizing semantic information, according to the semantic pointer hypothesis. The model computes the similarity between the vectors, given both as direct inputs and as a recalled memory from a long-term memory network, by performing the dot-product operation in a novelty neural network architecture. The developed CA1 model agrees with available neuroanatomical data, as well as the presented behavioral data, and so it is a biologically realistic model of novelty detection in the hippocampus that can provide a feasible explanation for experimentally observed dynamics. PMID:26890351

  5. Biasing vector network analyzers using variable frequency and amplitude signals

    NASA Astrophysics Data System (ADS)

    Nobles, J. E.; Zagorodnii, V.; Hutchison, A.; Celinski, Z.

    2016-08-01

    We report the development of a test setup designed to provide a variable frequency biasing signal to a vector network analyzer (VNA). The test setup is currently used for the testing of liquid crystal (LC) based devices in the microwave region. The use of an AC bias for LC based devices minimizes the negative effects associated with ionic impurities in the media encountered with DC biasing. The test setup utilizes bias tees on the VNA test station to inject the bias signal. The square wave biasing signal is variable from 0.5 to 36.0 V peak-to-peak (VPP) with a frequency range of DC to 10 kHz. The test setup protects the VNA from transient processes, voltage spikes, and high-frequency leakage. Additionally, the signals to the VNA are fused to ½ amp and clipped to a maximum of 36 VPP based on bias tee limitations. This setup allows us to measure S-parameters as a function of both the voltage and the frequency of the applied bias signal.

  6. Biasing vector network analyzers using variable frequency and amplitude signals.

    PubMed

    Nobles, J E; Zagorodnii, V; Hutchison, A; Celinski, Z

    2016-08-01

    We report the development of a test setup designed to provide a variable frequency biasing signal to a vector network analyzer (VNA). The test setup is currently used for the testing of liquid crystal (LC) based devices in the microwave region. The use of an AC bias for LC based devices minimizes the negative effects associated with ionic impurities in the media encountered with DC biasing. The test setup utilizes bias tees on the VNA test station to inject the bias signal. The square wave biasing signal is variable from 0.5 to 36.0 V peak-to-peak (VPP) with a frequency range of DC to 10 kHz. The test setup protects the VNA from transient processes, voltage spikes, and high-frequency leakage. Additionally, the signals to the VNA are fused to ½ amp and clipped to a maximum of 36 VPP based on bias tee limitations. This setup allows us to measure S-parameters as a function of both the voltage and the frequency of the applied bias signal. PMID:27587141

  7. Monthly evaporation forecasting using artificial neural networks and support vector machines

    NASA Astrophysics Data System (ADS)

    Tezel, Gulay; Buyukyildiz, Meral

    2016-04-01

    Evaporation is one of the most important components of the hydrological cycle, but it is relatively difficult to estimate, due to its complexity, as it can be influenced by numerous factors. Estimation of evaporation is important for the design of reservoirs, especially in arid and semi-arid areas. Artificial neural network methods and support vector machines (SVM) are frequently utilized to estimate evaporation and other hydrological variables. In this study, the usability of artificial neural networks (ANNs) (multilayer perceptron (MLP) and radial basis function network (RBFN)) and ɛ-support vector regression (SVR) artificial intelligence methods was investigated to estimate monthly pan evaporation. For this aim, temperature, relative humidity, wind speed, and precipitation data for the period 1972 to 2005 from the Beysehir meteorology station were used as input variables, while pan evaporation values were used as output. The Romanenko and Meyer methods were also considered for comparison. The results were compared with observed class A pan evaporation data. In the MLP method, four different training algorithms were used: gradient descent with momentum and adaptive learning rule backpropagation (GDX), Levenberg-Marquardt (LVM), scaled conjugate gradient (SCG), and resilient backpropagation (RBP). The ɛ-SVR model was used as the SVR model. The models were designed via 10-fold cross-validation (CV); algorithm performance was assessed via mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R²). According to the performance criteria, the ANN algorithms and ɛ-SVR had similar results, and both were found to perform better than the Romanenko and Meyer methods. Consequently, the best performance on the test data was obtained using SCG(4,2,2,1) with R² = 0.905.
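
    The Beysehir data are not reproduced here, but the evaluation protocol (10-fold CV scored by MAE, RMSE, and R²) can be sketched generically with scikit-learn; the synthetic inputs below merely stand in for temperature, humidity, wind speed, and precipitation.

        # Sketch of the 10-fold CV evaluation protocol with synthetic stand-in data.
        import numpy as np
        from sklearn.model_selection import cross_validate
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        X = rng.random((400, 4))                       # temperature, humidity, wind, precipitation
        y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.standard_normal(400)

        models = {
            "MLP": make_pipeline(StandardScaler(),
                                 MLPRegressor(hidden_layer_sizes=(8, 4), max_iter=2000, random_state=0)),
            "eps-SVR": make_pipeline(StandardScaler(), SVR(kernel="rbf", epsilon=0.05)),
        }
        scoring = ("neg_mean_absolute_error", "neg_root_mean_squared_error", "r2")
        for name, model in models.items():
            cv = cross_validate(model, X, y, cv=10, scoring=scoring)
            print(name,
                  "MAE", round(-cv["test_neg_mean_absolute_error"].mean(), 3),
                  "RMSE", round(-cv["test_neg_root_mean_squared_error"].mean(), 3),
                  "R2", round(cv["test_r2"].mean(), 3))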

  8. Performance of Ultra-Scale Applications on Leading Vector andScalar HPC Platforms

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Simon, Horst; Ethier, Stephane; Parks, David; Kitawaki, Shigemune; Tsuda, Yoshinori; Sato, Tetsuya

    2005-01-01

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end capability and capacity computers, primarily because of their generality, scalability, and cost effectiveness. However, the constant degradation of superscalar sustained performance has become a well-known problem in the scientific computing community. This trend has been widely attributed to the use of superscalar-based commodity components whose architectural designs offer a balance between memory performance, network capability, and execution rate that is poorly matched to the requirements of large-scale numerical computations. The recent development of massively parallel vector systems offers the potential to bridge the performance gap for many important classes of algorithms. In this study we examine four diverse scientific applications with the potential to run at ultrascale, from the areas of plasma physics, material science, astrophysics, and magnetic fusion. We compare the performance of the vector-based Earth Simulator (ES) and Cray X1 with leading superscalar-based platforms: the IBM Power3/4 and the SGI Altix. Results demonstrate that the ES vector systems achieve excellent performance on our application suite - the highest of any architecture tested to date.

  9. Maximizing sparse matrix vector product performance in MIMD computers

    SciTech Connect

    McLay, R.T.; Kohli, H.S.; Swift, S.L.; Carey, G.F.

    1994-12-31

    A considerable component of the computational effort involved in conjugate gradient solution of structured sparse matrix systems is expended during the matrix-vector product (MVP), and hence it is the focus of most efforts at improving performance. Such efforts are hindered on MIMD machines by constraints on memory, cache, and the speed of memory-CPU data transfer. This paper describes a strategy for maximizing the performance of the local computations associated with the MVP. The method focuses on single-stride memory access and the efficient use of cache by pre-loading it with data that is re-used while bypassing it for other data. The algorithm is designed to behave optimally for varying grid sizes and numbers of unknowns per gridpoint. Results from an assembly language implementation of the strategy on the iPSC/860 show a significant improvement over the performance using FORTRAN.
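
    For reference, the sparse matrix-vector product itself is shown below in compressed sparse row (CSR) form; the paper's assembly-level tuning targets cache behavior and memory strides, which this high-level sketch makes no attempt to reproduce.

        # Sparse matrix-vector product in CSR (compressed sparse row) form.
        import numpy as np

        def csr_matvec(indptr, indices, data, x):
            y = np.zeros(len(indptr) - 1)
            for row in range(len(y)):
                start, end = indptr[row], indptr[row + 1]
                y[row] = np.dot(data[start:end], x[indices[start:end]])
            return y

        # 3x3 example:  [[4, 0, 1],
        #                [0, 3, 0],
        #                [2, 0, 5]]
        indptr = np.array([0, 2, 3, 5])
        indices = np.array([0, 2, 1, 0, 2])
        data = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
        x = np.array([1.0, 2.0, 3.0])
        print(csr_matvec(indptr, indices, data, x))   # expected [7. 6. 17.]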

  10. Locally connected neural network with improved feature vector

    NASA Technical Reports Server (NTRS)

    Thomas, Tyson (Inventor)

    2004-01-01

    A pattern recognizer which uses neuromorphs with a fixed amount of energy that is distributed among the elements. The distribution of the energy is used to form a histogram which is used as a feature vector.

  11. Double Virus Vector Infection to the Prefrontal Network of the Macaque Brain

    PubMed Central

    Tanaka, Shingo; Koizumi, Masashi; Kikusui, Takefumi; Ichihara, Nobutsune; Kato, Shigeki; Kobayashi, Kazuto; Sakagami, Masamichi

    2015-01-01

    To precisely understand how higher cognitive functions are implemented in the prefrontal network of the brain, optogenetic and pharmacogenetic methods to manipulate the signal transmission of a specific neural pathway are required. The application of these methods, however, has been mostly restricted to animals other than the primate, which is the best animal model to investigate higher cognitive functions. In this study, we used a double viral vector infection method in the prefrontal network of the macaque brain. This enabled us to express specific constructs into specific neurons that constitute a target pathway without use of germline genetic manipulation. The double-infection technique utilizes two different virus vectors in two monosynaptically connected areas. One is a vector which can locally infect cell bodies of projection neurons (local vector) and the other can retrogradely infect from axon terminals of the same projection neurons (retrograde vector). The retrograde vector incorporates the sequence which encodes Cre recombinase and the local vector incorporates the “Cre-On” FLEX double-floxed sequence in which a reporter protein (mCherry) was encoded. mCherry thus came to be expressed only in doubly infected projection neurons with these vectors. We applied this method to two macaque monkeys and targeted two different pathways in the prefrontal network: The pathway from the lateral prefrontal cortex to the caudate nucleus and the pathway from the lateral prefrontal cortex to the frontal eye field. As a result, mCherry-positive cells were observed in the lateral prefrontal cortex in all of the four injected hemispheres, indicating that the double virus vector transfection is workable in the prefrontal network of the macaque brain. PMID:26193102

  12. Distributed Vector Estimation for Power- and Bandwidth-Constrained Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Sani, Alireza; Vosoughi, Azadeh

    2016-08-01

    We consider distributed estimation of a Gaussian vector with a linear observation model in an inhomogeneous wireless sensor network, where a fusion center (FC) reconstructs the unknown vector using a linear estimator. Sensors employ uniform multi-bit quantizers and binary PSK modulation, and communicate with the FC over orthogonal power- and bandwidth-constrained wireless channels. We study transmit power and quantization rate (measured in bits per sensor) allocation schemes that minimize the mean-square error (MSE). In particular, we derive two closed-form upper bounds on the MSE in terms of the optimization parameters and propose coupled and decoupled resource allocation schemes that minimize these bounds. We show that the bounds are good approximations of the simulated MSE and that the performance of the proposed schemes approaches the clairvoyant centralized estimation when the total transmit power or bandwidth is very large. We study how the power and rate allocations depend on the sensors' observation qualities and channel gains, as well as on the total transmit power and bandwidth constraints. Our simulations corroborate our analytical results and illustrate the superior performance of the proposed algorithms.

  13. Intercomparison of Terahertz Dielectric Measurements Using Vector Network Analyzer and Time-Domain Spectrometer

    NASA Astrophysics Data System (ADS)

    Naftaly, Mira; Shoaib, Nosherwan; Stokes, Daniel; Ridler, Nick M.

    2016-07-01

    We describe a method for direct intercomparison of terahertz permittivities at 200 GHz obtained by a Vector Network Analyzer and a Time-Domain Spectrometer, whereby both instruments operate in their customary configurations, i.e., the VNA in waveguide and TDS in free-space. The method employs material that can be inserted into a waveguide for VNA measurements or contained in a cell for TDS measurements. The intercomparison experiments were performed using two materials: petroleum jelly and a mixture of petroleum jelly with carbon powder. The obtained values of complex permittivities were similar within the measurement uncertainty. An intercomparison between VNA and TDS measurements is of importance because the two modalities are customarily employed separately and require different approaches. Since material permittivities can and have been measured using either platform, it is necessary to ascertain that the obtained data is similar in both cases.

  14. Intercomparison of Terahertz Dielectric Measurements Using Vector Network Analyzer and Time-Domain Spectrometer

    NASA Astrophysics Data System (ADS)

    Naftaly, Mira; Shoaib, Nosherwan; Stokes, Daniel; Ridler, Nick M.

    2016-02-01

    We describe a method for direct intercomparison of terahertz permittivities at 200 GHz obtained by a Vector Network Analyzer and a Time-Domain Spectrometer, whereby both instruments operate in their customary configurations, i.e., the VNA in waveguide and TDS in free-space. The method employs material that can be inserted into a waveguide for VNA measurements or contained in a cell for TDS measurements. The intercomparison experiments were performed using two materials: petroleum jelly and a mixture of petroleum jelly with carbon powder. The obtained values of complex permittivities were similar within the measurement uncertainty. An intercomparison between VNA and TDS measurements is of importance because the two modalities are customarily employed separately and require different approaches. Since material permittivities can and have been measured using either platform, it is necessary to ascertain that the obtained data is similar in both cases.

  15. Diagnosing Anomalous Network Performance with Confidence

    SciTech Connect

    Settlemyer, Bradley W; Hodson, Stephen W; Kuehn, Jeffery A; Poole, Stephen W

    2011-04-01

    Variability in network performance is a major obstacle in effectively analyzing the throughput of modern high performance computer systems. High performance interconnection networks offer excellent best-case network latencies; however, highly parallel applications running on parallel machines typically require consistently high levels of performance to adequately leverage the massive amounts of available computing power. Performance analysts have usually quantified network performance using traditional summary statistics that assume the observational data is sampled from a normal distribution. In our examinations of network performance, we have found this method of analysis often provides too little data to understand anomalous network performance. Our tool, Confidence, instead uses an empirically derived probability distribution to characterize network performance. In this paper we describe several instances where the Confidence toolkit allowed us to understand and diagnose network performance anomalies that we could not adequately explore with the simple summary statistics provided by traditional measurement tools. In particular, we examine a multi-modal performance scenario encountered with an Infiniband interconnection network and we explore the performance repeatability on the custom Cray SeaStar2 interconnection network after a set of software and driver updates.

  16. HYBRID NEURAL NETWORK AND SUPPORT VECTOR MACHINE METHOD FOR OPTIMIZATION

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2005-01-01

    System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.

  17. Hybrid Neural Network and Support Vector Machine Method for Optimization

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2007-01-01

    System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.

  18. Data Access Performance Through Parallelization and Vectored Access: Some Results

    SciTech Connect

    Furano, Fabrizio; Hanushevsky, Andrew; /SLAC

    2011-11-10

    High Energy Physics data processing and analysis applications typically deal with the problem of accessing and processing data at high speed. Recent studies, development, and test work have shown that the latencies due to data access can often be hidden by parallelizing them with the data processing, thus making it possible for applications to process remote data with a high level of efficiency. Techniques and algorithms able to reach this result have been implemented in the client side of the Scalla/xrootd system, and in this contribution we describe the results of some tests done in order to compare their performance and characteristics. These techniques, if used together with multiple-stream data access, can also be effective in allowing applications to deal efficiently and transparently with data repositories accessible via a wide area network.
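
    The Scalla/xrootd client implements this latency hiding internally (with read-ahead and vectored reads); the generic sketch below only illustrates the underlying idea of overlapping the fetch of the next block with processing of the current one, using made-up latencies.

        # Generic illustration of hiding access latency by overlapping fetch and processing.
        import time
        from concurrent.futures import ThreadPoolExecutor

        def fetch_block(i):
            time.sleep(0.05)            # stand-in for remote-read latency
            return bytes(1024)

        def process_block(block):
            time.sleep(0.05)            # stand-in for CPU work on the block
            return len(block)

        def run(n_blocks=20):
            total = 0
            with ThreadPoolExecutor(max_workers=1) as pool:
                pending = pool.submit(fetch_block, 0)
                for i in range(n_blocks):
                    block = pending.result()
                    if i + 1 < n_blocks:
                        pending = pool.submit(fetch_block, i + 1)   # overlap next fetch...
                    total += process_block(block)                    # ...with processing
            return total

        start = time.time()
        run()
        print("overlapped time:", round(time.time() - start, 2), "s (vs ~2 s if serialized)")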

  19. Radio to microwave dielectric characterisation of constitutive electromagnetic soil properties using vector network analyses

    NASA Astrophysics Data System (ADS)

    Schwing, M.; Wagner, N.; Karlovsek, J.; Chen, Z.; Williams, D. J.; Scheuermann, A.

    2016-04-01

    Knowledge of the constitutive broadband electromagnetic (EM) properties of porous media such as soils and rocks is essential in the theoretical and numerical modeling of EM wave propagation in the subsurface. This paper presents an experimental and numerical study of the performance of EM measuring instruments for broadband EM waves in the radio-to-microwave frequency range. 3-D numerical calculations of a specific sensor were carried out using Ansys HFSS (high frequency structural simulator) to further evaluate the probe performance. In addition, six different sensors of varying design, application purpose, and operational frequency range were tested on different calibration liquids and a sample of fine-grained soil over a frequency range of 1 MHz-40 GHz using four vector network analysers. The resulting dielectric spectrum of the soil was analysed and interpreted using a 3-term Cole-Cole model under consideration of a direct current conductivity contribution. Comparison of sensor performances on calibration materials and fine-grained soils showed consistency in the measured dielectric spectra over a frequency range from 100 MHz to 2 GHz. By combining open-ended coaxial line and coaxial transmission line measurements, the observable frequency window could be extended to a truly broad frequency range of 1 MHz-40 GHz.
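
    The fitted soil parameters are not reproduced here; the sketch below only evaluates the model form named in the abstract, a 3-term Cole-Cole relaxation with a DC-conductivity term, eps(w) = eps_inf + sum_k d_eps_k / (1 + (j w tau_k)^(1 - alpha_k)) - j sigma_dc / (w eps0), using illustrative parameter values.

        # Generic 3-term Cole-Cole model with a DC-conductivity term (illustrative values).
        import numpy as np

        EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

        def cole_cole(freq_hz, eps_inf, terms, sigma_dc):
            """terms: list of (delta_eps, tau_seconds, alpha) tuples."""
            w = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
            eps = np.full(w.shape, eps_inf, dtype=complex)
            for d_eps, tau, alpha in terms:
                eps += d_eps / (1.0 + (1j * w * tau) ** (1.0 - alpha))
            eps -= 1j * sigma_dc / (w * EPS0)
            return eps

        freqs = np.logspace(6, 10.6, 200)                       # 1 MHz to ~40 GHz
        terms = [(15.0, 1e-9, 0.1), (8.0, 1e-10, 0.05), (20.0, 1e-7, 0.3)]
        eps = cole_cole(freqs, eps_inf=5.0, terms=terms, sigma_dc=0.01)
        k = np.argmin(abs(freqs - 1e9))
        print("eps' and eps'' at 1 GHz:", round(eps[k].real, 2), round(-eps[k].imag, 2))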

  20. Retroviral vector performance in defined chromosomal Loci of modular packaging cell lines.

    PubMed

    Gama-Norton, L; Herrmann, S; Schucht, R; Coroadinha, A S; Löw, R; Alves, P M; Bartholomae, C C; Schmidt, M; Baum, C; Schambach, A; Hauser, H; Wirth, D

    2010-08-01

    The improvement of safety and titer of retroviral vectors produced in standard retroviral packaging cell lines is hampered because production relies on uncontrollable vector integration events. The influences of chromosomal surroundings make it difficult to dissect the performance of a specific vector from the chromosomal surroundings of the respective integration site. Taking advantage of a technology that relies on the use of packaging cell lines with predefined integration sites, we have systematically evaluated the performance of several retroviral vectors. In two previously established modular packaging cell lines (Flp293A and 293 FLEX) with single, defined chromosomal integration sites, retroviral vectors were integrated by means of Flp-mediated site-specific recombination. Vectors that are distinguished by different long terminal repeat promoters were introduced in either the sense or reverse orientation. The results show that the promoter, viral vector orientation, and integration site are the main determinants of the titer. Furthermore, we exploited the viral production systems to evaluate read-through activity. Read-through is thought to be caused by inefficient termination of vector transcription and is inherent to the nature of retroviral vectors. We assessed the frequency of transduction of sequences flanking the retroviral vectors from both integration sites. The approach presented here provides a platform for systematic design and evaluation of the efficiency and safety of retroviral vectors optimized for a given producer cell line. PMID:20222806

  1. Improved input representation for enhancement of neural network performance

    SciTech Connect

    Aldrich, C.H.; An, Z.G.; Lee, K.; Lee, Y.C.

    1987-01-01

    The performance of an associative memory network depends significantly on the representation of the data. For example, it has already been recognized that a bipolar representation of neurons with -1 and +1 states outperforms neurons with on and off states of +1 and 0, respectively. This paper shows that a simple modification of the pattern vector to have zero bias provides an even more significant increase in the performance of an associative memory network. The higher-order algorithm is used for the numerical simulation studies of this paper. To the lowest order this algorithm reduces to the Hopfield model for auto-associative memory and to the bidirectional associative memory (BAM) for the hetero-associative memory model, respectively. 16 refs., 4 figs., 1 tab.
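
    The paper studies a higher-order algorithm; the toy sketch below only uses the lowest-order (Hopfield) case mentioned above and shows the representation change itself, shifting each stored pattern to zero mean before forming the weight matrix. It does not reproduce the paper's results, and whether the difference shows up numerically depends on the pattern statistics and loading.

        # Toy sketch: Hopfield-style recall with biased +/-1 patterns vs. the same
        # patterns shifted to zero mean ("zero bias") before storage.
        import numpy as np

        def store(patterns):
            W = patterns.T @ patterns / patterns.shape[1]
            np.fill_diagonal(W, 0.0)
            return W

        def recall(W, probe, steps=20):
            s = probe.copy()
            for _ in range(steps):
                s = np.sign(W @ s)
                s[s == 0] = 1.0
            return s

        rng = np.random.default_rng(0)
        n, n_pat = 200, 10
        pats = np.where(rng.random((n_pat, n)) < 0.7, 1.0, -1.0)       # biased +/-1 patterns
        pats_zero = pats - pats.mean(axis=1, keepdims=True)            # zero-bias representation

        probe = pats[0] * np.where(rng.random(n) < 0.1, -1.0, 1.0)     # 10% of bits flipped
        for name, P in (("biased", pats), ("zero-bias", pats_zero)):
            out = recall(store(P), probe)
            overlap = float(np.sign(out) @ np.sign(pats[0])) / n
            print(name, "overlap with stored pattern:", round(overlap, 2))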

  2. Effects of internal yaw-vectoring devices on the static performance of a pitch-vectoring nonaxisymmetric convergent-divergent nozzle

    NASA Technical Reports Server (NTRS)

    Asbury, Scott C.

    1993-01-01

    An investigation was conducted in the static test facility of the Langley 16-Foot Transonic Tunnel to evaluate the internal performance of a nonaxisymmetric convergent-divergent nozzle designed to have simultaneous pitch and yaw thrust vectoring capability. This concept utilized divergent flap deflection for thrust vectoring in the pitch plane and flow-turning deflectors installed within the divergent flaps for yaw thrust vectoring. Modifications consisting of reducing the sidewall length and deflecting the sidewall outboard were investigated as means to increase yaw-vectoring performance. This investigation studied the effects of multiaxis (pitch and yaw) thrust vectoring on nozzle internal performance characteristics. All tests were conducted with no external flow, and the nozzle pressure ratio was varied from 2.0 to approximately 13.0. The results indicate that this nozzle concept can successfully generate multiaxis thrust vectoring. Deflection of the divergent flaps produced resultant pitch vector angles that, although dependent on nozzle pressure ratio, were nearly equal to the geometric pitch vector angle. Losses in resultant thrust due to pitch vectoring were small or negligible. The yaw deflectors produced resultant yaw vector angles up to 21 degrees that were controllable by varying yaw deflector rotation. However, yaw deflector rotation resulted in significant losses in thrust ratios and, in some cases, nozzle discharge coefficient. Either of the sidewall modifications generally reduced these losses and increased the maximum resultant yaw vector angle. During multiaxis (simultaneous pitch and yaw) thrust vectoring, little or no cross coupling between the thrust vectoring processes was observed.

  3. Student Collaborative Networks and Academic Performance

    NASA Astrophysics Data System (ADS)

    Schmidt, David; Bridgeman, Ariel; Kohl, Patrick

    2013-04-01

    Undergraduate physics students commonly collaborate with one another on homework assignments, especially in more challenging courses. However, there currently exists a dearth of empirical research directly comparing the structure of students' collaborative networks to their academic performances in lower and upper division physics courses. We investigate such networks and associated performances through a mandated collaboration reporting system in two sophomore level and three junior level physics courses during the Fall 2012 and Spring 2013 semesters. We employ social network analysis to quantify the structure and time evolution of networks involving approximately 140 students. Analysis includes analytical and numerical assignments in addition to homework and exam scores. Preliminary results are discussed.

  4. Target localization in wireless sensor networks using online semi-supervised support vector regression.

    PubMed

    Yoo, Jaehyun; Kim, H Jin

    2015-01-01

    Machine learning has been successfully used for target localization in wireless sensor networks (WSNs) due to its accurate and robust estimation against highly nonlinear and noisy sensor measurements. For efficient and adaptive learning, this paper introduces online semi-supervised support vector regression (OSS-SVR). The first advantage of the proposed algorithm is that, based on the semi-supervised learning framework, it can reduce the requirement on the amount of labeled training data while maintaining accurate estimation. Second, with an extension to online learning, the proposed OSS-SVR automatically tracks changes of the system to be learned, such as varying noise characteristics. We compare the proposed algorithm with semi-supervised manifold learning, an online Gaussian process, and online semi-supervised colocalization. The algorithms are evaluated for estimating the unknown location of a mobile robot in a WSN. The experimental results show that the proposed algorithm is more accurate with a smaller amount of labeled training data and is robust to varying noise. Moreover, the suggested algorithm is computationally fast while maintaining the best localization performance in comparison with the other methods. PMID:26024420

  5. The interplay of vaccination and vector control on small dengue networks.

    PubMed

    Hendron, Ross-William S; Bonsall, Michael B

    2016-10-21

    Dengue fever is a major public health issue affecting billions of people in over 100 countries across the globe. This challenge is growing as the invasive mosquito vectors, Aedes aegypti and Aedes albopictus, expand their distributions and increase their population sizes. Hence there is an increasing need to devise effective control methods that can contain dengue outbreaks. Here we construct an epidemiological model for virus transmission between vectors and hosts on a network of host populations distributed among city and town patches, and investigate disease control through vaccination and vector control using variants of the sterile insect technique (SIT). Analysis of the basic reproductive number and simulations indicate that host movement across this small network influences the severity of epidemics. Both vaccination and vector control strategies are investigated as methods of disease containment and our results indicate that these controls can be made more effective with mixed strategy solutions. We predict that reduced lethality through poor SIT methods or imperfectly efficacious vaccines will impact efforts to control disease spread. In particular, weakly efficacious vaccination strategies in the presence of multiple virus serotypes may be counterproductive to disease control efforts. Even so, failings of one method may be mitigated by supplementing it with an alternative control strategy. Generally, our network approach encourages decision making to consider connected populations and emphasises that successful control methods must effectively suppress dengue epidemics at this landscape scale. PMID:27457093

  6. Artificial Astrocytes Improve Neural Network Performance

    PubMed Central

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  7. Artificial astrocytes improve neural network performance.

    PubMed

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  8. Epidemic spreading and global stability of an SIS model with an infective vector on complex networks

    NASA Astrophysics Data System (ADS)

    Kang, Huiyan; Fu, Xinchu

    2015-10-01

    In this paper, we present a new SIS model with delay on scale-free networks. The model is suitable for describing epidemics that are not only transmitted by a vector but also spread between individuals by direct contact. In view of the biological relevance and the real spreading process, we introduce a delay to denote the average incubation period of the disease in a vector. By mathematical analysis, we obtain the epidemic threshold and prove the global stability of the equilibria. The simulations show that the delay affects the epidemic spreading. Finally, we investigate and compare two major immunization strategies, uniform immunization and targeted immunization.
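
    For orientation, the standard degree-based mean-field estimate of the SIS epidemic threshold on an uncorrelated network is lambda_c ~ <k>/<k^2>; the sketch below evaluates it on a Barabási-Albert graph. This is a generic estimate under stated assumptions, not the delayed, vector-mediated threshold derived in the paper.

      # Hedged sketch: degree-based mean-field SIS threshold on a scale-free
      # network, lambda_c ~ <k>/<k^2>. Network size and parameters are illustrative.
      import networkx as nx
      import numpy as np

      G = nx.barabasi_albert_graph(n=10_000, m=3, seed=1)
      k = np.array([d for _, d in G.degree()], dtype=float)
      lambda_c = k.mean() / (k ** 2).mean()
      print(f"<k> = {k.mean():.2f}, <k^2> = {(k**2).mean():.1f}, lambda_c ~ {lambda_c:.4f}")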

  9. Ultimate conductivity performance in metallic nanowire networks

    NASA Astrophysics Data System (ADS)

    Gomes da Rocha, Claudia; Manning, Hugh G.; O'Callaghan, Colin; Ritter, Carlos; Bellew, Allen T.; Boland, John J.; Ferreira, Mauro S.

    2015-07-01

    In this work, we introduce a combined experimental and computational approach to describe the conductivity of metallic nanowire networks. Due to their highly disordered nature, these materials are typically described by simplified models in which network junctions control the overall conductivity. Here, we introduce a combined experimental and simulation approach that involves a wire-by-wire junction-by-junction simulation of an actual network. Rather than dealing with computer-generated networks, we use a computational approach that captures the precise spatial distribution of wires from an SEM analysis of a real network. In this way, we fully account for all geometric aspects of the network, i.e. for the properties of the junctions and wire segments. Our model predicts characteristic junction resistances that are smaller than those found by earlier simplified models. The model outputs characteristic values that depend on the detailed connectivity of the network, which can be used to compare the performance of different networks and to predict the optimum performance of any network and its scope for improvement.
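
    A junction-by-junction calculation of this kind reduces, in its simplest form, to solving Kirchhoff's equations on a weighted graph. The toy sketch below computes the two-point effective resistance from the graph Laplacian pseudoinverse; the topology and junction resistances are invented for illustration, not extracted from an SEM image.

      # Hedged sketch: effective resistance of a tiny junction-resistance network.
      # Nodes stand for wire segments, weighted edges for junction conductances.
      import numpy as np

      edges = [(0, 1, 200.0), (1, 2, 150.0), (0, 3, 300.0),
               (3, 2, 180.0), (1, 3, 250.0)]            # (node i, node j, junction resistance, ohm)
      n = 4
      L = np.zeros((n, n))                               # weighted graph Laplacian
      for i, j, r in edges:
          g = 1.0 / r                                    # junction conductance
          L[i, i] += g; L[j, j] += g
          L[i, j] -= g; L[j, i] -= g

      Lp = np.linalg.pinv(L)                             # Laplacian pseudoinverse
      s, t = 0, 2                                        # electrode nodes
      R_eff = Lp[s, s] + Lp[t, t] - 2 * Lp[s, t]         # two-point effective resistance
      print(f"effective resistance: {R_eff:.1f} ohm")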

  10. A Performance Management Initiative for Local Health Department Vector Control Programs

    PubMed Central

    Gerding, Justin; Kirshy, Micaela; Moran, John W.; Bialek, Ron; Lamers, Vanessa; Sarisky, John

    2016-01-01

    Local health department (LHD) vector control programs have experienced reductions in funding and capacity. Acknowledging this situation and its potential effect on the ability to respond to vector-borne diseases, the U.S. Centers for Disease Control and Prevention and the Public Health Foundation partnered on a performance management initiative for LHD vector control programs. The initiative involved 14 programs that conducted a performance assessment using the Environmental Public Health Performance Standards. The programs, assisted by quality improvement (QI) experts, used the assessment results to prioritize improvement areas that were addressed with QI projects intended to increase effectiveness and efficiency in the delivery of services such as responding to mosquito complaints and educating the public about vector-borne disease prevention. This article describes the initiative as a process LHD vector control programs may adapt to meet their performance management needs. This study also reviews aggregate performance assessment results and QI projects, which may reveal common aspects of LHD vector control program performance and priority improvement areas. LHD vector control programs interested in performance assessment and improvement may benefit from engaging in an approach similar to this performance management initiative. PMID:27429555

  11. High Performance Networks for High Impact Science

    SciTech Connect

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  12. Pipelining performance of structured dataflow networks

    SciTech Connect

    Tonge, F.M.

    1983-01-01

    A particular approach to specifying procedure interconnection and allocation is presented. The major result is that, within stated assumptions, networks constructed using a small set of structured process connectives can achieve at least as good throughput (pipelining performance) as arbitrarily interconnected networks. 20 references.

  13. Static internal performance of a single expansion ramp nozzle with multiaxis thrust vectoring capability

    NASA Technical Reports Server (NTRS)

    Capone, Francis J.; Schirmer, Alberto W.

    1993-01-01

    An investigation was conducted at static conditions in order to determine the internal performance characteristics of a multiaxis thrust vectoring single expansion ramp nozzle. Yaw vectoring was achieved by deflecting yaw flaps in the nozzle sidewall into the nozzle exhaust flow. In order to eliminate any physical interference between the variable angle yaw flap deflected into the exhaust flow and the nozzle upper ramp and lower flap which were deflected for pitch vectoring, the downstream corners of both the nozzle ramp and lower flap were cut off to allow for up to 30 deg of yaw vectoring. The effects of nozzle upper ramp and lower flap cutout, yaw flap hinge line location and hinge inclination angle, sidewall containment, geometric pitch vector angle, and geometric yaw vector angle were studied. This investigation was conducted in the static-test facility of the Langley 16-Foot Transonic Tunnel at nozzle pressure ratios up to 8.0.

  14. Internal performance of two nozzles utilizing gimbal concepts for thrust vectoring

    NASA Technical Reports Server (NTRS)

    Berrier, Bobby L.; Taylor, John G.

    1990-01-01

    The internal performance of an axisymmetric convergent-divergent nozzle and a nonaxisymmetric convergent-divergent nozzle, both of which utilized a gimbal-type mechanism for thrust vectoring, was evaluated in the Static Test Facility of the Langley 16-Foot Transonic Tunnel. The nonaxisymmetric nozzle used the gimbal concept for yaw thrust vectoring only; pitch thrust vectoring was accomplished by simultaneous deflection of the upper and lower divergent flaps. The model geometric parameters investigated were pitch vector angle for the axisymmetric nozzle and pitch vector angle, yaw vector angle, nozzle throat aspect ratio, and nozzle expansion ratio for the nonaxisymmetric nozzle. All tests were conducted with no external flow, and nozzle pressure ratio was varied from 2.0 to approximately 12.0.

  15. Enhancing neural-network performance via assortativity.

    PubMed

    de Franciscis, Sebastiano; Johnson, Samuel; Torres, Joaquín J

    2011-03-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations--assortativity--on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information. PMID:21517565

  16. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node "brain" with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  17. Enhancing neural-network performance via assortativity

    SciTech Connect

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-03-15

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.

  18. Spatial Variance in Resting fMRI Networks of Schizophrenia Patients: An Independent Vector Analysis.

    PubMed

    Gopal, Shruti; Miller, Robyn L; Michael, Andrew; Adali, Tulay; Cetin, Mustafa; Rachakonda, Srinivas; Bustillo, Juan R; Cahill, Nathan; Baum, Stefi A; Calhoun, Vince D

    2016-01-01

    Spatial variability in resting functional MRI (fMRI) brain networks has not been well studied in schizophrenia, a disease known for both neurodevelopmental and widespread anatomic changes. Motivated by abundant evidence of neuroanatomical variability from previous studies of schizophrenia, we draw upon a relatively new approach called independent vector analysis (IVA) to assess this variability in resting fMRI networks. IVA is a blind-source separation algorithm, which segregates fMRI data into temporally coherent but spatially independent networks and has been shown to be especially good at capturing spatial variability among subjects in the extracted networks. We introduce several new ways to quantify differences in variability of IVA-derived networks between schizophrenia patients (SZs = 82) and healthy controls (HCs = 89). Voxelwise amplitude analyses showed significant group differences in the spatial maps of auditory cortex, the basal ganglia, the sensorimotor network, and visual cortex. Tests for differences (HC-SZ) in the spatial variability maps suggest that, at rest, SZs exhibit more activity within externally focused sensory and integrative networks and less activity in the default mode network thought to be related to internal reflection. Additionally, tests for difference of variance between groups further emphasize that SZs exhibit greater network variability. These results, consistent with our prediction of increased spatial variability within SZs, enhance our understanding of the disease and suggest that it is not just the amplitude of connectivity that is different in schizophrenia, but also the consistency in spatial connectivity patterns across subjects. PMID:26106217

  19. Measurements by a Vector Network Analyzer at 325 to 508 GHz

    NASA Technical Reports Server (NTRS)

    Fung, King Man; Samoska, Lorene; Chattopadhyay, Goutam; Gaier, Todd; Kangaslahti, Pekka; Pukala, David; Lau, Yuenie; Oleson, Charles; Denning, Anthony

    2008-01-01

    Recent experiments were performed in which return loss and insertion loss of waveguide test assemblies in the frequency range from 325 to 508 GHz were measured by use of a swept-frequency two-port vector network analyzer (VNA) test set. The experiments were part of a continuing effort to develop means of characterizing passive and active electronic components and systems operating at ever increasing frequencies. The waveguide test assemblies comprised WR-2.2 end sections collinear with WR-3.3 middle sections. The test set, assembled from commercially available components, included a 50-GHz VNA scattering- parameter test set and external signal synthesizers, augmented with recently developed frequency extenders, and further augmented with attenuators and amplifiers as needed to adjust radiofrequency and intermediate-frequency power levels between the aforementioned components. The tests included line-reflect-line calibration procedures, using WR-2.2 waveguide shims as the "line" standards and waveguide flange short circuits as the "reflect" standards. Calibrated dynamic ranges somewhat greater than about 20 dB for return loss and 35 dB for insertion loss were achieved. The measurement data of the test assemblies were found to substantially agree with results of computational simulations.

  20. Belief network algorithms: A study of performance

    SciTech Connect

    Jitnah, N.

    1996-12-31

    This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.

  1. Evaluation of Raman spectra of human brain tumor tissue using the learning vector quantization neural network

    NASA Astrophysics Data System (ADS)

    Liu, Tuo; Chen, Changshui; Shi, Xingzhe; Liu, Chengyong

    2016-05-01

    The Raman spectra of tissue from 20 brain tumor patients were recorded in vitro using a confocal microlaser Raman spectroscope with 785 nm excitation. A total of 133 spectra were investigated. Spectral peaks from normal white matter tissue and tumor tissue were analyzed. Algorithms such as principal component analysis, linear discriminant analysis, and the support vector machine are commonly used to analyze spectral data. However, in this study, we employed the learning vector quantization (LVQ) neural network, which is typically used for pattern recognition. By applying the proposed method, a normal diagnosis accuracy of 85.7% and a glioma diagnosis accuracy of 89.5% were achieved. The LVQ neural network is a recent approach to extracting information from Raman spectra. Moreover, it is fast and convenient, does not require identification of spectral peak counterparts, and achieves relatively high accuracy. It can be used in brain tumor prognostics and in helping to optimize the cutting margins of gliomas.
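
    For readers unfamiliar with LVQ, the sketch below implements a bare LVQ1 classifier on synthetic two-class "spectra". The feature dimension, class labels, and learning-rate schedule are illustrative assumptions, not the preprocessing or data used in this study.

      # Hedged sketch: minimal LVQ1 classifier on synthetic feature vectors.
      import numpy as np

      def train_lvq1(X, y, prototypes_per_class=2, lr=0.05, epochs=30, seed=0):
          rng = np.random.default_rng(seed)
          protos, labels = [], []
          for c in np.unique(y):                          # initialize prototypes from class samples
              idx = rng.choice(np.flatnonzero(y == c), prototypes_per_class, replace=False)
              protos.append(X[idx]); labels.append(np.full(prototypes_per_class, c))
          W, Wy = np.vstack(protos), np.concatenate(labels)
          for epoch in range(epochs):
              for i in rng.permutation(len(X)):
                  j = np.argmin(np.linalg.norm(W - X[i], axis=1))   # winning prototype
                  step = lr * (1 - epoch / epochs)                  # decaying learning rate
                  # move toward the sample if the class matches, away otherwise
                  W[j] += step * (X[i] - W[j]) if Wy[j] == y[i] else -step * (X[i] - W[j])
          return W, Wy

      def predict_lvq1(W, Wy, X):
          return Wy[np.argmin(np.linalg.norm(W[None] - X[:, None], axis=2), axis=1)]

      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0, 1, (60, 20)), rng.normal(1.5, 1, (60, 20))])   # toy "spectra"
      y = np.array([0] * 60 + [1] * 60)                   # toy labels: 0 = normal, 1 = tumor
      W, Wy = train_lvq1(X, y)
      print("training accuracy:", (predict_lvq1(W, Wy, X) == y).mean())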

  2. Estimating computer communication network performance using network simulations

    SciTech Connect

    Garcia, A.B.

    1985-01-01

    A generalized queuing model simulation of store-and-forward computer communication networks is developed and implemented using Simulation Language for Alternative Modeling (SLAM). A baseline simulation model is validated by comparison with published analytic models. The baseline model is expanded to include an ACK/NAK data link protocol, four-level message precedence, finite queues, and a response traffic scenario. Network performance, as indicated by average message delay and message throughput, is estimated using the simulation model.

  3. Speech recognition method based on genetic vector quantization and BP neural network

    NASA Astrophysics Data System (ADS)

    Gao, Li'ai; Li, Lihua; Zhou, Jian; Zhao, Qiuxia

    2009-07-01

    Vector quantization is one of the most popular codebook design methods for speech recognition at present. In codebook design, the traditional LBG algorithm has the advantage of fast convergence, but it easily becomes trapped in local optima and is sensitive to the initial codebook. Because the genetic algorithm is capable of finding globally optimal results, this paper proposes a hybrid clustering method, GA-L, that combines the genetic algorithm with the LBG algorithm to improve the codebook, and then uses a genetic neural network for speech recognition, thereby searching for a globally optimized codebook of the training vector space. The experiments show that the neural network identification method based on the genetic algorithm can escape local maxima and the restrictions of the initial conditions, that it outperforms the standard genetic algorithm and the BP neural network algorithm on data from various sources, and that the genetic BP neural network achieves a higher recognition rate and unique application advantages over the general BP neural network with the same GA-VQ codebook, yielding gains in both time and efficiency.
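
    The LBG iteration that the GA-L hybrid builds on can be sketched briefly: split the codebook, then run Lloyd iterations until the distortion stops improving. The splitting factor, codebook size, and the random stand-in for speech feature frames below are assumptions, not the paper's configuration.

      # Hedged sketch: LBG (generalized Lloyd) codebook design on toy feature frames.
      import numpy as np

      def lbg_codebook(X, size=8, eps=1e-3, split=0.01):
          codebook = X.mean(axis=0, keepdims=True)             # start from the global centroid
          while len(codebook) < size:
              codebook = np.vstack([codebook * (1 + split), codebook * (1 - split)])  # split step
              prev = np.inf
              while True:                                      # Lloyd iterations
                  d = np.linalg.norm(X[:, None] - codebook[None], axis=2)
                  nearest = d.argmin(axis=1)
                  distortion = d[np.arange(len(X)), nearest].mean()
                  for k in range(len(codebook)):               # recompute centroids
                      if np.any(nearest == k):
                          codebook[k] = X[nearest == k].mean(axis=0)
                  if prev - distortion < eps * distortion:
                      break
                  prev = distortion
          return codebook

      X = np.random.default_rng(1).normal(size=(500, 12))      # toy 12-dimensional feature frames
      print(lbg_codebook(X, size=8).shape)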

  4. Diversity Performance Analysis on Multiple HAP Networks.

    PubMed

    Dong, Feihong; Li, Min; Gong, Xiangwu; Li, Hongjun; Gao, Fengyue

    2015-01-01

    One of the main design challenges in wireless sensor networks (WSNs) is achieving a high-data-rate transmission for individual sensor devices. The high altitude platform (HAP) is an important communication relay platform for WSNs and next-generation wireless networks. Multiple-input multiple-output (MIMO) techniques provide the diversity and multiplexing gain, which can improve the network performance effectively. In this paper, a virtual MIMO (V-MIMO) model is proposed by networking multiple HAPs with the concept of multiple assets in view (MAV). In a shadowed Rician fading channel, the diversity performance is investigated. The probability density function (PDF) and cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) are derived. In addition, the average symbol error rate (ASER) with BPSK and QPSK is given for the V-MIMO model. The system capacity is studied for both perfect channel state information (CSI) and unknown CSI individually. The ergodic capacity with various SNR and Rician factors for different network configurations is also analyzed. The simulation results validate the effectiveness of the performance analysis. It is shown that the performance of the HAPs network in WSNs can be significantly improved by utilizing the MAV to achieve overlapping coverage, with the help of the V-MIMO techniques. PMID:26134102
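
    The ASER and diversity claims in this record lend themselves to a quick numerical check. Below is a minimal sketch assuming plain (non-shadowed) Rician fading, BPSK, and maximal-ratio combining; the Rician factor, SNR, and branch counts are placeholders rather than the paper's V-MIMO configuration.

      # Hedged sketch: Monte Carlo average symbol error rate for BPSK with
      # maximal-ratio combining over independent Rician branches.
      import numpy as np
      from scipy.special import erfc

      def bpsk_aser(snr_db, branches=2, K=5.0, trials=100_000, seed=0):
          rng = np.random.default_rng(seed)
          snr = 10 ** (snr_db / 10)
          los = np.sqrt(K / (K + 1))                      # line-of-sight amplitude
          nlos = np.sqrt(1 / (2 * (K + 1)))               # scattered component std per quadrature
          h = los + nlos * (rng.normal(size=(trials, branches))
                            + 1j * rng.normal(size=(trials, branches)))
          gamma = snr * np.abs(h) ** 2                    # per-branch instantaneous SNR
          gamma_mrc = gamma.sum(axis=1)                   # maximal-ratio combining
          return np.mean(0.5 * erfc(np.sqrt(gamma_mrc)))  # BPSK error probability given the SNR

      for n in (1, 2, 4):
          print(n, "branch(es):", bpsk_aser(10.0, branches=n))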

  5. Diversity Performance Analysis on Multiple HAP Networks

    PubMed Central

    Dong, Feihong; Li, Min; Gong, Xiangwu; Li, Hongjun; Gao, Fengyue

    2015-01-01

    One of the main design challenges in wireless sensor networks (WSNs) is achieving a high-data-rate transmission for individual sensor devices. The high altitude platform (HAP) is an important communication relay platform for WSNs and next-generation wireless networks. Multiple-input multiple-output (MIMO) techniques provide the diversity and multiplexing gain, which can improve the network performance effectively. In this paper, a virtual MIMO (V-MIMO) model is proposed by networking multiple HAPs with the concept of multiple assets in view (MAV). In a shadowed Rician fading channel, the diversity performance is investigated. The probability density function (PDF) and cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) are derived. In addition, the average symbol error rate (ASER) with BPSK and QPSK is given for the V-MIMO model. The system capacity is studied for both perfect channel state information (CSI) and unknown CSI individually. The ergodic capacity with various SNR and Rician factors for different network configurations is also analyzed. The simulation results validate the effectiveness of the performance analysis. It is shown that the performance of the HAPs network in WSNs can be significantly improved by utilizing the MAV to achieve overlapping coverage, with the help of the V-MIMO techniques. PMID:26134102

  6. WDM backbone network with guaranteed performance planning

    NASA Astrophysics Data System (ADS)

    Liang, Peng; Sheng, Wang; Zhong, Xusi; Li, Lemin

    2005-11-01

    Wavelength-division multiplexing (WDM), which allows a single fibre to carry multiple signals simultaneously, has been widely used to increase link capacity and is a promising technology for backbone transport networks. However, designing such a WDM backbone network is hard for two reasons: the uncertainty of future traffic demand, and the difficulty of planning backup resources for failure conditions. As a result, an enormous amount of link capacity has to be provisioned for the network. Recently, a new approach called the Valiant Load-Balanced Scheme (VLBS) has been proposed to design WDM backbone networks. A network planned with the Valiant Load-Balanced Scheme is insensitive to the traffic pattern and continues to guarantee performance under a user-defined number of link or node failures. In this paper, the Valiant Load-Balanced Scheme (VLBS) for backbone network planning is studied and a new Valiant Load-Balanced Scheme is proposed. Compared with earlier work, the new scheme is much more general and can be used to compute the link capacity of both homogeneous and heterogeneous networks; this general Valiant Load-Balanced Scheme is abbreviated GVLBS. After a brief description of the VLBS, we give a detailed derivation of the GVLBS. The central idea of the derivation is to transform the heterogeneous network into a homogeneous network and then take advantage of the VLBS to obtain the GVLBS. This transformation process is described, and the derivation and analysis of the GVLBS link capacity under normal and failure conditions are also given. The numerical results show that the GVLBS can compute the minimum link capacity required for a heterogeneous backbone network under different conditions (normal or failure).

  7. Performance characterization of a broadband vector Apodizing Phase Plate coronagraph.

    PubMed

    Otten, Gilles P P L; Snik, Frans; Kenworthy, Matthew A; Miskiewicz, Matthew N; Escuti, Michael J

    2014-12-01

    One of the main challenges for the direct imaging of planets around nearby stars is the suppression of the diffracted halo from the primary star. Coronagraphs are angular filters that suppress this diffracted halo. The Apodizing Phase Plate coronagraph modifies the pupil-plane phase with an anti-symmetric pattern to suppress diffraction over a 180 degree region from 2 to 7 λ/D and achieves a mean raw contrast of 10^(-4) in this area, independent of the tip-tilt stability of the system. Current APP coronagraphs implemented using classical phase techniques are limited in bandwidth and suppression region geometry (i.e. only on one side of the star). In this paper, we introduce the vector-APP (vAPP) whose phase pattern is implemented through the vector phase imposed by the orientation of patterned liquid crystals. Beam-splitting according to circular polarization states produces two, complementary PSFs with dark holes on either side. We have developed a prototype vAPP that consists of a stack of three twisting liquid crystal layers to yield a bandwidth of 500 to 900 nm. We characterize the properties of this device using reconstructions of the pupil-plane pattern, and of the ensuing PSF structures. By imaging the pupil between crossed and parallel polarizers we reconstruct the fast axis pattern, transmission, and retardance of the vAPP, and use this as input for a PSF model. This model includes aberrations of the laboratory set-up, and matches the measured PSF, which shows a raw contrast of 10^(-3.8) between 2 and 7 λ/D in a 135 degree wedge. The vAPP coronagraph is relatively easy to manufacture and can be implemented together with a broadband quarter-wave plate and Wollaston prism in a pupil wheel in high-contrast imaging instruments. The liquid crystal patterning technique permits the application of extreme phase patterns with deeper contrasts inside the dark holes, and the multilayer liquid crystal achromatization technique enables unprecedented spectral bandwidths

  8. Performance analysis of distributed symmetric sparse matrix vector multiplication algorithm for multi-core architectures

    SciTech Connect

    Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; Maris, Pieter; Vary, James P.

    2015-07-14

    In this article, we consider sparse matrix vector multiply (SpMVM), an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. Our target systems are high-end multi-core architectures, and we use a message passing interface + open multiprocessing (MPI + OpenMP) hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of the distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topology. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.
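
    As a minimal illustration of the symmetry exploitation described here, the sketch below forms y = Ax from only the upper triangle of a symmetric sparse matrix, serially with SciPy. The distributed MPI + OpenMP machinery and communication hiding are not reproduced, and the matrix is a random placeholder rather than a nuclear-structure Hamiltonian.

      # Hedged sketch: symmetry-exploiting SpMVM using only the upper triangle.
      import numpy as np
      import scipy.sparse as sp

      rng = np.random.default_rng(0)
      A = sp.random(2000, 2000, density=0.002, random_state=0, format="csr")
      A = A + A.T                                      # make the matrix symmetric
      x = rng.normal(size=A.shape[0])

      U = sp.triu(A, k=0, format="csr")                # upper triangle including the diagonal
      D = sp.diags(A.diagonal())
      y_sym = U @ x + U.T @ x - D @ x                  # avoid double-counting the diagonal
      y_ref = A @ x                                    # full-matrix reference

      print("max abs difference:", np.abs(y_sym - y_ref).max())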

  9. Performance analysis of distributed symmetric sparse matrix vector multiplication algorithm for multi-core architectures

    DOE PAGESBeta

    Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; Maris, Pieter; Vary, James P.

    2015-07-14

    In this article, we consider sparse matrix vector multiply (SpMVM), an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. Our target systems are high-end multi-core architectures, and we use a message passing interface + open multiprocessing (MPI + OpenMP) hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of the distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topology. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.

  10. Health professional networks as a vector for improving healthcare quality and safety: a systematic review

    PubMed Central

    Ranmuthugala, Geetha; Plumb, Jennifer; Georgiou, Andrew; Westbrook, Johanna I; Braithwaite, Jeffrey

    2011-01-01

    Background While there is a considerable corpus of theoretical and empirical literature on networks within and outside of the health sector, multiple research questions are yet to be answered. Objective To conduct a systematic review of studies of professionals' network structures, identifying factors associated with network effectiveness and sustainability, particularly in relation to quality of care and patient safety. Methods The authors searched MEDLINE, CINAHL, EMBASE, Web of Science and Business Source Premier from January 1995 to December 2009. Results A majority of the 26 unique studies identified used social network analysis to examine structural relationships in networks: structural relationships within and between networks, health professionals and their social context, health collaboratives and partnerships, and knowledge sharing networks. Key aspects of networks explored were administrative and clinical exchanges, network performance, integration, stability and influences on the quality of healthcare. More recent studies show that cohesive and collaborative health professional networks can facilitate the coordination of care and contribute to improving quality and safety of care. Structural network vulnerabilities include cliques, professional and gender homophily, and over-reliance on central agencies or individuals. Conclusions Effective professional networks employ natural structural network features (eg, bridges, brokers, density, centrality, degrees of separation, social capital, trust) in producing collaboratively oriented healthcare. This requires efficient transmission of information and social and professional interaction within and across networks. For those using networks to improve care, recurring success factors are understanding your network's characteristics, attending to its functioning and investing time in facilitating its improvement. Despite this, there is no guarantee that time spent on networks will necessarily improve patient

  11. Predictable nonwandering localization of covariant Lyapunov vectors and cluster synchronization in scale-free networks of chaotic maps.

    PubMed

    Kuptsov, Pavel V; Kuptsova, Anna V

    2014-09-01

    Covariant Lyapunov vectors for scale-free networks of Hénon maps are highly localized. We reveal two mechanisms of localization, related to full and phase cluster synchronization of the network nodes. In both cases the localization nodes remain unaltered in the course of the dynamics, i.e., the localization is nonwandering. Moreover, it is predictable: the localization nodes are found to have specific dynamical and topological properties, and they can be identified without computing the covariant vectors. This is an example of explicit relations between the system topology, its phase-space dynamics, and the associated tangent-space dynamics of covariant Lyapunov vectors. PMID:25314498

  12. Static performance of five twin-engine nonaxisymmetric nozzles with vectoring and reversing capability

    NASA Technical Reports Server (NTRS)

    Capone, F. J.

    1978-01-01

    A transonic tunnel test was performed to determine the static performance of five twin-engine nonaxisymmetric nozzles and a baseline axisymmetric nozzle at three nozzle power settings. Static thrust-vectoring and thrust-reversing performance were also determined. Nonaxisymmetric-nozzle concepts included two-dimensional convergent-divergent nozzles, wedge nozzles, and a nozzle with a single external-expansion ramp. All nonaxisymmetric nozzles had essentially the same static performance as the axisymmetric nozzle. Effective thrust vectoring and reversing were also achieved.

  13. Selected Performance Measurements of the F-15 ACTIVE Axisymmetric Thrust-Vectoring Nozzle

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Sims, Robert L.

    1999-01-01

    Flight tests recently completed at the NASA Dryden Flight Research Center evaluated performance of a hydromechanically vectored axisymmetric nozzle onboard the F-15 ACTIVE. A flight-test technique whereby strain gages installed onto engine mounts provided for the direct measurement of thrust and vector forces has proven to be extremely valuable. Flow turning and thrust efficiency, as well as nozzle static pressure distributions were measured and analyzed. This report presents results from testing at an altitude of 30,000 ft and a speed of Mach 0.9. Flow turning and thrust efficiency were found to be significantly different than predicted, and moreover, varied substantially with power setting and pitch vector angle. Results of an in-flight comparison of the direct thrust measurement technique and an engine simulation fell within the expected uncertainty bands. Overall nozzle performance at this flight condition demonstrated the F100-PW-229 thrust-vectoring nozzles to be highly capable and efficient.

  14. Selected Performance Measurements of the F-15 Active Axisymmetric Thrust-vectoring Nozzle

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Sims, Robert L.

    1998-01-01

    Flight tests recently completed at the NASA Dryden Flight Research Center evaluated performance of a hydromechanically vectored axisymmetric nozzle onboard the F-15 ACTIVE. A flight-test technique whereby strain gages installed onto engine mounts provided for the direct measurement of thrust and vector forces has proven to be extremely valuable. Flow turning and thrust efficiency, as well as nozzle static pressure distributions were measured and analyzed. This report presents results from testing at an altitude of 30,000 ft and a speed of Mach 0.9. Flow turning and thrust efficiency were found to be significantly different than predicted, and moreover, varied substantially with power setting and pitch vector angle. Results of an in-flight comparison of the direct thrust measurement technique and an engine simulation fell within the expected uncertainty bands. Overall nozzle performance at this flight condition demonstrated the F100-PW-229 thrust-vectoring nozzles to be highly capable and efficient.

  15. Monthly river flow forecasting using artificial neural network and support vector regression models coupled with wavelet transform

    NASA Astrophysics Data System (ADS)

    Kalteh, Aman Mohammad

    2013-04-01

    Reliable and accurate forecasts of river flow are needed in many water resources planning, design, development, operation and maintenance activities. In this study, the relative accuracy of artificial neural network (ANN) and support vector regression (SVR) models coupled with wavelet transform in monthly river flow forecasting is investigated and compared to that of regular ANN and SVR models, respectively. The relative performance of the regular ANN and SVR models is also compared to each other. For this, monthly river flow data from the Kharjegil and Ponel stations in Northern Iran are used. The comparison of the results reveals that both ANN and SVR models coupled with wavelet transform are able to provide more accurate forecasting results than the regular ANN and SVR models. However, it is found that SVR models coupled with wavelet transform provide better forecasting results than ANN models coupled with wavelet transform. The results also indicate that regular SVR models perform slightly better than regular ANN models.
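
    A minimal sketch of one common wavelet-SVR arrangement follows: discrete wavelet coefficients of a sliding window of past flows serve as features for a one-step support vector regression forecast. The wavelet family (db4), window length, and SVR settings are illustrative assumptions, and the series is synthetic rather than the Kharjegil or Ponel data.

      # Hedged sketch: wavelet features + SVR for one-step-ahead flow forecasting.
      import numpy as np
      import pywt
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      t = np.arange(360)
      flow = 50 + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, t.size)   # toy monthly flows

      window = 32
      def features(segment):
          coeffs = pywt.wavedec(segment, "db4", level=2)   # approximation + detail coefficients
          return np.concatenate(coeffs)

      X = np.array([features(flow[i:i + window]) for i in range(len(flow) - window)])
      y = flow[window:]

      split = 280                                          # chronological train/test split
      model = SVR(kernel="rbf", C=100.0, epsilon=1.0).fit(X[:split], y[:split])
      pred = model.predict(X[split:])
      print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))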

  16. Static internal performance of an axisymmetric nozzle with multiaxis thrust-vectoring capability

    NASA Technical Reports Server (NTRS)

    Carson, George T., Jr.; Capone, Francis J.

    1991-01-01

    An investigation was conducted in the static test facility of the Langley 16-Foot Transonic Tunnel in order to determine the internal performance characteristics of a multiaxis thrust-vectoring axisymmetric nozzle. Thrust vectoring for this nozzle was achieved by deflection of only the divergent section of the nozzle. The effects of nozzle power setting and divergent flap length were studied at nozzle deflection angles of 0° to 30° and nozzle pressure ratios up to 8.0.

  17. Geometrically nonlinear design sensitivity analysis on parallel-vector high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, Majdi A.; Nguyen, Duc T.

    1993-01-01

    Parallel-vector solution strategies for generation and assembly of element matrices, solution of the resulted system of linear equations, calculations of the unbalanced loads, displacements, stresses, and design sensitivity analysis (DSA) are all incorporated into the Newton Raphson (NR) procedure for nonlinear finite element analysis and DSA. Numerical results are included to show the performance of the proposed method for structural analysis and DSA in a parallel-vector computer environment.

  18. Performance Analysis of IIUM Wireless Campus Network

    NASA Astrophysics Data System (ADS)

    Abd Latif, Suhaimi; Masud, Mosharrof H.; Anwar, Farhat

    2013-12-01

    International Islamic University Malaysia (IIUM) is one of the leading universities in the world in terms of quality of education, which has been achieved in part by providing numerous facilities, including wireless services, to every enrolled student. The quality of this wireless service is controlled and monitored by the Information Technology Division (ITD), an ISO-standardized organization under the university. This paper aims to investigate the constraints of the IIUM wireless campus network. It evaluates the performance of the IIUM wireless campus network in terms of delay, throughput and jitter. The QualNet 5.2 simulator tool has been employed to measure these aspects of the network's performance. The observations from the simulation results could help ITD to further improve its wireless services.

  19. Genetic algorithm-support vector regression for high reliability SHM system based on FBG sensor network

    NASA Astrophysics Data System (ADS)

    Zhang, XiaoLi; Liang, DaKai; Zeng, Jie; Asundi, Anand

    2012-02-01

    Structural health monitoring (SHM) based on fiber Bragg grating (FBG) sensor networks has attracted considerable attention in recent years. However, the FBG sensor network is typically embedded in or glued to the structure in simple series or parallel arrangements. In this case, if an optical fiber or a fiber node fails, the sensors beyond the failure point can no longer be interrogated. Therefore, to improve the survivability of the FBG-based sensor system in SHM, it is necessary to build a highly reliable FBG sensor network for SHM engineering applications. In this study, a model-reconstruction soft-computing recognition algorithm based on genetic algorithm-support vector regression (GA-SVR) is proposed to achieve this reliability. Furthermore, an 8-point FBG sensor system was tested in an aircraft wing box. Predicting the position of external loading damage is an important task for an SHM system; as an example, different failure modes are selected to demonstrate the survivability of the FBG-based sensor network. The results are compared with a non-reconstructed model based on GA-SVR in each failure mode. Results show that the proposed model-reconstruction algorithm based on GA-SVR maintains its prediction precision when some sensors in the SHM system fail; thus a highly reliable sensor network for the SHM system is achieved without introducing extra components or noise.
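
    The GA-SVR pattern described here can be sketched compactly: a small real-coded genetic algorithm tunes the SVR hyperparameters (log10 C, log10 gamma) against cross-validated error. The data, genome encoding, and GA settings below are illustrative assumptions; the paper's model-reconstruction step for failed FBG sensors is not reproduced.

      # Hedged sketch: real-coded GA tuning SVR hyperparameters on synthetic data.
      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, (200, 8))                                     # stand-in for FBG readings
      y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.05, 200)   # stand-in target quantity

      def fitness(genome):
          C, gamma = 10.0 ** genome[0], 10.0 ** genome[1]
          model = SVR(kernel="rbf", C=C, gamma=gamma)
          return cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

      pop = rng.uniform([-1.0, -3.0], [3.0, 1.0], size=(20, 2))            # genomes: [log10 C, log10 gamma]
      for gen in range(15):
          scores = np.array([fitness(g) for g in pop])
          parents = pop[np.argsort(scores)[::-1][:10]]                     # keep the 10 fittest
          children = []
          for _ in range(10):
              a, b = parents[rng.integers(10)], parents[rng.integers(10)]
              w = rng.uniform(size=2)
              children.append(w * a + (1 - w) * b + rng.normal(0, 0.2, 2))   # blend crossover + mutation
          pop = np.vstack([parents, children])

      best = pop[np.argmax([fitness(g) for g in pop])]
      print("best log10(C), log10(gamma):", best)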

  20. On-wafer vector network analyzer measurements in the 220-325 GHz frequency band

    NASA Technical Reports Server (NTRS)

    Fung, King Man Andy; Dawson, D.; Samoska, L.; Lee, K.; Oleson, C.; Boll, G.

    2006-01-01

    We report on a full two-port on-wafer vector network analyzer test set for the 220-325 GHz (WR3) frequency band. The test set utilizes Oleson Microwave Labs frequency extenders with the Agilent 8510C network analyzer. Two-port on-wafer measurements are made with GGB Industries coplanar waveguide (CPW) probes. With this test set we have measured the WR3-band S-parameters of amplifiers on-wafer, and the characteristics of the CPW wafer probes. Results for a three-stage InP HEMT amplifier show 10 dB gain at 235 GHz [1], and those for a single-stage amplifier, 2.9 dB gain at 231 GHz. The approximate upper limit of loss per CPW probe ranges from 3.0 to 4.8 dB across the WR3 frequency band.

  1. Static performance investigation of a skewed-throat multiaxis thrust-vectoring nozzle concept

    NASA Technical Reports Server (NTRS)

    Wing, David J.

    1994-01-01

    The static performance of a jet exhaust nozzle which achieves multiaxis thrust vectoring by physically skewing the geometric throat has been characterized in the static test facility of the 16-Foot Transonic Tunnel at NASA Langley Research Center. The nozzle has an asymmetric internal geometry defined by four surfaces: a convergent-divergent upper surface with its ridge perpendicular to the nozzle centerline, a convergent-divergent lower surface with its ridge skewed relative to the nozzle centerline, an outwardly deflected sidewall, and a straight sidewall. The primary goal of the concept is to provide efficient yaw thrust vectoring by forcing the sonic plane (nozzle throat) to form at a yaw angle defined by the skewed ridge of the lower surface contour. A secondary goal is to provide multiaxis thrust vectoring by combining the skewed-throat yaw-vectoring concept with upper and lower pitch flap deflections. The geometric parameters varied in this investigation included lower surface ridge skew angle, nozzle expansion ratio (divergence angle), aspect ratio, pitch flap deflection angle, and sidewall deflection angle. Nozzle pressure ratio was varied from 2 to a high of 11.5 for some configurations. The results of the investigation indicate that efficient, substantial multiaxis thrust vectoring was achieved by the skewed-throat nozzle concept. However, certain control surface deflections destabilized the internal flow field, which resulted in substantial shifts in the position and orientation of the sonic plane and had an adverse effect on thrust-vectoring and weight flow characteristics. By increasing the expansion ratio, the location of the sonic plane was stabilized. The asymmetric design resulted in interdependent pitch and yaw thrust vectoring as well as nonzero thrust-vector angles with undeflected control surfaces. By skewing the ridges of both the upper and lower surface contours, the interdependency between pitch and yaw thrust vectoring may be eliminated

  2. Initial Flight Test Evaluation of the F-15 ACTIVE Axisymmetric Vectoring Nozzle Performance

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Hathaway, Ross; Ferguson, Michael D.

    1998-01-01

    A full envelope database of thrust-vectoring axisymmetric nozzle performance for the Pratt & Whitney Pitch/Yaw Balance Beam Nozzle (P/YBBN) is being developed using the F-15 Advanced Control Technology for Integrated Vehicles (ACTIVE) aircraft. At this time, flight research has been completed for steady-state pitch vector angles up to 20° at an altitude of 30,000 ft from low power settings to maximum afterburner power. The nozzle performance database includes vector forces, internal nozzle pressures, and temperatures, all of which can be used for regression analysis modeling. The database was used to substantiate a set of nozzle performance data from wind tunnel testing and computational fluid dynamic analyses. Findings from initial flight research at Mach 0.9 and 1.2 are presented in this paper. The results show that vector efficiency is strongly influenced by power setting. A significant discrepancy in nozzle performance has been discovered between predicted and measured results during vectoring.

  3. Analysis of a general SIS model with infective vectors on the complex networks

    NASA Astrophysics Data System (ADS)

    Juang, Jonq; Liang, Yu-Hao

    2015-11-01

    A general SIS model with infective vectors on complex networks is studied in this paper. In particular, the model considers the linear combination of three possible routes of disease propagation between infected and susceptible individuals as well as two possible transmission types which describe how the susceptible vectors attack the infected individuals. A new technique based on the basic reproduction matrix is introduced to obtain the following results. First, necessary and sufficient conditions are obtained for the global stability of the model through a unified approach. As a result, we are able to produce the exact basic reproduction number and the precise epidemic thresholds with respect to three spreading strengths, the curing strength or the immunization strength all at once. Second, the monotonicity of the basic reproduction number and the above mentioned epidemic thresholds with respect to all other parameters can be rigorously characterized. Finally, we are able to compare the effectiveness of various immunization strategies under the assumption that the number of persons getting vaccinated is the same for all strategies. In particular, we prove that in the scale-free networks, both targeted and acquaintance immunizations are more effective than uniform and active immunizations and that active immunization is the least effective strategy among those four. We are also able to determine how the vaccine should be used at minimum to control the outbreak of the disease.

  4. Online learning vector quantization: a harmonic competition approach based on conservation network.

    PubMed

    Wang, J H; Sun, W D

    1999-01-01

    This paper presents a self-creating neural network in which a conservation principle is incorporated with the competitive learning algorithm to harmonize equi-probable and equi-distortion criteria. Each node is associated with a measure of vitality which is updated after each input presentation. The total amount of vitality in the network at any time is 1, hence the name conservation. Competitive learning based on a vitality conservation principle is near-optimum, in the sense that problem of trapping in a local minimum is alleviated by adding perturbations to the learning rate during node generation processes. Combined with a procedure that redistributes the learning rate variables after generation and removal of nodes, the competitive conservation strategy provides a novel approach to the problem of harmonizing equi-error and equi-probable criteria. The training process is smooth and incremental, it not only achieves the biologically plausible learning property, but also facilitates systematic derivations for training parameters. Comparison studies on learning vector quantization involving stationary and nonstationary, structured and nonstructured inputs demonstrate that the proposed network outperforms other competitive networks in terms of quantization error, learning speed, and codeword search efficiency. PMID:18252343

  5. Performance of TCP variants over LTE network

    NASA Astrophysics Data System (ADS)

    Nor, Shahrudin Awang; Maulana, Ade Novia

    2016-08-01

    One implementation of a wireless network is based on the mobile broadband technology Long Term Evolution (LTE). LTE offers a variety of advantages, especially in terms of access speed, capacity, architectural simplicity and ease of implementation, as well as the breadth of choice of the type of user equipment (UE) that can establish access. The majority of Internet connections in the world use TCP (Transmission Control Protocol) because of TCP's reliability in transmitting packets across the network. TCP's reliability lies in its ability to control congestion. TCP was originally designed for wired media, but LTE connects through a wireless medium that is less stable than wired media. A wide variety of TCP variants have been developed to achieve better performance than their predecessors. In this study, we simulate the performance of TCP NewReno and TCP Vegas using network simulator version 2 (ns2). TCP performance is analyzed in terms of throughput, packet loss and end-to-end delay. In comparing the performance of TCP NewReno and TCP Vegas, the simulation results show that the throughput of TCP NewReno is slightly higher than that of TCP Vegas, while TCP Vegas gives significantly better end-to-end delay and packet loss. Throughput, packet loss and end-to-end delay are analyzed to evaluate the simulation.

  6. Performance Evaluation Modeling of Network Sensors

    NASA Technical Reports Server (NTRS)

    Clare, Loren P.; Jennings, Esther H.; Gao, Jay L.

    2003-01-01

    Substantial benefits are promised by operating many spatially separated sensors collectively. Such systems are envisioned to consist of sensor nodes that are connected by a communications network. A simulation tool is being developed to evaluate the performance of networked sensor systems, incorporating such metrics as target detection probabilities, false alarm rates, and classification confusion probabilities. The tool will be used to determine configuration impacts associated with such aspects as spatial laydown, mixture of different types of sensors (acoustic, seismic, imaging, magnetic, RF, etc.), and fusion architecture. The QualNet discrete-event simulation environment serves as the underlying basis for model development and execution. This platform is recognized for its capabilities in efficiently simulating networking among mobile entities that communicate via wireless media. We are extending QualNet's communications modeling constructs to capture the sensing aspects of multi-target sensing (analogous to multiple access communications), unimodal multi-sensing (broadcast), and multi-modal sensing (multiple channels and correlated transmissions). Methods are also being developed for modeling the sensor signal sources (transmitters), signal propagation through the media, and sensors (receivers) that are consistent with the discrete event paradigm needed for performance determination of sensor network systems. This work is supported under the Microsensors Technical Area of the Army Research Laboratory (ARL) Advanced Sensors Collaborative Technology Alliance.

  7. USING MULTITAIL NETWORKS IN HIGH PERFORMANCE CLUSTERS

    SciTech Connect

    S. COLL; E. FRACHTEMBERG; F. PETRINI; A. HOISIE; L. GURVITS

    2001-03-01

    Using multiple independent networks (also known as rails) is an emerging technique to overcome bandwidth limitations and enhance fault-tolerance of current high-performance clusters. We present and analyze various avenues for exploiting multiple rails. Different rail access policies are presented and compared, including static and dynamic allocation schemes. An analytical lower bound on the number of networks required for static rail allocation is shown. We also present an extensive experimental comparison of the behavior of various allocation schemes in terms of bandwidth and latency. Striping messages over multiple rails can substantially reduce network latency, depending on average message size, network load and allocation scheme. The methods compared include a static rail allocation, a round-robin rail allocation, a dynamic allocation based on local knowledge, and a rail allocation that reserves both end-points of a message before sending it. The latter is shown to perform better than other methods at higher loads: up to 49% better than local-knowledge allocation and 37% better than the round-robin allocation. This allocation scheme also shows lower latency and saturates at higher loads (for messages large enough). Most importantly, this proposed allocation scheme scales well with the number of rails and message sizes.

  8. The design of a broadband ocean acoustic laboratory: detailed examination of vector sensor performance

    NASA Astrophysics Data System (ADS)

    Carpenter, Robert; Silvia, Manuel; Cray, Benjamin A.

    2006-05-01

    Acoustic vector sensors measure the acoustic pressure and three orthogonal components of the acoustic particle acceleration at a single point in space. These sensors, and arrays composed of them, have a number of advantages over traditional hydrophone arrays. This includes full azimuth/elevation angle estimation, even with a single sensor. It is of interest to see how in-water vector sensor performance matches theoretical bounds. A series of experiments designed to characterize the performance of vector sensors operating in shallow water was conducted to assess sensor mounting techniques, and evaluate the sensor's ability to measure bearing and elevation angles to a source as a function of waveform characteristics and signal-to-noise ratio.

  9. The Helioseismic and Magnetic Imager (HMI) Vector Magnetic Field Pipeline: Overview and Performance

    NASA Astrophysics Data System (ADS)

    Hoeksema, J. Todd; Liu, Yang; Hayashi, Keiji; Sun, Xudong; Schou, Jesper; Couvidat, Sebastien; Norton, Aimee; Bobra, Monica; Centeno, Rebecca; Leka, K. D.; Barnes, Graham; Turmon, Michael

    2014-09-01

    The Helioseismic and Magnetic Imager (HMI) began near-continuous full-disk solar measurements on 1 May 2010 from the Solar Dynamics Observatory (SDO). An automated processing pipeline keeps pace with observations to produce observable quantities, including the photospheric vector magnetic field, from sequences of filtergrams. The basic vector-field frame list cadence is 135 seconds, but to reduce noise the filtergrams are combined to derive data products every 720 seconds. The primary 720 s observables were released in mid-2010, including Stokes polarization parameters measured at six wavelengths, as well as intensity, Doppler velocity, and the line-of-sight magnetic field. More advanced products, including the full vector magnetic field, are now available. Automatically identified HMI Active Region Patches (HARPs) track the location and shape of magnetic regions throughout their lifetime. The vector field is computed using the Very Fast Inversion of the Stokes Vector (VFISV) code optimized for the HMI pipeline; the remaining 180° azimuth ambiguity is resolved with the Minimum Energy (ME0) code. The Milne-Eddington inversion is performed on all full-disk HMI observations. The disambiguation, until recently run only on HARP regions, is now implemented for the full disk. Vector and scalar quantities in the patches are used to derive active region indices potentially useful for forecasting; the data maps and indices are collected in the SHARP data series, hmi.sharp_720s. Definitive SHARP processing is completed only after the region rotates off the visible disk; quick-look products are produced in near real time. Patches are provided in both CCD and heliographic coordinates. HMI provides continuous coverage of the vector field, but has modest spatial, spectral, and temporal resolution. Coupled with limitations of the analysis and interpretation techniques, effects of the orbital velocity, and instrument performance, the resulting measurements have a certain dynamic

  10. Scheduling and performance limits of networks with constantly changing topology

    SciTech Connect

    Tassiulas, L.

    1997-01-01

    A communication network with time-varying topology is considered. The network consists of M receivers and N transmitters that may, in principle, access every receiver. An underlying network state process with Markovian statistics is considered that reflects the physical characteristics of the network affecting the link service capacity. The transmissions are scheduled dynamically, based on information about the link capacities and the backlog in the network. The region of achievable throughputs is characterized. A transmission scheduling policy is proposed that utilizes current topology state information and achieves all throughput vectors achievable by any anticipative policy. The changing topology model applies to networks of Low Earth Orbit (LEO) satellites, meteor-burst communication networks and networks with mobile users. © 1997 American Institute of Physics.

  11. Inference of nonlinear gene regulatory networks through optimized ensemble of support vector regression and dynamic Bayesian networks.

    PubMed

    Akutekwe, Arinze; Seker, Huseyin

    2015-08-01

    Comprehensive understanding of gene regulatory networks (GRNs) is a major challenge in systems biology. Most methods for modeling and inferring the dynamics of GRNs, such as those based on state space models, vector autoregressive models and the G1DBN algorithm, assume linear dependencies among genes. However, this strong assumption does not make for a true representation of time-course relationships across the genes, which are inherently nonlinear. Nonlinear modeling methods such as the S-systems and causal structure identification (CSI) have been proposed, but are known to be statistically inefficient and analytically intractable in high dimensions. To overcome these limitations, we propose an optimized ensemble approach based on support vector regression (SVR) and dynamic Bayesian networks (DBNs). The method, called SVR-DBN, uses nonlinear kernels of the SVR to infer the temporal relationships among genes within the DBN framework. The two-stage ensemble is further improved by SVR parameter optimization using Particle Swarm Optimization. Results on eight in silico-generated datasets and two real-world datasets of Drosophila melanogaster and Escherichia coli show that our method outperformed the G1DBN algorithm by a total average accuracy of 12%. We further applied our method to model the time-course relationships of ovarian carcinoma. From our results, four hub genes were discovered. Stratified analysis further showed that the expression levels of Prostate differentiation factor and BTG family member 2 genes were significantly increased by the cisplatin and oxaliplatin platinum drugs, while expression levels of Polo-like kinase and Cyclin B1 genes were both decreased by the platinum drugs. These hub genes might be potential biomarkers for ovarian carcinoma. PMID:26738192

  12. Functional performance requirements for seismic network upgrade

    SciTech Connect

    Lee, R.C.

    1991-08-18

    The SRL seismic network, established in 1976, was developed to monitor site and regional seismic activity that may have any potential to impact the safety or reduce containment capability of existing and planned structures and systems at the SRS, report seismic activity that may be relevant to emergency preparedness, including rapid assessments of earthquake location and magnitude, and estimates of potential on-site and off-site damage to facilities and lifelines for mitigation measures. All of these tasks require SRL seismologists to provide rapid analysis of large amounts of seismic data. The current seismic network upgrade, the subject of this Functional Performance Requirements Document, is necessary to improve system reliability and resolution. The upgrade provides equipment for the analysis of the network seismic data and replacement of old, outdated equipment. The digital network upgrade is configured for field station and laboratory digital processing systems. The upgrade consists of the purchase and installation of seismic sensors, data telemetry digital upgrades, a dedicated Seismic Data Processing (SDP) system (already in the procurement stage), and a Seismic Signal Analysis (SSA) system. The field station and telephone telemetry upgrades include equipment necessary for three remote station upgrades, including seismic amplifiers, voltage controlled oscillators, pulse calibrators, weather protection (including lightning protection) systems, seismometers, and miscellaneous other parts. The central receiving and recording station upgrades will include discriminators, helicopter amplifier, omega timing system, strong motion instruments, wide-band velocity sensors, and other miscellaneous equipment.

  13. Static internal performance of single expansion-ramp nozzles with thrust vectoring and reversing

    NASA Technical Reports Server (NTRS)

    Re, R. J.; Berrier, B. L.

    1982-01-01

    The effects of geometric design parameters on the internal performance of nonaxisymmetric single expansion-ramp nozzles were investigated at nozzle pressure ratios up to approximately 10. Forward-flight (cruise), vectored-thrust, and reversed-thrust nozzle operating modes were investigated.

  14. Modeling and Performance Simulation of the Mass Storage Network Environment

    NASA Technical Reports Server (NTRS)

    Kim, Chan M.; Sang, Janche

    2000-01-01

    This paper describes the application of modeling and simulation in evaluating and predicting the performance of the mass storage network environment. Network traffic is generated to mimic the realistic pattern of file transfer, electronic mail, and web browsing. The behavior and performance of the mass storage network and a typical client-server Local Area Network (LAN) are investigated by modeling and simulation. Performance characteristics in throughput and delay demonstrate the important role of modeling and simulation in network engineering and capacity planning.

  15. Parallel-vector unsymmetric Eigen-Solver on high performance computers

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Jiangning, Qin

    1993-01-01

    The popular QR algorithm for finding all eigenvalues of an unsymmetric matrix is reviewed. Among the basic components of the QR algorithm, it was concluded from this study that the reduction of an unsymmetric matrix to Hessenberg form (before applying the QR algorithm itself) can be done effectively by exploiting the vector speed and multiple processors offered by modern high-performance computers. Numerical examples of several test cases indicate that the proposed parallel-vector algorithm for converting a given unsymmetric matrix to Hessenberg form offers computational advantages over the existing algorithm. The time savings obtained by the proposed methods increase as the problem size increases.
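
    As a point of reference for the two-stage process described above, the small Python sketch below (a serial illustration only, not the parallel-vector algorithm from the paper) uses SciPy to reduce a random unsymmetric matrix to Hessenberg form and then computes all eigenvalues of the reduced matrix.

      # Serial sketch of the two-stage unsymmetric eigensolver pipeline:
      # (1) reduce A to upper Hessenberg form, (2) apply QR iteration to H.
      import numpy as np
      from scipy.linalg import hessenberg, eigvals

      rng = np.random.default_rng(0)
      A = rng.standard_normal((200, 200))        # general unsymmetric matrix

      H, Q = hessenberg(A, calc_q=True)          # stage 1: A = Q H Q^T
      eigs_H = eigvals(H)                        # stage 2: QR iteration on H

      # The similarity transform preserves eigenvalues, so the spectra agree.
      eigs_A = eigvals(A)
      print(np.allclose(sorted(eigs_H.real), sorted(eigs_A.real), atol=1e-6))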

  16. Improving the performance of tensor matrix vector multiplication in quantum chemistry codes.

    SciTech Connect

    Gropp, W. D.; Kaushik, D. K.; Minkoff, M.; Smith, B. F.

    2008-05-08

    Cumulative reaction probability (CRP) calculations provide a viable computational approach to estimate reaction rate coefficients. However, in order to give meaningful results these calculations should be done in many dimensions (ten to fifteen). This makes CRP codes memory intensive. For this reason, these codes use iterative methods to solve the linear systems, where a good fraction of the execution time is spent on matrix-vector multiplication. In this paper, we discuss the tensor product form of applying the system operator on a vector. This approach shows much better performance and provides huge savings in memory as compared to the explicit sparse representation of the system matrix.
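
    The memory saving comes from never assembling the full operator. Below is a minimal NumPy sketch of the idea for a hypothetical operator given as a Kronecker (tensor) product of small per-dimension matrices; the actual CRP system operator has more structure, so this illustrates the tensor-product matrix-vector multiply only.

      # Sketch: applying a Kronecker-product operator A1 (x) A2 (x) A3 to a
      # vector without forming the full (n1*n2*n3)^2 matrix.
      import numpy as np

      def kron_matvec(mats, x):
          """Compute (A1 kron A2 kron ... kron Ad) @ x via mode-wise contractions."""
          dims = [A.shape[0] for A in mats]
          t = x.reshape(dims)
          for k, A in enumerate(mats):
              t = np.tensordot(A, t, axes=([1], [k]))   # contract mode k
              t = np.moveaxis(t, 0, k)                  # restore axis order
          return t.reshape(-1)

      rng = np.random.default_rng(1)
      mats = [rng.standard_normal((n, n)) for n in (4, 5, 6)]
      x = rng.standard_normal(4 * 5 * 6)

      y_fast = kron_matvec(mats, x)
      y_full = np.kron(np.kron(mats[0], mats[1]), mats[2]) @ x   # dense reference
      print(np.allclose(y_fast, y_full))                          # True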

  17. Condition classification of small reciprocating compressor for refrigerators using artificial neural networks and support vector machines

    NASA Astrophysics Data System (ADS)

    Yang, Bo-Suk; Hwang, Won-Woo; Kim, Dong-Jo; Chit Tan, Andy

    2005-03-01

    The need to increase machine reliability and decrease production loss due to faulty products in highly automated lines requires accurate and reliable fault classification techniques. Wavelet transform and statistical methods are used to extract salient features from raw noise and vibration signals. The wavelet transform decomposes the raw time-waveform signals into respective parts in the time and frequency domains. With the wavelet transform, prominent features can be obtained more easily than from time-waveform analysis. This paper focuses on the development of an advanced signal classifier for small reciprocating refrigerator compressors using noise and vibration signals. Three classifiers, the self-organising feature map, learning vector quantisation and the support vector machine (SVM), are applied in training and testing for feature extraction, and the classification accuracies of the techniques are compared to determine the optimum fault classifier. The classification technique selected for detecting faulty reciprocating refrigerator compressors involves artificial neural networks and SVMs. The results confirm that the classification technique can differentiate faulty compressors from healthy ones with high flexibility and reliability.

  18. Artificial neural network simulation of battery performance

    SciTech Connect

    O'Gorman, C.C.; Ingersoll, D.; Jungst, R.G.; Paez, T.L.

    1998-12-31

    Although they appear deceptively simple, batteries embody a complex set of interacting physical and chemical processes. While the discrete engineering characteristics of a battery, such as the physical dimensions of the individual components, are relatively straightforward to define explicitly, their myriad chemical and physical processes, including interactions, are much more difficult to accurately represent. Within this category are the diffusive and solubility characteristics of individual species, reaction kinetics and mechanisms of primary chemical species as well as intermediates, and growth and morphology characteristics of reaction products as influenced by environmental and operational use profiles. For this reason, development of analytical models that can consistently predict the performance of a battery has only been partially successful, even though significant resources have been applied to this problem. As an alternative approach, the authors have begun development of a non-phenomenological model for battery systems based on artificial neural networks. Both recurrent and non-recurrent forms of these networks have been successfully used to develop accurate representations of battery behavior. The connectionist normalized linear spline (CNLS) network has been implemented with a self-organizing layer to model a battery system with the generalized radial basis function net. Concurrently, efforts are under way to use the feedforward back propagation network to map the "state" of a battery system. Because of the complexity of battery systems, accurate representation of the input and output parameters has proven to be very important. This paper describes these initial feasibility studies as well as the current models and makes comparisons between predicted and actual performance.

  19. Measured performances on vectorization and multitasking with a Monte Carlo code for neutron transport problems

    NASA Astrophysics Data System (ADS)

    Chauvet, Yves

    1985-07-01

    This paper summarizes two improvements to a real production code using vectorization and multitasking techniques. After a short description of the Monte Carlo algorithms employed in our neutron transport problems, we briefly describe the work we have done in order to obtain a vector code. Vectorization principles are presented and measured performances on the CRAY 1S, CYBER 205 and CRAY X-MP are compared in terms of vector length. The second part of this work is an adaptation to multitasking on the CRAY X-MP using exclusively the standard multitasking tools available with FORTRAN under the COS 1.13 system. Two examples are presented. The goal of the first is to measure the overhead inherent in multitasking when tasks become too small and to define a granularity threshold, that is to say, a minimum size for a task. With the second example we propose a method that is very X-MP oriented in order to obtain the best speedup factor on such a computer. In conclusion we prove that Monte Carlo algorithms are very well suited to future vector and parallel computers.
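
    As a toy illustration of the vectorization principle (not the neutron transport code itself), the sketch below contrasts a scalar per-history loop with a batched NumPy version of the same Monte Carlo estimate; hardware vector units exploit exactly this kind of batch structure.

      # Toy illustration of Monte Carlo vectorization: estimate the fraction of
      # particles whose free path exceeds a slab thickness, first one history at
      # a time, then as a whole batch of histories at once.
      import numpy as np

      def survival_scalar(n, sigma=1.0, slab=2.0, seed=0):
          rng = np.random.default_rng(seed)
          hits = 0
          for _ in range(n):                           # one history per iteration
              path = rng.exponential(1.0 / sigma)
              hits += path > slab
          return hits / n

      def survival_vectorized(n, sigma=1.0, slab=2.0, seed=0):
          rng = np.random.default_rng(seed)
          paths = rng.exponential(1.0 / sigma, size=n) # whole batch at once
          return np.mean(paths > slab)

      print(survival_scalar(100_000), survival_vectorized(100_000))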

  20. Static Thrust and Vectoring Performance of a Spherical Convergent Flap Nozzle with a Nonrectangular Divergent Duct

    NASA Technical Reports Server (NTRS)

    Wing, David J.

    1998-01-01

    The static internal performance of a multiaxis-thrust-vectoring, spherical convergent flap (SCF) nozzle with a non-rectangular divergent duct was obtained in the model preparation area of the Langley 16-Foot Transonic Tunnel. Duct cross sections of hexagonal and bowtie shapes were tested. Additional geometric parameters included throat area (power setting), pitch flap deflection angle, and yaw gimbal angle. Nozzle pressure ratio was varied from 2 to 12 for dry power configurations and from 2 to 6 for afterburning power configurations. Approximately a 1-percent loss in thrust efficiency from SCF nozzles with a rectangular divergent duct was incurred as a result of internal oblique shocks in the flow field. The internal oblique shocks were the result of cross flow generated by the vee-shaped geometric throat. The hexagonal and bowtie nozzles had mirror-imaged flow fields and therefore similar thrust performance. Thrust vectoring was not hampered by the three-dimensional internal geometry of the nozzles. Flow visualization indicates pitch thrust-vector angles larger than 10 deg may be achievable with minimal adverse effect on or a possible gain in resultant thrust efficiency as compared with the performance at a pitch thrust-vector angle of 10 deg.

  1. Scientific Application Performance on Leading Scalar and Vector Supercomputing Platforms

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Ethier, Stephane

    2007-01-01

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on conventional supercomputers has become a major concern in high performance computing, requiring significantly larger systems and application scalability than implied by peak performance in order to achieve desired performance. The latest generation of custom-built parallel vector systems have the potential to address this issue for numerical algorithms with sufficient regularity in their computational structure. In this work we explore applications drawn from four areas: magnetic fusion (GTC), plasma physics (LBMHD3D), astrophysics (Cactus), and material science (PARATEC). We compare performance of the vector-based Cray X1, X1E, Earth Simulator, NEC SX-8, with performance of three leading commodity-based superscalar platforms utilizing the IBM Power3, Intel Itanium2, and AMD Opteron processors. Our work makes several significant contributions: a new data-decomposition scheme for GTC that (for the first time) enables a breakthrough of the Teraflop barrier; the introduction of a new three-dimensional Lattice Boltzmann magneto-hydrodynamic implementation used to study the onset evolution of plasma turbulence that achieves over 26Tflop/s on 4800 ES processors; the highest per processor performance (by far) achieved by the full-production version of the Cactus ADM-BSSN; and the largest PARATEC cell size atomistic simulation to date. Overall, results show that the vector architectures attain unprecedented aggregate performance across our application suite, demonstrating the tremendous potential of modern parallel vector systems.

  2. Advancements and performance of iterative methods in industrial applications codes on CRAY parallel/vector supercomputers

    SciTech Connect

    Poole, G.; Heroux, M.

    1994-12-31

    This paper will focus on recent work in two widely used industrial applications codes with iterative methods. The ANSYS program, a general purpose finite element code widely used in structural analysis applications, has now added an iterative solver option. Some results are given from real applications comparing performance with the traditional parallel/vector frontal solver used in ANSYS. Discussion of the applicability of iterative solvers as a general purpose solver will include the topics of robustness, as well as memory requirements and CPU performance. The FIDAP program is a widely used CFD code which uses iterative solvers routinely. A brief description of preconditioners used and some performance enhancements for CRAY parallel/vector systems is given. The solution of large-scale applications in structures and CFD includes examples from industry problems solved on CRAY systems.

  3. Curriculum Assessment Using Artificial Neural Network and Support Vector Machine Modeling Approaches: A Case Study. IR Applications. Volume 29

    ERIC Educational Resources Information Center

    Chen, Chau-Kuang

    2010-01-01

    Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…

  4. Evaluation models for soil nutrient based on support vector machine and artificial neural networks.

    PubMed

    Li, Hao; Leng, Weijia; Zhou, Yibing; Chen, Fudi; Xiu, Zhilong; Yang, Dazuo

    2014-01-01

    Soil nutrient is an important aspect that contributes to soil fertility and environmental effects. Traditional approaches to evaluating soil nutrient are quite hard to operate, which creates great difficulties in practical applications. In this paper, we present a series of comprehensive evaluation models for soil nutrient using support vector machine (SVM), multiple linear regression (MLR), and artificial neural networks (ANNs), respectively. We took the content of organic matter, total nitrogen, alkali-hydrolysable nitrogen, rapidly available phosphorus, and rapidly available potassium as independent variables, while the evaluation level of soil nutrient content was taken as the dependent variable. Results show that the average prediction accuracies of the SVM models are 77.87% and 83.00%, respectively, while the general regression neural network (GRNN) model's average prediction accuracy is 92.86%, indicating that SVM and GRNN models can be used effectively to assess the levels of soil nutrient with suitable dependent variables. In practical applications, both SVM and GRNN models can be used for determining the levels of soil nutrient. PMID:25548781

  5. Performance of wireless sensor networks under random node failures

    SciTech Connect

    Bradonjic, Milan; Hagberg, Aric; Feng, Pan

    2011-01-28

    Networks are essential to the function of a modern society and the consequences of damage to a network can be large. Assessing the performance of a damaged network is an important step in network recovery and network design. Connectivity, distance between nodes, and alternative routes are some of the key indicators of network performance. In this paper, a random geometric graph (RGG) is used with two types of node failure, uniform failure and localized failure. Since network performance is multi-faceted and assessment can be time constrained, we introduce four measures, which can be computed in polynomial time, to estimate the performance of a damaged RGG. Simulation experiments are conducted to investigate the deterioration of networks over a period of time. With the empirical results, the performance measures are analyzed and compared to provide an understanding of different failure scenarios in an RGG.
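
    A small sketch of this experimental setup, assuming the NetworkX library: build a random geometric graph, apply either uniform random failures or localized failures (the nodes nearest a random point), and track a simple performance indicator such as the relative size of the largest connected component. The four polynomial-time measures introduced in the paper are not reproduced here.

      # Sketch: random geometric graph under uniform vs. localized node failures,
      # tracking the relative size of the largest connected component.
      import random
      import networkx as nx

      def largest_cc_fraction(G):
          if G.number_of_nodes() == 0:
              return 0.0
          return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

      G = nx.random_geometric_graph(300, radius=0.12, seed=1)
      pos = nx.get_node_attributes(G, "pos")

      # Uniform failure: remove k nodes chosen at random.
      uniform = G.copy()
      uniform.remove_nodes_from(random.sample(list(uniform.nodes), 60))

      # Localized failure: remove the k nodes closest to a random point.
      cx, cy = random.random(), random.random()
      by_dist = sorted(G.nodes, key=lambda n: (pos[n][0]-cx)**2 + (pos[n][1]-cy)**2)
      localized = G.copy()
      localized.remove_nodes_from(by_dist[:60])

      print("uniform  :", largest_cc_fraction(uniform))
      print("localized:", largest_cc_fraction(localized))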

  6. Quantifying performance limitations of Kalman filters in state vector estimation problems

    NASA Astrophysics Data System (ADS)

    Bageshwar, Vibhor Lal

    In certain applications, the performance objectives of a Kalman filter (KF) are to compute unbiased, minimum variance estimates of a state mean vector governed by a stochastic system. The KF can be considered as a model based algorithm used to recursively estimate the state mean vector and state covariance matrix. The general objective of this thesis is to investigate the performance limitations of the KF in three state vector estimation applications. Stochastic observability is a property of a system and refers to the existence of a filter for which the errors of the estimated state mean vector have bounded variance. In the first application, we derive a test to assess the stochastic observability of a KF implemented for discrete linear time-varying systems consisting of known, deterministic parameters. This class of system includes discrete nonlinear systems linearized about the true state vector trajectory. We demonstrate the utility of the stochastic observability test using an aided INS problem. Attitude determination systems consist of a sensor set, a stochastic system, and a filter to estimate attitude. In the second application, we design an inertially aided (IA) vector matching algorithm (VMA) architecture for estimating a spacecraft's attitude. The sensor set includes rate gyros and a three-axis magnetometer (TAM). The VMA is a filtering algorithm that solves Wahba's problem. The VMA is then extended by incorporating dynamic and sensor models to formulate the IA VMA architecture. We evaluate the performance of the IA VMA architectures by using an extended KF to blend post-processed spaceflight data. Model predictive control (MPC) algorithms achieve offset-free control by augmenting the nominal system model with a disturbance model. In the third application, we consider an offset-free MPC framework that includes an output integrator disturbance model and a KF to estimate the state and disturbance vectors. Using root locus techniques, we identify sufficient

  7. Improving Memory Subsystem Performance Using ViVA: Virtual Vector Architecture

    SciTech Connect

    Gebis, Joseph; Oliker, Leonid; Shalf, John; Williams, Samuel; Yelick, Katherine

    2009-01-12

    The disparity between microprocessor clock frequencies and memory latency is a primary reason why many demanding applications run well below peak achievable performance. Software controlled scratchpad memories, such as the Cell local store, attempt to ameliorate this discrepancy by enabling precise control over memory movement; however, scratchpad technology confronts the programmer and compiler with an unfamiliar and difficult programming model. In this work, we present the Virtual Vector Architecture (ViVA), which combines the memory semantics of vector computers with a software-controlled scratchpad memory in order to provide a more effective and practical approach to latency hiding. ViVA requires minimal changes to the core design and could thus be easily integrated with conventional processor cores. To validate our approach, we implemented ViVA on the Mambo cycle-accurate full system simulator, which was carefully calibrated to match the performance of our underlying PowerPC Apple G5 architecture. Results show that ViVA is able to deliver significant performance benefits over scalar techniques for a variety of memory access patterns as well as two important memory-bound compact kernels, corner turn and sparse matrix-vector multiplication -- achieving 2x-13x improvement compared with the scalar version. Overall, our preliminary ViVA exploration points to a promising approach for improving application performance on leading microprocessors with minimal design and complexity costs, in a power efficient manner.

  8. Support vector machine based training of multilayer feedforward neural networks as optimized by particle swarm algorithm: application in QSAR studies of bioactivity of organic compounds.

    PubMed

    Lin, Wei-Qi; Jiang, Jian-Hui; Zhou, Yan-Ping; Wu, Hai-Long; Shen, Guo-Li; Yu, Ru-Qin

    2007-01-30

    Multilayer feedforward neural networks (MLFNNs) are important modeling techniques widely used in QSAR studies for their ability to represent nonlinear relationships between descriptors and activity. However, the problems of overfitting and premature convergence to local optima still pose great challenges in the practice of MLFNNs. To circumvent these problems, a support vector machine (SVM) based training algorithm for MLFNNs has been developed with the incorporation of particle swarm optimization (PSO). The introduction of the SVM based training mechanism imparts the developed algorithm with inherent capacity for combating the overfitting problem. Moreover, with the implementation of PSO for searching the optimal network weights, the SVM based learning algorithm shows relatively high efficiency in converging to the optima. The proposed algorithm has been evaluated using the Hansch data set. Application to QSAR studies of the activity of COX-2 inhibitors is also demonstrated. The results reveal that this technique provides superior performance to backpropagation (BP) and PSO training neural networks. PMID:17186488

  9. Performance monitoring for coherent DP-QPSK systems based on stokes vectors analysis

    NASA Astrophysics Data System (ADS)

    Louchet, Hadrien; Koltchanov, Igor; Richter, André

    2010-12-01

    We show how to estimate accurately the Jones matrix of the transmission line by analyzing the Stokes vectors of DP-QPSK signals. This method can be used to perform in-situ PMD measurement in dual-polarization QPSK systems, and in addition to the constant modulus algorithm (CMA) to mitigate polarization-induced impairments. The applicability of this method to other modulation formats is discussed.

  10. Performance of an integrated network model

    PubMed Central

    Lehmann, François; Dunn, David; Beaulieu, Marie-Dominique; Brophy, James

    2016-01-01

    Objective To evaluate the changes in accessibility, patients’ care experiences, and quality-of-care indicators following a clinic’s transformation into a fully integrated network clinic. Design Mixed-methods study. Setting Verdun, Que. Participants Data on all patient visits were used, in addition to 2 distinct patient cohorts: 134 patients with chronic illness (ie, diabetes, arteriosclerotic heart disease, or both); and 450 women between the ages of 20 and 70 years. Main outcome measures Accessibility was measured by the number of walk-in visits, scheduled visits, and new patient enrolments. With the first cohort, patients’ care experiences were measured using validated serial questionnaires; and quality-of-care indicators were measured using biologic data. With the second cohort, quality of preventive care was measured using the number of Papanicolaou tests performed as a surrogate marker. Results Despite a negligible increase in the number of physicians, there was an increase in accessibility after the clinic’s transition to an integrated network model. During the first 4 years of operation, the number of scheduled visits more than doubled, nonscheduled visits (walk-in visits) increased by 29%, and enrolment of vulnerable patients (those with chronic illnesses) at the clinic remained high. Patient satisfaction with doctors was rated very highly at all points of time that were evaluated. While the number of Pap tests done did not increase with time, the proportion of patients meeting hemoglobin A1c and low-density lipoprotein guideline target levels increased, as did the number of patients tested for microalbuminuria. Conclusion Transformation to an integrated network model of care led to increased efficiency and enhanced accessibility with no negative effects on the doctor-patient relationship. Improvements in biologic data also suggested better quality of care. PMID:27521410

  11. Online monitoring and control of particle size in the grinding process using least square support vector regression and resilient back propagation neural network.

    PubMed

    Pani, Ajaya Kumar; Mohanta, Hare Krishna

    2015-05-01

    Particle size soft sensing in cement mills will be largely helpful in maintaining desired cement fineness or Blaine. Despite the growing use of vertical roller mills (VRM) for clinker grinding, very little research work is available on VRM modeling. This article reports the design of three types of feed forward neural network models and a least square support vector regression (LS-SVR) model of a VRM for online monitoring of cement fineness based on mill data collected from a cement plant. In the data pre-processing step, a comparative study of the various outlier detection algorithms has been performed. Subsequently, for model development, the advantage of algorithm based data splitting over random selection is presented. The training data set obtained by use of the Kennard-Stone maximal intra distance criterion (CADEX algorithm) was used for development of LS-SVR, back propagation neural network, radial basis function neural network and generalized regression neural network models. Simulation results show that the resilient back propagation model performs better than the RBF network, regression network and LS-SVR models. Model implementation has been done in the SIMULINK platform showing the online detection of abnormal data and real time estimation of cement Blaine from the knowledge of the input variables. Finally, a closed loop study shows how the model can be effectively utilized for maintaining cement fineness at the desired value. PMID:25528293

  12. Performance Characteristics of an Adaptive Mesh Refinement Calculation on Scalar and Vector Platforms

    SciTech Connect

    Welcome, Michael; Rendleman, Charles; Oliker, Leonid; Biswas, Rupak

    2006-01-31

    Adaptive mesh refinement (AMR) is a powerful technique that reduces the resources necessary to solve otherwise intractable problems in computational science. The AMR strategy solves the problem on a relatively coarse grid, and dynamically refines it in regions requiring higher resolution. However, AMR codes tend to be far more complicated than their uniform grid counterparts due to the software infrastructure necessary to dynamically manage the hierarchical grid framework. Despite this complexity, it is generally believed that future multi-scale applications will increasingly rely on adaptive methods to study problems at unprecedented scale and resolution. Recently, a new generation of parallel-vector architectures have become available that promise to achieve extremely high sustained performance for a wide range of applications, and are the foundation of many leadership-class computing systems worldwide. It is therefore imperative to understand the tradeoffs between conventional scalar and parallel-vector platforms for solving AMR-based calculations. In this paper, we examine the HyperCLaw AMR framework to compare and contrast performance on the Cray X1E, IBM Power3 and Power5, and SGI Altix. To the best of our knowledge, this is the first work that investigates and characterizes the performance of an AMR calculation on modern parallel-vector systems.

  13. Non-metallic coating thickness prediction using artificial neural network and support vector machine with time resolved thermography

    NASA Astrophysics Data System (ADS)

    Wang, Hongjin; Hsieh, Sheng-Jen; Peng, Bo; Zhou, Xunfei

    2016-07-01

    A method that requires no knowledge of the thermal properties of coatings or substrates would be of interest for industrial applications. Supervised machine learning regressions may provide a possible solution to the problem. This paper compares the performance of two regression models (artificial neural networks (ANN) and support vector machines for regression (SVM)) with respect to coating thickness estimates based on surface temperature increments collected via time resolved thermography. We describe the role of SVM in coating thickness prediction. Non-dimensional analyses are conducted to illustrate the effects of coating thicknesses and various factors on surface temperature increments; it is theoretically possible to correlate coating thickness with the surface temperature increment. Based on the analyses, the laser power is selected such that, during heating, the temperature increment is high enough to resolve the coating thickness variation but low enough to avoid surface melting. Sixty-one paint-coated samples with coating thicknesses varying from 63.5 μm to 571 μm are used to train the models. Hyper-parameters of the models are optimized by 10-fold cross validation. Another 28 sets of data are then collected to test the performance of the methods. The study shows that SVM can provide reliable predictions of unknown data, due to its deterministic characteristics, and that it works well with a small input data set. The SVM model generates more accurate coating thickness estimates than the ANN model.
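
    A schematic scikit-learn analogue of the regression workflow described above (SVM regression with hyper-parameters chosen by 10-fold cross-validation), fitted to placeholder "surface temperature increment" features rather than the thermography data; the ANN counterpart is omitted.

      # Schematic analogue of the workflow above: support vector regression with
      # hyper-parameters tuned by 10-fold cross-validation on placeholder data.
      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import GridSearchCV
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(5)
      thickness = rng.uniform(63.5, 571.0, size=61)          # microns, 61 samples
      # Hypothetical features: temperature increments at three sampling times,
      # decreasing with coating thickness plus measurement noise.
      X = np.column_stack([c / (thickness + t0) + 0.01 * rng.standard_normal(61)
                           for c, t0 in [(800, 50), (900, 120), (950, 300)]])

      model = GridSearchCV(
          make_pipeline(StandardScaler(), SVR(kernel="rbf")),
          param_grid={"svr__C": [1, 10, 100], "svr__gamma": ["scale", 0.1, 1.0]},
          cv=10, scoring="neg_mean_absolute_error")
      model.fit(X, thickness)
      print(model.best_params_, round(-model.best_score_, 1), "um MAE")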

  14. Connections between inversion, kriging, wiener filters, support vector machines, and neural networks.

    NASA Astrophysics Data System (ADS)

    Kuzma, H. A.; Kappler, K. A.; Rector, J. W.

    2006-12-01

    Kriging, wiener filters, support vector machines (SVMs), neural networks, linear and non-linear inversion are methods for predicting the values of one set of variables given the values of another. They can all be used to estimate a set of model parameters from measured data given that a physical relationship exists between models and data. However, since the methods were developed in different fields, the mathematics used to describe them tend to obscure rather than highlight the links between them. In this poster, we diagram the methods and clarify their connections in hopes that practitioners of one method will be able to understand and learn from the insights developed in another. At the heart of all of the methods are a set of coefficients that must be found by minimizing an objective function. The solution to the objective function can be found either by inverting a matrix, or by searching through a space of possible answers. We distinguish between direct inversion, in which the desired coefficients are those of the model itself, and indirect inversion, in which examples of models and data are used to estimate the coefficients of an inverse process that, once discovered, can be used to compute new models from new data. Kriging is developed from Geostatistics. The model is usually a rock property (such as gold concentration) and the data is a sample location (x,y,z). The desired coefficients are a set of weights which are used to predict the concentration of a sample taken at a new location based on a variogram. The variogram is computed by averaging across a given set of known samples and is manually adjusted to reflect prior knowledge. Wiener filters were developed in signal processing to predict the values of one time-series from measurements of another. A wiener filter can be derived from kriging by replacing variograms with correlation. Support vector machines are an offshoot of statistical learning theory. They can be written as a form of kriging in which

  15. Communication Network Patterns and Employee Performance with New Technology.

    ERIC Educational Resources Information Center

    Papa, Michael J.

    1990-01-01

    Investigates the relationship between employee performance, new technology, employee communication network variables (activity, size, diversity, and integrativeness), and productivity at two corporate offices. Reports significant positive relationships between three of the network variables and employee productivity with new technology. Discusses…

  16. Static internal performance including thrust vectoring and reversing of two-dimensional convergent-divergent nozzles

    NASA Technical Reports Server (NTRS)

    Re, R. J.; Leavitt, L. D.

    1984-01-01

    The effects of geometric design parameters on two dimensional convergent-divergent nozzles were investigated at nozzle pressure ratios up to 12 in the static test facility. Forward flight (dry and afterburning power settings), vectored-thrust (afterburning power setting), and reverse-thrust (dry power setting) nozzles were investigated. The nozzles had thrust vector angles from 0 deg to 20.26 deg, throat aspect ratios of 3.696 to 7.612, throat radii from sharp to 2.738 cm, expansion ratios from 1.089 to 1.797, and various sidewall lengths. The results indicate that unvectored two dimensional convergent-divergent nozzles have static internal performance comparable to axisymmetric nozzles with similar expansion ratios.

  17. Optimization of ion-exchange protein separations using a vector quantizing neural network.

    PubMed

    Klein, E J; Rivera, S L; Porter, J E

    2000-01-01

    In this work, a previously proposed methodology for the optimization of analytical scale protein separations using ion-exchange chromatography is subjected to two challenging case studies. The optimization methodology uses a Doehlert shell design for design of experiments and a novel criteria function to rank chromatograms in order of desirability. This chromatographic optimization function (COF) accounts for the separation between neighboring peaks, the total number of peaks eluted, and total analysis time. The COF is penalized when undesirable peak geometries (i.e., skewed and/or shouldered peaks) are present as determined by a vector quantizing neural network. Results of the COF analysis are fit to a quadratic response model, which is optimized with respect to the optimization variables using an advanced Nelder and Mead simplex algorithm. The optimization methodology is tested on two case study sample mixtures, the first of which is composed of equal parts of lysozyme, conalbumin, bovine serum albumin, and transferrin, and the second of which contains equal parts of conalbumin, bovine serum albumin, tranferrin, beta-lactoglobulin, insulin, and alpha -chymotrypsinogen A. Mobile-phase pH and gradient length are optimized to achieve baseline resolution of all solutes for both case studies in acceptably short analysis times, thus demonstrating the usefulness of the empirical optimization methodology. PMID:10835256

  18. Efficient modeling of vector hysteresis using a novel Hopfield neural network implementation of Stoner–Wohlfarth-like operators

    PubMed Central

    Adly, Amr A.; Abd-El-Hafiz, Salwa K.

    2012-01-01

    Incorporation of hysteresis models in electromagnetic analysis approaches is indispensable to accurate field computation in complex magnetic media. Throughout those computations, vector nature and computational efficiency of such models become especially crucial when sophisticated geometries requiring massive sub-region discretization are involved. Recently, an efficient vector Preisach-type hysteresis model constructed from only two scalar models having orthogonally coupled elementary operators has been proposed. This paper presents a novel Hopfield neural network approach for the implementation of Stoner–Wohlfarth-like operators that could lead to a significant enhancement in the computational efficiency of the aforementioned model. Advantages of this approach stem from the non-rectangular nature of these operators that substantially minimizes the number of operators needed to achieve an accurate vector hysteresis model. Details of the proposed approach, its identification and experimental testing are presented in the paper. PMID:25685446

  19. Support Vector Machine and Artificial Neural Network Models for the Classification of Grapevine Varieties Using a Portable NIR Spectrophotometer.

    PubMed

    Gutiérrez, Salvador; Tardaguila, Javier; Fernández-Novales, Juan; Diago, María P

    2015-01-01

    The identification of different grapevine varieties, currently addressed using visual ampelometry, DNA analysis and, very recently, hyperspectral analysis under laboratory conditions, is an issue of great importance in the wine industry. This work presents support vector machine and artificial neural network modelling for grapevine varietal classification from in-field leaf spectroscopy. Modelling was attempted at two scales: site-specific and a global scale. Spectral measurements were obtained in the near-infrared (NIR) spectral range between 1600 and 2400 nm under field conditions in a non-destructive way using a portable spectrophotometer. For the site-specific approach, spectra were collected from the adaxial side of 400 individual leaves of 20 grapevine (Vitis vinifera L.) varieties one week after veraison. For the global model, two additional sets of spectra were collected one week before harvest from two different vineyards in another vintage, each one consisting of 48 measurements from individual leaves of six varieties. Several combinations of spectra scatter correction and smoothing filtering were studied. For the training of the models, support vector machines and artificial neural networks were employed using the pre-processed spectra as input and the varieties as the classes of the models. The results from the pre-processing study showed that whether scatter correction was used or not had no influence. Also, second-derivative Savitzky-Golay filtering with a window size of 5 yielded the best outcomes. For the site-specific model, with 20 classes, the best classifiers yielded an overall score of 87.25% of correctly classified samples. These results were compared under the same conditions with a model trained using partial least squares discriminant analysis, which showed a worse performance in every case. For the global model, a 6-class dataset involving samples from three different vineyards, two years and leaves
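
    A compact sketch of the best-performing preprocessing and classifier pair reported above (second-derivative Savitzky-Golay filtering with a window of 5, followed by an SVM), applied to synthetic placeholder spectra rather than the field NIR measurements; scatter-correction variants and the ANN model are not included.

      # Sketch: 2nd-derivative Savitzky-Golay preprocessing (window 5) followed
      # by an SVM classifier, on synthetic placeholder "spectra".
      import numpy as np
      from scipy.signal import savgol_filter
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(4)
      n_per_class, n_bands = 40, 200
      wav = np.linspace(0, 1, n_bands)

      # Two hypothetical "varieties" differing only in a subtle absorption bump.
      class_a = 1.0 + 0.05 * np.exp(-((wav - 0.40) / 0.05) ** 2)
      class_b = 1.0 + 0.05 * np.exp(-((wav - 0.45) / 0.05) ** 2)
      X = np.vstack([class_a + 0.01 * rng.standard_normal((n_per_class, n_bands)),
                     class_b + 0.01 * rng.standard_normal((n_per_class, n_bands))])
      y = np.array([0] * n_per_class + [1] * n_per_class)

      X_d2 = savgol_filter(X, window_length=5, polyorder=2, deriv=2, axis=1)
      scores = cross_val_score(SVC(kernel="rbf", C=10.0), X_d2, y, cv=5)
      print("cross-validated accuracy:", scores.mean().round(3))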

  20. Performance Evaluation of Lattice-Boltzmann Magnetohydrodynamics Simulations on Modern Parallel Vector Systems

    SciTech Connect

    Carter, Jonathan; Oliker, Leonid

    2006-01-09

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on such platforms has become a major concern in high performance computing. The latest generation of custom-built parallel vector systems have the potential to address this concern for numerical algorithms with sufficient regularity in their computational structure. In this work, we explore two- and three-dimensional implementations of a lattice-Boltzmann magnetohydrodynamics (MHD) physics application on some of today's most powerful supercomputing platforms. Results compare performance between the vector-based Cray X1, Earth Simulator, and newly-released NEC SX-8, and the commodity-based superscalar platforms of the IBM Power3, Intel Itanium2, and AMD Opteron. Overall results show that the SX-8 attains unprecedented aggregate performance across our evaluated applications.

  1. Analysis of complex network performance and heuristic node removal strategies

    NASA Astrophysics Data System (ADS)

    Jahanpour, Ehsan; Chen, Xin

    2013-12-01

    Removing important nodes from complex networks is a great challenge in fighting against criminal organizations and preventing disease outbreaks. Six network performance metrics, including four new metrics, are applied to quantify networks' diffusion speed, diffusion scale, homogeneity, and diameter. In order to efficiently identify nodes whose removal maximally destroys a network, i.e., minimizes network performance, ten structured heuristic node removal strategies are designed using different node centrality metrics including degree, betweenness, reciprocal closeness, complement-derived closeness, and eigenvector centrality. These strategies are applied to remove nodes from the September 11, 2001 hijackers' network, and their performance is compared to that of a random strategy, which removes randomly selected nodes, and the locally optimal solution (LOS), which removes nodes to minimize network performance at each step. The computational complexity of the 11 strategies and LOS is also analyzed. Results show that the node removal strategies using degree and betweenness centralities are more efficient than other strategies.
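
    A compact NetworkX sketch of such a structured strategy: greedily remove the node with the highest degree or betweenness centrality and compare the damage with random removal, using global efficiency as the performance proxy. The paper's four new metrics and the hijackers' network data are not reproduced here.

      # Sketch: heuristic node removal guided by centrality, compared with random
      # removal, using global efficiency as the network performance proxy.
      import random
      import networkx as nx

      def remove_by(G, score_fn, k):
          H = G.copy()
          for _ in range(k):
              scores = score_fn(H)
              H.remove_node(max(scores, key=scores.get))   # greedy: top-ranked node
          return H

      G = nx.barabasi_albert_graph(200, 3, seed=2)
      k = 20

      by_degree      = remove_by(G, nx.degree_centrality, k)
      by_betweenness = remove_by(G, nx.betweenness_centrality, k)

      H_random = G.copy()
      H_random.remove_nodes_from(random.sample(list(G.nodes), k))

      for name, H in [("degree", by_degree), ("betweenness", by_betweenness),
                      ("random", H_random)]:
          print(name, round(nx.global_efficiency(H), 3))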

  2. Improving the performance of physiologic hot flash measures with support vector machines.

    PubMed

    Thurston, Rebecca C; Matthews, Karen A; Hernandez, Javier; De La Torre, Fernando

    2009-03-01

    Hot flashes are experienced by over 70% of menopausal women. Criteria to classify hot flashes from physiologic signals show variable performance. The primary aim was to compare conventional criteria to Support Vector Machines (SVMs), an advanced machine learning method, to classify hot flashes from sternal skin conductance. Thirty women with ≥4 hot flashes/day underwent laboratory hot flash testing with skin conductance measurement. Hot flashes were quantified with conventional (≥2 micromho, 30 s) and SVM methods. Conventional methods had poor sensitivity (sensitivity=0.41, specificity=1, positive predictive value (PPV)=0.94, negative predictive value (NPV)=0.85) in classifying hot flashes, with poorest performance among women with high body mass index or anxiety. SVM models showed improved performance (sensitivity=0.89, specificity=0.96, PPV=0.85, NPV=0.96). SVM may improve the performance of skin conductance measures of hot flashes. PMID:19170952
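
    A schematic scikit-learn comparison in the same spirit: a fixed-threshold rule versus an SVM with an RBF kernel, scored by sensitivity and specificity, using synthetic placeholder features rather than the study's skin-conductance recordings.

      # Schematic comparison of a fixed-threshold rule vs. an SVM classifier on
      # synthetic "skin conductance change" features (placeholder data only).
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import confusion_matrix

      rng = np.random.default_rng(3)
      n = 600
      rise = np.concatenate([rng.normal(2.5, 1.0, n // 2),    # hot-flash windows
                             rng.normal(0.8, 1.0, n // 2)])   # non-flash windows
      slope = np.concatenate([rng.normal(1.0, 0.5, n // 2),
                              rng.normal(0.2, 0.5, n // 2)])
      X = np.column_stack([rise, slope])
      y = np.array([1] * (n // 2) + [0] * (n // 2))

      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

      def report(name, pred):
          tn, fp, fn, tp = confusion_matrix(yte, pred).ravel()
          print(f"{name}: sensitivity={tp/(tp+fn):.2f} specificity={tn/(tn+fp):.2f}")

      report("threshold (rise >= 2)", (Xte[:, 0] >= 2.0).astype(int))
      report("SVM (RBF kernel)", SVC(kernel="rbf").fit(Xtr, ytr).predict(Xte))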

  3. Wireless Local Area Network Performance Inside Aircraft Passenger Cabins

    NASA Technical Reports Server (NTRS)

    Whetten, Frank L.; Soroker, Andrew; Whetten, Dennis A.; Whetten, Frank L.; Beggs, John H.

    2005-01-01

    An examination of IEEE 802.11 wireless network performance within an aircraft fuselage is performed. This examination measured the propagated RF power along the length of the fuselage, and the associated network performance: the link speed, total throughput, and packet losses and errors. A total of four airplanes (one single-aisle and three twin-aisle) were tested with 802.11a, 802.11b, and 802.11g networks.

  4. Performance and human factors results from thrust vectoring investigations in simulated air combat

    NASA Technical Reports Server (NTRS)

    Pennington, J. E.; Meintel, A. J., Jr.

    1980-01-01

    In support of research related to advanced fighter technology, the Langley Differential Maneuvering Simulator (DMS) has been used to investigate the effects of advanced aerodynamic concepts, parametric changes in performance parameters, and advanced flight control systems on the combat capability of fighter airplanes. At least five studies were related to thrust vectoring and/or inflight thrust reversing. The aircraft simulated ranged from F-4 class to F-15 class, and included the AV-8 Harrier. This paper presents an overview of these studies including the assumptions involved, trends of results, and human factors considerations that were found.

  5. Multiple-error-correcting codes for improving the performance of optical matrix-vector processors.

    PubMed

    Neifeld, M A

    1995-04-01

    I examine the use of Reed-Solomon multiple-error-correcting codes for enhancing the performance of optical matrix-vector processors. An optimal code rate of 0.75 is found, and n = 127 block-length codes are seen to increase the optical matrix dimension achievable by a factor of 2.0 for a required system bit-error rate of 10^-15. The optimal codes required for various matrix dimensions are determined. I show that single code word implementations are more efficient than those utilizing multiple code words. PMID:19859320

  6. Static internal performance of a two-dimensional convergent nozzle with thrust-vectoring capability up to 60 deg

    NASA Technical Reports Server (NTRS)

    Leavitt, L. D.

    1985-01-01

    An investigation was conducted at wind-off conditions in the static-test facility of the Langley 16-Foot Transonic Tunnel to determine the internal performance characteristics of a two-dimensional convergent nozzle with a thrust-vectoring capability up to 60 deg. Vectoring was accomplished by a downward rotation of a hinged upper convergent flap and a corresponding rotation of a center-pivoted lower convergent flap. The effects of geometric thrust-vector angle and upper-rotating-flap geometry on internal nozzle performance characteristics were investigated. Nozzle pressure ratio was varied from 1.0 (jet off) to approximately 5.0.

  7. Performance characteristics of a one-third-scale, vectorable ventral nozzle for SSTOVL aircraft

    NASA Technical Reports Server (NTRS)

    Esker, Barbara S.; Mcardle, Jack G.

    1990-01-01

    Several proposed configurations for supersonic short takeoff, vertical landing aircraft will require one or more ventral nozzles for lift and pitch control. The swivel nozzle is one possible ventral nozzle configuration. A swivel nozzle (approximately one-third scale) was built and tested on a generic model tailpipe. This nozzle was capable of vectoring the flow up to ±23 deg from the vertical position. Steady-state performance data were obtained at pressure ratios to 4.5, and pitot-pressure surveys of the nozzle exit plane were made. Two configurations were tested: the swivel nozzle with a square contour of the leading edge of the ventral duct inlet, and the same nozzle with a round leading edge contour. The swivel nozzle showed good performance overall, and the round-leading-edge configuration showed an improvement in performance over the square-leading-edge configuration.

  8. A novel application classification and its impact on network performance

    NASA Astrophysics Data System (ADS)

    Zhang, Shuo; Huang, Ning; Sun, Xiaolei; Zhang, Yue

    2016-07-01

    Network traffic is believed to have a significant impact on network performance and is the result of application operation on networks. The majority of current network performance analyses are based on the premise that traffic is transmitted over the shortest path, which is too simple to reflect a real traffic process. The real traffic process is related to the network application process characteristics, involving realistic user behavior. In this paper, applications are first divided into the following three categories according to realistic application process characteristics: random application, customized application and routine application. Then, numerical simulations are carried out to analyze the effect of different applications on network performance. The main results show that (i) network efficiency for the BA scale-free network is less than for the ER random network when a similar single application is loaded on the network; (ii) customized applications have the greatest effect on network efficiency when mixed multiple applications are loaded on a BA network.

  9. Manipulation of Host Quality and Defense by a Plant Virus Improves Performance of Whitefly Vectors.

    PubMed

    Su, Qi; Preisser, Evan L; Zhou, Xiao Mao; Xie, Wen; Liu, Bai Ming; Wang, Shao Li; Wu, Qing Jun; Zhang, You Jun

    2015-02-01

    Pathogen-mediated interactions between insect vectors and their host plants can affect herbivore fitness and the epidemiology of plant diseases. While the role of plant quality and defense in mediating these tripartite interactions has been recognized, there are many ecologically and economically important cases where the nature of the interaction has yet to be characterized. The Bemisia tabaci (Gennadius) cryptic species Mediterranean (MED) is an important vector of tomato yellow leaf curl virus (TYLCV), and performs better on virus-infected tomato than on uninfected controls. We assessed the impact of TYLCV infection on plant quality and defense, and the direct impact of TYLCV infection on MED feeding. We found that although TYLCV infection has a minimal direct impact on MED, the virus alters the nutritional content of leaf tissue and phloem sap in a manner beneficial to MED. TYLCV infection also suppresses herbivore-induced production of plant defensive enzymes and callose deposition. The strongly positive net effect of TYLCV on MED is consistent with previously reported patterns of whitefly behavior and performance, and provides a foundation for further exploration of the molecular mechanisms responsible for these effects and the evolutionary processes that shape them. PMID:26470098

  10. Improving matrix-vector product performance and multi-level preconditioning for the parallel PCG package

    SciTech Connect

    McLay, R.T.; Carey, G.F.

    1996-12-31

    In this study we consider parallel solution of sparse linear systems arising from discretized PDEs. As part of our continuing work on our parallel PCG Solver package, we have made improvements in two areas. The first is improving the performance of the matrix-vector product. Here, on regular finite-difference grids, we are able to use the cache memory more efficiently for smaller domains or where there are multiple degrees of freedom. The second problem of interest in the present work is the construction of preconditioners in the context of the parallel PCG solver we are developing. Here the problem is partitioned over a set of processor subdomains and the matrix-vector product for PCG is carried out in parallel for overlapping grid subblocks. For problems of scaled speedup, the actual rate of convergence of the unpreconditioned system deteriorates as the mesh is refined. Multigrid and subdomain strategies provide a logical approach to resolving the problem. We consider the parallel trade-offs between communication and computation and provide a complexity analysis of a representative algorithm. Some preliminary calculations using the parallel package and comparisons with other preconditioners are provided together with parallel performance results.
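
    To make the kernel concrete, the serial SciPy sketch below assembles a 2-D finite-difference Laplacian in sparse form and solves it with conjugate gradients, with and without an incomplete-LU preconditioner; each iteration is dominated by exactly the sparse matrix-vector product discussed above, while the parallel subdomain and overlap machinery of the PCG package is not reproduced.

      # Sketch: a 2-D finite-difference Laplacian solved with conjugate gradients;
      # each CG iteration is dominated by a sparse matrix-vector product.
      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import cg, spilu, LinearOperator

      n = 64                                            # grid is n x n
      I = sp.identity(n)
      T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
      A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()       # 5-point Laplacian
      b = np.ones(n * n)

      # Unpreconditioned CG vs. CG with an incomplete-LU preconditioner.
      x0, info0 = cg(A, b)
      ilu = spilu(A.tocsc(), drop_tol=1e-3)
      M = LinearOperator(A.shape, matvec=ilu.solve)
      x1, info1 = cg(A, b, M=M)
      print(info0, info1, np.linalg.norm(A @ x1 - b))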

  11. Network interface unit design options performance analysis

    NASA Technical Reports Server (NTRS)

    Miller, Frank W.

    1991-01-01

    An analysis is presented of three design options for the Space Station Freedom (SSF) onboard Data Management System (DMS) Network Interface Unit (NIU). The NIU provides the interface from the Fiber Distributed Data Interface (FDDI) local area network (LAN) to the DMS processing elements. The FDDI LAN provides the primary means for command and control and low and medium rate telemetry data transfers on board the SSF. The results of this analysis provide the basis for the implementation of the NIU.

  12. On the MAC/network/energy performance evaluation of Wireless Sensor Networks: Contrasting MPH, AODV, DSR and ZTR routing protocols.

    PubMed

    Del-Valle-Soto, Carolina; Mex-Perera, Carlos; Orozco-Lugo, Aldo; Lara, Mauricio; Galván-Tejada, Giselle M; Olmedo, Oscar

    2014-01-01

    Wireless Sensor Networks deliver valuable information for long periods, so it is desirable to have optimum performance, reduced delays, low overhead, and reliable delivery of information. In this work, proposed metrics that influence energy consumption are used for a performance comparison among our proposed routing protocol, called Multi-Parent Hierarchical (MPH), and the well-known sensor network protocols Ad hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR), and Zigbee Tree Routing (ZTR), all of them working with the IEEE 802.15.4 MAC layer. Results show how some communication metrics affect performance, throughput, reliability and energy consumption. It can be concluded that MPH is an efficient protocol since it reaches the best performance among the four protocols under evaluation, with a 19.3% reduction of packet retransmissions, a 26.9% decrease of overhead, and a 41.2% improvement in the capacity of the protocol for recovering the topology from failures with respect to the AODV protocol. We implemented and tested MPH in a real network of 99 nodes during ten days and analyzed parameters such as number of hops, connectivity and delay, in order to validate our simulator and obtain reliable results. Moreover, an energy model of the CC2530 chip is proposed and used for simulations of the four aforementioned protocols, showing that MPH has a 15.9% reduction of energy consumption with respect to AODV, 13.7% versus DSR, and 5% against ZTR. PMID:25474377

  13. On the MAC/Network/Energy Performance Evaluation of Wireless Sensor Networks: Contrasting MPH, AODV, DSR and ZTR Routing Protocols

    PubMed Central

    Del-Valle-Soto, Carolina; Mex-Perera, Carlos; Orozco-Lugo, Aldo; Lara, Mauricio; Galván-Tejada, Giselle M.; Olmedo, Oscar

    2014-01-01

    Wireless Sensor Networks deliver valuable information for long periods, so it is desirable to have optimum performance, reduced delays, low overhead, and reliable delivery of information. In this work, proposed metrics that influence energy consumption are used for a performance comparison among our proposed routing protocol, called Multi-Parent Hierarchical (MPH), and the well-known sensor network protocols Ad hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR), and Zigbee Tree Routing (ZTR), all of them working with the IEEE 802.15.4 MAC layer. Results show how some communication metrics affect performance, throughput, reliability and energy consumption. It can be concluded that MPH is an efficient protocol since it reaches the best performance among the four protocols under evaluation, with a 19.3% reduction of packet retransmissions, a 26.9% decrease of overhead, and a 41.2% improvement in the capacity of the protocol for recovering the topology from failures with respect to the AODV protocol. We implemented and tested MPH in a real network of 99 nodes during ten days and analyzed parameters such as number of hops, connectivity and delay, in order to validate our simulator and obtain reliable results. Moreover, an energy model of the CC2530 chip is proposed and used for simulations of the four aforementioned protocols, showing that MPH has a 15.9% reduction of energy consumption with respect to AODV, 13.7% versus DSR, and 5% against ZTR. PMID:25474377

  14. Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks.

    PubMed

    Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo

    2014-04-21

    Software defined networking (SDN) has become the focus in the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. The optical transport network is also regarded as an important application scenario for SDN, which is adopted as the enabling technology of data communication networks (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN based DCN for large scale optical networks, which is very important for the technology selection in future optical network deployment, has not been evaluated up to now. In this paper we have built a large scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time have been demonstrated under various network environments, such as with different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as a proof of concept for future network deployment. PMID:24787842

  15. Using high-performance networks to enable computational aerosciences applications

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1992-01-01

    One component of the U.S. Federal High Performance Computing and Communications Program (HPCCP) is the establishment of a gigabit network to provide a communications infrastructure for researchers across the nation. This gigabit network will provide new services and capabilities, in addition to increased bandwidth, to enable future applications. An understanding of these applications is necessary to guide the development of the gigabit network and other high-performance networks of the future. In this paper we focus on computational aerosciences applications run remotely using the Numerical Aerodynamic Simulation (NAS) facility located at NASA Ames Research Center. We characterize these applications in terms of network-related parameters and relate user experiences that reveal limitations imposed by the current wide-area networking infrastructure. Then we investigate how the development of a nationwide gigabit network would enable users of the NAS facility to work in new, more productive ways.

  16. Performance Evaluation of Plasma and Astrophysics Applications on Modern Parallel Vector Systems

    SciTech Connect

    Carter, Jonathan; Oliker, Leonid; Shalf, John

    2005-10-28

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on such platforms has become a major concern in high performance computing. The latest generation of custom-built parallel vector systems have the potential to address this concern for numerical algorithms with sufficient regularity in their computational structure. In this work, we explore two and three dimensional implementations of a plasma physics application, as well as a leading astrophysics package, on some of today's most powerful supercomputing platforms. Results compare performance between the vector-based Cray X1, Earth Simulator, and newly-released NEC SX-8, with the commodity-based superscalar platforms of the IBM Power3, Intel Itanium2, and AMD Opteron. Overall results show that the SX-8 attains unprecedented aggregate performance across our evaluated applications.

  17. Support Vector Machine and Artificial Neural Network Models for the Classification of Grapevine Varieties Using a Portable NIR Spectrophotometer

    PubMed Central

    Gutiérrez, Salvador; Tardaguila, Javier; Fernández-Novales, Juan; Diago, María P.

    2015-01-01

    The identification of different grapevine varieties, currently addressed using visual ampelometry, DNA analysis and, very recently, hyperspectral analysis under laboratory conditions, is an issue of great importance in the wine industry. This work presents support vector machine and artificial neural network modelling for grapevine varietal classification from in-field leaf spectroscopy. Modelling was attempted at two scales: site-specific and global. Spectral measurements were obtained on the near-infrared (NIR) spectral range between 1600 and 2400 nm under field conditions in a non-destructive way using a portable spectrophotometer. For the site-specific approach, spectra were collected from the adaxial side of 400 individual leaves of 20 grapevine (Vitis vinifera L.) varieties one week after veraison. For the global model, two additional sets of spectra were collected one week before harvest from two different vineyards in another vintage, each consisting of 48 measurements from individual leaves of six varieties. Several combinations of spectra scatter correction and smoothing filtering were studied. For the training of the models, support vector machines and artificial neural networks were employed using the pre-processed spectra as input and the varieties as the classes of the models. The results from the pre-processing study showed that scatter correction had no influence on performance, and that a second-derivative Savitzky-Golay filter with a window size of 5 yielded the best results. For the site-specific model, with 20 classes, the best classifiers yielded an overall score of 87.25% of correctly classified samples. These results were compared under the same conditions with a model trained using partial least squares discriminant analysis, which showed a worse performance in every case. For the global model, a 6-class dataset involving samples from three different vineyards, two years and leaves
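
    A minimal sketch of the kind of pipeline described above is shown below, assuming synthetic spectra and hypothetical class labels (the real NIR data are not reproduced here): second-derivative Savitzky-Golay filtering with a window of 5, followed by an RBF support vector machine classifier.

```python
# Sketch with synthetic "spectra" (assumption): SG second-derivative
# preprocessing of per-sample spectra, then an SVM classifier over 20 classes.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_bands, n_classes = 400, 200, 20
y = rng.integers(0, n_classes, n_samples)
base = rng.normal(size=(n_classes, n_bands)).cumsum(axis=1)       # class-dependent smooth curves
X = base[y] + rng.normal(scale=0.5, size=(n_samples, n_bands))    # add measurement noise

# Second-derivative Savitzky-Golay filter, window size 5 (as in the abstract).
Xp = savgol_filter(X, window_length=5, polyorder=2, deriv=2, axis=1)

Xtr, Xte, ytr, yte = train_test_split(Xp, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10, gamma="scale").fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```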

  18. Building and measuring a high performance network architecture

    SciTech Connect

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck; Dugan, Jon; Wheeler, David; Wing, William R; Nickless, William; Goddard, Gregory; Corbato, Steven; Love, E. Paul; Daspit, Paul; Edwards, Hal; Mercer, Linden; Koester, David; Decina, Basil; Dart, Eli; Paul Reisinger, Paul; Kurihara, Riki; Zekauskas, Matthew J; Plesset, Eric; Wulf, Julie; Luce, Douglas; Rogers, James; Duncan, Rex; Mauth, Jeffery

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest performance networks in the world. At SC2000, large-scale and complex local and wide area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. This testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high performance networking technologies and to accumulate measurements that will give insights into the networks of the future.

  19. Performance Statistics of the DWD Ceilometer Network

    NASA Astrophysics Data System (ADS)

    Wagner, Frank; Mattis, Ina; Flentje, Harald; Thomas, Werner

    2015-04-01

    The DWD ceilometer network was created in 2008. In the following years more and more ceilometers of type CHM15k (manufacturer Jenoptik) were installed with the aim of observing atmospheric aerosol particles. Now, 58 ceilometers are in continuous operation. The presentation aims, on the one hand, at the statistical behavior of several instrumental parameters related to measurement performance. Some problems are addressed, and conclusions and recommendations are given on which parameters should be monitored for unattended automated operation. On the other hand, the presentation aims at a statistical analysis of several measured quantities. Differences between geographic locations (e.g. north versus south, mountainous versus flat terrain) are investigated. For instance, the occurrence of fog in lowlands is associated with the overall meteorological situation, whereas mountain stations such as Hohenpeissenberg are often within a cumulus cloud, which appears as fog in the measurements. The longest time series of data were acquired at Lindenberg, where the ceilometer was installed in 2008. By the end of 2008 the number of installed ceilometers had increased to 28, and by the end of 2009 already 42 instruments were measuring. In 2011 the ceilometers were upgraded to the so-called Nimbus instruments, which have enhanced capabilities for coping with and correcting short-term instrumental fluctuations (e.g. detector sensitivity). About 30% of all ceilometer measurements were made under clear skies and hence can be used without limitations for aerosol particle observations. Multiple cloud layers could only be detected in about 23% of all cases with clouds. This is caused either by the presence of only one cloud layer or by the ceilometer laser beam not being able to see through the lowest cloud, leaving it blind to higher cloud layers. Three cloud layers could only be detected in 5% of all cases with clouds. Considering only cases without clouds the diurnal cycle for

  20. Performance Enhancement for a GPS Vector-Tracking Loop Utilizing an Adaptive Iterated Extended Kalman Filter

    PubMed Central

    Chen, Xiyuan; Wang, Xiying; Xu, Yuan

    2014-01-01

    This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system with model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error, and based on this, a modified iterated extended Kalman filter (IEKF) named adaptive iterated Kalman filter (AIEKF) is proposed. A vector-tracking GPS receiver utilizing the AIEKF is implemented to evaluate the performance of the proposed method. Through road tests, it is shown that the proposed method has an obvious accuracy advantage over the IEKF and the adaptive extended Kalman filter (AEKF) in position determination. The results show that the proposed method effectively reduces the root-mean-square error (RMSE) of position (including longitude, latitude and altitude). Compared with the EKF, the position RMSE values of the AIEKF are reduced by about 45.1%, 40.9% and 54.6% in the east, north and up directions, respectively. Compared with the IEKF, the position RMSE values of the AIEKF are reduced by about 25.7%, 19.3% and 35.7% in the east, north and up directions, respectively. Compared with the AEKF, the position RMSE values of the AIEKF are reduced by about 21.6%, 15.5% and 30.7% in the east, north and up directions, respectively. PMID:25502124
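
    The iterated measurement update at the core of an IEKF can be sketched generically as below; this is a textbook iterated EKF update in numpy with a toy ranging example, not the paper's adaptive AIEKF or its GPS tracking-loop model.

```python
# Generic iterated EKF measurement update (textbook form, illustrative only):
# the measurement is re-linearized around the updated state on each iteration.
import numpy as np

def iekf_update(x0, P, z, h, H_jac, R, n_iter=5):
    x = x0.copy()
    for _ in range(n_iter):
        H = H_jac(x)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x0 + K @ (z - h(x) - H @ (x0 - x))
    P_new = (np.eye(len(x0)) - K @ H_jac(x)) @ P
    return x, P_new

# Toy example: estimate a 2-D position from noisy ranges to two beacons.
beacons = np.array([[0.0, 0.0], [10.0, 0.0]])
h = lambda x: np.linalg.norm(beacons - x, axis=1)
H_jac = lambda x: (x - beacons) / np.linalg.norm(beacons - x, axis=1)[:, None]
x_prior, P = np.array([4.0, 4.0]), np.eye(2) * 4.0
z = h(np.array([3.0, 5.0])) + np.array([0.05, -0.03])   # simulated measurement
print(iekf_update(x_prior, P, z, h, H_jac, np.eye(2) * 0.01)[0])
```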

  1. Urban Heat Island Growth Modeling Using Artificial Neural Networks and Support Vector Regression: A case study of Tehran, Iran

    NASA Astrophysics Data System (ADS)

    Sherafati, Sh. A.; Saradjian, M. R.; Niazmardi, S.

    2013-09-01

    Numerous investigations of the Urban Heat Island (UHI) show that land cover change is the main factor increasing Land Surface Temperature (LST) in urban areas. Therefore, to achieve a model able to simulate UHI growth, urban expansion should be considered first. Considerable research on urban expansion modeling has been done based on cellular automata (CA). Accordingly, the objective of this paper is to implement a CA method for trend detection of Tehran's UHI spatiotemporal growth based on urban sprawl parameters (such as distance to the nearest road, Digital Elevation Model (DEM), slope and aspect ratios). It should be mentioned that UHI growth modeling may have more complexities than urban expansion modeling, since the temperature of each pixel must be estimated instead of just its state (urban or non-urban). The most challenging part of a CA model is the definition of transfer rules. Here, two methods have been used to find appropriate transfer rules: Artificial Neural Networks (ANN) and Support Vector Regression (SVR). These approaches were chosen because artificial neural networks and support vector regression have significant abilities to handle the complications of such a spatial analysis in comparison with other methods like genetic or swarm intelligence. In this paper, the UHI change trend is discussed between 1984 and 2007. For this purpose, urban sprawl parameters in 1984 were calculated and added to the retrieved LST of that year. To obtain LST, Thematic Mapper (TM) and Enhanced Thematic Mapper (ETM+) night-time images were exploited, because the UHI phenomenon is more obvious during night hours. After that, multilayer feed-forward neural networks and support vector regression were used separately to find the relationship between these data and the retrieved LST in 2007. Since the transfer rules might not be the same in different regions, the satellite image of the city has

  2. Topology design and performance analysis of an integrated communication network

    NASA Technical Reports Server (NTRS)

    Li, V. O. K.; Lam, Y. F.; Hou, T. C.; Yuen, J. H.

    1985-01-01

    A research study on the topology design and performance analysis for the Space Station Information System (SSIS) network is conducted. It begins with a survey of existing research efforts in network topology design. Then a new approach for topology design is presented. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. The algorithm for generating subsets is described in detail, and various aspects of the overall design procedure are discussed. Two more efficient versions of this algorithm (applicable in specific situations) are also given. Next, two important aspects of network performance analysis, network reliability and message delays, are discussed. A new model is introduced to study the reliability of a network with dependent failures. For message delays, a collection of formulas from existing research results is given to compute or estimate the delays of messages in a communication network without making the independence assumption. The design algorithm, coded in PASCAL, is included as an appendix.
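
    The core enumeration idea, generating candidate designs (subsets of network components) in non-decreasing total cost and stopping at the first acceptable one, can be sketched with a priority queue. The function name and the feasibility predicate below are illustrative stand-ins, not the paper's PASCAL implementation.

```python
# Sketch (illustrative names and toy feasibility check): enumerate component
# subsets in non-decreasing total cost; return the first acceptable design.
import heapq

def cheapest_acceptable_design(costs, is_acceptable):
    """costs: component costs; is_acceptable: predicate on lists of component indices."""
    order = sorted(range(len(costs)), key=lambda i: costs[i])
    c = [costs[i] for i in order]
    heap = [(c[0], (0,))]                       # (total cost, positions into sorted costs)
    while heap:
        total, subset = heapq.heappop(heap)
        components = [order[i] for i in subset]
        if is_acceptable(components):
            return total, components
        last = subset[-1]
        if last + 1 < len(c):
            # (a) extend the subset with the next cheapest component
            heapq.heappush(heap, (total + c[last + 1], subset + (last + 1,)))
            # (b) replace the last component with the next cheapest one
            heapq.heappush(heap, (total - c[last] + c[last + 1], subset[:-1] + (last + 1,)))
    return None

# Toy usage: cheapest design that uses at least 3 components.
print(cheapest_acceptable_design([4.0, 1.0, 3.0, 2.0], lambda comp: len(comp) >= 3))
```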

  3. Vector performance analysis of three supercomputers - Cray-2, Cray Y-MP, and ETA10-Q

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod A.

    1989-01-01

    Results are presented of a series of experiments to study the single-processor performance of three supercomputers: Cray-2, Cray Y-MP, and ETA10-Q. The main objective of this study is to determine the impact of certain architectural features on the performance of modern supercomputers. Features such as clock period, memory links, memory organization, multiple functional units, and chaining are considered. A simple performance model is used to examine the impact of these features on the performance of a set of basic operations. The results of implementing this set on these machines for three vector lengths and three memory strides are presented and compared. For unit-stride operations, the Cray Y-MP outperformed the Cray-2 by as much as three times and the ETA10-Q by as much as four times. Moreover, unlike the Cray-2 and ETA10-Q, even-numbered strides do not cause a major performance degradation on the Cray Y-MP. Two numerical algorithms are also used for comparison. For three problem sizes of both algorithms, the Cray Y-MP outperformed the Cray-2 by 43 percent to 68 percent and the ETA10-Q by four to eight times.

  4. Network based high performance concurrent computing

    SciTech Connect

    Sunderam, V.S.

    1991-01-01

    The overall objectives of this project are to investigate research issues pertaining to programming tools and efficiency issues in network based concurrent computing systems. The basis for these efforts is the PVM project that evolved during my visits to Oak Ridge Laboratories under the DOE Faculty Research Participation program; I continue to collaborate with researchers at Oak Ridge on some portions of the project.

  5. Comparison of Two Machine Learning Regression Approaches (Multivariate Relevance Vector Machine and Artificial Neural Network) Coupled with Wavelet Decomposition to Forecast Monthly Streamflow in Peru

    NASA Astrophysics Data System (ADS)

    Ticlavilca, A. M.; Maslova, I.; McKee, M.

    2011-12-01

    This research presents a modeling approach that incorporates wavelet-based analysis techniques used in statistical signal processing and multivariate machine learning regression to forecast monthly streamflow in Peru. Two machine learning regression approaches, the Multivariate Relevance Vector Machine and the Artificial Neural Network, are compared in terms of performance and robustness. The inputs of the model utilize information on streamflow and Pacific sea surface temperature (SST). The monthly Pacific SST data (from 1950 to 2010) are obtained from the NOAA Climate Prediction Center website. The inputs are decomposed into meaningful components formulated in terms of wavelet multiresolution analysis (MRA). The outputs are the forecasts of streamflow two, four and six months ahead simultaneously. The proposed hybrid modeling approach of wavelet decomposition and machine learning regression can capture sufficient information at meaningful temporal scales to improve the performance of the streamflow forecasts in Peru. A bootstrap analysis is used to explore the robustness of the hybrid modeling approaches.
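
    A hedged sketch of such a hybrid scheme is given below using a synthetic monthly series: the inputs are decomposed into wavelet multiresolution components and a regressor is trained to forecast two months ahead. Since scikit-learn has no relevance vector machine, a small neural network regressor stands in for both learners; the wavelet, lag and horizon choices are illustrative assumptions.

```python
# Illustrative sketch (synthetic monthly series, not the Peru streamflow/SST data):
# wavelet multiresolution components of the inputs feed a two-months-ahead forecast.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(600)
flow = np.sin(2 * np.pi * t / 12) + 0.3 * rng.normal(size=t.size)   # pseudo "monthly streamflow"

# Multiresolution analysis: reconstruct each wavelet band separately.
coeffs = pywt.wavedec(flow, "db4", level=3)
bands = []
for k in range(len(coeffs)):
    only_k = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
    bands.append(pywt.waverec(only_k, "db4")[: len(flow)])
components = np.column_stack(bands)                                 # shape (600, 4)

lag, horizon = 6, 2                         # 6 past months of components, forecast 2 months ahead
n_samples = len(flow) - lag - horizon + 1
X = np.hstack([components[i : i + n_samples] for i in range(lag)])
y = flow[lag + horizon - 1 :]

# MLPRegressor stands in for the RVM/ANN learners of the paper (assumption).
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:-60], y[:-60])
print("held-out R^2:", model.score(X[-60:], y[-60:]))
```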

  6. End-to-end network/application performance troubleshooting methodology

    SciTech Connect

    Wu, Wenji; Bobyshev, Andrey; Bowden, Mark; Crawford, Matt; Demar, Phil; Grigaliunas, Vyto; Grigoriev, Maxim; Petravick, Don; /Fermilab

    2007-09-01

    The computing models for HEP experiments are globally distributed and grid-based. Obstacles to good network performance arise from many causes and can be a major impediment to the success of the computing models for HEP experiments. Factors that affect overall network/application performance exist on the hosts themselves (application software, operating system, hardware), in the local area networks that support the end systems, and within the wide area networks. Since the computer and network systems are globally distributed, it can be very difficult to locate and identify the factors that are hurting application performance. In this paper, we present an end-to-end network/application performance troubleshooting methodology developed and in use at Fermilab. The core of our approach is to narrow down the problem scope with a divide and conquer strategy. The overall complex problem is split into two distinct sub-problems: host diagnosis and tuning, and network path analysis. After satisfactorily evaluating, and if necessary resolving, each sub-problem, we conduct end-to-end performance analysis and diagnosis. The paper will discuss tools we use as part of the methodology. The long term objective of the effort is to enable site administrators and end users to conduct much of the troubleshooting themselves, before (or instead of) calling upon network and operating system 'wizards,' who are always in short supply.

  7. A parallel-vector algorithm for rapid structural analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.

    1990-01-01

    A fast, accurate Choleski method for the solution of symmetric systems of linear equations is presented. This direct method is based on a variable-band storage scheme and takes advantage of column heights to reduce the number of operations in the Choleski factorization. The method employs parallel computation in the outermost DO-loop and vector computation via the loop unrolling technique in the innermost DO-loop. The method avoids computations with zeros outside the column heights and, as an option, zeros inside the band. The close relationship between the Choleski and Gauss elimination methods is examined. The minor changes required to convert the Choleski code to a Gauss code to solve non-positive-definite symmetric systems of equations are identified. The results for two large-scale structural analyses performed on supercomputers demonstrate the accuracy and speed of the method.
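
    The benefit of exploiting band (or column-height) structure in a Choleski solve can be illustrated with a banded factorization, where entries outside the band are never touched. The sketch below uses SciPy's banded Cholesky routines on an illustrative pentadiagonal SPD matrix; it is not the parallel-vector code described above.

```python
# Simple illustration (not the paper's variable-band solver): a banded Cholesky
# factorization skips everything outside the band, the same idea as skipping
# zeros above the column heights in a skyline/variable-band scheme.
import numpy as np
import scipy.sparse as sp
from scipy.linalg import cholesky_banded, cho_solve_banded

n = 1000
main, d1, d2 = 4.0, -1.0, -0.5                       # illustrative SPD pentadiagonal matrix
A = sp.diags([d2, d1, main, d1, d2], [-2, -1, 0, 1, 2], shape=(n, n))

# LAPACK upper-banded storage: the last row holds the main diagonal.
ab = np.zeros((3, n))
ab[2, :] = main
ab[1, 1:] = d1
ab[0, 2:] = d2

b = np.ones(n)
cb = cholesky_banded(ab)                             # cost ~ n * bandwidth^2, not n^3
x = cho_solve_banded((cb, False), b)
print("residual norm:", np.linalg.norm(A @ x - b))
```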

  8. A performance data network for solar process heat systems

    SciTech Connect

    Barker, G.; Hale, M.J.

    1996-03-01

    A solar process heat (SPH) data network has been developed to access remote-site performance data from operational solar heat systems. Each SPH system in the data network is outfitted with monitoring equipment and a datalogger. The datalogger is accessed via modem from the data network computer at the National Renewable Energy Laboratory (NREL). The dataloggers collect both ten-minute and hourly data and download them to the data network every 24 hours for archiving, processing, and plotting. The system data collected include energy delivered (fluid temperatures and flow rates) and site meteorological conditions, such as solar insolation and ambient temperature. The SPH performance data network was created for collecting performance data from SPH systems that are serving in industrial applications or from systems using technologies that show promise for industrial applications. The network will be used to identify areas of SPH technology needing further development, to correlate computer models with actual performance, and to improve the credibility of SPH technology. The SPH data network also provides a centralized bank of user-friendly performance data that will give prospective SPH users an indication of how actual systems perform. There are currently three systems being monitored and archived under the SPH data network: two are parabolic trough systems and the third is a flat-plate system. The two trough systems both heat water for prisons; the hot water is used for personal hygiene, kitchen operations, and laundry. The flat plate system heats water for meat processing at a slaughter house. We plan to connect another parabolic trough system to the network during the first months of 1996. We continue to look for good examples of systems using other types of collector technologies and systems serving new applications (such as absorption chilling) to include in the SPH performance data network.

  9. Performance of a Regional Aeronautical Telecommunications Network

    NASA Technical Reports Server (NTRS)

    Bretmersky, Steven C.; Ripamonti, Claudio; Konangi, Vijay K.; Kerczewski, Robert J.

    2001-01-01

    This paper reports the findings of a simulation of the ATN (Aeronautical Telecommunications Network) for three typical average-sized U.S. airports and their associated air traffic patterns. The models of the protocols were designed to achieve the same functionality and meet the ATN specifications. The focus of this project is on the subnetwork and routing aspects of the simulation. To maintain continuous communication between the aircraft and the ground facilities, a model based on mobile IP is used. The results indicate that continuous communication is indeed possible. The network can support two applications of significance in the immediate future: FTP and HTTP traffic. Results from this simulation prove the feasibility of development of the ATN concept for AC/ATM (Advanced Communications for Air Traffic Management).

  10. Optimal Beamforming and Performance Analysis of Wireless Relay Networks with Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    Ouyang, Jian; Lin, Min

    2015-03-01

    In this paper, we investigate a wireless communication system employing a multi-antenna unmanned aerial vehicle (UAV) as the relay to improve the connectivity between the base station (BS) and the receive node (RN), where the BS-UAV link undergoes the correlated Rician fading while the UAV-RN link follows the correlated Rayleigh fading with large scale path loss. By assuming that the amplify-and-forward (AF) protocol is adopted at UAV, we first propose an optimal beamforming (BF) scheme to maximize the mutual information of the UAV-assisted dual-hop relay network, by calculating the BF weight vectors and the power allocation coefficient. Then, we derive the analytical expressions for the outage probability (OP) and the ergodic capacity (EC) of the relay network to evaluate the system performance conveniently. Finally, computer simulation results are provided to demonstrate the validity and efficiency of the proposed scheme as well as the performance analysis.

  11. Applying neural networks to optimize instrumentation performance

    SciTech Connect

    Start, S.E.; Peters, G.G.

    1995-06-01

    Well calibrated instrumentation is essential in providing meaningful information about the status of a plant. Signals from plant instrumentation frequently have inherent non-linearities, may be affected by environmental conditions and can therefore cause calibration difficulties for the people who maintain them. Two neural network approaches are described in this paper for improving the accuracy of a non-linear, temperature-sensitive level probe used in Experimental Breeder Reactor II (EBR-II) that was difficult to calibrate.
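
    A hedged sketch of the general idea, training a small neural network to map the raw probe signal and temperature to the true level, is shown below; the sensor model, network size and data are synthetic assumptions, not EBR-II data or the authors' networks.

```python
# Sketch (synthetic, hypothetical sensor model): learn the inverse mapping from
# (raw probe signal, temperature) to true level for a nonlinear,
# temperature-sensitive probe.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
level = rng.uniform(0.0, 1.0, 5000)                 # true level (normalized)
temp = rng.uniform(300.0, 500.0, 5000)              # coolant temperature, K (illustrative)
# Hypothetical nonlinear, temperature-dependent probe response plus noise.
signal = np.tanh(2.5 * level) * (1.0 + 0.002 * (temp - 400.0)) + rng.normal(0, 0.01, 5000)

X = np.column_stack([signal, temp])
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
net.fit(X[:4000], level[:4000])
err = np.abs(net.predict(X[4000:]) - level[4000:])
print("mean absolute calibration error:", err.mean())
```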

  12. Evaluation of delay performance in valiant load-balancing network

    NASA Astrophysics Data System (ADS)

    Yu, Yingdi; Jin, Yaohui; Cheng, Hong; Gao, Yu; Sun, Weiqiang; Guo, Wei; Hu, Weisheng

    2007-11-01

    Network traffic grows in an unpredictable way, which forces network operators to over-provision their backbone networks in order to meet increasing demands. Allowing for new users, applications and unexpected failures, utilization is typically below 30% [1]. There are two methods aimed at solving this problem. The first is to adjust link capacity with the variation of traffic; however, in an optical network this requires a rapid signaling scheme and large buffers. The second method is to use the statistical multiplexing function of IP routers connected point-to-point by optical links to counteract the effect of traffic variation [2], but the routing mechanism becomes much more complex and introduces more overhead into the backbone network. To exploit the potential of the network and reduce its overhead, the use of Valiant Load-Balancing for backbone networks has been proposed in order to enhance network utilization and to simplify the routing process. Raising network utilization and improving throughput inevitably influence the end-to-end delay; however, studies of delay in load-balanced networks are lacking. In the work presented in this paper, we study the delay performance in a Valiant Load-Balancing network, and isolate the queuing delay for modeling and detailed analysis. We design the architecture of a switch with load-balancing capability for our simulation and experiment, and analyze the relationship between switch architecture and delay performance.

  13. Flood damage assessment performed based on Support Vector Machines combined with Landsat TM imagery and GIS

    NASA Astrophysics Data System (ADS)

    Alouene, Y.; Petropoulos, G. P.; Kalogrias, A.; Papanikolaou, F.

    2012-04-01

    Floods are a water-related natural disaster affecting and often threatening different aspects of human life, causing property damage, economic degradation, and in some instances even loss of precious human lives. Being able to provide accurate and cost-effective assessment of flood damage is essential to both scientists and policy makers in many respects, ranging from mitigation to assessing damage extent as well as rehabilitation of affected areas. Remote sensing, often combined with Geographical Information Systems (GIS), has generally shown very promising potential in performing rapid and cost-effective flood damage assessment, particularly in remote, otherwise inaccessible locations. The progress in remote sensing during the last twenty years or so has resulted in the development of a large number of image processing techniques suitable for use with a range of remote sensing data in performing flood damage assessment. Supervised image classification is regarded as one of the most widely used approaches employed for this purpose. Yet, the use of recently developed image classification algorithms, such as the machine-learning-based Support Vector Machines (SVMs) classifier, has not been adequately investigated for this purpose. The objective of our work has been to quantitatively evaluate the ability of SVMs combined with Landsat TM multispectral imagery to perform a damage assessment of a flood that occurred in a Mediterranean region. A further objective has been to examine whether the inclusion of additional spectral information, apart from the original TM bands, can improve SVM flooded-area extraction accuracy. The case study is the 2010 flooding of the river Evros in northern Greece, for which TM imagery before and shortly after the flooding was available. Assessment of the flooded area is performed in a GIS environment on the basis of classification accuracy assessment metrics as well as comparisons versus a vector

  14. Static performance of nonaxisymmetric nozzles with yaw thrust-vectoring vanes

    NASA Technical Reports Server (NTRS)

    Mason, Mary L.; Berrier, Bobby L.

    1988-01-01

    A static test was conducted in the static test facility of the Langley 16 ft Transonic Tunnel to evaluate the effects of post exit vane vectoring on nonaxisymmetric nozzles. Three baseline nozzles were tested: an unvectored two dimensional convergent nozzle, an unvectored two dimensional convergent-divergent nozzle, and a pitch vectored two dimensional convergent-divergent nozzle. Each nozzle geometry was tested with 3 exit aspect ratios (exit width divided by exit height) of 1.5, 2.5 and 4.0. Two post exit yaw vanes were externally mounted on the nozzle sidewalls at the nozzle exit to generate yaw thrust vectoring. Vane deflection angle (0, -20 and -30 deg), vane planform and vane curvature were varied during the test. Results indicate that the post exit vane concept produced resultant yaw vector angles which were always smaller than the geometric yaw vector angle. Losses in resultant thrust ratio increased with the magnitude of resultant yaw vector angle. The widest post exit vane produced the largest degree of flow turning, but vane curvature had little effect on thrust vectoring. Pitch vectoring was independent of yaw vectoring, indicating that multiaxis thrust vectoring is feasible for the nozzle concepts tested.

  15. Towards a Social Networks Model for Online Learning & Performance

    ERIC Educational Resources Information Center

    Chung, Kon Shing Kenneth; Paredes, Walter Christian

    2015-01-01

    In this study, we develop a theoretical model to investigate the association between social network properties, "content richness" (CR) in academic learning discourse, and performance. CR is the extent to which one contributes content that is meaningful, insightful and constructive to aid learning and by social network properties we…

  16. Performance analysis of electronic code division multiple access based virtual private networks over passive optical networks

    NASA Astrophysics Data System (ADS)

    Nadarajah, Nishaanthan; Nirmalathas, Ampalavanapillai

    2008-03-01

    A solution for implementing multiple secure virtual private networks over a passive optical network using electronic code division multiple access is proposed and experimentally demonstrated. The multiple virtual private networking capability is experimentally demonstrated with 40 Mb/s data multiplexed with a 640 Mb/s electronic code that is unique to each of the virtual private networks in the passive optical network, and the transmission of the electronically coded data is carried out using Fabry-Perot laser diodes. A theoretical scalability analysis for electronic code division multiple access based virtual private networks over a passive optical network is also carried out to identify the performance limits of the scheme. Several sources of noise such as optical beat interference and multiple access interference that are present in the receiver are considered with different operating system parameters such as transmitted optical power, spectral width of the broadband optical source, and processing gain to study the scalability of the network.

  17. IBM SP high-performance networking with a GRF.

    SciTech Connect

    Navarro, J.P.

    1999-05-27

    Increasing use of highly distributed applications, demand for faster data exchange, and highly parallel applications can push the limits of conventional external networking for IBM SP sites. In technical computing applications we have observed a growing use of a pipeline of hosts and networks collaborating to collect, process, and visualize large amounts of realtime data. The GRF, a high-performance IP switch from Ascend and IBM, is the first backbone network switch to offer a media card that can directly connect to an SP Switch. This enables switch attached hosts in an SP complex to communicate at near SP Switch speeds with other GRF attached hosts and networks.

  18. Improving performance in a contracted physician network.

    PubMed

    Smith, A L; Epstein, A L

    1999-01-01

    Health care organizations face significant performance challenges. Achieving desired results requires the highest level of partnership with independent physicians. Tufts Health Plan invited medical directors of its affiliated groups to participate in a leadership development process to improve clinical, service, and business performance. The design included performance review, gap analysis, priority setting, improvement work plans, and defining the optimum practice culture. Medical directors practiced core leadership capabilities, including building a shared context, getting physician buy-in, and managing outliers. The peer learning environment has been sustained in redesigned medical directors' meetings. There has been significant performance improvement in several practices and enhanced relations between the health plan and medical directors. PMID:10788102

  19. The performance analysis of linux networking - packet receiving

    SciTech Connect

    Wu, Wenji; Crawford, Matt; Bowden, Mark; /Fermilab

    2006-11-01

    The computing models for High-Energy Physics experiments are becoming ever more globally distributed and grid-based, both for technical reasons (e.g., to place computational and data resources near each other and the demand) and for strategic reasons (e.g., to leverage equipment investments). To support such computing models, the network and end systems, computing and storage, face unprecedented challenges. One of the biggest challenges is to transfer scientific data sets--now in the multi-petabyte (10{sup 15} bytes) range and expected to grow to exabytes within a decade--reliably and efficiently among facilities and computation centers scattered around the world. Both the network and end systems should be able to provide the capabilities to support high bandwidth, sustained, end-to-end data transmission. Recent trends in technology are showing that although the raw transmission speeds used in networks are increasing rapidly, the rate of advancement of microprocessor technology has slowed down. Therefore, network protocol-processing overheads have risen sharply in comparison with the time spent in packet transmission, resulting in degraded throughput for networked applications. More and more, it is the network end system, instead of the network, that is responsible for degraded performance of network applications. In this paper, the Linux system's packet receive process is studied from NIC to application. We develop a mathematical model to characterize the Linux packet receiving process. Key factors that affect Linux systems network performance are analyzed.

  20. Performance analysis of local area networks

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.; Hall, Mary Grace

    1990-01-01

    A simulation of the TCP/IP protocol running on a CSMA/CD data link layer is described. The simulation was implemented using the Simula language, an object-oriented discrete-event language. It allows the user to set the number of stations at run time, as well as some station parameters. Those parameters are the interrupt time and the DMA transfer rate for each station. In addition, the user may configure the network at run time with stations of differing characteristics. Two types are available, and the parameters of both types are read from input files at run time. The parameters include the DMA transfer rate, interrupt time, data rate, average message size, maximum frame size and the average interarrival time of messages per station. The information collected for the network is the throughput and the mean delay per packet. For each station, the number of messages attempted as well as the number of messages successfully transmitted is collected, in addition to the throughput and mean packet delay per station.

  1. Optical performance monitoring for the next generation optical communication networks

    NASA Astrophysics Data System (ADS)

    Pan, Zhongqi; Yu, Changyuan; Willner, Alan E.

    2010-01-01

    Today's optical networks function in a fairly static fashion and are built to operate within well-defined specifications. This scenario is quite challenging for next generation high-capacity systems, since network paths are not static and channel-degrading effects can change with temperature, component drift, aging, fiber plant maintenance and many other factors. Moreover, we are far from being able to simply "plug-and-play" an optical node into an existing network in such a way that the network itself can allocate resources to ensure error-free transmission. Optical performance monitoring could potentially enable higher stability, reconfigurability, and flexibility in a self-managed optical network. This paper will describe the specific fiber impairments that a future intelligent optical network might want to monitor as well as some promising techniques.

  2. Leveraging Structure to Improve Classification Performance in Sparsely Labeled Networks

    SciTech Connect

    Gallagher, B; Eliassi-Rad, T

    2007-10-22

    We address the problem of classification in a partially labeled network (a.k.a. within-network classification), with an emphasis on tasks in which we have very few labeled instances to start with. Recent work has demonstrated the utility of collective classification (i.e., simultaneous inferences over class labels of related instances) in this general problem setting. However, the performance of collective classification algorithms can be adversely affected by the sparseness of labels in real-world networks. We show that on several real-world data sets, collective classification appears to offer little advantage in general and hurts performance in the worst cases. In this paper, we explore a complementary approach to within-network classification that takes advantage of network structure. Our approach is motivated by the observation that real-world networks often provide a great deal more structural information than attribute information (e.g., class labels). Through experiments on supervised and semi-supervised classifiers of network data, we demonstrate that a small number of structural features can lead to consistent and sometimes dramatic improvements in classification performance. We also examine the relative utility of individual structural features and show that, in many cases, it is a combination of both local and global network structure that is most informative.
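
    For illustration, the sketch below derives a few structural features (degree, clustering, betweenness) for each node of a synthetic two-block graph and trains an ordinary classifier on a small labeled subset; the graph model, feature set and classifier are illustrative choices, not the paper's data or algorithm.

```python
# Sketch (synthetic graph and labels, illustrative feature set): structural
# features, rather than node attributes, feed a classifier trained on few labels.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

# Two blocks with different internal densities, so structure carries class signal.
G = nx.stochastic_block_model([100, 100], [[0.10, 0.01], [0.01, 0.03]], seed=0)
labels = np.array([G.nodes[v]["block"] for v in G])

deg = dict(G.degree())
clust = nx.clustering(G)
btw = nx.betweenness_centrality(G)
X = np.array([[deg[v], clust[v], btw[v]] for v in G])

rng = np.random.default_rng(0)
train = rng.choice(len(X), size=20, replace=False)        # only 20 labeled nodes
test = np.setdiff1d(np.arange(len(X)), train)
clf = LogisticRegression(max_iter=1000).fit(X[train], labels[train])
print("accuracy on unlabeled nodes:", clf.score(X[test], labels[test]))
```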

  3. Neural network submodel as an abstraction tool: relating network performance to combat outcome

    NASA Astrophysics Data System (ADS)

    Jablunovsky, Greg; Dorman, Clark; Yaworsky, Paul S.

    2000-06-01

    Simulation of Command and Control (C2) networks has historically emphasized individual system performance with little architectural context or credible linkage to 'bottom-line' measures of combat outcomes. Renewed interest in modeling C2 effects and relationships stems from emerging network-intensive operational concepts. This demands improved methods to span the analytical hierarchy between C2 system performance models and theater-level models. Neural network technology offers a modeling approach that can abstract the essential behavior of higher resolution C2 models within a campaign simulation. The proposed methodology uses off-line learning of the relationships between network state and campaign-impacting performance of a complex C2 architecture and then approximation of that performance as a time-varying parameter in an aggregated simulation. Ultimately, this abstraction tool offers an increased fidelity of C2 system simulation that captures dynamic network dependencies within a campaign context.

  4. Challenges for high-performance networking for exascale computing.

    SciTech Connect

    Barrett, Brian W.; Hemmert, K. Scott; Underwood, Keith Douglas; Brightwell, Ronald Brian

    2010-05-01

    Achieving the next three orders of magnitude performance increase to move from petascale to exascale computing will require significant advancements in several fundamental areas. Recent studies have outlined many of the challenges in hardware and software that will be needed. In this paper, we examine these challenges with respect to high-performance networking. We describe the repercussions of anticipated changes to computing and networking hardware and discuss the impact that alternative parallel programming models will have on the network software stack. We also present some ideas on possible approaches that address some of these challenges.

  5. Performance enhancement of OSPF protocol in the private network

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Lu, Yang; Lin, Xiaokang

    2005-11-01

    The private network serves as an information exchange platform to support the integrated services via microwave channels and accordingly selects the open shortest path first (OSPF) as the IP routing protocol. But the existing OSPF can't fit the private network very well for its special characteristics. This paper presents our modifications to the standard protocol in such aspects as the single-area scheme, link state advertisement (LSA) types and formats, OSPF packet formats, important state machines, setting of protocol parameters and link flap damping. Finally simulations are performed in various scenarios and the results indicate that our modifications can enhance the OSPF performance in the private network effectively.

  6. Optimal sampling in network performance evaluation

    SciTech Connect

    Fedorov, V.; Flanagan, D.; Batsell, S.

    1998-11-01

    Unlike many other experiments, in meteorology and seismology for instance, monitoring measurements on communication networks are cheap and fast. Even the simplest measurement tools, which are usually some interrogating programs, can provide a huge amount of data at almost no expense. The problem is not decreasing the cost of measurements, but rather reducing the amount of stored data and the measurement and analysis time. The authors address the approach that is based on the covariances between the measurements for various sites. The corresponding covariance matrix can be constructed either theoretically under some assumptions about the observed random processes, or can be estimated from some preliminary experiments. The authors compare the proposed algorithm with heuristic procedures that are used in other monitoring problems.
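
    One simple way to act on such a covariance matrix is a greedy D-optimal-style selection of monitoring sites, sketched below with a hypothetical covariance; this is a generic illustration of covariance-based sampling design, not necessarily the authors' algorithm.

```python
# Sketch (synthetic covariance, greedy log-determinant criterion as an
# illustrative choice): pick a small set of monitoring sites whose measurements
# are jointly most informative.
import numpy as np

rng = np.random.default_rng(0)
n_sites, k = 30, 5
# Hypothetical covariance: sites closer in index are more strongly correlated.
idx = np.arange(n_sites)
Sigma = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)

selected = []
for _ in range(k):
    best, best_gain = None, -np.inf
    for j in range(n_sites):
        if j in selected:
            continue
        S = selected + [j]
        _, logdet = np.linalg.slogdet(Sigma[np.ix_(S, S)])   # information of the candidate set
        if logdet > best_gain:
            best, best_gain = j, logdet
    selected.append(best)
print("selected sites:", selected)
```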

  7. Investigation of road network features and safety performance.

    PubMed

    Wang, Xuesong; Wu, Xingwei; Abdel-Aty, Mohamed; Tremont, Paul J

    2013-07-01

    The analysis of road network designs can provide useful information to transportation planners as they seek to improve the safety of road networks. The objectives of this study were to compare and define the effective road network indices and to analyze the relationship between road network structure and traffic safety at the level of the Traffic Analysis Zone (TAZ). One problem in comparing different road networks is establishing criteria that can be used to scale networks in terms of their structures. Based on data from Orange and Hillsborough Counties in Florida, road network structural properties within TAZs were scaled using 3 indices: Closeness Centrality, Betweenness Centrality, and Meshedness Coefficient. The Meshedness Coefficient performed best in capturing the structural features of the road network. Bayesian Conditional Autoregressive (CAR) models were developed to assess the safety of various network configurations as measured by total crashes, crashes on state roads, and crashes on local roads. The models' results showed that crash frequencies on local roads were closely related to factors within the TAZs (e.g., zonal network structure, TAZ population), while crash frequencies on state roads were closely related to the road and traffic features of state roads. For the safety effects of different networks, the Grid type was associated with the highest frequency of crashes, followed by the Mixed type, the Loops & Lollipops type, and the Sparse type. This study shows that it is possible to develop a quantitative scale for structural properties of a road network, and to use that scale to calculate the relationships between network structural properties and safety. PMID:23584537
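
    For illustration, the meshedness coefficient can be computed directly from node and edge counts; the sketch below uses the common planar-graph definition (0 for a tree, 1 for a maximal planar graph) on toy grid-like and tree-like street patterns, which may differ in detail from the index used in the paper.

```python
# Toy illustration of the meshedness coefficient of a zone's street network,
# using the usual planar-graph definition; the paper's index may differ in detail.
import networkx as nx

def meshedness(G):
    n, m = G.number_of_nodes(), G.number_of_edges()
    return (m - n + 1) / (2 * n - 5)

grid_like = nx.grid_2d_graph(5, 5)          # dense, grid-type street pattern
tree_like = nx.balanced_tree(2, 4)          # sparse, branching (cul-de-sac) pattern
print("grid meshedness:", round(meshedness(grid_like), 3))
print("tree meshedness:", round(meshedness(tree_like), 3))
```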

  8. Performance Analysis of a NASA Integrated Network Array

    NASA Technical Reports Server (NTRS)

    Nessel, James A.

    2012-01-01

    The Space Communications and Navigation (SCaN) Program is planning to integrate its individual networks into a unified network which will function as a single entity to provide services to user missions. This integrated network architecture is expected to provide SCaN customers with the capabilities to seamlessly use any of the available SCaN assets to support their missions to efficiently meet the collective needs of Agency missions. One potential optimal application of these assets, based on this envisioned architecture, is that of arraying across existing networks to significantly enhance data rates and/or link availabilities. As such, this document provides an analysis of the transmit and receive performance of a proposed SCaN inter-network antenna array. From the study, it is determined that a fully integrated internetwork array does not provide any significant advantage over an intra-network array, one in which the assets of an individual network are arrayed for enhanced performance. Therefore, it is the recommendation of this study that NASA proceed with an arraying concept, with a fundamental focus on a network-centric arraying.

  9. Development of task network models of human performance in microgravity

    NASA Technical Reports Server (NTRS)

    Diaz, Manuel F.; Adam, Susan

    1992-01-01

    This paper discusses the utility of task-network modeling for quantifying human performance variability in microgravity. The data are gathered for: (1) improving current methodologies for assessing human performance and workload in the operational space environment; (2) developing tools for assessing alternative system designs; and (3) developing an integrated set of methodologies for the evaluation of performance degradation during extended duration spaceflight. The evaluation entailed an analysis of the Remote Manipulator System payload-grapple task performed on many shuttle missions. Task-network modeling can be used as a tool for assessing and enhancing human performance in man-machine systems, particularly for modeling long-duration manned spaceflight. Task-network modeling can be directed toward improving system efficiency by increasing the understanding of basic capabilities of the human component in the system and the factors that influence these capabilities.

  10. High-Performance Satellite/Terrestrial-Network Gateway

    NASA Technical Reports Server (NTRS)

    Beering, David R.

    2005-01-01

    A gateway has been developed to enable digital communication between (1) the high-rate receiving equipment at NASA's White Sands complex and (2) a standard terrestrial digital communication network at data rates up to 622 Mb/s. The design of this gateway can also be adapted for use in commercial Earth/satellite and digital communication networks, and in terrestrial digital communication networks that include wireless subnetworks. Gateway as used here signifies an electronic circuit that serves as an interface between two electronic communication networks so that a computer (or other terminal) on one network can communicate with a terminal on the other network. The connection between this gateway and the high-rate receiving equipment is made via a synchronous serial data interface at the emitter-coupled-logic (ECL) level. The connection between this gateway and a standard asynchronous transfer mode (ATM) terrestrial communication network is made via a standard user network interface with a synchronous optical network (SONET) connector. The gateway contains circuitry that performs the conversion between the ECL and SONET interfaces. The data rate of the SONET interface can be either 155.52 or 622.08 Mb/s. The gateway derives its clock signal from a satellite modem in the high-rate receiving equipment and, hence, is agile in the sense that it adapts to the data rate of the serial interface.

  11. Urban traffic-network performance: flow theory and simulation experiments

    SciTech Connect

    Williams, J.C.

    1986-01-01

    Performance models for urban street networks were developed to describe the response of a traffic network to given travel-demand levels. The three basic traffic flow variables, speed, flow, and concentration, are defined at the network level, and three model systems are proposed. Each system consists of a series of interrelated, consistent functions between the three basic traffic-flow variables as well as the fraction of stopped vehicles in the network. These models are subsequently compared with the results of microscopic simulation of a small test network. The sensitivity of one of the model systems to a variety of network features was also explored. Three categories of features were considered, with the specific features tested listed in parentheses: network topology (block length and street width), traffic control (traffic signal coordination), and traffic characteristics (level of inter-vehicular interaction). Finally, a fundamental issue concerning the estimation of two network-level parameters (from a nonlinear relation in the two-fluid theory) was examined. The principal concern was that of comparability of these parameters when estimated with information from a single vehicle (or small group of vehicles), as done in conjunction with previous field studies, and when estimated with network-level information (i.e., all the vehicles), as is possible with simulation.

  12. Body Area Networks performance analysis using UWB.

    PubMed

    Fatehy, Mohammed; Kohno, Ryuji

    2013-01-01

    The successful realization of a Wireless Body Area Network (WBAN) using Ultra Wideband (UWB) technology supports different medical and consumer electronics (CE) applications, but innovative solutions are needed to meet the different requirements of these applications. Previously, we proposed using an adaptive processing gain (PG) to fulfill the different QoS requirements of these WBAN applications. In this paper, interference between two different BANs in a UWB-based system is analyzed in terms of the acceptable ratio of overlap between the BANs' PGs while providing the required QoS for each BAN. The first BAN is employed for a healthcare device (e.g. EEG, ECG) and uses a relatively long spreading sequence, while the second is customized for an entertainment application (e.g. wireless headset, wireless game pad) and is assigned a shorter spreading code. Considering bandwidth utilization and the difference in the employed spreading sequences, the acceptable ratio of overlap between these BANs should fall between 0.05 and 0.5 in order to optimize the spreading sequences used while satisfying the required QoS for these applications. PMID:24109913

  13. Performance evaluation of reactive and proactive routing protocol in IEEE 802.11 ad hoc network

    NASA Astrophysics Data System (ADS)

    Hamma, Salima; Cizeron, Eddy; Issaka, Hafiz; Guédon, Jean-Pierre

    2006-10-01

    Wireless technology based on the IEEE 802.11 standard is widely deployed. This technology is used to support multiple types of communication services (data, voice, image) with different QoS requirements. A MANET (Mobile Ad hoc NETwork) does not require a fixed infrastructure; mobile nodes communicate through multihop paths. The wireless communication medium has variable and unpredictable characteristics. Furthermore, node mobility creates a continuously changing communication topology in which paths break and new ones form dynamically. The routing table of each router in an ad hoc network must be kept up to date. MANETs use Distance Vector or Link State algorithms, which ensure that the route to every host is always known. However, this approach must take into account the specific characteristics of ad hoc networks: dynamic topologies, limited bandwidth, energy constraints, limited physical security, etc. Two main categories of routing protocols are studied in this paper: proactive protocols (e.g. Optimised Link State Routing - OLSR) and reactive protocols (e.g. Ad hoc On Demand Distance Vector - AODV, Dynamic Source Routing - DSR). The proactive protocols are based on periodic exchanges that update the routing tables to all possible destinations, even if no traffic goes through. The reactive protocols are based on on-demand route discoveries that update routing tables only for the destinations that have traffic going through. The present paper focuses on the study and performance evaluation of these categories using NS2 simulations. We have considered qualitative and quantitative criteria. The former concern distributed operation, loop-freedom, security, and sleep-period operation. The latter are used to assess the performance of the different routing protocols presented in this paper and include end-to-end data delay, jitter, packet delivery ratio, routing load, and activity distribution. A comparative study is presented for a number of networking contexts.
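
    The quantitative criteria listed above can be computed from simple send/receive logs, roughly as sketched below; the log format, packet ids, timestamps, and control-packet count are hypothetical illustrations, not the NS2 trace format used in the paper.

```python
from statistics import mean

def routing_metrics(sent, received, routing_packets):
    """Compute packet delivery ratio, mean end-to-end delay, jitter, and routing load.

    sent / received: dicts mapping packet id -> timestamp in seconds (assumed format).
    routing_packets: number of control packets (RREQ/RREP, HELLO/TC, ...).
    """
    delivered = [p for p in received if p in sent]
    delays = [received[p] - sent[p] for p in delivered]
    pdr = len(delivered) / len(sent) if sent else 0.0
    avg_delay = mean(delays) if delays else 0.0
    jitter = mean(abs(d2 - d1) for d1, d2 in zip(delays, delays[1:])) if len(delays) > 1 else 0.0
    routing_load = routing_packets / len(delivered) if delivered else float("inf")
    return {"pdr": pdr, "delay": avg_delay, "jitter": jitter, "routing_load": routing_load}

# Toy example: three packets sent, two delivered, eight control packets observed.
print(routing_metrics({1: 0.00, 2: 0.05, 3: 0.10},
                      {1: 0.12, 3: 0.31},
                      routing_packets=8))
```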

  14. Performance evaluation of transport protocols for networked haptic collaboration

    NASA Astrophysics Data System (ADS)

    Lee, Seokhee; Moon, Sungtae; Kim, JongWon

    2006-10-01

    In this paper, we present two transport-related experimental results for networked haptic CVEs (collaborative virtual environments). The first set of experiments evaluates the performance changes, in terms of QoE (quality of experience), of haptic-based CVEs under different network settings. The evaluation results are then used to define the minimum networking requirements for CVEs with a force-feedback haptic interface. The second set of experiments verifies whether existing haptics-specialized transport protocols can satisfy the networking QoE requirements of networked haptic CVEs. The results are used to suggest design guidelines for an effective transport protocol for these highly interactive (i.e., extremely low-latency, up to a 1 kHz processing cycle) haptic CVEs over the delay-crippled Internet.

  15. Diversity improves performance in excitable networks

    PubMed Central

    Copelli, Mauro; Roberts, James A.

    2016-01-01

    As few real systems comprise indistinguishable units, diversity is a hallmark of nature. Diversity among interacting units shapes properties of collective behavior such as synchronization and information transmission. However, the benefits of diversity on information processing at the edge of a phase transition, ordinarily assumed to emerge from identical elements, remain largely unexplored. Analyzing a general model of excitable systems with heterogeneous excitability, we find that diversity can greatly enhance optimal performance (by two orders of magnitude) when distinguishing incoming inputs. Heterogeneous systems possess a subset of specialized elements whose capability greatly exceeds that of the nonspecialized elements. We also find that diversity can yield multiple percolation, with performance optimized at tricriticality. Our results are robust in specific and more realistic neuronal systems comprising a combination of excitatory and inhibitory units, and indicate that diversity-induced amplification can be harnessed by neuronal systems for evaluating stimulus intensities. PMID:27168961

  16. Performance characteristics of a variable-area vane nozzle for vectoring an ASTOVL exhaust jet up to 45 deg

    NASA Technical Reports Server (NTRS)

    Mcardle, Jack G.; Esker, Barbara S.

    1993-01-01

    Many conceptual designs for advanced short-takeoff, vertical landing (ASTOVL) aircraft need exhaust nozzles that can vector the jet to provide forces and moments for controlling the aircraft's movement or attitude in flight near the ground. A type of nozzle that can both vector the jet and vary the jet flow area is called a vane nozzle. Basically, the nozzle consists of parallel, spaced-apart flow passages formed by pairs of vanes (vanesets) that can be rotated on axes perpendicular to the flow. Two important features of this type of nozzle are the abilities to vector the jet rearward up to 45 degrees and to produce less harsh pressure and velocity footprints during vertical landing than does an equivalent single jet. A one-third-scale model of a generic vane nozzle was tested with unheated air at the NASA Lewis Research Center's Powered Lift Facility. The model had three parallel flow passages. Each passage was formed by a vaneset consisting of a long and a short vane. The longer vanes controlled the jet vector angle, and the shorter vanes controlled the flow area. Nozzle performance for three nominal flow areas (basic and plus or minus 21 percent of basic area), each at nominal jet vector angles from -20 deg (forward of vertical) to +45 deg (rearward of vertical), is presented. The tests were made with the nozzle mounted on a model tailpipe with a blind flange on the end to simulate a closed cruise nozzle, at tailpipe-to-ambient pressure ratios from 1.8 to 4.0. Also included are jet wake data, single-vaneset vector performance for long/short and equal-length vane designs, and pumping capability. The pumping capability arises from the subambient pressure developed in the cavities between the vanesets, which could be used to aspirate flow from a source such as the engine compartment. Some of the performance characteristics are compared with characteristics of a single-jet nozzle previously reported.

  17. Calibration Performance and Capabilities of the New Compact Ocean Wind Vector Radiometer System

    NASA Astrophysics Data System (ADS)

    Brown, S. T.; Focardi, P.; Kitiyakara, A.; Maiwald, F.; Montes, O.; Padmanabhan, S.; Redick, R.; Russell, D.; Wincentsen, J.

    2014-12-01

    The paper describes performance and capabilities of a new satellite conically imaging microwave radiometer system, the Compact Ocean Wind Vector Radiometer (COWVR), being built by the Jet Propulsion Laboratory (JPL) for an Air Force demonstration mission. COWVR is an 18-34 GHz fully polarimetric radiometer designed to provide measurements of ocean vector winds with an accuracy that meets or exceeds that provided by WindSat, but using a simpler design which has both calibration and cost advantages. Heritage conical radiometer systems, such as WindSat, AMSR, GMI or SSMI(S), all have a similar overall architecture and have exhibited significant intra-channel and inter-sensor calibration biases, due in part to the relative independence of the radiometers between the different polarizations and frequencies in the system. The COWVR system uses a broadband compact hybrid combining architecture and Electronic Polarization Basis Rotation to minimize the number of free calibration parameters between polarization and frequencies, as well as providing a definitive calibration reference from the modulation of the mean polarized signal from the Earth. This second calibration advantage arises because the sensor modulates the incoming polarized signal at the input antenna aperture in a known way based only on the instrument geometry which forces relative calibration consistency between the polarimetric channels of the sensor and provides a gain and offset calibration independent of a model or other ancillary data source, which has typically been a weakness in the calibration and inter-calibration of heritage microwave sensors. This paper will give a description of the COWVR instrument and an overview of the technology demonstration mission. We will discuss the overall calibration approach for this system, its advantages over existing systems and how many of the calibration issues that impact existing satellite radiometers can be eliminated in future operational systems based on

  18. Performance limitations for networked control systems with plant uncertainty

    NASA Astrophysics Data System (ADS)

    Chi, Ming; Guan, Zhi-Hong; Cheng, Xin-Ming; Yuan, Fu-Shun

    2016-04-01

    There has recently been significant interest in performance studies for networked control systems with communication constraints, but existing work mainly assumes that an exact model of the plant is available. The goal of this paper is to investigate the optimal tracking performance of a networked control system in the presence of plant uncertainty. The plant under consideration is assumed to be non-minimum phase and unstable, a two-parameter controller is employed, and the integral square criterion is adopted to measure the tracking error. The uncertainty is formulated using stochastic embedding. An explicit expression for the tracking performance is obtained. The results show that the network communication noise and the model uncertainty, as well as the unstable poles and non-minimum phase zeros, can worsen the tracking performance.

  19. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  20. Multimedia application performance on a WiMAX network

    NASA Astrophysics Data System (ADS)

    Halepovic, Emir; Ghaderi, Majid; Williamson, Carey

    2009-01-01

    In this paper, we use experimental measurements to study the performance of multimedia applications over a commercial IEEE 802.16 WiMAX network. Voice-over-IP (VoIP) and video streaming applications are tested. We observe that the WiMAX-based network solidly supports VoIP. The voice quality degradation compared to high-speed Ethernet is only moderate, despite higher packet loss and network delays. Despite different characteristics of the uplink and the downlink, call quality is comparable for both directions. On-demand video streaming performs well using UDP. Smooth playback of high-quality video/audio clips at aggregate rates exceeding 700 Kbps is achieved about 63% of the time, with low-quality playback periods observed only 7% of the time. Our results show that WiMAX networks can adequately support currently popular multimedia Internet applications.

  1. Portals 4 network API definition and performance measurement

    SciTech Connect

    Brightwell, R. B.

    2012-03-01

    Portals is a low-level network programming interface for distributed memory massively parallel computing systems designed by Sandia, UNM, and Intel. Portals has been designed to provide high message rates and to provide the flexibility to support a variety of higher-level communication paradigms. This project developed and analyzed an implementation of Portals using shared memory in order to measure and understand the impact of using general-purpose compute cores to handle network protocol processing functions. The goal of this study was to evaluate an approach to high-performance networking software design and hardware support that would enable important DOE modeling and simulation applications to perform well and to provide valuable input to Intel so they can make informed decisions about future network software and hardware products that impact DOE applications.

  2. Static internal performance of single-expansion-ramp nozzles with thrust-vectoring capability up to 60 deg

    NASA Technical Reports Server (NTRS)

    Berrier, B. L.; Leavitt, L. D.

    1984-01-01

    An investigation has been conducted at static conditions (wind off) in the static-test facility of the Langley 16-Foot Transonic Tunnel. The effects of geometric thrust-vector angle, sidewall containment, ramp curvature, lower-flap lip angle, and ramp length on the internal performance of nonaxisymmetric single-expansion-ramp nozzles were investigated. Geometric thrust-vector angle was varied from -20 deg. to 60 deg., and nozzle pressure ratio was varied from 1.0 (jet off) to approximately 10.0.

  3. Semi-supervised multimodal relevance vector regression improves cognitive performance estimation from imaging and biological biomarkers.

    PubMed

    Cheng, Bo; Zhang, Daoqiang; Chen, Songcan; Kaufer, Daniel I; Shen, Dinggang

    2013-07-01

    Accurate estimation of cognitive scores for patients can help track the progress of neurological diseases. In this paper, we present a novel semi-supervised multimodal relevance vector regression (SM-RVR) method for predicting clinical scores of neurological diseases from multimodal imaging and biological biomarker, to help evaluate pathological stage and predict progression of diseases, e.g., Alzheimer's diseases (AD). Unlike most existing methods, we predict clinical scores from multimodal (imaging and biological) biomarkers, including MRI, FDG-PET, and CSF. Considering that the clinical scores of mild cognitive impairment (MCI) subjects are often less stable compared to those of AD and normal control (NC) subjects due to the heterogeneity of MCI, we use only the multimodal data of MCI subjects, but no corresponding clinical scores, to train a semi-supervised model for enhancing the estimation of clinical scores for AD and NC subjects. We also develop a new strategy for selecting the most informative MCI subjects. We evaluate the performance of our approach on 202 subjects with all three modalities of data (MRI, FDG-PET and CSF) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The experimental results show that our SM-RVR method achieves a root-mean-square error (RMSE) of 1.91 and a correlation coefficient (CORR) of 0.80 for estimating the MMSE scores, and also a RMSE of 4.45 and a CORR of 0.78 for estimating the ADAS-Cog scores, demonstrating very promising performances in AD studies. PMID:23504659
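
    The reported figures of merit are straightforward to reproduce for any predictor. The snippet below is a minimal sketch of the two evaluation metrics only (RMSE and Pearson correlation), not of the SM-RVR model itself; the score values are hypothetical.

```python
import numpy as np

def rmse_and_corr(y_true, y_pred):
    """Root-mean-square error and Pearson correlation between true and predicted scores."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    corr = np.corrcoef(y_true, y_pred)[0, 1]
    return rmse, corr

# Hypothetical MMSE scores for a handful of subjects.
print(rmse_and_corr([28, 24, 21, 26, 30], [27.1, 25.0, 22.4, 24.8, 29.2]))
```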

  4. Semi-Supervised Multimodal Relevance Vector Regression Improves Cognitive Performance Estimation from Imaging and Biological Biomarkers

    PubMed Central

    Cheng, Bo; Chen, Songcan; Kaufer, Daniel I.

    2013-01-01

    Accurate estimation of cognitive scores for patients can help track the progress of neurological diseases. In this paper, we present a novel semi-supervised multimodal relevance vector regression (SM-RVR) method for predicting clinical scores of neurological diseases from multimodal imaging and biological biomarker, to help evaluate pathological stage and predict progression of diseases, e.g., Alzheimer’s diseases (AD). Unlike most existing methods, we predict clinical scores from multimodal (imaging and biological) biomarkers, including MRI, FDG-PET, and CSF. Considering that the clinical scores of mild cognitive impairment (MCI) subjects are often less stable compared to those of AD and normal control (NC) subjects due to the heterogeneity of MCI, we use only the multimodal data of MCI subjects, but no corresponding clinical scores, to train a semi-supervised model for enhancing the estimation of clinical scores for AD and NC subjects. We also develop a new strategy for selecting the most informative MCI subjects. We evaluate the performance of our approach on 202 subjects with all three modalities of data (MRI, FDG-PET and CSF) from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. The experimental results show that our SM-RVR method achieves a root-mean-square error (RMSE) of 1.91 and a correlation coefficient (CORR) of 0.80 for estimating the MMSE scores, and also a RMSE of 4.45 and a CORR of 0.78 for estimating the ADAS-Cog scores, demonstrating very promising performances in AD studies. PMID:23504659

  5. Hospital network performance: a survey of hospital stakeholders' perspectives.

    PubMed

    Bravi, F; Gibertoni, D; Marcon, A; Sicotte, C; Minvielle, E; Rucci, P; Angelastro, A; Carradori, T; Fantini, M P

    2013-02-01

    Hospital networks are an emerging organizational form designed to face the new challenges of public health systems. Although the benefits introduced by network models in terms of rationalization of resources are known, evidence about stakeholders' perspectives on hospital network performance from the literature is scanty. Using the Competing Values Framework of organizational effectiveness and its subsequent adaptation by Minvielle et al., we conducted in 2009 a survey in five hospitals of an Italian network for oncological care to examine and compare the views on hospital network performance of internal stakeholders (physicians, nurses and the administrative staff). 329 questionnaires exploring stakeholders' perspectives were completed, with a response rate of 65.8%. Using exploratory factor analysis of the 66 items of the questionnaire, we identified 4 factors, i.e. Centrality of relationships, Quality of care, Attractiveness/Reputation and Staff empowerment and Protection of workers' rights. 42 items were retained in the analysis. Factor scores proved to be high (mean score>8 on a 10-item scale), except for Attractiveness/Reputation (mean score 6.79), indicating that stakeholders attach a higher importance to relational and health care aspects. Comparison of factor scores among stakeholders did not reveal significant differences, suggesting a broadly shared view on hospital network performance. PMID:23201189

  6. Performance analysis of a common-mode signal based low-complexity crosstalk cancelation scheme in vectored VDSL

    NASA Astrophysics Data System (ADS)

    Zafaruddin, SM; Prakriya, Shankar; Prasad, Surendra

    2012-12-01

    In this article, we propose a vectored system using both common mode (CM) and differential mode (DM) signals in upstream VDSL. We first develop a multi-input multi-output (MIMO) CM channel by using the single-pair CM and MIMO DM channels proposed recently, and study the characteristics of the resultant CM-DM channel matrix. We then propose a low-complexity receiver structure in which the CM and DM signals of each twisted pair (TP) are combined before the application of a MIMO zero-forcing (ZF) receiver. We study the capacity of the proposed system and show that vectored CM-DM processing provides higher data rates at longer loop lengths. In the absence of alien crosstalk, application of the ZF receiver to the vectored CM-DM signals yields performance close to the single-user bound (SUB). In the presence of alien crosstalk, we show that vectored CM-DM processing exploits the spatial correlation of CM and DM signals and provides higher data rates than DM processing alone. Simulation results validate the analysis and demonstrate the importance of joint CM-DM processing in vectored VDSL systems.
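
    A zero-forcing receiver of the kind mentioned above simply applies the (pseudo-)inverse of the estimated channel matrix to the received vector. The sketch below is a generic MIMO ZF illustration with a random stand-in channel; it does not model the CM-DM combining or a measured VDSL binder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-pair binder: received vector y = H x + n, where H is a stand-in
# crosstalk channel matrix and x holds the transmitted symbols.
n_pairs = 4
H = np.eye(n_pairs) + 0.05 * rng.standard_normal((n_pairs, n_pairs))
x = rng.choice([-1.0, 1.0], size=n_pairs)
y = H @ x + 0.01 * rng.standard_normal(n_pairs)

# Zero-forcing equalization: apply the pseudo-inverse of the channel estimate.
x_hat = np.linalg.pinv(H) @ y
print(np.sign(x_hat) == x)   # symbols recovered on every pair
```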

  7. Virulence Factors of Geminivirus Interact with MYC2 to Subvert Plant Resistance and Promote Vector Performance

    PubMed Central

    Li, Ran; Weldegergis, Berhane T.; Li, Jie; Jung, Choonkyun; Qu, Jing; Sun, Yanwei; Qian, Hongmei; Tee, ChuanSia; van Loon, Joop J.A.; Dicke, Marcel; Chua, Nam-Hai; Liu, Shu-Sheng

    2014-01-01

    A pathogen may cause infected plants to promote the performance of its transmitting vector, which accelerates the spread of the pathogen. This positive effect of a pathogen on its vector via their shared host plant is termed indirect mutualism. For example, terpene biosynthesis is suppressed in begomovirus-infected plants, leading to reduced plant resistance and enhanced performance of the whiteflies (Bemisia tabaci) that transmit these viruses. Although begomovirus-whitefly mutualism has been known, the underlying mechanism is still elusive. Here, we identified βC1 of Tomato yellow leaf curl China virus, a monopartite begomovirus, as the viral genetic factor that suppresses plant terpene biosynthesis. βC1 directly interacts with the basic helix-loop-helix transcription factor MYC2 to compromise the activation of MYC2-regulated terpene synthase genes, thereby reducing whitefly resistance. MYC2 associates with the bipartite begomoviral protein BV1, suggesting that MYC2 is an evolutionarily conserved target of begomoviruses for the suppression of terpene-based resistance and the promotion of vector performance. Our findings describe how this viral pathogen regulates host plant metabolism to establish mutualism with its insect vector. PMID:25490915

  8. UltraScience Net: High-Performance Network Research Test-Bed

    SciTech Connect

    Rao, Nageswara S; Wing, William R; Poole, Stephen W; Hicks, Susan Elaine; DeNap, Frank A; Carter, Steven M; Wu, Qishi

    2009-04-01

    The high-performance networking requirements for next generation large-scale applications belong to two broad classes: (a) high bandwidths, typically multiples of 10Gbps, to support bulk data transfers, and (b) stable bandwidths, typically at much lower bandwidths, to support computational steering, remote visualization, and remote control of instrumentation. Current Internet technologies, however, are severely limited in meeting these demands because such bulk bandwidths are available only in the backbone, and stable control channels are hard to realize over shared connections. The UltraScience Net (USN) facilitates the development of such technologies by providing dynamic, cross-country dedicated 10Gbps channels for large data transfers, and 150 Mbps channels for interactive and control operations. Contributions of the USN project are two-fold: (a) Infrastructure Technologies for Network Experimental Facility: USN developed and/or demonstrated a number of infrastructure technologies needed for a national-scale network experimental facility. Compared to the Internet, USN's data-plane is different in that it can be partitioned into isolated layer-1 or layer-2 connections, and its control-plane is different in the ability of users and applications to set up and tear down channels as needed. Its design required several new components including a Virtual Private Network infrastructure, a bandwidth and channel scheduler, and a dynamic signaling daemon. The control-plane employs a centralized scheduler to compute the channel allocations and a signaling daemon to generate configuration signals to switches. In a nutshell, USN demonstrated the ability to build and operate a stable national-scale switched network. (b) Structured Network Research Experiments: A number of network research experiments have been conducted on USN that cannot be easily supported over existing network facilities, including test-beds and production networks.

  9. A dityrosine network mediated by dual oxidase and peroxidase influences the persistence of Lyme disease pathogens within the vector.

    PubMed

    Yang, Xiuli; Smith, Alexis A; Williams, Mark S; Pal, Utpal

    2014-05-01

    Ixodes scapularis ticks transmit a wide array of human and animal pathogens, including Borrelia burgdorferi; however, how tick immune components influence the persistence of invading pathogens remains unknown. As originally demonstrated in Caenorhabditis elegans and later in Anopheles gambiae, we show here that an acellular gut barrier, resulting from tyrosine cross-linking of the extracellular matrix, also exists in I. scapularis ticks. This dityrosine network (DTN) is dependent upon a dual oxidase (Duox), which is a member of the NADPH oxidase family. The Ixodes genome encodes a single Duox and at least 16 potential peroxidase proteins, one of which, annotated as ISCW017368, together with Duox has been found to be indispensable for DTN formation. This barrier influences pathogen survival in the gut, as an impaired DTN in Duox-knockdown or specific peroxidase-knockdown ticks results in reduced levels of B. burgdorferi persistence within ticks. Absence of complete DTN formation in knockdown ticks leads to the activation of specific tick innate immune pathway genes that potentially resulted in the reduction of spirochete levels. Together, these results highlight the evolution of the DTN in a diverse set of arthropod vectors, including ticks, and its role in protecting invading pathogens like B. burgdorferi. Further understanding of the molecular basis of tick innate immune responses, vector-pathogen interactions, and their contributions to microbial persistence may help the development of new targets for disrupting the pathogen life cycle. PMID:24662290

  10. Sub-terahertz spectroscopy of magnetic resonance in BiFeO3 using a vector network analyzer

    NASA Astrophysics Data System (ADS)

    Caspers, Christian; Gandhi, Varun P.; Magrez, Arnaud; de Rijk, Emile; Ansermet, Jean-Philippe

    2016-06-01

    Detection of sub-THz spin cycloid resonances (SCRs) of stoichiometric BiFeO3 (BFO) was demonstrated using a vector network analyzer. Continuous wave absorption spectroscopy is possible, thanks to heterodyning and electronic sweep control using frequency extenders for frequencies from 480 to 760 GHz. High frequency resolution reveals SCR absorption peaks with a frequency precision in the ppm regime. Three distinct SCR features of BFO were observed and identified as Ψ1 and Φ2 modes, which are out-of-plane and in-plane modes of the spin cycloid, respectively. A spin reorientation transition at 200 K is evident in the frequency vs temperature study. The global minimum in linewidth for both Ψ modes at 140 K is ascribed to the critical slowing down of spin fluctuations.

  11. Input vector optimization of feed-forward neural networks for fitting ab initio potential-energy databases.

    PubMed

    Malshe, M; Raff, L M; Hagan, M; Bukkapatnam, S; Komanduri, R

    2010-05-28

    The variation in the fitting accuracy of neural networks (NNs) when used to fit databases comprising potential energies obtained from ab initio electronic structure calculations is investigated as a function of the number and nature of the elements employed in the input vector to the NN. Ab initio databases for H2O2, HONO, Si5, and H2C=CHBr were employed in the investigations. These systems were chosen so as to include four-, five-, and six-body systems containing first, second, third, and fourth row elements with a wide variety of chemical bonding and whose conformations cover a wide range of structures that occur under high-energy machining conditions and in chemical reactions involving cis-trans isomerizations, six different types of two-center bond ruptures, and two different three-center dissociation reactions. The ab initio databases for these systems were obtained using density functional theory/B3LYP, MP2, and MP4 methods with extended basis sets. A total of 31 input vectors were investigated. In each case, the elements of the input vector were chosen from interatomic distances, inverse powers of the interatomic distance, three-body angles, and dihedral angles. Both redundant and nonredundant input vectors were investigated. The results show that among all the input vectors investigated, the set employed in the Z-matrix specification of the molecular configurations in the electronic structure calculations gave the lowest NN fitting accuracy for both Si5 and vinyl bromide. The underlying reason for this result appears to be the discontinuity present in the dihedral angle for planar geometries. The use of trigonometric functions of the angles as input elements produced significantly improved fitting accuracy, as this choice eliminates the discontinuity. The most accurate fitting was obtained when the elements of the input vector were taken to have the form Rij^-n, where the Rij are the interatomic distances.

  12. Input vector optimization of feed-forward neural networks for fitting ab initio potential-energy databases

    NASA Astrophysics Data System (ADS)

    Malshe, M.; Raff, L. M.; Hagan, M.; Bukkapatnam, S.; Komanduri, R.

    2010-05-01

    The variation in the fitting accuracy of neural networks (NNs) when used to fit databases comprising potential energies obtained from ab initio electronic structure calculations is investigated as a function of the number and nature of the elements employed in the input vector to the NN. Ab initio databases for H2O2, HONO, Si5, and H2C=CHBr were employed in the investigations. These systems were chosen so as to include four-, five-, and six-body systems containing first, second, third, and fourth row elements with a wide variety of chemical bonding and whose conformations cover a wide range of structures that occur under high-energy machining conditions and in chemical reactions involving cis-trans isomerizations, six different types of two-center bond ruptures, and two different three-center dissociation reactions. The ab initio databases for these systems were obtained using density functional theory/B3LYP, MP2, and MP4 methods with extended basis sets. A total of 31 input vectors were investigated. In each case, the elements of the input vector were chosen from interatomic distances, inverse powers of the interatomic distance, three-body angles, and dihedral angles. Both redundant and nonredundant input vectors were investigated. The results show that among all the input vectors investigated, the set employed in the Z-matrix specification of the molecular configurations in the electronic structure calculations gave the lowest NN fitting accuracy for both Si5 and vinyl bromide. The underlying reason for this result appears to be the discontinuity present in the dihedral angle for planar geometries. The use of trigonometric functions of the angles as input elements produced significantly improved fitting accuracy, as this choice eliminates the discontinuity. The most accurate fitting was obtained when the elements of the input vector were taken to have the form Rij^-n, where the Rij are the interatomic distances.

  13. Network latency and operator performance in teleradiology applications.

    PubMed

    Stahl, J N; Tellis, W; Huang, H K

    2000-08-01

    Teleradiology applications often use an interactive conferencing mode with remote control mouse pointers. When a telephone is used for voice communication, latencies of the data network can create a temporal discrepancy between the position of the mouse pointer and the verbal communication. To assess the effects of this dissociation, we examined the performance of 5 test persons carrying out simple teleradiology tasks under varying simulated network conditions. When the network latency exceeded 400 milliseconds, the performance of the test persons dropped, and an increasing number of errors were made. This effect was the same for constant latencies, which can occur on the network path, and for variable delays caused by the Nagle algorithm, an internal buffering scheme used by the TCP/IP protocol. Because the Nagle algorithm used in typical TCP/IP implementations causes a latency of about 300 milliseconds even before a data packet is sent, any additional latency in the network of 100 milliseconds or more will result in a decreased operator performance in teleradiology applications. These conditions frequently occur on the public Internet or on overseas connections. For optimal performance, the authors recommend bypassing the Nagle algorithm in teleradiology applications. PMID:15359750
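
    The recommendation to bypass the Nagle algorithm translates, on most platforms, into setting the TCP_NODELAY socket option. A minimal Python sketch (the peer hostname and port are hypothetical):

```python
import socket

# Create a TCP socket and disable the Nagle algorithm so that small
# pointer-update packets are sent immediately instead of being buffered.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Hypothetical endpoint of the remote conferencing peer.
# sock.connect(("teleradiology-peer.example.org", 5000))
```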

  14. Efficient resting-state EEG network facilitates motor imagery performance

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Yao, Dezhong; Valdés-Sosa, Pedro A.; Li, Fali; Li, Peiyang; Zhang, Tao; Ma, Teng; Li, Yongjie; Xu, Peng

    2015-12-01

    Objective. Motor imagery-based brain-computer interface (MI-BCI) systems hold promise in motor function rehabilitation and assistance for motor function impaired people. But the ability to operate an MI-BCI varies across subjects, which becomes a substantial problem for practical BCI applications beyond the laboratory. Approach. Several previous studies have demonstrated that individual MI-BCI performance is related to the resting state of brain. In this study, we further investigate offline MI-BCI performance variations through the perspective of resting-state electroencephalography (EEG) network. Main results. Spatial topologies and statistical measures of the network have close relationships with MI classification accuracy. Specifically, mean functional connectivity, node degrees, edge strengths, clustering coefficient, local efficiency and global efficiency are positively correlated with MI classification accuracy, whereas the characteristic path length is negatively correlated with MI classification accuracy. The above results indicate that an efficient background EEG network may facilitate MI-BCI performance. Finally, a multiple linear regression model was adopted to predict subjects’ MI classification accuracy based on the efficiency measures of the resting-state EEG network, resulting in a reliable prediction. Significance. This study reveals the network mechanisms of the MI-BCI and may help to find new strategies for improving MI-BCI performance.
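
    Several of the network measures named above (mean connectivity, node strength, clustering, efficiency) can be computed from a functional connectivity matrix and fed to a multiple linear regression, roughly as sketched below; the connectivity matrices, accuracies, and feature set are hypothetical stand-ins, not the study's EEG data or its exact measures.

```python
import numpy as np
import networkx as nx
from sklearn.linear_model import LinearRegression

def network_features(conn):
    """Graph measures of a symmetric, weighted functional connectivity matrix."""
    G = nx.from_numpy_array(np.asarray(conn))
    return [
        float(np.mean(conn)),                                  # mean functional connectivity
        float(np.mean([d for _, d in G.degree(weight="weight")])),  # mean node strength
        nx.average_clustering(G, weight="weight"),             # clustering coefficient
        nx.global_efficiency(G),                               # global efficiency (unweighted)
    ]

# Hypothetical data: one connectivity matrix and one MI accuracy per subject.
rng = np.random.default_rng(1)
X, y = [], []
for _ in range(10):
    A = rng.uniform(0, 1, (8, 8))
    conn = (A + A.T) / 2
    np.fill_diagonal(conn, 0)
    X.append(network_features(conn))
    y.append(rng.uniform(0.6, 0.95))

model = LinearRegression().fit(X, y)   # predict accuracy from resting-state features
print(model.coef_)
```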

  15. Energy efficient mechanisms for high-performance Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Alsaify, Baha'adnan

    2009-12-01

    Due to recent advances in microelectronics, the development of low-cost, small, and energy-efficient devices became possible. Those advances led to the birth of Wireless Sensor Networks (WSNs). WSNs consist of a large set of sensor nodes equipped with communication capabilities, scattered in the area to be monitored. Researchers focus on several aspects of WSNs, including the quality of service the WSN provides (data delivery delay, accuracy of data, etc.), the scalability of the network to thousands of sensor nodes (the terms node and sensor node are used interchangeably), the robustness of the network (allowing the network to work even if a certain percentage of nodes fails), and making the energy consumption in the network as low as possible to prolong the network's lifetime. In this thesis, we present an approach that can be applied to the sensing devices scattered in an area for sensor networks. This work uses the well-known approach of wake-up scheduling to extend the network's lifespan. We designed a scheduling algorithm that reduces the upper bound on the delay experienced by reported data while keeping the advantages offered by wake-up scheduling -- the reduction in energy consumption that leads to an increase in the network's lifetime. The wake-up schedule is based on the location of a node relative to its neighbors and its distance from the Base Station (the terms Base Station and sink are used interchangeably). We apply the proposed method to a set of simulated nodes using the "ONE Simulator". We test the performance of this approach against three other approaches -- a Direct Routing technique, the well-known LEACH algorithm, and a multi-parent scheduling algorithm. We demonstrate a good improvement in the network's quality of service and a reduction in the consumed energy.

  16. Adaptive Optimization of Aircraft Engine Performance Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Long, Theresa W.

    1995-01-01

    Preliminary results are presented on the development of an adaptive neural network based control algorithm to enhance aircraft engine performance. This work builds upon a previous National Aeronautics and Space Administration (NASA) effort known as Performance Seeking Control (PSC). PSC is an adaptive control algorithm which contains a model of the aircraft's propulsion system which is updated on-line to match the operation of the aircraft's actual propulsion system. Information from the on-line model is used to adapt the control system during flight to allow optimal operation of the aircraft's propulsion system (inlet, engine, and nozzle) to improve aircraft engine performance without compromising reliability or operability. Performance Seeking Control has been shown to yield reductions in fuel flow, increases in thrust, and reductions in engine fan turbine inlet temperature. The neural network based adaptive control, like PSC, will contain a model of the propulsion system which will be used to calculate optimal control commands on-line. Hopes are that it will be able to provide some additional benefits above and beyond those of PSC. The PSC algorithm is computationally intensive, it is valid only at near steady-state flight conditions, and it has no way to adapt or learn on-line. These issues are being addressed in the development of the optimal neural controller. Specialized neural network processing hardware is being developed to run the software, the algorithm will be valid at steady-state and transient conditions, and will take advantage of the on-line learning capability of neural networks. Future plans include testing the neural network software and hardware prototype against an aircraft engine simulation. In this paper, the proposed neural network software and hardware is described and preliminary neural network training results are presented.

  17. On a vector space representation in genetic algorithms for sensor scheduling in wireless sensor networks.

    PubMed

    Martins, F V C; Carrano, E G; Wanner, E F; Takahashi, R H C; Mateus, G R; Nakamura, F G

    2014-01-01

    Recent work raised the hypothesis that assigning a geometry to the decision-variable space of a combinatorial problem could be useful both for providing meaningful descriptions of the fitness landscape and for supporting the systematic construction of evolutionary operators (geometric operators) that make consistent use of the space's geometric properties in the search for problem optima. This paper introduces new geometric operators that realize searches along the combinatorial-space versions of the geometric entities of descent directions and subspaces. The new geometric operators are stated in the specific context of the wireless sensor network dynamic coverage and connectivity problem (WSN-DCCP). A genetic algorithm (GA) is developed for the WSN-DCCP using the proposed operators and is compared with a formulation based on integer linear programming (ILP) which is solved with exact methods. That ILP formulation adopts a proxy objective function based on the minimization of energy consumption in the network, in order to approximate the objective of network lifetime maximization, and a greedy approach for dealing with the system's dynamics. To the authors' knowledge, the proposed GA is the first algorithm to exceed the network lifetimes achieved by the ILP formulation, while also running in much smaller computational times for large instances. PMID:24102647

  18. Equivalent Vectors

    ERIC Educational Resources Information Center

    Levine, Robert

    2004-01-01

    The cross-product is a mathematical operation performed between two 3-dimensional vectors. The result is a vector that is orthogonal, or perpendicular, to both of them. When learning about this for the first time in Calculus III, the class was taught that if AxB = AxC, it does not necessarily follow that B = C. This seemed baffling. The…
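
    A quick numerical check makes the point concrete: two different vectors B and C can have identical cross-products with A whenever they differ by a multiple of A.

```python
import numpy as np

A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
C = B + 2.0 * A          # C differs from B by a multiple of A

# The cross products agree even though B != C, because A x (C - B) = A x (2A) = 0.
print(np.cross(A, B))    # [0. 0. 1.]
print(np.cross(A, C))    # [0. 0. 1.]
print(np.allclose(np.cross(A, B), np.cross(A, C)), np.allclose(B, C))   # True False
```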

  19. Performance of social network sensors during Hurricane Sandy.

    PubMed

    Kryvasheyeu, Yury; Chen, Haohui; Moro, Esteban; Van Hentenryck, Pascal; Cebrian, Manuel

    2015-01-01

    Information flow during catastrophic events is a critical aspect of disaster management. Modern communication platforms, in particular online social networks, provide an opportunity to study such flow and derive early-warning sensors, thus improving emergency preparedness and response. Performance of the social networks sensor method, based on topological and behavioral properties derived from the "friendship paradox", is studied here for over 50 million Twitter messages posted before, during, and after Hurricane Sandy. We find that differences in users' network centrality effectively translate into moderate awareness advantage (up to 26 hours); and that geo-location of users within or outside of the hurricane-affected area plays a significant role in determining the scale of such an advantage. Emotional response appears to be universal regardless of the position in the network topology, and displays characteristic, easily detectable patterns, opening a possibility to implement a simple "sentiment sensing" technique that can detect and locate disasters. PMID:25692690
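
    The "friendship paradox" selection underlying the sensor method can be illustrated on a synthetic graph: a random friend of a random user tends to be better connected than the user. The sketch below uses a Barabási-Albert graph as a stand-in for the real follower network; the sizes and seeds are arbitrary.

```python
import random
import networkx as nx

# Synthetic stand-in for a social graph.
G = nx.barabasi_albert_graph(10000, 3, seed=42)

rng = random.Random(0)
random_users = rng.sample(list(G.nodes), 500)
friends = [rng.choice(list(G.neighbors(u))) for u in random_users]

def mean_degree(nodes):
    return sum(G.degree(n) for n in nodes) / len(nodes)

# The "friend" group has the larger mean degree, i.e. higher network centrality.
print(mean_degree(random_users), mean_degree(friends))
```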

  20. Performance of Social Network Sensors during Hurricane Sandy

    PubMed Central

    Kryvasheyeu, Yury; Chen, Haohui; Moro, Esteban; Van Hentenryck, Pascal; Cebrian, Manuel

    2015-01-01

    Information flow during catastrophic events is a critical aspect of disaster management. Modern communication platforms, in particular online social networks, provide an opportunity to study such flow and derive early-warning sensors, thus improving emergency preparedness and response. Performance of the social networks sensor method, based on topological and behavioral properties derived from the “friendship paradox”, is studied here for over 50 million Twitter messages posted before, during, and after Hurricane Sandy. We find that differences in users’ network centrality effectively translate into moderate awareness advantage (up to 26 hours); and that geo-location of users within or outside of the hurricane-affected area plays a significant role in determining the scale of such an advantage. Emotional response appears to be universal regardless of the position in the network topology, and displays characteristic, easily detectable patterns, opening a possibility to implement a simple “sentiment sensing” technique that can detect and locate disasters. PMID:25692690

  1. Asynchronous transfer mode link performance over ground networks

    NASA Technical Reports Server (NTRS)

    Chow, E. T.; Markley, R. W.

    1993-01-01

    The results of an experiment to determine the feasibility of using asynchronous transfer mode (ATM) technology to support advanced spacecraft missions that require high-rate ground communications, and in particular full-motion video, are reported. Potential nodes in such a ground network include Deep Space Network (DSN) antenna stations, the Jet Propulsion Laboratory, and a set of national and international end users. The experiment simulated a lunar microrover, a lunar lander, the DSN ground communications system, and distributed science users. The users were equipped with video-capable workstations. A key feature was an optical fiber link between two high-performance workstations equipped with ATM interfaces. Video was also transmitted through JPL's institutional network to a user 8 km from the experiment. Variations in video quality depending on the networks and computers were observed, and the results are reported.

  2. Investigation of Natural Draft Cooling Tower Performance Using Neural Network

    NASA Astrophysics Data System (ADS)

    Mahdi, Qasim S.; Saleh, Saad M.; Khalaf, Basima S.

    In the present work, an Artificial Neural Network (ANN) technique is used to investigate the performance of a Natural Draft Wet Cooling Tower (NDWCT). Many factors affect the range, approach, pressure drop, and effectiveness of the cooling tower: fill type, water flow rate, air flow rate, inlet water temperature, wet-bulb temperature of air, and nozzle hole diameter. Experimental data covering the effects of these factors are used to train the network using the Back Propagation (BP) algorithm. The network has seven input variables (Twi, hfill, mw, Taiwb, Taidb, vlow, vup) and five output variables (ma, Taowb, Two, Δp, ɛ), while the hidden layer differs for each case. Network results were compared with experimental results, and good agreement was observed between the experimental and theoretical results.
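
    A back-propagation-trained feed-forward network with seven inputs and five outputs, as described above, can be sketched with an off-the-shelf regressor; the training data and layer size below are placeholders, not the experimental records or the authors' architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data: 7 inputs (Twi, hfill, mw, Taiwb, Taidb, vlow, vup)
# and 5 outputs (ma, Taowb, Two, dP, effectiveness).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 7))
Y = rng.uniform(0, 1, (200, 5))

# Small back-propagation-trained feed-forward network; the hidden-layer size is an assumption.
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
ann.fit(X, Y)
print(ann.predict(X[:3]))
```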

  3. Hardware Efficient and High-Performance Networks for Parallel Computers.

    NASA Astrophysics Data System (ADS)

    Chien, Minze Vincent

    High performance interconnection networks are the key to high utilization and throughput in large-scale parallel processing systems. Since many interconnection problems in parallel processing such as concentration, permutation and broadcast problems can be cast as sorting problems, this dissertation considers the problem of sorting on a new model, called an adaptive sorting network. It presents four adaptive binary sorters, the first two of which are ordinary combinational circuits while the last two exploit time-multiplexing and pipelining techniques. These sorter constructions demonstrate that any sequence of n bits can be sorted in O(log^2n) bit-level delay, using O(n) constant fanin gates. This improves the cost complexity of Batcher's binary sorters by a factor of O(log^2n) while matching their sorting time. It is further shown that any sequence of n numbers can be sorted on the same model in O(log^2n) comparator-level delay using O(nlog nloglog n) comparators. The adaptive binary sorter constructions lead to new O(n) bit-level cost concentrators and superconcentrators with O(log^2n) bit-level delay. Their employment in recently constructed permutation and generalized connectors leads to permutation and generalized connection networks with O(nlog n) bit-level cost and O(log^3n) bit-level delay. These results provide the least bit-level cost for such networks with competitive delays. Finally, the dissertation considers a key issue in the implementation of interconnection networks, namely, the pin constraint. Current VLSI technologies can house a large number of switches in a single chip, but the mere fact that one chip cannot have too many pins precludes the possibility of implementing a large connection network on a single chip. The dissertation presents techniques for partitioning connection networks into identical modules of switches in such a way that each module is contained in a single chip with an arbitrarily specified number of pins.

  4. SIMD machine using cube connected cycles network architecture for vector processing

    SciTech Connect

    Wagner, R.A.; Poirier, C.J.

    1986-11-04

    This patent describes a single instruction multiple data processor comprising: processing elements, interconnected in a Cube Connected Cycles Network design and using interprocessor communication links which carry one bit at a time in both directions simultaneously; controller means for controlling processor elements, which feeds each of the processor elements identical local memory addresses, identical switching control bits, identical Boolean function selection codes, and distinct activation control bits, depending on each processor's position in the Cube Connected Cycles Network in a prescribed fashion; and input/output devices connected to the network by switching devices, wherein each of the processing elements comprises: two single-bit accumulator registers (A, B); two Boolean function generator units, each of which computes any one of the 2^8 (256) possible Boolean functions of three Boolean variables as specified by Boolean function codes sent two at a time by the controller to each of the processing elements; and switching circuit means controlled by the controller which select the three inputs to the logic function generators.

  5. Impact of sensor installation techniques on seismic network performance

    NASA Astrophysics Data System (ADS)

    Bainbridge, Geoffrey; Laporte, Michael; Baturan, Dario; Greig, Wesley

    2015-04-01

    The magnitude of completeness (Mc) of a seismic network is determined by a number of factors, including station density, the self-noise and passband of the sensor used, the ambient noise environment, and the sensor installation method and depth. Installation techniques related to depth are of particular importance due to their impact on overall monitoring network deployment costs. We present a case study that evaluates the performance of Trillium Compact Posthole seismometers installed using different methods and depths, and evaluates the impact on seismic network operation in terms of the average magnitude of completeness over the target area of interest in various monitoring applications. We evaluate three sensor installation methods: direct burial in soil at 0.5 m depth, a 5 m screwpile, and a 15 m cemented-casing borehole, at sites chosen to represent high, medium, and low ambient noise environments. In all cases, noise performance improves with depth, with noise suppression generally more prominent at higher frequencies but with significant variations from site to site. When extended to overall network performance, the observed noise suppression results in an improved (decreased) target-area average Mc. However, the extent of the improvement with depth varies significantly and can be negligible. The increased cost associated with installation at depth uses funds that could otherwise be applied to the deployment of additional stations. Using network modelling tools, we compare the improvement in magnitude of completeness and location accuracy associated with increasing installation depth to that associated with an increased number of stations. The appropriate strategy is chosen on a case-by-case basis, driven by network-specific performance requirements, deployment constraints, and site noise conditions.

  6. Distribution and larval habitat characterization of Anopheles moucheti, Anopheles nili, and other malaria vectors in river networks of southern Cameroon.

    PubMed

    Antonio-Nkondjio, Christophe; Ndo, Cyrille; Costantini, Carlo; Awono-Ambene, Parfait; Fontenille, Didier; Simard, Frédéric

    2009-12-01

    Despite their importance as malaria vectors, little is known of the bionomics of Anopheles nili and Anopheles moucheti. Larval collections from 24 sites situated along the dense hydrographic network of south Cameroon were examined to assess key ecological factors associated with these mosquitoes' distribution in river networks. Morphological identification of the third- and fourth-instar larvae by microscopy revealed that 47.6% of the larvae belonged to An. nili and 22.6% to An. moucheti. Five variables were significantly associated with species distribution: the pace of flow of the river (lotic or lentic), light exposure (sunny or shady), vegetation (presence or absence), temperature, and the presence or absence of debris. Using canonical correspondence analysis, it appeared that lotic rivers, exposed to light, with vegetation or debris were the best predictors of An. nili larval abundance, whereas An. moucheti and An. ovengensis were highly associated with lentic rivers, low temperature, and the presence of Pistia. The distribution of An. nili and An. moucheti along river systems across south Cameroon was highly correlated with environmental variables. The distribution of An. nili conforms to that of a generalist species adapted to exploiting a variety of environmental conditions, whereas An. moucheti, Anopheles ovengensis, and Anopheles carnevalei appeared to be specialist forest mosquitoes. PMID:19682965

  7. High Performance Computing and Networking for Science--Background Paper.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    The Office of Technology Assessment is conducting an assessment of the effects of new information technologies--including high performance computing, data networking, and mass data archiving--on research and development. This paper offers a view of the issues and their implications for current discussions about Federal supercomputer initiatives…

  8. Competitive Learning Neural Network Ensemble Weighted by Predicted Performance

    ERIC Educational Resources Information Center

    Ye, Qiang

    2010-01-01

    Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic for Neural Network classifiers. Introducing a secondary output unit that receives different…

  9. Public Management and Educational Performance: The Impact of Managerial Networking.

    ERIC Educational Resources Information Center

    Meier, Kenneth J.; O'Toole, Laurence J., Jr.

    2003-01-01

    A 5-year performance analysis of managers in more than 500 school districts used a nonlinear, interactive, contingent model of management. Empirical support was found for key elements of the network-management portion of the model. Results showed that public management matters in policy implementation, but its impact is often nonlinear.

  10. USING MULTIRAIL NETWORKS IN HIGH-PERFORMANCE CLUSTERS

    SciTech Connect

    Coll, S.; Fratchtenberg, E.; Petrini, F.; Hoisie, A.; Gurvits, L.

    2001-01-01

    Using multiple independent networks (also known as rails) is an emerging technique to overcome bandwidth limitations and enhance fault tolerance of current high-performance clusters. We present an extensive experimental comparison of the behavior of various allocation schemes in terms of bandwidth and latency. We show that striping messages over multiple rails can substantially reduce network latency, depending on average message size, network load, and allocation scheme. The compared methods include a basic round-robin rail allocation, a local-dynamic allocation based on local knowledge, and a dynamic rail allocation that reserves both communication endpoints of a message before sending it. The last method is shown to perform better than the others at higher loads: up to 49% better than local-knowledge allocation and 37% better than the round-robin allocation. This allocation scheme also shows lower latency, and it saturates at higher loads (for sufficiently large messages). Most importantly, this proposed allocation scheme scales well with the number of rails and with message size. In addition, we propose a hybrid algorithm that combines the benefits of the local-dynamic algorithm for short messages with those of the dynamic algorithm for large messages. Keywords: Communication Protocols, High-Performance Interconnection Networks, Performance Evaluation, Routing, Communication Libraries, Parallel Architectures.
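
    Of the compared schemes, the basic round-robin rail allocation is easy to state in a few lines; the sketch below is a schematic illustration only, not the cluster communication library's implementation.

```python
from itertools import cycle

def round_robin_allocator(n_rails):
    """Basic round-robin rail allocation: each outgoing message takes the next rail in turn."""
    rails = cycle(range(n_rails))
    def allocate(message):
        return next(rails)
    return allocate

allocate = round_robin_allocator(4)
print([allocate(f"msg{i}") for i in range(10)])   # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
```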

  11. A new integrated approach for characterizing the soil electromagnetic properties and detecting landmines using a hand-held vector network analyzer

    NASA Astrophysics Data System (ADS)

    Lopera, Olga; Lambot, Sebastien; Slob, Evert; Vanclooster, Marnik; Macq, Benoit; Milisavljevic, Nada

    2006-05-01

    The application of ground-penetrating radar (GPR) to humanitarian demining presents two major challenges: (1) the development of affordable and practical systems to detect metallic and non-metallic antipersonnel (AP) landmines under different conditions, and (2) the development of accurate soil characterization techniques to evaluate the effects of soil properties and determine the performance of these GPR-based systems. In this paper, we present a new integrated approach for characterizing the electromagnetic (EM) properties of mine-affected soils and detecting landmines using a low-cost hand-held vector network analyzer (VNA) connected to a highly directive antenna. Soil characterization is carried out using the radar-antenna-subsurface model of Lambot et al. [1] and full-wave inversion of the radar signal, focused in the time domain on the surface reflection. This methodology is combined with background subtraction (BS) and migration to enhance landmine detection. Numerical and laboratory experiments are performed to show the effect of the soil EM properties on the detectability of the landmines and how the proposed approach can improve GPR performance.
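
    Of the processing steps mentioned, background subtraction is the simplest: the laterally invariant part of the B-scan (antenna ringing and the surface reflection) is estimated as the mean trace and removed. A minimal sketch on synthetic data:

```python
import numpy as np

def background_subtraction(bscan):
    """Subtract the mean trace from every trace of a B-scan.

    bscan: 2-D array of shape (n_traces, n_time_samples).
    """
    bscan = np.asarray(bscan, dtype=float)
    return bscan - bscan.mean(axis=0, keepdims=True)

# Hypothetical B-scan: a common background plus one localized anomaly.
background = np.sin(np.linspace(0, 6, 256))
data = np.tile(background, (40, 1))
data[18:22, 100:110] += 0.5                       # buried-target signature
print(np.abs(background_subtraction(data)).max())  # the anomaly survives, the background is gone
```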

  12. Comparison of Support Vector Machine, Neural Network, and CART Algorithms for the Land-Cover Classification Using Limited Training Data Points

    EPA Science Inventory

    Support vector machine (SVM) was applied for land-cover characterization using MODIS time-series data. Classification performance was examined with respect to training sample size, sample variability, and landscape homogeneity (purity). The results were compared to two convention...

  13. Optical performance monitoring (OPM) in next-generation optical networks

    NASA Astrophysics Data System (ADS)

    Neuhauser, Richard E.

    2002-09-01

    DWDM transmission is the enabling technology currently pushing the transmission bandwidths in core networks towards the multi-Tb/s regime with unregenerated transmission distances of several thousand km. Such systems represent the basic platform for transparent DWDM networks enabling both the transport of client signals with different data formats and bit rates (e.g. SDH/SONET, IP over WDM, Gigabit Ethernet, etc.) and dynamic provisioning of optical wavelength channels. Optical Performance Monitoring (OPM) will be one of the key elements providing the capabilities of link set-up/control, fault localization, protection/restoration and path supervision needed for stable network operation, and will become a major differentiator in next-generation networks. Currently, signal quality is usually characterized by DWDM power levels, spectrum-interpolated Optical Signal-to-Noise Ratio (OSNR), and channel wavelengths. On the other hand, there is an urgent need for new OPM technologies and strategies providing solutions for in-channel OSNR, signal quality measurement, fault localization and fault identification. Innovative research and product activities include polarization nulling, electrical and optical amplitude sampling, BER estimation, electrical spectrum analysis, and pilot tone technologies. This presentation focuses on reviewing the requirements and solution concepts in current and next-generation networks with respect to Optical Performance Monitoring.

  14. Performance analysis of wireless sensor networks in geophysical sensing applications

    NASA Astrophysics Data System (ADS)

    Uligere Narasimhamurthy, Adithya

    Performance is an important criterion to consider before switching from a wired network to a wireless sensing network. Performance is especially important in geophysical sensing, where the quality of the sensing system is measured by the precision of the acquired signal. Can a wireless sensing network maintain the same reliability and quality metrics that a wired system provides? Our work focuses on evaluating the wireless GeoMote sensor motes that were developed by previous computer science graduate students at Mines. Specifically, we conducted a set of experiments, namely WalkAway and Linear Array experiments, to characterize the performance of the wireless motes. The motes were also equipped with the Sticking Heartbeat Aperture Resynchronization Protocol (SHARP), a time synchronization protocol developed by a previous computer science graduate student at Mines. This protocol should automatically synchronize the motes' internal clocks and reduce time synchronization errors. We also collected passive data to evaluate the response of GeoMotes to various frequency components associated with seismic waves. With the data collected from these experiments, we evaluated the performance of the SHARP protocol and compared the performance of our GeoMote wireless system against the industry-standard wired seismograph system (Geometric-Geode). Using arrival time analysis and seismic velocity calculations, we set out to answer the following question: can our wireless sensing system (GeoMotes) perform similarly to a traditional wired system in a realistic scenario?

  15. Performance Evaluation in Network-Based Parallel Computing

    NASA Technical Reports Server (NTRS)

    Dezhgosha, Kamyar

    1996-01-01

    Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require the use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing SUN SPARCs' network with PVM (Parallel Virtual Machine), which is a software system for linking clusters of machines. Second, a set of three basic applications was selected. The applications consist of a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the restricting factor to performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps) which will allow us to extend our study to newer applications, performance metrics, and configurations.

  16. Performance of Neural Networks Methods In Intrusion Detection

    SciTech Connect

    Dao, V N; Vemuri, R

    2001-07-09

    By accurately profiling users via their unique attributes, it is possible to view the intrusion detection problem as a classification of authorized users and intruders. This paper demonstrates that artificial neural network (ANN) techniques can be used to solve this classification problem. Furthermore, the paper compares the performance of three neural network methods in classifying authorized users and intruders using synthetically generated data. The three methods are gradient descent back propagation (BP) with momentum, conjugate gradient BP, and quasi-Newton BP.
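
    The paper's networks and data are not available here, but the three optimizer families it names can be contrasted on a toy "authorized user vs. intruder" classifier. The sketch below uses a single-layer logistic model on synthetic features purely for illustration, with momentum gradient descent written out explicitly and the conjugate-gradient and quasi-Newton variants delegated to SciPy; all sizes and rates are assumptions.

        # Hedged illustration only: toy logistic classifier trained three ways.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(2, 1, (100, 5))])
        y = np.hstack([np.zeros(100), np.ones(100)])   # 0 = authorized, 1 = intruder

        def loss_and_grad(w):
            p = 1.0 / (1.0 + np.exp(-(X @ w)))
            loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
            grad = X.T @ (p - y) / len(y)
            return loss, grad

        # (a) gradient descent with momentum, written out explicitly
        w, v = np.zeros(5), np.zeros(5)
        for _ in range(500):
            _, g = loss_and_grad(w)
            v = 0.9 * v - 0.1 * g              # momentum update
            w = w + v
        print("momentum GD loss:", loss_and_grad(w)[0])

        # (b) conjugate gradient and (c) quasi-Newton (BFGS) via SciPy
        for method in ("CG", "BFGS"):
            res = minimize(loss_and_grad, np.zeros(5), jac=True, method=method)
            print(method, "loss:", res.fun)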

  17. Network Performance Testing for the BaBar Event Builder

    SciTech Connect

    Pavel, Tomas J

    1998-11-17

    We present an overview of the design of event building in the BABAR Online, based upon TCP/IP and commodity networking technology. BABAR is a high-rate experiment to study CP violation in asymmetric e+e- collisions. In order to validate the event-builder design, an extensive program was undertaken to test the TCP performance delivered by various machine types with both ATM OC-3 and Fast Ethernet networks. The buffering characteristics of several candidate switches were examined and found to be generally adequate for our purposes. We highlight the results of this testing and present some of the more significant findings.

  18. Performance evaluation of a routing algorithm based on Hopfield Neural Network for network-on-chip

    NASA Astrophysics Data System (ADS)

    Esmaelpoor, Jamal; Ghafouri, Abdollah

    2015-12-01

    Network on chip (NoC) has emerged as a solution to overcome the growing complexity and design challenges of systems on chip. A proper routing algorithm is a key issue of an NoC design. An appropriate routing method balances load across the network channels and keeps path length as short as possible. This article investigates the performance of a routing algorithm based on a Hopfield Neural Network. It uses dynamic programming to provide optimal paths and network monitoring in real time. The aim of this article is to analyse the possibility of using a neural network as a router. The algorithm takes into account the path with the lowest delay (cost) from source to destination. In other words, the path a message takes from source to destination depends on the network traffic situation at the time and is the fastest one available. The simulation results show that the proposed approach improves average delay, throughput and network congestion efficiently. At the same time, the increase in power consumption is almost negligible.

  19. Statistical performance evaluation of ECG transmission using wireless networks.

    PubMed

    Shakhatreh, Walid; Gharaibeh, Khaled; Al-Zaben, Awad

    2013-07-01

    This paper presents simulation of the transmission of biomedical signals (using the ECG signal as an example) over wireless networks. The effects of channel impairments, including SNR, path-loss exponent and path delay, and network impairments, such as packet loss probability, on the diagnosability of the received ECG signal are investigated. The ECG signal is transmitted through a wireless network system composed of two communication protocols: an 802.15.4 ZigBee protocol and an 802.11b protocol. The performance of the transmission is evaluated using higher-order statistics parameters such as kurtosis and negative entropy, in addition to common techniques such as the PRD, RMS and cross-correlation. PMID:23777301
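
    For readers unfamiliar with the quality measures listed above, the following sketch computes PRD, RMS error, cross-correlation, and kurtosis for an original versus a received trace; the synthetic "ECG" (a noisy sine) and all names are illustrative assumptions, not material from the paper.

        # Illustrative signal-quality metrics for a transmitted vs. received trace.
        import numpy as np
        from scipy.stats import kurtosis

        def prd(original, received):
            """Percentage root-mean-square difference."""
            return 100.0 * np.sqrt(np.sum((original - received) ** 2) /
                                   np.sum(original ** 2))

        def rms_error(original, received):
            return np.sqrt(np.mean((original - received) ** 2))

        def peak_cross_correlation(original, received):
            a = (original - original.mean()) / original.std()
            b = (received - received.mean()) / received.std()
            return np.max(np.correlate(a, b, mode="full")) / len(a)

        t = np.linspace(0, 10, 2000)
        ecg = np.sin(2 * np.pi * 1.2 * t)                 # stand-in for an ECG trace
        received = ecg + 0.05 * np.random.randn(len(t))   # channel/packet-loss noise

        print("PRD (%)  :", prd(ecg, received))
        print("RMS error:", rms_error(ecg, received))
        print("xcorr    :", peak_cross_correlation(ecg, received))
        print("kurtosis :", kurtosis(received))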

  20. Enhanced memory performance thanks to neural network assortativity

    SciTech Connect

    Franciscis, S. de; Johnson, S.; Torres, J. J.

    2011-03-24

    The behaviour of many complex dynamical systems has been found to depend crucially on the structure of the underlying networks of interactions. An intriguing feature of empirical networks is their assortativity--i.e., the extent to which the degrees of neighbouring nodes are correlated. However, until very recently it was difficult to take this property into account analytically, most work being exclusively numerical. We get round this problem by considering ensembles of equally correlated graphs and apply this novel technique to the case of attractor neural networks. Assortativity turns out to be a key feature for memory performance in these systems - so much so that for sufficiently correlated topologies the critical temperature diverges. We predict that artificial and biological neural systems could significantly enhance their robustness to noise by developing positive correlations.

  1. Communications performance of an undersea acoustic large-area network

    NASA Astrophysics Data System (ADS)

    Kriewaldt, Hannah A.; Rice, Joseph A.

    2005-04-01

    The U.S. Navy is developing Seaweb acoustic networking capability for integrating undersea systems. Seaweb architectures generally involve a wide-area network of fixed nodes consistent with future distributed autonomous sensors on the seafloor. Mobile nodes including autonomous undersea vehicles (AUVs) and submarines operate in the context of the grid by using the fixed nodes as both navigation reference points and communication access points. In October and November 2004, the Theater Anti-Submarine Warfare Exercise (TASWEX04) showcased Seaweb in its first fleet appearance. This paper evaluates the TASWEX04 Seaweb performance in support of networked communications between a submarine and a surface ship. Considerations include physical-layer dependencies on the 9-14 kHz acoustic channel, such as refraction, wind-induced ambient noise, and submarine aspect angle. [Work supported by SSC San Diego.]

  2. Team Assembly Mechanisms Determine Collaboration Network Structure and Team Performance

    PubMed Central

    Guimerà, Roger; Uzzi, Brian; Spiro, Jarrett; Nunes Amaral, Luís A.

    2007-01-01

    Agents in creative enterprises are embedded in networks that inspire, support, and evaluate their work. Here, we investigate how the mechanisms by which creative teams self-assemble determine the structure of these collaboration networks. We propose a model for the self-assembly of creative teams that has its basis in three parameters: team size, the fraction of newcomers in new productions, and the tendency of incumbents to repeat previous collaborations. The model suggests that the emergence of a large connected community of practitioners can be described as a phase transition. We find that team assembly mechanisms determine both the structure of the collaboration network and team performance for teams derived from both artistic and scientific fields. PMID:15860629

  3. Performance evaluation of a FPGA implementation of a digital rotation support vector machine

    NASA Astrophysics Data System (ADS)

    Lamela, Horacio; Gimeno, Jesús; Jiménez, Matías; Ruiz, Marta

    2008-04-01

    In this paper we provide a simple and fast hardware implementation of a Support Vector Machine (SVM). By using the CORDIC algorithm and implementing a base-2 exponential kernel that allows us to simplify operations, we overcome the problems caused by the large number of internal multiplications in the classification process, both when applying the kernel formula and when subsequently multiplying by the weights. We show a simple example of classification with the algorithm and analyze the classification speed and accuracy.
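
    A rough sense of why a base-2 exponential kernel helps in hardware: the decision function reduces to sums of terms scaled by 2 raised to a negative power, which a fixed-point datapath can realize largely with shifts. The sketch below is an assumption-laden Python illustration of such a decision function, not the authors' FPGA design; support vectors, weights, and gamma are invented.

        # Illustration of an SVM decision function with a base-2 exponential kernel,
        # K(x, s) = 2**(-gamma * ||x - s||**2); in fixed-point hardware the factor
        # 2**(-k) becomes a right shift by k. All parameter values are invented.
        import numpy as np

        def base2_kernel(x, s, gamma=1.0):
            d2 = np.sum((x - s) ** 2)
            return 2.0 ** (-gamma * d2)

        def svm_decision(x, support_vectors, alphas, labels, bias, gamma=1.0):
            acc = bias
            for s, a, y in zip(support_vectors, alphas, labels):
                acc += a * y * base2_kernel(x, s, gamma)
            return 1 if acc >= 0 else -1

        support_vectors = np.array([[0.0, 0.0], [2.0, 2.0]])
        alphas = np.array([1.0, 1.0])
        labels = np.array([-1, +1])

        print(svm_decision(np.array([0.2, 0.1]), support_vectors, alphas, labels, 0.0))  # -> -1
        print(svm_decision(np.array([1.9, 2.2]), support_vectors, alphas, labels, 0.0))  # -> +1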

  4. On the Performance of TCP Spoofing in Satellite Networks

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph; Allman, Mark

    2001-01-01

    In this paper, we analyze the performance of the Transmission Control Protocol (TCP) in a network that consists of both satellite and terrestrial components. One method proposed in prior research to improve the performance of data transfers over satellites is to use a performance-enhancing proxy, often dubbed 'spoofing.' Spoofing involves the transparent splitting of a TCP connection between the source and destination by some entity within the network path. In order to analyze the impact of spoofing, we constructed a simulation suite based around the network simulator ns-2. The simulation reflects a host with a satellite connection to the Internet and allows the option to spoof connections just prior to the satellite. The methodology used in our simulation allows us to analyze spoofing over a large range of file sizes and under various congested conditions, while prior work on this topic has primarily focused on bulk transfers with no congestion. As a result of these simulations, we find that the performance of spoofing is dependent upon a number of conditions.

  5. Integrated System for Performance Monitoring of the ATLAS TDAQ Network

    NASA Astrophysics Data System (ADS)

    Octavian Savu, Dan; Al-Shabibi, Ali; Martin, Brian; Sjoen, Rune; Batraneanu, Silvia Maria; Stancu, Stefan

    2011-12-01

    The ATLAS TDAQ Network consists of three separate networks spanning four levels of the experimental building. Over 200 edge switches and 5 multi-blade chassis routers are used to interconnect 2000 processors, adding up to more than 7000 high speed interfaces. In order to substantially speed up ad-hoc and post-mortem analysis, a scalable, yet flexible, integrated system for monitoring both network statistics and environmental conditions, processor parameters and data taking characteristics was required. For successful up-to-the-minute monitoring, information from many SNMP compliant devices, independent databases and custom APIs was gathered, stored and displayed in an optimal way. Easy navigation and compact aggregation of multiple data sources were the main requirements; characteristics not found in any of the tested products, either open-source or commercial. This paper describes how performance, scalability and display issues were addressed and what challenges the project faced during development and deployment. A full set of modules, including a fast polling SNMP engine, user interfaces using the latest web technologies and caching mechanisms, has been designed and developed from scratch. Over the last year the system proved to be stable and reliable, replacing the previous performance monitoring system and extending its capabilities. Currently it is operated using a precision interval of 25 seconds (the industry standard is 300 seconds). Although it was developed to address the needs for integrated performance monitoring of the ATLAS TDAQ network, the package can be used for monitoring any network with rigid demands on precision and scalability, exceeding normal industry standards.

  6. Design and implementation of a high performance network security processor

    NASA Astrophysics Data System (ADS)

    Wang, Haixin; Bai, Guoqiang; Chen, Hongyi

    2010-03-01

    The last few years have seen many significant advances in the field of application-specific processors. One example is network security processors (NSPs) that perform various cryptographic operations specified by network security protocols and help to offload the computation-intensive burdens from network processors (NPs). This article presents a high performance NSP system architecture implementation intended for both internet protocol security (IPSec) and secure socket layer (SSL) protocol acceleration, which are widely employed in virtual private network (VPN) and e-commerce applications. The efficient dual one-way pipelined data transfer skeleton and optimised integration scheme of the heterogeneous parallel crypto engine arrays lead to a Gbps-rate NSP, which is programmable with domain-specific descriptor-based instructions. The descriptor-based control flow fragments large data packets and distributes them to the crypto engine arrays, which fully utilises the parallel computation resources and improves the overall system data throughput. A prototyping platform for this NSP design is implemented with a Xilinx XC3S5000 based FPGA chip set. Results show that the design gives a peak throughput for the IPSec ESP tunnel mode of 2.85 Gbps with over 2100 full SSL handshakes per second at a clock rate of 95 MHz.

  7. Parallel access alignment network with barrel switch implementation for d-ordered vector elements

    NASA Technical Reports Server (NTRS)

    Barnes, George H. (Inventor)

    1980-01-01

    An alignment network between N parallel data input ports and N parallel data outputs includes a first and a second barrel switch. The first barrel switch, fed by the N parallel input ports, shifts its N outputs and in turn feeds the N-1 input data paths of the second barrel switch according to the relationship x = k^y mod N, where x represents the output data path ordering of the first barrel switch, y represents the input data path ordering of the second barrel switch, and k is a primitive root of the number N. The zero (0) ordered output data path of the first barrel switch is fed directly to the zero ordered output port. The N-1 output data paths of the second barrel switch are connected to the N output ports in the reverse ordering of the connections between the output data paths of the first barrel switch and the input data paths of the second barrel switch. The second switch is controlled by a value m, which in the preferred embodiment is produced at the output of a ROM addressed by the value d, where d represents the incremental spacing or distance between data elements to be accessed from the N input ports, and m is generated therefrom according to the relationship d = k^m mod N.
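
    A small worked example of the two relationships (in the cleaned-up notation x = k^y mod N and d = k^m mod N): for prime N and k a primitive root of N, the first mapping is a bijection on paths 1..N-1, and the "ROM" is simply a discrete-logarithm table giving m for each stride d. The particular N and k below are chosen only for illustration.

        # Worked example of x = k**y mod N and d = k**m mod N for a small prime N.
        N = 17        # number of ports (prime), illustrative only
        k = 3         # 3 is a primitive root modulo 17

        # First barrel switch: output data path x fed by input data path y
        mapping = {y: pow(k, y, N) for y in range(1, N)}
        assert sorted(mapping.values()) == list(range(1, N))   # bijection on 1..N-1

        # "ROM" addressed by the stride d, returning the control value m
        rom = {pow(k, m, N): m for m in range(1, N)}

        d = 5                      # spacing between accessed vector elements
        m = rom[d]
        assert pow(k, m, N) == d
        print("stride", d, "-> barrel-switch control m =", m)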

  8. Comparison of Bayesian network and support vector machine models for two-year survival prediction in lung cancer patients treated with radiotherapy

    SciTech Connect

    Jayasurya, K.; Fung, G.; Yu, S.; Dehing-Oberije, C.; De Ruysscher, D.; Hope, A.; De Neve, W.; Lievens, Y.; Lambin, P.; Dekker, A. L. A. J.

    2010-04-15

    Purpose: Classic statistical and machine learning models such as support vector machines (SVMs) can be used to predict cancer outcome, but often only perform well if all the input variables are known, which is unlikely in the medical domain. Bayesian network (BN) models have a natural ability to reason under uncertainty and might handle missing data better. In this study, the authors hypothesize that a BN model can predict two-year survival in non-small cell lung cancer (NSCLC) patients as accurately as an SVM, but will predict survival more accurately when data are missing. Methods: A BN and an SVM model were trained on 322 inoperable NSCLC patients treated with radiotherapy from Maastricht and validated in three independent data sets of 35, 47, and 33 patients from Ghent, Leuven, and Toronto. Missing variables occurred in the data sets, with only 37, 28, and 24 patients having a complete data set. Results: The BN model structure and parameter learning identified gross tumor volume size, performance status, and number of positive lymph nodes on a PET as prognostic factors for two-year survival. When validated in the full validation sets of Ghent, Leuven, and Toronto, the BN model had an AUC of 0.77, 0.72, and 0.70, respectively. An SVM model based on the same variables had an overall worse performance (AUC 0.71, 0.68, and 0.69), especially in the Ghent set, which had the highest percentage of missing values for the important GTV size variable. When only patients with complete data sets were considered, the BN and SVM models performed more alike. Conclusions: Within the limitations of this study, the hypothesis is supported that BN models are better at handling missing data than SVM models and are therefore more suitable for the medical domain. Future work should focus on improving BN performance by including more patients, more variables, and more diversity.

  9. Copercolating Networks: An Approach for Realizing High-Performance Transparent Conductors using Multicomponent Nanostructured Networks

    NASA Astrophysics Data System (ADS)

    Das, Suprem R.; Sadeque, Sajia; Jeong, Changwook; Chen, Ruiyi; Alam, Muhammad A.; Janes, David B.

    2016-06-01

    Although transparent conductive oxides such as indium tin oxide (ITO) are widely employed as transparent conducting electrodes (TCEs) for applications such as touch screens and displays, new nanostructured TCEs are of interest for future applications, including emerging transparent and flexible electronics. A number of two-dimensional networks of nanostructured elements have been reported, including metallic nanowire networks consisting of silver nanowires, metallic carbon nanotubes (m-CNTs), copper nanowires or gold nanowires, and metallic mesh structures. In these single-component systems, it has generally been difficult to achieve sheet resistances that are comparable to ITO at a given broadband optical transparency. A relatively new third category of TCEs consisting of networks of 1D-1D and 1D-2D nanocomposites (such as silver nanowires and CNTs, silver nanowires and polycrystalline graphene, silver nanowires and reduced graphene oxide) has demonstrated TCE performance comparable to, or better than, ITO. In such hybrid networks, copercolation between the two components can lead to relatively low sheet resistances at nanowire densities corresponding to high optical transmittance. This review provides an overview of reported hybrid networks, including a comparison of the performance regimes achievable with those of ITO and single-component nanostructured networks. The performance is compared to that expected from bulk thin films and analyzed in terms of the copercolation model. In addition, performance characteristics relevant for flexible and transparent applications are discussed. The new TCEs are promising, but significant work must be done to ensure earth abundance, stability, and reliability so that they can eventually replace traditional ITO-based transparent conductors.

  10. Sensor Networking Testbed with IEEE 1451 Compatibility and Network Performance Monitoring

    NASA Technical Reports Server (NTRS)

    Gurkan, Deniz; Yuan, X.; Benhaddou, D.; Figueroa, F.; Morris, Jonathan

    2007-01-01

    The design and implementation of a testbed for testing and verifying IEEE 1451-compatible sensor systems with network performance monitoring is of significant importance. Measuring performance parameters and implementing decision support systems will enhance the understanding of sensor systems with plug-and-play capabilities. The paper presents the design aspects of such a testbed environment under development at the University of Houston in collaboration with the NASA Stennis Space Center SSST (Smart Sensor System Testbed).

  11. On using multiple routing metrics with destination sequenced distance vector protocol for MultiHop wireless ad hoc networks

    NASA Astrophysics Data System (ADS)

    Mehic, M.; Fazio, P.; Voznak, M.; Partila, P.; Komosny, D.; Tovarek, J.; Chmelikova, Z.

    2016-05-01

    A mobile ad hoc network is a collection of mobile nodes which communicate without a fixed backbone or centralized infrastructure. Due to the frequent mobility of nodes, routes connecting two distant nodes may change. Therefore, it is not possible to establish a priori fixed paths for message delivery through the network. Because of its importance, routing is the most studied problem in mobile ad hoc networks. In addition, if Quality of Service (QoS) is demanded, one must guarantee the QoS not only over a single hop but over an entire wireless multi-hop path, which may not be a trivial task. In turn, this requires the propagation of QoS information within the network. The key to the support of QoS reporting is QoS routing, which provides path QoS information at each source. To support QoS for real-time traffic one needs to know not only the minimum delay on the path to the destination but also the bandwidth available on it. Throughput, end-to-end delay, and routing overhead are therefore the traditional performance metrics used to evaluate the performance of a routing protocol. To obtain additional information about a link, most link-quality metrics are based on calculating link loss probabilities by broadcasting probe packets. In this paper, we address the problem of including multiple routing metrics in the existing routing packets that are broadcast through the network. We evaluate the efficiency of this approach with a modified version of the DSDV routing protocol in the ns-3 simulator.
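
    As a concrete illustration of carrying several metrics in one routing advertisement, the sketch below extends a DSDV-style table entry with delay and loss-probability fields and uses a weighted score for tie-breaking; the field names, weights, and scoring rule are assumptions made for illustration, not the modification evaluated in the paper.

        # Hypothetical DSDV-style route entry carrying multiple metrics.
        from dataclasses import dataclass

        @dataclass
        class RouteAdvert:
            destination: str
            seq_no: int          # DSDV destination sequence number (freshness)
            hops: int
            delay_ms: float      # cumulative path delay estimate
            loss_prob: float     # path loss probability estimated from probe packets

        def better(a: RouteAdvert, b: RouteAdvert) -> RouteAdvert:
            """Prefer fresher sequence numbers (standard DSDV rule); break ties
            with a weighted multi-metric score instead of hop count alone."""
            if a.seq_no != b.seq_no:
                return a if a.seq_no > b.seq_no else b
            score = lambda r: r.hops + 0.01 * r.delay_ms + 5.0 * r.loss_prob
            return a if score(a) <= score(b) else b

        r1 = RouteAdvert("nodeZ", seq_no=10, hops=2, delay_ms=80.0, loss_prob=0.20)
        r2 = RouteAdvert("nodeZ", seq_no=10, hops=3, delay_ms=30.0, loss_prob=0.02)
        print(better(r1, r2))    # the longer but lower-delay, lower-loss path wins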

  12. The Algerian Seismic Network: Performance from data quality analysis

    NASA Astrophysics Data System (ADS)

    Yelles, Abdelkarim; Allili, Toufik; Alili, Azouaou

    2013-04-01

    densify the network and to enhance performance of the Algerian Digital Seismic Network.

  13. Design and Performance Analysis of Incremental Networked Predictive Control Systems.

    PubMed

    Pang, Zhong-Hua; Liu, Guo-Ping; Zhou, Donghua

    2016-06-01

    This paper is concerned with the design and performance analysis of networked control systems with network-induced delay, packet disorder, and packet dropout. Based on the incremental form of the plant input-output model and an incremental error feedback control strategy, an incremental networked predictive control (INPC) scheme is proposed to actively compensate for the round-trip time delay resulting from the above communication constraints. The output tracking performance and closed-loop stability of the resulting INPC system are considered for two cases: 1) plant-model match case and 2) plant-model mismatch case. For the former case, the INPC system can achieve the same output tracking performance and closed-loop stability as those of the corresponding local control system. For the latter case, a sufficient condition for the stability of the closed-loop INPC system is derived using the switched system theory. Furthermore, for both cases, the INPC system can achieve a zero steady-state output tracking error for step commands. Finally, both numerical simulations and practical experiments on an Internet-based servo motor system illustrate the effectiveness of the proposed method. PMID:26186798

  14. Network Performance Measurements for NASA's Earth Observation System

    NASA Technical Reports Server (NTRS)

    Loiacono, Joe; Gormain, Andy; Smith, Jeff

    2004-01-01

    NASA's Earth Observation System (EOS) Project studies all aspects of planet Earth from space, including climate change, and ocean, ice, land, and vegetation characteristics. It consists of about 20 satellite missions over a period of about a decade. Extensive collaboration is used, both with other U.S. agencies (e.g., the National Oceanic and Atmospheric Administration (NOAA), the United States Geological Survey (USGS), the Department of Defense (DoD)) and with international agencies (e.g., the European Space Agency (ESA), the Japan Aerospace Exploration Agency (JAXA)), to improve cost effectiveness and obtain otherwise unavailable data. Scientific researchers are located at research institutions worldwide, primarily government research facilities and research universities. The EOS project makes extensive use of networks to support data acquisition, data production, and data distribution. Many of these functions impose requirements on the networks, including throughput and availability. In order to verify that these requirements are being met, and to be proactive in recognizing problems, NASA conducts ongoing performance measurements. The purpose of this paper is to examine techniques used by NASA to measure the performance of the networks used by EOSDIS (EOS Data and Information System) and to indicate how this performance information is used.

  15. Enhancement of Network Performance through Integration of Borehole Stations

    NASA Astrophysics Data System (ADS)

    Korger, Edith; Plenkers, Katrin; Clinton, John; Kraft, Toni; Diehl, Tobias; Husen, Stephan; Schnellmann, Michael

    2014-05-01

    In order to improve the detection and characterisation of weak seismic events across northern Switzerland/southern Germany, the Swiss Digital Seismic Network has installed 10 new seismic stations during 2012 and 2013. The newly densified network was funded within a 10-year project by NAGRA and is expected to monitor seismicity with a magnitude of completeness Mc (ML) below 1.3 and provide high quality locations for all these events. The goal of this project is the monitoring of areas surrounding potential nuclear waste repositories, in order to gain a thorough understanding of the seismotectonic processes and a consequent evaluation of the seismic hazard in the region. Northern Switzerland lies in a molasse basin and is densely populated. Therefore it is a major challenge in this region to find stations with noise characteristics low enough to meet the monitoring requirements. The new stations include three borehole sites equipped with 1 Hz Lennartz LE3D-BH velocity sensors (depths between 120 and 160 m), which are at critical locations for the new network but in areas where the ambient noise at the surface is too high for conventional surface stations. At each borehole, a strong motion seismometer is also installed at the surface. By placing the seismometers at depth, the ambient noise level is significantly lowered, which means the detection of smaller local and larger regional events is enhanced. We present here a comparison of the performance of each of the three borehole stations, reflecting on the improvement in noise compared to surface installations at these sites, as well as with other conventional surface stations within the network. We also demonstrate the benefits in operational network performance, in terms of earthquakes detected and located, which arise from installing borehole stations with lower background noise.

  16. Simulation Modeling and Performance Evaluation of Space Networks

    NASA Technical Reports Server (NTRS)

    Jennings, Esther H.; Segui, John

    2006-01-01

    In space exploration missions, the coordinated use of spacecraft as communication relays increases the efficiency of the endeavors. To conduct trade-off studies of the performance and resource usage of different communication protocols and network designs, JPL designed a comprehensive, extendable tool, the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE). The design and development of MACHETE began in 2000 and is constantly evolving. Currently, MACHETE contains Consultative Committee for Space Data Systems (CCSDS) protocol standards such as Proximity-1, Advanced Orbiting Systems (AOS), Packet Telemetry/Telecommand, Space Communications Protocol Specification (SCPS), and the CCSDS File Delivery Protocol (CFDP). MACHETE uses the Aerospace Corporation's Satellite Orbital Analysis Program (SOAP) to generate the orbital geometry information and contact opportunities. Matlab scripts provide the link characteristics. At the core of MACHETE is a discrete event simulator, QualNet. Delay Tolerant Networking (DTN) is an end-to-end architecture providing communication in and/or through highly stressed networking environments. Stressed networking environments include those with intermittent connectivity, large and/or variable delays, and high bit error rates. To provide its services, the DTN protocols reside at the application layer of the constituent internets, forming a store-and-forward overlay network. The key capabilities of the bundling protocols include custody-based reliability, the ability to cope with intermittent connectivity, the ability to take advantage of scheduled and opportunistic connectivity, and late binding of names to addresses. In this presentation, we report on the addition of MACHETE models needed to support DTN, namely the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance. We demonstrate the use of the DTN protocol

  17. Coexistence: Threat to the Performance of Heterogeneous Network

    NASA Astrophysics Data System (ADS)

    Sharma, Neetu; Kaur, Amanpreet

    2010-11-01

    Wireless technology is gaining broad acceptance as users opt for the freedom that only wireless networks can provide. Well-accepted wireless communication technologies generally operate in frequency bands that are shared among several users, often using different RF schemes. This is true in particular for WiFi, Bluetooth, and more recently ZigBee. All three operate in the unlicensed 2.4 GHz band, also known as the ISM band, which has been key to the development of a competitive and innovative market for wireless embedded devices. But, as with any resource held in common, it is crucial that these technologies coexist peacefully to allow each user of the band to fulfill its communication goals. This has led to an increase in wireless devices intended for use in IEEE 802.11 wireless local area networks (WLANs) and wireless personal area networks (WPANs), both of which support operation in the crowded 2.4-GHz industrial, scientific and medical (ISM) band. Despite efforts made by standardization bodies to ensure smooth coexistence, communication technologies transmitting at very different power levels may still interfere with each other. In particular, it has been pointed out that ZigBee could potentially experience interference from WiFi traffic given that, while both protocols can transmit on the same channel, WiFi transmissions usually occur at a much higher power level. In this work, we considered a heterogeneous network and analyzed the impact of coexistence between IEEE 802.15.4 and IEEE 802.11b. To evaluate the performance of this network, a measurement and simulation study was conducted and developed in the QualNet network simulator, version 5.0. The model is analyzed for different placement models or topologies such as random, grid and uniform. Performance is analyzed on the basis of characteristics such as throughput, average jitter and average end-to-end delay. Here, the impact of varying different antenna gain & shadowing model for this

  18. Road safety performance indicators for the interurban road network.

    PubMed

    Yannis, George; Weijermars, Wendy; Gitelman, Victoria; Vis, Martijn; Chaziris, Antonis; Papadimitriou, Eleonora; Azevedo, Carlos Lima

    2013-11-01

    Various road safety performance indicators (SPIs) have been proposed for different road safety research areas, mainly as regards driver behaviour (e.g. seat belt use, alcohol, drugs, etc.) and vehicles (e.g. passive safety); however, no SPIs for the road network and design have been developed. The objective of this research is the development of an SPI for the road network, to be used as a benchmark for cross-region comparisons. The developed SPI essentially makes a comparison of the existing road network to the theoretically required one, defined as one which meets some minimum requirements with respect to road safety. This paper presents a theoretical concept for the determination of this SPI as well as a translation of this theory into a practical method. Also, the method is applied in a number of pilot countries namely the Netherlands, Portugal, Greece and Israel. The results show that the SPI could be efficiently calculated in all countries, despite some differences in the data sources. In general, the calculated overall SPI scores were realistic and ranged from 81 to 94%, with the exception of Greece where the SPI was relatively lower (67%). However, the SPI should be considered as a first attempt to determine the safety level of the road network. The proposed method has some limitations and could be further improved. The paper presents directions for further research to further develop the SPI. PMID:23268762

  19. Social value of high bandwidth networks: creative performance and education.

    PubMed

    Mansell, Robin; Foresta, Don

    2016-03-01

    This paper considers limitations of existing network technologies for distributed theatrical performance in the creative arts and for symmetrical real-time interaction in online learning environments. It examines the experience of a multidisciplinary research consortium that aimed to introduce a solution to latency and other network problems experienced by users in these sectors. The solution builds on the Multicast protocol, Access Grid, an environment supported by very high bandwidth networks. The solution is intended to offer high-quality image and sound, interaction with other network platforms, maximum user control of multipoint transmissions, and open programming tools that are flexible and modifiable for specific uses. A case study is presented drawing upon an extended period of participant observation by the authors. This provides a basis for an examination of the challenges of promoting technological innovation in a multidisciplinary project. We highlight the kinds of technical advances and cultural and organizational changes that would be required to meet demanding quality standards, the way a research consortium planned to engage in experimentation and learning, and factors making it difficult to achieve an open platform that is responsive to the needs of users in the creative arts and education sectors. PMID:26809576

  20. Practical Performance Analysis for Multiple Information Fusion Based Scalable Localization System Using Wireless Sensor Networks.

    PubMed

    Zhao, Yubin; Li, Xiaofan; Zhang, Sha; Meng, Tianhui; Zhang, Yiwen

    2016-01-01

    In practical localization system design, researchers need to consider several aspects to make positioning efficient and effective, e.g., the available auxiliary information, sensing devices, equipment deployment and the environment. These practical concerns translate into technical problems, e.g., sequential position state propagation, the target-anchor geometry effect, Non-line-of-sight (NLOS) identification and the related prior information. It is necessary to construct an efficient framework that can exploit multiple sources of available information and guide the system design. In this paper, we propose a scalable method to analyze system performance based on the Cramér-Rao lower bound (CRLB), which can fuse all of the information adaptively. Firstly, we use an abstract function to represent the whole wireless localization system model. The unknown vector of the CRLB then consists of two parts: the first part is the estimated vector, and the second part is the auxiliary vector, which helps improve the estimation accuracy. Accordingly, the Fisher information matrix is divided into two parts: the state matrix and the auxiliary matrix. Unlike a purely theoretical analysis, our CRLB can serve as a practical fundamental limit for a system that fuses multiple sources of information in a complicated environment, e.g., recursive Bayesian estimation based on the hidden Markov model, the map matching method and NLOS identification and mitigation methods. Thus, the theoretical results more closely approach the real case. In addition, our method is more adaptable than other CRLBs when considering more unknown important factors. We use the proposed method to analyze a wireless sensor network-based indoor localization system. The influence of hybrid LOS/NLOS channels, building layout information and the relative height differences between the target and anchors is analyzed. It is demonstrated that our method exploits all of the available information for
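
    The block structure described above can be written compactly. The following LaTeX rendering is an interpretation of the abstract's description (estimated vector plus auxiliary vector, Fisher information split into a state block and an auxiliary block, CRLB obtained via the Schur complement); the exact block definitions in the paper may differ.

        % Interpretation of the partitioning described in the abstract.
        \[
        \boldsymbol{\psi} = \begin{bmatrix}\boldsymbol{\theta}\\ \boldsymbol{\phi}\end{bmatrix},
        \qquad
        \mathbf{J}(\boldsymbol{\psi}) =
        \begin{bmatrix}
          \mathbf{J}_{\theta\theta} & \mathbf{J}_{\theta\phi}\\
          \mathbf{J}_{\phi\theta}   & \mathbf{J}_{\phi\phi}
        \end{bmatrix},
        \]
        \[
        \mathrm{CRLB}(\boldsymbol{\theta}) =
        \bigl(\mathbf{J}_{\theta\theta}
              - \mathbf{J}_{\theta\phi}\,\mathbf{J}_{\phi\phi}^{-1}\,\mathbf{J}_{\phi\theta}\bigr)^{-1},
        \qquad
        \mathbb{E}\bigl[(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta})
                        (\hat{\boldsymbol{\theta}}-\boldsymbol{\theta})^{\mathsf T}\bigr]
        \succeq \mathrm{CRLB}(\boldsymbol{\theta}).
        \]

    Here theta stands for the estimated vector (e.g., target position states) and phi for the auxiliary vector; the Schur complement shows how information about the auxiliary part tightens or loosens the bound on the estimated part.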

  1. Network DEA: an application to analysis of academic performance

    NASA Astrophysics Data System (ADS)

    Saniee Monfared, Mohammad Ali; Safi, Mahsa

    2013-05-01

    As governmental subsidies to universities have been declining in recent years, sustaining excellence in academic performance and more efficient use of resources have become important issues for university stakeholders. To assess academic performance and the utilization of resources, two important issues need to be addressed, i.e., a capable methodology and a set of good performance indicators, as we consider in this paper. In this paper, we propose a set of performance indicators to enable efficiency analysis of academic activities and apply a novel network DEA structure to account for subfunctional efficiencies, such as teaching quality and research productivity, as well as the overall efficiency. We tested our approach on the efficiency analysis of academic colleges at Alzahra University in Iran.

  2. Performance analysis of reactive congestion control for ATM networks

    NASA Astrophysics Data System (ADS)

    Kawahara, Kenji; Oie, Yuji; Murata, Masayuki; Miyahara, Hideo

    1995-05-01

    In ATM networks, preventive congestion control is widely recognized as an efficient way of avoiding congestion, and it is implemented by a conjunction of connection admission control and usage parameter control. However, congestion may still occur because of unpredictable statistical fluctuation of traffic sources even when preventive control is performed in the network. In this paper, we study another kind of congestion control, i.e., reactive congestion control, in which each source changes its cell emitting rate adaptively to the traffic load at the switching node (or at the multiplexer). Our intention is that, by incorporating such a congestion control method in ATM networks, more efficient congestion control can be established. We develop an analytical model and carry out an approximate analysis of the reactive congestion control algorithm. Numerical results show that the reactive congestion control algorithms are very effective in avoiding congestion and in achieving statistical gain. Furthermore, the binary congestion control algorithm with a pushout mechanism is shown to provide the best performance among the reactive congestion control algorithms treated here.
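
    To make the "reactive" idea concrete, the toy simulation below has each source adapt its cell emission rate to binary congestion feedback from the multiplexer queue; the rates, threshold, and increase/decrease rule are invented for illustration and are not the analytical model developed in the paper.

        # Toy reactive congestion control: sources adapt rates to binary feedback.
        import random

        PEAK_RATE, THRESHOLD, SERVICE = 10.0, 50.0, 15.0
        rates = [PEAK_RATE] * 3          # three bursty sources
        queue = 0.0

        for t in range(200):
            arrivals = sum(r * random.uniform(0.5, 1.5) for r in rates)  # fluctuation
            queue = max(0.0, queue + arrivals - SERVICE)
            congested = queue > THRESHOLD                  # binary feedback to sources
            for i, r in enumerate(rates):
                rates[i] = r * 0.8 if congested else min(PEAK_RATE, r + 0.2)

        print("final queue length:", round(queue, 1), "cells")
        print("final rates:", [round(r, 2) for r in rates])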

  3. Improving stochastic communication network performance: Reliability vs throughput

    NASA Astrophysics Data System (ADS)

    Jansen, Leonard J.

    1991-12-01

    This research investigated the measurement and improvement of two performance parameters, expected flow and reliability, for stochastic communication networks. There were three objectives. The first was to measure the reliability of large stochastic networks. This was accomplished through an investigation of the current methodologies in the literature, with the subsequent selection and application of a factoring program developed by Page and Perry. The second objective was to develop a reliability improvement model, given that a mathematical reliability expression did not exist. This was accomplished by modeling a heuristic by Jain and Gopal as a linear improvement model. Finally, the third objective was to examine the trade-off between maximizing expected flow and reliability. This was accomplished by generating bounds for the efficient frontier in a modified multicriteria optimization approach. Using the methodologies formulated in this research, the performance parameters of expected flow and reliability can both be measured and subsequently improved, providing insight into the operational capabilities of stochastic communication networks.

  4. Performance evaluation of cellular phone network based portable ECG device.

    PubMed

    Hong, Joo-Hyun; Cha, Eun-Jong; Lee, Tae-Soo

    2008-01-01

    In this study, a cellular phone network based portable ECG device was developed and three experiments were performed to evaluate the accuracy, reliability, operability, and applicability during daily life of the developed device. First, ECG signals were measured using the developed device and a Biopac device (reference device) while sitting and marking time, and compared to verify the accuracy of R-R intervals. Second, reliable data transmission to a remote server was verified for two types of simulated emergency events using a patient simulator. Third, during daily life with five types of motion, the accuracy of data transmission to the remote server was verified for two types of events. By acquiring and comparing the subject's biomedical signal and motion signal, the accuracy, reliability, operability, and applicability during daily life of the developed device were verified. Therefore, the cellular phone network based portable ECG device can monitor patients in an unobtrusive manner. PMID:19162767

  5. Digitally controlled high-performance dc SQUID readout electronics for a 304-channel vector magnetometer

    NASA Astrophysics Data System (ADS)

    Bechstein, S.; Petsche, F.; Scheiner, M.; Drung, D.; Thiel, F.; Schnabel, A.; Schurig, Th

    2006-06-01

    Recently, we have developed a family of dc superconducting quantum interference device (SQUID) readout electronics for several applications. These electronics comprise a low-noise preamplifier followed by an integrator, and an analog SQUID bias circuit. A highly-compact low-power version with a flux-locked loop bandwidth of 0.3 MHz and a white noise level of 1 nV/√Hz was specially designed for a 304-channel low-Tc dc SQUID vector magnetometer, intended to operate in the new Berlin Magnetically Shielded Room (BMSR-2). In order to minimize the space needed to mount the electronics on top of the dewar and to minimize the power consumption, we have integrated four electronics channels on one 3 cm × 10 cm sized board. Furthermore we embedded the analog components of these four channels into a digitally controlled system including an in-system programmable microcontroller. Four of these integrated boards were combined to one module with a size of 4 cm × 4 cm × 16 cm. 19 of these modules were implemented, resulting in a total power consumption of about 61 W. To initialize the 304 channels and to service the system we have developed software tools running on a laptop computer. By means of these software tools the microcontrollers are fed with all required data such as the working points, the characteristic parameters of the sensors (noise, voltage swing), or the sensor position inside of the vector magnetometer system. In this paper, the developed electronics including the software tools are described, and first results are presented.

  6. High-performance, scalable optical network-on-chip architectures

    NASA Astrophysics Data System (ADS)

    Tan, Xianfang

    The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, which is called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic wired electronic networks-on-chip (NoC) have become a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm to electronic NoC, with the benefits of optical signaling such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures, and the contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed. A method for developing any sized GWOR is introduced. GWOR is a scalable non-blocking ONoC architecture with simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. The redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of GWOR unchanged by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses fewer electronic routers and links than its counterpart electronic BFT-based NoC. It takes the advantages of

  7. High-Performance Tools: Nevada's Experiences Growing Network Capability

    NASA Astrophysics Data System (ADS)

    Biasi, G.; Smith, K. D.; Slater, D.; Preston, L.; Tibuleac, I.

    2007-05-01

    Like most regional seismic networks, the Nevada Seismic Network relies on a combination of software components to perform its mission. Core components for automatic network operation are from Antelope, a real-time environmental monitoring software system from Boulder Real-Time Technologies (BRTT). We configured the detector for multiple filtering bands, generally to distinguish local, regional, and teleseismic phases. The associator can use all or a subset of detections for each location grid. Presently we use detailed grids in the Reno-Carson City, Las Vegas, and Yucca Mountain areas, a large regional grid and a teleseismic grid, with a configurable order of precedence among solutions. Incorporating USArray stations into the network was straightforward. Locations for local events are available in 30-60 seconds, and relocations are computed every 20 seconds. Testing indicates that relocations could be computed every few seconds or less if desired on a modest Sun server. Successive locations may be kept in the database, or criteria applied to select a single preferred location. New code developed by BRTT, partially in response to an NSL request, automatically launches a gradient-based relocator to refine locations and depths. Locations are forwarded to QDDS and other notification mechanisms. We also use Antelope tools for earthquake picking and analysis and for database viewing and maintenance. We have found the programming interfaces supplied with Antelope instrumental as we work toward ANSS system performance requirements. For example, the Perl language interface to the real-time Object Ring Buffer (ORB) was used to reduce the time to produce ShakeMaps to the present value of ~3 minutes. Hypoinverse was incorporated into a real-time system with Perl ORB access tools. Using the Antelope PHP interface, we now have off-site review capabilities for events and ShakeMaps from hand-held internet devices. PHP and Perl tools were used to develop a remote capability, now

  8. Performance characteristics of omnidirectional antennas for spacecraft using NASA networks

    NASA Technical Reports Server (NTRS)

    Hilliard, Lawrence M.

    1987-01-01

    Described are the performance capabilities and critical elements of the shaped omni antenna developed for NASA for space users of NASA networks. The shaped omni is designed to be operated in tandem for virtually omnidirectional coverage and uniform gain free of spacecraft interference. These antennas are ideal for low gain data requirements and for emergency backup, deployment, and retrieval of higher gain RF systems. Other omnidirectional antennas that have flown in space are described in the final section. A performance summary for the shaped omni is in the Appendix. This document introduces organizations and projects to shaped omni applications for NASA's space use. Coverage, gain, weight, power, implementation and other performance information for satisfying a wide range of data requirements are included.

  9. A simulation study of TCP performance in ATM networks

    SciTech Connect

    Chien Fang; Chen, Helen; Hutchins, J.

    1994-08-01

    This paper presents a simulation study of TCP performance over congested ATM local area networks. We simulated a variety of schemes for congestion control for ATM LANs, including a simple cell-drop, a credit-based flow control scheme that back-pressures individual VCs, and two selective cell-drop schemes. Our simulation results for congested ATM LANs show the following: (1) TCP performance is poor under simple cell-drop, (2) the selective cell-drop schemes increase effective link utilization and result in higher TCP throughputs than the simple cell-drop scheme, and (3) the credit-based flow control scheme eliminates cell loss and achieves maximum performance and effective link utilization.

  10. Introducing Vectors.

    ERIC Educational Resources Information Center

    Roche, John

    1997-01-01

    Suggests an approach to teaching vectors that promotes active learning through challenging questions addressed to the class, as opposed to subtle explanations. Promotes introducing vector graphics with concrete examples, beginning with an explanation of the displacement vector. Also discusses artificial vectors, vector algebra, and unit vectors.…

  11. Network and User-Perceived Performance of Web Page Retrievals

    NASA Technical Reports Server (NTRS)

    Kruse, Hans; Allman, Mark; Mallasch, Paul

    1998-01-01

    The development of the HTTP protocol has been driven by the need to improve the network performance of the protocol by allowing the efficient retrieval of multiple parts of a web page without the need for multiple simultaneous TCP connections between a client and a server. We suggest that the retrieval of multiple page elements sequentially over a single TCP connection may result in a degradation of the perceived performance experienced by the user. We attempt to quantify this perceived degradation through the use of a model which combines a web retrieval simulation and an analytical model of TCP operation. Starting with the current HTTP/1.1 specification, we first suggest a client-side heuristic to improve the perceived transfer performance. We show that the perceived speed of the page retrieval can be increased without sacrificing data transfer efficiency. We then propose a new client/server extension to the HTTP/1.1 protocol to allow for the interleaving of page element retrievals. We finally address the issue of the display of advertisements on web pages, and in particular suggest a number of mechanisms which can make efficient use of IP multicast to send advertisements to a number of clients within the same network.

  12. Vector Reflectometry in a Beam Waveguide

    NASA Technical Reports Server (NTRS)

    Eimer, J. R.; Bennett, C. L.; Chuss, D. T.; Wollack, E. J.

    2011-01-01

    We present a one-port calibration technique for characterization of beam waveguide components with a vector network analyzer. This technique involves using a set of known delays to separate the responses of the instrument and the device under test. We demonstrate this technique by measuring the reflected performance of a millimeter-wave variable-delay polarization modulator.

  13. High-Performance, Semi-Interpenetrating Polymer Network

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H.; Lowther, Sharon E.; Smith, Janice Y.; Cannon, Michelle S.; Whitehead, Fred M.; Ely, Robert M.

    1992-01-01

    High-performance polymer made by new synthesis in which one or more easy-to-process, but brittle, thermosetting polyimides are combined with one or more tough, but difficult-to-process, linear thermoplastics to yield semi-interpenetrating polymer network (semi-IPN) having combination of easy processability and high tolerance to damage. Two commercially available resins combined to form tough, semi-IPN called "LaRC-RP49." Displays improvements in toughness and resistance to microcracking. LaRC-RP49 has potential as high-temperature matrix resin, adhesive, and molding resin. Useful in aerospace, automotive, and electronic industries.

  14. Deploying optical performance monitoring in TeliaSonera's network

    NASA Astrophysics Data System (ADS)

    Svensson, Torbjorn K.; Karlsson, Per-Olov E.

    2004-09-01

    This paper reports on the first steps taken by TeliaSonera towards deploying optical performance monitoring (OPM) in the company's transport network, in order to assure increasingly reliable communications on the physical layer. The big leap, a world-wide deployment of OPM, still awaits a breakthrough: very clear benefits from using OPM are required to change this stalemate. Reasons may be the anaemic economy of many telecom operators, shareholders' pushing for short-term payback, and reluctance to add complexity and integrate additional system management. Technically, legacy digital systems already have a proven monitoring ability, so adding OPM to the dense wavelength division multiplexed (DWDM) systems in operation should be judged with care. Duly installed, today's DWDM systems do their job well, owing to rigorous rules for link design and a generous power budget, power management inherent to the system, and reliable supplier support. So what may bring this stalemate to an end? A growing number of applications of OPM, for enhancing network operation and maintenance and for enabling new customer services, will most certainly bring momentum to a change. The first deployment of OPM in TeliaSonera's network is launched this year, 2004. The preparedness for future OPM-dependent services and transport technologies will thereby be ensured.

  15. A new interconnection network for SIMD computers: The sigma network

    SciTech Connect

    Seznec, A.

    1987-07-01

    When processing vectors on SIMD computers, some data manipulations (rearrangement, expansion, compression, perfect-shuffle, bit-reversal) have to be performed by an interconnection network. When this network lacks an efficient routing control, it becomes the bottleneck for performance. It has been pointed out that general algorithms to control rearrangeable networks for arbitrary permutations are time-consuming. To overcome this difficulty, Lenfant proposed a set of permutations covering standard needs, associated with efficient control algorithms for the Benes network. But to perform explicit permutations on vectors, several passes through the network are necessary because they have to be composed with transfer rearrangements. The author presents efficient control algorithms to perform these vector permutations in a single pass on a new interconnection network.
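
    As a concrete reminder of two of the data manipulations listed above, the short sketch below writes the perfect shuffle and bit reversal as index permutations on a vector of length 2^k; it illustrates the permutations themselves, not the sigma network or its routing control algorithms.

```python
# Two classic SIMD data rearrangements expressed as permutations of a Python list.

def perfect_shuffle(v):
    """Interleave the two halves of v: [a0..a3, b0..b3] -> [a0, b0, a1, b1, ...]."""
    half = len(v) // 2
    out = []
    for a, b in zip(v[:half], v[half:]):
        out.extend([a, b])
    return out

def bit_reverse(v):
    """Move the element at index i to the index whose k-bit pattern is reversed."""
    n = len(v)
    k = n.bit_length() - 1            # assumes n is a power of two
    out = [None] * n
    for i, x in enumerate(v):
        j = int(format(i, f'0{k}b')[::-1], 2)
        out[j] = x
    return out

print(perfect_shuffle(list(range(8))))   # [0, 4, 1, 5, 2, 6, 3, 7]
print(bit_reverse(list(range(8))))       # [0, 4, 2, 6, 1, 5, 3, 7]
```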

  16. The challenges of archiving networked-based multimedia performances (Performance cryogenics)

    NASA Astrophysics Data System (ADS)

    Cohen, Elizabeth; Cooperstock, Jeremy; Kyriakakis, Chris

    2002-11-01

    Music archives and libraries have cultural preservation at the core of their charters. New forms of art often race ahead of the preservation infrastructure. The ability to stream multiple synchronized ultra-low latency streams of audio and video across a continent for a distributed interactive performance such as music and dance with high-definition video and multichannel audio raises a series of challenges for the architects of digital libraries and those responsible for cultural preservation. The archiving of such performances presents numerous challenges that go beyond simply recording each stream. Case studies of storage and subsequent retrieval issues for Internet2 collaborative performances are discussed. The development of shared reality and immersive environments generate issues about, What constitutes an archived performance that occurs across a network (in multiple spaces over time)? What are the families of necessary metadata to reconstruct this virtual world in another venue or era? For example, if the network exhibited changes in latency the performers most likely adapted. In a future recreation, the latency will most likely be completely different. We discuss the parameters of immersive environment acquisition and rendering, network architectures, software architecture, musical/choreographic scores, and environmental acoustics that must be considered to address this problem.

  17. Support vector machine-an alternative to artificial neuron network for water quality forecasting in an agricultural nonpoint source polluted river?

    PubMed

    Liu, Mei; Lu, Jun

    2014-09-01

    Water quality forecasting in agricultural drainage river basins is difficult because of the complicated nonpoint source (NPS) pollution transport processes and river self-purification processes involved in highly nonlinear problems. An artificial neural network (ANN) and a support vector machine (SVM) model were developed to predict total nitrogen (TN) and total phosphorus (TP) concentrations at any location of a river polluted by agricultural NPS pollution in eastern China. River flow, water temperature, flow travel time, rainfall, dissolved oxygen, and upstream TN or TP concentrations were selected as initial inputs of the two models. Monthly, bimonthly, and trimonthly datasets were selected to train the two models, respectively, and the same monthly dataset, which had not been used for training, was chosen to test the models in order to compare their generalization performance. Trial-and-error analysis and genetic algorithms (GA) were employed to optimize the parameters of the ANN and SVM models, respectively. The results indicated that the proposed SVM models showed better generalization ability because they avoid overtraining and optimize fewer parameters based on the structural risk minimization (SRM) principle. Furthermore, both TN and TP SVM models trained on trimonthly datasets achieved greater forecasting accuracy than the corresponding ANN models. Thus, SVM models will be a powerful alternative method, being an efficient and economical tool to accurately predict water quality with low risk. The sensitivity analyses of the two models indicated that decreasing upstream input concentrations during the dry season and NPS emission along the reach during the average or flood season should be an effective way to improve Changle River water quality. If the necessary water quality and hydrology data, even trimonthly data, are available, the SVM methodology developed here can easily be applied to other NPS-polluted rivers. PMID:24894753
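
    The kind of SVM-versus-ANN comparison described above can be prototyped in a few lines with off-the-shelf tools; the sketch below uses scikit-learn on synthetic data purely as an illustration (the study's actual inputs, GA tuning and trimonthly training sets are not reproduced).

```python
# Hedged sketch: support vector regressor versus a small neural network on the
# same train/test split. The six input columns are stand-ins for the hydrological
# and chemical predictors; the target is a synthetic stand-in for TN.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.random((200, 6))                                             # six input variables
y = 2.0 * X[:, 0] + X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)   # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, gamma=0.5))
ann = make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(10,),
                                                   max_iter=5000, random_state=0))
for name, model in [("SVM", svm), ("ANN", ann)]:
    model.fit(X_tr, y_tr)
    print(name, "test R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
```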

  18. Performance evaluation for epileptic electroencephalogram (EEG) detection by using Neyman-Pearson criteria and a support vector machine

    NASA Astrophysics Data System (ADS)

    Wang, Chun-mei; Zhang, Chong-ming; Zou, Jun-zhong; Zhang, Jian

    2012-02-01

    The diagnosis of several neurological disorders is based on the detection of typical pathological patterns in electroencephalograms (EEGs). This is a time-consuming task requiring significant training and experience. A lot of effort has been devoted to developing automatic detection techniques which might help not only in accelerating this process but also in avoiding the disagreement among readers of the same record. In this work, Neyman-Pearson criteria and a support vector machine (SVM) are applied for detecting an epileptic EEG. Decision making is performed in two stages: feature extraction by computing the wavelet coefficients and the approximate entropy (ApEn) and detection by using Neyman-Pearson criteria and an SVM. Then the detection performance of the proposed method is evaluated. Simulation results demonstrate that the wavelet coefficients and the ApEn are features that represent the EEG signals well. By comparison with Neyman-Pearson criteria, an SVM applied on these features achieved higher detection accuracies.
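
    A minimal illustration of the two features named above (wavelet coefficients and approximate entropy) is sketched below on a synthetic signal, using the PyWavelets package. The wavelet family, decomposition level and ApEn parameters are assumptions, not the paper's settings; feature vectors built this way from many EEG segments would then be fed to a detector such as an SVM.

```python
# Feature-extraction sketch: discrete wavelet decomposition plus approximate entropy.
import numpy as np
import pywt

def approximate_entropy(x, m=2, r=None):
    """Pincus' ApEn: a regularity measure of a 1-D signal (smaller = more regular)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()                      # common rule-of-thumb tolerance
    def phi(m):
        n = len(x) - m + 1
        windows = np.array([x[i:i + m] for i in range(n)])
        counts = [np.sum(np.max(np.abs(windows - w), axis=1) <= r) / n
                  for w in windows]
        return np.mean(np.log(counts))
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 20 * np.pi, 512)) + 0.3 * rng.standard_normal(512)

coeffs = pywt.wavedec(signal, "db4", level=4)          # [cA4, cD4, cD3, cD2, cD1]
features = [np.mean(np.abs(c)) for c in coeffs] + [approximate_entropy(signal)]
print("feature vector:", np.round(features, 4))
```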

  19. A Generic Framework of Performance Measurement in Networked Enterprises

    NASA Astrophysics Data System (ADS)

    Kim, Duk-Hyun; Kim, Cheolhan

    Performance measurement (PM) is essential for managing networked enterprises (NEs) because it greatly affects the effectiveness of collaboration among members of the NE. PM in an NE requires somewhat different approaches from PM in a single enterprise because of the heterogeneity, dynamism, and complexity of NEs. This paper introduces a generic framework of PM in NEs (we call it NEPM) based on the Balanced Scorecard (BSC) approach. In NEPM, key performance indicators (KPIs) and the cause-and-effect relationships among them are defined in a generic strategy map. NEPM could be applied to various types of NEs after specializing the KPIs and the relationships among them. The effectiveness of NEPM is shown through a case study of some Korean NEs.

  20. The Deep Space Network: Noise temperature concepts, measurements, and performance

    NASA Technical Reports Server (NTRS)

    Stelzried, C. T.

    1982-01-01

    The use of higher operational frequencies is being investigated for improved performance of the Deep Space Network. Noise temperature and noise figure concepts are used to describe the noise performance of these receiving systems. The ultimate sensitivity of a linear receiving system is limited by the thermal noise of the source and the quantum noise of the receiver amplifier. The atmosphere, antenna and receiver amplifier of an Earth station receiving system are analyzed separately and as a system. Performance evaluation and error analysis techniques are investigated. System noise temperature and antenna gain parameters are combined to give an overall system figure of merit G/T. Radiometers are used to perform radio "star" antenna and system sensitivity calibrations. These are analyzed and the performance of several types compared to an idealized total power radiometer. The theory of radiative transfer is applicable to the analysis of transmission medium loss. A power series solution in terms of the transmission medium loss is given for the solution of the noise temperature contribution.
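
    The concepts summarized above are conventionally written as follows (textbook relations, not necessarily the report's exact notation): noise contributions of cascaded receiver stages are referred to the input by the gain ahead of them, and the station figure of merit combines antenna gain with the total system noise temperature.

```latex
% System noise temperature as a cascade (Friis), plus the G/T figure of merit in dB.
T_{\mathrm{sys}} = T_{\mathrm{sky}} + T_{\mathrm{atm}} + T_{\mathrm{ant}}
                 + T_1 + \frac{T_2}{G_1} + \frac{T_3}{G_1 G_2} + \cdots ,
\qquad
\left(\frac{G}{T}\right)_{\mathrm{dB}} = G_{\mathrm{dBi}} - 10 \log_{10}\!\left(\frac{T_{\mathrm{sys}}}{1\,\mathrm{K}}\right)
```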

  1. Methods to improve neural network performance in daily flows prediction

    NASA Astrophysics Data System (ADS)

    Wu, C. L.; Chau, K. W.; Li, Y. S.

    2009-06-01

    In this paper, three data-preprocessing techniques, moving average (MA), singular spectrum analysis (SSA), and wavelet multi-resolution analysis (WMRA), were coupled with an artificial neural network (ANN) to improve the estimation of daily flows. Six models, including the original ANN model without data preprocessing, were set up and evaluated. The five new models were ANN-MA, ANN-SSA1, ANN-SSA2, ANN-WMRA1, and ANN-WMRA2. The ANN-MA was derived from the raw ANN model combined with the MA. The ANN-SSA1, ANN-SSA2, ANN-WMRA1 and ANN-WMRA2 were generated by coupling the original ANN model with SSA and WMRA in two different ways. Two daily flow series from different watersheds in China (Lushui and Daning) were used in the six models for three prediction horizons (1-, 2-, and 3-day-ahead forecasts). The poor performance of the plain ANN forecast models was mainly due to lagged predictions. The ANN-MA performed best among the six models and eliminated the lag effect. The performances of the ANN-SSA1 and ANN-SSA2 were similar, as were those of the ANN-WMRA1 and ANN-WMRA2. However, the models based on SSA performed better than the models based on WMRA at all forecast horizons, indicating that SSA is more effective than WMRA in improving ANN performance in the current study. Considering both model performance and modeling complexity, the ANN-MA model was optimal, followed by the ANN model coupled with SSA, and finally the ANN model coupled with WMRA.
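
    The ANN-MA idea, smoothing the raw series before building lagged inputs, can be sketched as below; the window length, number of lags and synthetic series are illustrative assumptions, and any regressor (such as an MLP) would then be trained on the resulting X, y arrays.

```python
# Minimal preprocessing sketch: moving-average smoothing, then lagged inputs
# for a 1-day-ahead forecast.
import numpy as np

def moving_average(x, window=3):
    return np.convolve(x, np.ones(window) / window, mode="valid")

def make_lagged_dataset(series, n_lags=3):
    """X[t] = (series[t-n_lags], ..., series[t-1]),  y[t] = series[t]."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

flow = np.sin(np.linspace(0, 12 * np.pi, 400)) + 5.0      # stand-in for daily flows
smoothed = moving_average(flow, window=3)
X, y = make_lagged_dataset(smoothed, n_lags=3)
print(X.shape, y.shape)        # (395, 3) (395,)  -- ready to feed an ANN regressor
```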

  2. Performance improvement of direct torque control system for induction motor in low-speed operation using wavelet network

    NASA Astrophysics Data System (ADS)

    Liu, Hua; Liao, Wei; Wang, Yuguo; Shen, Songhua

    2006-11-01

    To improve the low-speed dynamic performance of an induction motor under direct torque control (DTC), a novel method of stator resistance identification based on a wavelet network (WN) is presented, and the determination of the wavelet network structure is discussed. The inputs of the WN are the current error and the change in the current error, and the output of the WN is the stator resistance error. An improved least squares algorithm (LSA) is used to carry out the network structure and parameter identification. By the use of the wavelet transform, which accurately localizes the characteristics of a signal in both the time and frequency domains, the instants at which the stator resistance changes can be identified from the multi-scale representation of the signal. Once the instants are detected, the accurate stator flux vector and electromagnetic torque are acquired by the parameter estimator, which makes the DTC applicable in the low-speed region and optimizes the inverter control strategy. By detailed comparison between the wavelet network and a typical back-propagation (BP) neural network, the simulation results show that the proposed method can efficiently reduce the torque ripple and current ripple, and is superior to the BP neural network.

  3. Structuring networks for maximum performance under managed care.

    PubMed

    Miller, T R

    1996-12-01

    Healthcare providers interested in forming delivery networks to secure managed care contracts must decide how to structure their networks. Two basic structural models are available: the noncorporate model and the corporate model. The noncorporate model delivery network typically has a single governing body and management infrastructure to oversee only managed care contracting and related business. The corporate model delivery system has a unified governance management infrastructure that handles all of the network's business. While either structure can work, corporate model networks usually are better able to enforce provider behavior that is in the best interest of a network as a whole. PMID:10163003

  4. Implementation and performance evaluation of mobile ad hoc network for Emergency Telemedicine System in disaster areas.

    PubMed

    Kim, J C; Kim, D Y; Jung, S M; Lee, M H; Kim, K S; Lee, C K; Nah, J Y; Lee, S H; Kim, J H; Choi, W J; Yoo, S K

    2009-01-01

    So far we have developed Emergency Telemedicine System (ETS) which is a robust system using heterogeneous networks. In disaster areas, however, ETS cannot be used if the primary network channel is disabled due to damages on the network infrastructures. Thus we designed network management software for disaster communication network by combination of Mobile Ad hoc Network (MANET) and Wireless LAN (WLAN). This software maintains routes to a Backbone Gateway Node in dynamic network topologies. In this paper, we introduce the proposed disaster communication network with management software, and evaluate its performance using ETS between Medical Center and simulated disaster areas. We also present the results of network performance analysis which identifies the possibility of actual Telemedicine Service in disaster areas via MANET and mobile network (e.g. HSDPA, WiBro). PMID:19964544

  5. On network performance and data quality of a lightning detection network in Korea (KLDN)

    NASA Astrophysics Data System (ADS)

    Kuk, BongJae; Schmidt, Kersten; Lee, Gyu Won

    2014-11-01

    The quality of lightning data and the performance of the detection network are key to timely launch decisions at space facilities. The quality of lightning data obtained by the Korea Meteorological Administration (KMA) lightning detection network (KLDN) is evaluated over the Korean Peninsula from a climatological perspective. A new methodology is developed to evaluate the performance of KLDN. The spatial distributions of the ellipse area, chi-square, and peak current are analyzed in order to quantify the quality of data from KLDN. The performance of the KLDN is also evaluated with a normalized frequency distribution function (NFDF) of peak currents and the peak values of the NFDF with range. The monthly and diurnal variations of lightning strokes are presented. Of the total number of lightning strokes, 74% occur in summer (June, July, and August). The diurnal variation shows a bimodal distribution with peaks at 0600 LST and 1500 LST. High stroke density is identified over two locations: the Yellow Sea and the western inland region of the Korean Peninsula. The mean value of the peak current is more than 12.5 kA and the ellipse area is less than 20 km² in most of the inland regions. The spatial distributions of the mean peak current and the ellipse area show the effects of the topography and geometry of the lightning sensor network. Theoretical simulation with topography shows that the time-of-arrival (TOA) sensor uncertainty of the KLDN network is at least 0.4 km, so that significant delay of the propagation path due to topography is not detectable. The NFDF is derived from the distribution of the peak current as a function of detection range and is fitted with a log-normal function. The peak of the NFDF is derived as a function of range, and linear and quadratic fits are applied; the quadratic fit gives Peak Current = 6 × 10⁻⁵ × Range² + 0.0040 × Range + 3.3456 with R² = 0.9938. The slope of this function and the peak values of the NFDF are bigger than those from the simulation
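
    The quadratic range-dependence fit quoted above is a straightforward least-squares polynomial fit; the sketch below reproduces the procedure on synthetic (range, peak-current) pairs rather than KLDN data.

```python
# Quadratic fit of peak current versus detection range on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
ranges_km = np.linspace(0, 300, 60)
peak_kA = (6e-5 * ranges_km**2 + 0.0040 * ranges_km + 3.3456
           + 0.2 * rng.standard_normal(ranges_km.size))      # synthetic "observed" peaks

c2, c1, c0 = np.polyfit(ranges_km, peak_kA, deg=2)            # quadratic fit coefficients
fit = np.polyval([c2, c1, c0], ranges_km)
r_squared = 1 - np.sum((peak_kA - fit) ** 2) / np.sum((peak_kA - peak_kA.mean()) ** 2)
print(f"PeakCurrent ~ {c2:.2e}*Range^2 + {c1:.4f}*Range + {c0:.4f}, R^2 = {r_squared:.4f}")
```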

  6. Networks: A Route to Improving Performance in Manufacturing SMEs

    ERIC Educational Resources Information Center

    Coleman, J.

    2003-01-01

    Perceived as important contributors to economic growth, network and cluster groups are currently receiving much attention. The same may be said of SMEs. But practical and theoretical perspectives indicate that SMEs, and particularly the owner-managers, place little value on networks and have only limited networking resources. Consequently, they do…

  7. Mean Throughput: A Method for Analyzing and Comparing Computer Network Performances

    PubMed Central

    Dwyer, Samuel J.; Cox, Glendon G.; Templeton, Arch W.; Cook, Larry T.; Hensley, Kenneth L.; Johnson, Joy A.; Anderson, William H.; Bramble, John M.

    1986-01-01

    Computer networks for managing and transmitting digitally formatted radiographic images are being developed by industrial firms and academic research groups. The ability to measure and compare the performance of these networks is absolutely essential when proposing network operational protocols. Mean throughput analysis is an excellent method for predicting and documenting a network's performance. Mean throughput measurements for digital image networks are analogous to the use of modulation transfer function measurements for radiographic systems. This paper describes mean throughput. The mean throughput for the interactive diagnostic display stations on the digital network in our department is presented.

  8. Design and Development of Communication Network Based on ODK and Performance Analysis

    NASA Astrophysics Data System (ADS)

    Gao, Lijuan; Jiang, Taijie

    A communication network simulation model based on ODK is investigated. A self-defined, Chinese-language simulation model interface for communication networks is developed. The simulation model is designed with three modules: a simulation scene configuration module, a node property configuration module, and a simulation running configuration and control module. The simulation model is then implemented on ODK in order to analyze network performance. Wired network and satellite network simulation scenes are built from the self-defined simulation models based on ODK, and the network performance is analyzed through simulation.

  9. Vector Quantization Algorithm Based on Associative Memories

    NASA Astrophysics Data System (ADS)

    Guzmán, Enrique; Pogrebnyak, Oleksiy; Yáñez, Cornelio; Manrique, Pablo

    This paper presents a vector quantization algorithm for image compression based on extended associative memories. The proposed algorithm is divided into two stages. First, an associative network is generated by applying the learning phase of the extended associative memories (EAM) between a codebook generated by the LBG algorithm and a training set. This associative network is named the EAM-codebook and represents a new codebook which is used in the next stage; the EAM-codebook establishes a relation between the training set and the LBG codebook. Second, the vector quantization process is performed by means of the recalling stage of the EAM, using the EAM-codebook as the associative memory. This process generates the set of class indices to which each input vector belongs. With respect to the LBG algorithm, the main advantages offered by the proposed algorithm are high processing speed and low resource demand (system memory); results on image compression and quality are presented.
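
    For orientation, the sketch below shows the two stages in their conventional form: an LBG-style codebook (approximated here by plain k-means iterations) and the encoding step that maps each input vector to the index of its nearest codeword. The extended-associative-memory (EAM) recall that the paper substitutes for this nearest-neighbour search is not reproduced.

```python
# Simplified vector-quantization sketch: codebook training plus encoding to class indices.
import numpy as np

def train_codebook(training_vectors, codebook_size=4, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    codebook = training_vectors[rng.choice(len(training_vectors), codebook_size, replace=False)]
    for _ in range(iters):
        # assign each training vector to its nearest codeword ...
        d = np.linalg.norm(training_vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # ... then move each codeword to the centroid of its cell
        for k in range(codebook_size):
            if np.any(labels == k):
                codebook[k] = training_vectors[labels == k].mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)                      # class index for each input vector

blocks = np.random.default_rng(1).random((500, 16))   # e.g. 4x4 image blocks, flattened
cb = train_codebook(blocks, codebook_size=8)
print("indices of first 10 blocks:", quantize(blocks[:10], cb))
```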

  10. Do plant viruses facilitate their aphid vectors by inducing symptoms that alter behavior and performance?

    PubMed

    Hodge, Simon; Powell, Glen

    2008-12-01

    Aphids can respond both positively and negatively to virus-induced modifications of the shared host plant. It can be speculated that viruses dependent on aphids for their transmission might evolve to induce changes in the host plant that attract aphids and improve their performance, subsequently enhancing the success of the pathogen itself. We studied how pea aphids [Acyrthosiphon pisum (Harris)] responded to infection of tic beans (Vicia faba L.) by three viruses with varying degrees of dependence on this aphid for their transmission: pea enation mosaic virus (PEMV), bean yellow mosaic virus (BYMV), and broad bean mottle virus (BBMV). BYMV has a nonpersistent mode of transmission by aphids, whereas PEMV is transmitted in a circulative-persistent manner. BBMV is not aphid transmitted. When reared on plants infected by PEMV, no changes in aphid survival, growth, or reproductive performance were observed, whereas infection of beans by the other aphid-dependent virus, BYMV, actually caused a reduction in aphid survival in some assays. None of the viruses induced A. pisum to increase production of winged progeny, and aphids settled preferentially on leaf tissue from plants infected by all three viruses, the likely mechanism being visual responses to yellowing of foliage. Thus, in this system, the attractiveness of an infected host plant and its quality in terms of aphid growth and reproduction were not related to the pathogen's dependence on the aphid for transmission to new hosts. PMID:19161702

  11. Preliminary performance of a vertical-attitude takeoff and landing, supersonic cruise aircraft concept having thrust vectoring integrated into the flight control system

    NASA Technical Reports Server (NTRS)

    Robins, A. W.; Beissner, F. L., Jr.; Domack, C. S.; Swanson, E. E.

    1985-01-01

    A performance study was made of a vertical attitude takeoff and landing (VATOL), supersonic cruise aircraft concept having thrust vectoring integrated into the flight control system. Those characteristics considered were aerodynamics, weight, balance, and performance. Preliminary results indicate that high levels of supersonic aerodynamic performance can be achieved. Further, with the assumption of an advanced (1985 technology readiness) low bypass ratio turbofan engine and advanced structures, excellent mission performance capability is indicated.

  12. Sandia's research network for Supercomputing '93: A demonstration of advanced technologies for building high-performance networks

    SciTech Connect

    Gossage, S.A.; Vahle, M.O.

    1993-12-01

    Supercomputing '93, a high-performance computing and communications conference, was held November 15th through 19th, 1993 in Portland, Oregon. For the past two years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1993 conference, the results of Sandia's efforts in exploring and utilizing Asynchronous Transfer Mode (ATM) and Synchronous Optical Network (SONET) technologies were vividly demonstrated by building and operating three distinct networks. The networks encompassed a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second, an ATM network running on a SONET circuit at the Optical Carrier (OC) rate of 155.52 megabits per second, and a High Performance Parallel Interface (HIPPI) network running over a 622.08 megabits per second SONET circuit. The SMDS and ATM networks extended from Albuquerque, New Mexico to the showroom floor, while the HIPPI/SONET network extended from Beaverton, Oregon to the showroom floor. This paper documents and describes these networks.

  13. Performance of demand assignment TDMA and multicarrier TDMA satellite networks

    NASA Astrophysics Data System (ADS)

    Jabbari, Bijan; McDysan, David

    1992-02-01

    The authors develop an analytical model of satellite communication networks using time-division multiple-access (TDMA) and multiple-carrier TDMA (MC-TDMA) systems to support circuit-switched traffic. The model defines the functions required to implement fixed assignments (FAs), variable destinations (VDs), and demand assignments (DAs). The authors describe a general system model for the various allocation schemes and traffic activity. They define analytical expressions for the blocking and freeze-out probabilities. This is followed by the derivation of the satellite capacity requirements at a specified performance level for FA, VD, and DA systems with and without digital speech interpolation. The analysis of disjoint pools (DP) and combined pools (CP) in DA systems is presented and attention is given to MC-TDMA with limited connectivity demand assignment. Expressions for the required satellite capacity for specified traffic and performance are derived along with numerical results. The degree of complexity and implementation alternatives for the various allocation schemes are considered.
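
    The blocking analysis referred to above builds on standard circuit-switched loss formulas; the snippet below gives the generic Erlang-B blocking probability as a building block (the paper's expressions for freeze-out probability and digital speech interpolation are not reproduced here).

```python
# Generic Erlang-B blocking probability via the standard stable recursion.
def erlang_b(offered_erlangs, channels):
    """Blocking probability for Poisson traffic offered to `channels` circuits."""
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

# Example: how many satellite channels keep blocking below 1% for 20 Erlangs of traffic?
for c in range(20, 40):
    if erlang_b(20.0, c) < 0.01:
        print("channels needed:", c)
        break
```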

  14. Wireless Body Area Network (WBAN) design techniques and performance evaluation.

    PubMed

    Khan, Jamil Yusuf; Yuce, Mehmet R; Bulger, Garrick; Harding, Benjamin

    2012-06-01

    In recent years, interest in the application of Wireless Body Area Networks (WBANs) for patient monitoring applications has grown significantly. A WBAN can be used to develop patient monitoring systems which offer flexibility to medical staff and mobility to patients. Patient monitoring could involve a range of activities including data collection from various body sensors for storage and diagnosis, transmitting data to remote medical databases, and controlling medical appliances. Also, WBANs could operate in an interconnected mode to enable remote patient monitoring using telehealth/e-health applications. A WBAN can also be used to monitor athletes' performance and assist them in training activities. For such applications it is very important that a WBAN collects and transmits data reliably and in a timely manner to a monitoring entity. In order to address these issues, this paper presents WBAN design techniques for medical applications. We examine the WBAN design issues with particular emphasis on the design of MAC protocols and power consumption profiles of WBANs. Some simulation results are presented to further illustrate the performance of various WBAN design techniques. PMID:20953680

  15. Clinical Performance of a New Biomimetic Double Network Material

    PubMed Central

    Dirxen, Christine; Blunck, Uwe; Preissner, Saskia

    2013-01-01

    Background: The development of ceramics over recent years has been remarkable. However, the focus was placed on the hardness and strength of the restorative materials, resulting in high antagonistic tooth wear. This is critical for patients with bruxism. Objectives: The purpose of this study was to evaluate the clinical performance of the new double hybrid material for non-invasive treatment approaches. Material and Methods: The new approach of the material tested was to modify ceramics to create a biomimetic material that has physical properties similar to dentin and enamel and is still as strong as conventional ceramics. Results: The produced crowns had a thickness ranging from 0.5 to 1.5 mm. To evaluate the clinical performance and durability of the crowns, the patient was examined half a year later. The crowns were still intact, the soft tissues appeared healthy, and this was achieved without any loss of tooth structure. Conclusions: The material can be milled to thin layers, but is still strong enough to prevent cracks, which are stopped by the interpenetrating polymer within the network. Depending on the clinical situation, minimally invasive to non-invasive restorations can be milled. Clinical Relevance: Dentistry aims at the preservation of tooth structure. Patients suffering from loss of tooth structure (dental erosion, Amelogenesis imperfecta) and even young patients could benefit from minimally invasive crowns. Due to a Vickers hardness between dentin and enamel, antagonistic tooth wear is very low. This might be of interest for treating patients with bruxism. PMID:24167534

  16. Performance Improvement in Geographic Routing for Vehicular Ad Hoc Networks

    PubMed Central

    Kaiwartya, Omprakash; Kumar, Sushil; Lobiyal, D. K.; Abdullah, Abdul Hanan; Hassan, Ahmed Nazar

    2014-01-01

    Geographic routing is one of the most investigated themes by researchers for reliable and efficient dissemination of information in Vehicular Ad Hoc Networks (VANETs). Recently, different Geographic Distance Routing (GEDIR) protocols have been suggested in the literature. These protocols focus on reducing the forwarding region towards destination to select the Next Hop Vehicles (NHV). Most of these protocols suffer from the problem of elevated one-hop link disconnection, high end-to-end delay and low throughput even at normal vehicle speed in high vehicle density environment. This paper proposes a Geographic Distance Routing protocol based on Segment vehicle, Link quality and Degree of connectivity (SLD-GEDIR). The protocol selects a reliable NHV using the criteria segment vehicles, one-hop link quality and degree of connectivity. The proposed protocol has been simulated in NS-2 and its performance has been compared with the state-of-the-art protocols: P-GEDIR, J-GEDIR and V-GEDIR. The empirical results clearly reveal that SLD-GEDIR has lower link disconnection and end-to-end delay, and higher throughput as compared to the state-of-the-art protocols. It should be noted that the performance of the proposed protocol is preserved irrespective of vehicle density and speed. PMID:25429415

  17. Assessing Infrasound Network Performance Using the Ambient Ocean Noise

    NASA Astrophysics Data System (ADS)

    Stopa, J. E.; Cheung, K.; Garces, M. A.; Williams, B.; Le Pichon, A.

    2013-12-01

    Infrasonic microbarom signals are attributed to the nonlinear resonant interaction of ocean surface waves. IMS stations around the globe routinely detect microbaroms with a dominant frequency of ~0.2 Hz from regions of marine storminess. We have produced the predicted global microbarom source field for 2000-2010 using the spectral wave model WAVEWATCH III in hindcast mode. The wave hindcast utilizes NCEP's Climate Forecast System Reanalysis (CFSR) winds to drive the ocean waves. CFSR is a coupled global modeling system created with state-of-the-art numerical models and assimilation techniques to construct a dataset that is homogeneous in time and space at 0.5° resolution. The microbarom source model of Waxler and Gilbert (2005) is implemented to estimate the ocean noise created by counter-propagating waves with similar wave frequencies. Comparisons between predicted and observed global microbarom fields suggest the model results are reasonable; however, further error analysis between the predicted and observed infrasound signals is required to quantitatively assess the predictions. The 11-year hindcast suggests global sources are stable in both magnitude and spatial distribution. These statistically stable features represent the microbarom climatology of the ambient ocean noise. This supports the use of numerical forecast models to assess the IMS infrasound network performance and explosion detection capabilities in the 0.1-0.4 Hz frequency band above the ambient ocean noise. [Figure: theoretical/modeled microbarom source strength (colors) versus infrasonic observations from the IMS network (directional histograms); contours represent the maximum intersections from the recorded acoustic signals for a large extra-tropical event on December 7, 2009.]

  18. Is functional integration of resting state brain networks an unspecific biomarker for working memory performance?

    PubMed

    Alavash, Mohsen; Doebler, Philipp; Holling, Heinz; Thiel, Christiane M; Gießing, Carsten

    2015-03-01

    Is there one optimal topology of functional brain networks at rest from which our cognitive performance would profit? Previous studies suggest that functional integration of resting state brain networks is an important biomarker for cognitive performance. However, it is still unknown whether higher network integration is an unspecific predictor for good cognitive performance or, alternatively, whether specific network organization during rest predicts only specific cognitive abilities. Here, we investigated the relationship between network integration at rest and cognitive performance using two tasks that measured different aspects of working memory; one task assessed visual-spatial and the other numerical working memory. Network clustering, modularity and efficiency were computed to capture network integration on different levels of network organization, and to statistically compare their correlations with the performance in each working memory test. The results revealed that each working memory aspect profits from a different resting state topology, and the tests showed significantly different correlations with each of the measures of network integration. While higher global network integration and modularity predicted significantly better performance in visual-spatial working memory, both measures showed no significant correlation with numerical working memory performance. In contrast, numerical working memory was superior in subjects with highly clustered brain networks, predominantly in the intraparietal sulcus, a core brain region of the working memory network. Our findings suggest that a specific balance between local and global functional integration of resting state brain networks facilitates special aspects of cognitive performance. In the context of working memory, while visual-spatial performance is facilitated by globally integrated functional resting state brain networks, numerical working memory profits from increased capacities for local processing
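
    The three graph measures named above (clustering, modularity, global efficiency) can be computed with standard graph tooling; the sketch below uses networkx on a random graph as a stand-in for a thresholded resting-state connectivity network, so the node count and edge density are illustrative assumptions.

```python
# Graph-theoretical integration measures on a surrogate connectivity network.
import networkx as nx
from networkx.algorithms import community

G = nx.erdos_renyi_graph(n=90, p=0.1, seed=0)      # ~90 nodes, like a typical brain atlas

clustering = nx.average_clustering(G)               # local integration
efficiency = nx.global_efficiency(G)                # global integration
partition = community.greedy_modularity_communities(G)
modularity = community.modularity(G, partition)     # strength of modular organization

print(f"clustering={clustering:.3f}  efficiency={efficiency:.3f}  modularity={modularity:.3f}")
```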

  19. Performance measurements of mixed data acquisition and LAN traffic on a credit-based flow-controlled ATM network

    SciTech Connect

    Nomachi, M.; Sugaya, Y.; Togawa, H.; Yasuda, K.; Mandjavidze, I.

    1998-08-01

    The high-speed network is a key component in networked data acquisition systems. An ATM switch is a candidate for the network system in a DAQ (data acquisition system). The authors have studied the DAQ performance of the ATM network at RCNP (Research Center for Nuclear Physics), Osaka University. Data traffic in a DAQ system has a very different pattern from other network traffic and may slow down network performance. The authors have studied the network performance under several traffic patterns.

  20. Performance evaluation of NASA/KSC CAD/CAE graphics local area network

    NASA Technical Reports Server (NTRS)

    Zobrist, George

    1988-01-01

    This study had as an objective the performance evaluation of the existing CAD/CAE graphics network at NASA/KSC. This evaluation will also aid in projecting planned expansions, such as the Space Station project, onto the existing CAD/CAE network. The objectives were achieved by collecting packet traffic on the various integrated sub-networks. This included items such as the total number of packets on the various subnetworks, source/destination of packets, percent utilization of network capacity, peak traffic rates, and packet size distribution. The NASA/KSC LAN was stressed to determine the usable bandwidth of the Ethernet network, and an average design station workload was used to project the increased traffic on the existing network and the planned T1 link. This performance evaluation of the network will aid the NASA/KSC network managers in planning for the integration of future workload requirements into the existing network.

  1. Experimental performance of a ventral nozzle with pitch and yaw vectoring capability for SSTOVL aircraft

    NASA Technical Reports Server (NTRS)

    Esker, Barbara S.; Mcardle, Jack G.

    1993-01-01

    Aircraft with supersonic, short takeoff, and vertical landing capability were proposed to replace some of the current high-performance aircraft. Several of these configurations use a ventral nozzle in the lower fuselage, aft of the center of gravity, for lift or pitch control. Internal vanes canted at 20 deg were added to a swivel-type ventral nozzle and tested at tailpipe-to-ambient pressure ratios up to 5.0 on the Powered Lift Facility at NASA LeRC. The addition of sets of four and seven vanes decreased the discharge coefficient by at least 6 percent and did not affect the thrust coefficient. Side force produced by the nozzle with vanes was 14 percent or more of the vertical force. In addition, this side force caused only a small loss in vertical force in comparison to the nozzle without vanes. The net thrust force was 8 deg from the vertical for four vanes and 10.5 deg for seven.

  2. Experimental performance of a ventral nozzle with pitch and yaw vectoring capability for SSTOVL aircraft

    NASA Technical Reports Server (NTRS)

    Esker, Barbara S.; Mcardle, Jack G.

    1993-01-01

    Aircraft with supersonic, short takeoff and vertical landing capability have been proposed to replace some of the current high-performance aircraft. Several of these configurations use a ventral nozzle in the lower fuselage, aft of the center of gravity, for lift or pitch control. Internal vanes canted at 20 deg were added to a swivel-type ventral nozzle and tested at tailpipe-to-ambient pressure ratios up to 5.0 on the Powered Lift Facility at NASA Lewis Research Center. The addition of sets of four or seven vanes decreased the discharge coefficient of the nozzle by at least 6 percent and did not affect the thrust coefficient. Side force produced by the nozzle with vanes was 14 percent or more of the vertical force. In addition, this side force caused only a small loss in vertical force in comparison to the nozzle without vanes. The net thrust force was 8 deg from the vertical for four vanes and 10.5 deg for seven.

  3. Effects of Infection by Trypanosoma cruzi and Trypanosoma rangeli on the Reproductive Performance of the Vector Rhodnius prolixus

    PubMed Central

    Fellet, Maria Raquel; Lorenzo, Marcelo Gustavo; Elliot, Simon Luke; Carrasco, David; Guarneri, Alessandra Aparecida

    2014-01-01

    The insect Rhodnius prolixus is responsible for the transmission of Trypanosoma cruzi, which is the etiological agent of Chagas disease in areas of Central and South America. Besides this, it can be infected by other trypanosomes such as Trypanosoma rangeli. The effects of these parasites on vectors are poorly understood and are often controversial so here we focussed on possible negative effects of these parasites on the reproductive performance of R. prolixus, specifically comparing infected and uninfected couples. While T. cruzi infection did not delay pre-oviposition time of infected couples at either temperature tested (25 and 30°C) it did, at 25°C, increase the e-value in the second reproductive cycle, as well as hatching rates. Meanwhile, at 30°C, T. cruzi infection decreased the e-value of insects during the first cycle and also the fertility of older insects. When couples were instead infected with T. rangeli, pre-oviposition time was delayed, while reductions in the e-value and hatching rate were observed in the second and third cycles. We conclude that both T. cruzi and T. rangeli can impair reproductive performance of R. prolixus, although for T. cruzi, this is dependent on rearing temperature and insect age. We discuss these reproductive costs in terms of potential consequences on triatomine behavior and survival. PMID:25136800

   4. Battery Performance Modelling and Simulation: a Neural Network Based Approach

    NASA Astrophysics Data System (ADS)

    Ottavianelli, Giuseppe; Donati, Alessandro

    2002-01-01

    This project developed against the background of ongoing research within the Control Technology Unit (TOS-OSC) of the Special Projects Division at the European Space Operations Centre (ESOC) of the European Space Agency. The purpose of this research is to develop and validate an Artificial Neural Network (ANN) tool able to model, simulate and predict the Cluster II battery system's performance degradation. (The Cluster II mission consists of four spacecraft flying in tetrahedral formation, aimed at observing and studying the interaction between the Sun and the Earth by passing in and out of our planet's magnetic field.) This prototype tool, named BAPER and developed with a commercial neural network toolbox, could be used to support short- and medium-term mission planning in order to improve and maximise battery lifetime, determining the best future charge/discharge cycles for the batteries given their present states, in view of a Cluster II mission extension. This study focuses on the five Silver-Cadmium batteries onboard Tango, the fourth Cluster II satellite, but time constraints have so far allowed an assessment of only the first battery. In their most basic form, ANNs are hyper-dimensional curve fits for non-linear data. With their remarkable ability to derive meaning from complicated or imprecise historical data, ANNs can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. ANNs learn by example, and this is why they can be described as inductive, or data-based, models for the simulation of input/target mappings. A trained ANN can be thought of as an "expert" in the category of information it has been given to analyse, and this expert can then be used, as in this project, to provide projections for new situations of interest and answer "what if" questions. The most appropriate algorithm, in terms of training speed and memory storage requirements, is clearly the Levenberg

  5. A Bayesian Network Approach to Modeling Learning Progressions and Task Performance. CRESST Report 776

    ERIC Educational Resources Information Center

    West, Patti; Rutstein, Daisy Wise; Mislevy, Robert J.; Liu, Junhui; Choi, Younyoung; Levy, Roy; Crawford, Aaron; DiCerbo, Kristen E.; Chappel, Kristina; Behrens, John T.

    2010-01-01

    A major issue in the study of learning progressions (LPs) is linking student performance on assessment tasks to the progressions. This report describes the challenges faced in making this linkage using Bayesian networks to model LPs in the field of computer networking. The ideas are illustrated with exemplar Bayesian networks built on Cisco…

  6. Evaluation of Techniques to Detect Significant Network Performance Problems using End-to-End Active Network Measurements

    SciTech Connect

    Cottrell, R.Les; Logg, Connie; Chhaparia, Mahesh; Grigoriev, Maxim; Haro, Felipe; Nazir, Fawad; Sandford, Mark

    2006-01-25

    End-to-end fault and performance problem detection in wide area production networks is becoming increasingly hard as the complexity of the paths, the diversity of the performance, and the dependency on the network increase. Several monitoring infrastructures have been built to monitor different network metrics and collect monitoring information from thousands of hosts around the globe. Typically there are hundreds to thousands of time-series plots of network metrics which need to be looked at to identify network performance problems or anomalous variations in the traffic. Furthermore, most commercial products rely on a comparison with user-configured static thresholds and often require access to SNMP-MIB information, to which a typical end-user does not usually have access. In this paper we propose new techniques to detect network performance problems proactively in close to real time, without relying on static thresholds or SNMP-MIB information. We describe and compare several different algorithms that we have implemented to detect persistent network problems using anomalous-variation analysis in real end-to-end Internet performance measurements. We also provide methods and/or guidance for how to set the user-settable parameters. The measurements are based on active probes running on 40 production network paths with bottlenecks varying from 0.5 Mbit/s to 1000 Mbit/s. For well-behaved data (no missed measurements and no very large outliers) with small seasonal changes, most algorithms identify similar events. We compare the algorithms' robustness with respect to false positives and missed events, especially when there are large seasonal effects in the data. Our proposed techniques cover a wide variety of network paths and traffic patterns. We also discuss the applicability of the algorithms in terms of their intuitiveness, their speed of execution as implemented, and areas of applicability. Our encouraging results compare and evaluate the accuracy of our detection
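
    In the same spirit, though not one of the paper's algorithms, the toy detector below flags points that deviate from a rolling robust baseline rather than comparing against a fixed, user-configured threshold; window size and the deviation factor are illustrative assumptions.

```python
# Threshold-free change detection: flag points far from a rolling median,
# measured in robust (MAD-based) standard deviations.
import numpy as np

def flag_anomalies(series, window=20, k=4.0):
    series = np.asarray(series, dtype=float)
    flags = np.zeros(len(series), dtype=bool)
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        med = np.median(baseline)
        mad = np.median(np.abs(baseline - med)) + 1e-9   # robust spread estimate
        flags[i] = abs(series[i] - med) > k * 1.4826 * mad
    return flags

rng = np.random.default_rng(2)
rtt = np.r_[rng.normal(50, 2, 200), rng.normal(80, 2, 50)]   # step change at index 200
print("first flagged index:", int(np.argmax(flag_anomalies(rtt))))
```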

  7. Molten carbonate fuel cell networks: Principles, analysis and performance

    SciTech Connect

    Wimer, J.G.; Williams, M.C.; Archer, D.H.; Osterle, J.F.

    1993-09-01

    Key to the concept of networking is the arrangement of multiple fuel cell stacks with regard to the flow of reactant streams. In a fuel cell network, reactant streams are ducted so that they are fed and recycled through stacks in series. Stacks networked in series more closely approach a reversible process, which increases efficiency. Higher total reactant utilizations can be achieved by stacks networked in series. Placing stacks in series also allows reactant streams to be conditioned at different stages of utilization. Between stacks, heat can be consumed or removed (methane injection, heat exchange), which improves thermal balance. The composition of streams can be adjusted between stacks by mixing exhaust streams or by injecting reactant streams. Computer simulations demonstrated that a combined cycle system with MCFC stacks networked in series is more efficient than an identical system with MCFC stacks in parallel.

  8. Performance evaluation of power control algorithms in wireless cellular networks

    NASA Astrophysics Data System (ADS)

    Temaneh-Nyah, C.; Iita, V.

    2014-10-01

    Power control in a mobile communication network aims to control the transmission power levels in such a way that the required quality of service (QoS) for the users is guaranteed with the lowest possible transmission powers. Most studies of power control algorithms in the literature are based on simplified assumptions, which compromises the validity of the results when applied in a real environment. In this paper, a CDMA network was simulated. The real environment was accounted for by defining the analysis area, specifying the base stations and mobile stations by their geographical coordinates, and accounting for the mobility of the mobile stations. The simulation also allowed a number of network parameters, including the network traffic and the wireless channel models, to be modified. Finally, we present the simulation results of a convergence-speed-based comparative analysis of three uplink power control algorithms.
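
    As a generic stand-in for the kind of uplink algorithms whose convergence speed is compared above (the paper's specific algorithms are not reproduced), the sketch below runs a classical target-SINR-tracking (Foschini-Miljanic style) power-control iteration; the link-gain matrix, noise level and SINR target are illustrative assumptions.

```python
# Distributed target-SINR power control: each user scales its own power by
# (target SINR / measured SINR) until all users meet the target.
import numpy as np

G = np.array([[1.00, 0.10, 0.05],      # G[i, j]: gain from user j to user i's base station
              [0.08, 1.00, 0.10],
              [0.06, 0.12, 1.00]])
noise = 0.01
target_sinr = 5.0                       # required SINR (linear scale)
p = np.full(3, 0.1)                     # initial transmit powers

for it in range(200):
    interference = G @ p - np.diag(G) * p + noise
    sinr = np.diag(G) * p / interference
    if np.all(np.abs(sinr - target_sinr) < 1e-3):
        print(f"converged after {it} iterations, powers = {np.round(p, 4)}")
        break
    p = p * target_sinr / sinr          # purely local update per user
```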

  9. Index Sets and Vectorization

    SciTech Connect

    Keasler, J A

    2012-03-27

    Vectorization is data parallelism (SIMD, SIMT, etc.) - an extension of the ISA enabling the same instruction to be performed on multiple data items simultaneously. Many/most CPUs support vectorization in some form. Vectorization is difficult to enable, but can yield large efficiency gains. Extra programmer effort is required because: (1) not all algorithms can be vectorized (regular algorithm structure and fine-grain parallelism must be used); (2) most CPUs have data alignment restrictions for load/store operations (obey or risk incorrect code); (3) special directives are often needed to enable vectorization; and (4) vector instructions are architecture-specific. Vectorization is the best way to optimize for power and performance due to reduced clock cycles. When data is organized properly, a vector load instruction (e.g., movaps) can replace 'normal' load instructions (e.g., movsd). Vector operations can potentially have a smaller footprint in the instruction cache when fewer instructions need to be executed. Hybrid index sets insulate users from architecture-specific details. We have applied hybrid index sets to achieve optimal vectorization. We can extend this concept to handle other programming models.
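
    The same data-parallel idea can be seen at the array-library level (rather than with CPU intrinsics or the hybrid index sets described above): one whole-array operation replaces an explicit per-element loop, as in the Python sketch below.

```python
# One "instruction" (a whole-array NumPy operation) applied to many elements at once
# versus an explicit element-at-a-time loop over the same data.
import numpy as np
import time

a = np.random.default_rng(0).random(1_000_000)
b = np.random.default_rng(1).random(1_000_000)

t0 = time.perf_counter()
scalar = [x * y + 1.0 for x, y in zip(a, b)]          # element-at-a-time loop
t1 = time.perf_counter()
vector = a * b + 1.0                                   # whole-array (vectorized) form
t2 = time.perf_counter()

assert np.allclose(scalar, vector)
print(f"loop: {t1 - t0:.3f}s   vectorized: {t2 - t1:.3f}s")
```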

  10. Static internal performance of a thrust vectoring and reversing two-dimensional convergent-divergent nozzle with an aft flap

    NASA Technical Reports Server (NTRS)

    Re, R. J.; Leavitt, L. D.

    1986-01-01

    The static internal performance of a multifunction nozzle having some of the geometric characteristics of both two-dimensional convergent-divergent and single expansion ramp nozzles has been investigated in the static-test facility of the Langley 16-Foot Transonic Tunnel. The internal expansion portion of the nozzle consisted of two symmetrical flat surfaces of equal length, and the external expansion portion of the nozzle consisted of a single aft flap. The aft flap could be varied in angle independently of the upper internal expansion surface to which it was attached. The effects of internal expansion ratio, nozzle thrust-vector angle (-30 deg. to 30 deg., aft flap shape, aft flap angle, and sidewall containment were determined for dry and afterburning power settings. In addition, a partial afterburning power setting nozzle, a fully deployed thrust reverser, and four vertical takeoff or landing nozzle, configurations were investigated. Nozzle pressure ratio was varied up to 10 for the dry power nozzles and 7 for the afterburning power nozzles.

  11. The Current State of Human Performance Technology: A Citation Network Analysis of "Performance Improvement Quarterly," 1988-2010

    ERIC Educational Resources Information Center

    Cho, Yonjoo; Jo, Sung Jun; Park, Sunyoung; Kang, Ingu; Chen, Zengguan

    2011-01-01

    This study conducted a citation network analysis (CNA) of human performance technology (HPT) to examine its current state of the field. Previous reviews of the field have used traditional research methods, such as content analysis, survey, Delphi, and citation analysis. The distinctive features of CNA come from using a social network analysis…

  12. The Global Seismographic Network (GSN): Challenges and Methods for Maintaining High Quality Network Performance

    NASA Astrophysics Data System (ADS)

    Hafner, Katrin; Davis, Peter; Wilson, David; Sumy, Danielle; Woodward, Bob

    2016-04-01

    The Global Seismographic Network (GSN) is a 152 station, globally-distributed, permanent network of state-of-the-art seismological and geophysical sensors. The GSN has been operating for over 20 years via an ongoing successful partnership between IRIS, the USGS, the University of California at San Diego, NSF and numerous host institutions worldwide. The central design goal of the GSN may be summarized as "to record with full fidelity and bandwidth all seismic signals above the Earth noise, accompanied by some efforts to reduce Earth noise by deployment strategies". While many of the technical design goals have been met, we continue to strive for higher data quality with a combination of new sensors and improved installation techniques designed to achieve the lowest noise possible under existing site conditions. Data from the GSN are used not only for research, but on a daily basis as part of the operational missions of the USGS NEIC, NOAA tsunami warning centers, the Comprehensive Nuclear-Test-Ban-Treaty Organization as well as other organizations. In the recent period of very tight funding budgets, the primary challenges for the GSN include maintaining these operational capabilities while simultaneously developing and replacing the primary sensors, maintaining high quality data and repairing station infrastructure. Aging of GSN equipment and station infrastructure has resulted in renewed emphasis on developing, evaluating and implementing quality control tools such as MUSTANG and DQA to maintain the high data quality from the GSN stations. These tools allow the network operators to routinely monitor and analyze waveform data to detect and track problems and develop action plans as issues are found. We will present summary data quality metrics for the GSN as obtained via these quality assurance tools. In recent years, the GSN has standardized dataloggers to the Quanterra Q330HR data acquisition system at all but three stations resulting in significantly improved

  13. Support vector machine to predict diesel engine performance and emission parameters fueled with nano-particles additive to diesel fuel

    NASA Astrophysics Data System (ADS)

    Ghanbari, M.; Najafi, G.; Ghobadian, B.; Mamat, R.; Noor, M. M.; Moosavian, A.

    2015-12-01

    This paper studies the use of an adaptive Support Vector Machine (SVM) to predict the performance parameters and exhaust emissions of a diesel engine operating on nano-diesel blended fuels. In order to predict the engine parameters, the whole experimental dataset was randomly divided into training and testing data. For SVM modelling, different values of the radial basis function (RBF) kernel width and penalty parameter (C) were considered and the optimum values were then found. The results demonstrate that SVM is capable of predicting the diesel engine performance and emissions. In the experimental step, carbon nanotubes (CNT) (40, 80 and 120 ppm) and silver nanoparticles (40, 80 and 120 ppm) were prepared and added as additives to the diesel fuel. A six-cylinder, four-stroke diesel engine was fuelled with these new blended fuels and operated at different engine speeds. Experimental test results indicated that adding nanoparticles to diesel fuel increased diesel engine power and torque output. For nano-diesel it was found that the brake specific fuel consumption (bsfc) decreased compared to the neat diesel fuel. The results showed that with an increase of nanoparticle concentration (from 40 ppm to 120 ppm) in diesel fuel, CO2 emission increased. CO emission with the nanoparticle fuels was significantly lower than with pure diesel fuel. UHC emission decreased with the silver nano-diesel blended fuel, while it increased with the fuels containing CNT nanoparticles. The trend of NOx emission was the inverse of the UHC emission: with nanoparticles added to the blended fuels, NOx increased compared to the neat diesel fuel. The tests revealed that silver and CNT nanoparticles can be used as additives in diesel fuel to improve complete combustion of the fuel and reduce the exhaust emissions significantly.
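
    The tuning of the RBF kernel width and penalty C described above is commonly done with a cross-validated grid search; the sketch below illustrates that step on synthetic stand-ins for the engine inputs and outputs, not the authors' data or their adaptive SVM.

```python
# Cross-validated grid search over the SVR penalty C and RBF kernel width gamma.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(1000, 3000, 120),        # engine speed (rpm), illustrative
                     rng.choice([0, 40, 80, 120], 120)])  # nanoparticle dose (ppm), illustrative
y = 0.02 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 1, 120)   # synthetic stand-in for torque

pipe = Pipeline([("scale", StandardScaler()), ("svr", SVR(kernel="rbf"))])
grid = GridSearchCV(pipe,
                    {"svr__C": [1, 10, 100], "svr__gamma": [0.01, 0.1, 1.0]},
                    cv=5, scoring="r2")
grid.fit(X, y)
print("best params:", grid.best_params_, " CV R^2:", round(grid.best_score_, 3))
```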

  14. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    PubMed Central

    2011-01-01

    Background: Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures for Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results: Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (Median (Me) = 0.76) and area under the ROC curve (Me = 0.90); however, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with a high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with an acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed

  15. [C-terminal fragment of tetanus toxin: its use in neuronal network analysis and its potential as non-viral vector].

    PubMed

    Roux, Sylvie; Saint Cloment, Cécile; Curie, Thomas; Girard, Emmanuelle; Brûlet, Philippe; Molgó, Jordi

    2005-01-01

    The atoxic C-terminal fragment of tetanus neurotoxin, or TTC fragment, presents retrograde and transsynaptic properties similar to those of the holotoxin. Detection of this fragment is easier when it is associated with a fluorescent marker or with beta-galactosidase activity by genetic fusion or chemical conjugation. These tracers have therefore been used to study and analyse the synaptic connections of a neural network. In this article, we briefly review the various methods used with this aim, including injection of the fusion protein, in vivo adenoviral expression and transgenesis. Since neural activity is essential for neuronal TTC binding and internalization, the functionality of connections can also be evaluated, and modifications of retrograde transport can be detected using this fragment. The TTC fragment is thus an excellent tracer for analysing the connectivity and functionality of a neural network. It was also proposed early on as a potential therapeutic vector to transport and deliver a biological activity or gene within a neural network. With this aim, the efficiency of a translocation domain in inducing cytosolic release of the associated activity has been evaluated, and the use of the TTC fragment to target a neurotrophic factor specifically to neurons, thereby avoiding secondary effects, has been tested with interesting results. PMID:16114262

  16. Performance optimisation through EPT-WBC in mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Agarwal, Ratish; Gupta, Roopam; Motwani, Mahesh

    2016-03-01

    Mobile ad hoc networks are self-organised, infrastructure-less networks in which each mobile host acts as a router to provide connectivity within the network. Nodes that are out of each other's range communicate with the help of intermediate routers (nodes). Routing protocols are the rules that determine how these routing activities are performed. In a cluster-based architecture, selected nodes (clusterheads) are designated to bear the extra burden of network activities such as routing. The selection of clusterheads is a critical issue that significantly affects the performance of the network. This paper proposes an enhanced performance and trusted weight-based clustering (EPT-WBC) approach in which a number of factors such as trust, load balancing, energy consumption, mobility and battery power are considered in the selection of clusterheads. The performance of the proposed scheme is compared with other existing approaches to demonstrate the effectiveness of the work.
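
    The weight-based selection idea can be illustrated with a toy computation; the weighting coefficients and node metrics below are invented for illustration and are not the EPT-WBC formula itself.

      # Sketch: combine per-node factors into a single weight and pick the
      # highest-weighted node as clusterhead. All values are illustrative.
      nodes = {
          "n1": {"trust": 0.9, "load": 0.4, "energy_use": 0.3, "mobility": 0.2, "battery": 0.8},
          "n2": {"trust": 0.6, "load": 0.7, "energy_use": 0.5, "mobility": 0.6, "battery": 0.5},
          "n3": {"trust": 0.8, "load": 0.2, "energy_use": 0.4, "mobility": 0.1, "battery": 0.9},
      }
      # Higher trust/battery is better; higher load, energy use and mobility are worse.
      weights = {"trust": 0.3, "load": -0.2, "energy_use": -0.15, "mobility": -0.15, "battery": 0.2}

      def combined_weight(metrics):
          return sum(weights[k] * metrics[k] for k in weights)

      clusterhead = max(nodes, key=lambda n: combined_weight(nodes[n]))
      print("selected clusterhead:", clusterhead)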

  17. Symbolic computer vector analysis

    NASA Technical Reports Server (NTRS)

    Stoutemyer, D. R.

    1977-01-01

    A MACSYMA program is described which performs symbolic vector algebra and vector calculus. The program can combine and simplify symbolic expressions including dot products and cross products, together with the gradient, divergence, curl, and Laplacian operators. The distribution of these operators over sums or products is under user control, as are various other expansions, including expansion into components in any specific orthogonal coordinate system. There is also a capability for deriving the scalar or vector potential of a vector field. Examples include derivation of the partial differential equations describing fluid flow and magnetohydrodynamics, for 12 different classic orthogonal curvilinear coordinate systems.
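
    The symbolic operations the MACSYMA program provides (gradient, divergence, curl, Laplacian, simplification of dot and cross products) can be sketched today with SymPy's vector module; this is an independent illustration, not the program described above.

      # Sketch: symbolic vector calculus on scalar and vector fields with SymPy.
      from sympy import sin
      from sympy.vector import CoordSys3D, gradient, divergence, curl

      N = CoordSys3D("N")
      x, y, z = N.x, N.y, N.z

      f = x**2 * y + sin(z)              # scalar field
      F = x*y*N.i + y*z*N.j + z*x*N.k    # vector field

      print("grad f      =", gradient(f))
      print("div F       =", divergence(F))
      print("curl F      =", curl(F))
      print("laplacian f =", divergence(gradient(f)))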

  18. The performance evaluation of a new neural network based traffic management scheme for a satellite communication network

    NASA Technical Reports Server (NTRS)

    Ansari, Nirwan; Liu, Dequan

    1991-01-01

    A neural-network-based traffic management scheme for a satellite communication network is described. The scheme consists of two levels of management. The front end of the scheme is a derivation of Kohonen's self-organization model used to configure maps for the satellite communication network dynamically. The model consists of three stages. The first stage is a pattern recognition task, in which the exemplar map that best meets the current network requirements is selected. The second stage is the analysis of the discrepancy between the chosen exemplar map and the state of the network, and the adaptive modification of the chosen exemplar map to conform closely to the network requirement (input data pattern) by means of Kohonen's self-organization. In the third stage, on the basis of certain performance criteria, it is decided whether a new map is generated to replace the originally chosen map. A state-dependent routing algorithm, which assigns each incoming call to a suitable path, is used to make the network more efficient and to lower the call block rate. Simulation results demonstrate that the scheme, which combines self-organization and the state-dependent routing mechanism, provides better performance in terms of call block rate than schemes that have only the self-organization mechanism or only the routing mechanism.

  19. Impact of Network Activity Levels on the Performance of Passive Network Service Dependency Discovery

    SciTech Connect

    Carroll, Thomas E.; Chikkagoudar, Satish; Arthur-Durett, Kristine M.

    2015-11-02

    Network services often do not operate alone, but instead, depend on other services distributed throughout a network to correctly function. If a service fails, is disrupted, or degraded, it is likely to impair other services. The web of dependencies can be surprisingly complex---especially within a large enterprise network---and evolve with time. Acquiring, maintaining, and understanding dependency knowledge is critical for many network management and cyber defense activities. While automation can improve situation awareness for network operators and cyber practitioners, poor detection accuracy reduces their confidence and can complicate their roles. In this paper we rigorously study the effects of network activity levels on the detection accuracy of passive network-based service dependency discovery methods. The accuracy of all except for one method was inversely proportional to network activity levels. Our proposed cross correlation method was particularly robust to the influence of network activity. The proposed experimental treatment will further advance a more scientific evaluation of methods and provide the ability to determine their operational boundaries.
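
    The cross-correlation idea mentioned above can be sketched as follows; the traffic series are synthetic stand-ins for observed per-service flow counts, and the method shown is a generic lagged correlation rather than the paper's exact estimator.

      # Sketch: if service B depends on service A, B's request activity should
      # correlate with A's at some small positive lag.
      import numpy as np

      rng = np.random.default_rng(0)
      a = rng.poisson(5, 1000).astype(float)        # requests/second seen at service A
      b = np.roll(a, 3) + rng.normal(0, 1, 1000)    # service B lags A by ~3 seconds

      a_c = (a - a.mean()) / a.std()
      b_c = (b - b.mean()) / b.std()

      lags = range(0, 10)
      corr = [np.mean(a_c[:len(a_c) - k] * b_c[k:]) for k in lags]
      best = max(lags, key=lambda k: corr[k])
      print(f"strongest correlation {corr[best]:.2f} at lag {best}s "
            "-> candidate dependency A->B")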

  20. DISCRETE EVENT SIMULATION OF OPTICAL SWITCH MATRIX PERFORMANCE IN COMPUTER NETWORKS

    SciTech Connect

    Imam, Neena; Poole, Stephen W

    2013-01-01

    In this paper, we present application of a Discrete Event Simulator (DES) for performance modeling of optical switching devices in computer networks. Network simulators are valuable tools in situations where one cannot investigate the system directly. This situation may arise if the system under study does not exist yet or the cost of studying the system directly is prohibitive. Most available network simulators are based on the paradigm of discrete-event-based simulation. As computer networks become increasingly larger and more complex, sophisticated DES tool chains have become available for both commercial and academic research. Some well-known simulators are NS2, NS3, OPNET, and OMNEST. For this research, we have applied OMNEST for the purpose of simulating multi-wavelength performance of optical switch matrices in computer interconnection networks. Our results suggest that the application of DES to computer interconnection networks provides valuable insight in device performance and aids in topology and system optimization.

  1. International network for capacity building for the control of emerging viral vector-borne zoonotic diseases: ARBO-ZOONET.

    PubMed

    Ahmed, J; Bouloy, M; Ergonul, O; Fooks, Ar; Paweska, J; Chevalier, V; Drosten, C; Moormann, R; Tordo, N; Vatansever, Z; Calistri, P; Estrada-Pena, A; Mirazimi, A; Unger, H; Yin, H; Seitzer, U

    2009-03-26

    Arboviruses are arthropod-borne viruses, which include West Nile fever virus (WNFV), a mosquito-borne virus, Rift Valley fever virus (RVFV), a mosquito-borne virus, and Crimean-Congo haemorrhagic fever virus (CCHFV), a tick-borne virus. These arthropod-borne viruses can cause disease in different domestic and wild animals and in humans, posing a threat to public health because of their epidemic and zoonotic potential. In recent decades, the geographical distribution of these diseases has expanded. Outbreaks of WNF have already occurred in Europe, especially in the Mediterranean basin. Moreover, CCHF is endemic in many European countries and serious outbreaks have occurred, particularly in the Balkans, Turkey and Southern Federal Districts of Russia. In 2000, RVF was reported for the first time outside the African continent, with cases being confirmed in Saudi Arabia and Yemen. This spread was probably caused by ruminant trade and highlights that there is a threat of expansion of the virus into other parts of Asia and Europe. In the light of global warming and globalisation of trade and travel, public interest in emerging zoonotic diseases has increased. This is especially evident regarding the geographical spread of vector-borne diseases. A multi-disciplinary approach is now imperative, and groups need to collaborate in an integrated manner that includes vector control, vaccination programmes, improved therapy strategies, diagnostic tools and surveillance, public awareness, capacity building and improvement of infrastructure in endemic regions. PMID:19341603

  2. Support vector machine regression (SVR/LS-SVM)--an alternative to neural networks (ANN) for analytical chemistry? Comparison of nonlinear methods on near infrared (NIR) spectroscopy data.

    PubMed

    Balabin, Roman M; Lomakina, Ekaterina I

    2011-04-21

    In this study, we make a general comparison of the accuracy and robustness of five multivariate calibration models: partial least squares (PLS) regression or projection to latent structures, polynomial partial least squares (Poly-PLS) regression, artificial neural networks (ANNs), and two novel techniques based on support vector machines (SVMs) for multivariate data analysis: support vector regression (SVR) and least-squares support vector machines (LS-SVMs). The comparison is based on fourteen (14) different datasets: seven sets of gasoline data (density, benzene content, and fractional composition/boiling points), two sets of ethanol gasoline fuel data (density and ethanol content), one set of diesel fuel data (total sulfur content), three sets of petroleum (crude oil) macromolecules data (weight percentages of asphaltenes, resins, and paraffins), and one set of petroleum resins data (resins content). Vibrational (near-infrared, NIR) spectroscopic data are used to predict the properties and quality coefficients of gasoline, biofuel/biodiesel, diesel fuel, and other samples of interest. The four systems presented here range greatly in composition, properties, strength of intermolecular interactions (e.g., van der Waals forces, H-bonds), colloid structure, and phase behavior. Due to the high diversity of chemical systems studied, general conclusions about SVM regression methods can be made. We try to answer the following question: to what extent can SVM-based techniques replace ANN-based approaches in real-world (industrial/scientific) applications? The results show that both SVR and LS-SVM methods are comparable to ANNs in accuracy. Due to the much higher robustness of the former, the SVM-based approaches are recommended for practical (industrial) application. This has been shown to be especially true for complicated, highly nonlinear objects. PMID:21350755
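
    A minimal version of such a comparison, using synthetic data in place of the NIR spectra and illustrative model settings, might look like this:

      # Sketch: RBF-kernel SVR versus a small ANN on a synthetic nonlinear
      # regression problem standing in for spectra/property data.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)
      X = rng.uniform(-2, 2, (500, 20))             # stand-in "spectra"
      y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 500)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

      svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10, gamma=0.1))
      ann = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                                       random_state=1))

      for name, model in [("SVR", svr), ("ANN", ann)]:
          model.fit(X_tr, y_tr)
          print(f"{name} test R^2 = {model.score(X_te, y_te):.2f}")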

  3. Computer systems for laboratory networks and high-performance NMR.

    PubMed

    Levy, G C; Begemann, J H

    1985-08-01

    Modern computer technology is significantly enhancing the associated tasks of spectroscopic data acquisition and data reduction and analysis. Distributed data processing techniques, particularly laboratory computer networking, are rapidly changing the scientist's ability to optimize results from complex experiments. Optimization of nuclear magnetic resonance spectroscopy (NMR) and magnetic resonance imaging (MRI) experimental results requires use of powerful, large-memory (virtual memory preferred) computers with integrated (and supported) high-speed links to magnetic resonance instrumentation. Laboratory architectures with larger computers, in order to extend data reduction capabilities, have facilitated the transition to NMR laboratory computer networking. Examples of a polymer microstructure analysis and in vivo 31P metabolic analysis are given. This paper also discusses laboratory data processing trends anticipated over the next 5-10 years. Full networking of NMR laboratories is just now becoming a reality. PMID:3840171

  4. Performance evaluation of the IBM RISC (reduced instruction set computer) System/6000: Comparison of an optimized scalar processor with two vector processors

    SciTech Connect

    Simmons, M.L.; Wasserman, H.J.

    1990-01-01

    RISC System/6000 computers are workstations with a reduced instruction set processor recently developed by IBM. This report details the performance of the 6000-series computers as measured using a set of portable, standard-Fortran, computationally-intensive benchmark codes that represent the scientific workload at the Los Alamos National Laboratory. On all but three of our benchmark codes, the 40-ns RISC System was able to perform as well as a single Convex C-240 processor, a vector processor that also has a 40-ns clock cycle, and on these same codes, it performed as well as the FPS-500, a vector processor with a 30-ns clock cycle. 17 refs., 2 figs., 6 tabs.

  5. Distinguishing Parkinson's disease from atypical parkinsonian syndromes using PET data and a computer system based on support vector machines and Bayesian networks

    PubMed Central

    Segovia, Fermín; Illán, Ignacio A.; Górriz, Juan M.; Ramírez, Javier; Rominger, Axel; Levin, Johannes

    2015-01-01

    Differentiating between Parkinson's disease (PD) and atypical parkinsonian syndromes (APS) is still a challenge, especially at early stages when patients show similar symptoms. In recent years, several computer systems have been proposed to improve the diagnosis of PD, but their accuracy is still limited. In this work we demonstrate a fully automatic computer system to assist the diagnosis of PD using 18F-DMFP PET data. First, a few regions of interest are selected by means of a two-sample t-test. The accuracy of the selected regions in separating PD from APS patients is then computed using a support vector machine classifier. The accuracy values are finally used to train a Bayesian network that can be used to predict the class of new unseen data. This methodology was evaluated using a database of 87 neuroimages, achieving accuracy rates over 78%. A fair comparison with other similar approaches is also provided. PMID:26594165

  6. Distinguishing Parkinson's disease from atypical parkinsonian syndromes using PET data and a computer system based on support vector machines and Bayesian networks.

    PubMed

    Segovia, Fermín; Illán, Ignacio A; Górriz, Juan M; Ramírez, Javier; Rominger, Axel; Levin, Johannes

    2015-01-01

    Differentiating between Parkinson's disease (PD) and atypical parkinsonian syndromes (APS) is still a challenge, especially at early stages when patients show similar symptoms. In recent years, several computer systems have been proposed to improve the diagnosis of PD, but their accuracy is still limited. In this work we demonstrate a fully automatic computer system to assist the diagnosis of PD using (18)F-DMFP PET data. First, a few regions of interest are selected by means of a two-sample t-test. The accuracy of the selected regions in separating PD from APS patients is then computed using a support vector machine classifier. The accuracy values are finally used to train a Bayesian network that can be used to predict the class of new unseen data. This methodology was evaluated using a database of 87 neuroimages, achieving accuracy rates over 78%. A fair comparison with other similar approaches is also provided. PMID:26594165
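
    The first two steps of that pipeline (t-test region selection followed by per-region SVM scoring) can be sketched on synthetic data as below; the sample sizes, thresholds and uptake values are illustrative, and the final Bayesian-network stage is not reproduced.

      # Sketch: two-sample t-test to pick regions, then cross-validated SVM
      # accuracy per selected region. Arrays stand in for regional PET uptake.
      import numpy as np
      from scipy.stats import ttest_ind
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n_pd, n_aps, n_regions = 45, 42, 100
      uptake = np.vstack([rng.normal(0.0, 1.0, (n_pd, n_regions)),
                          rng.normal(0.3, 1.0, (n_aps, n_regions))])
      labels = np.array([0] * n_pd + [1] * n_aps)

      # Two-sample t-test per region; keep the most discriminative ones.
      _, pvals = ttest_ind(uptake[labels == 0], uptake[labels == 1], axis=0)
      selected = np.argsort(pvals)[:5]

      # Per-region SVM accuracy; in the paper these scores feed a Bayesian network.
      for r in selected:
          acc = cross_val_score(SVC(), uptake[:, [r]], labels, cv=5).mean()
          print(f"region {r:3d}: p = {pvals[r]:.3g}, CV accuracy = {acc:.2f}")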

  7. Statistical modelling of networked human-automation performance using working memory capacity.

    PubMed

    Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja

    2014-01-01

    This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models. PMID:24308716

  8. Visualizing weighted networks: a performance comparison of adjacency matrices versus node-link diagrams

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Osesina, O. Isaac; Bartley, Cecilia; Tudoreanu, M. Eduard; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    Ensuring the proper and effective ways to visualize network data is important for many areas of academia, applied sciences, the military, and the public. Fields such as social network analysis, genetics, biochemistry, intelligence, cybersecurity, neural network modeling, transit systems, communications, etc. often deal with large, complex network datasets that can be difficult to interact with, study, and use. There have been surprisingly few human factors performance studies on the relative effectiveness of different graph drawings or network diagram techniques to convey information to a viewer. This is particularly true for weighted networks which include the strength of connections between nodes, not just information about which nodes are linked to other nodes. We describe a human factors study in which participants performed four separate network analysis tasks (finding a direct link between given nodes, finding an interconnected node between given nodes, estimating link strengths, and estimating the most densely interconnected nodes) on two different network visualizations: an adjacency matrix with a heat-map versus a node-link diagram. The results should help shed light on effective methods of visualizing network data for some representative analysis tasks, with the ultimate goal of improving usability and performance for viewers of network data displays.

  9. Network based high performance concurrent computing. Progress report, [FY 1991

    SciTech Connect

    Sunderam, V.S.

    1991-12-31

    The overall objectives of this project are to investigate research issues pertaining to programming tools and efficiency issues in network based concurrent computing systems. The basis for these efforts is the PVM project that evolved during my visits to Oak Ridge Laboratories under the DOE Faculty Research Participation program; I continue to collaborate with researchers at Oak Ridge on some portions of the project.

  10. Social Networks and Performance in Distributed Learning Communities

    ERIC Educational Resources Information Center

    Cadima, Rita; Ojeda, Jordi; Monguet, Josep M.

    2012-01-01

    Social networks play an essential role in learning environments as a key channel for knowledge sharing and students' support. In distributed learning communities, knowledge sharing does not occur as spontaneously as when a working group shares the same physical space; knowledge sharing depends even more on student informal connections. In this…

  11. Algorithms for Performance, Dependability, and Performability Evaluation using Stochastic Activity Networks

    NASA Technical Reports Server (NTRS)

    Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.

    1997-01-01

    Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling with Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large and its solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that can tolerate such large state spaces and do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on the fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally-efficient technique for Gauss-Seidel-based solvers that avoids the need to generate rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
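
    For reference, a plain Gauss-Seidel iteration for Ax = b, the baseline that the modified adaptive variant above builds on, can be sketched as follows (the paper's on-the-fly row generation and caching are not reproduced here):

      # Sketch: basic Gauss-Seidel iteration for a small linear system.
      import numpy as np

      def gauss_seidel(A, b, tol=1e-10, max_iter=10_000):
          n = len(b)
          x = np.zeros(n)
          for _ in range(max_iter):
              x_old = x.copy()
              for i in range(n):
                  s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                  x[i] = (b[i] - s) / A[i, i]
              if np.linalg.norm(x - x_old, np.inf) < tol:
                  break
          return x

      # Diagonally dominant example, for which Gauss-Seidel converges.
      A = np.array([[4.0, -1.0, 0.0],
                    [-1.0, 4.0, -1.0],
                    [0.0, -1.0, 4.0]])
      b = np.array([1.0, 2.0, 3.0])
      print(gauss_seidel(A, b), "vs", np.linalg.solve(A, b))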

  12. Performance Modeling of Network-Attached Storage Device Based Hierarchical Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Menasce, Daniel A.; Pentakalos, Odysseas I.

    1995-01-01

    Network attached storage devices improve I/O performance by separating control and data paths and eliminating host intervention during the data transfer phase. Devices are attached to both a high speed network for data transfer and to a slower network for control messages. Hierarchical mass storage systems use disks to cache the most recently used files and a combination of robotic and manually mounted tapes to store the bulk of the files in the file system. This paper shows how queuing network models can be used to assess the performance of hierarchical mass storage systems that use network attached storage devices as opposed to host attached storage devices. Simulation was used to validate the model. The analytic model presented here can be used, among other things, to evaluate the protocols involved in I/O over network attached devices.
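
    As a reduced illustration of the queueing-model approach, a single M/M/1 service centre (one of the many building blocks such a queueing network model would contain) can be evaluated analytically; the arrival and service rates below are made-up numbers.

      # Sketch: analytic M/M/1 metrics for one storage service centre.
      arrival_rate = 8.0      # file requests per second (illustrative)
      service_rate = 10.0     # requests the device can serve per second (illustrative)

      rho = arrival_rate / service_rate                    # utilisation
      mean_response = 1.0 / (service_rate - arrival_rate)  # mean time in system (s)
      mean_in_system = rho / (1.0 - rho)                   # mean number in system

      print(f"utilisation = {rho:.2f}, mean response = {mean_response:.2f} s, "
            f"mean number in system = {mean_in_system:.1f}")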

  13. Moving Large Data Sets Over High-Performance Long Distance Networks

    SciTech Connect

    Hodson, Stephen W; Poole, Stephen W; Ruwart, Thomas; Settlemyer, Bradley W

    2011-04-01

    In this project we look at the performance characteristics of three tools used to move large data sets over dedicated long distance networking infrastructure. Although performance studies of wide area networks have been a frequent topic of interest, performance analyses have tended to focus on network latency characteristics and peak throughput using network traffic generators. In this study we instead perform an end-to-end long distance networking analysis that includes reading large data sets from a source file system and committing large data sets to a destination file system. An evaluation of end-to-end data movement is also an evaluation of the system configurations employed and the tools used to move the data. For this paper, we have built several storage platforms and connected them with a high performance long distance network configuration. We use these systems to analyze the capabilities of three data movement tools: BBcp, GridFTP, and XDD. Our studies demonstrate that existing data movement tools do not provide efficient performance levels or exercise the storage devices in their highest performance modes. We describe the device information required to achieve high levels of I/O performance and discuss how this data is applicable in use cases beyond data movement performance.

  14. Support vector machines

    NASA Technical Reports Server (NTRS)

    Garay, Michael J.; Mazzoni, Dominic; Davies, Roger; Wagstaff, Kiri

    2004-01-01

    Support Vector Machines (SVMs) are a type of supervised learning algorithm, other examples of which are Artificial Neural Networks (ANNs), Decision Trees, and Naive Bayesian Classifiers. Supervised learning algorithms are used to classify objects labeled by a 'supervisor' - typically a human 'expert.'

  15. Singular Vectors' Subtle Secrets

    ERIC Educational Resources Information Center

    James, David; Lachance, Michael; Remski, Joan

    2011-01-01

    Social scientists use adjacency tables to discover influence networks within and among groups. Building on work by Moler and Morrison, we use ordered pairs from the components of the first and second singular vectors of adjacency matrices as tools to distinguish these groups and to identify particularly strong or weak individuals.
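
    In the spirit of that approach, the ordered pairs formed from the first and second left singular vectors of a small adjacency matrix can be computed as below; the toy matrix, which encodes two loosely connected groups, is invented for illustration.

      # Sketch: singular vectors of an adjacency matrix as group/strength markers.
      import numpy as np

      A = np.array([[0, 1, 1, 0, 0],
                    [1, 0, 1, 0, 0],
                    [1, 1, 0, 1, 0],
                    [0, 0, 1, 0, 1],
                    [0, 0, 0, 1, 0]], dtype=float)

      U, s, Vt = np.linalg.svd(A)
      for i, (u1, u2) in enumerate(zip(U[:, 0], U[:, 1])):
          print(f"node {i}: ({u1:+.2f}, {u2:+.2f})")
      # Plotting these (u1, u2) pairs tends to separate the two groups and to
      # highlight strongly connected individuals (large |u1|).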

  16. G-NetMon: a GPU-accelerated network performance monitoring system

    SciTech Connect

    Wu, Wenji; DeMar, Phil; Holmgren, Don; Singh, Amitoj; /Fermilab

    2011-06-01

    At Fermilab, we have prototyped a GPU-accelerated network performance monitoring system, called G-NetMon, to support large-scale scientific collaborations. In this work, we explore new opportunities in network traffic monitoring and analysis with GPUs. Our system exploits the data parallelism that exists within network flow data to provide fast analysis of bulk data movement between Fermilab and collaboration sites. Experiments demonstrate that our G-NetMon can rapidly detect sub-optimal bulk data movements.

  17. Analysis of NASA communications (Nascom) II network protocols and performance

    NASA Technical Reports Server (NTRS)

    Omidyar, Guy C.; Butler, Thomas E.

    1991-01-01

    The NASA Communications (Nascom) Division of the Mission Operations and Data Systems Directorate is to undertake a major initiative to develop the Nascom II (NII) network to achieve its long-range service objectives for operational data transport to support the Space Station Freedom Program, the Earth Observing System, and other projects. NII is the Nascom ground communications network being developed to accommodate the operational traffic of the mid-1990s and beyond. The authors describe various baseline protocol architectures based on current and evolving technologies. They address the internetworking issues suggested for reliable transfer of data over heterogeneous segments. They also describe the NII architecture, topology, system components, and services. A comparative evaluation of the current and evolving technologies was made, and suggestions for further study are described. It is shown that the direction of the NII configuration and the subsystem component design will clearly depend on the advances made in the area of broadband integrated services.

  18. Hydrogen Bond Nanoscale Networks Showing Switchable Transport Performance

    NASA Astrophysics Data System (ADS)

    Long, Yong; Hui, Jun-Feng; Wang, Peng-Peng; Xiang, Guo-Lei; Xu, Biao; Hu, Shi; Zhu, Wan-Cheng; Lü, Xing-Qiang; Zhuang, Jing; Wang, Xun

    2012-08-01

    The hydrogen bond is a typical noncovalent bond whose strength is only about one-tenth that of a general covalent bond. Because hydrogen bonds break and re-form easily, materials based on them can exhibit reversible behaviour in their assembly and other properties, which offers advantages in fabrication and recyclability. In this paper, nanoscale hydrogen-bond networks are utilized to separate water and oil at the macroscale. This is realized using nanowire macro-membranes with pore sizes of around tens of nanometers, which can form hydrogen bonds with water molecules on their surfaces. It is also found that gradual replacement of the water by ethanol molecules can endow this film with tunable transport properties. It is proposed that a hydrogen-bond network in the membrane is responsible for this switching effect. Significant application potential is demonstrated by the successful separation of oil and water, especially in emulsion form.

  19. Direct message passing for hybrid Bayesian networks and performance analysis

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Chang, K. C.

    2010-04-01

    Probabilistic inference for hybrid Bayesian networks, which involve both discrete and continuous variables, has been an important research topic in recent years. This is not only because a number of efficient inference algorithms have been developed and are in mature use for simple types of networks such as purely discrete models, but also because, in practice, continuous variables are inevitable in modeling complex systems. Pearl's message passing algorithm provides a simple framework to compute posterior distributions by propagating messages between nodes and can provide exact answers for polytree models with purely discrete or continuous variables. In addition, applying Pearl's message passing to networks with loops usually converges and results in good approximations. However, for hybrid models there is a need for a general message passing algorithm between different types of variables. In this paper, we develop a method called Direct Message Passing (DMP) for exchanging messages between discrete and continuous variables. Based on Pearl's algorithm, we derive formulae to compute messages for variables in the various dependence relationships encoded in conditional probability distributions. Mixtures of Gaussians are used to represent continuous messages, with the number of mixture components up to the size of the joint state space of all discrete parents. For polytree Conditional Linear Gaussian (CLG) Bayesian networks, DMP has the same computational requirements as the Junction Tree (JT) algorithm and provides the same exact solution. However, while JT can only work for the CLG model, DMP can be applied to general nonlinear, non-Gaussian hybrid models to produce approximate solutions using the unscented transformation and loopy propagation. Furthermore, we can scale the algorithm by restricting the number of mixture components in the messages. Empirically, we found that the approximation errors are relatively small, especially for nodes that are far away from

  20. Performance analysis of Integrated Communication and Control System networks

    NASA Technical Reports Server (NTRS)

    Halevi, Y.; Ray, A.

    1990-01-01

    This paper presents statistical analysis of delays in Integrated Communication and Control System (ICCS) networks that are based on asynchronous time-division multiplexing. The models are obtained in closed form for analyzing control systems with randomly varying delays. The results of this research are applicable to ICCS design for complex dynamical processes like advanced aircraft and spacecraft, autonomous manufacturing plants, and chemical and processing plants.

  1. Traffic Dimensioning and Performance Modeling of 4G LTE Networks

    ERIC Educational Resources Information Center

    Ouyang, Ye

    2011-01-01

    Rapid changes in mobile techniques have always been evolutionary, and the deployment of 4G Long Term Evolution (LTE) networks will be the same. It will be another transition from Third Generation (3G) to Fourth Generation (4G) over a period of several years, as is the case still with the transition from Second Generation (2G) to 3G. As a result,…

  2. The effects of malicious nodes on performance of mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Li, Fanzhi; Shi, Xiyu; Jassim, Sabah; Adams, Christopher

    2006-05-01

    Wireless ad hoc networking offers convenient infrastructureless communication over the shared wireless channel. However, the nature of ad hoc networks makes them vulnerable to security attacks. Unlike their wired counterpart, infrastructureless ad hoc networks do not have a clear line of defense, their topology is dynamically changing, and every mobile node can receive messages from its neighbors and can be contacted by all other nodes in its neighborhood. This poses a great danger to network security if some nodes behave in a malicious manner. The immediate concern about the security in this type of networks is how to protect the network and the individual mobile nodes against malicious act of rogue nodes from within the network. This paper is concerned with security aspects of wireless ad hoc networks. We shall present results of simulation experiments on ad hoc network's performance in the presence of malicious nodes. We shall investigate two types of attacks and the consequences will be simulated and quantified in terms of loss of packets and other factors. The results show that network performance, in terms of successful packet delivery ratios, significantly deteriorates when malicious nodes act according to the defined misbehaving characteristics.

  3. Communication, opponents, and clan performance in online games: a social network approach.

    PubMed

    Lee, Hong Joo; Choi, Jaewon; Kim, Jong Woo; Park, Sung Joo; Gloor, Peter

    2013-12-01

    Online gamers form clans voluntarily to play together and to discuss their real and virtual lives. Although these clans have diverse goals, they seek to increase their rank in the game community by winning more battles. Communications among clan members and battles with other clans may influence the performance of a clan. In this study, we compared the effects of communication structure inside a clan, and battle networks among clans, with the performance of the clans. We collected battle histories, posts, and comments on clan pages from a Korean online game, and measured social network indices for communication and battle networks. Communication structures in terms of density and group degree centralization index had no significant association with clan performance. However, the centrality of clans in the battle network was positively related to the performance of the clan. If a clan had many battle opponents, the performance of the clan improved. PMID:23745617

  4. Communication, Opponents, and Clan Performance in Online Games: A Social Network Approach

    PubMed Central

    Lee, Hong Joo; Choi, Jaewon; Park, Sung Joo; Gloor, Peter

    2013-01-01

    Abstract Online gamers form clans voluntarily to play together and to discuss their real and virtual lives. Although these clans have diverse goals, they seek to increase their rank in the game community by winning more battles. Communications among clan members and battles with other clans may influence the performance of a clan. In this study, we compared the effects of communication structure inside a clan, and battle networks among clans, with the performance of the clans. We collected battle histories, posts, and comments on clan pages from a Korean online game, and measured social network indices for communication and battle networks. Communication structures in terms of density and group degree centralization index had no significant association with clan performance. However, the centrality of clans in the battle network was positively related to the performance of the clan. If a clan had many battle opponents, the performance of the clan improved. PMID:23745617
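
    A minimal sketch of the battle-network centrality measure used in this study, with invented clan names and battle edges, might look like this:

      # Sketch: degree centrality of a clan battle network with NetworkX.
      import networkx as nx

      battles = [("ClanA", "ClanB"), ("ClanA", "ClanC"), ("ClanA", "ClanD"),
                 ("ClanB", "ClanC"), ("ClanD", "ClanE")]
      G = nx.Graph(battles)

      centrality = nx.degree_centrality(G)
      for clan, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
          print(f"{clan}: degree centrality = {c:.2f}")
      # The study reports that clans with more battle opponents (higher
      # centrality) tended to achieve better performance.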

  5. High performance interconnection between high data rate networks

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Maly, K.; Overstreet, C. M.; Zhang, L.; Sun, W.

    1992-01-01

    The bridge/gateway system needed to interconnect a wide range of computer networks and support a wide range of user quality-of-service requirements is discussed. The bridge/gateway must handle many message types, including synchronous and asynchronous traffic, large bursty messages, short self-contained messages, time-critical messages, etc. It is shown that messages can be classified into three basic classes: synchronous messages and large and small asynchronous messages. The first two require call setup so that packet identification, buffer handling, etc. can be supported in the bridge/gateway; identification enables resequencing across differences in packet size. The third class is for messages which do not require call setup. Resequencing hardware designed to handle two types of resequencing problems is presented. The first is for a virtual parallel circuit which can scramble channel bytes. The second is effective in handling both synchronous and asynchronous traffic between networks with highly differing packet sizes and data rates. The two other major needs for the bridge/gateway are congestion and error control. A dynamic, lossless congestion control scheme which can easily support effective error correction is presented. Results indicate that the congestion control scheme provides close to optimal capacity under congested conditions. Where errors may develop due to intervening networks which are not lossless, intermediate error recovery and correction takes one-third less time than equivalent end-to-end error correction under similar conditions.

  6. Enhancing End-to-End Performance of Information Services Over Ka-Band Global Satellite Networks

    NASA Technical Reports Server (NTRS)

    Bhasin, Kul B.; Glover, Daniel R.; Ivancic, William D.; vonDeak, Thomas C.

    1997-01-01

    The Internet has been growing at a rapid rate as the key medium for providing information services such as e-mail, the WWW, and multimedia; however, its global reach is limited. Ka-band communication satellite networks are being developed to increase the accessibility of information services via the Internet on a global scale. There is a need to assess satellite networks in their ability to provide these services and to interconnect seamlessly with existing and proposed terrestrial telecommunication networks. In this paper, the significant issues and requirements in providing end-to-end high performance for the delivery of information services over satellite networks are identified, based on the various layers of the OSI reference model. Key experiments have been performed to evaluate the performance of digital video and Internet traffic over satellite-like testbeds. The results of early developments in ATM and TCP protocols over satellite networks are summarized.

  7. Functional Connectivity in Multiple Cortical Networks Is Associated with Performance Across Cognitive Domains in Older Adults

    PubMed Central

    Shaw, Emily E.; Schultz, Aaron P.; Sperling, Reisa A.

    2015-01-01

    Abstract Intrinsic functional connectivity MRI has become a widely used tool for measuring integrity in large-scale cortical networks. This study examined multiple cortical networks using Template-Based Rotation (TBR), a method that applies a priori network and nuisance component templates defined from an independent dataset to test datasets of interest. A priori templates were applied to a test dataset of 276 older adults (ages 65–90) from the Harvard Aging Brain Study to examine the relationship between multiple large-scale cortical networks and cognition. Factor scores derived from neuropsychological tests represented processing speed, executive function, and episodic memory. Resting-state BOLD data were acquired in two 6-min acquisitions on a 3-Tesla scanner and processed with TBR to extract individual-level metrics of network connectivity in multiple cortical networks. All results controlled for data quality metrics, including motion. Connectivity in multiple large-scale cortical networks was positively related to all cognitive domains, with a composite measure of general connectivity positively associated with general cognitive performance. Controlling for the correlations between networks, the frontoparietal control network (FPCN) and executive function demonstrated the only significant association, suggesting specificity in this relationship. Further analyses found that the FPCN mediated the relationships of the other networks with cognition, suggesting that this network may play a central role in understanding individual variation in cognition during aging. PMID:25827242

  8. Functional Connectivity in Multiple Cortical Networks Is Associated with Performance Across Cognitive Domains in Older Adults.

    PubMed

    Shaw, Emily E; Schultz, Aaron P; Sperling, Reisa A; Hedden, Trey

    2015-10-01

    Intrinsic functional connectivity MRI has become a widely used tool for measuring integrity in large-scale cortical networks. This study examined multiple cortical networks using Template-Based Rotation (TBR), a method that applies a priori network and nuisance component templates defined from an independent dataset to test datasets of interest. A priori templates were applied to a test dataset of 276 older adults (ages 65-90) from the Harvard Aging Brain Study to examine the relationship between multiple large-scale cortical networks and cognition. Factor scores derived from neuropsychological tests represented processing speed, executive function, and episodic memory. Resting-state BOLD data were acquired in two 6-min acquisitions on a 3-Tesla scanner and processed with TBR to extract individual-level metrics of network connectivity in multiple cortical networks. All results controlled for data quality metrics, including motion. Connectivity in multiple large-scale cortical networks was positively related to all cognitive domains, with a composite measure of general connectivity positively associated with general cognitive performance. Controlling for the correlations between networks, the frontoparietal control network (FPCN) and executive function demonstrated the only significant association, suggesting specificity in this relationship. Further analyses found that the FPCN mediated the relationships of the other networks with cognition, suggesting that this network may play a central role in understanding individual variation in cognition during aging. PMID:25827242

  9. Social Networks and Students' Performance in Secondary Schools: Lessons from an Open Learning Centre, Kenya

    ERIC Educational Resources Information Center

    Muhingi, Wilkins Ndege; Mutavi, Teresia; Kokonya, Donald; Simiyu, Violet Nekesa; Musungu, Ben; Obondo, Anne; Kuria, Mary Wangari

    2015-01-01

    Given the known positive and negative effects of uncontrolled social networking among secondary school students worldwide, it is necessary to establish the relationship between social network sites and academic performances among secondary school students. This study, therefore, aimed at establishing the relationship between secondary school…

  10. Social Networks, Communication Styles, and Learning Performance in a CSCL Community

    ERIC Educational Resources Information Center

    Cho, Hichang; Gay, Geri; Davidson, Barry; Ingraffea, Anthony

    2007-01-01

    The aim of this study is to empirically investigate the relationships between communication styles, social networks, and learning performance in a computer-supported collaborative learning (CSCL) community. Using social network analysis (SNA) and longitudinal survey data, we analyzed how 31 distributed learners developed collaborative learning…

  11. Performance Analysis of Non-saturated IEEE 802.11 DCF Networks

    NASA Astrophysics Data System (ADS)

    Zhai, Linbo; Zhang, Xiaomin; Xie, Gang

    This letter presents a model with queueing theory to analyze the performance of non-saturated IEEE 802.11 DCF networks. We use the closed queueing network model and derive an approximate representation of throughput which can reveal the relationship between the throughput and the total offered load under finite traffic load conditions. The accuracy of the model is verified by extensive simulations.

  12. Performance benefits and limitations of a camera network

    NASA Astrophysics Data System (ADS)

    Carr, Peter; Thomas, Paul J.; Hornsey, Richard

    2005-06-01

    Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 10^7-10^8 'pixels' per image. At the other extreme are insect eyes where the field of view is segmented into 10^3-10^5 images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 10^5 imagers, each with about 10^4-10^5 pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~10^9 cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.

  13. Performance of the Birmingham Solar-Oscillations Network (BiSON)

    NASA Astrophysics Data System (ADS)

    Hale, S. J.; Howe, R.; Chaplin, W. J.; Davies, G. R.; Elsworth, Y. P.

    2016-01-01

    The Birmingham Solar-Oscillations Network (BiSON) has been operating with a full complement of six stations since 1992. Over 20 years later, we look back on the network history. The meta-data from the sites have been analysed to assess performance in terms of site insolation, with a brief look at the challenges that have been encountered over the years. We explain how the international community can gain easy access to the ever-growing dataset produced by the network, and finally look to the future of the network and the potential impact of nearly 25 years of technology miniaturisation.

  14. Low Temperature Performance of High-Speed Neural Network Circuits

    NASA Technical Reports Server (NTRS)

    Duong, T.; Tran, M.; Daud, T.; Thakoor, A.

    1995-01-01

    Artificial neural networks, derived from their biological counterparts, offer a new and enabling computing paradigm especially suitable for tasks such as image and signal processing with feature classification/object recognition, global optimization, and adaptive control. When implemented in fully parallel electronic hardware, they offer an orders-of-magnitude speed advantage. The basic building blocks of the new architecture are the processing elements called neurons, implemented as nonlinear operational amplifiers with a sigmoidal transfer function, interconnected through weighted connections called synapses, implemented using circuitry for weight storage and multiply functions in an analog, digital, or hybrid scheme.

  15. A Technique for Moving Large Data Sets over High-Performance Long Distance Networks

    SciTech Connect

    Settlemyer, Bradley W; Dobson, Jonathan D; Hodson, Stephen W; Kuehn, Jeffery A; Poole, Stephen W; Ruwart, Thomas

    2011-01-01

    In this paper we look at the performance characteristics of three tools used to move large data sets over dedicated long distance networking infrastructure. Although performance studies of wide area networks have been a frequent topic of interest, performance analyses have tended to focus on network latency characteristics and peak throughput using network traffic generators. In this study we instead perform an end-to-end long distance networking analysis that includes reading large data sets from a source file system and committing the data to a remote destination file system. An evaluation of end-to-end data movement is also an evaluation of the system configurations employed and the tools used to move the data. For this paper, we have built several storage platforms and connected them with a high performance long distance network configuration. We use these systems to analyze the capabilities of three data movement tools: BBcp, GridFTP, and XDD. Our studies demonstrate that existing data movement tools do not provide efficient performance levels or exercise the storage devices in their highest performance modes.

  16. OSI Network-layer Abstraction: Analysis of Simulation Dynamics and Performance Indicators

    NASA Astrophysics Data System (ADS)

    Lawniczak, Anna T.; Gerisch, Alf; Di Stefano, Bruno

    2005-06-01

    The Open Systems Interconnection (OSI) reference model provides a conceptual framework for communication among computers in a data communication network. The Network Layer of this model is responsible for the routing and forwarding of packets of data. We investigate the OSI Network Layer and develop an abstraction suitable for the study of various network performance indicators, e.g. throughput, average packet delay, average packet speed, average packet path-length, etc. We investigate how the network dynamics and the network performance indicators are affected by various routing algorithms and by the addition of randomly generated links into a regular network connection topology of fixed size. We observe that the network dynamics is not simply the sum of effects resulting from adding individual links to the connection topology but rather is governed nonlinearly by the complex interactions caused by the existence of all randomly added and already existing links in the network. Data for our study was gathered using Netzwerk-1, a C++ simulation tool that we developed for our abstraction.

  17. High-performance testbed network with ATM technology for neuroimaging

    NASA Astrophysics Data System (ADS)

    Huang, H. K.; Arenson, Ronald L.; Dillon, William P.; Lou, Shyhliang A.; Bazzill, Todd M.; Wong, Albert W. K.; Gould, Robert G.

    1995-05-01

    Today's teleradiology transmits images over telephone lines (from 14,400 bits/sec to 1.5 Mbits/sec). However, the large amount of data commonly produced during an MR or CT procedure can limit some applications of teleradiology. This paper is a progress report on a high-speed (155 Mbits/sec) testbed teleradiology network using asynchronous transfer mode (ATM OC-3) technology for neuroradiology. The network connects the radiology departments of four affiliated hospitals and one MR imaging center within the San Francisco Bay Area with ATM switches through the Pacific Bell ATM main switch at Oakland, California; they are: University of California at San Francisco Hospital and Medical School (UCSF), Mt. Zion Hospital (MZH), San Francisco VA Medical Center (SFVAMC), San Francisco General Hospital (SFGH), and San Francisco Magnetic Resonance Imaging Center (SFMRC). UCSF serves as the expert center and its ATM switch is connected to the PACS infrastructure; the others are considered satellite sites. Images and related patient data are transmitted from the four satellite sites to the expert center for interpretation and consultation.

  18. Vector Video

    NASA Astrophysics Data System (ADS)

    Taylor, David P.

    2001-01-01

    Vector addition is an important skill for introductory physics students to master. For years, I have used a fun example to introduce vector addition in my introductory physics classes based on one with which my high school physics teacher piqued my interest many years ago.

  19. A comprehensive approach for evaluating network performance in surface and borehole seismic monitoring

    NASA Astrophysics Data System (ADS)

    Stabile, T. A.; Iannaccone, G.; Zollo, A.; Lomax, A.; Ferulano, M. F.; Vetri, M. L. V.; Barzaghi, L. P.

    2013-02-01

    The accurate determination of locations and magnitudes of seismic events in a monitored region is important for many scientific, industrial and military studies and applications; for these purposes a wide variety of seismic networks are deployed throughout the world. It is crucial to know the performance of these networks not only in detecting and locating seismic events of different sizes throughout a specified source region, but also in evaluating their location errors as a function of the magnitude and source location. In this framework, we have developed a method for evaluating network performance in surface and borehole seismic monitoring. For a specified network geometry, station characteristics and a target monitoring volume, the method determines the lowest magnitude of events that the seismic network is able to detect (Mwdetect) and locate (Mwloc), and estimates the expected location and origin-time errors for a specified magnitude. Many of the features related to the seismic signal recorded at a single station are considered in this methodology, including characteristics of the seismic source, the instrument response, the ambient noise level, wave propagation in a layered, anelastic medium and uncertainties on waveform measures and the velocity model. We applied this method to two different network typologies: a local earthquake monitoring network, the Irpinia Seismic Network (ISNet), installed along the Campania-Lucania Apennine chain in Southern Italy, and a hypothetical borehole network for monitoring microfractures induced during the hydrocarbon extraction process in an oil field. The method we present may be used to aid in enhancing existing networks and/or understanding their capabilities, as in the ISNet case study, or to optimally design the network geometry in specific target regions, as in the borehole network example.

  20. High-performance image communication network with asynchronous transfer mode technology

    NASA Astrophysics Data System (ADS)

    Wong, Albert W. K.; Huang, H. K.; Lee, Joseph K.; Bazzill, Todd M.; Zhu, Xiaoming

    1996-05-01

    Asynchronous transfer mode (ATM) technology has been implemented within our radiology department's hospital-wide PACS as well as in a wide area network (WAN) connecting affiliated hospitals. This paper describes our implementation strategies and the network performance observed in a clinical setting. The image communication network for our PACS is composed of two network interfaces: ATM (OC-3, 155 Mbps) and Ethernet (10 Mbps). This communication network connects four major campus buildings and two remote hospitals, providing intra- and interbuilding communication for radiologic images including CT, MR, CR, US, and digitized screen-film images. The network links these modalities via their acquisition computers to the PACS controller and to display workstations. The ATM serves as the primary network for transmission of radiologic images and relevant data within the entire PACS. The standard Ethernet is used as a backup network for ATM. It interconnects all PACS components including radiologic imaging systems, acquisition computers, display workstations, the PACS controller, the database servers, and the RIS and HIS. Our communication network operates on a 24 hrs/day, 7 days/week basis. Performance of the ATM network was evaluated in terms of disk-to-disk, disk-to-memory, and memory-to-memory transmission rates. The average memory-to-memory transmission rate over the wide area ATM network was 8.3 MByte/s, which corresponds to transferring a 40-slice (or 20-MByte) CT examination to a remote site in less than 3 seconds. With the emerging ATM technology, we believe that an ATM-based digital communication network is a suitable choice for large-scale PACS involving both LAN and WAN.

  1. Altered Small-World Brain Networks in Schizophrenia Patients during Working Memory Performance

    PubMed Central

    He, Hao; Sui, Jing; Yu, Qingbao; Turner, Jessica A.; Ho, Beng-Choon; Sponheim, Scott R.; Manoach, Dara S.; Clark, Vincent P.; Calhoun, Vince D.

    2012-01-01

    Impairment of working memory (WM) performance in schizophrenia patients (SZ) is well-established. Compared to healthy controls (HC), SZ patients show aberrant blood oxygen level dependent (BOLD) activations and disrupted functional connectivity during WM performance. In this study, we examined the small-world network metrics computed from functional magnetic resonance imaging (fMRI) data collected as 35 HC and 35 SZ performed a Sternberg Item Recognition Paradigm (SIRP) at three WM load levels. Functional connectivity networks were built by calculating the partial correlation on preprocessed time courses of BOLD signal between task-related brain regions of interest (ROIs) defined by group independent component analysis (ICA). The networks were then thresholded within the small-world regime, resulting in undirected binarized small-world networks at different working memory loads. Our results showed: 1) at the medium WM load level, the networks in SZ showed a lower clustering coefficient and less local efficiency compared with HC; 2) in SZ, most network measures changed significantly as the WM load level increased from low to medium and from medium to high, while the network metrics were relatively stable in HC at different WM loads; and 3) the altered structure at medium WM load in SZ was related to their performance during the task, with longer reaction time related to lower clustering coefficient and lower local efficiency. These findings suggest that brain connectivity in patients with SZ was more diffuse and less strongly linked locally in the functional network at the intermediate WM load when compared to HC. SZ showed distinctly inefficient and variable network structures in response to WM load increases, compared to the stable, highly clustered network topologies in HC. PMID:22701611
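
    As a rough illustration of the graph metrics discussed above, the following sketch binarizes a connectivity matrix at a threshold and computes the clustering coefficient and local efficiency with networkx; the random matrix and the threshold value stand in for the partial-correlation networks and small-world-regime thresholds used in the study.

```python
import numpy as np
import networkx as nx

def binarize_connectivity(corr, threshold):
    """Threshold an ROI-by-ROI connectivity matrix into an
    undirected, unweighted graph (diagonal ignored)."""
    adj = (np.abs(corr) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return nx.from_numpy_array(adj)

rng = np.random.default_rng(0)
n_rois = 30
# Placeholder connectivity: symmetric random matrix standing in for
# partial correlations between ICA-derived ROI time courses.
m = rng.uniform(-1, 1, size=(n_rois, n_rois))
corr = (m + m.T) / 2

G = binarize_connectivity(corr, threshold=0.6)
print("clustering coefficient:", nx.average_clustering(G))
print("local efficiency:", nx.local_efficiency(G))
```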

  2. Consensus group sessions: a useful method to reconcile stakeholders’ perspectives about network performance evaluation

    PubMed Central

    Lamontagne, Marie-Eve; Swaine, Bonnie R; Lavoie, André; Champagne, François; Marcotte, Anne-Claire

    2010-01-01

    Background Having a common vision among network stakeholders is an important ingredient to developing a performance evaluation process. Consensus methods may be a viable means to reconcile the perceptions of different stakeholders about the dimensions to include in a performance evaluation framework. Objectives To determine whether individual organizations within traumatic brain injury (TBI) networks differ in perceptions about the importance of performance dimensions for the evaluation of TBI networks and to explore the extent to which group consensus sessions could reconcile these perceptions. Methods We used TRIAGE, a consensus technique that combines an individual and a group data collection phase to explore the perceptions of network stakeholders and to reach a consensus within structured group discussions. Results One hundred and thirty-nine professionals from 43 organizations within eight TBI networks participated in the individual data collection; 62 professionals from these same organisations contributed to the group data collection. The extent of consensus based on questionnaire results (e.g. individual data collection) was low, however, 100% agreement was obtained for each network during the consensus group sessions. The median importance scores and mean ranks attributed to the dimensions by individuals compared to groups did not differ greatly. Group discussions were found useful in understanding the reasons motivating the scoring, for resolving differences among participants, and for harmonizing their values. Conclusion Group discussions, as part of a consensus technique, appear to be a useful process to reconcile diverging perceptions of network performance among stakeholders. PMID:21289996

  3. Mapping the social network: tracking lice in a wild primate (Microcebus rufus) population to infer social contacts and vector potential

    PubMed Central

    2012-01-01

    previously unseen parasite movement between lemurs, but also allowed us to infer social interactions between them. As lice are known pathogen vectors, our method also allowed us to identify the lemurs most likely to facilitate louse-mediated epidemics. Our approach demonstrates the potential to uncover otherwise inaccessible parasite-host, and host social interaction data in any trappable species parasitized by sucking lice. PMID:22449178

  4. Hybrid Neural-Network: Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics Developed and Demonstrated

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2002-01-01

    As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.
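
    A hedged sketch of the genetic-algorithm side of such a hybrid scheme is given below: a simple GA searches over candidate sensor-bias vectors to minimize the mismatch between measurements and a model prediction. The stand-in "engine model", fault sizes and GA settings are placeholders, and the neural-network health estimator is omitted; this is not the NASA implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors = 5

# Stand-in "engine model": nominal sensor outputs for the current operating
# point (in the real scheme this comes from the engine model plus the
# neural-network health estimate).
nominal = np.array([1.0, 0.8, 1.2, 0.9, 1.1])

# Simulated measurements: sensor index 2 has a bias fault of +0.3.
true_bias = np.array([0.0, 0.0, 0.3, 0.0, 0.0])
measured = nominal + true_bias + rng.normal(0, 0.01, n_sensors)

def fitness(bias):
    """Negative squared residual plus a sparsity penalty that
    favors explanations with few faulty sensors."""
    residual = measured - (nominal + bias)
    return -(np.sum(residual**2) + 0.01 * np.sum(np.abs(bias) > 0.05))

def run_ga(pop_size=60, generations=100, sigma=0.05):
    pop = rng.normal(0, 0.2, size=(pop_size, n_sensors))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]                      # truncation selection
        children = parents + rng.normal(0, sigma, parents.shape)   # mutation
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]

print("estimated sensor biases:", np.round(run_ga(), 3))
```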

  5. Study on multiple-hops performance of MOOC sequences-based optical labels for OPS networks

    NASA Astrophysics Data System (ADS)

    Zhang, Chongfu; Qiu, Kun; Ma, Chunli

    2009-11-01

    In this paper, we use a new analytical method, based on the assumption of independent multiple optical orthogonal codes, to derive the probability function of MOOCS-OPS networks, discuss the performance characteristics for a variety of parameters, and compare some characteristics of systems employing single optical orthogonal code or multiple optical orthogonal codes sequences-based optical labels. The performance of the system is also calculated, and our results verify that the method is effective. Additionally, it is found that the performance of MOOCS-OPS networks is worse than that of single optical orthogonal code-based optical labels for optical packet switching (SOOC-OPS); however, MOOCS-OPS networks can greatly enlarge the scalability of optical packet switching networks.

  6. Analysis of latency performance of bluetooth low energy (BLE) networks.

    PubMed

    Cho, Keuchul; Park, Woojin; Hong, Moonki; Park, Gisu; Cho, Wooseong; Seo, Jihoon; Han, Kijun

    2015-01-01

    Bluetooth Low Energy (BLE) is a short-range wireless communication technology aiming at low-cost and low-power communication. The performance evaluation of classical Bluetooth device discovery has been intensively studied using analytical modeling and simulative methods, but these techniques are not applicable to BLE, since BLE has a fundamental change in the design of the discovery mechanism, including the usage of three advertising channels. Several recent works have analyzed the topic of BLE device discovery, but these studies are still far from thorough. It is thus necessary to develop a new, accurate model for the BLE discovery process. In particular, the wide range of parameter settings gives BLE devices considerable scope to customize their discovery performance. This motivates our study of modeling the BLE discovery process and performing intensive simulation. This paper is focused on building an analytical model to investigate the discovery probability, as well as the expected discovery latency, which are then validated via extensive experiments. Our analysis considers both continuous and discontinuous scanning modes. We analyze the sensitivity of these performance metrics to parameter settings to quantitatively examine to what extent parameters influence the performance metrics of the discovery process. PMID:25545266
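
    A toy Monte Carlo sketch of the discovery process described above follows, under strong simplifying assumptions: the advertiser transmits on the three advertising channels once per advertising interval (plus a random delay), the scanner listens on one channel per scan interval for the duration of the scan window, and discovery occurs when an advertising packet lands inside an active scan window on the scanned channel. The timing values and channel-hopping behavior are simplified placeholders, not the paper's analytical model.

```python
import random

def discovery_latency(adv_interval=0.1, scan_interval=0.1, scan_window=0.03,
                      max_time=30.0):
    """Return the simulated time (s) until the first advertising packet
    falls inside an active scan window on the scanned channel."""
    t = 0.0
    while t < max_time:
        t += adv_interval + random.uniform(0.0, 0.01)    # advDelay up to 10 ms
        for ch in range(3):                              # channels 37, 38, 39
            t_pkt = t + 0.0004 * ch                      # ~0.4 ms per packet
            scanned_ch = int(t_pkt / scan_interval) % 3  # scanner hops channels
            offset = t_pkt % scan_interval
            if scanned_ch == ch and offset < scan_window:
                return t_pkt
    return None

random.seed(0)
samples = [discovery_latency() for _ in range(2000)]
found = [s for s in samples if s is not None]
print("mean latency (s):", sum(found) / len(found))
print("discovery probability:", len(found) / len(samples))
```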

  7. Analysis of Latency Performance of Bluetooth Low Energy (BLE) Networks

    PubMed Central

    Cho, Keuchul; Park, Woojin; Hong, Moonki; Park, Gisu; Cho, Wooseong; Seo, Jihoon; Han, Kijun

    2015-01-01

    Bluetooth Low Energy (BLE) is a short-range wireless communication technology aiming at low-cost and low-power communication. The performance evaluation of classical Bluetooth device discovery have been intensively studied using analytical modeling and simulative methods, but these techniques are not applicable to BLE, since BLE has a fundamental change in the design of the discovery mechanism, including the usage of three advertising channels. Recently, there several works have analyzed the topic of BLE device discovery, but these studies are still far from thorough. It is thus necessary to develop a new, accurate model for the BLE discovery process. In particular, the wide range settings of the parameters introduce lots of potential for BLE devices to customize their discovery performance. This motivates our study of modeling the BLE discovery process and performing intensive simulation. This paper is focused on building an analytical model to investigate the discovery probability, as well as the expected discovery latency, which are then validated via extensive experiments. Our analysis considers both continuous and discontinuous scanning modes. We analyze the sensitivity of these performance metrics to parameter settings to quantitatively examine to what extent parameters influence the performance metric of the discovery processes. PMID:25545266

  8. Challenges for malaria elimination in Zanzibar: pyrethroid resistance in malaria vectors and poor performance of long-lasting insecticide nets

    PubMed Central

    2013-01-01

    Background Long-lasting insecticide treated nets (LLINs) and indoor residual house spraying (IRS) are the main interventions for the control of malaria vectors in Zanzibar. The aim of the present study was to assess the susceptibility status of malaria vectors against the insecticides used for LLINs and IRS and to determine the durability and efficacy of LLINs on the island. Methods Mosquitoes were sampled from Pemba and Unguja islands in 2010–2011 for use in WHO susceptibility tests. One hundred and fifty LLINs were collected from households on Unguja; their physical state was recorded, and they were then tested for efficacy as well as for total insecticide content. Results Species identification revealed that over 90% of the Anopheles gambiae complex was An. arabiensis with a small number of An. gambiae s.s. and An. merus being present. Susceptibility tests showed that An. arabiensis on Pemba was resistant to the pyrethroids used for LLINs and IRS. Mosquitoes from Unguja Island, however, were fully susceptible to all pyrethroids tested. A physical examination of 150 LLINs showed that two thirds were damaged after only three years in use. All used nets had a significantly lower (p < 0.001) mean permethrin concentration of 791.6 mg/m2 compared with 944.2 mg/m2 for new ones. Their efficacy decreased significantly against both susceptible An. gambiae s.s. colony mosquitoes and wild-type mosquitoes from Pemba after just six washes (p < 0.001). Conclusion The sustainability of the gains achieved in malaria control in Zanzibar is seriously threatened by the resistance of malaria vectors to pyrethroids and the short-lived efficacy of LLINs. This study has revealed that even in relatively well-resourced and logistically manageable places like Zanzibar, malaria elimination is going to be difficult to achieve with the current control measures. PMID:23537463

  9. A 3-D Poisson Solver Based on Conjugate Gradients Compared to Standard Iterative Methods and Its Performance on Vector Computers

    NASA Astrophysics Data System (ADS)

    Kapitza, H.; Eppel, D.

    1987-02-01

    A conjugate gradient method for solving a 3-D Poisson equation in Cartesian unequally spaced coordinates is tested against standard iterative methods. It is found that the tested algorithm is far superior to Red-Black SOR with the optimal relaxation parameter. In the conjugate gradient method no relaxation parameter is needed, and there are no restrictions on the number of gridpoints in the three directions. The iteration routine is vectorizable to a large extent by the compiler of a CYBER 205 without any special preparations. Utilizing some special features of vector computers, it is completely vectorizable with only minor changes in the code.
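
    The core of the method can be sketched as an unpreconditioned conjugate-gradient iteration applied, for brevity, to a 2-D finite-difference Poisson problem (the paper treats the 3-D, unequally spaced case); note that no relaxation parameter is needed. The grid, right-hand side and tolerance below are illustrative choices, not the original code.

```python
import numpy as np

def laplacian_2d(u, hx, hy):
    """Matrix-free 5-point Laplacian on a uniform grid with
    homogeneous Dirichlet boundaries (interior points only)."""
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = ((u[2:, 1:-1] - 2*u[1:-1, 1:-1] + u[:-2, 1:-1]) / hx**2 +
                       (u[1:-1, 2:] - 2*u[1:-1, 1:-1] + u[1:-1, :-2]) / hy**2)
    return lap

def cg_poisson(f, hx, hy, tol=1e-8, max_iter=500):
    """Unpreconditioned conjugate gradients for -Laplace(u) = f."""
    u = np.zeros_like(f)
    r = f + laplacian_2d(u, hx, hy)        # residual r = f - A u with A = -Laplacian
    p = r.copy()
    rs_old = np.sum(r * r)
    for _ in range(max_iter):
        Ap = -laplacian_2d(p, hx, hy)
        alpha = rs_old / np.sum(p * Ap)
        u += alpha * p
        r -= alpha * Ap
        rs_new = np.sum(r * r)
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return u

n = 65
x = np.linspace(0, 1, n)
hx = hy = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
f = 2 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)   # exact u = sin(pi x) sin(pi y)
u = cg_poisson(f, hx, hy)
print("max error:", np.abs(u - np.sin(np.pi * X) * np.sin(np.pi * Y)).max())
```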

  10. High-performance parallel interface to synchronous optical network gateway

    DOEpatents

    St. John, Wallace B.; DuBois, David H.

    1996-01-01

    A system of sending and receiving gateways interconnects high speed data interfaces, e.g., HIPPI interfaces, through fiber optic links, e.g., a SONET network. An electronic stripe distributor distributes bytes of data from a first interface at the sending gateway onto parallel fiber optics of the fiber optic link to form transmitted data. An electronic stripe collector receives the transmitted data on the parallel fiber optics and reforms the data into a format effective for input to a second interface at the receiving gateway. Preferably, an error correcting syndrome is constructed at the sending gateway and sent with a data frame so that transmission errors can be detected and corrected on a real-time basis. Since the high speed data interface operates faster than any of the fiber optic links, the transmission rate must be adapted to match the available number of fiber optic links; the sending and receiving gateways therefore monitor the availability of fiber links and adjust the data throughput accordingly. In another aspect, the receiving gateway must have sufficient available buffer capacity to accept an incoming data frame. A credit-based flow control system provides for continuously updating the sending gateway on the available buffer capacity at the receiving gateway.

  11. High-performance parallel interface to synchronous optical network gateway

    DOEpatents

    St. John, W.B.; DuBois, D.H.

    1996-12-03

    Disclosed is a system of sending and receiving gateways that interconnects high speed data interfaces, e.g., HIPPI interfaces, through fiber optic links, e.g., a SONET network. An electronic stripe distributor distributes bytes of data from a first interface at the sending gateway onto parallel fiber optics of the fiber optic link to form transmitted data. An electronic stripe collector receives the transmitted data on the parallel fiber optics and reforms the data into a format effective for input to a second interface at the receiving gateway. Preferably, an error correcting syndrome is constructed at the sending gateway and sent with a data frame so that transmission errors can be detected and corrected on a real-time basis. Since the high speed data interface operates faster than any of the fiber optic links, the transmission rate must be adapted to match the available number of fiber optic links; the sending and receiving gateways therefore monitor the availability of fiber links and adjust the data throughput accordingly. In another aspect, the receiving gateway must have sufficient available buffer capacity to accept an incoming data frame. A credit-based flow control system provides for continuously updating the sending gateway on the available buffer capacity at the receiving gateway. 7 figs.

  12. Cloning vector

    DOEpatents

    Guilfoyle, R.A.; Smith, L.M.

    1994-12-27

    A vector comprising a filamentous phage sequence containing a first copy of filamentous phage gene X and other sequences necessary for the phage to propagate is disclosed. The vector also contains a second copy of filamentous phage gene X downstream from a promoter capable of promoting transcription in a bacterial host. In a preferred form of the present invention, the filamentous phage is M13 and the vector additionally includes a restriction endonuclease site located in such a manner as to substantially inactivate the second gene X when a DNA sequence is inserted into the restriction site. 2 figures.

  13. Cloning vector

    DOEpatents

    Guilfoyle, Richard A.; Smith, Lloyd M.

    1994-01-01

    A vector comprising a filamentous phage sequence containing a first copy of filamentous phage gene X and other sequences necessary for the phage to propagate is disclosed. The vector also contains a second copy of filamentous phage gene X downstream from a promoter capable of promoting transcription in a bacterial host. In a preferred form of the present invention, the filamentous phage is M13 and the vector additionally includes a restriction endonuclease site located in such a manner as to substantially inactivate the second gene X when a DNA sequence is inserted into the restriction site.

  14. Implementation and Performance Evaluation Using the Fuzzy Network Balanced Scorecard

    ERIC Educational Resources Information Center

    Tseng, Ming-Lang

    2010-01-01

    The balanced scorecard (BSC) is a multi-criteria evaluation concept that highlights the importance of performance measurement. However, although there is an abundance of literature on the BSC framework, there is a scarcity of literature regarding how the framework with dependence and interactive relationships should be properly implemented in…

  15. Dynamic Social Networks in High Performance Football Coaching

    ERIC Educational Resources Information Center

    Occhino, Joseph; Mallett, Cliff; Rynne, Steven

    2013-01-01

    Background: Sports coaching is largely a social activity where engagement with athletes and support staff can enhance the experiences for all involved. This paper examines how high performance football coaches develop knowledge through their interactions with others within a social learning theory framework. Purpose: The key purpose of this study…

  16. A Method for Integrating Thrust-Vectoring and Actuated Forebody Strakes with Conventional Aerodynamic Controls on a High-Performance Fighter Airplane

    NASA Technical Reports Server (NTRS)

    Lallman, Frederick J.; Davidson, John B.; Murphy, Patrick C.

    1998-01-01

    A method, called pseudo controls, of integrating several airplane controls to achieve cooperative operation is presented. The method eliminates conflicting control motions, minimizes the number of feedback control gains, and reduces the complication of feedback gain schedules. The method is applied to the lateral/directional controls of a modified high-performance airplane. The airplane has a conventional set of aerodynamic controls, an experimental set of thrust-vectoring controls, and an experimental set of actuated forebody strakes. The experimental controls give the airplane additional control power for enhanced stability and maneuvering capabilities while flying over an expanded envelope, especially at high angles of attack. The flight controls are scheduled to generate independent body-axis control moments. These control moments are coordinated to produce stability-axis angular accelerations. Inertial coupling moments are compensated. Thrust-vectoring controls are engaged according to their effectiveness relative to that of the aerodynamic controls. Vane-relief logic removes steady and slowly varying commands from the thrust-vectoring controls to alleviate heating of the thrust turning devices. The actuated forebody strakes are engaged at high angles of attack. This report presents the forward-loop elements of a flight control system that positions the flight controls according to the desired stability-axis accelerations. This report does not include the generation of the required angular acceleration commands by means of pilot controls or the feedback of sensed airplane motions.

  17. Static thrust-vectoring performance of nonaxisymmetric convergent-divergent nozzles with post-exit yaw vanes. M.S. Thesis - George Washington Univ., Aug. 1988

    NASA Technical Reports Server (NTRS)

    Foley, Robert J.; Pendergraft, Odis C., Jr.

    1991-01-01

    A static (wind-off) test was conducted in the Static Test Facility of the 16-ft transonic tunnel to determine the performance and turning effectiveness of post-exit yaw vanes installed on two-dimensional convergent-divergent nozzles. One nozzle design that was previously tested was used as a baseline, simulating dry power and afterburning power nozzles at both 0 and 20 degree pitch vectoring conditions. Vanes were installed on these four nozzle configurations to study the effects of vane deflection angle, longitudinal and lateral location, size, and camber. All vanes were hinged at the nozzle sidewall exit, and in addition, some were also hinged at the vane quarter chord (double-hinged). The vane concepts tested generally produced yaw thrust vectoring angles much smaller than the geometric vane angles, with resultant thrust losses of up to 8 percent. When the nozzles were pitch vectored, yawing effectiveness decreased as the vanes were moved downstream. Thrust penalties and yawing effectiveness both decreased rapidly as the vanes were moved outboard (laterally). Vane length and height changes increased yawing effectiveness and thrust ratio losses, while vane camber and double-hinged vanes increased resultant yaw angles by 50 to 100 percent.

  18. Theoretical Prediction of Hydrogen Separation Performance of Two-Dimensional Carbon Network of Fused Pentagon.

    PubMed

    Zhu, Lei; Xue, Qingzhong; Li, Xiaofang; Jin, Yakang; Zheng, Haixia; Wu, Tiantian; Guo, Qikai

    2015-12-30

    Using the van-der-Waals-corrected density functional theory (DFT) and molecular dynamics (MD) simulations, we theoretically predict the H2 separation performance of a new two-dimensional sp(2) carbon allotrope, the fused pentagon network. The DFT calculations demonstrate that the fused pentagon network with proper pore sizes presents a surmountable energy barrier (0.18 eV) for an H2 molecule to pass through. Furthermore, the fused pentagon network shows an exceptionally high selectivity for H2/gas (CO, CH4, CO2, N2, et al.) at 300 and 450 K. In addition, using MD simulations we demonstrate that the fused pentagon network exhibits a H2 permeance of 4 × 10(7) GPU at 450 K, which is much higher than the value (20 GPU) used in current industrial applications. With high selectivity and excellent permeability, the fused pentagon network should be an excellent candidate for H2 separation. PMID:26632974

  19. INCITE: Edge-based Traffic Processing and Inference for High-Performance Networks

    SciTech Connect

    Baraniuk, Richard G.; Feng, Wu-chun; Cottrell, Les; Knightly, Edward; Nowak, Robert; Riedi, Rolf

    2005-06-20

    The INCITE (InterNet Control and Inference Tools at the Edge) Project developed on-line tools to characterize and map host and network performance as a function of space, time, application, protocol, and service. In addition to their utility for trouble-shooting problems, these tools will enable a new breed of applications and operating systems that are network aware and resource aware. Launching from the foundation provided by our recent leading-edge research on network measurement, multifractal signal analysis, multiscale random fields, and quality of service, our effort consisted of three closely integrated research thrusts that directly attacked several key networking challenges of DOE's SciDAC program. These are: Thrust 1, Multiscale traffic analysis and modeling techniques; Thrust 2, Inference and control algorithms for network paths, links, and routers; and Thrust 3, Data collection tools.

  20. System for Automated Calibration of Vector Modulators

    NASA Technical Reports Server (NTRS)

    Lux, James; Boas, Amy; Li, Samuel

    2009-01-01

    Vector modulators are used to impose baseband modulation on RF signals, but non-ideal behavior limits the overall performance. The non-ideal behavior of the vector modulator is compensated for using data collected by an automated test system driven by a LabVIEW program that systematically applies thousands of control-signal values to the device under test and collects RF measurement data. The technology innovation automates several steps in the process. First, an automated test system, using computer-controlled digital-to-analog converters (DACs) and a computer-controlled vector network analyzer (VNA), can systematically apply different I and Q signals (which represent the complex number by which the RF signal is multiplied) to the vector modulator under test (VMUT), while measuring the RF performance, specifically gain and phase. The automated test system uses the LabVIEW software to control the test equipment, collect the data, and write it to a file. The input to the LabVIEW program is either user input for systematic variation, or is provided in a file containing specific test values that should be fed to the VMUT. The output file contains both the control signals and the measured data. The second step is to post-process the file to determine the correction functions as needed. The result of the entire process is a tabular representation, which allows translation of a desired I/Q value to the required analog control signals to produce a particular RF behavior. In some applications, corrected performance is needed only for a limited range. If the vector modulator is being used as a phase shifter, there is only a need to correct I and Q values that represent points on a circle, not the entire plane. This innovation has been used to calibrate 2-GHz MMIC (monolithic microwave integrated circuit) vector modulators in the High EIRP Cluster Array project (EIRP is high effective isotropic radiated power). These calibrations were then used to create
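
    The post-processing step described above (turning a table of control signals and measured responses into a correction function) might look roughly like the following sketch: the measured complex gain for each (I, Q) control pair is tabulated, and the control pair whose measured response is nearest to a desired gain/phase is looked up. The "measured" data here are synthesized with a toy non-ideal model; in the real system they come from the VNA sweep.

```python
import numpy as np

# Synthesize a calibration table: sweep control DAC values (i, q) and
# record the (simulated) measured complex gain of the modulator.
# The quadratic terms and offsets stand in for non-ideal behavior.
i_ctrl, q_ctrl = np.meshgrid(np.linspace(-1, 1, 41), np.linspace(-1, 1, 41))
measured = ((i_ctrl + 0.05 * i_ctrl**2 + 0.02) +
            1j * (0.95 * q_ctrl - 0.03 * q_ctrl**2 - 0.01))

table = np.column_stack([i_ctrl.ravel(), q_ctrl.ravel(), measured.ravel()])

def controls_for(desired, table):
    """Return the (I, Q) control values whose measured response is
    closest to the desired complex gain."""
    idx = np.argmin(np.abs(table[:, 2] - desired))
    return table[idx, 0].real, table[idx, 1].real

# Phase-shifter use case: correct only points on a constant-amplitude circle.
for phase_deg in (0, 90, 180, 270):
    desired = 0.8 * np.exp(1j * np.radians(phase_deg))
    print(phase_deg, controls_for(desired, table))
```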

  1. Results of computer network experiment via the Japanese communication satellite CS - Performance evaluation of communication protocols

    NASA Astrophysics Data System (ADS)

    Ito, A.; Kakinuma, Y.; Uchida, K.; Matsumoto, K.; Takahashi, H.

    1984-03-01

    Computer network experiments have been performed by using the Japanese communication satellite CS. The network is of a centralized (star) type, consisting of one center station and many user stations. The protocols are designed taking into consideration the long round-trip delay of a satellite channel. This paper treats the communication protocol aspects of the experiments. The performance of the burst-level and link protocols (which correspond roughly to the data link layer of the OSI 7-layer model) is evaluated. System performance in terms of throughput, delay, and link-level overhead is measured using statistically generated traffic.

  2. Application of Artificial Neural Networks to Investigate the Energy Performance of Household Refrigerator-Freezers

    NASA Astrophysics Data System (ADS)

    Saidur, R.; Masjuki, H. H.

    In this study, the energy consumption of 149 domestic refrigerators has been monitored in Malaysian households. A questionnaire was used to gather relevant information regarding the usage of this appliance in the actual kitchen environment to feed into the neural networks. The prediction performance of the Artificial Neural Network (ANN) approach was investigated using the actual monitored and survey data. Statistical measures, namely the fraction of variance (R2), the coefficient of variation (COV), and the RMS error, are calculated to judge the performance of the NN model. It has been found that the regression coefficient R2 is very close to unity for the best prediction results.
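
    The three statistics reported (R2, COV and RMS error) can be computed from measured and ANN-predicted consumption as in the short sketch below; the data arrays are placeholders, not values from the study.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Fraction of variance (R^2), coefficient of variation (COV, %)
    and RMS error for a set of predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    cov = 100.0 * rmse / y_true.mean()
    return r2, cov, rmse

# Placeholder data: measured vs. ANN-predicted monthly consumption (kWh).
measured = [55.2, 61.0, 48.7, 70.3, 66.1, 59.4]
predicted = [54.0, 62.5, 50.1, 68.9, 65.0, 60.2]
print(regression_metrics(measured, predicted))
```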

  3. Performance Analysis of Network Model to Identify Healthy and Cancerous Colon Genes.

    PubMed

    Roy, Tanusree; Barman, Soma

    2016-03-01

    Modeling of cancerous and healthy Homo sapiens colon genes using an electrical network is proposed to study their behavior. In this paper, the individual amino acid models are designed using the hydropathy index of the amino acid side chain. The phase and magnitude responses of genes are examined to distinguish cancerous from healthy genes. The performance of the proposed modeling technique is judged using various performance measurement metrics such as accuracy, sensitivity, specificity, etc. The network model performance increases with frequency, as analyzed using the receiver operating characteristic curve. The accuracy of the model is tested on colon genes and reaches a maximum of 97% at a frequency of 10 MHz. PMID:25730835

  4. Performance Analysis of Receive Diversity in Wireless Sensor Networks over GBSBE Models

    PubMed Central

    Goel, Shivali; Abawajy, Jemal H.; Kim, Tai-hoon

    2010-01-01

    Wireless sensor networks have attracted a lot of attention recently. In this paper, we develop a channel model based on the elliptical model for multipath components involving randomly placed scatterers in the scattering region with sensors deployed on a field. We verify that in a sensor network, the use of receive diversity techniques improves the performance of the system. Extensive performance analysis of the system is carried out for both single and multiple antennas with the applied receive diversity techniques. Performance analyses based on variations in receiver height, maximum multipath delay and transmit power have been performed considering different numbers of antenna elements present in the receiver array. Our results show that increasing the number of antenna elements for a wireless sensor network does indeed improve the bit error rates that can be obtained. PMID:22163510

  5. Vector quantization

    NASA Technical Reports Server (NTRS)

    Gray, Robert M.

    1989-01-01

    During the past ten years, Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments are made on the state of the art and current research efforts.
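
    A minimal sketch of the basic codebook-design loop behind VQ (the generalized Lloyd, or k-means, algorithm) is given below: training vectors are alternately assigned to their nearest codeword and the codewords are re-estimated as cell centroids, after which encoding maps each vector to the index of its nearest codeword. The dimensions and codebook size are arbitrary choices for illustration.

```python
import numpy as np

def train_codebook(vectors, codebook_size, iters=50, seed=0):
    """Generalized Lloyd algorithm: returns a (codebook_size, dim) codebook."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)]
    for _ in range(iters):
        # Nearest-codeword assignment (squared Euclidean distortion).
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Centroid update; keep the old codeword if a cell is empty.
        for k in range(codebook_size):
            members = vectors[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

def encode(vectors, codebook):
    d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(1)
training = rng.normal(size=(2000, 4))          # placeholder 4-D training vectors
cb = train_codebook(training, codebook_size=16)
indices = encode(training[:5], cb)             # 4 bits per 4-D vector here
print(indices, cb.shape)
```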

  6. Long-running telemedicine networks delivering humanitarian services: experience, performance and scientific output

    PubMed Central

    Geissbuhler, Antoine; Jethwani, Kamal; Kovarik, Carrie; Person, Donald A; Vladzymyrskyy, Anton; Zanaboni, Paolo; Zolfo, Maria

    2012-01-01

    Abstract Objective To summarize the experience, performance and scientific output of long-running telemedicine networks delivering humanitarian services. Methods Nine long-running networks – those operating for five years or more – were identified and seven provided detailed information about their activities, including performance and scientific output. Information was extracted from peer-reviewed papers describing the networks’ study design, effectiveness, quality, economics, provision of access to care and sustainability. The strength of the evidence was scored as none, poor, average or good. Findings The seven networks had been operating for a median of 11 years (range: 5–15). All networks provided clinical tele-consultations for humanitarian purposes using store-and-forward methods and five were also involved in some form of education. The smallest network had 15 experts and the largest had more than 500. The clinical caseload was 50 to 500 cases a year. A total of 59 papers had been published by the networks, and 44 were listed in Medline. Based on study design, the strength of the evidence was generally poor by conventional standards (e.g. 29 papers described non-controlled clinical series). Over half of the papers provided evidence of sustainability and improved access to care. Uncertain funding was a common risk factor. Conclusion Improved collaboration between networks could help attenuate the lack of resources reported by some networks and improve sustainability. Although the evidence base is weak, the networks appear to offer sustainable and clinically useful services. These findings may interest decision-makers in developing countries considering starting, supporting or joining similar telemedicine networks. PMID:22589567

  7. Network telemetry System Performance Tests in support of the Mark 3 data system implementation

    NASA Technical Reports Server (NTRS)

    Rey, R. D.; Nipper, E. J.

    1978-01-01

    The philosophy and the objectives of the telemetry system performance tests (SPTs) are discussed to demonstrate the benefits gained by performing these tests. The test procedure and test software are included. The results and the status of the Network Telemetry system are summarized.

  8. A performance analysis of DS-CDMA and SCPC VSAT networks

    NASA Technical Reports Server (NTRS)

    Hayes, David P.; Ha, Tri T.

    1990-01-01

    Spread-spectrum and single-channel-per-carrier (SCPC) transmission techniques work well in very small aperture terminal (VSAT) networks for multiple-access purposes while allowing the earth station antennas to remain small. Direct-sequence code-division multiple-access (DS-CDMA) is the simplest spread-spectrum technique to use in a VSAT network since a frequency synthesizer is not required for each terminal. An examination is made of the DS-CDMA and SCPC Ku-band VSAT satellite systems for low-density (64-kb/s or less) communications. A method for improving the standard link analysis of DS-CDMA satellite-switched networks by including certain losses is developed. The performance of 50-channel full mesh and star network architectures is analyzed. The selection of operating conditions producing optimum performance is demonstrated.

  9. Scalable software-defined optical networking with high-performance routing and wavelength assignment algorithms.

    PubMed

    Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin

    2015-10-19

    The feasibility of software-defined optical networking (SDON) for a practical application critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high network capacity with reduced computation cost, which is a significant attribute in a scalable centralized-control SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the routing table update procedures. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computational scalability. We further investigate the trade-off between network throughput and computational complexity in the routing table update procedure through a simulation study. PMID:26480397
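
    The flavor of heuristic described (shortest-path routing, a hottest-request-first processing order, and per-link wavelength assignment) can be sketched as below; the "hotness" metric, the first-fit wavelength assignment and the toy ring topology are assumptions for illustration, not the paper's exact algorithms.

```python
import networkx as nx

def rwa_hottest_first(graph, requests, n_wavelengths):
    """requests: list of (src, dst, intensity). Returns a list of
    (src, dst, path, wavelength) for the requests that could be served."""
    # Hotness: demand intensity times shortest-path length (assumed metric).
    def hotness(req):
        src, dst, intensity = req
        return intensity * nx.shortest_path_length(graph, src, dst)

    used = {tuple(sorted(e)): set() for e in graph.edges}   # wavelengths in use per link
    served = []
    for src, dst, intensity in sorted(requests, key=hotness, reverse=True):
        path = nx.shortest_path(graph, src, dst)
        links = [tuple(sorted((path[i], path[i + 1]))) for i in range(len(path) - 1)]
        # First-fit: lowest-index wavelength free on every link of the path.
        for wl in range(n_wavelengths):
            if all(wl not in used[l] for l in links):
                for l in links:
                    used[l].add(wl)
                served.append((src, dst, path, wl))
                break
    return served

G = nx.cycle_graph(6)                      # toy 6-node ring topology
reqs = [(0, 3, 5), (1, 4, 2), (2, 5, 8), (0, 2, 1)]
for entry in rwa_hottest_first(G, reqs, n_wavelengths=4):
    print(entry)
```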

  10. HPNAIDM: The High-Performance Network Anomaly/Intrusion Detection and Mitigation System

    SciTech Connect

    Chen, Yan

    2013-12-05

    Identifying traffic anomalies and attacks rapidly and accurately is critical for large network operators. With the rapid growth of network bandwidth, such as the next generation DOE UltraScience Network, and the fast emergence of new attacks/viruses/worms, existing network intrusion detection systems (IDS) are insufficient because they: • Are mostly host-based and not scalable to high-performance networks; • Are mostly signature-based and unable to adaptively recognize flow-level unknown attacks; • Cannot differentiate malicious events from the unintentional anomalies. To address these challenges, we proposed and developed a new paradigm called the high-performance network anomaly/intrusion detection and mitigation (HPNAIDM) system. The new paradigm is significantly different from existing IDSes with the following features (research thrusts). • Online traffic recording and analysis on high-speed networks; • Online adaptive flow-level anomaly/intrusion detection and mitigation; • Integrated approach for false positive reduction. Our research prototype and evaluation demonstrate that the HPNAIDM system is highly effective and economically feasible. Beyond satisfying the pre-set goals, we even exceeded them significantly (see more details in the next section). Overall, our project harvested 23 publications (2 book chapters, 6 journal papers and 15 peer-reviewed conference/workshop papers). In addition, we built a website for technique dissemination, which hosts two system prototype releases to the research community. We also filed a patent application and developed strong international and domestic collaborations which span both academia and industry.

  11. Performance evaluation of a sequential minimal radial basis function (RBF) neural network learning algorithm.

    PubMed

    Lu, Y; Sundararajan, N; Saratchandran, P

    1998-01-01

    This paper presents a detailed performance analysis of the minimal resource allocation network (M-RAN) learning algorithm. M-RAN is a sequential learning radial basis function neural network that combines the growth criterion of the resource allocating network (RAN) of Platt (1991) with a pruning strategy based on the relative contribution of each hidden unit to the overall network output. The resulting network leads toward a minimal topology for the RAN. The performance of this algorithm is compared with the multilayer feedforward networks (MFNs) trained with 1) a variant of the standard backpropagation algorithm, known as RPROP, and 2) the dependence identification (DI) algorithm of Moody and Antsaklis on several benchmark problems in the function approximation and pattern classification areas. For all these problems, the M-RAN algorithm is shown to realize networks with far fewer hidden neurons with better or the same approximation/classification accuracy. Further, the time taken for learning (training) is also considerably shorter as M-RAN does not require repeated presentation of the training data. PMID:18252454
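
    A minimal sketch of the RAN-style growth criterion underlying M-RAN follows (the pruning step that makes the network "minimal" is omitted): a new Gaussian hidden unit is allocated when both the prediction error and the distance to the nearest existing center exceed thresholds; otherwise the nearest unit's weight is adapted. The thresholds, widths and learning rule below are simplified assumptions, not the published algorithm.

```python
import numpy as np

class MinimalRAN:
    """Sequential RBF learner with a RAN-style growth criterion.
    (The pruning step that makes M-RAN 'minimal' is omitted here.)"""
    def __init__(self, err_thresh=0.1, dist_thresh=0.3, width=0.3, lr=0.05):
        self.centers, self.widths, self.weights = [], [], []
        self.err_thresh, self.dist_thresh, self.width, self.lr = (
            err_thresh, dist_thresh, width, lr)

    def predict(self, x):
        return sum(w * np.exp(-np.sum((x - c) ** 2) / (2 * s ** 2))
                   for c, s, w in zip(self.centers, self.widths, self.weights))

    def observe(self, x, y):
        err = y - self.predict(x)
        dists = [np.linalg.norm(x - c) for c in self.centers] or [np.inf]
        if abs(err) > self.err_thresh and min(dists) > self.dist_thresh:
            # Growth: allocate a new hidden unit at the novel input.
            self.centers.append(np.array(x, dtype=float))
            self.widths.append(self.width)
            self.weights.append(err)
        elif self.centers:
            # Otherwise adapt the weight of the nearest unit.
            k = int(np.argmin(dists))
            phi = np.exp(-np.sum((x - self.centers[k]) ** 2) / (2 * self.widths[k] ** 2))
            self.weights[k] += self.lr * err * phi

rng = np.random.default_rng(0)
net = MinimalRAN()
for _ in range(500):                      # sequential presentation, no epochs
    x = rng.uniform(-1, 1, size=1)
    net.observe(x, float(np.sin(3 * x[0])))
print("hidden units:", len(net.centers))
print("prediction at 0.5:", net.predict(np.array([0.5])), "target:", np.sin(1.5))
```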

  12. Performance analysis for gigabit Ethernet communication network under various data rates and switching structures

    NASA Astrophysics Data System (ADS)

    Tsao, Shyh-Lin; Wu, Jun-Yi

    2004-11-01

    This paper reports recent work on gigabit Ethernet interconnection communication traffic analysis. We propose and demonstrate the system concept of gigabit Ethernet communication. 1.25 Gbps, 10 Gbps and 40 Gbps gigabit Ethernet interconnection networks are considered for computer communications. Various switching structures, such as crossbar, double crossbar (Dcrossbar), Modified Dilated Benes (MDB), General MDB (GMDB), Benes, Dilated Benes (Dbenes), Tree architecture, Simplified tree (Stree), and Extended baseline network (Ebaseline), are analyzed to find the optimal performance of gigabit Ethernet communication. The numerical results for information transfer can be applied to find the best interconnection strategy for gigabit Ethernet communication networks.

  13. Laser ranging network performance and routine orbit determination at D-PAF

    NASA Technical Reports Server (NTRS)

    Massmann, Franz-Heinrich; Reigber, C.; Li, H.; Koenig, Rolf; Raimondo, J. C.; Rajasenan, C.; Vei, M.

    1993-01-01

    ERS-1 has now been in orbit for about 8 months and has been tracked by the global laser network from the very beginning of the mission. The German processing and archiving facility for ERS-1 (D-PAF) is coordinating and supporting the network and performing the different routine orbit determination tasks. This paper presents details about the global network status, the communication to D-PAF and the tracking data and orbit processing system at D-PAF. The quality of the preliminary and precise orbits is shown and some problem areas are identified.

  14. A holistic approach to ZigBee performance enhancement for home automation networks.

    PubMed

    Betzler, August; Gomez, Carles; Demirkol, Ilker; Paradells, Josep

    2014-01-01

    Wireless home automation networks are gaining importance for smart homes. In this context, ZigBee networks play an important role. The ZigBee specification defines a default set of protocol stack parameters and mechanisms that is further refined by the ZigBee Home Automation application profile. In a holistic approach, we analyze how the network performance is affected by the tuning of parameters and mechanisms across multiple layers of the ZigBee protocol stack and investigate possible performance gains by implementing and testing alternative settings. The evaluations are carried out in a testbed of 57 TelosB motes. The results show that considerable performance improvements can be achieved by using alternative protocol stack configurations. From these results, we derive two improved protocol stack configurations for ZigBee wireless home automation networks that are validated in various network scenarios. In our experiments, these improved configurations yield a relative packet delivery ratio increase of up to 33.6%, a delay decrease of up to 66.6% and an improvement of the energy efficiency for battery-powered devices of up to 48.7%, obtained without incurring any overhead on the network. PMID:25196004

  15. Experimental validation of optical layer performance monitoring using an all-optical network testbed

    NASA Astrophysics Data System (ADS)

    Vukovic, Alex; Savoie, Michel J.; Hua, Heng

    2004-11-01

    Communication transmission systems continue to evolve towards higher data rates, increased wavelength densities, longer transmission distances and more intelligence. Further development of dense wavelength division multiplexing (DWDM) and all-optical networks (AONs) will demand ever-tighter monitoring to assure a specified quality of service (QoS). Traditional monitoring methods have proven to be insufficient. A higher degree of self-control, intelligence and optimization of functions within next-generation networks requires new monitoring schemes to be developed and deployed. The perspectives and challenges of performance monitoring, along with its techniques, requirements and drivers, are discussed. It is pointed out that optical layer monitoring is a key enabler for self-control of next generation optical networks. Aside from its real-time feedback and the safeguarding of neighbouring channels, optical performance monitoring ensures the ability to build and control complex network topologies while maintaining an efficiently high QoS. Within an all-optical network testbed environment, key performance monitoring parameters are identified, assessed through real-time proof-of-concept, and proposed for network applications for the safeguarding of neighbouring channels in WDM systems.

  16. A Holistic Approach to ZigBee Performance Enhancement for Home Automation Networks

    PubMed Central

    Betzler, August; Gomez, Carles; Demirkol, Ilker; Paradells, Josep

    2014-01-01

    Wireless home automation networks are gaining importance for smart homes. In this context, ZigBee networks play an important role. The ZigBee specification defines a default set of protocol stack parameters and mechanisms that is further refined by the ZigBee Home Automation application profile. In a holistic approach, we analyze how the network performance is affected by the tuning of parameters and mechanisms across multiple layers of the ZigBee protocol stack and investigate possible performance gains by implementing and testing alternative settings. The evaluations are carried out in a testbed of 57 TelosB motes. The results show that considerable performance improvements can be achieved by using alternative protocol stack configurations. From these results, we derive two improved protocol stack configurations for ZigBee wireless home automation networks that are validated in various network scenarios. In our experiments, these improved configurations yield a relative packet delivery ratio increase of up to 33.6%, a delay decrease of up to 66.6% and an improvement of the energy efficiency for battery-powered devices of up to 48.7%, obtained without incurring any overhead on the network. PMID:25196004

  17. Performance of wavelet analysis and neural networks for pathological voices identification

    NASA Astrophysics Data System (ADS)

    Salhi, Lotfi; Talbi, Mourad; Abid, Sabeur; Cherif, Adnane

    2011-09-01

    Within the medical environment, diverse techniques exist to assess the state of the voice of the patient. The inspection technique is inconvenient for a number of reasons, such as its high cost, the duration of the inspection, and above all, the fact that it is an invasive technique. This study focuses on a robust, rapid and accurate system for automatic identification of pathological voices. This system employs a non-invasive, inexpensive and fully automated method based on a hybrid approach: wavelet transform analysis and a neural network classifier. First, we present the results obtained in our previous study while using classic feature parameters. These results allow visual identification of pathological voices. Second, quantified parameters derived from the wavelet analysis are proposed to characterise the speech sample. On the other hand, a system of multilayer neural networks (MNNs) has been developed which carries out the automatic detection of pathological voices. The developed method was evaluated using a voice database composed of recorded voice samples (continuous speech) from normophonic or dysphonic speakers. The dysphonic speakers were patients of the National Hospital 'RABTA' of Tunis, Tunisia and a University Hospital in Brussels, Belgium. Experimental results indicate a success rate ranging between 75% and 98.61% for discrimination of normal and pathological voices using the proposed parameters and neural network classifier. We also compared the average classification rate based on the MNN, Gaussian mixture model and support vector machines.

  18. Performance Analysis of TCP Enhancements in Satellite Data Networks

    NASA Technical Reports Server (NTRS)

    Broyles, Ren H.

    1999-01-01

    This research examines two proposed enhancements to the well-known Transport Control Protocol (TCP) in the presence of noisy communication links. The Multiple Pipes protocol is an application-level adaptation of the standard TCP protocol, where several TCP links cooperate to transfer data. The Space Communication Protocol Standard - Transport Protocol (SCPS-TP) modifies TCP to optimize performance in a satellite environment. While SCPS-TP has inherent advantages that allow it to deliver data more rapidly than Multiple Pipes, the protocol, when optimized for operation in a high-error environment, is not compatible with legacy TCP systems, and requires changes to the TCP specification. This investigation determines the level of improvement offered by SCPS-TP's Corruption Mode, which will help determine if migration to the protocol is appropriate in different environments. As the percentage of corrupted packets approaches 5 %, Multiple Pipes can take over five times longer than SCPS-TP to deliver data. At high error rates, SCPS-TP's advantage is primarily caused by Multiple Pipes' use of congestion control algorithms. The lack of congestion control, however, limits the systems in which SCPS-TP can be effectively used.

  19. Copper nanofiber-networked cobalt oxide composites for high performance Li-ion batteries

    PubMed Central

    2011-01-01

    We prepared a composite electrode structure consisting of copper nanofiber-networked cobalt oxide (CuNFs@CoOx). The copper nanofibers (CuNFs) were fabricated on a substrate with formation of a network structure, which may have potential for improving electron percolation and retarding film deformation during the discharging/charging process over the electroactive cobalt oxide. Compared to bare CoOx thin-film (CoOxTF) electrodes, the CuNFs@CoOx electrodes exhibited a significant enhancement of rate performance by at least six-fold at an input current density of 3C-rate. Such enhanced Li-ion storage performance may be associated with modified electrode structure at the nanoscale, improved charge transfer, and facile stress relaxation from the embedded CuNF network. Consequently, the CuNFs@CoOx composite structure demonstrated here can be used as a promising high-performance electrode for Li-ion batteries. PMID:21711839

  20. Centrality and charisma: comparing how leader networks and attributions affect team performance.

    PubMed

    Balkundi, Prasad; Kilduff, Martin; Harrison, David A

    2011-11-01

    When leaders interact in teams with their subordinates, they build social capital that can have positive effects on team performance. Does this social capital affect team performance because subordinates come to see the leader as charismatic? We answered this question by examining 2 models. First, we tested the charisma-to-centrality model according to which the leader's charisma facilitates the occupation of a central position in the informal advice network. From this central position, the leader positively influences team performance. Second, we examined the centrality-to-charisma model according to which charisma is attributed to those leaders who are socially active in terms of giving and receiving advice. Attributed charisma facilitates increased team performance. We tested these 2 models in 2 different studies. In the first study, based on time-separated, multisource data emanating from members of 56 work teams, we found support for the centrality-to-charisma model. Formal leaders who were central within team advice networks were seen as charismatic by subordinates, and this charisma was associated with high team performance. To clarify how leader network centrality affected the emergence of charismatic leadership, we designed Study 2 in which, for 79 student teams, we measured leader networking activity and leader charisma at 2 different points in time and related these variables to team performance measured at a third point in time. On the basis of this temporally separated data set, we again found support for the centrality-to-charisma model. PMID:21895351

  1. Classification of fault location and the degree of performance degradation of a rolling bearing based on an improved hyper-sphere-structured multi-class support vector machine

    NASA Astrophysics Data System (ADS)

    Wang, Yujing; Kang, Shouqiang; Jiang, Yicheng; Yang, Guangxue; Song, Lixin; Mikulovich, V. I.

    2012-05-01

    Effective classification of a rolling bearing fault location and especially its degree of performance degradation provides an important basis for appropriate fault judgment and processing. Two methods are introduced to extract features of the rolling bearing vibration signal—one combining empirical mode decomposition (EMD) with the autoregressive model, whose model parameters and residual variances can be obtained using the Yule-Walker or Ulrych-Clayton method, and the other combining EMD with singular value decomposition. The feature vector matrices obtained are then used as the input to the improved hyper-sphere-structured multi-class support vector machine (HSSMC-SVM) for classification. Thereby, multi-status intelligent diagnosis of normal rolling bearings and faulty rolling bearings at different locations and the degrees of performance degradation of the faulty rolling bearings can be achieved simultaneously. Experimental results show that EMD combined with singular value decomposition and the improved HSSMC-SVM intelligent method requires less time and has a higher recognition rate.
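
    A hedged sketch of the second feature-extraction route (EMD combined with singular value decomposition) feeding a multi-class SVM is shown below. It assumes the PyEMD package (EMD-signal) for the decomposition and uses scikit-learn's standard SVC rather than the improved HSSMC-SVM of the paper; the synthetic signals merely stand in for measured bearing vibration data.

```python
import numpy as np
from PyEMD import EMD                 # assumed dependency: the EMD-signal package
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def svd_features(signal, n_features=6):
    """EMD the signal, then use the singular values of the IMF matrix
    as a fixed-length feature vector (zero-padded if fewer IMFs)."""
    imfs = EMD().emd(signal, max_imf=n_features)
    sv = np.linalg.svd(imfs, compute_uv=False)
    feats = np.zeros(n_features)
    feats[: min(n_features, len(sv))] = sv[:n_features]
    return feats

def synth_signal(state, n=1024, fs=2000.0, seed=None):
    """Toy stand-in for bearing vibration: different impact rates per state."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) / fs
    rate = {"normal": 0.0, "inner": 60.0, "outer": 90.0}[state]
    sig = rng.normal(0, 0.2, n) + np.sin(2 * np.pi * 30 * t)
    if rate:
        sig += 0.8 * (np.sin(2 * np.pi * rate * t) > 0.95)   # periodic impacts
    return sig

states = ["normal", "inner", "outer"]
X = np.array([svd_features(synth_signal(s, seed=i)) for i in range(60) for s in states])
y = np.array([s for i in range(60) for s in states])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)    # standard multi-class SVC, not HSSMC-SVM
print("test accuracy:", clf.score(X_te, y_te))
```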

  2. Windows NT 4.0 Asynchronous Transfer Mode network interface card performance

    SciTech Connect

    Tolendino, L.F.

    1997-02-18

    Windows NT desktop and server systems are becoming increasingly important to Sandia. These systems are capable of network performance considerably in excess of the 10 Mbps Ethernet data rate. As alternatives to conventional Ethernet, 155 Mbps Asynchronous Transfer Mode, ATM, and 100 Mbps Ethernet network interface cards were tested and compared to conventional 10 Mbps Ethernet cards in a typical Windows NT system. The results of the tests were analyzed and compared to show the advantages of the alternative technologies. Both 155 Mbps ATM and 100 Mbps Ethernet offer significant performance improvements over conventional 10 Mbps shared media Ethernet.

  3. Analysis of physical layer performance of hybrid optical-wireless access network

    NASA Astrophysics Data System (ADS)

    Shaddad, R. Q.; Mohammad, A. B.; Al-hetar, A. M.

    2011-09-01

    The hybrid optical-wireless access network (HOWAN) is a favorable architecture for the next-generation access network. It is an optimal combination of an optical backhaul and a wireless front-end for an efficient access network. In this paper, the HOWAN architecture is designed based on a wavelength division multiplexing/time division multiplexing passive optical network (WDM/TDM PON) at the optical backhaul and a wireless fidelity (WiFi) technology at the wireless front-end. The proposed HOWAN can provide blanket broadband coverage and flexible connections for end-users. Most of the existing performance evaluation work is concerned with network layer aspects. This paper reports physical layer performance in terms of the bit error rate (BER), eye diagram, and signal-to-noise ratio (SNR) of the communication system. It accommodates 8 wavelength channels with 32 optical network unit/wireless access points (ONU/APs). It is demonstrated that downstream and upstream rates of 2 Gb/s per wavelength channel can be achieved by the optical backhaul along an optical fiber length of 20 km and a data rate of 54 Mb/s per ONU/AP along a 50 m outdoor wireless link.

  4. The tendon network of the fingers performs anatomical computation at a macroscopic scale.

    PubMed

    Valero-Cuevas, Francisco J; Yi, Jae-Woong; Brown, Daniel; McNamara, Robert V; Paul, Chandana; Lipson, Hood

    2007-06-01

    Current thinking attributes information processing for neuromuscular control exclusively to the nervous system. Our cadaveric experiments and computer simulations show, however, that the tendon network of the fingers performs logic computation to preferentially change torque production capabilities. How this tendon network propagates tension to enable manipulation has been debated since the time of Vesalius and DaVinci and remains an unanswered question. We systematically changed the proportion of tension to the tendons of the extensor digitorum versus the two dorsal interosseous muscles of two cadaver fingers and measured the tension delivered to the proximal and distal interphalangeal joints. We find that the distribution of input tensions in the tendon network itself regulates how tensions propagate to the finger joints, acting like the switching function of a logic gate that nonlinearly enables different torque production capabilities. Computer modeling reveals that the deformable structure of the tendon networks is responsible for this phenomenon; and that this switching behavior is an effective evolutionary solution permitting a rich repertoire of finger joint actuation not possible with simpler tendon paths. We conclude that the structural complexity of this tendon network, traditionally oversimplified or ignored, may in fact be critical to understanding brain-body coevolution and neuromuscular control. Moreover, this form of information processing at the macroscopic scale is a new instance of the emerging principle of nonneural "somatic logic" found to perform logic computation such as in cellular networks. PMID:17549909

  5. Scheduling, bandwidth allocation and performance evaluation of DOCSIS protocol over cable networks

    NASA Astrophysics Data System (ADS)

    Kuo, Wen-Kuang; Kumar, Sunil; Kuo, C.-C. Jay

    2002-12-01

    The Data Over Cable Service Interface Specifications (DOCSIS) of the Multimedia Cable Network System (MCNS) organization intends to support IP traffic over HFC (hybrid fiber/coax) networks with significantly higher data rates than analog modems and Integrated Service Digital Network (ISDN) links. The availability of high-speed access enables the delivery of high quality audio, video and interactive services. To support quality-of-service (QoS) for such applications, it is important for HFC networks to provide effective medium access and traffic scheduling mechanisms. In this work, a novel scheduling mechanism and a new bandwidth allocation scheme are proposed to support multimedia traffic over DOCSIS (Data Over Cable System Interface Specification)-compliant cable networks. The primary goal of our research is to improve the transmission of real-time variable bit rate (VBR) traffic in terms of throughput and delay under DOCSIS. To support integrated services, we also consider the transmission of constant bit rate (CBR) traffic and non-real-time traffic in the simulation. To demonstrate the performance, we compare the result of the proposed scheme with that of a simple multiple priority scheme. It is shown via simulation that the proposed method provides a significant amount of improvement over existing QoS scheduling services in DOCSIS. Finally, a discrete-time Markov model is used to analyze the performance of the voice traffic over DOCSIS-supported cable networks.

  6. Designing optimal greenhouse gas observing networks that consider performance and cost

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Yver Kwok, C.; Cameron-Smith, P.; Graven, H.; Bergmann, D.; Guilderson, T. P.; Weiss, R.; Keeling, R.

    2015-06-01

    Emission rates of greenhouse gases (GHGs) entering into the atmosphere can be inferred using mathematical inverse approaches that combine observations from a network of stations with forward atmospheric transport models. Some locations for collecting observations are better than others for constraining GHG emissions through the inversion, but the best locations for the inversion may be inaccessible or limited by economic and other non-scientific factors. We present a method to design an optimal GHG observing network in the presence of multiple objectives that may be in conflict with each other. As a demonstration, we use our method to design a prototype network of six stations to monitor summertime emissions in California of the potent GHG 1,1,1,2-tetrafluoroethane (CH2FCF3, HFC-134a). We use a multiobjective genetic algorithm to evolve network configurations that seek to jointly maximize the scientific accuracy of the inferred HFC-134a emissions and minimize the associated costs of making the measurements. The genetic algorithm effectively determines a set of "optimal" observing networks for HFC-134a that satisfy both objectives (i.e., the Pareto frontier). The Pareto frontier is convex, and clearly shows the tradeoffs between performance and cost, and the diminishing returns in trading one for the other. Without difficulty, our method can be extended to design optimal networks to monitor two or more GHGs with different emissions patterns, or to incorporate other objectives and constraints that are important in the practical design of atmospheric monitoring networks.
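
    As a rough illustration of the multiobjective selection step described above, the sketch below extracts the Pareto frontier from a set of randomly generated six-station candidate networks. The site list, the per-site scores, and the `score` function are placeholders; the actual study evaluates inversion accuracy with an atmospheric transport model and evolves the networks with a genetic algorithm rather than random sampling.

    ```python
    import random

    def pareto_front(candidates):
        """Keep candidates not dominated in (error, cost); both are minimized."""
        front = []
        for c in candidates:
            dominated = any(o["error"] <= c["error"] and o["cost"] <= c["cost"]
                            and (o["error"] < c["error"] or o["cost"] < c["cost"])
                            for o in candidates)
            if not dominated:
                front.append(c)
        return front

    # Illustrative candidate sites: (inversion-error proxy, installation cost).
    sites = {name: (random.uniform(0.5, 2.0), random.uniform(10, 100))
             for name in "ABCDEFGHIJKL"}

    def score(network):
        # Placeholder objectives: a real study would run the transport-model
        # inversion for the error term and sum real measurement costs.
        error = sum(sites[s][0] for s in network) / len(network)
        cost = sum(sites[s][1] for s in network)
        return {"stations": network, "error": error, "cost": cost}

    candidates = [score(tuple(random.sample(list(sites), 6))) for _ in range(500)]
    for c in sorted(pareto_front(candidates), key=lambda c: c["cost"]):
        print(c["stations"], round(c["error"], 2), round(c["cost"], 1))
    ```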

  7. Designing optimal greenhouse gas observing networks that consider performance and cost

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Yver Kwok, C.; Cameron-Smith, P.; Graven, H.; Bergmann, D.; Guilderson, T. P.; Weiss, R.; Keeling, R.

    2014-12-01

    Emission rates of greenhouse gases (GHGs) entering into the atmosphere can be inferred using mathematical inverse approaches that combine observations from a network of stations with forward atmospheric transport models. Some locations for collecting observations are better than others for constraining GHG emissions through the inversion, but the best locations for the inversion may be inaccessible or limited by economic and other non-scientific factors. We present a method to design an optimal GHG observing network in the presence of multiple objectives that may be in conflict with each other. As a demonstration, we use our method to design a prototype network of six stations to monitor summertime emissions in California of the potent GHG 1,1,1,2-tetrafluoroethane (CH2FCF3, HFC-134a). We use a multiobjective genetic algorithm to evolve network configurations that seek to jointly maximize the scientific accuracy of the inferred HFC-134a emissions and minimize the associated costs of making the measurements. The genetic algorithm effectively determines a set of "optimal" observing networks for HFC-134a that satisfy both objectives (i.e., the Pareto frontier). The Pareto frontier is convex, and clearly shows the tradeoffs between performance and cost, and the diminishing returns in trading one for the other. Without difficulty, our method can be extended to design optimal networks to monitor two or more GHGs with different emissions patterns, or to incorporate other objectives and constraints that are important in the practical design of atmospheric monitoring networks.

  8. Support vector machine regression (LS-SVM)--an alternative to artificial neural networks (ANNs) for the analysis of quantum chemistry data?

    PubMed

    Balabin, Roman M; Lomakina, Ekaterina I

    2011-06-28

    A multilayer feed-forward artificial neural network (MLP-ANN) with a single, hidden layer that contains a finite number of neurons can be regarded as a universal non-linear approximator. Today, the ANN method and linear regression (MLR) model are widely used for quantum chemistry (QC) data analysis (e.g., thermochemistry) to improve their accuracy (e.g., Gaussian G2-G4, B3LYP/B3-LYP, X1, or W1 theoretical methods). In this study, an alternative approach based on support vector machines (SVMs) is used, the least squares support vector machine (LS-SVM) regression. It has been applied to ab initio (first principles) and density functional theory (DFT) quantum chemistry data. Thus, the QC + SVM methodology is an alternative to the QC + ANN one. The task of the study was to estimate the Møller-Plesset (MPn) or DFT (B3LYP, BLYP, BMK) energies calculated with large basis sets (e.g., 6-311G(3df,3pd)) using smaller ones (6-311G, 6-311G*, 6-311G**) plus molecular descriptors. A molecular set (BRM-208) containing a total of 208 organic molecules was constructed and used for the LS-SVM training, cross-validation, and testing. MP2, MP3, MP4(DQ), MP4(SDQ), and MP4/MP4(SDTQ) ab initio methods were tested. Hartree-Fock (HF/SCF) results were also reported for comparison. Furthermore, constitutional (CD: total number of atoms and mole fractions of different atoms) and quantum-chemical (QD: HOMO-LUMO gap, dipole moment, average polarizability, and quadrupole moment) molecular descriptors were used for building the LS-SVM calibration model. Prediction accuracies (MADs) of 1.62 ± 0.51 and 0.85 ± 0.24 kcal mol(-1) (1 kcal mol(-1) = 4.184 kJ mol(-1)) were reached for SVM-based approximations of ab initio and DFT energies, respectively. The LS-SVM model was more accurate than the MLR model. A comparison with the artificial neural network approach shows that the accuracy of the LS-SVM method is similar to the accuracy of ANN. The extrapolation and interpolation results show that LS-SVM is
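
    LS-SVM regression as used above reduces training to a single linear solve of the Karush-Kuhn-Tucker system. The following sketch assumes an RBF kernel and synthetic descriptor data; the descriptor values, kernel width, and regularization constant are illustrative, not those of the BRM-208 study.

    ```python
    import numpy as np

    def rbf_kernel(A, B, sigma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
        """Least-squares SVM regression: solve the KKT linear system
        [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
        n = len(y)
        K = rbf_kernel(X, X, sigma)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / gamma
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        return sol[0], sol[1:]                       # bias b, dual weights alpha

    def lssvm_predict(Xtest, Xtrain, alpha, b, sigma=1.0):
        return rbf_kernel(Xtest, Xtrain, sigma) @ alpha + b

    # Toy usage: descriptors -> energy correction (synthetic numbers only).
    X = np.random.rand(40, 5)                        # stand-in CD/QD descriptors
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.01 * np.random.randn(40)
    b, alpha = lssvm_fit(X, y)
    print(lssvm_predict(X[:3], X, alpha, b))
    ```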

  9. Phylogeographic analysis reveals association of tick-borne pathogen, Anaplasma marginale, MSP1a sequences with ecological traits affecting tick vector performance

    PubMed Central

    Estrada-Peña, Agustín; Naranjo, Victoria; Acevedo-Whitehouse, Karina; Mangold, Atilio J; Kocan, Katherine M; de la Fuente, José

    2009-01-01

    Background The tick-borne pathogen Anaplasma marginale, which is endemic worldwide, is the type species of the genus Anaplasma (Rickettsiales: Anaplasmataceae). Rhipicephalus (Boophilus) microplus is the most important tick vector of A. marginale in tropical and subtropical regions of the world. Despite extensive characterization of the genetic diversity in A. marginale geographic strains using major surface protein sequences, little is known about the biogeography and evolution of A. marginale and other Anaplasma species. For A. marginale, MSP1a was shown to be involved in vector-pathogen and host-pathogen interactions and to have evolved under positive selection pressure. The MSP1a of A. marginale strains differs in molecular weight because of a variable number of tandem 23-31 amino acid repeats and has proven to be a stable marker of strain identity. While phylogenetic studies of MSP1a repeat sequences have shown evidence of A. marginale-tick co-evolution, these studies have not provided phylogeographic information on a global scale because of the high level of MSP1a genetic diversity among geographic strains. Results In this study we showed that the phylogeography of A. marginale MSP1a sequences is associated with world ecological regions (ecoregions), resulting in different evolutionary pressures and hence different MSP1a sequences. The results demonstrated that the MSP1a first (R1) and last (RL) repeats and microsatellite sequences were associated with world ecoregion clusters with specific and different environmental envelopes. The evolution of R1 repeat sequences was found to be under positive selection. It is hypothesized that the driving environmental factors regulating tick populations could act on the selection of different A. marginale MSP1a sequence lineages, associated with each ecoregion. Conclusion The results reported herein provided the first evidence that the evolution of A. marginale was linked to ecological traits affecting tick vector performance. These

  10. VLSI Processor For Vector Quantization

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1995-01-01

    Pixel intensities in each kernel compared simultaneously with all code vectors. Prototype high-performance, low-power, very-large-scale integrated (VLSI) circuit designed to perform compression of image data by vector-quantization method. Contains relatively simple analog computational cells operating on direct or buffered outputs of photodetectors grouped into blocks in imaging array, yielding vector-quantization code word for each such block in sequence. Scheme exploits parallel-processing nature of vector-quantization architecture, with consequent increase in speed.
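
    A software analogue of the chip's operation is sketched below: each image block is compared against every code vector and encoded as the index of the closest one. The VLSI circuit performs these comparisons in parallel in analog hardware; this serial NumPy version, with a made-up block size and codebook, only illustrates the underlying vector-quantization step.

    ```python
    import numpy as np

    def encode_blocks(image, codebook, block=4):
        """Assign each block x block tile the index of its nearest code vector."""
        h, w = image.shape
        codes = []
        for r in range(0, h - h % block, block):
            for c in range(0, w - w % block, block):
                tile = image[r:r + block, c:c + block].reshape(-1)
                # The chip compares a tile with all code vectors simultaneously;
                # here the same nearest-neighbour search is done serially.
                d2 = ((codebook - tile) ** 2).sum(axis=1)
                codes.append(int(d2.argmin()))
        return np.array(codes)

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(32, 32)).astype(float)
    codebook = rng.integers(0, 256, size=(64, 16)).astype(float)  # 64 code vectors
    print(encode_blocks(img, codebook)[:10])
    ```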

  11. Advanced Communication Technology Satellite (ACTS) Very Small Aperture Terminal (VSAT) Network Control Performance

    NASA Technical Reports Server (NTRS)

    Coney, T. A.

    1996-01-01

    This paper discusses the performance of the network control function for the Advanced Communications Technology Satellite (ACTS) very small aperture terminal (VSAT) full mesh network. This includes control of all operational activities such as acquisition, synchronization, timing and rain fade compensation, as well as control of all communications activities such as on-demand integrated services (voice, video, and data) connects and disconnects. Operations control is provided by an in-band orderwire carried in the baseband processor (BBP) control burst, the orderwire burst, the reference burst, and the uplink traffic burst. Communication services are provided by demand assigned multiple access (DAMA) protocols. The ACTS implementation of DAMA protocols ensures both on-demand and integrated voice, video and data services. Communications services control is also provided by the in-band orderwire but uses only the reference burst and the uplink traffic burst. The performance of the ACTS network control functions has been successfully tested during on-orbit checkout and in various VSAT networks in day-to-day operations. This paper discusses the network operations and services control performance.

  12. System performance analysis of time-division-multiplexing passive optical network using directly modulated lasers or colorless optical network units

    NASA Astrophysics Data System (ADS)

    Gong, Xiaoxue; Guo, Lei; Liu, Yejun; Zhou, Yufang

    2015-05-01

    As a promising technology for broadband communication, passive optical network (PON) has been deployed to support the last-mile broadband access network. In particular, time-division-multiplexing PON (TDM-PON) has been widely used owing to its mature technology and low cost. To practically implement TDM-PONs, the combination of intensity modulation and direct detection is a very promising technique because it achieves cost reduction in system installation and maintenance. However, the current intensity-modulation and direct-detection TDM-PON still suffers from some problems, which mainly include a high-power penalty, detrimental Brillouin backscattering (BB), and so on. Thus, using directly modulated lasers (DMLs) and colorless optical network units (ONUs), respectively, two intensity-modulation and direct-detection TDM-PON architectures are proposed. Using VPI (an optical simulation software developed by VPIphotonics company) simulators, we first analyze the influences on DML-based intensity-modulation and direct-detection TDM-PON (system 1) performances, which mainly include bit error rate (BER) and power penalty. Next, the BB effect on the BER of the intensity-modulation and direct-detection TDM-PON that uses colorless ONUs (system 2) is also investigated. The simulation results show that: (1) a low-power penalty is achieved without degrading the BER of system 1, and (2) the BB can be effectively reduced using phase modulation of the optical carrier in system 2.

  13. Performance evaluation of a high-speed switched network for PACS

    NASA Astrophysics Data System (ADS)

    Zhang, Randy H.; Tao, Wenchao; Huang, Lu J.; Valentino, Daniel J.

    1998-07-01

    We have replaced our shared-media Ethernet and FDDI network with a multi-tiered, switched network using OC-12 (622 Mbps) ATM for the network backbone, OC-3 (155 Mbps) connections to high-end servers and display workstations, and switched 100/10 Mbps Ethernet for workstations and desktop computers. The purpose of this research was to help PACS designers and implementers understand key performance factors in a high-speed switched network by characterizing and evaluating its image delivery performance, specifically, the performance of socket-based TCP (Transmission Control Protocol) and DICOM 3.0 communications. A test network within the UCLA Clinical RIS/PACS was constructed using Sun UltraSPARC-II machines with ATM, Fast Ethernet, and Ethernet network interfaces. To identify performance bottlenecks, we evaluated network throughput for memory to memory, memory to disk, disk to memory, and disk to disk transfers. To evaluate the effect of file size, tests involving disks were further divided using sizes of small (514 KB), medium (8 MB), and large (16 MB) files. The observed maximum throughput for various network configurations using the TCP protocol was 117 Mbps for memory to memory and 88 Mbps for memory to disk. For disk to memory, the peak throughput was 98 Mbps using small files, 114 Mbps using medium files, and 116 Mbps using large files. The peak throughput for disk to disk became 64 Mbps using small files and 96 Mbps using medium and large files. The peak throughput using the DICOM 3.0 protocol was substantially lower in all categories. The measured throughput varied significantly among the tests when the TCP socket buffer size was raised above the default value. The optimal buffer size was approximately 16 KB for the TCP protocol and around 256 KB for the DICOM protocol. The application message size also displayed distinctive effects on network throughput when the TCP socket buffer size was varied. The throughput results for Fast Ethernet and Ethernet were expectedly
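
    The socket-buffer dependence reported above can be reproduced in principle by sweeping the send-buffer size before each throughput run. The sketch below shows only the mechanism: the sizes are the study's rough scale, but a real test would connect to the archive host and time the transfer of small, medium and large image files at each setting.

    ```python
    import socket

    def tuned_socket(sndbuf_bytes):
        """Create a TCP socket with an explicit send-buffer size; the option must
        be set before connect() so the kernel sizes the TCP window accordingly."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, sndbuf_bytes)
        return s

    # Sweep candidate buffer sizes; 16 KB was reported as near-optimal for raw TCP.
    for size in (8 * 1024, 16 * 1024, 64 * 1024, 256 * 1024):
        s = tuned_socket(size)
        granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
        print(f"requested {size} B, kernel granted {granted} B")  # Linux may double it
        s.close()
    ```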

  14. Support vector machine for automatic pain recognition

    NASA Astrophysics Data System (ADS)

    Monwar, Md Maruf; Rezaei, Siamak

    2009-02-01

    Facial expressions are a key index of emotion and the interpretation of such expressions of emotion is critical to everyday social functioning. In this paper, we present an efficient video analysis technique for recognition of a specific expression, pain, from human faces. We employ an automatic face detector which detects face from the stored video frame using skin color modeling technique. For pain recognition, location and shape features of the detected faces are computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the results with neural network based and eigenimage based automatic pain recognition systems. The experiment results indicate that using support vector machine as classifier can certainly improve the performance of automatic pain recognition system.
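
    A minimal version of the classification stage, assuming the face-detection and feature-extraction steps already produced a numeric feature vector per frame, might look like the following scikit-learn sketch; the feature matrix and labels here are synthetic stand-ins, not the paper's data.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Illustrative feature matrix: one row per detected face, columns are
    # location/shape measurements (e.g. eyebrow distance, mouth opening).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 12))
    y = rng.integers(0, 2, size=200)          # 1 = pain expression, 0 = neutral

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```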

  15. Data-flow Performance Optimisation on Unreliable Networks: the ATLAS Data-Acquisition Case

    NASA Astrophysics Data System (ADS)

    Colombo, Tommaso; ATLAS Collaboration

    2015-05-01

    The ATLAS detector at CERN records proton-proton collisions delivered by the Large Hadron Collider (LHC). The ATLAS Trigger and Data-Acquisition (TDAQ) system identifies, selects, and stores interesting collision data. These are received from the detector readout electronics at an average rate of 100 kHz. The typical event data size is 1 to 2 MB. Overall, the ATLAS TDAQ system can be seen as a distributed software system executed on a farm of roughly 2000 commodity PCs. The worker nodes are interconnected by an Ethernet network that at the restart of the LHC in 2015 is expected to experience a sustained throughput of several tens of GB/s. A particular type of challenge posed by this system, and by DAQ systems in general, is the inherently bursty nature of the data traffic from the readout buffers to the worker nodes. This can cause instantaneous network congestion and therefore performance degradation. The effect is particularly pronounced for unreliable network interconnections, such as Ethernet. In this paper we report on the design of the data-flow software for the 2015-2018 data-taking period of the ATLAS experiment. This software will be responsible for transporting the data across the distributed Data-Acquisition system. We will focus on the strategies employed to manage the network congestion and therefore minimise the data-collection latency and maximise the system performance. We will discuss the results of systematic measurements performed on different types of networking hardware. These results highlight the causes of network congestion and the effects on the overall system performance.

  16. Vectorized garbage collection

    SciTech Connect

    Appel, A.W.; Bendiksen, A.

    1988-01-01

    Garbage collection can be done in vector mode on supercomputers like the Cray-2 and the Cyber 205. Both copying collection and mark-and-sweep can be expressed as breadth-first searches in which the queue can be processed in parallel. The authors have designed a copying garbage collector whose inner loop works entirely in vector mode. The only significant limitation of the algorithm is that if the size of the records is not constant, the implementation becomes much more complicated. The authors give performance measurements of the algorithm as implemented for Lisp CONS cells on the Cyber 205. Vector-mode garbage collection performs up to 9 times faster than scalar-mode collection.
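
    The breadth-first structure that makes the copying collector vectorizable can be seen in a scalar sketch: the to-space doubles as the BFS queue and a scan pointer walks it cell by cell. A vector-mode implementation would process a whole slab of queued cells per step; the toy heap layout below is purely illustrative and is not the authors' Cyber 205 code.

    ```python
    def copy_collect(roots, heap):
        """Cheney-style copying collection: to-space acts as the BFS queue."""
        to_space, forward = [], {}              # forward: old address -> new address

        def copy(addr):
            if addr not in forward:
                forward[addr] = len(to_space)
                to_space.append(list(heap[addr]))   # shallow copy of the cell
            return forward[addr]

        new_roots = [copy(r) for r in roots]
        scan = 0
        while scan < len(to_space):             # breadth-first over reachable cells
            cell = to_space[scan]
            # Integer fields are treated as pointers, everything else as data.
            cell[:] = [copy(f) if isinstance(f, int) else f for f in cell]
            scan += 1
        return new_roots, to_space

    # Heap of CONS-like cells; cell 3 is unreachable and is never copied.
    heap = {0: [1, "a"], 1: [2, "b"], 2: [None, "c"], 3: ["garbage", None]}
    print(copy_collect([0], heap))
    ```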

  17. Performance evaluation of a multi-granularity and multi-connectivity circuit switched network

    NASA Astrophysics Data System (ADS)

    Guo, Naixing; Xin, Maoqing; Sun, Weiqiang; Jin, Yaohui; Zhu, Yi; Zhang, Chunlei; Hu, Weisheng; Xie, Guowu

    2007-11-01

    This paper introduces a novel notion of multi-granularity and multi-connectivity circuit switched network. Based on this notion, four routing schemes - Fixed Routing (FR), Maximum Remain (MR), Secured Maximum Remain (SMR) and Premium/Punishment Modification (PPM) are proposed. Numerical simulation results about the performance of these four schemes are also presented in this paper.

  18. Early detection monitoring of aquatic invasive species: Measuring performance success in a Lake Superior pilot network

    EPA Science Inventory

    The Great Lakes Water Quality Agreement, Annex 6 calls for a U.S.-Canada, basin-wide aquatic invasive species early detection network by 2015. The objective of our research is to explore survey design strategies that can improve detection efficiency, and to develop performance me...

  19. Quality Performance Assessment as a Source of Motivation for Lecturers: A Teaching Network Experience

    ERIC Educational Resources Information Center

    Andreu, R.; Canos, L.; de Juana, S.; Manresa, E.; Rienda, L.; Tari, J. J.

    2006-01-01

    Purpose: The purpose of this paper is to present findings derived from research work carried out by a team of six university lecturers who are members of a teaching quality improvement network. The aim is to increase the motivation of the lecturers involved, so that better performance can be achieved, and the teaching-learning process enriched.…

  20. TCP performance in ATM networks: ABR parameter tuning and ABR/UBR comparisons

    SciTech Connect

    Chien Fang; Lin, A.

    1996-02-27

    This paper explores two issues on TCP performance over ATM networks: ABR parameter tuning and performance comparison of binary mode ABR with enhanced UBR services. Of the fifteen parameters defined for ABR, two parameters dominate binary mode ABR performance: Rate Increase Factor (RIF) and Rate Decrease Factor (RDF). Using simulations, we study the effects of these two parameters on TCP over ABR performance. We compare TCP performance with different ABR parameter settings in terms of throughputs and fairness. The effects of different buffer sizes and LAN/WAN distances are also examined. We then compare TCP performance under the best ABR parameter setting with the corresponding UBR service enhanced with Early Packet Discard and also with a fair buffer allocation scheme. The results show that TCP performance over binary mode ABR is very sensitive to parameter value settings, and that a poor choice of parameters can result in ABR performance worse than that of the much less expensive UBR-EPD scheme.

  1. Vector Magnetograph Design

    NASA Technical Reports Server (NTRS)

    Chipman, Russell A.

    1996-01-01

    This report covers work performed during the period of November 1994 through March 1996 on the design of a Space-borne Solar Vector Magnetograph. This work has been performed as part of a design team under the supervision of Dr. Mona Hagyard and Dr. Alan Gary of the Space Science Laboratory. Many tasks were performed and this report documents the results from some of those tasks, each contained in the corresponding appendix. Appendices are organized in chronological order.

  2. Performance of a random access packet network with time-capture capability

    NASA Technical Reports Server (NTRS)

    Lin, Y. H.

    1983-01-01

    The Joint Tactical Information Distribution System (JTIDS) is applied to a digital network supporting the command, control and communication requirements of 105 highly mobile users. User data traffic is bursty and the slotted ALOHA channel access scheme is therefore employed. This paper focuses on the determination of JTIDS system performance in this particular application. Emphasis is directed at the specific time-capture capability of JTIDS. Significant system performance parameters are quantified with analysis and simulation.

  3. Architecture Modeling and Performance Characterization of Space Communications and Navigation (SCaN) Network Using MACHETE

    NASA Technical Reports Server (NTRS)

    Jennings, Esther; Heckman, David

    2008-01-01

    As future space exploration missions will involve larger numbers of spacecraft and more complex systems, theoretical analysis alone may have limitations in characterizing system performance and interactions among the systems. Simulation tools can be useful for system performance characterization through detailed modeling and simulation of the systems and their environment...This paper reports the simulation of the Orion (Crew Exploration Vehicle) to the International Space Station (ISS) mission where Orion is launched by Ares into orbit on a 14-day mission to rendezvous with the ISS. Communications services for the mission are provided by the Space Communication and Navigation (SCaN) network infrastructure which includes the NASA Space Network (SN), Ground Network (GN) and NASA Integrated Services Network (NISN). The objectives of the simulation are to determine whether SCaN can meet the communications needs of the mission, to demonstrate the benefit of using QoS prioritization, and to evaluate key network parameters of interest such as delay and throughput.

  4. Changes in Brain Network Efficiency and Working Memory Performance in Aging

    PubMed Central

    Stanley, Matthew L.; Simpson, Sean L.; Dagenbach, Dale; Lyday, Robert G.; Burdette, Jonathan H.; Laurienti, Paul J.

    2015-01-01

    Working memory is a complex psychological construct referring to the temporary storage and active processing of information. We used functional connectivity brain network metrics quantifying local and global efficiency of information transfer for predicting individual variability in working memory performance on an n-back task in both young (n = 14) and older (n = 15) adults. Individual differences in both local and global efficiency during the working memory task were significant predictors of working memory performance in addition to age (and an interaction between age and global efficiency). Decreases in local efficiency during the working memory task were associated with better working memory performance in both age cohorts. In contrast, increases in global efficiency were associated with much better working memory performance for young participants; however, increases in global efficiency were associated with a slight decrease in working memory performance for older participants. Individual differences in local and global efficiency during resting-state sessions were not significant predictors of working memory performance. Significant group whole-brain functional network decreases in local efficiency also were observed during the working memory task compared to rest, whereas no significant differences were observed in network global efficiency. These results are discussed in relation to recently developed models of age-related differences in working memory. PMID:25875001
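
    Local and global efficiency can be computed with standard graph routines once a functional-connectivity matrix is thresholded into a graph. The sketch below uses NetworkX on a synthetic matrix with an arbitrary edge density; the matrix, ROI count, and threshold are illustrative stand-ins, not the study's fMRI data or preprocessing.

    ```python
    import numpy as np
    import networkx as nx

    def efficiency_from_fc(fc_matrix, density=0.15):
        """Threshold a connectivity matrix to a binary graph at a given edge
        density, then return (global efficiency, mean local efficiency)."""
        n = fc_matrix.shape[0]
        np.fill_diagonal(fc_matrix, 0.0)
        triu = fc_matrix[np.triu_indices(n, k=1)]
        cutoff = np.quantile(triu, 1.0 - density)      # keep the strongest edges
        G = nx.from_numpy_array((fc_matrix >= cutoff).astype(int))
        return nx.global_efficiency(G), nx.local_efficiency(G)

    rng = np.random.default_rng(0)
    sim = rng.uniform(0, 1, size=(90, 90))             # stand-in 90-ROI matrix
    fc = (sim + sim.T) / 2                             # make it symmetric
    print(efficiency_from_fc(fc))
    ```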

  5. Co-scheduling of network resource provisioning and host-to-host bandwidth reservation on high-performance network and storage systems

    DOEpatents

    Yu, Dantong; Katramatos, Dimitrios; Sim, Alexander; Shoshani, Arie

    2014-04-22

    A cross-domain network resource reservation scheduler configured to schedule a path from at least one end-site includes a management plane device configured to monitor and provide information representing at least one of functionality, performance, faults, and fault recovery associated with a network resource; a control plane device configured to at least one of schedule the network resource, provision local area network quality of service, provision local area network bandwidth, and provision wide area network bandwidth; and a service plane device configured to interface with the control plane device to reserve the network resource based on a reservation request and the information from the management plane device. Corresponding methods and computer-readable medium are also disclosed.

  6. A three-dimensional carbon nano-network for high performance lithium ion batteries

    DOE PAGESBeta

    Tian, Miao; Wang, Wei; Liu, Yang; Jungjohann, Katherine L.; Thomas Harris, C.; Lee, Yung -Cheng; Yang, Ronggui

    2014-11-20

    Three-dimensional (3D) network structure has been envisioned as a superior architecture for lithium ion battery (LIB) electrodes, which enhances both ion and electron transport to significantly improve battery performance. Herein, a 3D carbon nano-network is fabricated through chemical vapor deposition of carbon on a scalably manufactured 3D porous anodic alumina (PAA) template. As a demonstration on the applicability of 3D carbon nano-network for LIB electrodes, the low conductivity active material, TiO2, is then uniformly coated on the 3D carbon nano-network using atomic layer deposition. High power performance is demonstrated in the 3D C/TiO2 electrodes, where the parallel tubes and gaps in the 3D carbon nano-network facilitate fast Li ion transport. A large areal capacity of ~0.37 mAh·cm–2 is achieved due to the large TiO2 mass loading in the 60 µm-thick 3D C/TiO2 electrodes. At a test rate of C/5, the 3D C/TiO2 electrode with 18 nm-thick TiO2 delivers a high gravimetric capacity of ~240 mAh g–1, calculated with the mass of the whole electrode. A long cycle life of over 1000 cycles with a capacity retention of 91% is demonstrated at 1C. In this study, the effects of the electrical conductivity of carbon nano-network, ion diffusion, and the electrolyte permeability on the rate performance of these 3D C/TiO2 electrodes are systematically studied.

  7. A three-dimensional carbon nano-network for high performance lithium ion batteries

    SciTech Connect

    Tian, Miao; Wang, Wei; Liu, Yang; Jungjohann, Katherine L.; Thomas Harris, C.; Lee, Yung -Cheng; Yang, Ronggui

    2014-11-20

    Three-dimensional (3D) network structure has been envisioned as a superior architecture for lithium ion battery (LIB) electrodes, which enhances both ion and electron transport to significantly improve battery performance. Herein, a 3D carbon nano-network is fabricated through chemical vapor deposition of carbon on a scalably manufactured 3D porous anodic alumina (PAA) template. As a demonstration on the applicability of 3D carbon nano-network for LIB electrodes, the low conductivity active material, TiO2, is then uniformly coated on the 3D carbon nano-network using atomic layer deposition. High power performance is demonstrated in the 3D C/TiO2 electrodes, where the parallel tubes and gaps in the 3D carbon nano-network facilitate fast Li ion transport. A large areal capacity of ~0.37 mAh·cm–2 is achieved due to the large TiO2 mass loading in the 60 µm-thick 3D C/TiO2 electrodes. At a test rate of C/5, the 3D C/TiO2 electrode with 18 nm-thick TiO2 delivers a high gravimetric capacity of ~240 mAh g–1, calculated with the mass of the whole electrode. A long cycle life of over 1000 cycles with a capacity retention of 91% is demonstrated at 1C. In this study, the effects of the electrical conductivity of carbon nano-network, ion diffusion, and the electrolyte permeability on the rate performance of these 3D C/TiO2 electrodes are systematically studied.

  8. Motor network structure and function are associated with motor performance in Huntington's disease.

    PubMed

    Müller, Hans-Peter; Gorges, Martin; Grön, Georg; Kassubek, Jan; Landwehrmeyer, G Bernhard; Süßmuth, Sigurd D; Wolf, Robert Christian; Orth, Michael

    2016-03-01

    In Huntington's disease, the relationship of brain structure, brain function and clinical measures remains incompletely understood. We asked how sensory-motor network brain structure and neural activity relate to each other and to motor performance. Thirty-four early stage HD and 32 age- and sex-matched healthy control participants underwent structural magnetic resonance imaging (MRI), diffusion tensor, and intrinsic functional connectivity MRI. Diffusivity patterns were assessed in the cortico-spinal tract and the thalamus-somatosensory cortex tract. For the motor network connectivity analyses the dominant M1 motor cortex region and for the basal ganglia-thalamic network the thalamus were used as seeds. Region to region structural and functional connectivity was examined between thalamus and somatosensory cortex. Fractional anisotropy (FA) was higher in HD than controls in the basal ganglia, and lower in the external and internal capsule, in the thalamus, and in subcortical white matter. Between-group axial and radial diffusivity differences were more prominent than differences in FA, and correlated with motor performance. Within the motor network, the insula was less connected in HD than in controls, with the degree of connection correlating with motor scores. The basal ganglia-thalamic network's connectivity differed in the insula and basal ganglia. Tract specific white matter diffusivity and functional connectivity were not correlated. In HD sensory-motor white matter organization and functional connectivity in a motor network were independently associated with motor performance. The lack of tract-specific association of structure and function suggests that functional adaptation to structural loss differs between participants. PMID:26762394

  9. A framework for performance measurement in university using extended network data envelopment analysis (DEA) structures

    NASA Astrophysics Data System (ADS)

    Kashim, Rosmaini; Kasim, Maznah Mat; Rahman, Rosshairy Abd

    2015-12-01

    Measuring university performance is essential for efficient allocation and utilization of educational resources. In most of the previous studies, performance measurement in universities emphasized the operational efficiency and resource utilization without investigating the university's ability to fulfill the needs of its stakeholders and society. Therefore, assessment of the performance of a university should be separated into two stages, namely efficiency and effectiveness. In conventional DEA analysis, a decision making unit (DMU), or in this context a university, is generally treated as a black-box which ignores the operation and interdependence of the internal processes. When this happens, the results obtained would be misleading. Thus, this paper suggests an alternative framework for measuring the overall performance of a university by incorporating both efficiency and effectiveness and applies a network DEA model. The network DEA models are recommended because this approach takes into account the interrelationship between the processes of efficiency and effectiveness in the system. This framework also focuses on the university structure, which is expanded from the hierarchical form to a series of horizontal relationships between subordinate units by assuming that both an intermediate unit and its subordinate units can generate output(s). Three conceptual models are proposed to evaluate the performance of a university. An efficiency model is developed at the first stage by using a hierarchical network model. It is followed by an effectiveness model which takes the output(s) from the hierarchical structure at the first stage as input(s) at the second stage. As a result, a new overall performance model is proposed by combining both efficiency and effectiveness models. Thus, once this overall model is realized and utilized, the university's top management can determine the overall performance of each unit more accurately and systematically. Besides that, the result from the network

  10. A Case Study of Performance Degradation Attributable to Run-Time Bounds Checks on C++ Vector Access

    PubMed Central

    Flater, David; Guthrie, William F

    2013-01-01

    Programmers routinely omit run-time safety checks from applications because they assume that these safety checks would degrade performance. The simplest example is the use of arrays or array-like data structures that do not enforce the constraint that indices must be within bounds. This report documents an attempt to measure the performance penalty incurred by two different implementations of bounds-checking in C and C++ using a simple benchmark and a desktop PC with a modern superscalar CPU. The benchmark consisted of a loop that wrote to array elements in sequential order. With this configuration, relative to the best performance observed for any access method in C or C++, mean degradation of only (0.881 ± 0.009) % was measured for a standard bounds-checking access method in C++. This case study showed the need for further work to develop and refine measurement methods and to perform more comparisons of this type. Comparisons across different use cases, configurations, programming languages, and environments are needed to determine under what circumstances (if any) the performance advantage of unchecked access is actually sufficient to outweigh the negative consequences for security and software quality. PMID:26401432

  11. Application of artificial neural network for prediction of marine diesel engine performance

    NASA Astrophysics Data System (ADS)

    Mohd Noor, C. W.; Mamat, R.; Najafi, G.; Nik, W. B. Wan; Fadhil, M.

    2015-12-01

    This study deals with artificial neural network (ANN) modelling of a marine diesel engine to predict the brake power, output torque, brake specific fuel consumption, brake thermal efficiency and volumetric efficiency. The input data for network training were gathered from engine laboratory tests run at various engine speeds. The prediction model was developed based on the standard back-propagation Levenberg-Marquardt training algorithm. The performance of the model was validated by comparing the prediction data sets with the measured experimental data. Results showed that the ANN model provided good agreement with the experimental data with high accuracy.
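
    A compact stand-in for the prediction model is shown below using scikit-learn's MLPRegressor. Note that it trains with Adam or L-BFGS rather than the Levenberg-Marquardt algorithm used in the paper, and the engine data are invented trends, so the sketch only illustrates the multi-output regression setup.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for dynamometer data: engine speed (rpm) mapped to five
    # outputs (brake power, torque, BSFC, thermal efficiency, volumetric efficiency).
    rng = np.random.default_rng(2)
    speed = rng.uniform(800, 2200, size=(120, 1))
    Y = np.column_stack([
        0.05 * speed[:, 0],            # invented brake-power trend
        400 - 0.05 * speed[:, 0],      # invented torque trend
        210 + 0.01 * speed[:, 0],      # invented BSFC trend
        35 + 1e-3 * speed[:, 0],       # invented thermal-efficiency trend (%)
        90 - 5e-3 * speed[:, 0],       # invented volumetric-efficiency trend (%)
    ]) + rng.normal(scale=0.5, size=(120, 5))

    X_tr, X_te, Y_tr, Y_te = train_test_split(speed, Y, test_size=0.2, random_state=0)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                       random_state=0))
    model.fit(X_tr, Y_tr)
    print("held-out R^2:", model.score(X_te, Y_te))
    ```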

  12. Static and dynamic posterior cingulate cortex nodal topology of default mode network predicts attention task performance.

    PubMed

    Lin, Pan; Yang, Yong; Jovicich, Jorge; De Pisapia, Nicola; Wang, Xiang; Zuo, Chun S; Levitt, James Jonathan

    2016-03-01

    Characterization of the default mode network (DMN) as a complex network of functionally interacting dynamic systems has received great interest for the study of DMN neural mechanisms. In particular, understanding the relationship of intrinsic resting-state DMN brain network with cognitive behaviors is an important issue in healthy cognition and mental disorders. However, it is still unclear how DMN functional connectivity links to cognitive behaviors during resting-state. In this study, we hypothesize that static and dynamic DMN nodal topology is associated with upcoming cognitive task performance. We used graph theory analysis in order to understand better the relationship between the DMN functional connectivity and cognitive behavior during resting-state and task performance. Nodal degree of the DMN was calculated as a metric of network topology. We found that the static and dynamic posterior cingulate cortex (PCC) nodal degree within the DMN was associated with task performance (Reaction Time). Our results show that the core node PCC nodal degree within the DMN was significantly correlated with reaction time, which suggests that the PCC plays a key role in supporting cognitive function. PMID:25904156
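
    The core analysis, relating one region's nodal degree to reaction time across subjects, reduces to thresholding each connectivity matrix, counting that node's edges, and correlating the counts with behavior. The sketch below does exactly that on synthetic data; the ROI count, threshold, and reaction times are made up, so no real association is expected in the output.

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    def nodal_degree(fc, node, threshold=0.3):
        """Nodal degree of one region (e.g. the PCC) in a thresholded DMN graph."""
        adj = (np.abs(fc) >= threshold).astype(int)
        np.fill_diagonal(adj, 0)
        return adj[node].sum()

    rng = np.random.default_rng(3)
    n_subjects, n_rois, pcc = 25, 12, 0        # ROI 0 stands in for the PCC
    reaction_times = rng.normal(450, 40, n_subjects)      # ms, synthetic
    degrees = []
    for _ in range(n_subjects):
        sim = rng.uniform(-1, 1, size=(n_rois, n_rois))
        fc = (sim + sim.T) / 2                 # symmetric stand-in connectivity
        degrees.append(nodal_degree(fc, pcc))

    r, p = pearsonr(degrees, reaction_times)
    print(f"degree-RT correlation r = {r:.2f}, p = {p:.3f}")
    ```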

  13. Improving the Unsteady Aerodynamic Performance of Transonic Turbines using Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri K.; Huber, Frank W.

    1999-01-01

    A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits are constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The procedure yielded a modified design that improves the aerodynamic performance through small changes to the reference design geometry. These results demonstrate the capabilities of the neural net-based design procedure, and also show the advantages of including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.

  14. Study of Synthetic Vision Systems (SVS) and Velocity-vector Based Command Augmentation System (V-CAS) on Pilot Performance

    NASA Technical Reports Server (NTRS)

    Liu, Dahai; Goodrich, Ken; Peak, Bob

    2006-01-01

    This study investigated the effects of synthetic vision system (SVS) concepts and advanced flight controls on single pilot performance (SPP). Specifically, we evaluated the benefits and interactions of two levels of terrain portrayal, guidance symbology, and control-system response type on SPP in the context of lower-landing minima (LLM) approaches. Performance measures consisted of flight technical error (FTE) and pilot perceived workload. In this study, pilot rating, control type, and guidance symbology were not found to significantly affect FTE or workload. It is likely that transfer from prior experience, limited scope of the evaluation task, specific implementation limitations, and limited sample size were major factors in obtaining these results.

  15. Performance Improvement of Induction Motor Speed Sensor-Less Vector Control System Using an Adaptive Observer with an Estimated Flux Feedback in Low Speed Range

    NASA Astrophysics Data System (ADS)

    Fukumoto, Tetsuya; Kato, Yousuke; Kurita, Kazuya; Hayashi, Yoichi

    Because of various errors caused by dead time, temperature variation of resistance and so on, speed estimation error is inevitable in speed sensor-less vector control methods for the induction motor. In particular, the speed control loop becomes unstable near zero frequency. In order to solve these problems, this paper proposes a novel design of an adaptive observer for the speed estimation. By adding a feedback loop on the error between the estimated and reference fluxes, the sensitivity of the current error signals for the speed estimation and the primary resistance identification is improved. The proposed system is analyzed and the appropriate feedback gains are derived. The experimental results showed good performance in the low speed range.

  16. Performance of twin two-dimensional wedge nozzles including thrust vectoring and reversing effects at speeds up to Mach 2.20

    NASA Technical Reports Server (NTRS)

    Capone, F. J.; Maiden, D. L.

    1977-01-01

    Transonic tunnel and supersonic pressure tunnel tests were performed to determine the performance characteristics of twin nonaxisymmetric or two-dimensional nozzles with fixed shrouds and variable-geometry wedges. The effects of thrust vectoring, reversing, and installation of various tails were also studied. The investigation was conducted statically and at flight speeds up to a Mach number of 2.20. The total pressure ratio of the simulated jet exhaust was varied up to approximately 26 depending on Mach number. The Reynolds number per meter varied up to 13.20 x 10(exp 6). An analytical study was made to determine the effect on calculated wave drag of varying the mathematical model used to simulate the nozzle jet-exhaust plume.

  17. Dual Arm Work Package performance estimates and telerobot task network simulation

    SciTech Connect

    Draper, J.V.; Blair, L.M.

    1997-02-01

    This paper describes the methodology and results of a network simulation study of the Dual Arm Work Package (DAWP), to be employed for dismantling the Argonne National Laboratory CP-5 reactor. The development of the simulation model was based upon the results of a task analysis for the same system. This study was performed by the Oak Ridge National Laboratory (ORNL), in the Robotics and Process Systems Division. Funding was provided by the US Department of Energy's Office of Technology Development, Robotics Technology Development Program (RTDP). The RTDP is developing methods of computer simulation to estimate telerobotic system performance. Data were collected to provide point estimates to be used in a task network simulation model. Three skilled operators performed six repetitions of a pipe cutting task representative of typical teleoperation cutting operations.

  18. Reconfigurable and adaptive photonic networks for high-performance computing systems.

    PubMed

    Kodi, Avinash; Louri, Ahmed

    2009-08-01

    As feature sizes decrease to the submicrometer regime and clock rates increase to the multigigahertz range, the limited bandwidth at higher bit rates and longer communication distances in electrical interconnects will create a major bandwidth imbalance in future high-performance computing (HPC) systems. We explore the application of an optoelectronic interconnect for the design of flexible, high-bandwidth, reconfigurable and adaptive interconnection architectures for chip-to-chip and board-to-board HPC systems. Reconfigurability is realized by interconnecting arrays of optical transmitters, and adaptivity is implemented by a dynamic bandwidth reallocation (DBR) technique that balances the load on each communication channel. We evaluate a DBR technique, the lockstep (LS) protocol, that monitors traffic intensities, reallocates bandwidth, and adapts to changes in communication patterns. We incorporate this DBR technique into a detailed discrete-event network simulator to evaluate the performance for uniform, nonuniform, and permutation communication patterns. Simulation results indicate that, without reconfiguration techniques being applied, optical based system architecture shows better performance than electrical interconnects for uniform and nonuniform patterns; with reconfiguration techniques being applied, the dynamically reconfigurable optoelectronic interconnect provides much better performance for all communication patterns. Based on the performance study, the reconfigured architecture shows 30%-50% increased throughput and 50%-75% reduced network latency compared with HPC electrical networks. PMID:19649024

  19. Personal and Network Dynamics in Performance of Knowledge Workers: A Study of Australian Breast Radiologists

    PubMed Central

    Tavakoli Taba, Seyedamir; Hossain, Liaquat; Heard, Robert; Brennan, Patrick; Lee, Warwick; Lewis, Sarah

    2016-01-01

    Materials and Methods In this paper, we propose a theoretical model based upon previous studies about personal and social network dynamics of job performance. We provide empirical support for this model using real-world data within the context of the Australian radiology profession. An examination of radiologists’ professional network topology through structural-positional and relational dimensions and radiologists’ personal characteristics in terms of knowledge, experience and self-esteem is provided. Thirty one breast imaging radiologists completed a purpose designed questionnaire regarding their network characteristics and personal attributes. These radiologists also independently read a test set of 60 mammographic cases: 20 cases with cancer and 40 normal cases. A Jackknife free response operating characteristic (JAFROC) method was used to measure the performance of the radiologists’ in detecting breast cancers. Results Correlational analyses showed that reader performance was positively correlated with the social network variables of degree centrality and effective size, but negatively correlated with constraint and hierarchy. For personal characteristics, the number of mammograms read per year and self-esteem (self-evaluation) positively correlated with reader performance. Hierarchical multiple regression analysis indicated that the combination of number of mammograms read per year and network’s effective size, hierarchy and tie strength was the best fitting model, explaining 63.4% of the variance in reader performance. The results from this study indicate the positive relationship between reading high volumes of cases by radiologists and expertise development, but also strongly emphasise the association between effective social/professional interactions and informal knowledge sharing with high performance. PMID:26918644

  20. Attentional performance is correlated with the local regional efficiency of intrinsic brain networks

    PubMed Central

    Xu, Junhai; Yin, Xuntao; Ge, Haitao; Han, Yan; Pang, Zengchang; Tang, Yuchun; Liu, Baolin; Liu, Shuwei

    2015-01-01

    Attention is a crucial brain function for human beings. Using neuropsychological paradigms and task-based functional brain imaging, previous studies have indicated that widely distributed brain regions are engaged in three distinct attention subsystems: alerting, orienting and executive control (EC). Here, we explored the potential contribution of spontaneous brain activity to attention by examining whether resting-state activity could account for individual differences of the attentional performance in normal individuals. The resting-state functional images and behavioral data from attention network test (ANT) task were collected in 59 healthy subjects. Graph analysis was conducted to obtain the characteristics of functional brain networks and linear regression analyses were used to explore their relationships with behavioral performances of the three attentional components. We found that there was no significant relationship between the attentional performance and the global measures, while the attentional performance was associated with specific local regional efficiency. These regions related to the scores of alerting, orienting and EC largely overlapped with the regions activated in previous task-related functional imaging studies, and were consistent with the intrinsic dorsal and ventral attention networks (DAN/VAN). In addition, the strong associations between the attentional performance and specific regional efficiency suggested that there was a possible relationship between the DAN/VAN and task performances in the ANT. We concluded that the intrinsic activity of the human brain could reflect the processing efficiency of the attention system. Our findings revealed a robust evidence for the functional significance of the efficiently organized intrinsic brain network for highly productive cognitions and the hypothesized role of the DAN/VAN at rest. PMID:26283939

  1. Predicting the performance of local seismic networks using Matlab and Google Earth.

    SciTech Connect

    Chael, Eric Paul

    2009-11-01

    We have used Matlab and Google Earth to construct a prototype application for modeling the performance of local seismic networks for monitoring small, contained explosions. Published equations based on refraction experiments provide estimates of peak ground velocities as a function of event distance and charge weight. Matlab routines implement these relations to calculate the amplitudes across a network of stations from sources distributed over a geographic grid. The amplitudes are then compared to ambient noise levels at the stations, and scaled to determine the smallest yield that could be detected at each source location by a specified minimum number of stations. We use Google Earth as the primary user interface, both for positioning the stations of a hypothetical local network, and for displaying the resulting detection threshold contours.
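
    The thresholding logic can be sketched as follows: for each grid source, predict peak velocities at the stations from a distance-and-yield scaling law, and report the smallest yield seen above noise at the required number of stations. The power-law coefficients, station layout, noise levels, and units below are placeholders for the published refraction-based relations used in the report, and the sketch is in Python rather than the report's Matlab.

    ```python
    import numpy as np

    def min_detectable_yield(source, stations, noise, yields,
                             min_stations=3, k=1.0, a=0.8, b=-1.5):
        """Smallest yield whose predicted peak velocity exceeds station noise at
        `min_stations` or more stations.  v = k * W**a * r**b is a placeholder
        amplitude model, not the published relation."""
        for W in yields:                                   # ascending charge weights
            r = np.linalg.norm(stations - source, axis=1)  # source-station distances
            amps = k * W ** a * r ** b
            if (amps > noise).sum() >= min_stations:
                return W
        return np.inf

    stations = np.array([[0, 0], [10, 0], [0, 10], [10, 10.]])   # km grid
    noise = np.array([0.01, 0.02, 0.01, 0.03])                   # ambient levels
    yields = np.logspace(-2, 2, 50)                              # trial yields
    grid = [np.array([x, y]) for x in range(1, 10, 2) for y in range(1, 10, 2)]
    thresholds = [min_detectable_yield(g, stations, noise, yields) for g in grid]
    print(np.round(thresholds, 3))     # detection-threshold map over the source grid
    ```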

  2. Performance evaluation of differentiated-resilience provisioning scheme for the GMPLS/ASON networks

    NASA Astrophysics Data System (ADS)

    Tan, Zhi; Cao, Hongyu

    2008-11-01

    The future trend is to integrate a constantly increasing number of services in the GMPLS/ASON networks. Some services have very high resilience requirements, while other services have lower ones. This scenario calls for frameworks capable of provisioning for multiple services in a cost-efficient manner. This article proposes a differentiated-resilience provisioning scheme applied to the GMPLS/ASON networks, which are expected to be the near- and long-term network technology thanks, among other things, to the great bandwidth capacity offered by optical devices. Finally, a critical evaluation of the state-of-the-art and future challenges facing operators and designers is given. Our numerical results show that the differentiated-resilience scenario performs better than both dedicated and shared protection: the number of accepted connections under the differentiated-resilience scheme is 31% higher than with shared protection and 60% higher than with dedicated protection.

  3. Implementation and performance evaluation of a network-attached CD image file server

    NASA Astrophysics Data System (ADS)

    Wu, Jinglian; Dong, Yonggui; Sun, Zhaoyan; Jia, Huibo

    2002-09-01

    A network-attached CD Image file server, running on the Linux operating system, is implemented. By taking advantage of the virtual file system (VFS) infrastructure and the loopback device, CD data are mirrored onto hard disks and can be shared by clients simultaneously over the network. The primary benefits of such a server are cost effectiveness, high capacity and excellent compatibility with Chinese characters. The performance of the server is evaluated by testing its throughput during I/O requests. The experimental results show that, compared with conventional methods such as sharing CD-ROM drives over the network, the rate of reading data from the CD Image is much higher. This is especially true when the server is dealing with multi-client access.

  4. A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2001-01-01

    In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.

  5. Data compression using artificial neural networks

    SciTech Connect

    Watkins, B.E.

    1991-09-01

    This thesis investigates the application of artificial neural networks for the compression of image data. An algorithm is developed using the competitive learning paradigm which takes advantage of the parallel processing and classification capability of neural networks to produce an efficient implementation of vector quantization. Multi-Stage, tree searched, and classification vector quantization codebook design are adapted to the neural network design to reduce the computational cost and hardware requirements. The results show that the new algorithm provides a substantial reduction in computational costs and an improvement in performance.
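
    The basic competitive-learning update behind the codebook design is simple: for each training block, move only the winning code vector toward it. The sketch below shows this plain variant with synthetic image blocks; the multi-stage, tree-searched, and classification refinements described in the thesis are omitted, and the codebook size and learning-rate schedule are arbitrary.

    ```python
    import numpy as np

    def train_codebook(vectors, n_codes=16, epochs=20, lr=0.1, seed=0):
        """Plain competitive learning: only the winning code vector is updated."""
        rng = np.random.default_rng(seed)
        codebook = vectors[rng.choice(len(vectors), n_codes, replace=False)].copy()
        for _ in range(epochs):
            for v in vectors[rng.permutation(len(vectors))]:
                winner = ((codebook - v) ** 2).sum(axis=1).argmin()
                codebook[winner] += lr * (v - codebook[winner])
            lr *= 0.9                                   # cool the learning rate
        return codebook

    rng = np.random.default_rng(1)
    blocks = rng.random((1000, 16))                     # 4x4 image blocks, flattened
    cb = train_codebook(blocks)
    print(cb.shape)                                     # (16, 16) trained codebook
    ```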

  6. Performance Analysis of DPSK-OCDMA System for Optical Access Network

    NASA Astrophysics Data System (ADS)

    Islam, Monirul; Ahmed, N.; Aljunid, S. A.; Ali, Sharafat; Sayeed, S.; Sabri, Naseer

    2016-03-01

    In this research, the performance of optical code division multiple access (OCDMA) using differential phase shift keying (DPSK) has been compared with OCDMA using on-off keying (OOK). The comparison was made in terms of bit error rate (BER) and received power, with two bit rates (155 Mbps and 622 Mbps) used for the analysis. Using OptiSystem 7.0 simulations, eye diagrams and optical spectra were compared alongside the BER and received power. It is found that OCDMA-DPSK performs better than OCDMA-OOK. The performance analysis also provides parameters for the design and development of an OCDMA system for optical access networks using DPSK.

  7. A New Approach in Advance Network Reservation and Provisioning for High-Performance Scientific Data Transfers

    SciTech Connect

    Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex

    2010-01-28

    Scientific applications already generate many terabytes and even petabytes of data from supercomputer runs and large-scale experiments. The need for transferring data chunks of ever-increasing sizes through the network shows no sign of abating. Hence, we need high-bandwidth, high-speed networks such as ESnet (Energy Sciences Network). Network reservation systems, i.e. ESnet's OSCARS (On-demand Secure Circuits and Advance Reservation System), establish secure virtual circuits with guaranteed bandwidth at a certain time, for a certain bandwidth and length of time. OSCARS checks network availability and capacity for the specified period of time, and allocates the requested bandwidth for that user if it is available. If the requested reservation cannot be granted, no further suggestion is returned to the user. Further, there is no possibility from the user's viewpoint to make an optimal choice. We report a new algorithm, where the user specifies the total volume that needs to be transferred, a maximum bandwidth that he/she can use, and a desired time period within which the transfer should be done. The algorithm can find alternate allocation possibilities, including earliest time for completion, or shortest transfer duration - leaving the choice to the user. We present a novel approach for path finding in time-dependent networks, and a new polynomial algorithm to find possible reservation options according to given constraints. We have implemented our algorithm for testing and incorporation into a future version of ESnet's OSCARS. Our approach provides a basis for provisioning end-to-end high-performance data transfers over storage and network resources.
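
    The flavor of the reservation search, finding windows that can carry a requested volume subject to a bandwidth cap, can be illustrated on a single path whose free bandwidth is known per time slot. This toy scan is not the paper's polynomial time-dependent path-finding algorithm; the timeline, units, and request values are invented.

    ```python
    def reservation_options(avail, volume, max_bw, slot=1.0):
        """Scan a path's free-bandwidth timeline (one value per slot) and list the
        feasible windows (start_slot, n_slots, bandwidth) able to move `volume`."""
        options = []
        n = len(avail)
        for start in range(n):
            for end in range(start + 1, n + 1):
                bw = min(min(avail[start:end]), max_bw)
                if bw * slot * (end - start) >= volume:
                    options.append((start, end - start, bw))
                    break          # longer windows from this start are redundant
        return options

    # Free bandwidth (arbitrary units) on the circuit for eight consecutive slots.
    timeline = [2, 1, 4, 4, 4, 1, 3, 3]
    opts = reservation_options(timeline, volume=8.0, max_bw=5.0)
    earliest = min(opts, key=lambda o: o[0] + o[1])   # earliest completion time
    shortest = min(opts, key=lambda o: o[1])          # shortest transfer duration
    print(earliest, shortest)
    ```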

  8. Designing optimal greenhouse gas observing networks that consider performance and cost

    DOE PAGESBeta

    Lucas, D. D.; Yver Kwok, C.; Cameron-Smith, P.; Graven, H.; Bergmann, D.; Guilderson, T. P.; Weiss, R.; Keeling, R.

    2014-12-23

    Emission rates of greenhouse gases (GHGs) entering into the atmosphere can be inferred using mathematical inverse approaches that combine observations from a network of stations with forward atmospheric transport models. Some locations for collecting observations are better than others for constraining GHG emissions through the inversion, but the best locations for the inversion may be inaccessible or limited by economic and other non-scientific factors. We present a method to design an optimal GHG observing network in the presence of multiple objectives that may be in conflict with each other. As a demonstration, we use our method to design a prototype network of six stations to monitor summertime emissions in California of the potent GHG 1,1,1,2-tetrafluoroethane (CH2FCF3, HFC-134a). We use a multiobjective genetic algorithm to evolve network configurations that seek to jointly maximize the scientific accuracy of the inferred HFC-134a emissions and minimize the associated costs of making the measurements. The genetic algorithm effectively determines a set of "optimal" observing networks for HFC-134a that satisfy both objectives (i.e., the Pareto frontier). The Pareto frontier is convex, and clearly shows the tradeoffs between performance and cost, and the diminishing returns in trading one for the other. Without difficulty, our method can be extended to design optimal networks to monitor two or more GHGs with different emissions patterns, or to incorporate other objectives and constraints that are important in the practical design of atmospheric monitoring networks.
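
    For illustration only, the snippet below scores a set of hypothetical candidate station networks on two made-up objectives and extracts the Pareto frontier; in the paper the objectives come from atmospheric inversions and a cost model, and the search is performed by a multiobjective genetic algorithm rather than by exhaustive filtering.

    ```python
    import random

    def pareto_front(candidates):
        """Keep only designs not dominated in both objectives (maximize accuracy, minimize cost)."""
        front = []
        for c in candidates:
            dominated = any(
                (o['accuracy'] > c['accuracy'] and o['cost'] <= c['cost']) or
                (o['accuracy'] >= c['accuracy'] and o['cost'] < c['cost'])
                for o in candidates)
            if not dominated:
                front.append(c)
        return front

    random.seed(0)
    candidates = [{'stations': sorted(random.sample(range(30), 6)),   # six sites out of 30 (hypothetical)
                   'accuracy': random.random(),                       # stand-in for inversion skill
                   'cost': random.uniform(1.0, 10.0)}                 # stand-in for measurement cost
                  for _ in range(200)]

    for c in sorted(pareto_front(candidates), key=lambda c: c['cost']):
        print(c['stations'], round(c['accuracy'], 2), round(c['cost'], 2))
    ```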

  9. Designing optimal greenhouse gas observing networks that consider performance and cost

    DOE PAGESBeta

    Lucas, D. D.; Yver Kwok, C.; Cameron-Smith, P.; Graven, H.; Bergmann, D.; Guilderson, T. P.; Weiss, R.; Keeling, R.

    2015-06-16

    Emission rates of greenhouse gases (GHGs) entering into the atmosphere can be inferred using mathematical inverse approaches that combine observations from a network of stations with forward atmospheric transport models. Some locations for collecting observations are better than others for constraining GHG emissions through the inversion, but the best locations for the inversion may be inaccessible or limited by economic and other non-scientific factors. We present a method to design an optimal GHG observing network in the presence of multiple objectives that may be in conflict with each other. As a demonstration, we use our method to design a prototype network of six stations to monitor summertime emissions in California of the potent GHG 1,1,1,2-tetrafluoroethane (CH2FCF3, HFC-134a). We use a multiobjective genetic algorithm to evolve network configurations that seek to jointly maximize the scientific accuracy of the inferred HFC-134a emissions and minimize the associated costs of making the measurements. The genetic algorithm effectively determines a set of "optimal" observing networks for HFC-134a that satisfy both objectives (i.e., the Pareto frontier). The Pareto frontier is convex, and clearly shows the tradeoffs between performance and cost, and the diminishing returns in trading one for the other. Without difficulty, our method can be extended to design optimal networks to monitor two or more GHGs with different emissions patterns, or to incorporate other objectives and constraints that are important in the practical design of atmospheric monitoring networks.

  10. Performance evaluation of an importance sampling technique in a Jackson network

    NASA Astrophysics Data System (ADS)

    Mahdipour, Ebrahim; Masoud Rahmani, Amir; Setayeshi, Saeed

    2014-03-01

    Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network settings. Estimating probabilities associated with rare events has been a topic of great importance in queueing theory, and in applied probability at large. In this article, we analyse the performance of an importance sampling estimator for a rare event probability in a Jackson network. This article applies strict deadlines to a two-node Jackson network with feedback whose arrival and service rates are modulated by an exogenous finite-state Markov process. We have estimated the probability of network blocking for various sets of parameters, and also the probability of missing the deadline of customers for different loads and deadlines. We have finally shown that the probability of total population overflow may be affected by various deadline values, service rates and arrival rates.

  11. ER-TCP (Exponential Recovery-TCP): High-Performance TCP for Satellite Networks

    NASA Astrophysics Data System (ADS)

    Park, Mankyu; Shin, Minsu; Oh, Deockgil; Ahn, Doseob; Kim, Byungchul; Lee, Jaeyong

    A transmission control protocol (TCP) using an additive increase multiplicative decrease (AIMD) algorithm for congestion control plays a leading role in advanced Internet services. However, the AIMD method shows only low link utilization in lossy networks with long delay, such as satellite networks. This is because the cwnd dynamics of TCP are slowed by the long propagation delay, and TCP uses an inadequate congestion control algorithm, which does not distinguish packet loss caused by wireless errors from loss caused by congestion of the wireless network. To overcome these problems, we propose an exponential recovery (ER) TCP that uses an exponential recovery function to rapidly occupy available bandwidth during the congestion avoidance period, and an adaptive congestion window decrease scheme using timestamp-based available bandwidth estimation (TABE) to cope with wireless channel errors. We simulate the proposed ER-TCP under various test scenarios using the ns-2 network simulator to verify its performance enhancement. Simulation results show that the proposed ER-TCP is better suited than several existing TCP variants to the long-delay, heavy-loss environment of satellite networks.

  12. Optimizing the ASC WAN: evaluating network performance tools for comparing transport protocols.

    SciTech Connect

    Lydick, Christopher L.

    2007-07-01

    The Advanced Simulation & Computing Wide Area Network (ASC WAN), which is a high delay-bandwidth network connection between US Department of Energy National Laboratories, is constantly being examined and evaluated for efficiency. One of the current transport-layer protocols which is used, TCP, was developed for traffic demands which are different from those on the ASC WAN. The Stream Control Transmission Protocol (SCTP), on the other hand, has shown characteristics which make it more appealing to networks such as these. Most importantly, before considering a replacement for TCP on any network, a testing tool that performs well against certain criteria needs to be found. In order to find such a tool, two popular networking tools (Netperf v.2.4.3 & v.2.4.6 (OpenSS7 STREAMS), and Iperf v.2.0.6) were tested. These tools implement both TCP and SCTP and were evaluated using four metrics: (1) How effectively can the tool reach a throughput near the bandwidth? (2) How much of the CPU does the tool utilize during operation? (3) Is the tool freely and widely available? And (4) Is the tool actively developed? Following the analysis of those tools, this paper goes further into explaining some recommendations and ideas for future work.

  13. A Workflow-based Intelligent Network Data Movement Advisor with End-to-end Performance Optimization

    SciTech Connect

    Zhu, Michelle M.; Wu, Chase Q.

    2013-11-07

    Next-generation eScience applications often generate large amounts of simulation, experimental, or observational data that must be shared and managed by collaborative organizations. Advanced networking technologies and services have been rapidly developed and deployed to facilitate such massive data transfer. However, these technologies and services have not been fully utilized, mainly because their use typically requires significant domain knowledge and in many cases application users are not even aware of their existence. By leveraging the functionalities of an existing Network-Aware Data Movement Advisor (NADMA) utility, we propose a new Workflow-based Intelligent Network Data Movement Advisor (WINDMA) with end-to-end performance optimization for this DOE-funded project. This WINDMA system integrates three major components: resource discovery, data movement, and status monitoring, and supports the sharing of common data movement workflows through account and database management. This system provides a web interface and interacts with existing data/space management and discovery services such as Storage Resource Management, transport methods such as GridFTP and GlobusOnline, and network resource provisioning brokers such as ION and OSCARS. We demonstrate the efficacy of the proposed transport-support workflow system in several use cases based on its implementation and deployment in DOE wide-area networks.

  14. Performance-Based Adaptive Fuzzy Tracking Control for Networked Industrial Processes.

    PubMed

    Wang, Tong; Qiu, Jianbin; Yin, Shen; Gao, Huijun; Fan, Jialu; Chai, Tianyou

    2016-08-01

    In this paper, the performance-based control design problem for double-layer networked industrial processes is investigated. At the device layer, prescribed performance functions are first given to describe the output tracking performance, and then, by using the backstepping technique, new adaptive fuzzy controllers are designed to guarantee the tracking performance under the effects of input dead-zone and the constraint of the prescribed tracking performance functions. At the operation layer, by considering the stochastic disturbance, actual index value, target index value, and index prediction simultaneously, an adaptive inverse optimal controller in discrete-time form is designed to optimize the overall performance and stabilize the overall nonlinear system. Finally, a simulation example of a continuous stirred tank reactor system is presented to show the effectiveness of the proposed control method. PMID:27168605

  15. A high performance long-reach passive optical network with a novel excess bandwidth distribution scheme

    NASA Astrophysics Data System (ADS)

    Chao, I.-Fen; Zhang, Tsung-Min

    2015-06-01

    Long-reach passive optical networks (LR-PONs) have been considered to be promising solutions for future access networks. In this paper, we propose a distributed medium access control (MAC) scheme over an advantageous LR-PON network architecture that reroutes the control information from and back to all ONUs through an (N + 1) × (N + 1) star coupler (SC) deployed near the ONUs, thereby overcoming the extremely long propagation delay problem in LR-PONs. In the network, the control slot is designed to contain all bandwidth requirements of all ONUs and is in-band time-division-multiplexed with a number of data slots within a cycle. In the proposed MAC scheme, a novel profit-weight-based dynamic bandwidth allocation (P-DBA) scheme is presented. The algorithm is designed to efficiently and fairly distribute the amount of excess bandwidth based on a profit value derived from the excess bandwidth usage of each ONU, which resolves the problems of previously reported DBA schemes that are either unfair or inefficient. The simulation results show that the proposed decentralized algorithms exhibit a nearly three-order-of-magnitude improvement in delay performance compared to the centralized algorithms over LR-PONs. Moreover, the newly proposed P-DBA scheme guarantees low delay and fairness even when under attack by a malevolent ONU, irrespective of traffic load and burstiness.

  16. GPU Accelerated Vector Median Filter

    NASA Technical Reports Server (NTRS)

    Aras, Rifat; Shen, Yuzhong

    2011-01-01

    Noise reduction is an important step for most image processing tasks. For three-channel color images, a widely used technique is the vector median filter, in which the color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive; for a window size of n x n, each of the n^2 vectors has to be compared with the other n^2 - 1 vectors in terms of distance. General purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which, to the best of our knowledge, has never been done before. The performance of the GPU-accelerated vector median filter is compared to that of the CPU and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x performance improvement of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimization of the GPU algorithm.
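
    A naive CPU reference of the vector median filter makes the brute-force cost explicit (for a k x k window, roughly k^4 distance terms per pixel); this is illustrative only and is not the CUDA kernel developed in the work above.

    ```python
    import numpy as np

    def vector_median_filter(img, k=3):
        """Vector median filter for an H x W x 3 color image (CPU reference).

        For each pixel, the output is the window pixel whose summed Euclidean
        distance to all other pixels in the k x k window is smallest.
        """
        r = k // 2
        padded = np.pad(img.astype(float), ((r, r), (r, r), (0, 0)), mode='edge')
        out = np.empty(img.shape, dtype=float)
        H, W = img.shape[:2]
        for i in range(H):
            for j in range(W):
                win = padded[i:i + k, j:j + k].reshape(-1, 3)                       # k*k color vectors
                d = np.sqrt(((win[:, None, :] - win[None, :, :]) ** 2).sum(-1)).sum(1)
                out[i, j] = win[np.argmin(d)]                                       # vector median
        return out.astype(img.dtype)
    ```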

  17. Support Vector Regression Algorithms in the Forecasting of Daily Maximums of Tropospheric Ozone Concentration in Madrid

    NASA Astrophysics Data System (ADS)

    Ortiz-García, E. G.; Salcedo-Sanz, S.; Pérez-Bellido, A. M.; Gascón-Moreno, J.; Portilla-Figueras, A.

    In this paper we present the application of a support vector regression algorithm to a real problem of forecasting the maximum daily tropospheric ozone. The proposed support vector regression approach is hybridized with a heuristic for optimal selection of hyper-parameters. The prediction of maximum daily ozone is carried out at all the stations of the air quality monitoring network of Madrid. In the paper we analyze how the ozone prediction depends on meteorological variables such as solar radiation and temperature, and we also perform a comparison against the results obtained using a multi-layer perceptron neural network on the same prediction problem.
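
    A hedged sketch of the general approach, using scikit-learn's SVR with a plain grid search standing in for the paper's hyper-parameter heuristic (the predictor and target arrays below are placeholders, not the Madrid network data):

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # X: meteorological predictors (e.g. solar radiation, temperature); y: next-day O3 maximum.
    rng = np.random.default_rng(0)
    X, y = rng.random((500, 4)), rng.random(500)      # placeholder data

    search = GridSearchCV(
        make_pipeline(StandardScaler(), SVR(kernel='rbf')),
        param_grid={'svr__C': [1, 10, 100],
                    'svr__gamma': [0.01, 0.1, 1.0],
                    'svr__epsilon': [0.01, 0.1]},
        cv=5, scoring='neg_mean_absolute_error')
    search.fit(X, y)
    print(search.best_params_, -search.best_score_)
    ```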

  18. Evaluating the performances of statistical and neural network based control charts

    NASA Astrophysics Data System (ADS)

    Teoh, Kok Ban; Ong, Hong Choon

    2015-10-01

    Control charts are used widely in many fields, and the traditional control chart is no longer adequate for detecting a sudden change in a particular process. Run rules are therefore built into the Shewhart X-bar control chart, while the Exponentially Weighted Moving Average (EWMA) control chart, the Cumulative Sum (CUSUM) control chart and the neural network based control chart are introduced to overcome the limitation regarding the sensitivity of the traditional control chart. In this study, the average run length (ARL) and median run length (MRL) for shifts in the process mean are computed for the control charts mentioned. We show that interpretations based only on the ARL can be misleading; thus, the MRL is also used to evaluate the performances of the control charts. From this study, the neural network based control chart is found to possess a better performance than the run rules of the Shewhart X-bar control chart, the EWMA and the CUSUM control charts.

  19. A Dynamic Network Model to Explain the Development of Excellent Human Performance.

    PubMed

    Den Hartigh, Ruud J R; Van Dijk, Marijn W G; Steenbeek, Henderien W; Van Geert, Paul L C

    2016-01-01

    Across different domains, from sports to science, some individuals accomplish excellent levels of performance. For over 150 years, researchers have debated the roles of specific nature and nurture components to develop excellence. In this article, we argue that the key to excellence does not reside in specific underlying components, but rather in the ongoing interactions among the components. We propose that excellence emerges out of dynamic networks consisting of idiosyncratic mixtures of interacting components such as genetic endowment, motivation, practice, and coaching. Using computer simulations we demonstrate that the dynamic network model accurately predicts typical properties of excellence reported in the literature, such as the idiosyncratic developmental trajectories leading to excellence and the highly skewed distributions of productivity present in virtually any achievement domain. Based on this novel theoretical perspective on excellent human performance, this article concludes by suggesting policy implications and directions for future research. PMID:27148140

  20. Unveiling the BitTorrent Performance in Mobile WiMAX Networks

    NASA Astrophysics Data System (ADS)

    Wang, Xiaofei; Kim, Seungbae; Kwon, Ted "Taekyoung"; Kim, Hyun-Chul; Choi, Yanghee

    As mobile Internet environments are becoming widespread, how to revamp peer-to-peer (P2P) operations for mobile hosts is gaining more attention. In this paper, we carry out empirical measurement of BitTorrent users in a commercial WiMAX network. We investigate how handovers in WiMAX networks impact the BitTorrent performance, how BitTorrent peers perform from the aspects of connectivity, stability and capability, and how the BitTorrent protocol behaves depending on user mobility. We observe that the drawbacks of BitTorrent for mobile users are characterized by poor connectivity among peers, short download session times, small download throughput, negligible upload contributions, and high signaling overhead.

  1. A Dynamic Network Model to Explain the Development of Excellent Human Performance

    PubMed Central

    Den Hartigh, Ruud J. R.; Van Dijk, Marijn W. G.; Steenbeek, Henderien W.; Van Geert, Paul L. C.

    2016-01-01

    Across different domains, from sports to science, some individuals accomplish excellent levels of performance. For over 150 years, researchers have debated the roles of specific nature and nurture components to develop excellence. In this article, we argue that the key to excellence does not reside in specific underlying components, but rather in the ongoing interactions among the components. We propose that excellence emerges out of dynamic networks consisting of idiosyncratic mixtures of interacting components such as genetic endowment, motivation, practice, and coaching. Using computer simulations we demonstrate that the dynamic network model accurately predicts typical properties of excellence reported in the literature, such as the idiosyncratic developmental trajectories leading to excellence and the highly skewed distributions of productivity present in virtually any achievement domain. Based on this novel theoretical perspective on excellent human performance, this article concludes by suggesting policy implications and directions for future research. PMID:27148140

  2. Automated image segmentation using support vector machines

    NASA Astrophysics Data System (ADS)

    Powell, Stephanie; Magnotta, Vincent A.; Andreasen, Nancy C.

    2007-03-01

    Neurodegenerative and neurodevelopmental diseases demonstrate problems associated with brain maturation and aging. Automated methods to delineate brain structures of interest are required to analyze large amounts of imaging data like that being collected in several ongoing multi-center studies. We have previously reported on using artificial neural networks (ANN) to define subcortical brain structures including the thalamus (0.88), caudate (0.85) and the putamen (0.81). In this work, a priori probability information was generated using Thirion's demons registration algorithm. The input vector consisted of the a priori probability, spherical coordinates, and an iris of surrounding signal intensity values. We have applied the support vector machine (SVM) machine learning algorithm to automatically segment subcortical and cerebellar regions using the same input vector information. The SVM architecture was derived from the ANN framework. Training was completed using a radial-basis function kernel with gamma equal to 5.5. Training was performed using 15,000 vectors collected from 15 training images in approximately 10 minutes. The resulting support vectors were applied to delineate 10 images not part of the training set. The relative overlap calculated for the subcortical structures was 0.87 for the thalamus, 0.84 for the caudate, 0.84 for the putamen, and 0.72 for the hippocampus. Relative overlap for the cerebellar lobes ranged from 0.76 to 0.86. The reliability of the SVM-based algorithm was similar to the inter-rater reliability between manual raters and can be achieved without rater intervention.
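
    For orientation, a minimal scikit-learn sketch of training an RBF-kernel SVM voxel classifier with the gamma value quoted above; the feature vectors and labels are placeholders rather than the study's a priori probability, coordinate and intensity features.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_train = rng.random((2000, 20))               # small placeholder set (the study used 15,000 vectors)
    y_train = rng.integers(0, 2, 2000)             # 1 = inside the structure, 0 = background

    clf = SVC(kernel='rbf', gamma=5.5)             # gamma = 5.5, as reported in the abstract
    clf.fit(X_train, y_train)

    X_new = rng.random((1000, 20))                 # voxels of an unseen image
    labels = clf.predict(X_new)                    # voxel-wise delineation
    ```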

  3. Data Analysis for the SOLIS Vector Spectromagnetograph

    NASA Technical Reports Server (NTRS)

    Jones, Harrison P.; Harvey, John W.; Oegerle, William (Technical Monitor)

    2002-01-01

    The National Solar Observatory's SOLIS Vector Spectromagnetograph (VSM), which will produce three or more full-disk maps of the Sun's photospheric vector magnetic field every day for at least one solar magnetic cycle, is in the final stages of assembly. Initial observations, including cross-calibration with the current NASA/NSO spectromagnetograph (SPM), will soon be carried out at a test site in Tucson. This paper discusses data analysis techniques for reducing the raw data, calculation of line-of-sight magnetograms, and both quick-look and high-precision inference of vector fields from Stokes spectral profiles. Existing SPM algorithms, suitably modified to accommodate the cameras, scanning pattern, and polarization calibration optics of the VSM, will be used to "clean" the raw data and to process line-of-sight magnetograms. A recent version of the High Altitude Observatory Milne-Eddington (HAO-ME) inversion code (Skumanich and Lites, 1987, ApJ 322, p. 473) will be used for high-precision vector fields, since the algorithm has been extensively tested, is well understood, and is fast enough to complete data analysis within 24 hours of data acquisition. The simplified inversion algorithm of Auer, Heasley, and House (1977, Sol. Phys. 55, p. 47) forms the initial guess for this version of the HAO-ME code and will be used for quick-look vector analysis of VSM data, since its performance on simulated Stokes profiles is better than that of other candidate methods. Improvements (e.g., principal components analysis or neural networks) are under consideration and will be straightforward to implement. However, current resources are sufficient to store the original Stokes profiles only long enough for high-precision analysis. Retrospective reduction of Stokes data with improved methods will not be possible, and modifications will only be introduced when the advantages of doing so are compelling enough to justify a discontinuity in the long-term data stream.

  4. Performance of the GLOBALink/HF Network during the Halloween Storm Period of 2003

    NASA Astrophysics Data System (ADS)

    Goodman, J. M.; Patterson, J. D.

    2004-12-01

    The GLOBALink/HF system, developed and managed by ARINC, is a global high frequency data link communications network providing service to commercial aviation worldwide. It consists of 14 ground stations located around the globe, and a network control center located in Annapolis. The system was designed to provide reliable aircraft communications through the use of multi-station accessibility, quasi-dynamic frequency management, and a robust time-diversity modem with equalization. Although HF (i.e., 3-30 MHz) signaling has a poor reputation when considering individual circuits, it has been shown that near-real-time channel evaluation and/or adaptive frequency management can improve performance considerably. Moreover, multi-station network operation provides an additional form of diversity, which is probably the most valuable design strategy. Our paper briefly describes the system, but the major discussion will be about performance metrics derived during super storms. The Halloween storm period of October-November 2003 was a period of significant ionospheric effects, during which large geomagnetic storms were observed. We have examined the impact on HFDL of the various phenomena observed during this period. We have found some impact on HFDL performance for the October 29-31 period, but it is minimal in amplitude. While HFDL is based upon HF propagation, a medium known for its vulnerability to ionospheric variability, the system performance metric does not reflect this vulnerability to a significant degree. This is thought to be the result of the substantial amount of diversity built into the system, especially the adaptive frequency management system, Dynacastr, a system developed by RPSI. The adaptive frequency management system involves the use of active frequency tables (or AFTs) that are based upon space weather observables. During the stormy weeks of October and November, ARINC issued over seven changes to the AFTs used by every HFDL station. These changes helped the HFDL

  5. Team performance and collective efficacy in the dynamic psychology of competitive team: a Bayesian network analysis.

    PubMed

    Fuster-Parra, P; García-Mas, A; Ponseti, F J; Leo, F M

    2015-04-01

    The purpose of this paper was to discover the relationships among 22 relevant psychological features of semi-professional football players in order to study the team's performance and collective efficacy via a Bayesian network (BN). The paper includes optimization of the team's performance and collective efficacy using an intercausal reasoning pattern, which constitutes a very common pattern in human reasoning. The BN is used to make inferences regarding our problem, from which we obtain some conclusions; among them: maximizing the team's performance causes a decrease in collective efficacy, and when the team's performance achieves its minimum value it causes an increase in moderate/high values of collective efficacy. Similarly, we may instead reason by optimizing team collective efficacy. The BN also allows us to determine the features that have the strongest influence on performance and those that have the strongest influence on collective efficacy. From the BN, two different coaching styles were differentiated taking into account the local Markov property: training leadership and autocratic leadership. PMID:25546263

  6. Predicting Subcontractor Performance Using Web-Based Evolutionary Fuzzy Neural Networks

    PubMed Central

    2013-01-01

    Subcontractor performance directly affects project success. The use of inappropriate subcontractors may result in individual work delays, cost overruns, and quality defects throughout the project. This study develops web-based Evolutionary Fuzzy Neural Networks (EFNNs) to predict subcontractor performance. EFNNs are a fusion of Genetic Algorithms (GAs), Fuzzy Logic (FL), and Neural Networks (NNs). FL is primarily used to mimic high-level decision-making processes and deal with uncertainty in the construction industry. NNs are used to identify the association between previous performance and future status when predicting subcontractor performance. GAs optimize the parameters required by FL and NNs. EFNNs encode FL and NNs using floating-point numbers to shorten the length of a string. A multi-cut-point crossover operator is used to explore the parameter space and retain solution legality. Finally, the applicability of the proposed EFNNs is validated using real subcontractors. The EFNNs are evolved using 22 historical patterns and tested using 12 unseen cases. Application results show that the proposed EFNNs surpass FL and NNs in predicting subcontractor performance. The proposed approach improves prediction accuracy and reduces the effort required to predict subcontractor performance, providing field operators with web-based remote access to a reliable, scientific prediction mechanism. PMID:23864830

  7. Scalable Memory Registration for High-Performance Networks Using Helper Threads

    SciTech Connect

    Li, Dong; Cameron, Kirk W.; Nikolopoulos, Dimitrios; de Supinski, Bronis R.; Schulz, Martin

    2011-01-01

    Remote DMA (RDMA) enables high performance networks to reduce data copying between an application and the operating system (OS). However, RDMA operations in some high performance networks require communication memory to be explicitly registered with the network adapter and pinned by the OS. Memory registration and pinning limit the flexibility of the memory system and reduce the amount of memory that user processes can allocate. These issues become more significant on multicore platforms, since registered memory demand grows linearly with the number of processor cores. In this paper we propose a new memory registration/deregistration strategy to reduce registered memory on multicore architectures for HPC applications. We hide the cost of dynamic memory management by offloading all dynamic memory registration and deregistration requests to a dedicated memory management helper thread. We investigate design policies and performance implications of the helper thread approach. We evaluate our framework with the NAS parallel benchmarks, for which our registration scheme significantly reduces the registered memory (23.62% on average and up to 49.39%) and avoids memory registration/deregistration costs for reused communication memory. We show that our system enables the execution of problem sizes that could not complete under existing memory registration strategies.
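
    The helper-thread pattern itself can be sketched in a few lines. This is a toy Python analogue (not the paper's RDMA-level C implementation): worker threads enqueue registration and deregistration requests, and a single dedicated thread serves them so the workers never pay the pin/unpin cost directly.

    ```python
    import queue
    import threading

    class RegistrationHelper:
        """Serve memory registration/deregistration requests from one dedicated thread."""

        def __init__(self, register_fn, deregister_fn):
            self.q = queue.Queue()
            self.register_fn, self.deregister_fn = register_fn, deregister_fn
            threading.Thread(target=self._serve, daemon=True).start()

        def _serve(self):
            while True:
                op, buf, done = self.q.get()
                (self.register_fn if op == 'reg' else self.deregister_fn)(buf)
                done.set()                       # signal completion to the requester

        def request(self, op, buf):
            """Called by worker threads; returns an Event the caller may wait on or poll."""
            done = threading.Event()
            self.q.put((op, buf, done))
            return done

    # Usage with stand-in (placeholder) register/deregister functions:
    helper = RegistrationHelper(lambda b: None, lambda b: None)
    evt = helper.request('reg', bytearray(4096))
    evt.wait()
    ```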

  8. Performance of Thorup's Shortest Path Algorithm for Large-Scale Network Simulation

    NASA Astrophysics Data System (ADS)

    Sakumoto, Yusuke; Ohsaki, Hiroyuki; Imase, Makoto

    In this paper, we investigate the performance of Thorup's algorithm by comparing it to Dijkstra's algorithm for large-scale network simulations. One of the challenges toward the realization of large-scale network simulations is the efficient computation of shortest paths in a graph with N vertices and M edges. The time complexity for solving a single-source shortest path (SSSP) problem with Dijkstra's algorithm with a binary heap (DIJKSTRA-BH) is O((M+N)log N). A sophisticated algorithm called Thorup's algorithm has been proposed. The original version of Thorup's algorithm (THORUP-FR) has a time complexity of O(M+N). A simplified version of Thorup's algorithm (THORUP-KL) has a time complexity of O(Mα(N)+N), where α(N) is the functional inverse of the Ackermann function. In this paper, we compare the performances (i.e., execution time and memory consumption) of THORUP-KL and DIJKSTRA-BH, since it is known that THORUP-FR is at least ten times slower than Dijkstra's algorithm with a Fibonacci heap. We find that (1) THORUP-KL is almost always faster than DIJKSTRA-BH for large-scale network simulations, and (2) the performances of THORUP-KL and DIJKSTRA-BH deviate from their time complexities due to the presence of the memory cache in the microprocessor.
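
    For reference, the DIJKSTRA-BH baseline above is the textbook algorithm; a compact sketch using Python's binary heap (heapq) is shown below. Thorup's algorithm is considerably more involved and is not reproduced here.

    ```python
    import heapq

    def dijkstra_bh(adj, src):
        """Single-source shortest paths with a binary heap.

        adj: {u: [(v, w), ...]} adjacency list with non-negative weights.
        Returns a dict of shortest distances from src.
        """
        dist = {src: 0}
        heap = [(0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float('inf')):
                continue                          # stale heap entry
            for v, w in adj.get(u, []):
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    # Example on a small graph:
    print(dijkstra_bh({'a': [('b', 2), ('c', 5)], 'b': [('c', 1)], 'c': []}, 'a'))
    ```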

  9. Analysing the Correlation between Social Network Analysis Measures and Performance of Students in Social Network-Based Engineering Education

    ERIC Educational Resources Information Center

    Putnik, Goran; Costa, Eric; Alves, Cátia; Castro, Hélio; Varela, Leonilde; Shah, Vaibhav

    2016-01-01

    Social network-based engineering education (SNEE) is designed and implemented as a model of Education 3.0 paradigm. SNEE represents a new learning methodology, which is based on the concept of social networks and represents an extended model of project-led education. The concept of social networks was applied in the real-life experiment,…

  10. Modeling of Abrasion Resistance Performance of Persian Handmade Wool Carpets Using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Gupta, Shravan Kumar; Goswami, Kamal Kanti

    2015-10-01

    This paper presents the application of Artificial Neural Network (ANN) modeling for the prediction of the abrasion resistance of Persian handmade wool carpets. Four carpet constructional parameters, namely knot density, pile height, number of plies in the pile yarn and pile yarn twist, have been used as input parameters for the ANN model. The prediction performance was judged in terms of statistical parameters like the correlation coefficient (R) and Mean Absolute Percentage Error (MAPE). Though the training performance of the ANN was very good, its generalization ability was not up to the mark. This implies that a larger number of training samples should be used for adequate training of ANN models.
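
    A minimal sketch of a four-input ANN regression model of this kind, using scikit-learn's MLPRegressor (the arrays below are placeholders; the paper's network topology and training details are not reproduced):

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Inputs: knot density, pile height, plies in the pile yarn, pile yarn twist (placeholder values).
    rng = np.random.default_rng(0)
    X, y = rng.random((40, 4)), rng.random(40)

    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
    model.fit(X, y)
    predicted_abrasion_resistance = model.predict(X[:5])
    ```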

  11. Performance Analysis of AODV Routing Protocol for Wireless Sensor Network based Smart Metering

    NASA Astrophysics Data System (ADS)

    Hasan Farooq; Low Tang Jung

    2013-06-01

    Today no one can deny the need for the Smart Grid, and upgrading the outdated electric infrastructure to cope with ever-increasing electric load demand is considered of utmost importance. A Wireless Sensor Network (WSN) is considered a promising candidate for internetworking smart meters with the gateway using a mesh topology. This paper investigates the performance of the AODV routing protocol for a WSN-based smart metering deployment. Three case studies are presented to analyze its performance based on four metrics: (i) packet delivery ratio, (ii) average energy consumption of nodes, (iii) average end-to-end delay and (iv) normalized routing load.

  12. Networks.

    ERIC Educational Resources Information Center

    Maughan, George R.; Petitto, Karen R.; McLaughlin, Don

    2001-01-01

    Describes the connectivity features and options of modern campus communication and information system networks, including signal transmission (wire-based and wireless), signal switching, convergence of networks, and network assessment variables, to enable campus leaders to make sound future-oriented decisions. (EV)

  13. Three-dimensional interconnected nickel phosphide networks with hollow microstructures and desulfurization performance

    SciTech Connect

    Zhang, Shuna; Zhang, Shujuan; Song, Limin; Wu, Xiaoqing; Fang, Sheng

    2014-05-01

    Graphical abstract: Three-dimensional interconnected nickel phosphide networks with hollow microstructures and desulfurization performance.

    Highlights:
    • Three-dimensional Ni{sub 2}P has been prepared using foam nickel as a template.
    • The microstructures interconnected and formed sponge-like porous networks.
    • Three-dimensional Ni{sub 2}P shows superior hydrodesulfurization activity.

    Abstract: Three-dimensional microstructured nickel phosphide (Ni{sub 2}P) was fabricated by the reaction between foam nickel (Ni) and red phosphorus. The as-prepared Ni{sub 2}P samples, as interconnected networks, maintained the original mesh structure of the foam nickel. The crystal structure and morphology of the as-synthesized Ni{sub 2}P were characterized by X-ray diffraction, scanning electron microscopy, automatic mercury porosimetry and X-ray photoelectron spectroscopy. The SEM study showed that adjacent hollow branches were mutually interconnected to form sponge-like networks. The investigation of the pore structure provided detailed information on the hollow microstructures. The growth mechanism of the three-dimensionally structured Ni{sub 2}P was postulated and discussed in detail. To investigate its catalytic properties, SiO{sub 2}-supported three-dimensional Ni{sub 2}P was prepared successfully and evaluated for the hydrodesulfurization (HDS) of dibenzothiophene (DBT). DBT molecules were mostly hydrogenated and then desulfurized by Ni{sub 2}P/SiO{sub 2}.

  14. Age-related changes in functional network connectivity associated with high levels of verbal fluency performance.

    PubMed

    Marsolais, Yannick; Perlbarg, Vincent; Benali, Habib; Joanette, Yves

    2014-09-01

    The relative preservation of receptive language abilities in older adults has been associated with adaptive changes in cerebral activation patterns, which have been suggested to be task-load dependent. However, the effects of aging and task demands on the functional integration of neural networks contributing to speech production abilities remain largely unexplored. In the present functional neuroimaging study, data-driven spatial independent component analysis and hierarchical measures of integration were used to explore age-related changes in functional connectivity among cortical areas contributing to semantic, orthographic, and automated word fluency tasks in healthy young and older adults, as well as to assess the effect of age and task demands on the functional integration of a verbal fluency network. The results showed that the functional integration of speech production networks decreases with age, while at the same time this has a marginal effect on behavioral outcomes in high-performing older adults. Moreover, a significant task demand/age interaction was found in functional connectivity within the anterior and posterior subnetworks of the verbal fluency network. These results suggest that local changes in functional integration among cortical areas supporting lexical speech production are modulated by age and task demands. PMID:25014614

  15. Networks of ultrasmall Pd/Cr bilayer nanowires as high performance hydrogen sensors.

    SciTech Connect

    Zeng, X.-Q.; Wang, Y.-L.; Deng, H.; Latimer, M. L.; Xiao, Z.-L.; Pearson, J.; Xu, T.; Wang, H.-H.; Welp, U.; Crabtree, G. W.; Kwok, W.-K.

    2011-01-01

    The newly developed hydrogen sensor, based on a network of ultrasmall pure palladium nanowires sputter-deposited on a filtration membrane, takes advantage of single palladium nanowires' characteristics of high speed and sensitivity while eliminating their nanofabrication obstacles. However, this new type of sensor, like the single palladium nanowires, cannot distinguish hydrogen concentrations above 3%, thus limiting the potential applications of the sensor. This study reports hydrogen sensors based on a network of ultrasmall Cr-buffered Pd (Pd/Cr) nanowires on a filtration membrane. These sensors not only are able to outperform their pure Pd counterparts in speed and durability but also allow hydrogen detection at concentrations up to 100%. The new networks consist of a thin layer of palladium deposited on top of a Cr adhesion layer 1-3 nm thick. Although the Cr layer is insensitive to hydrogen, it enables the formation of a network of continuous Pd/Cr nanowires with thicknesses of the Pd layer as thin as 2 nm. The improved performance of the Pd/Cr sensors can be attributed to the increased surface area to volume ratio and to the confinement-induced suppression of the phase transition from Pd/H solid solution ({alpha}-phase) to Pd hydride ({beta}-phase).

  16. A High Performance Computing Network and System Simulator for the Power Grid: NGNS^2

    SciTech Connect

    Villa, Oreste; Tumeo, Antonino; Ciraci, Selim; Daily, Jeffrey A.; Fuller, Jason C.

    2012-11-11

    Designing and planning next generation power grid systems composed of large power distribution networks, monitoring and control networks, autonomous generators and consumers of power requires advanced simulation infrastructures. The objective is to predict and analyze in time the behavior of networks of systems for unexpected events such as loss of connectivity, malicious attacks and power loss scenarios. This ultimately allows one to answer questions such as: “What could happen to the power grid if ...”. We want to be able to answer as many questions as possible in the shortest possible time for the largest possible systems. In this paper we present a new High Performance Computing (HPC) oriented simulation infrastructure named Next Generation Network and System Simulator (NGNS2). NGNS2 allows for the distribution of a single simulation among multiple computing elements by using MPI and OpenMP threads. NGNS2 provides the extensive configuration, fault-tolerance and load-balancing capabilities needed to simulate large and dynamic systems for long periods of time. We show the preliminary results of the simulator running approximately two million simulated entities both on a 64-node commodity Infiniband cluster and a 48-core SMP workstation.

  17. Performance Analysis of Satellite Clock Bias Based on Wavelet Analysis and Neural Network

    NASA Astrophysics Data System (ADS)

    Guo, C. J.; Teng, Y. L.

    2010-10-01

    In the field of real-time GPS precise point positioning (PPP), real-time and reliable prediction of satellite clock bias (SCB) is one key to realizing real-time GPS PPP with high accuracy. The satellite-borne GPS atomic clock operates at high frequency, is very sensitive, and is easily influenced by external factors as well as its own characteristics, so it is very difficult to capture its complicated and detailed pattern of change. Given these characteristics, a novel four-stage method for SCB prediction based on wavelet analysis and a neural network is proposed. The basic ideas, prediction models and steps of clock bias prediction based on wavelet analysis and a radial basis function (RBF) network are discussed, respectively. The model adopts a "sliding window" to partition the data and uses the neural network to predict the coefficients of the clock bias sequence at each layer after wavelet analysis and denoising. As a result, the intricate variation of the clock bias sequence is captured more accurately and the sequence is better approximated. Compared with the grey system model and the neural network model, a careful precision analysis of SCB prediction is made to verify the feasibility and validity of the proposed method, using the performance parameters of GPS satellite clocks. The simulation results show that the prediction precision of this novel model is much better. The model can provide SCB predictions with relatively high precision for real-time GPS PPP.

  18. Neural network-based combustion optimization reduces NOx emissions while improving performance

    SciTech Connect

    Booth, R.C.; Roland, W.B.

    1998-07-01

    This paper presents the benefits of applying an on-line, real-time neural network to several bituminous coal fired utility boilers. The system helps reduce NOx emissions up to 60%, meeting compliance while it improves heat rate up to 2% overall (5% at low load) and reduces LOI as much as 30% through combustion optimization alone. The system can avoid or postpone large capital expenditures for low NOx burners, overfire air boiler modifications, SCRs, and SNCRs. The neural network-based system has been applied on 11 electric utility boilers that represent a wide range of furnace and burner types including units with tangential-, cell-, single wall-, and opposed wall-burner arrangements that have ranged in capacity from 146 to 800 MW in an advisory mode. Several sites have employed the neural network-based system for closed-loop supervisory combustion control. Boiler combustion profiles change continuously due to coal quality, boiler loading, changes in slag/soot deposits, ambient conditions, and the condition of plant equipment. Through on-line retraining, the neural network-based system optimizes the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to reduce NO{sub x} emissions and improve heat rate simultaneously.

  19. Neural network-based combustion optimization reduces NOx emissions while improving performance

    SciTech Connect

    Booth, R.C.; Roland, W.B. Jr.

    1998-12-31

    The NeuSIGHT neural network based system has been applied to units with tangential-, cell-, single wall-, and opposed wall-burner arrangements that have ranged in capacity from 146 to 800 MW in an advisory mode. Several sites have employed the neural network-based system for closed-loop supervisory combustion control. Boiler combustion profiles change continuously due to coal quality, boiler loading, changes in slag/soot deposits, ambient conditions, and the condition of plant equipment. Through on-line retraining, the neural network-based system optimizes the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to reduce NO{sub x} emissions and improve heat rate simultaneously. This paper presents the benefits of applying an on-line, real-time neural network to several commercially operating bituminous coal fired utility boilers. The system helps reduce NO{sub x} emissions up to 60%, meeting compliance while it improves heat rate up to 2% overall (5% at low load) and reduces LOI as much as 30% through combustion optimization alone. The system can avoid or postpone large capital expenditures for low NO{sub x} burners, overfire air boiler modifications, SCRs, and SNCRs.

  20. Distributed semi-supervised support vector machines.

    PubMed

    Scardapane, Simone; Fierimonte, Roberto; Di Lorenzo, Paolo; Panella, Massimo; Uncini, Aurelio

    2016-08-01

    The semi-supervised support vector machine (S(3)VM) is a well-known algorithm for performing semi-supervised inference under the large margin principle. In this paper, we are interested in the problem of training a S(3)VM when the labeled and unlabeled samples are distributed over a network of interconnected agents. In particular, the aim is to design a distributed training protocol over networks, where communication is restricted only to neighboring agents and no coordinating authority is present. Using a standard relaxation of the original S(3)VM, we formulate the training problem as the distributed minimization of a non-convex social cost function. To find a (stationary) solution in a distributed manner, we employ two different strategies: (i) a distributed gradient descent algorithm; (ii) a recently developed framework for In-Network Nonconvex Optimization (NEXT), which is based on successive convexifications of the original problem, interleaved by state diffusion steps. Our experimental results show that the proposed distributed algorithms have comparable performance with respect to a centralized implementation, while highlighting the pros and cons of the proposed solutions. To date, this is the first work that paves the way toward the broad field of distributed semi-supervised learning over networks. PMID:27179615

  1. Quantifying individual performance in Cricket — A network analysis of batsmen and bowlers

    NASA Astrophysics Data System (ADS)

    Mukherjee, Satyam

    2014-01-01

    Quantifying individual performance in the game of Cricket is critical for team selection in International matches. The number of runs scored by batsmen and wickets taken by bowlers serves as a natural way of quantifying the performance of a cricketer. Traditionally the batsmen and bowlers are rated on their batting or bowling average respectively. However, in a game like Cricket the manner in which one scores runs or claims a wicket is also important. Scoring runs against a strong bowling line-up or delivering a brilliant performance against a team with a strong batting line-up deserves more credit. A player's average is not able to capture this aspect of the game. In this paper we present a refined method to quantify the 'quality' of runs scored by a batsman or wickets taken by a bowler. We explore the application of Social Network Analysis (SNA) to rate the players' contribution to team performance. We generate a directed and weighted network of batsmen-bowlers using the player-vs-player information available for Test cricket and ODI cricket. Additionally, we generate a network of batsmen and bowlers based on the dismissal record of batsmen in the history of cricket - Test (1877-2011) and ODI (1971-2011). Our results show that M. Muralitharan is the most successful bowler in the history of Cricket. Our approach could potentially be applied in domestic matches to judge a player's performance, which in turn paves the way for a balanced team selection for International matches.
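
    As a rough illustration of the network idea (not the paper's exact rating formula), a directed, weighted batsman-to-bowler dismissal graph can be scored with PageRank so that dismissing highly rated batsmen earns a bowler more credit; the players and weights below are hypothetical.

    ```python
    import networkx as nx

    # Edge (batsman, bowler, n) means the batsman was dismissed by the bowler n times.
    G = nx.DiGraph()
    G.add_weighted_edges_from([
        ('Batsman_A', 'Bowler_X', 3),
        ('Batsman_B', 'Bowler_X', 1),
        ('Batsman_A', 'Bowler_Y', 1),
        ('Batsman_C', 'Bowler_Y', 2),
    ])

    # PageRank propagates credit along weighted dismissal edges.
    scores = nx.pagerank(G, weight='weight')
    print(sorted(scores.items(), key=lambda kv: -kv[1]))
    ```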

  2. REMOTE, a Wireless Sensor Network Based System to Monitor Rowing Performance

    PubMed Central

    Llosa, Jordi; Vilajosana, Ignasi; Vilajosana, Xavier; Navarro, Nacho; Suriñach, Emma; Marquès, Joan Manuel

    2009-01-01

    In this paper, we take a hard look at the performance of REMOTE, a sensor network based application that provides a detailed picture of boat movement, individual rower performance, or a rower's performance compared with other crew members. The application analyzes data gathered with a WSN strategically deployed over a boat to obtain information on the boat and oar movements. The functionalities of REMOTE are compared to those of the RowX [1] outdoor instrument, a commercial wired sensor instrument designed for similar purposes. This study demonstrates that, with a smart geometrical configuration of the sensors, the rotation and translation of the oars and boat can be obtained. Three different tests are performed: laboratory calibration allows us to become familiar with the accelerometer readings and validate the theory, ergometer tests help us to set the acquisition parameters, and on-boat tests show the application potential of these technologies in sports. PMID:22423204

  3. Evaluation of replacement protocols and modifications to TCP to enhance ASC Wide Area Network performance.

    SciTech Connect

    Romero, Randy L. Jr.

    2004-09-01

    Historically, TCP/IP has been the protocol suite used to transfer data throughout the Advanced Simulation and Computing (ASC) community. However, TCP was developed many years ago for an environment very different from the ASC Wide Area Network (WAN) of today. There have been numerous publications hinting at better performance if modifications were made to the TCP algorithms or a different protocol were used to transfer data across a high-bandwidth, high-delay WAN. Since Sandia National Laboratories wants to maximize the ASC WAN performance to support the Thor's Hammer supercomputer, there is strong interest in evaluating modifications to the TCP protocol and in evaluating alternatives to TCP, such as SCTP, to determine if they provide improved performance. Therefore, the goal of this project is to test, evaluate, compare, and report protocol technologies that enhance the performance of the ASC WAN.

  4. Brain networks subserving fixed versus performance-adjusted delay stop trials in a stop signal task.

    PubMed

    Fauth-Bühler, Mira; de Rover, Mischa; Rubia, Katya; Garavan, Hugh; Abbott, Sanja; Clark, Luke; Vollstädt-Klein, Sabine; Mann, Karl; Schumann, Gunter; Robbins, Trevor W

    2012-11-01

    The stop signal task is a widely used tool for assessing inhibitory motor control. Two main task variants exist: (1) a fixed delay version, where all volunteers complete the same trials, resulting in performance differences due to individual variation in inhibitory capacity, and (2) a performance-adjusted version that uses a tracking algorithm to equate performance and task difficulty across subjects, leading to ∼50% successful inhibition for every participant. Our aim was to investigate commonalities, mean differences and between-subject variability in brain activation for successful response inhibition between the performance-adjusted and fixed delay version. We conducted a functional magnetic resonance imaging (fMRI) study in 18 healthy individuals, using a within-subject, within-task design where both adjusting and fixed delay trials were analysed separately. Conjunction analyses identified a network of areas involved in successful response inhibition in both task versions. In comparing the fixed and performance-adjusted versions, we found no significant differences between delay conditions during successful inhibition. While activation measures in the inhibitory networks of both delay variants were highly comparable, the neural responses to fixed delay trials were more variable across participants. This suggests that performance-adjusted stop signal tasks may be more suitable for studies in which the performance differences need to be controlled for, such as for developmental or clinical studies. Fixed delay stop signal tasks may be more appropriate in studies assessing the neural basis of individual differences in performance, such as studies of personality traits or genetic associations. PMID:22820235

  5. Performance evaluation of wavelet-based ECG compression algorithms for telecardiology application over CDMA network.

    PubMed

    Kim, Byung S; Yoo, Sun K

    2007-09-01

    The use of wireless networks bears great practical importance in instantaneous transmission of ECG signals during movement. In this paper, three typical wavelet-based ECG compression algorithms, Rajoub (RA), Embedded Zerotree Wavelet (EZ), and Wavelet Transform Higher-Order Statistics Coding (WH), were evaluated to find an appropriate ECG compression algorithm for scalable and reliable wireless tele-cardiology applications, particularly over a CDMA network. The short-term and long-term performance characteristics of the three algorithms were analyzed using normal, abnormal, and measurement noise-contaminated ECG signals from the MIT-BIH database. In addition to the processing delay measurement, compression efficiency and reconstruction sensitivity to error were also evaluated via simulation models including the noise-free channel model, random noise channel model, and CDMA channel model, as well as over an actual CDMA network currently operating in Korea. This study found that the EZ algorithm achieves the best compression efficiency within a low-noise environment, and that the WH algorithm is competitive for use in high-error environments with degraded short-term performance with abnormal or contaminated ECG signals. PMID:17701824

  6. Exact performance analytical model for spectrum allocation in flexible grid optical networks

    NASA Astrophysics Data System (ADS)

    Yu, Yiming; Zhang, Jie; Zhao, Yongli; Li, Hui; Ji, Yuefeng; Gu, Wanyi

    2014-03-01

    Dynamic flexible grid optical networks have gained much attention because of their advantages of high spectrum efficiency and flexibility, while performance analysis is more complex than for fixed grid optical networks. An analytical Markov model is first presented in the paper, which can exactly describe the stochastic characteristics of spectrum allocation in flexible grid optical networks considering both random-fit and first-fit resource assignment policies. We focus on the effect of the spectrum contiguity constraint, which has not been systematically studied with respect to mathematical modeling, and three major properties of the model are presented and analyzed. The model can expose key performance features and act as the foundation for modeling the Routing and Spectrum Assignment (RSA) problem with diverse topologies. Two heuristic algorithms are also proposed to make it more tractable. Finally, several key parameters, such as the blocking probability, resource utilization rate and fragmentation rate, are presented and computed, and the corresponding Monte Carlo simulation results match closely with the analytical results, which proves the correctness of this mathematical model.
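
    As a companion to the analytical model, a short Monte Carlo sketch (illustrative assumptions: a single link, Poisson arrivals, exponential holding times, first-fit allocation) estimates the blocking probability under the spectrum-contiguity constraint.

    ```python
    import random

    def simulate_blocking(slots=64, load=20.0, mu=1.0, demands=(1, 2, 4), n_req=20000, seed=1):
        """Estimate blocking probability on one flexible-grid link with first-fit allocation."""
        random.seed(seed)
        active = []                                # (departure_time, first_slot, width)
        t, blocked = 0.0, 0
        for _ in range(n_req):
            t += random.expovariate(load)          # Poisson arrivals
            active = [a for a in active if a[0] > t]                       # release departed connections
            busy = {s for dep, st, w in active for s in range(st, st + w)}
            width = random.choice(demands)         # requested number of contiguous slots
            start = next((i for i in range(slots - width + 1)
                          if all(s not in busy for s in range(i, i + width))), None)
            if start is None:
                blocked += 1                       # no contiguous free block of the requested width
            else:
                active.append((t + random.expovariate(mu), start, width))
        return blocked / n_req

    print(simulate_blocking())
    ```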

  7. EFFECT OF MOBILITY ON PERFORMANCE OF WIRELESS AD-HOC NETWORK PROTOCOLS.

    SciTech Connect

    Barrett, C. L.; Drozda, M.; Marathe, M. V.; Marathe, A.

    2001-01-01

    We empirically study the effect of mobility on the performance of protocols designed for wireless ad-hoc networks. An important objective is to study the interaction of the routing and MAC layer protocols under different mobility parameters. We use three basic mobility models: the grid mobility model, the random waypoint model, and the exponential correlated random model. The performance of the protocols was measured in terms of (i) latency, (ii) throughput, (iii) number of packets received, (iv) long-term fairness and (v) number of control packets at the MAC layer level. Three different commonly studied routing protocols were used: AODV, DSR and LAR1. Similarly, three well-known MAC protocols were used: MACA, 802.11 and CSMA. The main conclusions of our study include the following: 1. The performance of the network varies widely with varying mobility models, packet injection rates and speeds, and can in fact be characterized as fair to poor depending on the specific situation. Nevertheless, in general, it appears that the combination of AODV and 802.11 is far better than other combinations of routing and MAC protocols. 2. MAC layer protocols interact with routing layer protocols. This concept, which is formalized using statistics, implies that in general it is not meaningful to speak about a MAC or a routing protocol in isolation. Such an interaction leads to trade-offs between the amount of control packets generated by each layer. More interestingly, the results raise the possibility of improving the performance of a particular MAC layer protocol by using a cleverly designed routing protocol or vice versa. 3. Routing protocols with distributed knowledge about routes are more suitable for networks with mobility. This is seen by comparing the performance of AODV with DSR or LAR scheme 1. In DSR and LAR scheme 1, information about a computed path is stored in the route query control packet. 4. MAC layer protocols have varying performance with varying mobility models. It is

  8. Ka-band (32-GHz) performance of 70-meter antennas in the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Imbriale, W. A.; Bhanji, A. M.; Blank, S.; Lobb, V. B.; Levy, R.; Rocci, S. A.

    1987-01-01

    Two models are provided of the Deep Space Network (DSN) 70 m antenna performance at Ka-band (32 GHz) and, for comparison purposes, one at X-band (8.4 GHz). The baseline 70 m model represents expected X-band and Ka-band performance at the end of the currently ongoing 64 m to 70 m mechanical upgrade. The improved 70 m model represents two sets of Ka-band performance estimates (the X-band performance will not change) based on two separately developed improvement schemes: the first scheme, a mechanical approach, reduces tolerances of the panels and their settings, the reflector structure and subreflector, and the pointing and tracking system. The second, an electronic/mechanical approach, uses an array feed scheme to compensate for the lack of antenna stiffness, and improves panel settings using microwave holographic measuring techniques. Results are preliminary, due to remaining technical and cost uncertainties. However, there do not appear to be any serious difficulties in upgrading the baseline DSN 70 m antenna network to operate efficiently in an improved configuration at 32 GHz (Ka-band). This upgrade can be achieved by a conventional mechanical upgrade or by a mechanical/electronic combination. An electronically compensated array feed system is technically feasible, although it needs to be modeled and demonstrated. Similarly, the mechanical upgrade requires the development and demonstration of panel actuators, sensors, and an optical surveying system.

  9. Design and Performance of the Acts Gigabit Satellite Network High Data-Rate Ground Station

    NASA Technical Reports Server (NTRS)

    Hoder, Doug; Kearney, Brian

    1995-01-01

    The ACTS High Data-Rate Ground stations were built to support the ACTS Gigabit Satellite Network (GSN). The ACTS GSN was designed to provide fiber-compatible SONET service to remote nodes and networks through a wideband satellite system. The ACTS satellite is unique in its extremely wide bandwidth, and electronically controlled spot beam antennas. This paper discusses the requirements, design and performance of the RF section of the ACTS High Data-Rate Ground Stations and constituent hardware. The ACTS transponder systems incorporate highly nonlinear hard limiting. This introduced a major complexity into the design and subsequent modification of the ground stations. A discussion of the peculiarities of the ACTS spacecraft transponder system and their impact is included.

  10. Multilayer cellular neural network and fuzzy C-mean classifiers: comparison and performance analysis

    NASA Astrophysics Data System (ADS)

    Trujillo San-Martin, Maite; Hlebarov, Vejen; Sadki, Mustapha

    2004-11-01

    Neural Networks and Fuzzy systems are considered two of the most important artificial intelligent algorithms which provide classification capabilities obtained through different learning schemas which capture knowledge and process it according to particular rule-based algorithms. These methods are especially suited to exploit the tolerance for uncertainty and vagueness in cognitive reasoning. By applying these methods with some relevant knowledge-based rules extracted using different data analysis tools, it is possible to obtain a robust classification performance for a wide range of applications. This paper will focus on non-destructive testing quality control systems, in particular, the study of metallic structures classification according to the corrosion time using a novel cellular neural network architecture, which will be explained in detail. Additionally, we will compare these results with the ones obtained using the Fuzzy C-means clustering algorithm and analyse both classifiers according to their classification capabilities.
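
    The fuzzy C-means baseline used in the comparison alternates between membership and centroid updates until convergence. The sketch below is a generic NumPy implementation applied to synthetic two-cluster data, not the authors' corrosion-time dataset or their cellular neural network.

      import numpy as np

      def fuzzy_c_means(X, c=2, m=2.0, iters=100, eps=1e-5, seed=0):
          """Plain fuzzy C-means: returns (cluster centers, membership matrix U of shape (n, c))."""
          rng = np.random.default_rng(seed)
          n = X.shape[0]
          U = rng.random((n, c))
          U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
          for _ in range(iters):
              Um = U ** m
              centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
              U_new = 1.0 / (d ** (2 / (m - 1)))     # membership proportional to inverse distance
              U_new /= U_new.sum(axis=1, keepdims=True)
              if np.abs(U_new - U).max() < eps:
                  U = U_new
                  break
              U = U_new
          return centers, U

      # Two synthetic clusters standing in for "short" vs "long" corrosion times (hypothetical).
      X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4])
      centers, U = fuzzy_c_means(X, c=2)
      print("cluster centers:\n", centers)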

  11. Prediction of wastewater treatment plants performance based on artificial fish school neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Ruicheng; Li, Chong

    2011-10-01

    A reliable model for a wastewater treatment plant is essential in providing a tool for predicting its performance and forming a basis for controlling the operation of the process. This would minimize operating costs and help assess the stability of the environmental balance. Given the multi-variable, uncertain, non-linear characteristics of the wastewater treatment system, an artificial fish school neural network prediction model is established based on actual operation data from the wastewater treatment system. The model overcomes several disadvantages of the conventional BP neural network. The results of model calculation show that the predicted values closely match the measured values, demonstrating the model's ability to simulate and predict plant behavior and to help optimize the operating status. The predictive model provides a simple and practical tool for operation and management in wastewater treatment plants, and has good research and engineering value.

  12. A model to compare performance of space and ground network support of low-Earth orbiters

    NASA Technical Reports Server (NTRS)

    Posner, E. C.

    1992-01-01

    This article compares the downlink performance in a gross average sense between space and ground network support of low-Earth orbiters. The purpose is to assess what the demand for DSN support of future small, low-cost missions might be, if data storage for spacecraft becomes reliable enough and small enough to meet the storage requirements needed to enable support for only a fraction of the time. It is shown that the link advantage of the DSN over space reception in an average sense is enormous for low-Earth orbiters. The much shorter distances needed to communicate with the ground network more than make up for the speedup in data rate needed to compensate for the short contact times with the DSN that low-Earth orbiters have. The result is that more and more requests for DSN-only support of low-Earth orbiters can be expected.

  13. CRAY-1S integer vector utility library

    SciTech Connect

    Rogers, J.N.; Tooman, T.P.

    1982-06-01

    This report describes thirty-five integer or packed vector utility routines, and documents their testing. These routines perform various vector searches, linear algebra functions, memory resets, and vector boolean operations. They are written in CAL, the assembly language on the CRAY-1S computer. By utilizing the vector processing features of that machine, they are optimized in terms of run time. Each routine has been extensively tested.

  14. Improving TCP Network Performance by Detecting and Reacting to Packet Reordering

    NASA Technical Reports Server (NTRS)

    Kruse, Hans; Ostermann, Shawn; Allman, Mark

    2003-01-01

    There are many factors governing the performance of TCP-based applications traversing satellite channels. The end-to-end performance of TCP is known to be degraded by the reordering, delay, noise and asymmetry inherent in geosynchronous systems. This result has been largely based on experiments that evaluate the performance of TCP in single flow tests. While single flow tests are useful for deriving information on the theoretical behavior of TCP and allow for easy diagnosis of problems, they do not represent a broad range of realistic situations and therefore cannot be used to authoritatively comment on performance issues. The experiments discussed in this report test TCP's performance in a more dynamic environment with competing traffic flows from hundreds of TCP connections running simultaneously across the satellite channel. Another aspect we investigate is TCP's reaction to bit errors on satellite channels. TCP interprets loss as a sign of network congestion. This causes TCP to reduce its transmission rate, leading to reduced performance when loss is due to corruption. We allowed the bit error rate on our satellite channel to vary widely and tested the performance of TCP as a function of these bit error rates. Our results show that the average performance of TCP on satellite channels is good even under conditions of loss as high as bit error rates of 10^-5.
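
    The sensitivity of TCP to random loss noted above is often summarized by the steady-state approximation BW ≈ C · MSS / (RTT · sqrt(p)) due to Mathis et al., where p is the packet loss rate. The sketch below evaluates that relation for a geosynchronous round-trip time; the segment size and loss rates are illustrative, not the measured values from the report.

      import math

      MSS = 1460 * 8             # segment size in bits (hypothetical)
      RTT = 0.55                 # geosynchronous round-trip time in seconds (approximate)
      C = math.sqrt(3.0 / 2.0)   # constant in the Mathis approximation

      def tcp_throughput_bps(loss_rate):
          """Steady-state TCP throughput estimate, in bits per second."""
          return C * MSS / (RTT * math.sqrt(loss_rate))

      for p in (1e-6, 1e-5, 1e-4, 1e-3):
          print(f"packet loss rate {p:g}: ~{tcp_throughput_bps(p) / 1e6:.2f} Mbit/s")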

  15. Production of lentiviral vectors

    PubMed Central

    Merten, Otto-Wilhelm; Hebben, Matthias; Bovolenta, Chiara

    2016-01-01

    Lentiviral vectors (LV) have seen a considerable increase in use as gene therapy vectors for the treatment of acquired and inherited diseases. This review presents the state of the art of the production of these vectors, with particular emphasis on their large-scale production for clinical purposes. In contrast to oncoretroviral vectors, which are produced using stable producer cell lines, clinical-grade LV are in most cases produced by transient transfection of 293 or 293T cells grown in cell factories. However, more recent developments also tend to use hollow-fiber reactors, suspension culture processes, and stable producer cell lines. As is customary for the biotech industry, rather sophisticated downstream processing protocols have been established to remove any undesirable process-derived contaminant, such as plasmid or host cell DNA or host cell proteins. This review compares published large-scale production and purification processes of LV and presents their process performances. Furthermore, developments in the domain of stable cell lines and their path toward use as production vehicles for clinical material are presented. PMID:27110581

  16. Impact of pay-for-performance contracts and network registry on diabetes and asthma HEDIS measures in an integrated delivery network.

    PubMed

    Levin-Scherz, Jeffrey; DeVita, Nicole; Timbie, Justin

    2006-02-01

    This article reviews the experience of a large, heterogeneous integrated delivery network that incorporated physician quality metrics into pay-for-performance contracts. The authors present criteria for including measures in pay-for-performance contracts and offer a practical approach to determining withhold return or bonus distribution based on improvement and performance. They demonstrate interventions undertaken to improve performance, including the development of a claims-based registry. Empirical data show that the network performance improved more than the comparable state and national performance during the period of this observational study. The authors conclude that pay-for-performance contracts led to development of medical management programs including a claims-based registry and nonphysician interventions, which helped significantly improve selected HEDIS scores. PMID:16688922

  17. Control mechanism to prevent correlated message arrivals from degrading signaling no. 7 network performance

    NASA Astrophysics Data System (ADS)

    Kosal, Haluk; Skoog, Ronald A.

    1994-04-01

    Signaling System No. 7 (SS7) is designed to provide a connection-less transfer of signaling messages of reasonable length. Customers having access to user signaling bearer capabilities as specified in the ANSI T1.623 and CCITT Q.931 standards can send bursts of correlated messages (e.g., by doing a file transfer that results in the segmentation of a block of data into a number of consecutive signaling messages) through SS7 networks. These message bursts with short interarrival times could have an adverse impact on the delay performance of the SS7 networks. A control mechanism, Credit Manager, is investigated in this paper to regulate incoming traffic to the SS7 network by imposing appropriate time separation between messages when the incoming stream is too bursty. The credit manager has a credit bank where credits accrue at a fixed rate up to a prespecified credit bank capacity. When a message arrives, the number of octets in that message is compared to the number of credits in the bank. If the number of credits is greater than or equal to the number of octets, then the message is accepted for transmission and the number of credits in the bank is decremented by the number of octets. If the number of credits is less than the number of octets, then the message is delayed until enough credits are accumulated. This paper presents simulation results showing delay performance of the SS7 ISUP and TCAP message traffic with a range of correlated message traffic, and control parameters of the credit manager (i.e., credit generation rate and bank capacity) are determined that ensure the traffic entering the SS7 network is acceptable. The results show that control parameters can be set so that for any incoming traffic stream there is no detrimental impact on the SS7 ISUP and TCAP message delay, and the credit manager accepts a wide range of traffic patterns without causing significant delay.
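
    The credit manager described above behaves like a token-bucket regulator: credits accrue at a fixed rate up to a bank capacity, and a message of L octets is released only when at least L credits are available, otherwise it is delayed until they accumulate. A minimal sketch of that logic follows; the rate and capacity values are placeholders, not the control parameters determined in the paper.

      class CreditManager:
          """Token-bucket-style regulator: messages wait until enough octet credits accrue."""

          def __init__(self, rate_octets_per_s, capacity_octets):
              self.rate = rate_octets_per_s
              self.capacity = capacity_octets
              self.credits = capacity_octets
              self.clock = 0.0                     # time up to which credits have been accrued

          def release_time(self, arrival_time, octets):
              """Earliest time a message of `octets` arriving at arrival_time may enter the network."""
              now = max(arrival_time, self.clock)  # messages are served in arrival order
              self.credits = min(self.capacity,
                                 self.credits + (now - self.clock) * self.rate)
              if self.credits >= octets:
                  self.credits -= octets
                  self.clock = now
                  return now
              wait = (octets - self.credits) / self.rate   # delay until enough credits accumulate
              self.credits = 0.0
              self.clock = now + wait
              return self.clock

      cm = CreditManager(rate_octets_per_s=8000, capacity_octets=2000)   # hypothetical values
      for t, size in [(0.00, 1500), (0.01, 1500), (0.02, 1500)]:
          print(f"arrival {t:.2f} s, {size} octets -> released at {cm.release_time(t, size):.3f} s")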

  18. Comparison of the Performances of Five Primer Sets for the Detection and Quantification of Plasmodium in Anopheline Vectors by Real-Time PCR

    PubMed Central

    Chaumeau, V.; Andolina, C.; Fustec, B.; Tuikue Ndam, N.; Brengues, C.; Herder, S.; Cerqueira, D.; Chareonviriyaphap, T.; Nosten, F.; Corbel, V.

    2016-01-01

    Quantitative real-time polymerase chain reaction (qrtPCR) has significantly improved the detection of Plasmodium in anopheline vectors. A wide variety of primers has been used in different assays, mostly adapted from the molecular diagnosis of malaria in humans. However, such an adaptation can impact the sensitivity of the PCR. Therefore we compared the sensitivity of five primer sets with different molecular targets on blood-stage, sporozoite and oocyst standards of Plasmodium falciparum (Pf) and P. vivax (Pv). Dilution series of standard DNA were used to discriminate between methods at low parasite concentrations and to generate standard curves suitable for the absolute quantification of Plasmodium sporozoites. Our results showed that the best primers for detecting blood stages were not necessarily the best ones for detecting sporozoites. The absolute detection threshold of our qrtPCR assay varied between 3.6 and 360 Pv sporozoites and between 6 and 600 Pf sporozoites per mosquito, according to the primer set used in the reaction mix. In this paper, we discuss the general performance of each primer set and highlight the need to use efficient detection methods for transmission studies. PMID:27441839

  19. Comparison of the Performances of Five Primer Sets for the Detection and Quantification of Plasmodium in Anopheline Vectors by Real-Time PCR.

    PubMed

    Chaumeau, V; Andolina, C; Fustec, B; Tuikue Ndam, N; Brengues, C; Herder, S; Cerqueira, D; Chareonviriyaphap, T; Nosten, F; Corbel, V

    2016-01-01

    Quantitative real-time polymerase chain reaction (qrtPCR) has significantly improved the detection of Plasmodium in anopheline vectors. A wide variety of primers has been used in different assays, mostly adapted from the molecular diagnosis of malaria in humans. However, such an adaptation can impact the sensitivity of the PCR. Therefore we compared the sensitivity of five primer sets with different molecular targets on blood-stage, sporozoite and oocyst standards of Plasmodium falciparum (Pf) and P. vivax (Pv). Dilution series of standard DNA were used to discriminate between methods at low parasite concentrations and to generate standard curves suitable for the absolute quantification of Plasmodium sporozoites. Our results showed that the best primers for detecting blood stages were not necessarily the best ones for detecting sporozoites. The absolute detection threshold of our qrtPCR assay varied between 3.6 and 360 Pv sporozoites and between 6 and 600 Pf sporozoites per mosquito, according to the primer set used in the reaction mix. In this paper, we discuss the general performance of each primer set and highlight the need to use efficient detection methods for transmission studies. PMID:27441839

  20. Network Performance and Coordination in the Health, Education, Telecommunications System. Satellite Technology Demonstration, Technical Report No. 0422.

    ERIC Educational Resources Information Center

    Braunstein, Jean; Janky, James M.

    This paper describes the network coordination for the Health, Education, Telecommunications (HET) system. Specifically, it discusses HET network performance as a function of a specially-developed coordination system which was designed to link terrestrial equipment to satellite operations centers. Because all procedures and equipment developed for…

  1. Performance of an Abbreviated Version of the Lubben Social Network Scale among Three European Community-Dwelling Older Adult Populations

    ERIC Educational Resources Information Center

    Lubben, James; Blozik, Eva; Gillmann, Gerhard; Iliffe, Steve; von Renteln-Kruse, Wolfgang; Beck, John C.; Stuck, Andreas E.

    2006-01-01

    Purpose: There is a need for valid and reliable short scales that can be used to assess social networks and social supports and to screen for social isolation in older persons. Design and Methods: The present study is a cross-national and cross-cultural evaluation of the performance of an abbreviated version of the Lubben Social Network Scale…

  2. Recurrent fuzzy neural network backstepping control for the prescribed output tracking performance of nonlinear dynamic systems.

    PubMed

    Han, Seong-Ik; Lee, Jang-Myung

    2014-01-01

    This paper proposes a backstepping control system that uses a tracking error constraint and recurrent fuzzy neural networks (RFNNs) to achieve a prescribed tracking performance for a strict-feedback nonlinear dynamic system. A new constraint variable was defined to generate the virtual control that forces the tracking error to fall within prescribed boundaries. An adaptive RFNN was also used to obtain the required improvement in approximation performance and to avoid calculating the explosive number of terms generated by the recursive steps of traditional backstepping control. The boundedness and convergence of the closed-loop system were confirmed based on Lyapunov stability theory. The prescribed performance of the proposed control scheme was validated by using it to control the prescribed error of a nonlinear system and a robot manipulator. PMID:24055100

  3. Analytic network process model for sustainable lean and green manufacturing performance indicator

    NASA Astrophysics Data System (ADS)

    Aminuddin, Adam Shariff Adli; Nawawi, Mohd Kamal Mohd; Mohamed, Nik Mohd Zuki Nik

    2014-09-01

    Sustainable manufacturing is regarded as the most complex manufacturing paradigm to date, as it holds the widest scope of requirements. In addition, its three major pillars of economy, environment and society, though distinct, have some overlap among their elements. Even though the concept of sustainability is not new, the development of the performance indicator still needs considerable improvement due to its multifaceted nature, which requires an integrated approach to solve the problem. This paper proposes the best combination of criteria for forming a robust sustainable manufacturing performance indicator via the Analytic Network Process (ANP). The integrated lean, green and sustainable ANP model can be used to comprehend the complex decision system of the sustainability assessment. The findings show that green manufacturing is more sustainable than lean manufacturing. They also illustrate that procurement practice is the most important criterion in the sustainable manufacturing performance indicator.
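
    At the core of ANP (as in AHP) is the derivation of priority weights from pairwise comparison matrices, typically as the principal eigenvector, which are then weighted across the network's clusters. A minimal sketch of the priority-vector step is shown below with an invented comparison matrix; it is not the authors' model of lean and green criteria.

      import numpy as np

      def priority_vector(A, iters=100):
          """Principal eigenvector of a pairwise comparison matrix via power iteration."""
          w = np.ones(A.shape[0]) / A.shape[0]
          for _ in range(iters):
              w = A @ w
              w /= w.sum()
          return w

      # Hypothetical pairwise comparisons among three criteria
      # (e.g. economic, environmental, social); A[i, j] = importance of i relative to j.
      A = np.array([[1.0, 3.0, 5.0],
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])

      w = priority_vector(A)
      lam_max = (A @ w / w).mean()              # estimate of the Perron eigenvalue
      CI = (lam_max - len(w)) / (len(w) - 1)    # consistency index of the judgments
      print("weights:", np.round(w, 3), " CI:", round(CI, 3))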

  4. Comparative investigation of multiplane thrust vectoring nozzles

    NASA Technical Reports Server (NTRS)

    Capone, F.; Smereczniak, P.; Spetnagel, D.; Thayer, E.

    1992-01-01

    The inflight aerodynamic performance of multiplane vectoring nozzles is critical to development of advanced aircraft and flight control systems utilizing thrust vectoring. To investigate vectoring nozzle performance, subscale models of two second-generation thrust vectoring nozzle concepts currently under development for advanced fighters were integrated into an axisymmetric test pod. Installed drag and vectoring performance characteristics of both concepts were experimentally determined in wind tunnel testing. CFD analyses were conducted to understand the impact of internal flow turning on thrust vectoring characteristics. Both nozzles exhibited drag comparable with current nonvectoring axisymmetric nozzles. During vectored-thrust operations, forces produced by external flow effects amounted to about 25 percent of the total force measured.

  5. Free-Standing Copper Nanowire Network Current Collector for Improving Lithium Anode Performance.

    PubMed

    Lu, Lei-Lei; Ge, Jin; Yang, Jun-Nan; Chen, Si-Ming; Yao, Hong-Bin; Zhou, Fei; Yu, Shu-Hong

    2016-07-13

    Lithium metal is one of the most attractive anode materials for next-generation lithium batteries due to its high specific capacity and low electrochemical potential. However, the poor cycling performance and serious safety hazards, caused by the growth of dendritic and mossy lithium, have long hindered the application of lithium metal based batteries. Herein, we reported a rational design of a free-standing Cu nanowire (CuNW) network to suppress the growth of dendritic lithium via accommodating the lithium metal in three-dimensional (3D) nanostructures. We demonstrated that as much as 7.5 mA h cm(-2) of lithium can be plated into the free-standing copper nanowire (CuNW) current collector without the growth of dendritic lithium. The lithium metal anode based on the CuNW exhibited high Coulombic efficiency (average 98.6% during 200 cycles) and outstanding rate performance owing to the suppression of lithium dendrite growth and the high conductivity of the CuNW network. Our results demonstrate that rational nanostructural design of the current collector could be a promising strategy to improve the performance of lithium metal anodes, enabling their application in next-generation lithium-metal based batteries. PMID:27253417

  6. Performance monitoring of the Geumdang Bridge using a dense network of high-resolution wireless sensors

    NASA Astrophysics Data System (ADS)

    Lynch, Jerome P.; Wang, Yang; Loh, Kenneth J.; Yi, Jin-Hak; Yun, Chung-Bang

    2006-12-01

    As researchers continue to explore wireless sensors for use in structural monitoring systems, validation of field performance must be done using actual civil structures. In this study, a network of low-cost wireless sensors was installed in the Geumdang Bridge, Korea to monitor the bridge response to truck loading. Such installations allow researchers to quantify the accuracy and robustness of wireless monitoring systems within the complex environment encountered in the field. In total, 14 wireless sensors were installed in the concrete box girder span of the Geumdang Bridge to record acceleration responses to forced vibrations introduced by a calibrated truck. In order to enhance the resolution of the capacitive accelerometers interfaced to the wireless sensors, a signal conditioning circuit that amplifies and filters low-level accelerometer outputs is proposed. The performance of the complete wireless monitoring system is compared to a commercial tethered monitoring system that was installed in parallel. The performance of the wireless monitoring system is shown to be comparable to that of the tethered counterpart. Computational resources (e.g. microcontrollers) coupled with each wireless sensor allow the sensor to estimate modal parameters of the bridge such as modal frequencies and operational displacement shapes. This form of distributed processing of measurement data by a network of wireless sensors represents a new data management paradigm associated with wireless structural monitoring.

  7. Improving the Performance of the Structure-Based Connectionist Network for Diagnosis of Helicopter Gearboxes

    NASA Technical Reports Server (NTRS)

    Jammu, Vinay B.; Danai, Koroush; Lewicki, David G.

    1996-01-01

    A diagnostic method is introduced for helicopter gearboxes that uses knowledge of the gear-box structure and characteristics of the 'features' of vibration to define the influences of faults on features. The 'structural influences' in this method are defined based on the root mean square value of vibration obtained from a simplified lumped-mass model of the gearbox. The structural influences are then converted to fuzzy variables, to account for the approximate nature of the lumped-mass model, and used as the weights of a connectionist network. Diagnosis in this Structure-Based Connectionist Network (SBCN) is performed by propagating the abnormal vibration features through the weights of SBCN to obtain fault possibility values for each component in the gearbox. Upon occurrence of misdiagnoses, the SBCN also has the ability to improve its diagnostic performance. For this, a supervised training method is presented which adapts the weights of SBCN to minimize the number of misdiagnoses. For experimental evaluation of the SBCN, vibration data from a OH-58A helicopter gearbox collected at NASA Lewis Research Center is used. Diagnostic results indicate that the SBCN is able to diagnose about 80% of the faults without training, and is able to improve its performance to nearly 100% after training.
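
    The propagation step described above, in which abnormal vibration features are mapped to component fault possibilities through structural influence weights, can be sketched generically as a fuzzy max-min composition. The feature names, influence values and abnormality degrees below are illustrative assumptions, not the weights derived from the OH-58A gearbox model.

      import numpy as np

      # Rows: vibration features; columns: gearbox components (all names and values hypothetical).
      features = ["rms_accel_1", "rms_accel_2", "sideband_index"]
      components = ["input pinion", "planet gear", "output bearing"]

      # Fuzzy structural influence of each component on each feature, in [0, 1].
      W = np.array([[0.9, 0.2, 0.1],
                    [0.3, 0.8, 0.2],
                    [0.1, 0.6, 0.7]])

      def fault_possibility(abnormal, W):
          """Max-min composition: possibility_j = max_i min(abnormal_i, W[i, j])."""
          return np.max(np.minimum(abnormal[:, None], W), axis=0)

      abnormal = np.array([0.1, 0.9, 0.7])     # degree to which each feature is abnormal
      for comp, p in zip(components, fault_possibility(abnormal, W)):
          print(f"{comp}: fault possibility {p:.2f}")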

  8. Performance Analysis of OCDMA Based on AND Detection in FTTH Access Network Using PIN & APD Photodiodes

    NASA Astrophysics Data System (ADS)

    Aldouri, Muthana; Aljunid, S. A.; Ahmad, R. Badlishah; Fadhil, Hilal A.

    2011-06-01

    In order to compare PIN photodetectors and avalanche photodiodes (APD), a system using the double weight (DW) code was evaluated for optical spectrum CDMA performance in an FTTH network with a point-to-multipoint (P2MP) application. The performance of PIN versus APD is compared through simulation using OptiSystem software version 7. In this paper we designed two networks: one using a PIN photodetector and the second using an APD photodiode, both with and without an erbium-doped fiber amplifier (EDFA). It is found that the APD photodiode in this system outperforms the PIN photodetector for all simulation results. The conversion used a Mach-Zehnder interferometer (MZI) wavelength converter. We also propose a detection scheme known as the AND subtraction detection technique, implemented with fiber Bragg gratings (FBG) acting as encoder and decoder. The FBG is used to encode and decode the spectral amplitude coding, namely the double weight (DW) code, in Optical Code Division Multiple Access (OCDMA). The performance is characterized through the bit error rate (BER) and bit rate (BR), as well as the received power at various bit rates.

  9. Performance analysis of bi-directional broadband passive optical network using erbium-doped fiber amplifier

    NASA Astrophysics Data System (ADS)

    Almalaq, Yasser; Matin, Mohammad A.

    2014-09-01

    The broadband passive optical network (BPON) has the ability to deliver high-speed data, voice, and video services to home and small business customers. In this work, the performance of a bi-directional BPON is analyzed for both downstream and upstream traffic with the help of an erbium-doped fiber amplifier (EDFA). A key advantage of BPON is reduced cost: because BPON uses a splitter, the cost of maintenance between the provider and the customer side is reasonable. In the proposed research, BPON has been tested using a bit error rate (BER) analyzer, which reports the maximum Q factor, minimum bit error rate, and eye height.

  10. Performance Evaluation of Multi-Channel Wireless Mesh Networks with Embedded Systems

    PubMed Central

    Lam, Jun Huy; Lee, Sang-Gon; Tan, Whye Kit

    2012-01-01

    Many commercial wireless mesh network (WMN) products are available in the marketplace with their own proprietary standards, but interoperability among the different vendors is not possible. Open source communities have their own WMN implementation in accordance with the IEEE 802.11s draft standard, Linux open80211s project and FreeBSD WMN implementation. While some studies have focused on the test bed of WMNs based on the open80211s project, none are based on the FreeBSD. In this paper, we built an embedded system using the FreeBSD WMN implementation that utilizes two channels and evaluated its performance. This implementation allows the legacy system to connect to the WMN independent of the type of platform and distributes the load between the two non-overlapping channels. One channel is used for the backhaul connection and the other one is used to connect to the stations to wireless mesh network. By using the power efficient 802.11 technology, this device can also be used as a gateway for the wireless sensor network (WSN). PMID:22368482

  11. Performance evaluation of multi-channel wireless mesh networks with embedded systems.

    PubMed

    Lam, Jun Huy; Lee, Sang-Gon; Tan, Whye Kit

    2012-01-01

    Many commercial wireless mesh network (WMN) products are available in the marketplace with their own proprietary standards, but interoperability among the different vendors is not possible. Open source communities have their own WMN implementation in accordance with the IEEE 802.11s draft standard, Linux open80211s project and FreeBSD WMN implementation. While some studies have focused on the test bed of WMNs based on the open80211s project, none are based on the FreeBSD. In this paper, we built an embedded system using the FreeBSD WMN implementation that utilizes two channels and evaluated its performance. This implementation allows the legacy system to connect to the WMN independent of the type of platform and distributes the load between the two non-overlapping channels. One channel is used for the backhaul connection and the other one is used to connect to the stations to wireless mesh network. By using the power efficient 802.11 technology, this device can also be used as a gateway for the wireless sensor network (WSN). PMID:22368482

  12. Networking.

    ERIC Educational Resources Information Center

    Duvall, Betty

    Networking is an information giving and receiving system, a support system, and a means whereby women can get ahead in careers--either in new jobs or in current positions. Networking information can create many opportunities: women can talk about how other women handle situations and tasks, and previously established contacts can be used in…

  13. Optimizing the Reliability and Performance of Service Composition Applications with Fault Tolerance in Wireless Sensor Networks

    PubMed Central

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang

    2015-01-01

    The services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The optimization methods for reliability and performance used for traditional software systems are mostly based on the instantiations of software components, which are inapplicable and inefficient in the ever-changing SCAs in WSNs. In this paper, we consider the SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF) we propose a reliability and performance model of SCAs in WSNs, which generalizes a redundancy optimization problem to a multi-state system. Based on this model, an efficient optimization algorithm for reliability and performance of SCAs in WSNs is developed based on a Genetic Algorithm (GA) to find the optimal structure of SCAs with fault-tolerance in WSNs. In order to examine the feasibility of our algorithm, we have evaluated the performance. Furthermore, the interrelationships between the reliability, performance and cost are investigated. In addition, a distinct approach to determine the most suitable parameters in the suggested algorithm is proposed. PMID:26561818
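
    A genetic algorithm of the kind used above searches over redundancy structures, i.e. how many fault-tolerant replicas each service in the composition receives, trading reliability against cost. The toy sketch below assumes independent replica failures and a simple cost budget; the reliability figures, costs and GA parameters are invented, and the UGF-based multi-state model of the paper is not reproduced.

      import random

      SERVICES = [0.90, 0.85, 0.95, 0.88]   # per-replica reliability of each service (hypothetical)
      COST = [2, 3, 1, 2]                   # per-replica cost (hypothetical)
      BUDGET, MAX_REP = 20, 4

      def reliability(replicas):
          """Series system of parallel replicas: prod_j (1 - (1 - r_j)^n_j)."""
          r = 1.0
          for rj, nj in zip(SERVICES, replicas):
              r *= 1 - (1 - rj) ** nj
          return r

      def fitness(replicas):
          cost = sum(c * n for c, n in zip(COST, replicas))
          return reliability(replicas) if cost <= BUDGET else 0.0

      def ga(pop_size=40, gens=100):
          pop = [[random.randint(1, MAX_REP) for _ in SERVICES] for _ in range(pop_size)]
          for _ in range(gens):
              pop.sort(key=fitness, reverse=True)
              survivors = pop[: pop_size // 2]
              children = []
              while len(children) < pop_size - len(survivors):
                  a, b = random.sample(survivors, 2)
                  cut = random.randrange(1, len(SERVICES))
                  child = a[:cut] + b[cut:]                      # one-point crossover
                  if random.random() < 0.2:                      # mutation
                      child[random.randrange(len(child))] = random.randint(1, MAX_REP)
                  children.append(child)
              pop = survivors + children
          best = max(pop, key=fitness)
          return best, reliability(best)

      best, rel = ga()
      print("best replica counts:", best, "reliability ~", round(rel, 4))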

  14. Solder-Graphite Network Composite Sheets as High-Performance Thermal Interface Materials

    NASA Astrophysics Data System (ADS)

    Sharma, Munish; Chung, D. D. L.

    2015-03-01

    Low-cost solder-graphite composite sheets (≥55 vol.% solder), with solder and graphite forming interpenetrating networks to a degree, are excellent thermal interface materials (TIMs). Solders 63Sn-37Pb and 95.5Sn-4Ag-0.5Cu are separately used, with the latter performing better. In composite fabrication, a mixture of micrometer-size solder powder and ozone-treated exfoliated graphite is compressed to form a graphite network, followed by fluxless solder reflow and subsequent hot pressing to form the solder network. The network connectivity (enhanced by ozone treatment) is lower in the through-thickness direction. The electrical conductivity obeys the rule of mixtures (parallel model in-plane and series model through-thickness), with anisotropy 7. Thermal contact conductance ≤26 × 10⁴ W/(m² K) (with 15-μm-roughness copper sandwiching surfaces), through-thickness thermal conductivity ≤52 W/(m K), and in-plane thermal expansion coefficient 1 × 10⁻⁵/°C are obtained. The contact conductance exceeds or is comparable to that of all other TIMs, provided that solder reflow has occurred and the composite thickness is ≤100 μm. Upon decreasing the thickness below 100 μm, the sandwich thermal resistivity decreases abruptly, the composite through-thickness thermal conductivity increases abruptly to values comparable to the calculated values based on the rule of mixtures (parallel model), and the composite-copper interfacial thermal resistivity (rather than the composite resistivity) becomes dominant.

  15. Vectorized presentation-level services for scientific distributed applications

    SciTech Connect

    Stanberry, L.C.; Branstetter, M.L.; Nessett, D.M.

    1993-03-01

    The use of heterogeneous distributed systems is a promising approach to significantly increase the computational performance of scientific applications. However, one key to this strategy is to minimize the percentage of time spent by an application moving data between machines. This percentage is composed of two parts: (1) the time to translate data between the formats used on different machines, and (2) the time to move data over the network that interconnects the machines. Previous work suggests that data format conversion activity, generally known as presentation-level services, is by far the more costly of the two. In this paper we describe how vectorization can be used to improve presentation-level performance in scientific applications by one or two orders of magnitude over the conventional approach. While others have recognized the advantages of vectorized data format conversion, we describe how to automate this process so that an application programmer need not explicitly call vectorization routines. We explore the impact of presentation-level vectorization on software portability, programming efficiency and protocol standards. We compare our performance results with those of two other popular distributed application programming tools and then summarize the lessons we have learned during the course of our research.
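
    The kind of gain described above comes from converting whole arrays at once rather than one element at a time. The sketch below contrasts a scalar loop with a vectorized byte-order conversion in NumPy as a stand-in for presentation-level data format translation; it illustrates the principle only and is not the automated tool described in the paper.

      import struct
      import time

      import numpy as np

      # One million big-endian 64-bit floats, as they might arrive from another host.
      payload = np.arange(1_000_000, dtype=">f8").tobytes()

      # Scalar conversion: unpack one value at a time.
      t0 = time.perf_counter()
      scalar = [struct.unpack(">d", payload[i:i + 8])[0] for i in range(0, len(payload), 8)]
      t_scalar = time.perf_counter() - t0

      # Vectorized conversion: reinterpret the whole buffer and convert byte order in one call.
      t0 = time.perf_counter()
      vectorized = np.frombuffer(payload, dtype=">f8").astype("<f8")
      t_vector = time.perf_counter() - t0

      assert np.allclose(scalar[:10], vectorized[:10])
      print(f"scalar: {t_scalar:.3f} s  vectorized: {t_vector:.4f} s")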

  16. Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement

    PubMed Central

    Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander

    2011-01-01

    This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a new proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
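
    The centroid algorithm referred to above estimates the Bragg wavelength as an intensity-weighted mean of the samples around the peak. A minimal sketch on a synthetic reflection spectrum follows; the grating parameters are invented and the neural-network detector proposed in the paper is not reproduced here.

      import numpy as np

      def centroid_peak(wavelengths, intensities, threshold=0.1):
          """Intensity-weighted centroid of the samples above a relative threshold."""
          mask = intensities >= threshold * intensities.max()
          w, p = wavelengths[mask], intensities[mask]
          return np.sum(w * p) / np.sum(p)

      # Synthetic FBG reflection spectrum: Gaussian peak at 1550.10 nm plus noise (hypothetical).
      wl = np.linspace(1549.5, 1550.5, 501)
      spectrum = np.exp(-((wl - 1550.10) / 0.05) ** 2) + 0.01 * np.random.randn(wl.size)

      print(f"estimated Bragg wavelength: {centroid_peak(wl, spectrum):.4f} nm")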

  17. Utah's Regional/Urban ANSS Seismic Network---Strategies and Tools for Quality Performance

    NASA Astrophysics Data System (ADS)

    Burlacu, R.; Arabasz, W. J.; Pankow, K. L.; Pechmann, J. C.; Drobeck, D. L.; Moeinvaziri, A.; Roberson, P. M.; Rusho, J. A.

    2007-05-01

    The University of Utah's regional/urban seismic network (224 stations recorded: 39 broadband, 87 strong-motion, 98 short-period) has become a model for locally implementing the Advanced National Seismic System (ANSS) because of successes in integrating weak- and strong-motion recording and in developing an effective real-time earthquake information system. Early achievements included implementing ShakeMap, ShakeCast, point-to- multipoint digital telemetry, and an Earthworm Oracle database, as well as in-situ calibration of all broadband and strong-motion stations and submission of all data and metadata into the IRIS DMC. Regarding quality performance, our experience as a medium-size regional network affirms the fundamental importance of basics such as the following: for data acquisition, deliberate attention to high-quality field installations, signal quality, and computer operations; for operational efficiency, a consistent focus on professional project management and human resources; and for customer service, healthy partnerships---including constant interactions with emergency managers, engineers, public policy-makers, and other stakeholders as part of an effective state earthquake program. (Operational cost efficiencies almost invariably involve trade-offs between personnel costs and the quality of hardware and software.) Software tools that we currently rely on for quality performance include those developed by UUSS (e.g., SAC and shell scripts for estimating local magnitudes) and software developed by other organizations such as: USGS (Earthworm), University of Washington (interactive analysis software), ISTI (SeisNetWatch), and IRIS (PDCC, BUD tools). Although there are many pieces, there is little integration. One of the main challenges we face is the availability of a complete and coherent set of tools for automatic and post-processing to assist in achieving the goals/requirements set forth by ANSS. Taking our own network---and ANSS---to the next level

  18. Molecular dynamics on vector computers

    NASA Astrophysics Data System (ADS)

    Sullivan, F.; Mountain, R. D.; Oconnell, J.

    1985-10-01

    An algorithm called the method of lights (MOL) has been developed for the computerized simulation of molecular dynamics. The MOL, implemented on the CYBER 205 computer, is based on sorting and reformulating the manner in which neighbor lists are compiled, and it uses data structures compatible with specialized vector statements that perform parallel computations. The MOL is found to reduce running time over standard methods in scalar form, and vectorization is shown to produce an order-of-magnitude reduction in execution time.
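
    The neighbor-list idea behind approaches such as the method of lights, organizing particles so that candidate neighbors are found without an all-pairs scan, can be illustrated with a simple cell-list sketch. The code below is a generic Python illustration with hypothetical particle counts and cutoff, not the sorting-based CAL implementation for the CYBER 205 described above.

      from collections import defaultdict

      import numpy as np

      def neighbor_pairs(positions, box, cutoff):
          """Pairs of particles closer than `cutoff`, found via a uniform cell grid."""
          ncell = max(1, int(box // cutoff))
          cell_size = box / ncell
          cells = defaultdict(list)
          for i, p in enumerate(positions):
              cells[tuple((p // cell_size).astype(int) % ncell)].append(i)
          pairs = []
          for (cx, cy, cz), members in cells.items():
              for dx in (-1, 0, 1):
                  for dy in (-1, 0, 1):
                      for dz in (-1, 0, 1):
                          other = cells.get(((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell), [])
                          for i in members:
                              for j in other:
                                  if i < j:
                                      d = positions[i] - positions[j]
                                      d -= box * np.round(d / box)      # minimum image convention
                                      if np.dot(d, d) < cutoff ** 2:
                                          pairs.append((i, j))
          return pairs

      rng = np.random.default_rng(1)
      pos = rng.uniform(0, 10.0, size=(200, 3))      # 200 particles in a 10x10x10 box (hypothetical)
      print("neighbor pairs within cutoff 1.5:", len(neighbor_pairs(pos, 10.0, 1.5)))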

  19. Performance evaluation of a Wireless Body Area sensor network for remote patient monitoring.

    PubMed

    Khan, Jamil Y; Yuce, Mehmet R; Karami, Farbood

    2008-01-01

    In recent years, interest in the application of Wireless Body Area Networks (WBAN) has grown considerably. A WBAN can be used to develop a patient monitoring system which offers flexibility and mobility to patients. Use of a WBAN also allows the flexibility of setting up a remote monitoring system via either the internet or an intranet. For such medical systems it is very important that a WBAN can collect and transmit data reliably, and in a timely manner, to the monitoring entity. In this paper we examine the performance of an IEEE 802.15.4/ZigBee MAC based WBAN operating in different patient monitoring environments. We study the performance of a remote patient monitoring system using an OPNET based simulation model. PMID:19162897

  20. Topology Design and Performance Evaluation of Wireless Sensor Network Based on MIMO Channel Capacity

    NASA Astrophysics Data System (ADS)

    Leng, Ky; Sakaguchi, Kei; Araki, Kiyomichi

    The Wireless Sensor Network (WSN) uses autonomous sensor nodes to monitor a field. These sensor nodes sometimes act as relay nodes for each other. In this paper, the performance of a WSN using fixed relay nodes and Multiple-Input Multiple-Output (MIMO) technology necessary for future wireless communication is evaluated in terms of the channel capacity of the MIMO system and the number of sensor nodes served by the system. Accordingly, we propose an optimum topology for the WSN backbone, named Connected Relay Node Double Cover (CRNDC), which can recover from a single fault; algorithms (exhaustive search and two other approximation methods) to find the optimum distance of the relay nodes from the sink node; and the heights at which the sink and relay nodes should be placed, using a pathloss model. The performance of different MIMO-WSN configurations over a conventional WSN is evaluated, and the direct relationship between relay position and minimum required channel capacity is established.
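
    The MIMO channel capacity used above as the performance metric is commonly computed, for equal power allocation, as C = log2 det(I + (SNR/Nt) · H·Hᴴ). A small sketch over random Rayleigh channels follows; the antenna counts and SNR are illustrative, not the configurations evaluated in the paper.

      import numpy as np

      def mimo_capacity(H, snr_linear):
          """Capacity in bit/s/Hz for one channel realization with equal power over Nt antennas."""
          nr, nt = H.shape
          A = np.eye(nr) + (snr_linear / nt) * (H @ H.conj().T)
          return np.real(np.log2(np.linalg.det(A)))

      rng = np.random.default_rng(0)
      nt, nr, snr_db, trials = 2, 2, 10.0, 1000          # hypothetical configuration
      snr = 10 ** (snr_db / 10)
      caps = []
      for _ in range(trials):
          # i.i.d. Rayleigh fading channel, unit average power per entry
          H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
          caps.append(mimo_capacity(H, snr))
      print(f"mean 2x2 capacity at {snr_db} dB SNR: {np.mean(caps):.2f} bit/s/Hz")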