Science.gov

Sample records for performance vector network

  1. Vector Network Analysis

    1997-10-20

    Vector network analyzers are a convenient way to measure scattering parameters of a variety of microwave devices. However, these instruments, unlike oscilloscopes for example, require a relatively high degree of user knowledge and expertise. Because of the complexity of the instrument and of the calibration process, there are many ways in which an incorrect measurement may be produced. The Microwave Project, which is part of Sandia National Laboratories' Primary Standards Laboratory, routinely uses check standards to verify that the network analyzer is operating properly. In the past, these measurements were recorded manually and, sometimes, interpretation of the results was problematic. To aid our measurement assurance process, a software program was developed to automatically measure a check standard and compare the new measurements with a historical database of measurements of the same device. The program acquires new measurement data from selected check standards, plots the new data against the mean and standard deviation of prior data for the same check standard, and updates the database files for the check standard. The program is entirely menu-driven, requiring little additional work by the user.
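
    As a sketch of the comparison step described above, the following Python example flags points of a new check-standard measurement that fall outside the mean plus or minus k standard deviations of the historical data. The array shapes, the k = 3 limit, and the use of |S11| traces are illustrative assumptions, not details of the Sandia program.

```python
import numpy as np

def out_of_limits(history, new_meas, k=3.0):
    """Flag frequencies where a new check-standard trace leaves the
    historical mean +/- k*std band.

    history:  (n_runs, n_freqs) array of past |S11| values (assumed format)
    new_meas: (n_freqs,) array for the new measurement
    """
    mean = history.mean(axis=0)
    std = history.std(axis=0, ddof=1)
    return np.abs(new_meas - mean) > k * std

# Example: 10 historical runs of 201 frequency points, one injected fault
rng = np.random.default_rng(0)
history = 0.5 + 0.001 * rng.standard_normal((10, 201))
new = history.mean(axis=0).copy()
new[100] += 0.05                 # simulated measurement fault
flags = out_of_limits(history, new)
print(int(flags.sum()))          # 1
```

    A real measurement-assurance program would also append the accepted trace to the history files, as the abstract describes.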

  2. Vector Encoding in Biochemical Networks

    NASA Astrophysics Data System (ADS)

    Potter, Garrett; Sun, Bo

    Encoding of environmental cues via biochemical signaling pathways is of vital importance in the transmission of information for cells in a network. The current literature assumes a single cell state is used to encode information; however, recent research suggests the optimal strategy utilizes a vector of cell states sampled at various time points. To elucidate the optimal sampling strategy for vector encoding, we take an information-theoretic approach and determine the mutual information of the calcium signaling dynamics obtained from fibroblast cells perturbed with different concentrations of ATP. Specifically, we analyze the sampling strategies under the cases of fixed and non-fixed vector dimension, as well as the efficiency of these strategies. Our results show that sampling with greater frequency is optimal in the case of non-fixed vector dimension but that, in general, a lower sampling frequency is best from both a fixed-vector-dimension and an efficiency standpoint. Further, we find that a simple modified Ornstein-Uhlenbeck process qualitatively captures many of our experimental results, suggesting that sampling in biochemical networks is based on a few basic components.
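
    The Ornstein-Uhlenbeck model mentioned above can be sketched in a few lines of Python; all parameter values here are illustrative placeholders, not the values fitted in the study.

```python
import numpy as np

def simulate_ou(mu, theta=1.0, sigma=0.3, dt=0.01, n_steps=5000, seed=0):
    """Euler-Maruyama simulation of dX = theta*(mu - X) dt + sigma dW.
    mu could stand in for a stimulus level (e.g. an ATP concentration)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = mu
    for t in range(1, n_steps):
        dw = rng.standard_normal() * np.sqrt(dt)
        x[t] = x[t - 1] + theta * (mu - x[t - 1]) * dt + sigma * dw
    return x

# Vector encoding: sample the same trace at two different frequencies
trace = simulate_ou(mu=2.0)
sparse_vec = trace[::1000]    # 5-component state vector
dense_vec = trace[::100]      # 50-component state vector
print(len(sparse_vec), len(dense_vec))   # 5 50
```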

  3. Applying knowledge engineering and representation methods to improve support vector machine and multivariate probabilistic neural network CAD performance

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Anderson, Frances; Smith, Tom; Fahlbusch, Stephen; Choma, Robert; Wong, Lut

    2005-04-01

    Achieving consistent and correct database cases is crucial to the correct evaluation of any computer-assisted diagnostic (CAD) paradigm. This paper describes the application of artificial intelligence (AI), knowledge engineering (KE) and knowledge representation (KR) to a data set of ~2500 cases from six separate hospitals, with the objective of removing or reducing inconsistent outlier data. Several support vector machine (SVM) kernels were used to measure diagnostic performance of the original and a "cleaned" data set. Specifically, KE and KR principles were applied to the two data sets, which were re-examined with respect to the environment and agents. One data set was found to contain 25 non-characterizable sets; the other contained 180. CAD system performance was measured with both the original and "cleaned" data sets using two SVM kernels as well as a multivariate probabilistic neural network (PNN). Results demonstrated: (i) a 10% average improvement in overall Az and (ii) approximately a 50% average improvement in partial Az.

  4. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as frequency-sensitive competitive learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom very large scale integration application-specific integrated circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in greater robustness to channel bit errors than methods that use variable-length codes.
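
    A minimal sketch of frequency-sensitive competitive learning, assuming the common formulation in which the winner-selection distance is scaled by each codeword's win count (the paper's exact codebook design may differ):

```python
import numpy as np

def fscl_codebook(data, n_codes=4, lr=0.05, epochs=20, seed=0):
    """Design a VQ codebook with frequency-sensitive competitive learning.
    The winner minimises count_i * ||x - c_i||, so codewords that win often
    are penalised and under-used codewords stay in play."""
    rng = np.random.default_rng(seed)
    codes = data[rng.choice(len(data), n_codes, replace=False)].astype(float)
    counts = np.ones(n_codes)
    for _ in range(epochs):
        for x in data:
            w = int(np.argmin(counts * np.linalg.norm(codes - x, axis=1)))
            codes[w] += lr * (x - codes[w])     # move winner toward input
            counts[w] += 1                      # record the win
    return codes, counts

rng = np.random.default_rng(1)
# Two well-separated clusters of 2-D training vectors (toy "image blocks")
data = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
codes, counts = fscl_codebook(data)
print(codes.round(1))
```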

  5. Vectorized algorithms for spiking neural network simulation.

    PubMed

    Brette, Romain; Goodman, Dan F M

    2011-06-01

    High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages. PMID:21395437
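
    The vector-based style described above can be illustrated with a toy leaky integrate-and-fire update in NumPy; the equations and constants are generic textbook choices, not Brian's actual internals.

```python
import numpy as np

n = 1000                              # number of neurons
dt, tau = 0.1, 10.0                   # time step and membrane time constant
v_thresh, v_reset, i_ext = 1.0, 0.0, 1.2
rng = np.random.default_rng(0)
v = rng.uniform(0.0, 1.0, n)          # membrane potentials, one per neuron
spike_count = 0
for _ in range(1000):
    v += dt * (i_ext - v) / tau       # one vectorized Euler step for all n
    spiked = v >= v_thresh            # boolean spike vector (no Python loop)
    spike_count += int(spiked.sum())
    v[spiked] = v_reset               # vectorized reset of spiking neurons
print(spike_count > 0)                # True: constant drive is suprathreshold
```

    The per-step cost of the interpreter is paid once per time step rather than once per neuron, which is the essence of the vectorization argument.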

  6. Communication networks, soap films and vectors

    NASA Astrophysics Data System (ADS)

    Clark, R. C.

    1981-01-01

    The problem of constructing the least-cost network of connections between arbitrarily placed points is one that is common and can be very important financially. The network may consist of motorways between towns, a grid of electric power lines, buried gas or oil pipelines, or telephone cables. Soap films trapped between parallel planes with vertical pins between them provide a 'shortest path' network, and Isenberg (1975) has suggested that soap films of this sort be used to model communication networks. However, soap films are unable to simulate the different costs of laying, say, a three-lane motorway instead of a two-lane one, or of using a larger pipeline to take the flow from two smaller ones. Soap films nevertheless have considerable intrinsic interest. In the article the emphasis is on the use of soap films and communication networks as a practical means of illustrating the importance of vector and matrix methods in geometry. The power of vector methods is illustrated by the fact that, given any soap film network, the total length of the film can be written down by inspection if the vector positions of the pins are known. It is also possible to predict the boundaries at which 'catastrophes' occur and to decide which network has the least total length. In the field of communication networks, a method is given of designing the minimum-cost network linking, say, a number of oil wells that produce at different rates to an outlet terminal.

  7. Optimal Network Alignment with Graphlet Degree Vectors

    PubMed Central

    Milenković, Tijana; Ng, Weng Leong; Hayes, Wayne; Pržulj, Nataša

    2010-01-01

    Important biological information is encoded in the topology of biological networks. Comparative analyses of biological networks are proving to be valuable, as they can lead to transfer of knowledge between species and give deeper insights into biological function, disease, and evolution. We introduce a new method that uses the Hungarian algorithm to produce optimal global alignment between two networks using any cost function. We design a cost function based solely on network topology and use it in our network alignment. Our method can be applied to any two networks, not just biological ones, since it is based only on network topology. We use our new method to align protein-protein interaction networks of two eukaryotic species and demonstrate that our alignment exposes large and topologically complex regions of network similarity. At the same time, our alignment is biologically valid, since many of the aligned protein pairs perform the same biological function. From the alignment, we predict function of yet unannotated proteins, many of which we validate in the literature. Also, we apply our method to find topological similarities between metabolic networks of different species and build phylogenetic trees based on our network alignment score. The phylogenetic trees obtained in this way bear a striking resemblance to the ones obtained by sequence alignments. Our method detects topologically similar regions in large networks that are statistically significant. It does this independent of protein sequence or any other information external to network topology. PMID:20628593
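
    The assignment step can be sketched as follows; for clarity this toy uses plain per-node signatures and brute-force search over permutations, whereas the paper uses graphlet degree vectors and the Hungarian algorithm, which finds the same optimum in polynomial time.

```python
from itertools import permutations

def align_networks(sig_a, sig_b):
    """Optimal one-to-one node alignment minimising total signature cost.
    sig_a, sig_b: per-node topology signatures (toy stand-ins for
    graphlet degree vectors)."""
    best_map, best_cost = None, float("inf")
    for perm in permutations(range(len(sig_b))):
        cost = sum(abs(sig_a[i] - sig_b[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best_map, best_cost = perm, cost
    return best_map, best_cost

sig_a = [3, 1, 2]        # node signatures of network A
sig_b = [2, 3, 1]        # node signatures of network B
mapping, total = align_networks(sig_a, sig_b)
print(mapping, total)    # (1, 2, 0) 0 -- a perfect topological match
```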

  8. A calibration free vector network analyzer

    NASA Astrophysics Data System (ADS)

    Kothari, Arpit

    Recently, two novel single-port, phase-shifter based vector network analyzer (VNA) systems were developed and tested at X-band (8.2--12.4 GHz) and Ka-band (26.4--40 GHz), respectively. These systems operate by electronically moving the standing wave pattern, set up in a waveguide, over a Schottky detector and sampling the standing wave voltage for several phase shift values. Once this system is fully characterized, all parameters in the system become known and hence, theoretically, no other correction (or calibration) should be required to obtain the reflection coefficient, Gamma, of an unknown load. This makes this type of VNA "calibration free", which is a significant advantage over other types of VNAs. To this end, a VNA system based on this design methodology was developed at X-band using several design improvements (compared to the previous designs) with the aim of demonstrating this "calibration-free" feature. It was found that when a commercial VNA (HP8510C) is used as the source and the detector, the system works as expected. However, when a simple detector is used (Schottky diode, log detector, etc.), obtaining correct Gamma still requires the customary three-load calibration. With the aim of exploring the cause, a detailed sensitivity analysis of prominent error sources was performed. Extensive measurements were done with different detection techniques, including the use of a spectrum analyzer as a power detector. The system was even tested for electromagnetic compatibility (EMC), which may have contributed to this issue. Although the desired results could not be obtained using the proposed standing-wave-power measuring devices, such as the Schottky diode, the principle of the "calibration-free" VNA was shown to be valid.

  9. Improving DNA vaccine performance through vector design.

    PubMed

    Williams, James A

    2014-01-01

    DNA vaccines are a rapidly deployed next generation vaccination platform for treatment of human and animal disease. DNA delivery devices, such as electroporation and needle free jet injectors, are used to increase gene transfer. This results in higher antigen expression which correlates with improved humoral and cellular immunity in humans and animals. This review highlights recent vector and transgene design innovations that improve DNA vaccine performance. These new vectors improve antigen expression, increase plasmid manufacturing yield and quality in bioreactors, and eliminate antibiotic selection and other potential safety issues. A flowchart for designing synthetic antigen transgenes, combining antigen targeting, codon-optimization and bioinformatics, is presented. Application of improved vectors, of antibiotic free plasmid production, and cost effective manufacturing technologies will be critical to ensure safety, efficacy, and economically viable manufacturing of DNA vaccines currently under development for infectious disease, cancer, autoimmunity, immunotolerance and allergy indications.

  10. New perspectives in tracing vector-borne interaction networks.

    PubMed

    Gómez-Díaz, Elena; Figuerola, Jordi

    2010-10-01

    Disentangling trophic interaction networks in vector-borne systems has important implications for epidemiological and evolutionary studies. Molecular methods based on bloodmeal typing in vectors have been increasingly used to identify hosts. Although most molecular approaches benefit from good specificity and sensitivity, their temporal resolution is limited by the often rapid digestion of blood, and mixed bloodmeals still remain a challenge for bloodmeal identification in multi-host vector systems. Stable isotope analyses represent a novel complementary tool that can overcome some of these problems. The utility of these methods is discussed using examples from different vector-borne systems, and the extents to which they are complementary and versatile are highlighted. There are excellent opportunities for progress in the study of vector-borne transmission networks resulting from the integration of both molecular and stable isotope approaches.

  11. NASF transposition network: A computing network for unscrambling p-ordered vectors

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1979-01-01

    The design, programming, and application viewpoints of the transposition network (TN) are presented. The TN is a programmable combinational logic network that connects 521 memory modules to 512 processors. The unscrambling of p-ordered vectors to 1-ordered vectors in one cycle is described. The TN design is based upon the concept of cyclic groups from abstract algebra and primitive roots and indices from number theory. The programming of the TN is very simple, requiring only 20 bits: 10 bits for offset control and 10 bits for barrel switch shift control. This simple control is executed by the control unit (CU), not the processors. Any memory access by a processor must be coordinated with the CU and wait for all other processors to come to a synchronization point. These wait and synchronization events can degrade the performance of a computation. The TN is applicable to multidimensional data manipulation, matrix processing, and data sorting, and can also perform a perfect shuffle. Unlike other more complicated and powerful permutation networks, however, the TN cannot unscramble non-p-ordered vectors in one cycle.
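
    The number-theoretic idea behind conflict-free access can be sketched directly: because 521 is prime, fetching a vector with any stride p (1 <= p < 521) touches 512 distinct memory modules. The address formula below is a simplification for illustration, not the TN's actual control logic.

```python
M, N = 521, 512          # memory modules (prime) and processors

def modules_touched(p, base=0):
    """Module index for each of the N elements of a p-ordered vector."""
    return [(base + i * p) % M for i in range(N)]

for p in (1, 2, 7, 100):
    distinct = len(set(modules_touched(p)))
    print(p, distinct)   # always 512: no two accesses collide
```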

  12. High performance satellite networks

    NASA Astrophysics Data System (ADS)

    Helm, Neil R.; Edelson, Burton I.

    1997-06-01

    The high performance satellite communications networks of the future will have to be interoperable with terrestrial fiber cables. These satellite networks will evolve from narrowband analogue formats to broadband digital transmission schemes, with protocols, algorithms and transmission architectures that will segment the data into uniform cells and frames, and then transmit these data via larger and more efficient synchronous optical network (SONET) and asynchronous transfer mode (ATM) networks that are being developed for the information "superhighway". These high performance satellite communications and information networks are required for modern applications such as electronic commerce, digital libraries, medical imaging, distance learning, and the distribution of science data. In order for satellites to participate in these information superhighway networks, it is essential that they demonstrate their ability to: (1) operate seamlessly with heterogeneous architectures and applications, (2) carry data at SONET rates with the same quality of service as optical fibers, (3) qualify transmission delay as a parameter, not a problem, and (4) show that satellites have several performance and economic advantages over fiber cable networks.

  13. Incremental Support Vector Machine Framework for Visual Sensor Networks

    NASA Astrophysics Data System (ADS)

    Awad, Mariette; Jiang, Xianhua; Motai, Yuichi

    2006-12-01

    Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of the least square SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor nodes' inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single-camera sensing, especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system, which makes it even more attractive for distributed sensor network communication.

  14. Four-quadrant optical matrix-vector multiplication machine as a neural-network processor

    NASA Astrophysics Data System (ADS)

    Abramson, S.; Saad, D.; Marom, E.; Konforti, N.

    1993-03-01

    An optoelectronic four-quadrant matrix-vector multiplier that can be used for feed-forward neural-network recall and learning is proposed and demonstrated. The system, based on a high-resolution monitor and a pair of liquid-crystal television (LCTV) displays, can perform, with the coordination of a computer, fast neural network learning procedures. Current limitations on the computational abilities of the proposed machine are the resolution and the time response of the LCTV.
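
    The four-quadrant scheme can be sketched numerically: an intensity-based optical multiplier represents only non-negative values, so a signed product is assembled from four non-negative partial products. This is the standard decomposition; the optical system's exact partitioning may differ.

```python
import numpy as np

def four_quadrant_matvec(M, v):
    """Signed matrix-vector product from non-negative parts:
    M = Mp - Mm, v = vp - vm, so
    M @ v = (Mp @ vp + Mm @ vm) - (Mp @ vm + Mm @ vp)."""
    Mp, Mm = np.maximum(M, 0), np.maximum(-M, 0)
    vp, vm = np.maximum(v, 0), np.maximum(-v, 0)
    return (Mp @ vp + Mm @ vm) - (Mp @ vm + Mm @ vp)

M = np.array([[1.0, -2.0], [-3.0, 4.0]])
v = np.array([-1.0, 2.0])
print(four_quadrant_matvec(M, v))   # [-5. 11.], identical to M @ v
```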

  15. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a network supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  16. High Performance Network Monitoring

    SciTech Connect

    Martinez, Jesse E

    2012-08-10

    Network monitoring requires substantial data and error analysis to overcome issues with clusters. Zenoss and Splunk help to monitor system log messages that report issues about the clusters to monitoring services. The InfiniBand infrastructure on a number of clusters was upgraded to ibmon2, which requires different filters to report errors to system administrators. The focus for this summer was to: (1) implement ibmon2 filters on monitoring boxes to report system errors to system administrators using Zenoss and Splunk; (2) modify and improve scripts for monitoring and administrative usage; (3) learn more about networks, including services and maintenance for high performance computing systems; and (4) gain life experience working with professionals in real-world situations. Filters were created to account for clusters running ibmon2 v1.0.0-1; ten filters are currently implemented for ibmon2 using Python. The filters watch thresholds on port counters: over certain counts, they report errors to on-call system administrators and modify the grid to show the local host with the issue.
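
    A port-counter threshold filter of the kind described might look roughly like the following; the counter names and limits here are hypothetical examples, not the actual ibmon2 configuration.

```python
# Hypothetical thresholds; real ibmon2 filters use their own counters/limits.
THRESHOLDS = {"symbol_errors": 10, "link_downed": 1}

def filter_counters(host, counters):
    """Return alert strings for any counter at or above its threshold."""
    alerts = []
    for name, value in counters.items():
        limit = THRESHOLDS.get(name)
        if limit is not None and value >= limit:
            alerts.append(f"{host}: {name}={value} (limit {limit})")
    return alerts

alerts = filter_counters("node042", {"symbol_errors": 37, "link_downed": 0})
print(alerts)   # ['node042: symbol_errors=37 (limit 10)']
```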

  17. Improving neural network performance on SIMD architectures

    NASA Astrophysics Data System (ADS)

    Limonova, Elena; Ilin, Dmitry; Nikolaev, Dmitry

    2015-12-01

    Neural network calculations for image recognition problems can be very time consuming. In this paper we propose three methods of increasing neural network performance on SIMD architectures. The usage of SIMD extensions is a way to speed up neural network processing available on a number of modern CPUs. In our experiments, we use ARM NEON as the example SIMD architecture. The first method uses the half-precision float data type for matrix computations. The second method uses a fixed-point data type for the same purpose. The third method considers a vectorized implementation of the activation functions. For each method we set up a series of experiments on convolutional and fully connected networks designed for an image recognition task.
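
    The fixed-point method can be sketched in NumPy; real SIMD code would use NEON intrinsics, and the Q7.8 format below is an illustrative choice, not necessarily the paper's.

```python
import numpy as np

FRAC_BITS = 8                 # Q7.8-style fixed point (illustrative)
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Scale floats to 32-bit integers with FRAC_BITS fractional bits."""
    return np.round(x * SCALE).astype(np.int32)

def fixed_matvec(W_fx, x_fx):
    # each product carries 2*FRAC_BITS fractional bits; shift back once
    return (W_fx @ x_fx) >> FRAC_BITS

W = np.array([[0.5, -0.25], [1.0, 0.75]])   # toy layer weights
x = np.array([1.0, 2.0])                    # toy input activations
approx = fixed_matvec(to_fixed(W), to_fixed(x)) / SCALE
print(approx, W @ x)    # fixed-point result closely tracks the float result
```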

  18. Internal performance characteristics of thrust-vectored axisymmetric ejector nozzles

    NASA Technical Reports Server (NTRS)

    Lamb, Milton

    1995-01-01

    A series of thrust-vectored axisymmetric ejector nozzles were designed and experimentally tested for internal performance and pumping characteristics at the Langley Research Center. This study indicated that discontinuities in performance occurred at low primary nozzle pressure ratios and that these discontinuities were mitigated by decreasing the expansion area ratio. The addition of secondary flow increased the performance of the nozzles. The mid-to-high range of secondary flow provided the most overall improvement, and the greatest improvements were seen for the largest ejector area ratio. Thrust vectoring the ejector nozzles caused a reduction in performance and discharge coefficient. With or without secondary flow, the vectored ejector nozzles produced thrust vector angles that were equivalent to or greater than the geometric turning angle. With or without secondary flow, spacing ratio (ejector passage symmetry) had little effect on performance (gross thrust ratio), discharge coefficient, or thrust vector angle. For the unvectored ejectors, a small amount of secondary flow was sufficient to reduce the pressure levels on the shroud to provide cooling, but for the vectored ejector nozzles, a larger amount of secondary air was required to reduce the pressure levels to provide cooling.

  19. Biologically relevant neural network architectures for support vector machines.

    PubMed

    Jändel, Magnus

    2014-01-01

    Neural network architectures that implement support vector machines (SVM) are investigated for the purpose of modeling perceptual one-shot learning in biological organisms. A family of SVM algorithms including variants of maximum margin, 1-norm, 2-norm and ν-SVM is considered. SVM training rules adapted for neural computation are derived. It is found that competitive queuing memory (CQM) is ideal for storing and retrieving support vectors. Several different CQM-based neural architectures are examined for each SVM algorithm. Although most of the sixty-four scanned architectures are unconvincing for biological modeling, four feasible candidates are found. The seemingly complex learning rule of a full ν-SVM implementation finds a particularly simple and natural realization in bisymmetric architectures. Since CQM-like neural structures are thought to encode skilled action sequences and bisymmetry is ubiquitous in motor systems, it is speculated that trainable pattern recognition in low-level perception has evolved as an internalized motor programme.

  20. Design and Performance of Tree-Structured Vector Quantizers.

    ERIC Educational Resources Information Center

    Lin, Jianhua; Storer, James A.

    1994-01-01

    Describes the design of optimal tree-structured vector quantizers that minimize the expected distortion subject to cost functions related to storage cost, encoding rate, or quantization time. Since the optimal design problem is intractable in most cases, the performance of a general design heuristic based on successive partitioning is analyzed.…

  1. Optical vector network analyzer based on amplitude-phase modulation

    NASA Astrophysics Data System (ADS)

    Morozov, Oleg G.; Morozov, Gennady A.; Nureev, Ilnur I.; Kasimova, Dilyara I.; Zastela, Mikhail Y.; Gavrilov, Pavel V.; Makarov, Igor A.; Purtov, Vadim A.

    2016-03-01

    The article describes the principles of optical vector network analyzer (OVNA) design for fiber Bragg grating (FBG) characterization based on amplitude-phase modulation of the optical carrier, which allows improved measurement accuracy for the amplitude and phase parameters of the elements under test. Unlike existing OVNAs based on single-sideband or unbalanced double-sideband amplitude modulation, the ratio of the two side components of the probing radiation is used for analysis of the amplitude and phase parameters of the tested elements, and the optical carrier is either suppressed or used as a local oscillator. The suggested OVNA is designed for research on narrow band-stop elements (π-phase-shift FBGs) and wide band-pass elements (linearly chirped FBGs).

  2. A high-performance FFT algorithm for vector supercomputers

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1988-01-01

    Many traditional algorithms for computing the fast Fourier transform (FFT) on conventional computers are unacceptable for advanced vector and parallel computers because they involve nonunit, power-of-two memory strides. A practical technique for computing the FFT that avoids all such strides and appears to be near-optimal for a variety of current vector and parallel computers is presented. Performance results of a program based on this technique are given. Notable among these results is that a Fortran implementation of this algorithm on the CRAY-2 runs up to 77 percent faster than Cray's assembly-coded library routine.
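
    The stride-avoiding idea can be illustrated with a radix-2 Stockham autosort FFT, which ping-pongs between two arrays so every pass reads and writes contiguously and no bit-reversal permutation is needed. This NumPy sketch shows the access pattern only; it is not Bailey's algorithm or his Fortran implementation.

```python
import numpy as np

def stockham_fft(x):
    """Radix-2 Stockham autosort FFT (length must be a power of two)."""
    a = np.asarray(x, dtype=complex).copy()
    b = np.empty_like(a)
    s, ns = 1, a.size                  # s sub-transforms of length ns
    src, dst = a, b
    while ns > 1:
        m = ns // 2
        w = np.exp(-2j * np.pi * np.arange(m) / ns)   # twiddle factors
        top = src.reshape(ns, s)[:m]   # contiguous upper half
        bot = src.reshape(ns, s)[m:]   # contiguous lower half
        out = dst.reshape(m, 2, s)
        out[:, 0, :] = top + bot       # butterflies, all unit stride
        out[:, 1, :] = (top - bot) * w[:, None]
        src, dst = dst, src            # ping-pong buffers
        ns //= 2
        s *= 2
    return src

sig = np.exp(1j * np.arange(16))
print(np.allclose(stockham_fft(sig), np.fft.fft(sig)))   # True
```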

  3. Performance evaluation of the SX-6 vector architecture forscientific computations

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan Carter; Shalf,John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri,Jahed; Van der Wijngaart, Rob

    2005-01-01

    The growing gap between sustained and peak performance for scientific applications is a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to reduce this gap for many computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX-6 vector processor, and compares it against the cache-based IBMPower3 and Power4 superscalar architectures, across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines many low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks. Finally, we evaluate the performance of several scientific computing codes. Overall results demonstrate that the SX-6 achieves high performance on a large fraction of our application suite and often significantly outperforms the cache-based architectures. However, certain classes of applications are not easily amenable to vectorization and would require extensive algorithm and implementation reengineering to utilize the SX-6 effectively.

  4. Vector Symbolic Spiking Neural Network Model of Hippocampal Subarea CA1 Novelty Detection Functionality.

    PubMed

    Agerskov, Claus

    2016-04-01

    A neural network model is presented of novelty detection in the CA1 subdomain of the hippocampal formation from the perspective of information flow. This computational model is restricted on several levels by both anatomical information about hippocampal circuitry and behavioral data from studies done in rats. Several studies report that the CA1 area broadcasts a generalized novelty signal in response to changes in the environment. Using the neural engineering framework developed by Eliasmith et al., a spiking neural network architecture is created that is able to compare high-dimensional vectors, symbolizing semantic information, according to the semantic pointer hypothesis. This model then computes the similarity between the vectors, as both direct inputs and a recalled memory from a long-term memory network by performing the dot-product operation in a novelty neural network architecture. The developed CA1 model agrees with available neuroanatomical data, as well as the presented behavioral data, and so it is a biologically realistic model of novelty detection in the hippocampus, which can provide a feasible explanation for experimentally observed dynamics. PMID:26890351
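
    The dot-product comparison at the heart of the model can be sketched as follows; the vector dimension and the similarity normalisation are illustrative choices, not the model's tuned parameters.

```python
import numpy as np

def novelty(input_vec, memory_vec):
    """Novelty as 1 minus the normalised dot product (cosine similarity)
    between an input semantic vector and a recalled memory vector."""
    sim = np.dot(input_vec, memory_vec) / (
        np.linalg.norm(input_vec) * np.linalg.norm(memory_vec))
    return 1.0 - sim

rng = np.random.default_rng(0)
memory = rng.standard_normal(512)                   # stored semantic pointer
familiar = memory + 0.1 * rng.standard_normal(512)  # slightly perturbed input
novel = rng.standard_normal(512)                    # unrelated input
print(novelty(familiar, memory) < novelty(novel, memory))   # True
```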

  6. Biasing vector network analyzers using variable frequency and amplitude signals.

    PubMed

    Nobles, J E; Zagorodnii, V; Hutchison, A; Celinski, Z

    2016-08-01

    We report the development of a test setup designed to provide a variable frequency biasing signal to a vector network analyzer (VNA). The test setup is currently used for the testing of liquid crystal (LC) based devices in the microwave region. The use of an AC bias for LC based devices minimizes the negative effects associated with ionic impurities in the media encountered with DC biasing. The test setup utilizes bias tees on the VNA test station to inject the bias signal. The square wave biasing signal is variable from 0.5 to 36.0 V peak-to-peak (VPP) with a frequency range of DC to 10 kHz. The test setup protects the VNA from transient processes, voltage spikes, and high-frequency leakage. Additionally, the signals to the VNA are fused to ½ amp and clipped to a maximum of 36 VPP based on bias tee limitations. This setup allows us to measure S-parameters as a function of both the voltage and the frequency of the applied bias signal. PMID:27587141

  7. Biasing vector network analyzers using variable frequency and amplitude signals

    NASA Astrophysics Data System (ADS)

    Nobles, J. E.; Zagorodnii, V.; Hutchison, A.; Celinski, Z.

    2016-08-01

    We report the development of a test setup designed to provide a variable frequency biasing signal to a vector network analyzer (VNA). The test setup is currently used for the testing of liquid crystal (LC) based devices in the microwave region. The use of an AC bias for LC based devices minimizes the negative effects associated with ionic impurities in the media encountered with DC biasing. The test setup utilizes bias tees on the VNA test station to inject the bias signal. The square wave biasing signal is variable from 0.5 to 36.0 V peak-to-peak (VPP) with a frequency range of DC to 10 kHz. The test setup protects the VNA from transient processes, voltage spikes, and high-frequency leakage. Additionally, the signals to the VNA are fused to ½ amp and clipped to a maximum of 36 VPP based on bias tee limitations. This setup allows us to measure S-parameters as a function of both the voltage and the frequency of the applied bias signal.

  8. Performance of Ultra-Scale Applications on Leading Vector andScalar HPC Platforms

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Simon, Horst; Ethier, Stephane; Parks, David; Kitawaki, Shigemune; Tsuda, Yoshinori; Sato, Tetsuya

    2005-01-01

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end capability and capacity computers, primarily because of their generality, scalability, and cost effectiveness. However, the constant degradation of superscalar sustained performance has become a well-known problem in the scientific computing community. This trend has been widely attributed to the use of superscalar-based commodity components whose architectural designs offer a balance between memory performance, network capability, and execution rate that is poorly matched to the requirements of large-scale numerical computations. The recent development of massively parallel vector systems offers the potential to bridge the performance gap for many important classes of algorithms. In this study we examine four diverse scientific applications with the potential to run at ultrascale, from the areas of plasma physics, material science, astrophysics, and magnetic fusion. We compare performance between the vector-based Earth Simulator (ES) and Cray X1 and leading superscalar-based platforms: the IBM Power3/4 and the SGI Altix. Results demonstrate that the ES vector systems achieve excellent performance on our application suite - the highest of any architecture tested to date.

  9. Monthly evaporation forecasting using artificial neural networks and support vector machines

    NASA Astrophysics Data System (ADS)

    Tezel, Gulay; Buyukyildiz, Meral

    2016-04-01

    Evaporation is one of the most important components of the hydrological cycle, but it is relatively difficult to estimate, due to its complexity, as it can be influenced by numerous factors. Estimation of evaporation is important for the design of reservoirs, especially in arid and semi-arid areas. Artificial neural network methods and support vector machines (SVM) are frequently utilized to estimate evaporation and other hydrological variables. In this study, the usability of artificial neural networks (ANNs) (multilayer perceptron (MLP) and radial basis function network (RBFN)) and ɛ-support vector regression (ɛ-SVR) artificial intelligence methods was investigated for estimating monthly pan evaporation. To this end, temperature, relative humidity, wind speed, and precipitation data for the period 1972 to 2005 from the Beysehir meteorology station were used as input variables, while pan evaporation values were used as output. The Romanenko and Meyer methods were also considered for comparison. The results were compared with observed class A pan evaporation data. In the MLP method, four different training algorithms were used: gradient descent with momentum and adaptive learning rule backpropagation (GDX), Levenberg-Marquardt (LVM), scaled conjugate gradient (SCG), and resilient backpropagation (RBP). The models were designed via 10-fold cross-validation (CV); algorithm performance was assessed via mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2). According to the performance criteria, the ANN algorithms and ɛ-SVR gave similar results, and both were found to perform better than the Romanenko and Meyer methods. Consequently, the best performance on the test data was obtained using SCG(4,2,2,1) with R2 = 0.905.
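    The performance criteria named above (MAE, RMSE, and R2) are standard and easy to reproduce. A minimal sketch with made-up observed and predicted pan-evaporation values, not data from the study:

```python
import math

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    """Root mean square error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r2(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

observed = [3.0, 4.5, 6.0, 7.5]   # illustrative monthly values
predicted = [2.5, 5.0, 6.0, 7.0]
```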

  10. Maximizing sparse matrix vector product performance in MIMD computers

    SciTech Connect

    McLay, R.T.; Kohli, H.S.; Swift, S.L.; Carey, G.F.

    1994-12-31

    A considerable component of the computational effort involved in conjugate gradient solution of structured sparse matrix systems is expended during the matrix-vector product (MVP), and hence it is the focus of most efforts at improving performance. Such efforts are hindered on MIMD machines by constraints on memory, cache, and the speed of memory-CPU data transfer. This paper describes a strategy for maximizing the performance of the local computations associated with the MVP. The method focuses on single-stride memory access and the efficient use of cache by pre-loading it with data that is re-used while bypassing it for other data. The algorithm is designed to behave optimally for varying grid sizes and numbers of unknowns per gridpoint. Results from an assembly language implementation of the strategy on the iPSC/860 show a significant improvement over the performance obtained with FORTRAN.
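    As one illustration of the single-stride access pattern such work targets, the MVP for a matrix in compressed sparse row (CSR) form streams the nonzero values and column indices with unit stride. A plain-Python sketch of the arithmetic only; the paper's assembly-level cache-preloading strategy is not reproduced here:

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x for a sparse matrix A stored in CSR form.

    The inner loop walks `values` and `col_idx` with unit stride,
    the cache-friendly streaming access pattern discussed above;
    only the gathers from `x` are irregular.
    """
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# 2x2 example: A = [[4, 0], [0, 3]] stored with two nonzeros.
y = csr_matvec([4.0, 3.0], [0, 1], [0, 1, 2], [1.0, 2.0])  # -> [4.0, 6.0]
```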

  11. Internal performance characteristics of vectored axisymmetric ejector nozzles

    NASA Technical Reports Server (NTRS)

    Lamb, Milton

    1993-01-01

    A series of vectoring axisymmetric ejector nozzles were designed and experimentally tested for internal performance and pumping characteristics at NASA-Langley Research Center. These ejector nozzles used convergent-divergent nozzles as the primary nozzles. The model geometric variables investigated were primary nozzle throat area, primary nozzle expansion ratio, effective ejector expansion ratio (ratio of shroud exit area to primary nozzle throat area), ratio of minimum ejector area to primary nozzle throat area, ratio of ejector upper slot height to lower slot height (measured on the vertical centerline), and thrust vector angle. The primary nozzle pressure ratio was varied from 2.0 to 10.0 depending upon primary nozzle throat area. The corrected ejector-to-primary nozzle weight-flow ratio was varied from 0 (no secondary flow) to approximately 0.21 (21 percent of primary weight-flow rate) depending on ejector nozzle configuration. In addition to the internal performance and pumping characteristics, static pressures were obtained on the shroud walls.

  12. Locally connected neural network with improved feature vector

    NASA Technical Reports Server (NTRS)

    Thomas, Tyson (Inventor)

    2004-01-01

    A pattern recognizer which uses neuromorphs with a fixed amount of energy that is distributed among the elements. The distribution of the energy is used to form a histogram which is used as a feature vector.

  13. Four-quadrant optical matrix-vector multiplication machine as a neural-network processor.

    PubMed

    Abramson, S; Saad, D; Marom, E; Konforti, N

    1993-03-10

    Optical processors for neural networks are primarily fast matrix-vector multiplication machines that can potentially compete with serial computers owing to their parallelism and their ability to facilitate densely connected networks. However, in most proposed systems the multiplication supports only two quadrants and is thus unable to provide bipolar neuron outputs for increasing network capabilities and learning rate. We propose and demonstrate an opto-electronic four-quadrant matrix-vector multiplier that can be used for feed-forward neural-network recall and learning. Experimental results obtained with common commercial components demonstrate a novel, useful, and reliable approach for four-quadrant matrix-vector multiplication in general and for feed-forward neural-network training and recall in particular. PMID:20820267

  14. Four-quadrant optical matrix vector multiplication machine as a neural network processor

    NASA Astrophysics Data System (ADS)

    Abramson, Shai; Saad, D.; Marom, Emanuel; Konforti, Naim

    1993-08-01

    Optical processors for neural networks are primarily fast matrix-vector multiplication machines that can potentially compete with serial computers owing to their parallelism and their ability to facilitate densely connected networks. However, in most proposed systems the multiplication supports only two quadrants and is thus unable to provide bipolar neuron outputs for increasing network capabilities and learning rate. We propose and demonstrate an opto-electronic four-quadrant matrix-vector multiplier that can be used for feed-forward neural-network recall and learning. Experimental results obtained with common commercial components demonstrate a novel, useful, and reliable approach for four-quadrant matrix-vector multiplication in general and for feed-forward neural-network training and recall in particular.
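    A standard way to obtain four-quadrant operation from an intensity-based (nonnegative-only) multiplier is to split both the matrix and the vector into nonnegative parts and combine four nonnegative products. This decomposition is sketched below for illustration; it is a common scheme, not a description of the authors' specific opto-electronic implementation:

```python
def nonneg_matvec(m, v):
    """Matrix-vector product over nonnegative entries only, the
    operation an intensity-based optical multiplier can realize."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def four_quadrant_matvec(m, v):
    """Bipolar M @ v from four nonnegative products:
    with M = M+ - M- and v = v+ - v-,
        M v = (M+ v+ + M- v-) - (M+ v- + M- v+).
    """
    m_pos = [[max(x, 0.0) for x in row] for row in m]
    m_neg = [[max(-x, 0.0) for x in row] for row in m]
    v_pos = [max(x, 0.0) for x in v]
    v_neg = [max(-x, 0.0) for x in v]
    pp = nonneg_matvec(m_pos, v_pos)
    nn = nonneg_matvec(m_neg, v_neg)
    pn = nonneg_matvec(m_pos, v_neg)
    np_ = nonneg_matvec(m_neg, v_pos)
    return [a + b - c - d for a, b, c, d in zip(pp, nn, pn, np_)]

# Bipolar example: [[1, -2]] @ [-3, 4] = -3 - 8 = -11.
result = four_quadrant_matvec([[1.0, -2.0]], [-3.0, 4.0])  # -> [-11.0]
```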

  15. Double Virus Vector Infection to the Prefrontal Network of the Macaque Brain

    PubMed Central

    Tanaka, Shingo; Koizumi, Masashi; Kikusui, Takefumi; Ichihara, Nobutsune; Kato, Shigeki; Kobayashi, Kazuto; Sakagami, Masamichi

    2015-01-01

    To precisely understand how higher cognitive functions are implemented in the prefrontal network of the brain, optogenetic and pharmacogenetic methods to manipulate the signal transmission of a specific neural pathway are required. The application of these methods, however, has been mostly restricted to animals other than the primate, which is the best animal model to investigate higher cognitive functions. In this study, we used a double viral vector infection method in the prefrontal network of the macaque brain. This enabled us to express specific constructs into specific neurons that constitute a target pathway without use of germline genetic manipulation. The double-infection technique utilizes two different virus vectors in two monosynaptically connected areas. One is a vector which can locally infect cell bodies of projection neurons (local vector) and the other can retrogradely infect from axon terminals of the same projection neurons (retrograde vector). The retrograde vector incorporates the sequence which encodes Cre recombinase and the local vector incorporates the “Cre-On” FLEX double-floxed sequence in which a reporter protein (mCherry) was encoded. mCherry thus came to be expressed only in doubly infected projection neurons with these vectors. We applied this method to two macaque monkeys and targeted two different pathways in the prefrontal network: The pathway from the lateral prefrontal cortex to the caudate nucleus and the pathway from the lateral prefrontal cortex to the frontal eye field. As a result, mCherry-positive cells were observed in the lateral prefrontal cortex in all of the four injected hemispheres, indicating that the double virus vector transfection is workable in the prefrontal network of the macaque brain. PMID:26193102

  16. Intercomparison of Terahertz Dielectric Measurements Using Vector Network Analyzer and Time-Domain Spectrometer

    NASA Astrophysics Data System (ADS)

    Naftaly, Mira; Shoaib, Nosherwan; Stokes, Daniel; Ridler, Nick M.

    2016-07-01

    We describe a method for direct intercomparison of terahertz permittivities at 200 GHz obtained by a Vector Network Analyzer and a Time-Domain Spectrometer, whereby both instruments operate in their customary configurations, i.e., the VNA in waveguide and TDS in free-space. The method employs material that can be inserted into a waveguide for VNA measurements or contained in a cell for TDS measurements. The intercomparison experiments were performed using two materials: petroleum jelly and a mixture of petroleum jelly with carbon powder. The obtained values of complex permittivities were similar within the measurement uncertainty. An intercomparison between VNA and TDS measurements is of importance because the two modalities are customarily employed separately and require different approaches. Since material permittivities can and have been measured using either platform, it is necessary to ascertain that the obtained data is similar in both cases.

  17. Optical matrix-vector implementation of the content-addressable network

    NASA Astrophysics Data System (ADS)

    Brodsky, Stephen A.; Marsden, Gary C.; Guest, Clark C.

    1993-03-01

    The content-addressable network (CAN) is an efficient, intrinsically discrete training algorithm for binary-valued classification networks. The binary nature of the CAN network permits accelerated learning and significantly reduced hardware-implementation requirements. A multilayer optoelectronic CAN network employing matrix-vector multiplication was constructed. The network learned and correctly classified trained patterns, gaining a measure of fault tolerance by learning associative solutions to optical hardware imperfections. Operation of this system is possible owing to the reduced hardware accuracy requirements of the CAN learning algorithm.

  18. Optical matrix-vector implementation of the content-addressable network.

    PubMed

    Brodsky, S A; Marsden, G C; Guest, C C

    1993-03-10

    The content-addressable network (CAN) is an efficient, intrinsically discrete training algorithm for binary-valued classification networks. The binary nature of the CAN network permits accelerated learning and significantly reduced hardware-implementation requirements. A multilayer optoelectronic CAN network employing matrix-vector multiplication was constructed. The network learned and correctly classified trained patterns, gaining a measure of fault tolerance by learning associative solutions to optical hardware imperfections. Operation of this system is possible owing to the reduced hardware accuracy requirements of the CAN learning algorithm. PMID:20820268

  19. Diagnosing Anomalous Network Performance with Confidence

    SciTech Connect

    Settlemyer, Bradley W; Hodson, Stephen W; Kuehn, Jeffery A; Poole, Stephen W

    2011-04-01

    Variability in network performance is a major obstacle in effectively analyzing the throughput of modern high performance computer systems. High performance interconnection networks offer excellent best-case network latencies; however, highly parallel applications running on parallel machines typically require consistently high levels of performance to adequately leverage the massive amounts of available computing power. Performance analysts have usually quantified network performance using traditional summary statistics that assume the observational data is sampled from a normal distribution. In our examinations of network performance, we have found this method of analysis often provides too little data to understand anomalous network performance. Our tool, Confidence, instead uses an empirically derived probability distribution to characterize network performance. In this paper we describe several instances where the Confidence toolkit allowed us to understand and diagnose network performance anomalies that we could not adequately explore with the simple summary statistics provided by traditional measurement tools. In particular, we examine a multi-modal performance scenario encountered with an InfiniBand interconnection network and we explore the performance repeatability of the custom Cray SeaStar2 interconnection network after a set of software and driver updates.
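    The contrast the paper draws, normal-theory summary statistics versus an empirically derived distribution, is easy to demonstrate. In the sketch below (illustrative data and percentile choices, not the Confidence tool's actual method), the mean of a bimodal latency sample describes almost no real observation, while empirical percentiles expose both modes:

```python
import statistics

def summary_stats(samples):
    """Traditional normal-theory summary: mean and standard deviation."""
    return statistics.mean(samples), statistics.pstdev(samples)

def empirical_percentiles(samples, qs=(5, 25, 50, 75, 95)):
    """Characterize performance by its empirical distribution instead.
    Simple nearest-rank percentiles; the choice of cut points here is
    illustrative."""
    xs = sorted(samples)
    n = len(xs)
    return {q: xs[min(n - 1, int(q / 100 * n))] for q in qs}

# A bimodal latency population, e.g. traffic split over two paths:
# the mean (about 55) matches no observation, while the 25th and 75th
# percentiles sit on the two modes near 10 and 100.
latencies = [10, 11, 10, 12, 100, 99, 101, 98]
mean, std = summary_stats(latencies)
pcts = empirical_percentiles(latencies)
```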

  20. Automatic target recognition using vector quantization and neural networks

    NASA Astrophysics Data System (ADS)

    Chan, Lipchen A.; Nasrabadi, Nasser M.

    1999-12-01

    We propose an automatic target recognition (ATR) algorithm that uses a set of dedicated vector quantizers (VQs) and multilayer perceptrons (MLPs). For each target class at a specific range of aspects, the background pixels of an input image are first removed. The extracted target area is then subdivided into several subimages. A dedicated VQ codebook is constructed for each of the resulting subimages. Using the K-means algorithm, each VQ codebook learns a set of patterns representing the local features of a particular target for a specific range of aspects. The resulting codebooks are further trained by a modified learning vector quantization algorithm, which enhances the discriminatory power of the codebooks. Each final codebook is expected to give the lowest mean squared error (MSE) for its correct target class and range of aspects. These MSEs are then input to an array of window-level MLPs (WMLPs), where each WMLP is specialized in recognizing its intended target class for a specific range of aspects. The outputs of these WMLPs are manipulated and passed to a target-level MLP, which produces the final recognition results. We trained and tested the proposed ATR algorithm on large and realistic data sets and obtained impressive results using the wavelet-based adaptive product VQ configuration.
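    The decision rule described above, assigning the class whose dedicated codebook reconstructs the input with the lowest MSE, can be sketched as follows. The toy codebooks stand in for codebooks that would come from K-means/LVQ training:

```python
def mse_to_codebook(vec, codebook):
    """Lowest mean squared error between `vec` and any codeword."""
    return min(
        sum((a - b) ** 2 for a, b in zip(vec, code)) / len(vec)
        for code in codebook
    )

def classify(vec, codebooks):
    """Assign the class whose dedicated codebook gives the lowest MSE.
    (In the full algorithm these per-codebook MSEs would instead feed
    window-level MLPs; this sketch keeps only the VQ decision rule.)"""
    scores = {label: mse_to_codebook(vec, cb) for label, cb in codebooks.items()}
    return min(scores, key=scores.get)

# Hand-made 2-D toy codebooks; real codewords are local image features.
codebooks = {
    "tank":  [[1.0, 1.0], [0.9, 1.1]],
    "truck": [[0.0, 0.0], [0.1, -0.1]],
}
label = classify([0.95, 1.05], codebooks)  # -> "tank"
```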

  1. Data Access Performance Through Parallelization and Vectored Access: Some Results

    SciTech Connect

    Furano, Fabrizio; Hanushevsky, Andrew; /SLAC

    2011-11-10

    High Energy Physics data processing and analysis applications typically deal with the problem of accessing and processing data at high speed. Recent studies, development, and test work have shown that the latencies due to data access can often be hidden by parallelizing them with the data processing, thus enabling applications to process remote data with a high level of efficiency. Techniques and algorithms able to achieve this result have been implemented in the client side of the Scalla/xrootd system, and in this contribution we describe the results of some tests done in order to compare their performance and characteristics. These techniques, if used together with multiple-stream data access, can also be effective in allowing applications to deal efficiently and transparently with data repositories accessible via a Wide Area Network.

  2. SP2I interconnection network and extension of the iteration method of automatic vector-routing

    SciTech Connect

    Wang Rong-quan; Zhang Xiang; Gao Qing-shi

    1982-01-01

    In this paper the SP2I (single-stage plus 2^i) interconnection network, which is applicable to the CVCVHP with VCM (cellular vector computer of vertical-horizontal processing with virtual common memory) and other multiprocessor systems, is discussed. Starting from the need for dynamic and parallel data alignment, the authors investigate various properties of conflict-free routing and describe the iteration method of automatic vector-routing, which may be used to solve the conflict problem in the SP2I network. Furthermore, they extend the iteration method to the ADM, omega, delta, indirect binary n-cube, and baseline networks, among others. The problem of routing conflict in these networks, which has not been well solved so far, may thereby be solved efficiently. Finally, implementation methods for several common data manipulation functions without conflict are given. 11 references.

  3. Hybrid Neural Network and Support Vector Machine Method for Optimization

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2007-01-01

    System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.

  4. HYBRID NEURAL NETWORK AND SUPPORT VECTOR MACHINE METHOD FOR OPTIMIZATION

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2005-01-01

    System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.

  5. Design of thrust vectoring exhaust nozzles for real-time applications using neural networks

    NASA Technical Reports Server (NTRS)

    Prasanth, Ravi K.; Markin, Robert E.; Whitaker, Kevin W.

    1991-01-01

    Thrust vectoring continues to be an important issue in military aircraft system designs. A recently developed concept of vectoring aircraft thrust makes use of flexible exhaust nozzles. Subtle modifications in the nozzle wall contours produce a non-uniform flow field containing a complex pattern of shock and expansion waves. The end result, due to the asymmetric velocity and pressure distributions, is vectored thrust. Specification of the nozzle contours required for a desired thrust vector angle (an inverse design problem) has been achieved with genetic algorithms. This approach is computationally intensive and prevents the nozzles from being designed in real time, which is necessary for an operational aircraft system. An investigation was conducted into using genetic algorithms to train a neural network in an attempt to obtain two-dimensional nozzle contours in real time. Results show that genetic-algorithm-trained neural networks provide a viable, real-time alternative for designing thrust-vectoring nozzle contours. Thrust vector angles up to 20 deg were obtained within an average error of 0.0914 deg. The error surfaces encountered were highly degenerate, and thus the robustness of genetic algorithms was well suited for minimizing global errors.

  6. Radio to microwave dielectric characterisation of constitutive electromagnetic soil properties using vector network analyses

    NASA Astrophysics Data System (ADS)

    Schwing, M.; Wagner, N.; Karlovsek, J.; Chen, Z.; Williams, D. J.; Scheuermann, A.

    2016-04-01

    Knowledge of the constitutive broadband electromagnetic (EM) properties of porous media such as soils and rocks is essential in the theoretical and numerical modeling of EM wave propagation in the subsurface. This paper presents an experimental and numerical study of the performance of EM measuring instruments for broadband EM waves in the radio-to-microwave frequency range. 3-D numerical calculations of a specific sensor were carried out using Ansys HFSS (high frequency structural simulator) to further evaluate the probe performance. In addition, six different sensors of varying design, application purpose, and operational frequency range were tested on different calibration liquids and a sample of fine-grained soil over a frequency range of 1 MHz-40 GHz using four vector network analysers. The resulting dielectric spectrum of the soil was analysed and interpreted using a 3-term Cole-Cole model under consideration of a direct-current conductivity contribution. Comparison of sensor performances on calibration materials and fine-grained soils showed consistency in the measured dielectric spectra over the frequency range from 100 MHz-2 GHz. By combining open-ended coaxial line and coaxial transmission line measurements, the observable frequency window could be extended to a truly broad range of 1 MHz-40 GHz.
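    A Cole-Cole model with a direct-current conductivity term, of the kind used for the interpretation above, can be evaluated directly with complex arithmetic. A sketch with illustrative parameter values, not the fitted soil parameters from the study:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def cole_cole(freq_hz, eps_inf, terms, sigma_dc):
    """Complex relative permittivity of an N-term Cole-Cole model with
    a DC-conductivity contribution:

        eps*(w) = eps_inf + sum_k d_eps_k / (1 + (j w tau_k)**(1 - a_k))
                  - j * sigma_dc / (w * EPS0)

    `terms` is a list of (d_eps, tau, alpha) tuples.
    """
    w = 2 * math.pi * freq_hz
    eps = complex(eps_inf, 0.0)
    for d_eps, tau, alpha in terms:
        eps += d_eps / (1 + (1j * w * tau) ** (1 - alpha))
    eps -= 1j * sigma_dc / (w * EPS0)
    return eps

# Single-term example at 1 GHz with a water-like relaxation.
eps = cole_cole(1e9, 4.9, [(75.0, 8.3e-12, 0.02)], sigma_dc=0.01)
```

By sign convention here, losses appear as a negative imaginary part; fitting routines often report the loss factor as -Im(eps*).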

  7. Classification of mammographic masses using support vector machines and Bayesian networks

    NASA Astrophysics Data System (ADS)

    Samulski, Maurice; Karssemeijer, Nico; Lucas, Peter; Groot, Perry

    2007-03-01

    In this paper, we compare two state-of-the-art classification techniques characterizing masses as either benign or malignant, using a dataset consisting of 271 cases (131 benign and 140 malignant), each containing both an MLO and a CC view. For suspect regions in a digitized mammogram, 12 out of 81 calculated image features were selected for investigating the classification accuracy of support vector machines (SVMs) and Bayesian networks (BNs). Additional techniques for improving their performance were included in the comparison: the Manly transformation for achieving a normal distribution of image features and principal component analysis (PCA) for reducing our high-dimensional data. The performance of the classifiers was evaluated with Receiver Operating Characteristic (ROC) analysis. The classifiers were trained and tested using a k-fold cross-validation test method (k=10). It was found that the area under the ROC curve (Az) of the BN increased significantly (p=0.0002) using the Manly transformation, from Az = 0.767 to Az = 0.795. The Manly transformation did not result in a significant change for SVMs. Also, the difference between SVMs and BNs using the transformed dataset was not statistically significant (p=0.78). Applying PCA resulted in an improvement in the classification accuracy of the naive Bayesian classifier, from Az = 0.767 to Az = 0.786. The difference in classification performance between BNs and SVMs after applying PCA was small and not statistically significant (p=0.11).
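    The Az statistic used throughout this comparison is the area under the ROC curve, which can be computed nonparametrically via its Mann-Whitney interpretation. A minimal sketch with toy classifier scores, not values from the study:

```python
def auc(scores_neg, scores_pos):
    """Area under the ROC curve (the Az statistic) via its Mann-Whitney
    interpretation: the probability that a randomly chosen positive
    (malignant) case scores higher than a randomly chosen negative
    (benign) case, counting ties as one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy outputs: benign cases mostly score low, malignant mostly high.
az = auc([0.1, 0.2, 0.4], [0.3, 0.8, 0.9])  # -> 8/9
```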

  8. Effects of internal yaw-vectoring devices on the static performance of a pitch-vectoring nonaxisymmetric convergent-divergent nozzle

    NASA Technical Reports Server (NTRS)

    Asbury, Scott C.

    1993-01-01

    An investigation was conducted in the static test facility of the Langley 16-Foot Transonic Tunnel to evaluate the internal performance of a nonaxisymmetric convergent-divergent nozzle designed to have simultaneous pitch and yaw thrust vectoring capability. This concept utilized divergent flap deflection for thrust vectoring in the pitch plane and flow-turning deflectors installed within the divergent flaps for yaw thrust vectoring. Modifications consisting of reducing the sidewall length and deflecting the sidewall outboard were investigated as means to increase yaw-vectoring performance. This investigation studied the effects of multiaxis (pitch and yaw) thrust vectoring on nozzle internal performance characteristics. All tests were conducted with no external flow, and nozzle pressure ratio was varied from 2.0 to approximately 13.0. The results indicate that this nozzle concept can successfully generate multiaxis thrust vectoring. Deflection of the divergent flaps produced resultant pitch vector angles that, although dependent on nozzle pressure ratio, were nearly equal to the geometric pitch vector angle. Losses in resultant thrust due to pitch vectoring were small or negligible. The yaw deflectors produced resultant yaw vector angles up to 21 degrees that were controllable by varying yaw deflector rotation. However, yaw deflector rotation resulted in significant losses in thrust ratios and, in some cases, nozzle discharge coefficient. Either of the sidewall modifications generally reduced these losses and increased maximum resultant yaw vector angle. During multiaxis (simultaneous pitch and yaw) thrust vectoring, little or no cross coupling between the thrust vectoring processes was observed.

  9. Impact of descriptor vector scaling on the classification of drugs and nondrugs with artificial neural networks.

    PubMed

    Givehchi, Alireza; Schneider, Gisbert

    2004-06-01

    The influence of preprocessing of molecular descriptor vectors for solving classification tasks was analyzed for drug/nondrug classification by artificial neural networks. Molecular properties were used to form descriptor vectors. Two types of neural networks were used: supervised multilayer neural nets trained with the back-propagation algorithm, and unsupervised self-organizing maps (Kohonen maps). Data were preprocessed by logistic scaling and histogram equalization. For both types of neural networks, the preprocessing step significantly improved classification compared to nonstandardized data. Classification accuracy was measured as prediction mean square error and the Matthews correlation coefficient in the case of supervised learning, and as quantization error in the case of unsupervised learning. The results demonstrate that appropriate data preprocessing is an essential step in solving classification tasks.

  10. Target localization in wireless sensor networks using online semi-supervised support vector regression.

    PubMed

    Yoo, Jaehyun; Kim, H Jin

    2015-01-01

    Machine learning has been successfully used for target localization in wireless sensor networks (WSNs) due to its accurate and robust estimation against highly nonlinear and noisy sensor measurements. For efficient and adaptive learning, this paper introduces online semi-supervised support vector regression (OSS-SVR). The first advantage of the proposed algorithm is that, based on a semi-supervised learning framework, it can reduce the required amount of labeled training data while maintaining accurate estimation. Second, with an extension to online learning, the proposed OSS-SVR automatically tracks changes in the system to be learned, such as varied noise characteristics. We compare the proposed algorithm with semi-supervised manifold learning, an online Gaussian process, and online semi-supervised colocalization. The algorithms are evaluated for estimating the unknown location of a mobile robot in a WSN. The experimental results show that the proposed algorithm is more accurate with smaller amounts of labeled training data and is robust to varying noise. Moreover, the suggested algorithm is computationally fast while maintaining the best localization performance in comparison with the other methods. PMID:26024420

  11. The interplay of vaccination and vector control on small dengue networks.

    PubMed

    Hendron, Ross-William S; Bonsall, Michael B

    2016-10-21

    Dengue fever is a major public health issue affecting billions of people in over 100 countries across the globe. This challenge is growing as the invasive mosquito vectors, Aedes aegypti and Aedes albopictus, expand their distributions and increase their population sizes. Hence there is an increasing need to devise effective control methods that can contain dengue outbreaks. Here we construct an epidemiological model for virus transmission between vectors and hosts on a network of host populations distributed among city and town patches, and investigate disease control through vaccination and vector control using variants of the sterile insect technique (SIT). Analysis of the basic reproductive number and simulations indicate that host movement across this small network influences the severity of epidemics. Both vaccination and vector control strategies are investigated as methods of disease containment, and our results indicate that these controls can be made more effective with mixed strategy solutions. We predict that reduced lethality through poor SIT methods or imperfectly efficacious vaccines will impact efforts to control disease spread. In particular, weakly efficacious vaccination strategies against multiple virus serotype diversity may be counterproductive to disease control efforts. Even so, failings of one method may be mitigated by supplementing it with an alternative control strategy. Generally, our network approach encourages decision making to consider connected populations, to emphasise that successful control methods must effectively suppress dengue epidemics at this landscape scale. PMID:27457093

  12. Epidemic spreading and global stability of an SIS model with an infective vector on complex networks

    NASA Astrophysics Data System (ADS)

    Kang, Huiyan; Fu, Xinchu

    2015-10-01

    In this paper, we present a new SIS model with delay on scale-free networks. The model is suitable for describing epidemics which are not only transmitted by a vector but also spread between individuals by direct contact. In view of the biological relevance and real spreading process, we introduce a delay to denote the average incubation period of the disease in a vector. By mathematical analysis, we obtain the epidemic threshold and prove the global stability of equilibria. Simulations show that the delay affects epidemic spreading. Finally, we investigate and compare two major immunization strategies, uniform immunization and targeted immunization.
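
    The delayed SIS dynamics described above can be illustrated with a toy discrete-time simulation. The hub-and-spoke graph, rates, and delay below are hypothetical stand-ins for the paper's scale-free network and vector-borne transmission, not the authors' model:

```python
import random

def sis_simulation(adj, beta, gamma, delay, steps, seed=0):
    """Discrete-time SIS sketch: an infection acquired at time t only
    becomes transmissible after `delay` steps (a crude incubation delay)."""
    rng = random.Random(seed)
    n = len(adj)
    # infected_since[i] = time step node i was infected, or None if susceptible
    infected_since = [None] * n
    infected_since[0] = -delay  # seed node, already past incubation
    history = []
    for t in range(steps):
        new_state = list(infected_since)
        for i in range(n):
            if infected_since[i] is None:
                # susceptible: each infectious neighbour transmits with prob beta
                for j in adj[i]:
                    src = infected_since[j]
                    if src is not None and t - src >= delay and rng.random() < beta:
                        new_state[i] = t
                        break
            else:
                # infected: recovers back to susceptible with prob gamma
                if rng.random() < gamma:
                    new_state[i] = None
        infected_since = new_state
        history.append(sum(s is not None for s in infected_since) / n)
    return history

# hub-and-spoke graph as a crude stand-in for a scale-free network
n = 50
adj = {0: list(range(1, n))}
for i in range(1, n):
    adj[i] = [0]
frac = sis_simulation(adj, beta=0.3, gamma=0.05, delay=2, steps=100)
```

    Varying `delay` in this sketch shows the qualitative effect the abstract mentions: longer incubation slows the early growth of the infected fraction.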

  13. A Performance Management Initiative for Local Health Department Vector Control Programs

    PubMed Central

    Gerding, Justin; Kirshy, Micaela; Moran, John W.; Bialek, Ron; Lamers, Vanessa; Sarisky, John

    2016-01-01

    Local health department (LHD) vector control programs have experienced reductions in funding and capacity. Acknowledging this situation and its potential effect on the ability to respond to vector-borne diseases, the U.S. Centers for Disease Control and Prevention and the Public Health Foundation partnered on a performance management initiative for LHD vector control programs. The initiative involved 14 programs that conducted a performance assessment using the Environmental Public Health Performance Standards. The programs, assisted by quality improvement (QI) experts, used the assessment results to prioritize improvement areas that were addressed with QI projects intended to increase effectiveness and efficiency in the delivery of services such as responding to mosquito complaints and educating the public about vector-borne disease prevention. This article describes the initiative as a process LHD vector control programs may adapt to meet their performance management needs. This study also reviews aggregate performance assessment results and QI projects, which may reveal common aspects of LHD vector control program performance and priority improvement areas. LHD vector control programs interested in performance assessment and improvement may benefit from engaging in an approach similar to this performance management initiative. PMID:27429555

  14. Artificial astrocytes improve neural network performance.

    PubMed

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analyses of the performance of NNs with different numbers of neurons or different architectures indicate that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  15. Artificial Astrocytes Improve Neural Network Performance

    PubMed Central

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analyses of the performance of NNs with different numbers of neurons or different architectures indicate that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  16. In-flight performance of the Absolute Scalar Magnetometer vector mode on board the Swarm satellites

    NASA Astrophysics Data System (ADS)

    Léger, Jean-Michel; Jager, Thomas; Bertrand, François; Hulot, Gauthier; Brocco, Laura; Vigneron, Pierre; Lalanne, Xavier; Chulliat, Arnaud; Fratter, Isabelle

    2015-04-01

    The role of the Absolute Scalar Magnetometer (ASM) in the European Space Agency (ESA) Swarm mission is to deliver absolute measurements of the magnetic field's strength for science investigations and in-flight calibration of the Vector Field Magnetometer (VFM). However, the ASM instrument can also simultaneously deliver vector measurements with no impact on the magnetometer's scalar performance, using a so-called vector mode. This vector mode has been continuously operated since the beginning of the mission, except for short periods of time during commissioning. Since both scalar and vector measurements are perfectly synchronous and spatially coherent, a direct assessment of the ASM vector performance can then be carried out at instrument level without need to correct for the various magnetic perturbations generated by the satellites. After a brief description of the instrument's operating principles, a thorough analysis of the instrument's behavior is presented, as well as a characterization of its environment in flight, using an alternative high sampling rate (burst) scalar mode that could be run for a few days during commissioning. The ASM vector calibration process is next detailed, with some emphasis on its sensitivity to operational conditions. Finally, the evolution of the instrument's performance during the first year of the mission is presented and discussed in view of the mission's performance requirements for vector measurements.

  17. Static internal performance of a single expansion ramp nozzle with multiaxis thrust vectoring capability

    NASA Technical Reports Server (NTRS)

    Capone, Francis J.; Schirmer, Alberto W.

    1993-01-01

    An investigation was conducted at static conditions in order to determine the internal performance characteristics of a multiaxis thrust vectoring single expansion ramp nozzle. Yaw vectoring was achieved by deflecting yaw flaps in the nozzle sidewall into the nozzle exhaust flow. In order to eliminate any physical interference between the variable angle yaw flap deflected into the exhaust flow and the nozzle upper ramp and lower flap which were deflected for pitch vectoring, the downstream corners of both the nozzle ramp and lower flap were cut off to allow for up to 30 deg of yaw vectoring. The effects of nozzle upper ramp and lower flap cutout, yaw flap hinge line location and hinge inclination angle, sidewall containment, geometric pitch vector angle, and geometric yaw vector angle were studied. This investigation was conducted in the static-test facility of the Langley 16-Foot Transonic Tunnel at nozzle pressure ratios up to 8.0.

  18. Internal performance of two nozzles utilizing gimbal concepts for thrust vectoring

    NASA Technical Reports Server (NTRS)

    Berrier, Bobby L.; Taylor, John G.

    1990-01-01

    The internal performance of an axisymmetric convergent-divergent nozzle and a nonaxisymmetric convergent-divergent nozzle, both of which utilized a gimbal-type mechanism for thrust vectoring, was evaluated in the Static Test Facility of the Langley 16-Foot Transonic Tunnel. The nonaxisymmetric nozzle used the gimbal concept for yaw thrust vectoring only; pitch thrust vectoring was accomplished by simultaneous deflection of the upper and lower divergent flaps. The model geometric parameters investigated were pitch vector angle for the axisymmetric nozzle and pitch vector angle, yaw vector angle, nozzle throat aspect ratio, and nozzle expansion ratio for the nonaxisymmetric nozzle. All tests were conducted with no external flow, and nozzle pressure ratio was varied from 2.0 to approximately 12.0.

  19. High Performance Networks for High Impact Science

    SciTech Connect

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  20. Measurements by a Vector Network Analyzer at 325 to 508 GHz

    NASA Technical Reports Server (NTRS)

    Fung, King Man; Samoska, Lorene; Chattopadhyay, Goutam; Gaier, Todd; Kangaslahti, Pekka; Pukala, David; Lau, Yuenie; Oleson, Charles; Denning, Anthony

    2008-01-01

    Recent experiments were performed in which return loss and insertion loss of waveguide test assemblies in the frequency range from 325 to 508 GHz were measured by use of a swept-frequency two-port vector network analyzer (VNA) test set. The experiments were part of a continuing effort to develop means of characterizing passive and active electronic components and systems operating at ever increasing frequencies. The waveguide test assemblies comprised WR-2.2 end sections collinear with WR-3.3 middle sections. The test set, assembled from commercially available components, included a 50-GHz VNA scattering-parameter test set and external signal synthesizers, augmented with recently developed frequency extenders, and further augmented with attenuators and amplifiers as needed to adjust radio-frequency and intermediate-frequency power levels between the aforementioned components. The tests included line-reflect-line calibration procedures, using WR-2.2 waveguide shims as the "line" standards and waveguide flange short circuits as the "reflect" standards. Calibrated dynamic ranges somewhat greater than 20 dB for return loss and 35 dB for insertion loss were achieved. The measurement data of the test assemblies were found to substantially agree with results of computational simulations.
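
    The return-loss and insertion-loss figures quoted above are conventionally derived from the measured complex S-parameters, as this minimal sketch shows (the S-parameter values here are made up for illustration):

```python
import math
import cmath

def db_mag(s):
    """Magnitude of a complex S-parameter in dB: 20*log10(|s|)."""
    return 20 * math.log10(abs(s))

def return_loss_db(s11):
    # return loss is conventionally quoted as a positive number of dB
    return -db_mag(s11)

def insertion_loss_db(s21):
    return -db_mag(s21)

# hypothetical measured values for a waveguide test assembly
s11 = cmath.rect(0.1, math.radians(45))    # |S11| = 0.1 -> 20 dB return loss
s21 = cmath.rect(0.9, math.radians(-60))   # |S21| = 0.9 -> ~0.92 dB insertion loss
```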

  1. Spatial Variance in Resting fMRI Networks of Schizophrenia Patients: An Independent Vector Analysis.

    PubMed

    Gopal, Shruti; Miller, Robyn L; Michael, Andrew; Adali, Tulay; Cetin, Mustafa; Rachakonda, Srinivas; Bustillo, Juan R; Cahill, Nathan; Baum, Stefi A; Calhoun, Vince D

    2016-01-01

    Spatial variability in resting functional MRI (fMRI) brain networks has not been well studied in schizophrenia, a disease known for both neurodevelopmental and widespread anatomic changes. Motivated by abundant evidence of neuroanatomical variability from previous studies of schizophrenia, we draw upon a relatively new approach called independent vector analysis (IVA) to assess this variability in resting fMRI networks. IVA is a blind-source separation algorithm, which segregates fMRI data into temporally coherent but spatially independent networks and has been shown to be especially good at capturing spatial variability among subjects in the extracted networks. We introduce several new ways to quantify differences in variability of IVA-derived networks between schizophrenia patients (SZs = 82) and healthy controls (HCs = 89). Voxelwise amplitude analyses showed significant group differences in the spatial maps of auditory cortex, the basal ganglia, the sensorimotor network, and visual cortex. Tests for differences (HC-SZ) in the spatial variability maps suggest that, at rest, SZs exhibit more activity within externally focused sensory and integrative networks and less activity in the default mode network, which is thought to be related to internal reflection. Additionally, tests for differences of variance between groups further emphasize that SZs exhibit greater network variability. These results, consistent with our prediction of increased spatial variability within SZs, enhance our understanding of the disease and suggest that it is not just the amplitude of connectivity that is different in schizophrenia, but also the consistency in spatial connectivity patterns across subjects. PMID:26106217

  2. Enhancing neural-network performance via assortativity

    SciTech Connect

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-03-15

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.

  3. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node ''brain'' with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  4. Enhancing neural-network performance via assortativity.

    PubMed

    de Franciscis, Sebastiano; Johnson, Samuel; Torres, Joaquín J

    2011-03-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations--assortativity--on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.
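
    The degree-degree correlations (assortativity) discussed above can be quantified as a Pearson correlation of endpoint degrees over edges. The sketch below uses the total-degree version, a common simplification of the excess-degree definition used in the network literature:

```python
def degree_assortativity(edges):
    """Pearson correlation of endpoint degrees over all edges, counting
    each undirected edge in both directions. Undefined (zero variance)
    for regular graphs, where every node has the same degree."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# a star graph is maximally disassortative: the hub connects only to leaves
star = [(0, i) for i in range(1, 11)]
r = degree_assortativity(star)   # -> -1.0
```

    Positive values indicate assortative (hub-to-hub) wiring, which the abstract links to noise robustness; negative values indicate disassortative wiring, as in the star example.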

  5. Comparing error minimized extreme learning machines and support vector sequential feed-forward neural networks.

    PubMed

    Romero, Enrique; Alquézar, René

    2012-01-01

    Recently, error minimized extreme learning machines (EM-ELMs) have been proposed as a simple and efficient approach to build single-hidden-layer feed-forward networks (SLFNs) sequentially. They add random hidden nodes one by one (or group by group) and update the output weights incrementally to minimize the sum-of-squares error in the training set. Other very similar methods that also construct SLFNs sequentially had been reported earlier with the main difference that their hidden-layer weights are a subset of the data instead of being random. These approaches are referred to as support vector sequential feed-forward neural networks (SV-SFNNs), and they are a particular case of the sequential approximation with optimal coefficients and interacting frequencies (SAOCIF) method. In this paper, it is firstly shown that EM-ELMs can also be cast as a particular case of SAOCIF. In particular, EM-ELMs can easily be extended to test some number of random candidates at each step and select the best of them, as SAOCIF does. Moreover, it is demonstrated that the cost of the computation of the optimal output-layer weights in the originally proposed EM-ELMs can be improved if it is replaced by the one included in SAOCIF. Secondly, we present the results of an experimental study on 10 benchmark classification and 10 benchmark regression data sets, comparing EM-ELMs and SV-SFNNs, that was carried out under the same conditions for the two models. Although both models have the same (efficient) computational cost, a statistically significant improvement in generalization performance of SV-SFNNs vs. EM-ELMs was found in 12 out of the 20 benchmark problems.
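
    The sequential-growth idea behind EM-ELMs can be sketched as follows. Note this toy version refits the output weights from scratch after each added node rather than using the paper's incremental update, and all weights and data are illustrative:

```python
import math
import random

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[c][c] != 0:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def incremental_elm(xs, ys, max_nodes, seed=0):
    """Add random tanh hidden nodes one by one; after each addition,
    refit the output weights by (ridge-regularized) least squares."""
    rng = random.Random(seed)
    hidden = []   # list of (input_weight, bias) pairs, drawn at random
    errors = []
    for _ in range(max_nodes):
        hidden.append((rng.uniform(-2, 2), rng.uniform(-2, 2)))
        H = [[math.tanh(a * x + b) for a, b in hidden] for x in xs]
        m = len(hidden)
        # normal equations H^T H w = H^T y, with a tiny ridge for stability
        HtH = [[sum(H[i][p] * H[i][q] for i in range(len(xs)))
                + (1e-8 if p == q else 0.0) for q in range(m)] for p in range(m)]
        Hty = [sum(H[i][p] * ys[i] for i in range(len(xs))) for p in range(m)]
        w = solve(HtH, Hty)
        pred = [sum(wq * H[i][q] for q, wq in enumerate(w)) for i in range(len(xs))]
        rmse = math.sqrt(sum((p - y) ** 2 for p, y in zip(pred, ys)) / len(xs))
        errors.append(rmse)
    return errors

xs = [i / 20 * 2 * math.pi for i in range(41)]
ys = [math.sin(x) for x in xs]
err = incremental_elm(xs, ys, max_nodes=10)   # training RMSE per node count
```

    Because the hidden-layer bases are nested, the training error is non-increasing as nodes are added, which is the property EM-ELMs exploit.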

  6. Belief network algorithms: A study of performance

    SciTech Connect

    Jitnah, N.

    1996-12-31

    This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.

  7. Evaluation of Raman spectra of human brain tumor tissue using the learning vector quantization neural network

    NASA Astrophysics Data System (ADS)

    Liu, Tuo; Chen, Changshui; Shi, Xingzhe; Liu, Chengyong

    2016-05-01

    The Raman spectra of tissue from 20 brain tumor patients were recorded using a confocal microlaser Raman spectroscope with 785 nm excitation in vitro. A total of 133 spectra were investigated. Spectral peaks from normal white matter tissue and tumor tissue were analyzed. Algorithms such as principal component analysis, linear discriminant analysis, and the support vector machine are commonly used to analyze spectral data. However, in this study, we employed the learning vector quantization (LVQ) neural network, which is typically used for pattern recognition. By applying the proposed method, a normal-tissue diagnosis accuracy of 85.7% and a glioma diagnosis accuracy of 89.5% were achieved. The LVQ neural network is a recent approach to mining Raman spectra information. Moreover, it is fast and convenient, does not require spectral peak assignment, and achieves relatively high accuracy. It can be used in brain tumor prognostics and in helping to optimize the cutting margins of gliomas.
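
    As an illustration of LVQ's prototype-update rule (not the authors' spectral pipeline), here is a minimal LVQ1 sketch on two hypothetical two-feature "spectra" classes, where the features stand in for Raman peak intensities:

```python
import random

def train_lvq1(data, labels, prototypes, proto_labels, lr=0.1, epochs=30, seed=0):
    """LVQ1: pull the nearest prototype toward a sample of the same class,
    push it away otherwise."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            x, y = data[i], labels[i]
            k = min(range(len(prototypes)),
                    key=lambda p: sum((a - b) ** 2 for a, b in zip(prototypes[p], x)))
            sign = 1.0 if proto_labels[k] == y else -1.0
            prototypes[k] = [w + sign * lr * (a - w)
                             for w, a in zip(prototypes[k], x)]
    return prototypes

def classify(x, prototypes, proto_labels):
    """Nearest-prototype classification."""
    k = min(range(len(prototypes)),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(prototypes[p], x)))
    return proto_labels[k]

# two synthetic feature clusters (hypothetical stand-ins for peak intensities)
normal = [[1.0 + 0.1 * i, 0.2] for i in range(5)]
tumor = [[0.2, 1.0 + 0.1 * i] for i in range(5)]
data = normal + tumor
labels = ["normal"] * 5 + ["tumor"] * 5
protos = train_lvq1(data, labels, [[0.8, 0.5], [0.5, 0.8]], ["normal", "tumor"])
```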

  8. Performance analysis of a VSAT network

    NASA Astrophysics Data System (ADS)

    Karam, Fouad G.; Miller, Neville; Karam, Antoine

    With the growing need for efficient satellite networking facilities, the very small aperture terminal (VSAT) technology emerges as the leading edge of satellite communications. Achieving the required performance of a VSAT network is dictated by the multiple access technique utilized. Determining the inbound access method best suited for a particular application involves trade-offs between response time and space segment utilization. In this paper, the slotted Aloha and dedicated stream access techniques are compared. It is shown that network performance is dependent on the traffic offered from remote earth stations as well as the sensitivity of customer's applications to satellite delay.
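
    The response-time versus utilization trade-off for the slotted Aloha inbound access method mentioned above follows from its classical throughput formula S = G·e^(-G), where G is the offered load in frames per slot:

```python
import math

def slotted_aloha_throughput(g):
    """Classical slotted Aloha throughput S = G * exp(-G): a frame succeeds
    only if no other station transmits in the same slot."""
    return g * math.exp(-g)

# throughput peaks at G = 1 with S = 1/e (about 36.8% channel utilization),
# which is why dedicated streams win at high, steady traffic levels
peak = slotted_aloha_throughput(1.0)
```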

  9. Vector Winds from a Single-Transmitter Bistatic Dual-Doppler Radar Network.

    NASA Astrophysics Data System (ADS)

    Wurman, Joshua

    1994-06-01

    A bistatic dual-Doppler weather radar network consisting of only one transmitter and a nontransmitting, nonscanning, low-cost bistatic receiver was deployed in the Boulder, Colorado, area during 1993. The Boulder network took data in a variety of weather situations, including low-reflectivity stratiform snowfall, several convective cells, and a hailstorm. Dual-Doppler vector wind fields were retrieved and compared to those from a traditional, two-transmitter dual-Doppler network. The favorable results from these comparisons indicate that the bistatic dual-Doppler technique is viable and practical. Bistatic multiple-Doppler networks have significant scientific and economic advantages accruing from the use of only single sources of illumination. Individual spatial volumes are viewed simultaneously from multiple look angles, minimizing storm evolution-induced errors. The passive receivers in a bistatic network do not require expensive transmitters, moving antenna hardware, or operators. Thus, they require only a small percentage of the investment needed to field traditional transmitting radars. Bistatic systems can be deployed affordably to provide three-dimensional fields of full-vector winds, including directly measured vertical precipitation particle velocities, for numerous applications in meteorological research, aviation, forecasting, media, and education.

  10. Several small-scale vector array performance analysis and simulation of DOA estimation

    NASA Astrophysics Data System (ADS)

    Mei, Yinzhen

    2011-10-01

    To study the application and estimation performance of traditional direction-of-arrival (DOA) estimation on small-scale vector sensor arrays, we derive the time-delay expressions for four small-scale non-uniform vector sensor arrays, give the array direction vectors, and successfully apply the MUSIC algorithm to non-uniform vector arrays for DOA estimation. We select the better-performing element placement for each array and compare beamforming, the probability of success, and the mean square error. The results show that the line array performs best, followed by the L-array and circular array, while the cross-array performs worst.
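
    A full MUSIC implementation requires an eigendecomposition of the array covariance matrix; as a simpler stand-in, the conventional (Bartlett) beamformer below illustrates the same scan-over-steering-vectors idea for a uniform linear array (array size, angle, and noise level are all hypothetical):

```python
import cmath
import math
import random

def steering(theta_deg, m, spacing=0.5):
    """Steering vector of an m-element uniform linear array,
    element spacing in wavelengths (0.5 = half-wavelength)."""
    s = math.sin(math.radians(theta_deg))
    return [cmath.exp(-2j * math.pi * spacing * k * s) for k in range(m)]

def bartlett_doa(snapshots, m, grid):
    """Scan candidate angles and return the one maximizing the
    conventional beamformer output power a(theta)^H R a(theta)."""
    n = len(snapshots)
    # sample covariance matrix R
    R = [[sum(x[p] * x[q].conjugate() for x in snapshots) / n
          for q in range(m)] for p in range(m)]
    best, best_power = None, -1.0
    for theta in grid:
        a = steering(theta, m)
        power = sum(a[p].conjugate() * R[p][q] * a[q]
                    for p in range(m) for q in range(m)).real
        if power > best_power:
            best, best_power = theta, power
    return best

rng = random.Random(1)
m, true_theta = 8, 20.0
a0 = steering(true_theta, m)
snaps = []
for _ in range(50):
    s = cmath.exp(2j * math.pi * rng.random())   # unit-power source, random phase
    snaps.append([s * a0[k] + 0.05 * complex(rng.gauss(0, 1), rng.gauss(0, 1))
                  for k in range(m)])
est = bartlett_doa(snaps, m, [t / 2 for t in range(-180, 181)])  # -90..90 deg
```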

  11. Diversity Performance Analysis on Multiple HAP Networks.

    PubMed

    Dong, Feihong; Li, Min; Gong, Xiangwu; Li, Hongjun; Gao, Fengyue

    2015-01-01

    One of the main design challenges in wireless sensor networks (WSNs) is achieving a high-data-rate transmission for individual sensor devices. The high altitude platform (HAP) is an important communication relay platform for WSNs and next-generation wireless networks. Multiple-input multiple-output (MIMO) techniques provide the diversity and multiplexing gain, which can improve the network performance effectively. In this paper, a virtual MIMO (V-MIMO) model is proposed by networking multiple HAPs with the concept of multiple assets in view (MAV). In a shadowed Rician fading channel, the diversity performance is investigated. The probability density function (PDF) and cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) are derived. In addition, the average symbol error rate (ASER) with BPSK and QPSK is given for the V-MIMO model. The system capacity is studied for both perfect channel state information (CSI) and unknown CSI individually. The ergodic capacity with various SNR and Rician factors for different network configurations is also analyzed. The simulation results validate the effectiveness of the performance analysis. It is shown that the performance of the HAPs network in WSNs can be significantly improved by utilizing the MAV to achieve overlapping coverage, with the help of the V-MIMO techniques. PMID:26134102
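
    Error-rate analyses like this one typically start from the textbook AWGN baseline Pb = Q(sqrt(2·SNR)) for BPSK before adding fading. This Monte Carlo sketch checks only that baseline; the shadowed Rician fading and the V-MIMO model themselves are not simulated here:

```python
import math
import random

def q_function(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber_awgn(snr_db, n_bits=200_000, seed=0):
    """Monte Carlo BPSK bit error rate over an AWGN channel."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    sigma = math.sqrt(1 / (2 * snr))   # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = rng.choice((-1.0, 1.0))
        rx = bit + rng.gauss(0, sigma)
        errors += (rx >= 0) != (bit > 0)
    return errors / n_bits

ber = bpsk_ber_awgn(6.0)                          # simulated BER at 6 dB SNR
theory = q_function(math.sqrt(2 * 10 ** 0.6))     # Pb = Q(sqrt(2*SNR))
```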

  12. Diversity Performance Analysis on Multiple HAP Networks

    PubMed Central

    Dong, Feihong; Li, Min; Gong, Xiangwu; Li, Hongjun; Gao, Fengyue

    2015-01-01

    One of the main design challenges in wireless sensor networks (WSNs) is achieving a high-data-rate transmission for individual sensor devices. The high altitude platform (HAP) is an important communication relay platform for WSNs and next-generation wireless networks. Multiple-input multiple-output (MIMO) techniques provide the diversity and multiplexing gain, which can improve the network performance effectively. In this paper, a virtual MIMO (V-MIMO) model is proposed by networking multiple HAPs with the concept of multiple assets in view (MAV). In a shadowed Rician fading channel, the diversity performance is investigated. The probability density function (PDF) and cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) are derived. In addition, the average symbol error rate (ASER) with BPSK and QPSK is given for the V-MIMO model. The system capacity is studied for both perfect channel state information (CSI) and unknown CSI individually. The ergodic capacity with various SNR and Rician factors for different network configurations is also analyzed. The simulation results validate the effectiveness of the performance analysis. It is shown that the performance of the HAPs network in WSNs can be significantly improved by utilizing the MAV to achieve overlapping coverage, with the help of the V-MIMO techniques. PMID:26134102

  13. Performance characterization of a broadband vector Apodizing Phase Plate coronagraph.

    PubMed

    Otten, Gilles P P L; Snik, Frans; Kenworthy, Matthew A; Miskiewicz, Matthew N; Escuti, Michael J

    2014-12-01

    One of the main challenges for the direct imaging of planets around nearby stars is the suppression of the diffracted halo from the primary star. Coronagraphs are angular filters that suppress this diffracted halo. The Apodizing Phase Plate coronagraph modifies the pupil-plane phase with an anti-symmetric pattern to suppress diffraction over a 180 degree region from 2 to 7 λ/D and achieves a mean raw contrast of 10(-4) in this area, independent of the tip-tilt stability of the system. Current APP coronagraphs implemented using classical phase techniques are limited in bandwidth and suppression region geometry (i.e. only on one side of the star). In this paper, we introduce the vector-APP (vAPP) whose phase pattern is implemented through the vector phase imposed by the orientation of patterned liquid crystals. Beam-splitting according to circular polarization states produces two complementary PSFs with dark holes on either side. We have developed a prototype vAPP that consists of a stack of three twisting liquid crystal layers to yield a bandwidth of 500 to 900 nm. We characterize the properties of this device using reconstructions of the pupil-plane pattern and of the ensuing PSF structures. By imaging the pupil between crossed and parallel polarizers we reconstruct the fast axis pattern, transmission, and retardance of the vAPP, and use this as input for a PSF model. This model includes aberrations of the laboratory set-up, and matches the measured PSF, which shows a raw contrast of 10(-3.8) between 2 and 7 λ/D in a 135 degree wedge. The vAPP coronagraph is relatively easy to manufacture and can be implemented together with a broadband quarter-wave plate and Wollaston prism in a pupil wheel in high-contrast imaging instruments. The liquid crystal patterning technique permits the application of extreme phase patterns with deeper contrasts inside the dark holes, and the multilayer liquid crystal achromatization technique enables unprecedented spectral bandwidths.

  14. Performance analysis of distributed symmetric sparse matrix vector multiplication algorithm for multi-core architectures

    SciTech Connect

    Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; Maris, Pieter; Vary, James P.

    2015-07-14

    Sparse matrix vector multiply (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. In this paper, our target systems are high-end multi-core architectures, and we use a message passing interface + open multiprocessing (MPI+OpenMP) hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of the distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topologies. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.
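
    The key trick of a symmetric SpMVM kernel, storing only one triangle and applying each off-diagonal entry twice, can be sketched in a few lines (a serial toy version; the paper's distributed MPI+OpenMP machinery is omitted):

```python
def sym_spmv(n, rows, cols, vals, x):
    """y = A x for a symmetric sparse A stored as upper-triangle COO
    (row <= col): each off-diagonal entry contributes twice, once per
    symmetric position, so only half the matrix is stored and streamed."""
    y = [0.0] * n
    for r, c, v in zip(rows, cols, vals):
        y[r] += v * x[c]
        if r != c:
            y[c] += v * x[r]   # mirrored contribution from the lower triangle
    return y

# small symmetric matrix, upper triangle only (illustrative values):
# [[4, 1, 0],
#  [1, 3, 2],
#  [0, 2, 5]]
rows = [0, 0, 1, 1, 2]
cols = [0, 1, 1, 2, 2]
vals = [4.0, 1.0, 3.0, 2.0, 5.0]
y = sym_spmv(3, rows, cols, vals, [1.0, 2.0, 3.0])   # -> [6.0, 13.0, 19.0]
```

    Halving the stored nonzeros roughly halves memory traffic, which matters precisely because SpMVM is memory-bandwidth bound, though in a distributed setting the mirrored updates create the extra communication the paper works to hide.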

  15. Performance analysis of distributed symmetric sparse matrix vector multiplication algorithm for multi-core architectures

    DOE PAGES

    Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; Maris, Pieter; Vary, James P.

    2015-07-14

    Sparse matrix vector multiply (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. In this paper, our target systems are high-end multi-core architectures, and we use a message passing interface + open multiprocessing (MPI+OpenMP) hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of the distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topologies. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.

  16. WDM backbone network with guaranteed performance planning

    NASA Astrophysics Data System (ADS)

    Liang, Peng; Sheng, Wang; Zhong, Xusi; Li, Lemin

    2005-11-01

    Wavelength-division multiplexing (WDM), which allows a single fibre to carry multiple signals simultaneously, has been widely used to increase link capacity and is a promising technology for backbone transport networks. But designing such a WDM backbone network is hard for two reasons: one is the uncertainty of future traffic demand, the other is the difficulty of planning backup resources for failure conditions. As a result, an enormous amount of link capacity has to be provided for the network. Recently, a new approach called the Valiant Load-Balanced Scheme (VLBS) has been proposed to design WDM backbone networks. A network planned by the Valiant Load-Balanced Scheme is insensitive to the traffic matrix and continues to guarantee performance under a user-defined number of link or node failures. In this paper, the Valiant Load-Balanced Scheme (VLBS) for backbone network planning is studied and a new, general Valiant Load-Balanced Scheme (abbreviated GVLBS) is proposed. Compared with the earlier work, the new scheme is much more general and can be used to compute the link capacity of both homogeneous and heterogeneous networks. After a brief description of the VLBS, we give the detailed derivation of the GVLBS. The central idea of the derivation is to transform the heterogeneous network into a homogeneous one and then take advantage of the VLBS to obtain the GVLBS. This transformation process is described, and the derivation and analysis of GVLBS link capacity under normal and failure conditions is also given. The numerical results show that GVLBS can compute the minimum link capacity required for a heterogeneous backbone network under different conditions (normal or failure).
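The homogeneous capacity result that such schemes build on can be sketched in one line (the textbook Valiant load-balancing formula, not the paper's GVLBS derivation): with N nodes each having access rate r, every flow is split over N two-hop paths via random intermediates, so each directed link carries r/N in each of the two routing phases.

```python
def vlb_link_capacity(N, r):
    """Per-link capacity needed in a homogeneous Valiant load-balanced
    full mesh: traffic is spread over N two-hop paths, and each link
    carries r/N in both the spreading and delivery phases."""
    return 2.0 * r / N

# 10-node mesh, 40 Gb/s access rate per node:
print(vlb_link_capacity(10, 40.0))  # → 8.0
```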

  17. TCP performance analysis for wide area networks

    SciTech Connect

    Chen, H.Y.; Hutchins, J.A.; Testi, N.

    1993-08-01

    Even though networks have been getting faster, perceived throughput at the application level has not increased accordingly. In an attempt to identify many of the performance bottlenecks, we collected and analyzed data over a wide area network (WAN) at T3 (45 Mbps) bandwidth. The information gained will assist in designing new protocols and/or algorithms that are consistent with future high-speed requirements.
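One classic WAN bottleneck this kind of analysis surfaces is window-limited throughput; a back-of-envelope sketch (illustrative RTT figure, not a measurement from the study):

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes in flight needed to fill the pipe."""
    return bandwidth_bps * rtt_s / 8

def max_throughput_bps(window_bytes, rtt_s):
    """Window-limited TCP throughput: at most one window per round trip."""
    return window_bytes * 8 / rtt_s

# A T3 (45 Mbps) WAN with a 60 ms RTT needs ~337 kB in flight,
# far beyond the classic 64 kB TCP window:
print(bdp_bytes(45e6, 0.060))            # → 337500.0
print(max_throughput_bps(65535, 0.060))  # ≈ 8.7 Mbps, a fraction of 45 Mbps
```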

  18. Static performance of an axisymmetric nozzle with post-exit vanes for multiaxis thrust vectoring

    NASA Technical Reports Server (NTRS)

    Berrier, Bobby L.; Mason, Mary L.

    1988-01-01

    An investigation was conducted in the static test facility of the Langley 16-Foot Transonic Tunnel to determine the flow-turning capability and the nozzle internal performance of an axisymmetric convergent-divergent nozzle with post-exit vanes installed for multiaxis thrust vectoring. The effects of vane curvature, vane location relative to the nozzle exit, number of vanes, and vane deflection angle were determined. A comparison of the post-exit-vane thrust-vectoring concept with other thrust-vectoring concepts is provided. All tests were conducted with no external flow, and nozzle pressure ratio was varied from 1.6 to 6.0.

  19. Selected Performance Measurements of the F-15 ACTIVE Axisymmetric Thrust-Vectoring Nozzle

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Sims, Robert L.

    1999-01-01

    Flight tests recently completed at the NASA Dryden Flight Research Center evaluated performance of a hydromechanically vectored axisymmetric nozzle onboard the F-15 ACTIVE. A flight-test technique whereby strain gages installed onto engine mounts provided for the direct measurement of thrust and vector forces has proven to be extremely valuable. Flow turning and thrust efficiency, as well as nozzle static pressure distributions were measured and analyzed. This report presents results from testing at an altitude of 30,000 ft and a speed of Mach 0.9. Flow turning and thrust efficiency were found to be significantly different than predicted, and moreover, varied substantially with power setting and pitch vector angle. Results of an in-flight comparison of the direct thrust measurement technique and an engine simulation fell within the expected uncertainty bands. Overall nozzle performance at this flight condition demonstrated the F100-PW-229 thrust-vectoring nozzles to be highly capable and efficient.

  20. Selected Performance Measurements of the F-15 Active Axisymmetric Thrust-vectoring Nozzle

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Sims, Robert L.

    1998-01-01

    Flight tests recently completed at the NASA Dryden Flight Research Center evaluated performance of a hydromechanically vectored axisymmetric nozzle onboard the F-15 ACTIVE. A flight-test technique whereby strain gages installed onto engine mounts provided for the direct measurement of thrust and vector forces has proven to be extremely valuable. Flow turning and thrust efficiency, as well as nozzle static pressure distributions were measured and analyzed. This report presents results from testing at an altitude of 30,000 ft and a speed of Mach 0.9. Flow turning and thrust efficiency were found to be significantly different than predicted, and moreover, varied substantially with power setting and pitch vector angle. Results of an in-flight comparison of the direct thrust measurement technique and an engine simulation fell within the expected uncertainty bands. Overall nozzle performance at this flight condition demonstrated the F100-PW-229 thrust-vectoring nozzles to be highly capable and efficient.

  1. Health professional networks as a vector for improving healthcare quality and safety: a systematic review

    PubMed Central

    Ranmuthugala, Geetha; Plumb, Jennifer; Georgiou, Andrew; Westbrook, Johanna I; Braithwaite, Jeffrey

    2011-01-01

    Background: While there is a considerable corpus of theoretical and empirical literature on networks within and outside of the health sector, multiple research questions are yet to be answered. Objective: To conduct a systematic review of studies of professionals' network structures, identifying factors associated with network effectiveness and sustainability, particularly in relation to quality of care and patient safety. Methods: The authors searched MEDLINE, CINAHL, EMBASE, Web of Science and Business Source Premier from January 1995 to December 2009. Results: A majority of the 26 unique studies identified used social network analysis to examine structural relationships in networks: structural relationships within and between networks, health professionals and their social context, health collaboratives and partnerships, and knowledge sharing networks. Key aspects of networks explored were administrative and clinical exchanges, network performance, integration, stability and influences on the quality of healthcare. More recent studies show that cohesive and collaborative health professional networks can facilitate the coordination of care and contribute to improving quality and safety of care. Structural network vulnerabilities include cliques, professional and gender homophily, and over-reliance on central agencies or individuals. Conclusions: Effective professional networks employ natural structural network features (eg, bridges, brokers, density, centrality, degrees of separation, social capital, trust) in producing collaboratively oriented healthcare. This requires efficient transmission of information and social and professional interaction within and across networks. For those using networks to improve care, recurring success factors are understanding your network's characteristics, attending to its functioning and investing time in facilitating its improvement. Despite this, there is no guarantee that time spent on networks will necessarily improve patient

  2. Predictable nonwandering localization of covariant Lyapunov vectors and cluster synchronization in scale-free networks of chaotic maps.

    PubMed

    Kuptsov, Pavel V; Kuptsova, Anna V

    2014-09-01

    Covariant Lyapunov vectors for scale-free networks of Hénon maps are highly localized. We revealed two mechanisms of the localization related to full and phase cluster synchronization of network nodes. In both cases the localization nodes remain unaltered in the course of the dynamics, i.e., the localization is nonwandering. Moreover, this is predictable: The localization nodes are found to have specific dynamical and topological properties and they can be found without computing the covariant vectors. This is an example of explicit relations between the system topology, its phase-space dynamics, and the associated tangent-space dynamics of covariant Lyapunov vectors. PMID:25314498

  3. Performance and Integrity Analysis of the Vector Tracking Architecture of GNSS Receivers

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Susmita

    Frequent loss or attenuation of signals in urban areas and integrity (or reliability of system performance) are two principal challenges facing the Global Navigation Satellite Systems or GNSS today. They are of critical importance especially to safety or liability-critical applications where system malfunction can cause safety problems or has legal/economic consequences. To deal with the problem of integrity, algorithms called integrity monitors have been developed and fielded. These monitors are designed to raise an alarm when situations resulting in misleading information are identified. However, they do not enhance the ability of a GNSS receiver to track weak signals. Among several approaches proposed to deal with the problem of frequent signal outage, an advanced GNSS receiver architecture called vector tracking loops has attracted much attention in recent years. While there is an extensive body of knowledge that documents vector tracking's superiority to deal with weak signals, prior work on vector loop integrity monitoring is scant. Systematic designs of a vector loop-integrity monitoring scheme can find use in above-mentioned applications that are inherently vulnerable to frequent signal loss or attenuation. Developing such a system, however, warrants a thorough understanding of the workings of the vector architecture as the open literature provides very few preliminary studies in this regard. To this end, the first aspect of this research thoroughly explains the internal operations of the vector architecture. It recasts the existing complex vector architecture equations into parametric models that are mathematically tractable. An in-depth theoretical analysis of these models reveals that inter-satellite aiding is the key to vector tracking's superiority. The second aspect of this research performs integrity studies of the vector loops. Simulation results from the previous analysis show that inter-satellite aiding allows easy propagation of errors (and

  4. Geometrically nonlinear design sensitivity analysis on parallel-vector high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, Majdi A.; Nguyen, Duc T.

    1993-01-01

    Parallel-vector solution strategies for the generation and assembly of element matrices, solution of the resulting system of linear equations, calculation of the unbalanced loads, displacements, and stresses, and design sensitivity analysis (DSA) are all incorporated into the Newton-Raphson (NR) procedure for nonlinear finite element analysis and DSA. Numerical results are included to show the performance of the proposed method for structural analysis and DSA in a parallel-vector computer environment.
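The Newton-Raphson procedure at the heart of this approach can be sketched in a few lines (a serial toy version with a hypothetical one-degree-of-freedom "spring", not the parallel-vector implementation):

```python
import numpy as np

def newton_raphson(residual, tangent, u0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration as used in nonlinear FE analysis:
    solve R(u) = 0 by repeatedly solving K_t(u) du = -R(u)."""
    u = np.array(u0, dtype=float)
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        u += np.linalg.solve(tangent(u), -r)
    return u

# toy nonlinear "spring": residual R(u) = u^3 - 8, tangent K_t = 3u^2
u = newton_raphson(lambda u: np.array([u[0]**3 - 8.0]),
                   lambda u: np.array([[3.0 * u[0]**2]]),
                   [1.0])
print(u)  # → [2.]
```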

  5. Estimation of wrist angle from sonomyography using support vector machine and artificial neural network models.

    PubMed

    Xie, Hong-Bo; Zheng, Yong-Ping; Guo, Jing-Yi; Chen, Xin; Shi, Jun

    2009-04-01

    Sonomyography (SMG) is the term we previously coined to describe muscle contraction measured using real-time muscle thickness changes extracted from ultrasound images. In this paper, we used a least squares support vector machine (LS-SVM) and artificial neural networks (ANN) to predict dynamic wrist angles from SMG signals. Synchronized wrist angle and SMG signals from the extensor carpi radialis muscles of five normal subjects were recorded during wrist extension and flexion at rates of 15, 22.5, and 30 cycles/min, respectively. An LS-SVM model together with back-propagation (BP) and radial basis function (RBF) ANNs was developed and trained using the data sets collected at the rate of 22.5 cycles/min for each subject. The established LS-SVM and ANN models were then used to predict the wrist angles for the remaining data sets obtained at different extension rates. It was found that the wrist angle signals collected at different rates could be accurately predicted by all three methods, based on the values of root mean square difference (RMSD < 0.2) and the correlation coefficient (CC > 0.98), with the performance of the LS-SVM model being significantly better (RMSD < 0.15, CC > 0.99) than those of its counterparts. The results also demonstrated that the models established for the rate of 22.5 cycles/min could be used for prediction from SMG data sets obtained at other extension rates. It was concluded that the wrist angle can be precisely estimated from the thickness changes of the extensor carpi radialis using LS-SVM or ANN models.
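The two accuracy metrics reported above are easy to reproduce; a minimal numpy sketch on synthetic wrist-angle data (the range normalization of RMSD is an assumption, and the signal is simulated, not the study's SMG data):

```python
import numpy as np

def rmsd(pred, true):
    """Normalized root-mean-square difference (assumed here to be the
    RMS error divided by the signal range)."""
    pred, true = np.asarray(pred), np.asarray(true)
    return np.sqrt(np.mean((pred - true) ** 2)) / (true.max() - true.min())

def cc(pred, true):
    """Pearson correlation coefficient between prediction and target."""
    return np.corrcoef(pred, true)[0, 1]

# simulated wrist angle (degrees) and a noisy "prediction" of it
t = np.linspace(0, 2 * np.pi, 200)
true = 30 * np.sin(t)
pred = true + np.random.default_rng(0).normal(0, 1, t.size)
print(rmsd(pred, true) < 0.2, cc(pred, true) > 0.98)  # → True True
```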

  6. Monthly river flow forecasting using artificial neural network and support vector regression models coupled with wavelet transform

    NASA Astrophysics Data System (ADS)

    Kalteh, Aman Mohammad

    2013-04-01

    Reliable and accurate forecasts of river flow are needed in many water resources planning, design, development, operation and maintenance activities. In this study, the relative accuracy of artificial neural network (ANN) and support vector regression (SVR) models coupled with wavelet transform in monthly river flow forecasting is investigated and compared to regular ANN and SVR models, respectively. The relative performance of regular ANN and SVR models is also compared to each other. For this, monthly river flow data of the Kharjegil and Ponel stations in Northern Iran are used. The comparison of the results reveals that both ANN and SVR models coupled with wavelet transform are able to provide more accurate forecasting results than the regular ANN and SVR models. However, it is found that SVR models coupled with wavelet transform provide better forecasting results than ANN models coupled with wavelet transform. The results also indicate that regular SVR models perform slightly better than regular ANN models.

  7. Planning Multitechnology Access Networks with Performance Constraints

    NASA Astrophysics Data System (ADS)

    Chamberland, Steven

    Considering the number of access network technologies and the investment needed for the “last mile” of a solution, in today’s highly competitive markets, planning tools are crucial for the service providers to optimize the network costs and accelerate the planning process. In this paper, we propose to tackle the problem of planning access networks composed of four technologies/architectures: the digital subscriber line (xDSL) technologies deployed directly from the central office (CO), the fiber-to-the-node (FTTN), the fiber-to-the-micro-node (FTTn) and the fiber-to-the-premises (FTTP). A mathematical programming model is proposed for this planning problem that is solved using a commercial implementation of the branch-and-bound algorithm. Next, a detailed access network planning example is presented followed by a systematic set of experiments designed to assess the performance of the proposed approach.

  8. Performance Analysis of IIUM Wireless Campus Network

    NASA Astrophysics Data System (ADS)

    Abd Latif, Suhaimi; Masud, Mosharrof H.; Anwar, Farhat

    2013-12-01

    International Islamic University Malaysia (IIUM) is one of the leading universities in the world in terms of quality of education, which has been achieved by providing numerous facilities, including wireless services, to every enrolled student. The quality of this wireless service is controlled and monitored by the Information Technology Division (ITD), an ISO-standardized organization within the university. This paper aims to investigate the constraints of the IIUM wireless campus network. It evaluates the performance of the network in terms of delay, throughput and jitter. The QualNet 5.2 simulator tool has been employed to measure these performance metrics. The observations from the simulation results could be an influencing factor for ITD in improving its wireless services.

  9. Generalization performance of radial basis function networks.

    PubMed

    Lei, Yunwen; Ding, Lixin; Zhang, Wensheng

    2015-03-01

    This paper studies the generalization performance of radial basis function (RBF) networks using local Rademacher complexities. We propose a general result on controlling local Rademacher complexities with the L1-metric capacity. We then apply this result to estimate the RBF networks' complexities, based on which a novel estimation error bound is obtained. An effective approximation error bound is also derived by carefully investigating the Hölder continuity of the lp loss function's derivative. Furthermore, it is demonstrated that the RBF network minimizing an appropriately constructed structural risk admits a significantly better learning rate when compared with the existing results. An empirical study is also performed to justify the application of our structural risk in model selection.
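A minimal RBF network of the kind analyzed can be sketched with Gaussian features and least-squares output weights (an illustrative fit on synthetic data, not the paper's structural-risk construction; centers and width are chosen by hand):

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF features phi_j(x) = exp(-||x - c_j||^2 / (2 w^2))."""
    d2 = (X[:, None, :] - centers[None, :, :]) ** 2
    return np.exp(-d2.sum(-1) / (2 * width ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0])

centers = np.linspace(-3, 3, 10)[:, None]     # 10 hand-placed centers
Phi = rbf_design(X, centers, width=0.8)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # least-squares output weights

err = np.abs(Phi @ w - y).max()               # worst-case training residual
print(err < 0.2)  # → True
```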

  10. Static performance investigation of a skewed-throat multiaxis thrust-vectoring nozzle concept

    NASA Technical Reports Server (NTRS)

    Wing, David J.

    1994-01-01

    The static performance of a jet exhaust nozzle which achieves multiaxis thrust vectoring by physically skewing the geometric throat has been characterized in the static test facility of the 16-Foot Transonic Tunnel at NASA Langley Research Center. The nozzle has an asymmetric internal geometry defined by four surfaces: a convergent-divergent upper surface with its ridge perpendicular to the nozzle centerline, a convergent-divergent lower surface with its ridge skewed relative to the nozzle centerline, an outwardly deflected sidewall, and a straight sidewall. The primary goal of the concept is to provide efficient yaw thrust vectoring by forcing the sonic plane (nozzle throat) to form at a yaw angle defined by the skewed ridge of the lower surface contour. A secondary goal is to provide multiaxis thrust vectoring by combining the skewed-throat yaw-vectoring concept with upper and lower pitch flap deflections. The geometric parameters varied in this investigation included lower surface ridge skew angle, nozzle expansion ratio (divergence angle), aspect ratio, pitch flap deflection angle, and sidewall deflection angle. Nozzle pressure ratio was varied from 2 to a high of 11.5 for some configurations. The results of the investigation indicate that efficient, substantial multiaxis thrust vectoring was achieved by the skewed-throat nozzle concept. However, certain control surface deflections destabilized the internal flow field, which resulted in substantial shifts in the position and orientation of the sonic plane and had an adverse effect on thrust-vectoring and weight flow characteristics. By increasing the expansion ratio, the location of the sonic plane was stabilized. The asymmetric design resulted in interdependent pitch and yaw thrust vectoring as well as nonzero thrust-vector angles with undeflected control surfaces. By skewing the ridges of both the upper and lower surface contours, the interdependency between pitch and yaw thrust vectoring may be eliminated.

  11. Genetic algorithm-support vector regression for high reliability SHM system based on FBG sensor network

    NASA Astrophysics Data System (ADS)

    Zhang, XiaoLi; Liang, DaKai; Zeng, Jie; Asundi, Anand

    2012-02-01

    Structural Health Monitoring (SHM) based on fiber Bragg grating (FBG) sensor networks has attracted considerable attention in recent years. However, FBG sensor networks are typically embedded in or glued to the structure in simple series or parallel arrangements. In this case, if optical fiber sensors or fiber nodes fail, the sensors behind the failure point can no longer be read. Therefore, to improve the survivability of the FBG-based sensor system in SHM, it is necessary to build a highly reliable FBG sensor network for SHM engineering applications. In this study, a model reconstruction soft computing recognition algorithm based on genetic algorithm-support vector regression (GA-SVR) is proposed to achieve this reliability. Furthermore, an 8-point FBG sensor system is tested experimentally in an aircraft wing box. Predicting the position of external loading damage is an important subject for SHM systems; as an example, different failure modes are selected to demonstrate the survivability of the FBG-based sensor network. Simultaneously, the results are compared with the non-reconstructed model based on GA-SVR in each failure mode. Results show that the proposed model reconstruction algorithm based on GA-SVR can still maintain prediction precision when some sensors fail in the SHM system; thus a highly reliable sensor network for the SHM system is facilitated without introducing extra components and noise.

  12. Performance Standards and Evaluation in IR Test Collections: Vector-Space and Other Retrieval Models.

    ERIC Educational Resources Information Center

    Shaw, W. M., Jr.; And Others

    1997-01-01

    Describes a study that computed the low performance standards for queries in 17 test collections. Predicted by the hypergeometric distribution, the standards represent the highest level of retrieval effectiveness attributable to chance. Operational levels of performance for vector-space and other retrieval models were compared to the standards.
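The chance baseline predicted by the hypergeometric distribution can be sketched directly (illustrative collection sizes, not those of the 17 test collections):

```python
from math import comb

def p_at_least_one(N, R, n):
    """Probability that a random n-document retrieval from an N-document
    collection contains at least one of the R relevant documents
    (hypergeometric): a floor on effectiveness attributable to chance."""
    return 1 - comb(N - R, n) / comb(N, n)

# a 1400-document collection with 30 relevant documents, retrieving 10
# at random, already "succeeds" surprisingly often by chance alone:
print(p_at_least_one(1400, 30, 10))
```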

  13. Analysis of a general SIS model with infective vectors on the complex networks

    NASA Astrophysics Data System (ADS)

    Juang, Jonq; Liang, Yu-Hao

    2015-11-01

    A general SIS model with infective vectors on complex networks is studied in this paper. In particular, the model considers the linear combination of three possible routes of disease propagation between infected and susceptible individuals as well as two possible transmission types which describe how the susceptible vectors attack the infected individuals. A new technique based on the basic reproduction matrix is introduced to obtain the following results. First, necessary and sufficient conditions are obtained for the global stability of the model through a unified approach. As a result, we are able to produce the exact basic reproduction number and the precise epidemic thresholds with respect to three spreading strengths, the curing strength or the immunization strength all at once. Second, the monotonicity of the basic reproduction number and the above mentioned epidemic thresholds with respect to all other parameters can be rigorously characterized. Finally, we are able to compare the effectiveness of various immunization strategies under the assumption that the number of persons getting vaccinated is the same for all strategies. In particular, we prove that in the scale-free networks, both targeted and acquaintance immunizations are more effective than uniform and active immunizations and that active immunization is the least effective strategy among those four. We are also able to determine how the vaccine should be used at minimum to control the outbreak of the disease.
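For intuition on why network topology matters in such models, the classic heterogeneous mean-field SIS threshold (a simpler, vector-free model than the one studied above) can be computed from a degree sequence: in scale-free networks the second moment of the degree distribution diverges, driving the threshold toward zero.

```python
import numpy as np

def hmf_epidemic_threshold(degrees):
    """Heterogeneous mean-field SIS epidemic threshold
    lambda_c = <k> / <k^2> for a given degree sequence."""
    k = np.asarray(degrees, dtype=float)
    return k.mean() / (k ** 2).mean()

# homogeneous network (all nodes degree 4) vs a heavy-tailed sample
print(hmf_epidemic_threshold([4] * 1000))          # → 0.25
rng = np.random.default_rng(1)
heavy = np.round(2 * (1 - rng.random(1000)) ** (-1 / 1.5))  # Pareto-like
print(hmf_epidemic_threshold(heavy) < 0.25)        # → True
```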

  14. Characterization of interdigitated electrode structures for water contaminant detection using a hybrid voltage divider and a vector network analyzer.

    PubMed

    Rodríguez-Delgado, José Manuel; Rodríguez-Delgado, Melissa Marlene; Mendoza-Buenrostro, Christian; Dieck-Assad, Graciano; Omar Martínez-Chapa, Sergio

    2012-01-01

    Interdigitated capacitive electrode structures have been used to monitor or actuate over organic and electrochemical media in efforts to characterize biochemical properties. This article describes the pre-characterization of interdigitated electrode structures using two methods: a hybrid voltage divider (HVD) and a vector network analyzer (VNA). Both methodologies are tested under two different conditions: free air and bi-distilled water media. The HVD methodology is also used for two further conditions: phosphate buffer with laccase (polyphenoloxidase; EC 1.10.3.2) and contaminated media composed of a mix of phosphate buffer and 3-ethylbenzothiazoline-6-sulfonic acid (ABTS). The purpose of this study is to develop and validate a characterization methodology, using both hybrid voltage divider and VNA T-# network impedance models of the interdigitated capacitive electrode structure, that provides a shunt RC network of particular interest for detecting the amount of contamination present in the water solution under the given media conditions. This methodology should provide the best possible sensitivity in monitoring the characteristics of contaminated water media. The results show that both the hybrid voltage divider and the VNA methodology are feasible for determining impedance modeling parameters. These parameters, such as dielectric characteristics, can be used to develop electrical interrogation procedures and devices to identify contaminant substances in water solutions.

  15. Static performance of a cruciform nozzle with multiaxis thrust-vectoring and reverse-thrust capabilities

    NASA Technical Reports Server (NTRS)

    Wing, David J.; Asbury, Scott C.

    1992-01-01

    A multiaxis thrust vectoring nozzle designed to have equal flow turning capability in pitch and yaw was conceived and experimentally tested for internal, static performance. The cruciform-shaped convergent-divergent nozzle turned the flow for thrust vectoring by deflecting the divergent surfaces of the nozzle, called flaps. Methods for eliminating physical interference between pitch and yaw flaps at the larger multiaxis deflection angles were studied. These methods included restricting the pitch flaps from the path of the yaw flaps and shifting the flow path at the throat off the nozzle centerline to permit larger pitch-flap deflections without interfering with the operation of the yaw flaps. Two flap widths were tested at both dry and afterburning settings. Vertical and reverse thrust configurations at dry power were also tested. Comparison with two-dimensional convergent-divergent nozzles showed lower but still competitive thrust performance and thrust vectoring capability.

  16. Performance of TCP variants over LTE network

    NASA Astrophysics Data System (ADS)

    Nor, Shahrudin Awang; Maulana, Ade Novia

    2016-08-01

    One implementation of a wireless network is based on the mobile broadband technology Long Term Evolution (LTE). LTE offers a variety of advantages, especially in terms of access speed, capacity, architectural simplicity and ease of implementation, as well as the breadth of choice of user equipment (UE) that can establish access. The majority of Internet connections in the world use TCP (Transmission Control Protocol) due to TCP's reliability in transmitting packets across the network. TCP's reliability lies in its ability to control congestion. TCP was originally designed for wired media, but LTE connects through a wireless medium that is less stable than wired media. A wide variety of TCP variants have been developed to produce better performance than their predecessors. In this study, we simulate the performance of TCP NewReno and TCP Vegas using network simulator version 2 (ns2). TCP performance is analyzed in terms of throughput, packet loss and end-to-end delay. Comparing TCP NewReno and TCP Vegas, the simulation results show that the throughput of TCP NewReno is slightly higher than that of TCP Vegas, while TCP Vegas gives significantly better end-to-end delay and packet loss. Analyses of throughput, packet loss and end-to-end delay are used to evaluate the simulation.
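The congestion behavior that separates the variants can be caricatured with a tiny AIMD sketch (NewReno-style loss response only; Vegas' delay-based adjustment is omitted, and this is not the ns2 model used in the study):

```python
def aimd_cwnd(losses, rounds, cwnd=1.0):
    """NewReno-style congestion window caricature: additive increase of
    one segment per RTT, multiplicative decrease (halving) on loss."""
    trace = []
    for t in range(rounds):
        cwnd = cwnd / 2 if t in losses else cwnd + 1
        trace.append(cwnd)
    return trace

# window grows linearly, halves at the loss in round 5, then regrows:
print(aimd_cwnd(losses={5}, rounds=8))
# → [2.0, 3.0, 4.0, 5.0, 6.0, 3.0, 4.0, 5.0]
```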

  17. The Helioseismic and Magnetic Imager (HMI) Vector Magnetic Field Pipeline: Overview and Performance

    NASA Astrophysics Data System (ADS)

    Hoeksema, J. Todd; Liu, Yang; Hayashi, Keiji; Sun, Xudong; Schou, Jesper; Couvidat, Sebastien; Norton, Aimee; Bobra, Monica; Centeno, Rebecca; Leka, K. D.; Barnes, Graham; Turmon, Michael

    2014-09-01

    The Helioseismic and Magnetic Imager (HMI) began near-continuous full-disk solar measurements on 1 May 2010 from the Solar Dynamics Observatory (SDO). An automated processing pipeline keeps pace with observations to produce observable quantities, including the photospheric vector magnetic field, from sequences of filtergrams. The basic vector-field frame list cadence is 135 seconds, but to reduce noise the filtergrams are combined to derive data products every 720 seconds. The primary 720 s observables were released in mid-2010, including Stokes polarization parameters measured at six wavelengths, as well as intensity, Doppler velocity, and the line-of-sight magnetic field. More advanced products, including the full vector magnetic field, are now available. Automatically identified HMI Active Region Patches (HARPs) track the location and shape of magnetic regions throughout their lifetime. The vector field is computed using the Very Fast Inversion of the Stokes Vector (VFISV) code optimized for the HMI pipeline; the remaining 180° azimuth ambiguity is resolved with the Minimum Energy (ME0) code. The Milne-Eddington inversion is performed on all full-disk HMI observations. The disambiguation, until recently run only on HARP regions, is now implemented for the full disk. Vector and scalar quantities in the patches are used to derive active region indices potentially useful for forecasting; the data maps and indices are collected in the SHARP data series, hmi.sharp_720s. Definitive SHARP processing is completed only after the region rotates off the visible disk; quick-look products are produced in near real time. Patches are provided in both CCD and heliographic coordinates. HMI provides continuous coverage of the vector field, but has modest spatial, spectral, and temporal resolution. Coupled with limitations of the analysis and interpretation techniques, effects of the orbital velocity, and instrument performance, the resulting measurements have a certain dynamic

  18. USING MULTITAIL NETWORKS IN HIGH PERFORMANCE CLUSTERS

    SciTech Connect

    S. COLL; E. FRACHTEMBERG; F. PETRINI; A. HOISIE; L. GURVITS

    2001-03-01

    Using multiple independent networks (also known as rails) is an emerging technique to overcome bandwidth limitations and enhance fault-tolerance of current high-performance clusters. We present and analyze various avenues for exploiting multiple rails. Different rail access policies are presented and compared, including static and dynamic allocation schemes. An analytical lower bound on the number of networks required for static rail allocation is shown. We also present an extensive experimental comparison of the behavior of various allocation schemes in terms of bandwidth and latency. Striping messages over multiple rails can substantially reduce network latency, depending on average message size, network load and allocation scheme. The methods compared include a static rail allocation, a round-robin rail allocation, a dynamic allocation based on local knowledge, and a rail allocation that reserves both end-points of a message before sending it. The latter is shown to perform better than other methods at higher loads: up to 49% better than local-knowledge allocation and 37% better than the round-robin allocation. This allocation scheme also shows lower latency and saturates at higher loads (for messages large enough). Most importantly, this proposed allocation scheme scales well with the number of rails and message sizes.
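The simplest of the compared policies, round-robin rail allocation, can be sketched directly (a scheduling sketch only; real rail selection happens inside the messaging layer):

```python
import itertools

def round_robin_rail(num_rails):
    """Round-robin rail allocation: successive messages cycle through the
    independent networks (rails) to spread load evenly."""
    return itertools.cycle(range(num_rails))

# seven successive messages on a 3-rail cluster:
rails = round_robin_rail(3)
print([next(rails) for _ in range(7)])  # → [0, 1, 2, 0, 1, 2, 0]
```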

  19. Significance analysis of qualitative mammographic features, using linear classifiers, neural networks and support vector machines.

    PubMed

    Mavroforakis, Michael; Georgiou, Harris; Dimitropoulos, Nikos; Cavouras, Dionisis; Theodoridis, Sergios

    2005-04-01

    models, namely linear classifiers, neural networks and support vector machines, were employed to investigate the true efficiency of each one of them, as well as the overall complexity of the diagnostic task of mammographic tumor characterization. Both the statistical and the classification results have proven the explicit correlation of all the selected features with the final diagnosis, qualifying them as an adequate input base for any type of similar automated diagnosis system. The underlying complexity of the diagnostic task has justified the high value of sophisticated pattern recognition architectures. PMID:15797296

  20. Network- and network-element-level parameters for configuration, fault, and performance management of optical networks

    NASA Astrophysics Data System (ADS)

    Drion, Christophe; Berthelon, Luc; Chambon, Olivier; Eilenberger, Gert; Peden, Francoise R.; Jourdan, Amaury

    1998-10-01

    With the high interest of network operators and manufacturers in wavelength division multiplexing (WDM) networking technology, the need for management systems adapted to this new technology keeps increasing. We investigated this topic and produced outputs through the specification of the functional architecture and network layered model, and through the development of new TMN-based information models for the management of optical networks and network elements. Based on these first outputs, defects in each layer together with parameters for performance management/monitoring have been identified for each type of optical network element, and each atomic function describing the element, including functions for both the transport of payload signals and of overhead information. The list of probable causes has been established for the identified defects. A second aspect consists of the definition of network-level parameters, if such photonic technology-related parameters are to be considered at this level. It is our conviction that some parameters can be taken into account at the network level for performance management, based on physical measurements within the network. Some parameters could possibly be used as criteria for configuration management, in the route calculation processes, including protection. The outputs of these specification activities are taken into account in the development of a manageable WDM network prototype which will be used as a test platform to demonstrate configuration, fault, protection and performance management in a real network, in the scope of the ACTS-MEPHISTO project. This network prototype will also be used in a larger size experiment in the context of the ACTS-PELICAN field trial (Pan-European Lightwave Core and Access Network).

  1. Parallel-vector unsymmetric Eigen-Solver on high performance computers

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Jiangning, Qin

    1993-01-01

    The popular QR algorithm for solving all eigenvalues of an unsymmetric matrix is reviewed. Among the basic components of the QR algorithm, it was concluded from this study that the reduction of an unsymmetric matrix to a Hessenberg form (before applying the QR algorithm itself) can be done effectively by exploiting the vector speed and multiple processors offered by modern high-performance computers. Numerical examples of several test cases have indicated that the proposed parallel-vector algorithm for converting a given unsymmetric matrix to a Hessenberg form offers computational advantages over the existing algorithm. The time saving obtained by the proposed methods increases with problem size.
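The Hessenberg reduction step discussed above can be sketched serially with Householder reflectors (a plain NumPy rendering of the standard algorithm on an invented 3x3 matrix, not the authors' parallel-vector code):

```python
import numpy as np

def to_hessenberg(A):
    """Reduce a square matrix to upper Hessenberg form with Householder
    reflectors applied as similarity transforms, the step performed
    before the QR iteration itself."""
    H = np.array(A, dtype=float)
    n = H.shape[0]
    for k in range(n - 2):
        x = H[k + 1:, k].copy()
        # Choose the sign that avoids cancellation.
        alpha = -np.sign(x[0]) * np.linalg.norm(x) if x[0] != 0 else -np.linalg.norm(x)
        v = x.copy()
        v[0] -= alpha
        norm_v = np.linalg.norm(v)
        if norm_v == 0:
            continue
        v /= norm_v
        # Apply P = I - 2 v v^T from the left and the right (similarity).
        H[k + 1:, k:] -= 2.0 * np.outer(v, v @ H[k + 1:, k:])
        H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ v, v)
    return H

A = np.array([[4., 1., 2.], [3., 2., 1.], [1., 5., 3.]])
H = to_hessenberg(A)   # entries below the first subdiagonal are now zero
```

Because only similarity transforms are applied, H has the same eigenvalues as A, which is what lets the QR iteration proceed on the cheaper Hessenberg form.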

  2. Inference of nonlinear gene regulatory networks through optimized ensemble of support vector regression and dynamic Bayesian networks.

    PubMed

    Akutekwe, Arinze; Seker, Huseyin

    2015-08-01

    Comprehensive understanding of gene regulatory networks (GRNs) is a major challenge in systems biology. Most methods for modeling and inferring the dynamics of GRNs, such as those based on state space models, vector autoregressive models and the G1DBN algorithm, assume linear dependencies among genes. However, this strong assumption does not make for a true representation of time-course relationships across the genes, which are inherently nonlinear. Nonlinear modeling methods such as the S-systems and causal structure identification (CSI) have been proposed, but are known to be statistically inefficient and analytically intractable in high dimensions. To overcome these limitations, we propose an optimized ensemble approach based on support vector regression (SVR) and dynamic Bayesian networks (DBNs). The method, called SVR-DBN, uses nonlinear kernels of the SVR to infer the temporal relationships among genes within the DBN framework. The two-stage ensemble is further improved by SVR parameter optimization using Particle Swarm Optimization. Results on eight in silico generated datasets, and two real-world datasets of Drosophila melanogaster and Escherichia coli, show that our method outperformed the G1DBN algorithm by a total average accuracy of 12%. We further applied our method to model the time-course relationships of ovarian carcinoma. From our results, four hub genes were discovered. Stratified analysis further showed that the expression levels of the Prostate differentiation factor and BTG family member 2 genes were significantly increased by the cisplatin and oxaliplatin platinum drugs, while the expression levels of the Polo-like kinase and Cyclin B1 genes were both decreased by the platinum drugs. These hub genes might be potential biomarkers for ovarian carcinoma. PMID:26738192
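The nonlinear-kernel regression at the heart of SVR-DBN can be illustrated on a toy two-gene time course. The sketch below substitutes kernel ridge regression for the paper's SVR (both fit a weighted sum of RBF kernel evaluations); the genes, dynamics, and all parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy time course: a "regulator" gene drives a "target" gene one time
# step later through a saturating (tanh) nonlinear response.
t = np.linspace(0, 4 * np.pi, 80)
regulator = np.sin(t)
target_next = np.tanh(2 * regulator[:-1]) + 0.01 * rng.standard_normal(t.size - 1)

# Lagged pairs, as in a first-order DBN: predict target[t+1] from regulator[t].
X = regulator[:-1, None]
y = target_next

def rbf(A, B, gamma=5.0):
    """RBF (Gaussian) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge fit: both SVR and this stand-in model y as sum_i a_i k(x_i, x).
K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(y)), y)
rmse = np.sqrt(np.mean((K @ alpha - y) ** 2))   # fit error on the toy data
```

The paper's actual pipeline wraps an SVR of this kind inside a DBN structure search with PSO-tuned parameters; the point here is only the nonlinear lagged kernel fit.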

  3. Scheduling and performance limits of networks with constantly changing topology

    SciTech Connect

    Tassiulas, L.

    1997-01-01

    A communication network with time-varying topology is considered. The network consists of M receivers and N transmitters that may in principle access every receiver. An underlying network state process with Markovian statistics is considered, which reflects the physical characteristics of the network affecting the link service capacity. The transmissions are scheduled dynamically, based on information about the link capacities and the backlog in the network. The region of achievable throughputs is characterized. A transmission scheduling policy is proposed that utilizes current topology state information and achieves all throughput vectors achievable by any anticipative policy. The changing topology model applies to networks of Low Earth Orbit (LEO) satellites, meteor-burst communication networks and networks with mobile users. © 1997 American Institute of Physics.
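A backlog-aware policy of the kind described, scheduling from current queue and topology-state information, is commonly realized as a max-weight rule. A hedged toy sketch, with an invented 3x3 rate matrix and a brute-force search over matchings for clarity:

```python
# Max-weight scheduling sketch: at each slot, pick the one-to-one
# transmitter-receiver matching that maximizes the sum of
# backlog * current link rate. Topology state is an invented example.

from itertools import permutations

def max_weight_schedule(backlog, rates):
    """backlog[i]: queue length at transmitter i; rates[i][j]: current
    capacity of link i -> j. Returns the best assignment and its weight."""
    n = len(backlog)
    best, best_w = None, -1.0
    for perm in permutations(range(n)):   # perm[i] = receiver for tx i
        w = sum(backlog[i] * rates[i][perm[i]] for i in range(n))
        if w > best_w:
            best, best_w = perm, w
    return best, best_w

backlog = [5, 1, 3]
rates = [[1.0, 0.2, 0.1],
         [0.3, 0.9, 0.4],
         [0.2, 0.5, 1.0]]
assignment, weight = max_weight_schedule(backlog, rates)
```

Weighting each link by backlog times current capacity steers service toward heavily backlogged transmitters on their best links, which is the mechanism behind stabilizing every achievable throughput vector.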

  4. Functional performance requirements for seismic network upgrade

    SciTech Connect

    Lee, R.C.

    1991-08-18

    The SRL seismic network, established in 1976, was developed to monitor site and regional seismic activity that may have any potential to impact the safety or reduce containment capability of existing and planned structures and systems at the SRS; report seismic activity that may be relevant to emergency preparedness, including rapid assessments of earthquake location and magnitude; and estimate potential on-site and off-site damage to facilities and lifelines for mitigation measures. All of these tasks require SRL seismologists to provide rapid analysis of large amounts of seismic data. The current seismic network upgrade, the subject of this Functional Performance Requirements Document, is necessary to improve system reliability and resolution. The upgrade provides equipment for the analysis of the network seismic data and replacement of old, out-dated equipment. The digital network upgrade is configured for field station and laboratory digital processing systems. The upgrade consists of the purchase and installation of seismic sensors, data telemetry digital upgrades, a dedicated Seismic Data Processing (SDP) system (already in the procurement stage), and a Seismic Signal Analysis (SSA) system. The field station and telephone telemetry upgrades include equipment necessary for three remote station upgrades, including seismic amplifiers, voltage controlled oscillators, pulse calibrators, weather protection (including lightning protection) systems, seismometers, and miscellaneous other parts. The central receiving and recording station upgrades will include discriminators, a helicopter amplifier, an omega timing system, strong motion instruments, wide-band velocity sensors, and other miscellaneous equipment.

  5. Static Thrust and Vectoring Performance of a Spherical Convergent Flap Nozzle with a Nonrectangular Divergent Duct

    NASA Technical Reports Server (NTRS)

    Wing, David J.

    1998-01-01

    The static internal performance of a multiaxis-thrust-vectoring, spherical convergent flap (SCF) nozzle with a non-rectangular divergent duct was obtained in the model preparation area of the Langley 16-Foot Transonic Tunnel. Duct cross sections of hexagonal and bowtie shapes were tested. Additional geometric parameters included throat area (power setting), pitch flap deflection angle, and yaw gimbal angle. Nozzle pressure ratio was varied from 2 to 12 for dry power configurations and from 2 to 6 for afterburning power configurations. Approximately a 1-percent loss in thrust efficiency relative to SCF nozzles with a rectangular divergent duct was incurred as a result of internal oblique shocks in the flow field. The internal oblique shocks were the result of cross flow generated by the vee-shaped geometric throat. The hexagonal and bowtie nozzles had mirror-imaged flow fields and therefore similar thrust performance. Thrust vectoring was not hampered by the three-dimensional internal geometry of the nozzles. Flow visualization indicates pitch thrust-vector angles larger than 10 deg may be achievable with minimal adverse effect on, or a possible gain in, resultant thrust efficiency as compared with the performance at a pitch thrust-vector angle of 10 deg.

  6. Scientific Application Performance on Leading Scalar and Vector Supercomputing Platforms

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Ethier, Stephane

    2007-01-01

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on conventional supercomputers has become a major concern in high performance computing, requiring significantly larger systems and application scalability than implied by peak performance in order to achieve desired performance. The latest generation of custom-built parallel vector systems have the potential to address this issue for numerical algorithms with sufficient regularity in their computational structure. In this work we explore applications drawn from four areas: magnetic fusion (GTC), plasma physics (LBMHD3D), astrophysics (Cactus), and material science (PARATEC). We compare the performance of the vector-based Cray X1, X1E, Earth Simulator, and NEC SX-8 with that of three leading commodity-based superscalar platforms utilizing the IBM Power3, Intel Itanium2, and AMD Opteron processors. Our work makes several significant contributions: a new data-decomposition scheme for GTC that (for the first time) enables a breakthrough of the Teraflop barrier; the introduction of a new three-dimensional Lattice Boltzmann magneto-hydrodynamic implementation used to study the onset evolution of plasma turbulence that achieves over 26 Tflop/s on 4800 ES processors; the highest per processor performance (by far) achieved by the full-production version of the Cactus ADM-BSSN; and the largest PARATEC cell size atomistic simulation to date. Overall, results show that the vector architectures attain unprecedented aggregate performance across our application suite, demonstrating the tremendous potential of modern parallel vector systems.

  7. Deep learning of support vector machines with class probability output networks.

    PubMed

    Kim, Sangwook; Yu, Zhibin; Kil, Rhee Man; Lee, Minho

    2015-04-01

    Deep learning methods endeavor to learn features automatically at multiple levels and allow systems to learn complex functions mapping from the input space to the output space for the given data. The ability to learn powerful features automatically is increasingly important as the volume of data and the range of applications of machine learning methods continue to grow. This paper proposes a new deep architecture that uses support vector machines (SVMs) with class probability output networks (CPONs) to provide better generalization power for pattern classification problems. As a result, deep features are extracted without additional feature engineering steps, using multiple layers of SVM classifiers with CPONs. The proposed structure closely approaches the ideal Bayes classifier as the number of layers increases. Using a simulation of classification problems, the effectiveness of the proposed method is demonstrated.

  8. Artificial neural network simulation of battery performance

    SciTech Connect

    O'Gorman, C.C.; Ingersoll, D.; Jungst, R.G.; Paez, T.L.

    1998-12-31

    Although they appear deceptively simple, batteries embody a complex set of interacting physical and chemical processes. While the discrete engineering characteristics of a battery, such as the physical dimensions of the individual components, are relatively straightforward to define explicitly, their myriad chemical and physical processes, including interactions, are much more difficult to accurately represent. Within this category are the diffusive and solubility characteristics of individual species, reaction kinetics and mechanisms of primary chemical species as well as intermediates, and growth and morphology characteristics of reaction products as influenced by environmental and operational use profiles. For this reason, development of analytical models that can consistently predict the performance of a battery has only been partially successful, even though significant resources have been applied to this problem. As an alternative approach, the authors have begun development of a non-phenomenological model for battery systems based on artificial neural networks. Both recurrent and non-recurrent forms of these networks have been successfully used to develop accurate representations of battery behavior. The connectionist normalized linear spline (CNLS) network has been implemented with a self-organizing layer to model a battery system with the generalized radial basis function net. Concurrently, efforts are under way to use the feedforward back propagation network to map the "state" of a battery system. Because of the complexity of battery systems, accurate representation of the input and output parameters has proven to be very important. This paper describes these initial feasibility studies as well as the current models and makes comparisons between predicted and actual performance.

  9. Advancements and performance of iterative methods in industrial applications codes on CRAY parallel/vector supercomputers

    SciTech Connect

    Poole, G.; Heroux, M.

    1994-12-31

    This paper will focus on recent work in two widely used industrial applications codes with iterative methods. The ANSYS program, a general purpose finite element code widely used in structural analysis applications, has now added an iterative solver option. Some results are given from real applications comparing performance with the traditional parallel/vector frontal solver used in ANSYS. Discussion of the applicability of iterative solvers as a general purpose solver will include the topics of robustness, as well as memory requirements and CPU performance. The FIDAP program is a widely used CFD code which uses iterative solvers routinely. A brief description of preconditioners used and some performance enhancements for CRAY parallel/vector systems is given. The solution of large-scale applications in structures and CFD includes examples from industry problems solved on CRAY systems.

  10. Two-dimensional confined jet thrust vector control: Operating mechanisms and performance

    NASA Astrophysics Data System (ADS)

    Caton, Jeffrey L.

    1989-03-01

    An experimental investigation of two-dimensional confined jet thrust vector control nozzles was performed. Thrust vector control was accomplished by using secondary flow injection in the diverging section of the nozzle. Schlieren photographs and video tapes were used to study flow separation and internal shock structures. Nozzle performance parameters were determined for nozzle flow with and without secondary flows. These parameters included nozzle forces, vector angles, thrust efficiencies, and flow switching response times. Vector angles as great as 18 degrees with thrust efficiencies of 0.79 were measured. Several confined jet nozzles with variations in secondary flow port design were tested and results were compared to each other. Converging-diverging nozzles of similar design to the confined jet nozzles were also tested and results were compared to the confined jet nozzle results. Existing prediction models for nozzle side-to-axial force ratio were evaluated. A model for nozzle total forces based on shock losses was developed that predicted values very close to actual results.

  11. Dynamic changes of spatial functional network connectivity in healthy individuals and schizophrenia patients using independent vector analysis

    PubMed Central

    Ma, Sai; Calhoun, Vince D.; Phlypo, Ronald; Adalı, Tülay

    2016-01-01

    Recent work on both task-induced and resting-state functional magnetic resonance imaging (fMRI) data suggests that functional connectivity may fluctuate, rather than being stationary during an entire scan. Most dynamic studies are based on second-order statistics between fMRI time series or time courses derived from blind source separation, e.g., independent component analysis (ICA), to investigate changes of temporal interactions among brain regions. However, fluctuations related to spatial components over time are of interest as well. In this paper, we examine higher-order statistical dependence between pairs of spatial components, which we define as spatial functional network connectivity (sFNC), and changes of sFNC across a resting-state scan. We extract time-varying components from healthy controls and patients with schizophrenia to represent brain networks using independent vector analysis (IVA), which is an extension of ICA to multiple data sets and enables one to capture spatial variations. Based on mutual information among IVA components, we perform statistical analysis and Markov modeling to quantify the changes in spatial connectivity. Our experimental results suggest significantly more fluctuations in the patient group and show that patients with schizophrenia have more variable patterns of spatial concordance primarily between frontoparietal, cerebellum and temporal lobe regions. This study extends upon earlier studies showing temporal connectivity differences in similar areas on average by providing evidence that the dynamic spatial interplay between these regions is also impacted by schizophrenia. PMID:24418507

  12. Quantifying performance limitations of Kalman filters in state vector estimation problems

    NASA Astrophysics Data System (ADS)

    Bageshwar, Vibhor Lal

    In certain applications, the performance objectives of a Kalman filter (KF) are to compute unbiased, minimum variance estimates of a state mean vector governed by a stochastic system. The KF can be considered as a model based algorithm used to recursively estimate the state mean vector and state covariance matrix. The general objective of this thesis is to investigate the performance limitations of the KF in three state vector estimation applications. Stochastic observability is a property of a system and refers to the existence of a filter for which the errors of the estimated state mean vector have bounded variance. In the first application, we derive a test to assess the stochastic observability of a KF implemented for discrete linear time-varying systems consisting of known, deterministic parameters. This class of system includes discrete nonlinear systems linearized about the true state vector trajectory. We demonstrate the utility of the stochastic observability test using an aided INS problem. Attitude determination systems consist of a sensor set, a stochastic system, and a filter to estimate attitude. In the second application, we design an inertially aided (IA) vector matching algorithm (VMA) architecture for estimating a spacecraft's attitude. The sensor set includes rate gyros and a three-axis magnetometer (TAM). The VMA is a filtering algorithm that solves Wahba's problem. The VMA is then extended by incorporating dynamic and sensor models to formulate the IA VMA architecture. We evaluate the performance of the IA VMA architectures by using an extended KF to blend post-processed spaceflight data. Model predictive control (MPC) algorithms achieve offset-free control by augmenting the nominal system model with a disturbance model. In the third application, we consider an offset-free MPC framework that includes an output integrator disturbance model and a KF to estimate the state and disturbance vectors. Using root locus techniques, we identify sufficient
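The recursive mean-and-variance update described above can be shown with a scalar toy filter (an invented static system, not one of the thesis's applications); bounded error variance is exactly the property that the stochastic observability test checks for:

```python
import numpy as np

rng = np.random.default_rng(2)

x_true = 1.5                 # static scalar state to be estimated
R = 0.25                     # measurement noise variance
zs = x_true + np.sqrt(R) * rng.standard_normal(200)

x_hat, P = 0.0, 1.0          # prior state mean and variance
variances = []
for z in zs:
    # Static state with no process noise, so the predict step is trivial.
    K = P / (P + R)                  # Kalman gain
    x_hat = x_hat + K * (z - x_hat)  # update state mean with the innovation
    P = (1.0 - K) * P                # posterior variance shrinks
    variances.append(P)
```

Here the error variance P decreases monotonically toward zero; in a stochastically unobservable system it would instead grow without bound.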

  13. Curriculum Assessment Using Artificial Neural Network and Support Vector Machine Modeling Approaches: A Case Study. IR Applications. Volume 29

    ERIC Educational Resources Information Center

    Chen, Chau-Kuang

    2010-01-01

    Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…

  14. Evaluation models for soil nutrient based on support vector machine and artificial neural networks.

    PubMed

    Li, Hao; Leng, Weijia; Zhou, Yibing; Chen, Fudi; Xiu, Zhilong; Yang, Dazuo

    2014-01-01

    Soil nutrient is an important aspect that contributes to soil fertility and environmental effects. Traditional approaches to evaluating soil nutrient are quite hard to operate, which causes great difficulties in practical applications. In this paper, we present a series of comprehensive evaluation models for soil nutrient by using support vector machine (SVM), multiple linear regression (MLR), and artificial neural networks (ANNs), respectively. We took the content of organic matter, total nitrogen, alkali-hydrolysable nitrogen, rapidly available phosphorus, and rapidly available potassium as independent variables, while the evaluation level of soil nutrient content was taken as the dependent variable. Results show that the average prediction accuracies of the SVM models are 77.87% and 83.00%, respectively, while the general regression neural network (GRNN) model's average prediction accuracy is 92.86%, indicating that SVM and GRNN models can be used effectively to assess the levels of soil nutrient with suitable dependent variables. In practical applications, both SVM and GRNN models can be used for determining the levels of soil nutrient.

  15. Improving Memory Subsystem Performance Using ViVA: Virtual Vector Architecture

    SciTech Connect

    Gebis, Joseph; Oliker, Leonid; Shalf, John; Williams, Samuel; Yelick, Katherine

    2009-01-12

    The disparity between microprocessor clock frequencies and memory latency is a primary reason why many demanding applications run well below peak achievable performance. Software controlled scratchpad memories, such as the Cell local store, attempt to ameliorate this discrepancy by enabling precise control over memory movement; however, scratchpad technology confronts the programmer and compiler with an unfamiliar and difficult programming model. In this work, we present the Virtual Vector Architecture (ViVA), which combines the memory semantics of vector computers with a software-controlled scratchpad memory in order to provide a more effective and practical approach to latency hiding. ViVA requires minimal changes to the core design and could thus be easily integrated with conventional processor cores. To validate our approach, we implemented ViVA on the Mambo cycle-accurate full system simulator, which was carefully calibrated to match the performance on our underlying PowerPC Apple G5 architecture. Results show that ViVA is able to deliver significant performance benefits over scalar techniques for a variety of memory access patterns as well as two important memory-bound compact kernels, corner turn and sparse matrix-vector multiplication -- achieving 2x-13x improvement compared to the scalar version. Overall, our preliminary ViVA exploration points to a promising approach for improving application performance on leading microprocessors with minimal design and complexity costs, in a power efficient manner.

  16. Performance of wireless sensor networks under random node failures

    SciTech Connect

    Bradonjic, Milan; Hagberg, Aric; Feng, Pan

    2011-01-28

    Networks are essential to the function of a modern society and the consequences of damage to a network can be large. Assessing the performance of a damaged network is an important step in network recovery and network design. Connectivity, distance between nodes, and alternative routes are some of the key indicators of network performance. In this paper, a random geometric graph (RGG) is used with two types of node failure: uniform failure and localized failure. Since network performance is multi-faceted and assessment can be time-constrained, we introduce four measures, which can be computed in polynomial time, to estimate the performance of a damaged RGG. Simulation experiments are conducted to investigate the deterioration of networks through a period of time. With the empirical results, the performance measures are analyzed and compared to provide understanding of different failure scenarios in an RGG.
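A minimal version of the experiment described, an RGG damaged by uniform random node failure, can be sketched as follows (node count, connection radius, and failure probability are invented, and largest-component size stands in for the paper's four measures):

```python
import numpy as np

rng = np.random.default_rng(1)

def rgg_edges(pos, radius):
    """Edges of a random geometric graph: connect nodes closer than radius."""
    n = len(pos)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if np.linalg.norm(pos[i] - pos[j]) < radius]

def largest_component(n, edges, alive):
    """Size of the largest connected component among surviving nodes,
    via a small union-find with path halving."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in edges:
        if alive[i] and alive[j]:
            parent[find(i)] = find(j)
    sizes = {}
    for v in range(n):
        if alive[v]:
            r = find(v)
            sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values(), default=0)

n, radius = 200, 0.15
pos = rng.random((n, 2))              # nodes scattered in the unit square
edges = rgg_edges(pos, radius)

intact = largest_component(n, edges, [True] * n)
# Uniform failure: each node dies independently with probability 0.3.
alive = list(rng.random(n) > 0.3)
damaged = largest_component(n, edges, alive)
```

Repeating the failure draw over many trials and failure probabilities would trace out the kind of deterioration curves the paper studies.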

  17. Performance Characteristics of an Adaptive Mesh Refinement Calculation on Scalar and Vector Platforms

    SciTech Connect

    Welcome, Michael; Rendleman, Charles; Oliker, Leonid; Biswas, Rupak

    2006-01-31

    Adaptive mesh refinement (AMR) is a powerful technique that reduces the resources necessary to solve otherwise intractable problems in computational science. The AMR strategy solves the problem on a relatively coarse grid, and dynamically refines it in regions requiring higher resolution. However, AMR codes tend to be far more complicated than their uniform grid counterparts due to the software infrastructure necessary to dynamically manage the hierarchical grid framework. Despite this complexity, it is generally believed that future multi-scale applications will increasingly rely on adaptive methods to study problems at unprecedented scale and resolution. Recently, a new generation of parallel-vector architectures have become available that promise to achieve extremely high sustained performance for a wide range of applications, and are the foundation of many leadership-class computing systems worldwide. It is therefore imperative to understand the tradeoffs between conventional scalar and parallel-vector platforms for solving AMR-based calculations. In this paper, we examine the HyperCLaw AMR framework to compare and contrast performance on the Cray X1E, IBM Power3 and Power5, and SGI Altix. To the best of our knowledge, this is the first work that investigates and characterizes the performance of an AMR calculation on modern parallel-vector systems.

  18. The performance of multicomputer interconnection networks

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Grunwald, Dirk C.

    1987-01-01

    The interdependency of nodes and multicomputer interconnection networks is examined using simple calculations based on the asymptotic properties of queueing networks. Methods are described for choosing interconnection networks that fit individual classes of applications. It is also shown how analytic models can be extended to benchmark existing interconnection networks.
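The kind of "simple calculations based on the asymptotic properties of queueing networks" mentioned above can be illustrated with the textbook M/M/1 mean-delay formula (the rates below are invented for illustration):

```python
def mm1_delay(arrival_rate, service_rate):
    """Mean time in system for an M/M/1 queue: 1 / (mu - lambda).
    Delay blows up as the arrival rate approaches link capacity."""
    assert arrival_rate < service_rate, "queue is unstable otherwise"
    return 1.0 / (service_rate - arrival_rate)

low_load = mm1_delay(0.5, 1.0)    # moderate load on a unit-capacity link
high_load = mm1_delay(0.9, 1.0)   # near saturation: five times the delay
```

Closed-form asymptotics like this let one bound interconnection-network latency under load without simulating the full network.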

  19. Online monitoring and control of particle size in the grinding process using least square support vector regression and resilient back propagation neural network.

    PubMed

    Pani, Ajaya Kumar; Mohanta, Hare Krishna

    2015-05-01

    Particle size soft sensing in cement mills will be largely helpful in maintaining desired cement fineness or Blaine. Despite the growing use of vertical roller mills (VRM) for clinker grinding, very little research work is available on VRM modeling. This article reports the design of three types of feed-forward neural network models and a least square support vector regression (LS-SVR) model of a VRM for online monitoring of cement fineness based on mill data collected from a cement plant. In the data pre-processing step, a comparative study of various outlier detection algorithms has been performed. Subsequently, for model development, the advantage of algorithm-based data splitting over random selection is presented. The training data set obtained by use of the Kennard-Stone maximal intra distance criterion (CADEX algorithm) was used for development of LS-SVR, back propagation neural network, radial basis function neural network and generalized regression neural network models. Simulation results show that the resilient back propagation model performs better than the RBF network, regression network and LS-SVR model. Model implementation has been done in the SIMULINK platform, showing online detection of abnormal data and real-time estimation of cement Blaine from knowledge of the input variables. Finally, a closed loop study shows how the model can be effectively utilized for maintaining cement fineness at the desired value.
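The Kennard-Stone (CADEX) split mentioned above can be sketched directly (my own NumPy rendering of the standard algorithm, run on a tiny invented data set):

```python
import numpy as np

def kennard_stone(X, n_select):
    """Indices of n_select rows chosen by the Kennard-Stone (CADEX)
    maximal intra-distance criterion."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Seed with the two most distant samples.
    i, j = np.unravel_index(np.argmax(d), d.shape)
    selected = [i, j]
    remaining = [k for k in range(len(X)) if k not in selected]
    while len(selected) < n_select:
        # Each candidate's distance to its nearest selected sample...
        min_d = d[np.ix_(remaining, selected)].min(axis=1)
        # ...then pick the candidate farthest from the selected set.
        pick = remaining[int(np.argmax(min_d))]
        selected.append(pick)
        remaining.remove(pick)
    return selected

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
train_idx = kennard_stone(X, 4)   # the near-duplicate point is left out
```

Because each new training point is the candidate farthest from the already-selected set, the training data cover the input space more evenly than a random split, which is the advantage the article reports over random selection.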

  20. Performance of an integrated network model

    PubMed Central

    Lehmann, François; Dunn, David; Beaulieu, Marie-Dominique; Brophy, James

    2016-01-01

    Objective To evaluate the changes in accessibility, patients’ care experiences, and quality-of-care indicators following a clinic’s transformation into a fully integrated network clinic. Design Mixed-methods study. Setting Verdun, Que. Participants Data on all patient visits were used, in addition to 2 distinct patient cohorts: 134 patients with chronic illness (ie, diabetes, arteriosclerotic heart disease, or both); and 450 women between the ages of 20 and 70 years. Main outcome measures Accessibility was measured by the number of walk-in visits, scheduled visits, and new patient enrolments. With the first cohort, patients’ care experiences were measured using validated serial questionnaires; and quality-of-care indicators were measured using biologic data. With the second cohort, quality of preventive care was measured using the number of Papanicolaou tests performed as a surrogate marker. Results Despite a negligible increase in the number of physicians, there was an increase in accessibility after the clinic’s transition to an integrated network model. During the first 4 years of operation, the number of scheduled visits more than doubled, nonscheduled visits (walk-in visits) increased by 29%, and enrolment of vulnerable patients (those with chronic illnesses) at the clinic remained high. Patient satisfaction with doctors was rated very highly at all points of time that were evaluated. While the number of Pap tests done did not increase with time, the proportion of patients meeting hemoglobin A1c and low-density lipoprotein guideline target levels increased, as did the number of patients tested for microalbuminuria. Conclusion Transformation to an integrated network model of care led to increased efficiency and enhanced accessibility with no negative effects on the doctor-patient relationship. Improvements in biologic data also suggested better quality of care. PMID:27521410

  1. Non-metallic coating thickness prediction using artificial neural network and support vector machine with time resolved thermography

    NASA Astrophysics Data System (ADS)

    Wang, Hongjin; Hsieh, Sheng-Jen; Peng, Bo; Zhou, Xunfei

    2016-07-01

    A method that does not require knowledge of the thermal properties of coatings or substrates would be of interest for industrial applications. Supervised machine learning regression may provide a possible solution to this problem. This paper compares the performance of two regression models, artificial neural networks (ANN) and support vector machines for regression (SVM), for coating thickness estimation based on surface temperature increments collected via time-resolved thermography. We describe the role of SVM in coating thickness prediction. Non-dimensional analyses are conducted to illustrate the effects of coating thickness and various other factors on surface temperature increments; it is theoretically possible to correlate coating thickness with the surface temperature increment. Based on these analyses, the laser power is selected so that, during heating, the temperature increment is high enough to resolve coating thickness variation but low enough to avoid surface melting. Sixty-one paint-coated samples with coating thicknesses varying from 63.5 μm to 571 μm are used to train the models. Hyper-parameters of the models are optimized by 10-fold cross-validation. Another 28 sets of data are then collected to test the performance of the two methods. The study shows that SVM can provide reliable predictions of unknown data, due to its deterministic character, and that it works well with a small input data group. The SVM model generates more accurate coating thickness estimates than the ANN model.
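    The 10-fold cross-validation used here to tune hyper-parameters can be illustrated with a minimal pure-Python sketch; the one-parameter ridge model and the synthetic thickness data are illustrative assumptions standing in for the paper's SVM and ANN:

```python
import random

def k_fold_indices(n, k):
    """Shuffle 0..n-1 and split into k disjoint validation folds."""
    idx = list(range(n))
    random.Random(0).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def fit_ridge_1d(xs, ys, lam):
    """Closed-form 1-D ridge regression: w = sum(x*y) / (sum(x*x) + lam)."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def cv_error(xs, ys, lam, k=10):
    """Mean squared validation error of the ridge model over k folds."""
    err, count = 0.0, 0
    for fold in k_fold_indices(len(xs), k):
        held = set(fold)
        train = [i for i in range(len(xs)) if i not in held]
        w = fit_ridge_1d([xs[i] for i in train], [ys[i] for i in train], lam)
        err += sum((ys[i] - w * xs[i]) ** 2 for i in fold)
        count += len(fold)
    return err / count

# Synthetic thickness-vs-temperature-increment data: y ~ 2x + small noise
rng = random.Random(1)
xs = [i / 10 for i in range(61)]
ys = [2.0 * x + rng.gauss(0, 0.05) for x in xs]

# Grid search: pick the regularization strength with the lowest CV error
grid = [0.01, 0.1, 1.0, 10.0]
best_lam = min(grid, key=lambda lam: cv_error(xs, ys, lam))
```

    The same fold partition is reused for every candidate hyper-parameter, which is what makes the grid-search comparison fair.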

  2. Static internal performance including thrust vectoring and reversing of two-dimensional convergent-divergent nozzles

    NASA Technical Reports Server (NTRS)

    Re, R. J.; Leavitt, L. D.

    1984-01-01

    The effects of geometric design parameters on two-dimensional convergent-divergent nozzles were investigated at nozzle pressure ratios up to 12 in the static test facility. Forward-flight (dry and afterburning power settings), vectored-thrust (afterburning power setting), and reverse-thrust (dry power setting) nozzles were investigated. The nozzles had thrust vector angles from 0 deg to 20.26 deg, throat aspect ratios of 3.696 to 7.612, throat radii from sharp to 2.738 cm, expansion ratios from 1.089 to 1.797, and various sidewall lengths. The results indicate that unvectored two-dimensional convergent-divergent nozzles have static internal performance comparable to axisymmetric nozzles with similar expansion ratios.

  3. Connections between inversion, kriging, Wiener filters, support vector machines, and neural networks.

    NASA Astrophysics Data System (ADS)

    Kuzma, H. A.; Kappler, K. A.; Rector, J. W.

    2006-12-01

    Kriging, Wiener filters, support vector machines (SVMs), neural networks, and linear and non-linear inversion are all methods for predicting the values of one set of variables given the values of another. They can all be used to estimate a set of model parameters from measured data, given that a physical relationship exists between models and data. However, since the methods were developed in different fields, the mathematics used to describe them tend to obscure rather than highlight the links between them. In this poster, we diagram the methods and clarify their connections in hopes that practitioners of one method will be able to understand and learn from the insights developed in another. At the heart of all of the methods is a set of coefficients that must be found by minimizing an objective function. The solution to the objective function can be found either by inverting a matrix or by searching through a space of possible answers. We distinguish between direct inversion, in which the desired coefficients are those of the model itself, and indirect inversion, in which examples of models and data are used to estimate the coefficients of an inverse process that, once discovered, can be used to compute new models from new data. Kriging was developed in geostatistics. The model is usually a rock property (such as gold concentration) and the data are sample locations (x,y,z). The desired coefficients are a set of weights used to predict the concentration of a sample taken at a new location based on a variogram. The variogram is computed by averaging across a given set of known samples and is manually adjusted to reflect prior knowledge. Wiener filters were developed in signal processing to predict the values of one time series from measurements of another. A Wiener filter can be derived from kriging by replacing variograms with correlation functions. Support vector machines are an offshoot of statistical learning theory. They can be written as a form of kriging in which
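    The common core described above (coefficients found by minimizing an objective function, solvable by inverting a matrix) can be made concrete with regularized least squares, whose normal equations (AᵀA + λI)w = Aᵀy also underlie kriging weights and Wiener filter taps; the 2-column design matrix below is an illustrative assumption:

```python
def normal_equation_coeffs(A, y, lam):
    """Solve (A^T A + lam*I) w = A^T y for a 2-column design matrix A."""
    # Accumulate the 2x2 Gram matrix (with ridge term) and the vector A^T y.
    g00 = sum(r[0] * r[0] for r in A) + lam
    g01 = sum(r[0] * r[1] for r in A)
    g11 = sum(r[1] * r[1] for r in A) + lam
    b0 = sum(r[0] * t for r, t in zip(A, y))
    b1 = sum(r[1] * t for r, t in zip(A, y))
    det = g00 * g11 - g01 * g01        # invert the 2x2 matrix directly
    w0 = (g11 * b0 - g01 * b1) / det
    w1 = (g00 * b1 - g01 * b0) / det
    return w0, w1

# Noise-free data generated by y = 1*x0 + 2*x1; a tiny lam recovers
# the true coefficients almost exactly
A = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
y = [1.0, 2.0, 3.0, 4.0]
w = normal_equation_coeffs(A, y, 1e-9)
```

    In the kriging view the Gram matrix is replaced by (co)variances between sample locations; in the Wiener-filter view, by signal correlations. The linear-algebra step is the same.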

  4. Radar cross-section measurements of ice particles using vector network analyzer

    NASA Astrophysics Data System (ADS)

    Wang, Jinhu; Ge, Junxiang; Zhang, Qilin; Li, Xiangchao; Wei, Ming; Yang, Zexin; Liu, Yan-An

    2016-09-01

    We carried out radar cross-section (RCS) measurements of ice particles in a microwave anechoic chamber at Nanjing University of Information Science and Technology. We used microwave similarity theory to enlarge the particle size from the micrometer to the millimeter scale and to reduce the test frequency from 94 GHz to 10 GHz. The similarity theory was validated using the method of moments for a single metal sphere, a single dielectric sphere, and spherical and non-spherical dielectric particle swarms; the differences between the retrieved and theoretical results at 94 GHz were 0.016117%, 0.0023029%, 0.027627%, and 0.0046053%, respectively. We propose a device that can measure the RCS of ice particles in the chamber based on the S21 parameter obtained from a vector network analyzer. On the basis of the measured S21 parameters of the calibration targets (metal plates) and their corresponding theoretical RCS values, the RCS values of a spherical Teflon particle swarm and a cuboid candle particle swarm were retrieved at 10 GHz. In this case, the differences between the retrieved and theoretical results were 12.72% and 24.49% for the Teflon particle swarm and the cuboid candle swarm, respectively.
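    One common way to turn a measured S21 into an absolute RCS is substitution calibration against a reference target of known RCS, σ_t = σ_c · |S21,t|² / |S21,c|²; whether the authors use exactly this relation is not stated, so the sketch below is a hedged illustration with made-up numbers:

```python
def rcs_by_substitution(s21_target, s21_cal, rcs_cal):
    """Substitution calibration: scale the reference target's known RCS
    by the ratio of measured power transmission coefficients |S21|^2."""
    return rcs_cal * (abs(s21_target) / abs(s21_cal)) ** 2

# Illustrative numbers: a reference plate of 0.5 m^2 RCS, and a target
# whose S21 amplitude is half that of the reference
rcs = rcs_by_substitution(s21_target=0.002, s21_cal=0.004, rcs_cal=0.5)

# Similarity scaling used in the abstract: going from 94 GHz to 10 GHz
# enlarges the particle dimensions by the same factor of 9.4
scale = 94.0 / 10.0
```

    Halving the S21 amplitude quarters the retrieved RCS, since the relation is quadratic in the field ratio.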

  5. A neural network simulating human reach-grasp coordination by continuous updating of vector positioning commands.

    PubMed

    Ulloa, Antonio; Bullock, Daniel

    2003-10-01

    We developed a neural network model to simulate temporal coordination of human reaching and grasping under variable initial grip apertures and perturbations of object size and object location/orientation. The proposed model computes reach-grasp trajectories by continuously updating vector positioning commands. The model's hypotheses are: (1) the hand/wrist transport, grip aperture, and hand orientation control modules are coupled by a gating signal that fosters synchronous completion of the three sub-goals; (2) coupling from transport and orientation velocities to aperture control causes maximum grip apertures that scale with these velocities and exceed object size; (3) part of the aperture trajectory is attributable to an aperture-reducing passive biomechanical effect that is stronger for larger apertures; and (4) discrepancies between internal representations of targets partially inhibit the gating signal, leading to movement-time increases that compensate for perturbations. Simulations of the model replicate key features of human reach-grasp kinematics observed under three experimental protocols. Our results indicate that no precomputation of component movement times is necessary for online temporal coordination of the components of reaching and grasping.
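    The vector positioning command idea (each effector continuously driven by a gated difference vector between its target and current state) can be sketched generically; the gain, gating profile, and target values below are illustrative assumptions, not the paper's fitted parameters:

```python
def simulate_reach(target, position, gain=4.0, dt=0.01, steps=400):
    """Integrate a difference-vector positioning command: each component's
    velocity is a gated, gain-scaled vector toward its target."""
    for step in range(steps):
        g = gain * (step * dt)          # gating signal opening over time
        position = [p + g * (t - p) * dt for p, t in zip(position, target)]
    return position

# Hand transport (x, y) and grip aperture driven by the same update rule,
# so all three components approach their sub-goals together
target = [0.3, 0.5, 0.08]
final = simulate_reach(target, position=[0.0, 0.0, 0.02])
```

    Because every component is scaled by the same gating signal, the components complete synchronously without any precomputed movement time, which is the coordination property the abstract emphasizes.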

  6. Efficient modeling of vector hysteresis using a novel Hopfield neural network implementation of Stoner–Wohlfarth-like operators

    PubMed Central

    Adly, Amr A.; Abd-El-Hafiz, Salwa K.

    2012-01-01

    Incorporation of hysteresis models in electromagnetic analysis approaches is indispensable to accurate field computation in complex magnetic media. Throughout those computations, vector nature and computational efficiency of such models become especially crucial when sophisticated geometries requiring massive sub-region discretization are involved. Recently, an efficient vector Preisach-type hysteresis model constructed from only two scalar models having orthogonally coupled elementary operators has been proposed. This paper presents a novel Hopfield neural network approach for the implementation of Stoner–Wohlfarth-like operators that could lead to a significant enhancement in the computational efficiency of the aforementioned model. Advantages of this approach stem from the non-rectangular nature of these operators that substantially minimizes the number of operators needed to achieve an accurate vector hysteresis model. Details of the proposed approach, its identification and experimental testing are presented in the paper. PMID:25685446
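    As background, the recall mechanism of a Hopfield network, the building block the paper adapts to implement hysteresis operators, can be shown in a few lines; this generic single-pattern example is not the paper's Stoner–Wohlfarth operator implementation:

```python
def hopfield_recall(pattern, probe):
    """One synchronous update of a Hopfield net storing a single bipolar
    pattern with Hebbian weights W = p p^T - I (no self-connections)."""
    overlap = sum(p * s for p, s in zip(pattern, probe))
    state = []
    for i in range(len(pattern)):
        h = pattern[i] * overlap - probe[i]   # local field: row i of W @ s
        state.append(1 if h >= 0 else -1)
    return state

stored = [1, -1, 1, 1, -1, -1, 1, -1]
probe  = [1, -1, 1, 1, -1, -1, -1, -1]    # one bit flipped
recalled = hopfield_recall(stored, probe)  # converges back to `stored`
```

    The net's states settle into stored attractors, and it is this bistable, history-dependent settling that makes Hopfield dynamics a natural vehicle for hysteresis operators.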

  7. Efficient modeling of vector hysteresis using a novel Hopfield neural network implementation of Stoner-Wohlfarth-like operators.

    PubMed

    Adly, Amr A; Abd-El-Hafiz, Salwa K

    2013-07-01

    Incorporation of hysteresis models in electromagnetic analysis approaches is indispensable to accurate field computation in complex magnetic media. Throughout those computations, vector nature and computational efficiency of such models become especially crucial when sophisticated geometries requiring massive sub-region discretization are involved. Recently, an efficient vector Preisach-type hysteresis model constructed from only two scalar models having orthogonally coupled elementary operators has been proposed. This paper presents a novel Hopfield neural network approach for the implementation of Stoner-Wohlfarth-like operators that could lead to a significant enhancement in the computational efficiency of the aforementioned model. Advantages of this approach stem from the non-rectangular nature of these operators that substantially minimizes the number of operators needed to achieve an accurate vector hysteresis model. Details of the proposed approach, its identification and experimental testing are presented in the paper.

  8. Support Vector Machine and Artificial Neural Network Models for the Classification of Grapevine Varieties Using a Portable NIR Spectrophotometer.

    PubMed

    Gutiérrez, Salvador; Tardaguila, Javier; Fernández-Novales, Juan; Diago, María P

    2015-01-01

    The identification of different grapevine varieties, currently carried out by visual ampelometry, DNA analysis and, very recently, hyperspectral analysis under laboratory conditions, is an issue of great importance in the wine industry. This work presents support vector machine and artificial neural network modelling for grapevine varietal classification from in-field leaf spectroscopy. Modelling was attempted at two scales: site-specific and global. Spectral measurements were obtained in the near-infrared (NIR) spectral range between 1600 and 2400 nm under field conditions in a non-destructive way using a portable spectrophotometer. For the site-specific approach, spectra were collected from the adaxial side of 400 individual leaves of 20 grapevine (Vitis vinifera L.) varieties one week after veraison. For the global model, two additional sets of spectra were collected one week before harvest from two different vineyards in another vintage, each one consisting of 48 measurements from individual leaves of six varieties. Several combinations of spectral scatter correction and smoothing filtering were studied. For the training of the models, support vector machines and artificial neural networks were employed using the pre-processed spectra as input and the varieties as the classes of the models. The pre-processing study showed that scatter correction had no influence on the results, and that second-derivative Savitzky-Golay filtering with a window size of 5 yielded the highest outcomes. For the site-specific model, with 20 classes, the best classifiers yielded an overall score of 87.25% correctly classified samples. These results were compared under the same conditions with a model trained using partial least squares discriminant analysis, which showed worse performance in every case. For the global model, a 6-class dataset involving samples from three different vineyards, two years and leaves

  9. Performance Evaluation of Lattice-Boltzmann Magnetohydrodynamics Simulations on Modern Parallel Vector Systems

    SciTech Connect

    Carter, Jonathan; Oliker, Leonid

    2006-01-09

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on such platforms has become a major concern in high performance computing. The latest generation of custom-built parallel vector systems has the potential to address this concern for numerical algorithms with sufficient regularity in their computational structure. In this work, we explore two- and three-dimensional implementations of a lattice-Boltzmann magnetohydrodynamics (MHD) physics application on some of today's most powerful supercomputing platforms. Results compare performance between the vector-based Cray X1, Earth Simulator, and newly released NEC SX-8, and the commodity-based superscalar platforms of the IBM Power3, Intel Itanium2, and AMD Opteron. Overall results show that the SX-8 attains unprecedented aggregate performance across our evaluated applications.

  10. Static internal performance of a two-dimensional convergent-divergent nozzle with thrust vectoring

    NASA Technical Reports Server (NTRS)

    Bare, E. Ann; Reubush, David E.

    1987-01-01

    A parametric investigation of the static internal performance of multifunction two-dimensional convergent-divergent nozzles has been made in the static test facility of the Langley 16-Foot Transonic Tunnel. All nozzles had a constant throat area and aspect ratio. The effects of upper and lower flap angles, divergent flap length, throat approach angle, sidewall containment, and throat geometry were determined. The nozzles were tested at thrust vector angles that varied from 5.60 to 23.00 deg. The nozzle pressure ratio was varied up to 10 for all configurations.

  11. Analysis of complex network performance and heuristic node removal strategies

    NASA Astrophysics Data System (ADS)

    Jahanpour, Ehsan; Chen, Xin

    2013-12-01

    Removing important nodes from complex networks is a great challenge in fighting against criminal organizations and preventing disease outbreaks. Six network performance metrics, including four new metrics, are applied to quantify networks' diffusion speed, diffusion scale, homogeneity, and diameter. In order to efficiently identify nodes whose removal maximally destroys a network, i.e., minimizes network performance, ten structured heuristic node removal strategies are designed using different node centrality metrics, including degree, betweenness, reciprocal closeness, complement-derived closeness, and eigenvector centrality. These strategies are applied to remove nodes from the September 11, 2001 hijackers' network, and their performance is compared to that of a random strategy, which removes randomly selected nodes, and the locally optimal solution (LOS), which removes nodes to minimize network performance at each step. The computational complexity of the 11 strategies and LOS is also analyzed. Results show that the node removal strategies using degree and betweenness centralities are more efficient than other strategies.
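    A miniature version of a degree-based removal strategy can be sketched in pure Python; the toy hub-and-spoke graph and the use of largest-component size as a stand-in for the paper's performance metrics are illustrative assumptions:

```python
from collections import deque

def component_sizes(adj, removed):
    """Sizes of connected components after deleting `removed` nodes (BFS)."""
    seen, sizes = set(removed), []
    for start in adj:
        if start in seen:
            continue
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        sizes.append(size)
    return sizes

# A hub-and-spoke graph: node 0 bridges two otherwise separate triangles
edges = [(0, 1), (0, 2), (1, 2), (0, 3), (0, 4), (3, 4)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

hub = max(adj, key=lambda n: len(adj[n]))        # degree-centrality pick
after_hub = max(component_sizes(adj, {hub}))     # hub removal fragments it
after_leaf = max(component_sizes(adj, {1}))      # leaf removal does not
```

    Removing the degree-central node splits the graph into two 2-node fragments, while removing a peripheral node leaves a 4-node connected graph, which is the intuition behind the degree-based strategy the paper finds efficient.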

  12. Wireless Local Area Network Performance Inside Aircraft Passenger Cabins

    NASA Technical Reports Server (NTRS)

    Whetten, Frank L.; Soroker, Andrew; Whetten, Dennis A.; Whetten, Frank L.; Beggs, John H.

    2005-01-01

    An examination of IEEE 802.11 wireless network performance within an aircraft fuselage is performed. This examination measured the propagated RF power along the length of the fuselage and the associated network performance: the link speed, total throughput, and packet losses and errors. A total of four airplanes (one single-aisle and three twin-aisle) were tested with 802.11a, 802.11b, and 802.11g networks.

  13. Performance characteristics of a one-third-scale, vectorable ventral nozzle for SSTOVL aircraft

    NASA Technical Reports Server (NTRS)

    Esker, Barbara S.; Mcardle, Jack G.

    1990-01-01

    Several proposed configurations for supersonic short takeoff, vertical landing aircraft will require one or more ventral nozzles for lift and pitch control. The swivel nozzle is one possible ventral nozzle configuration. A swivel nozzle (approximately one-third scale) was built and tested on a generic model tailpipe. This nozzle was capable of vectoring the flow up to + or - 23 deg from the vertical position. Steady-state performance data were obtained at pressure ratios to 4.5, and pitot-pressure surveys of the nozzle exit plane were made. Two configurations were tested: the swivel nozzle with a square contour of the leading edge of the ventral duct inlet, and the same nozzle with a round leading edge contour. The swivel nozzle showed good performance overall, and the round-leading edge configuration showed an improvement in performance over the square-leading edge configuration.

  14. Epidemic dynamics of a vector-borne disease on a villages-and-city star network with commuters.

    PubMed

    Mpolya, Emmanuel A; Yashima, Kenta; Ohtsuki, Hisashi; Sasaki, Akira

    2014-02-21

    We develop a star-network of connections between a central city and peripheral villages and analyze the epidemic dynamics of a vector-borne disease as influenced by daily commuters. We obtain an analytical solution for the global basic reproductive number R0 and investigate its dependence on key parameters for disease control. We find that in a star-network topology the central hub is not always the best place to focus disease intervention strategies. Disease control decisions are sensitive to the number of commuters from villages to the city as well as the relative densities of mosquitoes between villages and city. With more commuters it becomes important to focus on the surrounding villages. Commuting to the city paradoxically reduces the disease burden even when the bulk of infections are in the city because of the resulting diluting effects of transmissions with more commuters. This effect decreases with heterogeneity in host and vector population sizes in the villages due to the formation of peripheral epicenters of infection. We suggest that to ensure effective control of vector-borne diseases in star networks of villages and cities it is also important to focus on the commuters and where they come from.

  15. Manipulation of Host Quality and Defense by a Plant Virus Improves Performance of Whitefly Vectors.

    PubMed

    Su, Qi; Preisser, Evan L; Zhou, Xiao Mao; Xie, Wen; Liu, Bai Ming; Wang, Shao Li; Wu, Qing Jun; Zhang, You Jun

    2015-02-01

    Pathogen-mediated interactions between insect vectors and their host plants can affect herbivore fitness and the epidemiology of plant diseases. While the role of plant quality and defense in mediating these tripartite interactions has been recognized, there are many ecologically and economically important cases where the nature of the interaction has yet to be characterized. The Bemisia tabaci (Gennadius) cryptic species Mediterranean (MED) is an important vector of tomato yellow leaf curl virus (TYLCV), and performs better on virus-infected tomato than on uninfected controls. We assessed the impact of TYLCV infection on plant quality and defense, and the direct impact of TYLCV infection on MED feeding. We found that although TYLCV infection has a minimal direct impact on MED, the virus alters the nutritional content of leaf tissue and phloem sap in a manner beneficial to MED. TYLCV infection also suppresses herbivore-induced production of plant defensive enzymes and callose deposition. The strongly positive net effect of TYLCV on MED is consistent with previously reported patterns of whitefly behavior and performance, and provides a foundation for further exploration of the molecular mechanisms responsible for these effects and the evolutionary processes that shape them. PMID:26470098

  16. Improving matrix-vector product performance and multi-level preconditioning for the parallel PCG package

    SciTech Connect

    McLay, R.T.; Carey, G.F.

    1996-12-31

    In this study we consider parallel solution of sparse linear systems arising from discretized PDEs. As part of our continuing work on our parallel PCG Solver package, we have made improvements in two areas. The first is improving the performance of the matrix-vector product. Here, on regular finite-difference grids, we are able to use the cache memory more efficiently for smaller domains or where there are multiple degrees of freedom. The second problem of interest in the present work is the construction of preconditioners in the context of the parallel PCG solver we are developing. Here the problem is partitioned over a set of processor subdomains and the matrix-vector product for PCG is carried out in parallel for overlapping grid subblocks. For problems of scaled speedup, the actual rate of convergence of the unpreconditioned system deteriorates as the mesh is refined. Multigrid and subdomain strategies provide a logical approach to resolving the problem. We consider the parallel trade-offs between communication and computation and provide a complexity analysis of a representative algorithm. Some preliminary calculations using the parallel package and comparisons with other preconditioners are provided together with parallel performance results.
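    The kernel at issue, the sparse matrix-vector product, is typically stored in compressed sparse row (CSR) form; a minimal sketch on an illustrative 1-D finite-difference matrix (not the package's actual implementation):

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x for a matrix stored in compressed sparse row form:
    row i's nonzeros live in values[row_ptr[i]:row_ptr[i+1]]."""
    y = []
    for row in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[row], row_ptr[row + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

# 1-D Laplacian stencil [-1, 2, -1] on 4 points, a typical FD matrix
values  = [2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0]
col_idx = [0, 1, 0, 1, 2, 1, 2, 3, 2, 3]
row_ptr = [0, 2, 5, 8, 10]
y = csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0, 1.0])
```

    The inner loop's indirect access `x[col_idx[k]]` is exactly where cache behaviour matters: on a regular grid the column indices are nearly sequential, which is what the abstract's cache-efficiency improvements exploit.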

  17. A novel application classification and its impact on network performance

    NASA Astrophysics Data System (ADS)

    Zhang, Shuo; Huang, Ning; Sun, Xiaolei; Zhang, Yue

    2016-07-01

    Network traffic is believed to have a significant impact on network performance and results from the operation of applications on networks. The majority of current network performance analyses are based on the premise that traffic is transmitted along the shortest path, which is too simple to reflect a real traffic process. The real traffic process is related to the characteristics of the network application process, involving realistic user behavior. In this paper, applications are first divided into three categories according to realistic application process characteristics: random applications, customized applications, and routine applications. Then, numerical simulations are carried out to analyze the effect of different applications on network performance. The main results show that (i) network efficiency for the BA scale-free network is lower than for the ER random network when a similar single application is loaded on the network; and (ii) customized applications have the greatest effect on network efficiency when mixed multiple applications are loaded on the BA network.
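    Network efficiency, the metric reported in the results, is commonly defined as the average inverse shortest-path length over node pairs; a small sketch on toy graphs (a ring and a star, standing in for the paper's ER and BA ensembles, which is an assumption) illustrates the computation:

```python
from collections import deque

def global_efficiency(adj):
    """Average of 1/d(u, v) over all ordered node pairs, with BFS
    distances; unreachable pairs contribute zero."""
    nodes = list(adj)
    total, pairs = 0.0, 0
    for src in nodes:
        dist = {src: 0}
        queue = deque([src])
        while queue:                      # BFS from src
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for dst in nodes:
            if dst != src:
                total += 1.0 / dist[dst] if dst in dist else 0.0
                pairs += 1
    return total / pairs

# Toy topologies: an 8-node ring vs. an 8-node star (hub = node 0)
ring = {i: {(i - 1) % 8, (i + 1) % 8} for i in range(8)}
star = {0: set(range(1, 8)), **{i: {0} for i in range(1, 8)}}
e_ring, e_star = global_efficiency(ring), global_efficiency(star)
```

    The star's hub keeps every pair within two hops (efficiency 35/56 = 0.625), beating the ring at this size; the paper applies the same kind of metric to full ER and BA ensembles under different application loads.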

  18. Predicting body and carcass characteristics of 2 broiler chicken strains using support vector regression and neural network models.

    PubMed

    Faridi, A; Sakomura, N K; Golian, A; Marcato, S M

    2012-12-01

    As a new modeling method, support vector regression (SVR) has been regarded as a state-of-the-art technique for regression and approximation. In this study, SVR models were introduced and developed to predict body and carcass-related characteristics of 2 strains of broiler chicken. To evaluate the prediction ability of the SVR models, we compared their performance with that of neural network (NN) models. Evaluation of the prediction accuracy of the models was based on the R², MS error, and bias. The variables of interest as model output were BW, empty BW, carcass, breast, drumstick, thigh, and wing weight in 2 strains of Ross and Cobb chickens, predicted from dietary nutrient intake, including ME (kcal/bird per week), CP, TSAA, and Lys, all as grams per bird per week. A data set composed of 64 measurements taken from each strain was used for this analysis, where 44 data lines were used for model training and the remaining 20 lines were used to test the created models. The results of this study revealed that it is possible to satisfactorily estimate the BW and carcass parts of the broiler chickens via their dietary nutrient intake. Through the statistical criteria used to evaluate the performance of the SVR and NN models, the overall results demonstrate that the discussed models can be effective for accurate prediction of the body and carcass-related characteristics investigated here. However, the SVR method achieved better accuracy and generalization than the NN method. This indicates that the new data mining technique (SVR) can be used as an alternative modeling tool to NN models. However, further reevaluation of this algorithm in the future is suggested.
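    The three evaluation criteria named above (R², MS error, and bias) can be computed with a short, self-contained helper; the weight values below are illustrative, not data from the study:

```python
def regression_metrics(actual, predicted):
    """Return (R^2, mean squared error, mean bias) for paired values."""
    n = len(actual)
    mean_a = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    mse = ss_res / n
    bias = sum(p - a for a, p in zip(actual, predicted)) / n
    return 1 - ss_res / ss_tot, mse, bias

# Illustrative body-weight data (g): predictions overshoot by 10 g
actual    = [100.0, 200.0, 300.0, 400.0]
predicted = [110.0, 210.0, 310.0, 410.0]
r2, mse, bias = regression_metrics(actual, predicted)
```

    A constant 10 g overshoot shows why bias is reported alongside MSE: R² stays high (0.992 here) even though every prediction is systematically off.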

  19. Performance Evaluation of Plasma and Astrophysics Applications on Modern Parallel Vector Systems

    SciTech Connect

    Carter, Jonathan; Oliker, Leonid; Shalf, John

    2005-10-28

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on such platforms has become a major concern in high performance computing. The latest generation of custom-built parallel vector systems have the potential to address this concern for numerical algorithms with sufficient regularity in their computational structure. In this work, we explore two- and three-dimensional implementations of a plasma physics application, as well as a leading astrophysics package, on some of today's most powerful supercomputing platforms. Results compare performance between the vector-based Cray X1, Earth Simulator, and newly-released NEC SX-8, with the commodity-based superscalar platforms of the IBM Power3, Intel Itanium2, and AMD Opteron. Overall results show that the SX-8 attains unprecedented aggregate performance across our evaluated applications.

  20. Error estimation for ORION baseline vector determination

    NASA Technical Reports Server (NTRS)

    Wu, S. C.

    1980-01-01

    Effects of error sources on Operational Radio Interferometry Observing Network (ORION) baseline vector determination are studied. Partial derivatives of delay observations with respect to each error source are formulated. Covariance analysis is performed to estimate the contribution of each error source to baseline vector error. System design parameters such as antenna sizes, system temperatures and provision for dual frequency operation are discussed.
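    For independent error sources, the covariance analysis described reduces to root-sum-square propagation of each source's contribution through its partial derivative; the sensitivities and sigmas below are illustrative assumptions, not values from the ORION study:

```python
import math

def baseline_error(partials, sigmas):
    """Root-sum-square error: sigma_b = sqrt(sum((db/dx_i * sigma_i)^2)),
    valid when the error sources are independent."""
    return math.sqrt(sum((p * s) ** 2 for p, s in zip(partials, sigmas)))

# Illustrative sensitivities (partial derivatives of baseline length
# with respect to each source) and one-sigma source errors
partials = [1.0, 0.5, 2.0]   # hypothetical clock, troposphere, ephemeris
sigmas   = [0.3, 0.8, 0.1]
total = baseline_error(partials, sigmas)   # sqrt(0.09 + 0.16 + 0.04)
```

    Comparing the individual terms (p*s)² against the total immediately shows which source dominates the baseline error budget, which is the point of the covariance analysis.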

  1. Performance characteristics of two multiaxis thrust-vectoring nozzles at Mach numbers up to 1.28

    NASA Technical Reports Server (NTRS)

    Wing, David J.; Capone, Francis J.

    1993-01-01

    The thrust-vectoring axisymmetric (VA) nozzle and a spherical convergent flap (SCF) thrust-vectoring nozzle were tested along with a baseline nonvectoring axisymmetric (NVA) nozzle in the Langley 16-Foot Transonic Tunnel at Mach numbers from 0 to 1.28 and nozzle pressure ratios from 1 to 8. Test parameters included geometric yaw vector angle and unvectored divergent flap length. No pitch vectoring was studied. Nozzle drag, thrust minus drag, yaw thrust vector angle, discharge coefficient, and static thrust performance were measured and analyzed, as well as external static pressure distributions. The NVA nozzle and the VA nozzle displayed higher static thrust performance than the SCF nozzle throughout the nozzle pressure ratio (NPR) range tested. The NVA nozzle had higher overall thrust minus drag than the other nozzles throughout the NPR and Mach number ranges tested. The SCF nozzle had the lowest jet-on nozzle drag of the three nozzles throughout the test conditions. The SCF nozzle provided yaw thrust angles that were equal to the geometric angle and constant with NPR. The VA nozzle achieved yaw thrust vector angles that were significantly higher than the geometric angle but not constant with NPR. Nozzle drag generally increased with increases in thrust vectoring for all the nozzles tested.
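    Thrust vector angles in such tests are typically derived from measured balance force components, e.g. the yaw angle as atan(F_side / F_axial); a sketch with illustrative loads (not data from this test):

```python
import math

def thrust_vector(axial, normal, side=0.0):
    """Resultant thrust magnitude and pitch/yaw vector angles (deg)
    from measured balance force components."""
    resultant = math.sqrt(axial ** 2 + normal ** 2 + side ** 2)
    pitch = math.degrees(math.atan2(normal, axial))
    yaw = math.degrees(math.atan2(side, axial))
    return resultant, pitch, yaw

# Illustrative loads: pure yaw vectoring, no pitch component
F, pitch, yaw = thrust_vector(axial=1000.0, normal=0.0, side=364.0)
# yaw is close to 20 deg since tan(20 deg) ~= 0.364
```

    Comparing this measured yaw angle against the nozzle's geometric vector angle is how statements like "yaw thrust angles equal to the geometric angle" are established.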

  2. Network interface unit design options performance analysis

    NASA Technical Reports Server (NTRS)

    Miller, Frank W.

    1991-01-01

    An analysis is presented of three design options for the Space Station Freedom (SSF) onboard Data Management System (DMS) Network Interface Unit (NIU). The NIU provides the interface from the Fiber Distributed Data Interface (FDDI) local area network (LAN) to the DMS processing elements. The FDDI LAN provides the primary means for command and control and low and medium rate telemetry data transfers on board the SSF. The results of this analysis provide the basis for the implementation of the NIU.

  3. On the MAC/network/energy performance evaluation of Wireless Sensor Networks: Contrasting MPH, AODV, DSR and ZTR routing protocols.

    PubMed

    Del-Valle-Soto, Carolina; Mex-Perera, Carlos; Orozco-Lugo, Aldo; Lara, Mauricio; Galván-Tejada, Giselle M; Olmedo, Oscar

    2014-01-01

    Wireless Sensor Networks deliver valuable information for long periods, so it is desirable to have optimum performance, reduced delays, low overhead, and reliable delivery of information. In this work, proposed metrics that influence energy consumption are used for a performance comparison among our proposed routing protocol, called Multi-Parent Hierarchical (MPH), and the well-known protocols for sensor networks Ad hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR), and Zigbee Tree Routing (ZTR), all of them working with the IEEE 802.15.4 MAC layer. Results show how some communication metrics affect performance, throughput, reliability and energy consumption. It can be concluded that MPH is an efficient protocol, since it achieves the best performance against the other three protocols under evaluation, with a 19.3% reduction of packet retransmissions, a 26.9% decrease of overhead, and a 41.2% improvement in the capacity of the protocol for recovering the topology from failures with respect to the AODV protocol. We implemented and tested MPH in a real network of 99 nodes during ten days and analyzed parameters such as number of hops, connectivity and delay in order to validate our simulator and obtain reliable results. Moreover, an energy model of the CC2530 chip is proposed and used for simulations of the four aforementioned protocols, showing that MPH has a 15.9% reduction of energy consumption with respect to AODV, 13.7% versus DSR, and 5% against ZTR. PMID:25474377

  4. On the MAC/Network/Energy Performance Evaluation of Wireless Sensor Networks: Contrasting MPH, AODV, DSR and ZTR Routing Protocols

    PubMed Central

    Del-Valle-Soto, Carolina; Mex-Perera, Carlos; Orozco-Lugo, Aldo; Lara, Mauricio; Galván-Tejada, Giselle M.; Olmedo, Oscar

    2014-01-01

    Wireless Sensor Networks deliver valuable information for long periods, so it is desirable to have optimum performance, reduced delays, low overhead, and reliable delivery of information. In this work, proposed metrics that influence energy consumption are used for a performance comparison among our proposed routing protocol, called Multi-Parent Hierarchical (MPH), and the well-known protocols for sensor networks Ad hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR), and Zigbee Tree Routing (ZTR), all of them working with the IEEE 802.15.4 MAC layer. Results show how some communication metrics affect performance, throughput, reliability and energy consumption. It can be concluded that MPH is an efficient protocol, since it achieves the best performance against the other three protocols under evaluation, with a 19.3% reduction of packet retransmissions, a 26.9% decrease of overhead, and a 41.2% improvement in the capacity of the protocol for recovering the topology from failures with respect to the AODV protocol. We implemented and tested MPH in a real network of 99 nodes during ten days and analyzed parameters such as number of hops, connectivity and delay in order to validate our simulator and obtain reliable results. Moreover, an energy model of the CC2530 chip is proposed and used for simulations of the four aforementioned protocols, showing that MPH has a 15.9% reduction of energy consumption with respect to AODV, 13.7% versus DSR, and 5% against ZTR. PMID:25474377

  5. Support Vector Machine and Artificial Neural Network Models for the Classification of Grapevine Varieties Using a Portable NIR Spectrophotometer

    PubMed Central

    Gutiérrez, Salvador; Tardaguila, Javier; Fernández-Novales, Juan; Diago, María P.

    2015-01-01

    The identification of different grapevine varieties, currently carried out using visual ampelometry, DNA analysis and, very recently, hyperspectral analysis under laboratory conditions, is an issue of great importance in the wine industry. This work presents support vector machine and artificial neural network modelling for grapevine varietal classification from in-field leaf spectroscopy. Modelling was attempted at two scales: site-specific and global. Spectral measurements were obtained in the near-infrared (NIR) spectral range between 1600 and 2400 nm under field conditions in a non-destructive way using a portable spectrophotometer. For the site-specific approach, spectra were collected from the adaxial side of 400 individual leaves of 20 grapevine (Vitis vinifera L.) varieties one week after veraison. For the global model, two additional sets of spectra were collected one week before harvest from two different vineyards in another vintage, each one consisting of 48 measurements from individual leaves of six varieties. Several combinations of spectral scatter correction and smoothing filtering were studied. For the training of the models, support vector machines and artificial neural networks were employed, using the pre-processed spectra as input and the varieties as the classes of the models. The pre-processing study showed that scatter correction had no influence on performance, and that second-derivative Savitzky-Golay filtering with a window size of 5 yielded the best outcomes. For the site-specific model, with 20 classes, the best classifiers yielded an overall score of 87.25% correctly classified samples. These results were compared under the same conditions with a model trained using partial least squares discriminant analysis, which showed worse performance in every case. For the global model, a 6-class dataset involving samples from three different vineyards, two years and leaves
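    The pre-processing plus classification chain described above can be sketched in a few lines. This is a hedged illustration on synthetic spectra, not the study's data: the Savitzky-Golay settings (second derivative, window 5) and the 1600-2400 nm range come from the abstract, while the class shapes, noise level and SVM parameters are invented.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class, n_bands = 40, 200
wl = np.linspace(1600, 2400, n_bands)            # nm, range used in the study

def spectra(center):
    """Synthetic NIR spectra: a Gaussian absorption feature plus noise."""
    base = np.exp(-((wl - center) / 60.0) ** 2)
    return base + 0.02 * rng.standard_normal((n_per_class, n_bands))

# Two hypothetical varieties with a slightly shifted absorption peak
X = np.vstack([spectra(1900.0), spectra(1920.0)])
y = np.repeat([0, 1], n_per_class)

# Pre-processing from the abstract: 2nd derivative, Savitzky-Golay window 5
Xd = savgol_filter(X, window_length=5, polyorder=2, deriv=2, axis=1)

clf = SVC(kernel="rbf", gamma="scale").fit(Xd[::2], y[::2])   # even rows: train
accuracy = clf.score(Xd[1::2], y[1::2])                       # odd rows: test
```

With real leaf spectra the classes would be the 20 varieties, and a proper cross-validation split would replace the even/odd split used here.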

  6. Using high-performance networks to enable computational aerosciences applications

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1992-01-01

    One component of the U.S. Federal High Performance Computing and Communications Program (HPCCP) is the establishment of a gigabit network to provide a communications infrastructure for researchers across the nation. This gigabit network will provide new services and capabilities, in addition to increased bandwidth, to enable future applications. An understanding of these applications is necessary to guide the development of the gigabit network and other high-performance networks of the future. In this paper we focus on computational aerosciences applications run remotely using the Numerical Aerodynamic Simulation (NAS) facility located at NASA Ames Research Center. We characterize these applications in terms of network-related parameters and relate user experiences that reveal limitations imposed by the current wide-area networking infrastructure. Then we investigate how the development of a nationwide gigabit network would enable users of the NAS facility to work in new, more productive ways.

  7. Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks.

    PubMed

    Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo

    2014-04-21

    Software-defined networking (SDN) has become a focus of the information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. The optical transport network is also regarded as an important application scenario for SDN, which is adopted as the enabling technology of the data communication network (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN-based DCN for large-scale optical networks, which is very important for technology selection in future optical network deployment, has not been evaluated up to now. In this paper we have built a large-scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN-based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters, including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time, have been demonstrated under various network environments, such as different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as a proof of concept for future network deployment. PMID:24787842
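    For readers wanting to sanity-check blocking-probability figures like those demonstrated in the testbed, the classical Erlang B recursion (a textbook formula, not the testbed's evaluation method) relates offered load to blocking on a link with a fixed number of channels:

```python
def erlang_b(a: float, c: int) -> float:
    """Blocking probability for `a` Erlangs offered to `c` channels
    (Erlang B recursion: B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1)))."""
    b = 1.0
    for n in range(1, c + 1):
        b = (a * b) / (n + a * b)
    return b

# Illustrative loads on a hypothetical 32-channel link
for load in (20.0, 30.0, 40.0):
    print(f"{load:5.1f} Erlangs on 32 channels -> blocking {erlang_b(load, 32):.4f}")
```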

  8. Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks.

    PubMed

    Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo

    2014-04-21

    Software-defined networking (SDN) has become a focus of the information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. The optical transport network is also regarded as an important application scenario for SDN, which is adopted as the enabling technology of the data communication network (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN-based DCN for large-scale optical networks, which is very important for technology selection in future optical network deployment, has not been evaluated up to now. In this paper we have built a large-scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN-based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters, including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time, have been demonstrated under various network environments, such as different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as a proof of concept for future network deployment.

  9. Performance Enhancement for a GPS Vector-Tracking Loop Utilizing an Adaptive Iterated Extended Kalman Filter

    PubMed Central

    Chen, Xiyuan; Wang, Xiying; Xu, Yuan

    2014-01-01

    This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system with model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error and, based on this, a modified iterated extended Kalman filter (IEKF) named the adaptive iterated Kalman filter (AIEKF) is proposed. A vector-tracking GPS receiver utilizing the AIEKF is implemented to evaluate the performance of the proposed method. Road tests show that the proposed method has an obvious accuracy advantage over the IEKF and adaptive extended Kalman filter (AEKF) in position determination. The results show that the proposed method effectively reduces the root-mean-square error (RMSE) of position (including longitude, latitude and altitude). Compared with the EKF, the position RMSE values of the AIEKF are reduced by about 45.1%, 40.9% and 54.6% in the east, north and up directions, respectively. Compared with the IEKF, the position RMSE values of the AIEKF are reduced by about 25.7%, 19.3% and 35.7% in the east, north and up directions, respectively. Compared with the AEKF, the position RMSE values of the AIEKF are reduced by about 21.6%, 15.5% and 30.7% in the east, north and up directions, respectively. PMID:25502124
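    The iterated EKF that the AIEKF builds on can be sketched compactly. The example below is a generic IEKF measurement update for a scalar range observation, not the paper's GPS vector-tracking model, and it omits the adaptive noise-statistics estimator that distinguishes the AIEKF; all numeric values are invented.

```python
import numpy as np

def iekf_update(x_pred, P_pred, z, R, n_iter=5):
    """Iterated EKF measurement update for z = ||x|| + noise:
    re-linearize the measurement model around the running estimate."""
    x = x_pred.copy()
    for _ in range(n_iter):
        h = np.linalg.norm(x)                   # predicted measurement
        H = (x / h).reshape(1, -1)              # Jacobian of ||x|| at x
        S = H @ P_pred @ H.T + R                # innovation covariance
        K = P_pred @ H.T / S                    # Kalman gain
        # IEKF form: innovation referenced to the prediction, not to x
        x = x_pred + (K * (z - h - H @ (x_pred - x))).ravel()
    P = (np.eye(len(x)) - K @ H) @ P_pred
    return x, P

x_pred = np.array([3.0, 4.1])                   # prior position, range ~5.08
P_pred = np.eye(2) * 0.5
x_new, P_new = iekf_update(x_pred, P_pred, z=5.0, R=np.array([[0.01]]))
```

With a low measurement noise the iterated estimate is pulled to a point whose range is close to the observed z = 5.0.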

  10. Performance enhancement for a GPS vector-tracking loop utilizing an adaptive iterated extended Kalman filter.

    PubMed

    Chen, Xiyuan; Wang, Xiying; Xu, Yuan

    2014-12-09

    This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system with model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error and, based on this, a modified iterated extended Kalman filter (IEKF) named the adaptive iterated Kalman filter (AIEKF) is proposed. A vector-tracking GPS receiver utilizing the AIEKF is implemented to evaluate the performance of the proposed method. Road tests show that the proposed method has an obvious accuracy advantage over the IEKF and adaptive extended Kalman filter (AEKF) in position determination. The results show that the proposed method effectively reduces the root-mean-square error (RMSE) of position (including longitude, latitude and altitude). Compared with the EKF, the position RMSE values of the AIEKF are reduced by about 45.1%, 40.9% and 54.6% in the east, north and up directions, respectively. Compared with the IEKF, the position RMSE values of the AIEKF are reduced by about 25.7%, 19.3% and 35.7% in the east, north and up directions, respectively. Compared with the AEKF, the position RMSE values of the AIEKF are reduced by about 21.6%, 15.5% and 30.7% in the east, north and up directions, respectively.

  11. Building and measuring a high performance network architecture

    SciTech Connect

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck; Dugan, Jon; Wheeler, David; Wing, William R; Nickless, William; Goddard, Gregory; Corbato, Steven; Love, E. Paul; Daspit, Paul; Edwards, Hal; Mercer, Linden; Koester, David; Decina, Basil; Dart, Eli; Paul Reisinger, Paul; Kurihara, Riki; Zekauskas, Matthew J; Plesset, Eric; Wulf, Julie; Luce, Douglas; Rogers, James; Duncan, Rex; Mauth, Jeffery

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest-performance networks in the world. At SC2000, large-scale and complex local and wide area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. The testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high performance networking technologies, and accumulated measurements that give insight into the networks of the future.

  12. Vector performance analysis of three supercomputers - Cray-2, Cray Y-MP, and ETA10-Q

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod A.

    1989-01-01

    Results are presented from a series of experiments studying the single-processor performance of three supercomputers: the Cray-2, Cray Y-MP, and ETA10-Q. The main objective of this study is to determine the impact of certain architectural features on the performance of modern supercomputers. Features such as clock period, memory links, memory organization, multiple functional units, and chaining are considered. A simple performance model is used to examine the impact of these features on the performance of a set of basic operations. The results of implementing this set on these machines for three vector lengths and three memory strides are presented and compared. For unit-stride operations, the Cray Y-MP outperformed the Cray-2 by as much as three times and the ETA10-Q by as much as four times. Moreover, unlike on the Cray-2 and ETA10-Q, even-numbered strides do not cause a major performance degradation on the Cray Y-MP. Two numerical algorithms are also used for comparison. For three problem sizes of both algorithms, the Cray Y-MP outperformed the Cray-2 by 43 to 68 percent and the ETA10-Q by four to eight times.
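    The stride effects measured in these experiments are easy to probe on current hardware, although the mechanism today is cache-line utilization rather than the memory-bank conflicts of these machines. A rough NumPy timing sketch, with arbitrary sizes and strides:

```python
import time
import numpy as np

n = 1 << 19                       # elements actually touched per operation
b = np.random.rand(8 * n)
c = np.random.rand(8 * n)

# Time a triad a = b + s*c reading the operands at different memory strides;
# slicing creates strided views, so the reads really are strided.
for stride in (1, 2, 8):
    bv, cv = b[::stride][:n], c[::stride][:n]
    t0 = time.perf_counter()
    a = bv + 3.0 * cv
    dt = time.perf_counter() - t0
    print(f"stride {stride}: {dt * 1e3:.2f} ms")
```

Larger strides touch more cache lines per useful element, so the same arithmetic usually takes measurably longer; absolute timings depend entirely on the machine.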

  13. Performance Statistics of the DWD Ceilometer Network

    NASA Astrophysics Data System (ADS)

    Wagner, Frank; Mattis, Ina; Flentje, Harald; Thomas, Werner

    2015-04-01

    The DWD ceilometer network was created in 2008. In the following years, more and more ceilometers of type CHM15k (manufactured by Jenoptik) were installed with the aim of observing atmospheric aerosol particles. Now, 58 ceilometers are in continuous operation. The presentation addresses, on the one hand, the statistical behavior of several instrumental parameters related to measurement performance; some problems are discussed, with conclusions and recommendations on which parameters should be monitored for unattended automated operation. On the other hand, the presentation provides a statistical analysis of several measured quantities. Differences between geographic locations (e.g., north versus south, mountainous versus flat terrain) are investigated. For instance, the occurrence of fog in lowlands is associated with the overall meteorological situation, whereas mountain stations such as Hohenpeissenberg are often within a cumulus cloud, which appears as fog in the measurements. The longest time series of data was acquired at Lindenberg, where the ceilometer was installed in 2008. By the end of 2008 the number of installed ceilometers had increased to 28, and by the end of 2009 already 42 instruments were measuring. In 2011 the ceilometers were upgraded to the so-called Nimbus instruments, which have enhanced capabilities for coping with and correcting short-term instrumental fluctuations (e.g., detector sensitivity). About 30% of all ceilometer measurements were made under clear skies and hence can be used without limitations for aerosol particle observations. Multiple cloud layers could be detected in only about 23% of all cases with clouds, caused either by the presence of only one cloud layer or by the ceilometer laser beam failing to penetrate the lowest cloud, leaving it blind to higher cloud layers. Three cloud layers could be detected in only 5% of all cases with clouds.
    Considering only cases without clouds, the diurnal cycle for

  14. Urban Heat Island Growth Modeling Using Artificial Neural Networks and Support Vector Regression: A case study of Tehran, Iran

    NASA Astrophysics Data System (ADS)

    Sherafati, Sh. A.; Saradjian, M. R.; Niazmardi, S.

    2013-09-01

    Numerous investigations of the Urban Heat Island (UHI) show that land cover change is the main factor increasing Land Surface Temperature (LST) in urban areas. Therefore, to achieve a model able to simulate UHI growth, urban expansion should be considered first. Considerable research on urban expansion modeling has been done based on cellular automata (CA). Accordingly, the objective of this paper is to implement a CA method for trend detection of Tehran's UHI spatiotemporal growth based on urban sprawl parameters (such as distance to nearest road, Digital Elevation Model (DEM), slope and aspect ratios). It should be mentioned that UHI growth modeling may involve more complexity than urban expansion modeling, since the temperature of each pixel must be estimated rather than merely its state (urban or non-urban). The most challenging part of a CA model is the definition of transfer rules. Here, two methods were used to find appropriate transfer rules: Artificial Neural Networks (ANN) and Support Vector Regression (SVR). These approaches were chosen because artificial neural networks and support vector regression have significant abilities to handle the complications of such a spatial analysis in comparison with other methods like genetic or swarm intelligence algorithms. In this paper, the UHI change trend between 1984 and 2007 is discussed. For this purpose, urban sprawl parameters for 1984 were calculated and added to the retrieved LST of that year. To obtain LST, Thematic Mapper (TM) and Enhanced Thematic Mapper (ETM+) night-time images were exploited; night-time images were used because the UHI phenomenon is more pronounced during night hours. After that, multilayer feed-forward neural networks and support vector regression were used separately to find the relationship between these data and the retrieved LST in 2007. Since the transfer rules might not be the same in different regions, the satellite image of the city has
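    A minimal sketch of the SVR half of the transfer-rule idea: regress a later LST value on urban sprawl covariates. All data below are synthetic stand-ins; the feature names follow the abstract, while the assumed warming relation, value ranges, and SVR parameters are invented.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n = 500
dist_road = rng.uniform(0, 5000, n)      # m, distance to nearest road
dem = rng.uniform(900, 1800, n)          # m, elevation
slope = rng.uniform(0, 30, n)            # degrees
lst_1984 = rng.uniform(290, 310, n)      # K, retrieved LST (synthetic)

X = np.column_stack([dist_road, dem, slope, lst_1984])
# Assumed relation: warming is stronger near roads and at lower elevations
lst_2007 = (lst_1984 + 3.0 * np.exp(-dist_road / 1000.0)
            - 0.002 * (dem - 900.0) + 0.3 * rng.standard_normal(n))

# Scale features before the RBF SVR, then learn the "transfer rule"
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, lst_2007)
r2 = model.score(X, lst_2007)            # training fit of the learned rule
```

In the CA setting the fitted regressor would then be applied per pixel to propagate temperatures to the next time step.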

  15. A parallel-vector algorithm for rapid structural analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.

    1990-01-01

    A fast, accurate Choleski method for the solution of symmetric systems of linear equations is presented. This direct method is based on a variable-band storage scheme and takes advantage of column heights to reduce the number of operations in the Choleski factorization. The method employs parallel computation in the outermost DO-loop and vector computation via the loop-unrolling technique in the innermost DO-loop. The method avoids computations with zeros outside the column heights and, as an option, zeros inside the band. The close relationship between the Choleski and Gauss elimination methods is examined. The minor changes required to convert the Choleski code to a Gauss code to solve non-positive-definite symmetric systems of equations are identified. The results of two large-scale structural analyses performed on supercomputers demonstrate the accuracy and speed of the method.
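    The variable-band idea can be illustrated with a small dense sketch: the factorization loops start at each row's first nonzero column (its skyline profile), skipping the zeros above the column heights. This shows only the storage-driven loop bounds, not the paper's parallel, unrolled production code, and the test matrix is invented.

```python
import numpy as np

def skyline_cholesky(A, profile):
    """Cholesky factor L with inner loops bounded by profile[i], the first
    (possibly) nonzero column of row i; Cholesky preserves this profile."""
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        top = profile[j]
        # Diagonal: inner product only over columns inside the profile
        s = A[j, j] - L[j, top:j] @ L[j, top:j]
        L[j, j] = np.sqrt(s)
        for i in range(j + 1, n):
            lo = max(top, profile[i])   # skip zeros above both profiles
            L[i, j] = (A[i, j] - L[i, lo:j] @ L[j, lo:j]) / L[j, j]
    return L

# Small symmetric positive-definite test matrix with bandwidth 1
A = (np.diag(np.full(6, 4.0))
     + np.diag(np.full(5, 1.0), -1) + np.diag(np.full(5, 1.0), 1))
profile = [max(0, i - 1) for i in range(6)]
L = skyline_cholesky(A, profile)
err = float(np.max(np.abs(L @ L.T - A)))   # reconstruction error
```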

  16. Learning Vector Quantization Neural Networks Improve Accuracy of Transcranial Color-coded Duplex Sonography in Detection of Middle Cerebral Artery Spasm—Preliminary Report

    PubMed Central

    Swiercz, Miroslaw; Kochanowicz, Jan; Weigele, John; Hurst, Robert; Liebeskind, David S.; Mariak, Zenon; Melhem, Elias R.

    2009-01-01

    To determine the performance of an artificial neural network in transcranial color-coded duplex sonography (TCCS) diagnosis of middle cerebral artery (MCA) spasm. TCCS was prospectively acquired within 2 h prior to routine cerebral angiography in 100 consecutive patients (54M:46F, median age 50 years). Angiographic MCA vasospasm was classified as mild (<25% of vessel caliber reduction), moderate (25–50%), or severe (>50%). A Learning Vector Quantization neural network classified MCA spasm based on TCCS peak-systolic, mean, and end-diastolic velocity data. During a four-class discrimination task, accurate classification by the network ranged from 64.9% to 72.3%, depending on the number of neurons in the Kohonen layer. Accurate classification of vasospasm ranged from 79.6% to 87.6%, with an accuracy of 84.7% to 92.1% for the detection of moderate-to-severe vasospasm. An artificial neural network may increase the accuracy of TCCS in diagnosis of MCA spasm. PMID:18704768
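    The Learning Vector Quantization scheme used here is simple enough to sketch in full. Below is plain LVQ1 on synthetic three-feature "velocity" vectors (peak-systolic, mean, end-diastolic); the class means, prototype count and learning rate are invented, and the study itself used four severity classes rather than the two shown.

```python
import numpy as np

def train_lvq1(X, y, n_proto_per_class, lr=0.05, epochs=50, seed=0):
    """LVQ1: pull the best-matching prototype toward same-class samples,
    push it away from other-class samples."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), n_proto_per_class, replace=False)
        protos.append(X[idx].astype(float).copy())
        labels.append(np.full(n_proto_per_class, c))
    W, wy = np.vstack(protos), np.concatenate(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            k = np.argmin(((W - X[i]) ** 2).sum(axis=1))   # best-matching unit
            step = lr * (X[i] - W[k])
            W[k] += step if wy[k] == y[i] else -step       # LVQ1 rule
    return W, wy

def predict(W, wy, X):
    d = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
    return wy[np.argmin(d, axis=1)]

# Synthetic velocities (cm/s): normal flow vs spasm-like high velocities
rng = np.random.default_rng(1)
X0 = rng.normal([90, 60, 40], 10, (100, 3))
X1 = rng.normal([180, 120, 80], 15, (100, 3))
X, y = np.vstack([X0, X1]), np.repeat([0, 1], 100)
W, wy = train_lvq1(X, y, n_proto_per_class=4)
acc = (predict(W, wy, X) == y).mean()
```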

  17. Flood damage assessment performed based on Support Vector Machines combined with Landsat TM imagery and GIS

    NASA Astrophysics Data System (ADS)

    Alouene, Y.; Petropoulos, G. P.; Kalogrias, A.; Papanikolaou, F.

    2012-04-01

    Floods are a water-related natural disaster affecting and often threatening different aspects of human life, causing property damage and economic degradation, and in some instances even loss of precious human lives. Being able to provide accurate and cost-effective assessment of flood damage is essential to both scientists and policy makers in many respects, ranging from mitigation and assessment of damage extent to rehabilitation of affected areas. Remote sensing, often combined with Geographical Information Systems (GIS), has shown very promising potential for performing rapid and cost-effective flood damage assessment, particularly in remote, otherwise inaccessible locations. Progress in remote sensing during the last twenty years or so has resulted in the development of a large number of image processing techniques suitable for use with a range of remote sensing data in performing flood damage assessment. Supervised image classification is regarded as one of the most widely used approaches employed for this purpose. Yet the use of recently developed image classification algorithms, such as the machine learning-based Support Vector Machines (SVMs) classifier, has not been adequately investigated for this purpose. The objective of our work has been to quantitatively evaluate the ability of SVMs combined with Landsat TM multispectral imagery to perform damage assessment of a flood that occurred in a Mediterranean region. A further objective has been to examine whether the inclusion of additional spectral information apart from the original TM bands can improve flooded-area extraction accuracy in SVMs. As a case study we use the river Evros flood of 2010 in the north of Greece, for which TM imagery before and shortly after the flooding was available. Assessment of the flooded area is performed in a GIS environment on the basis of classification accuracy assessment metrics as well as comparisons versus a vector
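    The accuracy-assessment step is standard and easy to make concrete: build a confusion matrix between reference and classified labels, then derive overall accuracy and Cohen's kappa. The label vectors below are invented (1 = flooded, 0 = not flooded), not the Evros data.

```python
import numpy as np

def confusion(y_true, y_pred, k=2):
    """k-by-k confusion matrix: rows = reference, columns = classified."""
    m = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])   # reference labels
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])   # classifier output

m = confusion(y_true, y_pred)
overall = np.trace(m) / m.sum()                      # overall accuracy
expected = (m.sum(0) * m.sum(1)).sum() / m.sum() ** 2  # chance agreement
kappa = (overall - expected) / (1 - expected)        # Cohen's kappa
```

For this toy example the overall accuracy is 0.8 and kappa is about 0.58, illustrating how kappa discounts agreement expected by chance.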

  18. Static performance of nonaxisymmetric nozzles with yaw thrust-vectoring vanes

    NASA Technical Reports Server (NTRS)

    Mason, Mary L.; Berrier, Bobby L.

    1988-01-01

    A static test was conducted in the static test facility of the Langley 16-Foot Transonic Tunnel to evaluate the effects of post-exit vane vectoring on nonaxisymmetric nozzles. Three baseline nozzles were tested: an unvectored two-dimensional convergent nozzle, an unvectored two-dimensional convergent-divergent nozzle, and a pitch-vectored two-dimensional convergent-divergent nozzle. Each nozzle geometry was tested with three exit aspect ratios (exit width divided by exit height) of 1.5, 2.5 and 4.0. Two post-exit yaw vanes were externally mounted on the nozzle sidewalls at the nozzle exit to generate yaw thrust vectoring. Vane deflection angle (0, -20 and -30 deg), vane planform and vane curvature were varied during the test. Results indicate that the post-exit vane concept produced resultant yaw vector angles that were always smaller than the geometric yaw vector angle. Losses in resultant thrust ratio increased with the magnitude of the resultant yaw vector angle. The widest post-exit vane produced the largest degree of flow turning, but vane curvature had little effect on thrust vectoring. Pitch vectoring was independent of yaw vectoring, indicating that multiaxis thrust vectoring is feasible for the nozzle concepts tested.
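    The relation behind "resultant yaw vector angle" is worth making explicit: it is the arctangent of the measured side force over the axial force, which is why it falls short of the geometric vane angle when the vanes turn the flow incompletely. The force values below are invented for illustration.

```python
import math

def resultant_yaw_angle(side_force, axial_force):
    """Resultant yaw thrust-vector angle in degrees (sign follows side force)."""
    return math.degrees(math.atan2(side_force, axial_force))

axial = 1000.0    # axial thrust component, lbf (invented)
side = -268.0     # side force with vanes at -20 deg geometric (invented)
delta = resultant_yaw_angle(side, axial)   # about -15 deg: less than geometric
```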

  19. End-to-end network/application performance troubleshooting methodology

    SciTech Connect

    Wu, Wenji; Bobyshev, Andrey; Bowden, Mark; Crawford, Matt; Demar, Phil; Grigaliunas, Vyto; Grigoriev, Maxim; Petravick, Don; /Fermilab

    2007-09-01

    The computing models for HEP experiments are globally distributed and grid-based. Obstacles to good network performance arise from many causes and can be a major impediment to the success of the computing models for HEP experiments. Factors that affect overall network/application performance exist on the hosts themselves (application software, operating system, hardware), in the local area networks that support the end systems, and within the wide area networks. Since the computer and network systems are globally distributed, it can be very difficult to locate and identify the factors that are hurting application performance. In this paper, we present an end-to-end network/application performance troubleshooting methodology developed and in use at Fermilab. The core of our approach is to narrow down the problem scope with a divide and conquer strategy. The overall complex problem is split into two distinct sub-problems: host diagnosis and tuning, and network path analysis. After satisfactorily evaluating, and if necessary resolving, each sub-problem, we conduct end-to-end performance analysis and diagnosis. The paper will discuss tools we use as part of the methodology. The long term objective of the effort is to enable site administrators and end users to conduct much of the troubleshooting themselves, before (or instead of) calling upon network and operating system 'wizards,' who are always in short supply.

  20. A comparative study of artificial neural network, adaptive neuro fuzzy inference system and support vector machine for forecasting river flow in the semiarid mountain region

    NASA Astrophysics Data System (ADS)

    He, Zhibin; Wen, Xiaohu; Liu, Hu; Du, Jun

    2014-02-01

    Data-driven models are very useful for river flow forecasting when the underlying physical relationships are not fully understood, but it is not clear whether these models still perform well in small river basins of semiarid mountain regions with complicated topography. In this study, the potential of three different data-driven methods, artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS) and support vector machine (SVM), was explored for forecasting river flow in the semiarid mountain region of northwestern China. The models analyzed different combinations of antecedent river flow values, and the appropriate input vector was selected based on the analysis of residuals. The performance of the ANN, ANFIS and SVM models on the training and validation sets was compared with the observed data. The model using three antecedent values of flow was selected as the best-fit model for river flow forecasting. To evaluate the results of the ANN, ANFIS and SVM models more rigorously, four standard quantitative statistical performance measures, the coefficient of correlation (R), root mean squared error (RMSE), Nash-Sutcliffe efficiency coefficient (NS) and mean absolute relative error (MARE), were employed to evaluate the performance of the various models developed. The results indicate that the performance obtained by ANN, ANFIS and SVM in terms of the different evaluation criteria during the training and validation periods does not vary substantially; the performance of the ANN, ANFIS and SVM models in river flow forecasting was satisfactory. A detailed comparison of overall performance indicated that the SVM model performed better than ANN and ANFIS in river flow forecasting on the validation data sets. The results also suggest that the ANN, ANFIS and SVM methods can be successfully applied to establish river flow forecasting models in semiarid mountain regions with complicated topography.
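    The four evaluation measures named in the abstract are quick to write down exactly; obs and sim below are invented stand-ins for observed and forecast flows.

```python
import numpy as np

def metrics(obs, sim):
    """R, RMSE, Nash-Sutcliffe efficiency, and mean absolute relative error."""
    r = np.corrcoef(obs, sim)[0, 1]
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    ns = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    mare = np.mean(np.abs(obs - sim) / obs)
    return r, rmse, ns, mare

obs = np.array([12.0, 15.0, 30.0, 22.0, 18.0, 25.0])   # m^3/s, invented
sim = np.array([13.0, 14.0, 28.0, 24.0, 17.0, 26.0])
r, rmse, ns, mare = metrics(obs, sim)
```

NS is 1 for a perfect forecast and drops below 0 when the forecast is worse than predicting the observed mean, which is what makes it a stricter measure than R alone.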

  1. A performance data network for solar process heat systems

    SciTech Connect

    Barker, G.; Hale, M.J.

    1996-03-01

    A solar process heat (SPH) data network has been developed to access remote-site performance data from operational solar heat systems. Each SPH system in the data network is outfitted with monitoring equipment and a datalogger. The datalogger is accessed via modem from the data network computer at the National Renewable Energy Laboratory (NREL). The dataloggers collect both ten-minute and hourly data and download it to the data network every 24 hours for archiving, processing, and plotting. The system data collected include energy delivered (fluid temperatures and flow rates) and site meteorological conditions, such as solar insolation and ambient temperature. The SPH performance data network was created for collecting performance data from SPH systems serving in industrial applications or from systems using technologies that show promise for industrial applications. The network will be used to identify areas of SPH technology needing further development, to correlate computer models with actual performance, and to improve the credibility of SPH technology. The SPH data network also provides a centralized bank of user-friendly performance data that will give prospective SPH users an indication of how actual systems perform. There are currently three systems being monitored and archived under the SPH data network: two are parabolic trough systems and the third is a flat-plate system. The two trough systems both heat water for prisons; the hot water is used for personal hygiene, kitchen operations, and laundry. The flat-plate system heats water for meat processing at a slaughterhouse. We plan to connect another parabolic trough system to the network during the first months of 1996. We continue to look for good examples of systems using other types of collector technologies and systems serving new applications (such as absorption chilling) to include in the SPH performance data network.

  2. Performance comparison of neural networks for undersea mine detection

    NASA Astrophysics Data System (ADS)

    Toborg, Scott T.; Lussier, Matthew; Rowe, David

    1994-03-01

    This paper describes the design of an undersea mine detection system and compares the performance of various neural network models for classification of features extracted from side-scan sonar images. Techniques for region of interest and statistical feature extraction are described. Subsequent feature analysis verifies the need for neural network processing. Several different neural and conventional pattern classifiers are compared including: k-Nearest Neighbors, Backprop, Quickprop, and LVQ. Results using the Naval Image Database from Coastal Systems Station (Panama City, FL) indicate neural networks have consistently superior performance over conventional classifiers. Concepts for further performance improvements are also discussed including: alternative image preprocessing and classifier fusion.

  3. Performance of a Regional Aeronautical Telecommunications Network

    NASA Technical Reports Server (NTRS)

    Bretmersky, Steven C.; Ripamonti, Claudio; Konangi, Vijay K.; Kerczewski, Robert J.

    2001-01-01

    This paper reports the findings of a simulation of the ATN (Aeronautical Telecommunications Network) for three typical average-sized U.S. airports and their associated air traffic patterns. The models of the protocols were designed to achieve the same functionality and meet the ATN specifications. The focus of this project is on the subnetwork and routing aspects of the simulation. To maintain continuous communication between the aircraft and the ground facilities, a model based on mobile IP is used. The results indicate that continuous communication is indeed possible. The network can support two applications of significance in the immediate future: FTP and HTTP traffic. Results from this simulation prove the feasibility of developing the ATN concept for AC/ATM (Advanced Communications for Air Traffic Management).

  4. Evaluation of delay performance in valiant load-balancing network

    NASA Astrophysics Data System (ADS)

    Yu, Yingdi; Jin, Yaohui; Cheng, Hong; Gao, Yu; Sun, Weiqiang; Guo, Wei; Hu, Weisheng

    2007-11-01

    Network traffic grows in an unpredictable way, which forces network operators to over-provision their backbone networks in order to meet increasing demands. Because of new users, applications and unexpected failures, utilization is typically below 30% [1]. There are two methods aimed at solving this problem. The first is to adjust link capacity with the variation of traffic; however, in optical networks this requires a rapid signaling scheme and large buffers. The second method is to use the statistical multiplexing function of IP routers connected point-to-point by optical links to counteract the effect of traffic variation [2], but the routing mechanism becomes much more complex and introduces more overhead into the backbone network. To exploit the potential of the network and reduce its overhead, Valiant Load-balancing has been proposed for backbone networks in order to enhance network utilization and simplify the routing process. Raising network utilization and improving throughput inevitably influence the end-to-end delay; however, studies of the delays in load-balancing networks are lacking. In the work presented in this paper, we study the delay performance in a Valiant Load-balancing network and isolate the queuing delay for modeling and detailed analysis. We design the architecture of a switch with load-balancing capability for our simulation and experiment, and analyze the relationship between switch architecture and delay performance.
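    The utilization/delay tension the authors study shows up in even the simplest queueing model: for an M/M/1 queue the mean sojourn time is 1/(mu - lambda), which diverges as utilization approaches one. This toy relation is only a motivation for why delay must be examined, not the paper's switch model.

```python
mu = 1.0                                   # service rate (packets per unit time)
for rho in (0.3, 0.6, 0.9, 0.99):          # utilization = lambda / mu
    lam = rho * mu
    delay = 1.0 / (mu - lam)               # M/M/1 mean time in system
    print(f"utilization {rho:4.2f} -> mean delay {delay:7.2f}")
```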

  5. Applying neural networks to optimize instrumentation performance

    SciTech Connect

    Start, S.E.; Peters, G.G.

    1995-06-01

    Well-calibrated instrumentation is essential in providing meaningful information about the status of a plant. Signals from plant instrumentation frequently have inherent non-linearities, may be affected by environmental conditions, and can therefore cause calibration difficulties for the people who maintain them. Two neural network approaches are described in this paper for improving the accuracy of a non-linear, temperature-sensitive level probe used in Experimental Breeder Reactor II (EBR-II) that was difficult to calibrate.

  6. A vector-integration-to-endpoint model for performance of viapoint movements.

    PubMed

    Bullock, Daniel; Bongers, Raoul M.; Lankhorst, Marnix; Beek, Peter J.

    1999-01-01

    Viapoint (VP) movements are movements to a desired point that are constrained to pass through an intermediate point. Studies have shown that VP movements possess properties, such as smooth curvature around the VP, that are not explicable by treating VP movements as strict concatenations of simpler point-to-point (PTP) movements. Such properties have led some theorists to propose whole-trajectory optimization models, which imply that the entire trajectory is precomputed before movement initiation. This paper reports new experiments conducted to systematically compare VP with PTP trajectories. Analyses revealed a statistically significant early directional deviation in VP movements but no associated curvature change. An explanation of this effect is offered by extending the vector-integration-to-endpoint (VITE) model (Bullock, D., & Grossberg, S. (1988a). Neural dynamics of planned arm movements: Emergent invariants and speed-accuracy properties during trajectory formation. Psychological Review, 95, 49-90; Bullock, D., & Grossberg, S. (1988b). The VITE model: A neural command circuit for generating arm and articulator trajectories. In J.A.S. Kelso, A.J. Mandell & M.F. Schlesinger (Eds.), Dynamic patterns in complex systems (pp. 305-326). Singapore: World Scientific Publishers.), which postulates that voluntary movement trajectories emerge as internal gating signals control the integration of continuously computed vector commands based on the evolving, perceptible difference between desired and actual position variables. The model explains the observed trajectories of VP and PTP movements as emergent properties of a dynamical system that does not precompute entire trajectories before movement initiation. The new model includes a working memory and a stage sensitive to time-to-contact information. These cooperate to control serial performance. The structural and functional relationships proposed in the model are consistent with available data on forebrain physiology

  7. Towards a Social Networks Model for Online Learning & Performance

    ERIC Educational Resources Information Center

    Chung, Kon Shing Kenneth; Paredes, Walter Christian

    2015-01-01

    In this study, we develop a theoretical model to investigate the association between social network properties, "content richness" (CR) in academic learning discourse, and performance. CR is the extent to which one contributes content that is meaningful, insightful and constructive to aid learning and by social network properties we…

  8. Interactive Effects of Southern Rice Black-Streaked Dwarf Virus Infection of Host Plant and Vector on Performance of the Vector, Sogatella furcifera (Homoptera: Delphacidae).

    PubMed

    Lei, Wenbin; Liu, Danfeng; Li, Pei; Hou, Maolin

    2014-10-01

    Performance of insect vectors can be influenced by the viruses they transmit, either directly by infection of the vectors or indirectly via infection of the host plants. Southern rice black-streaked dwarf virus (SRBSDV) is a propagative virus transmitted by the white-backed planthopper, Sogatella furcifera (Horváth). To elucidate the influence of SRBSDV on the performance of white-backed planthopper, life parameters of viruliferous and nonviruliferous white-backed planthopper fed rice seedlings infected or noninfected with SRBSDV were measured using a factorial design. Regardless of the infection status of the rice plant host, viruliferous white-backed planthopper nymphs took longer to develop from nymph to adult than did nonviruliferous nymphs. Viruliferous white-backed planthopper females deposited fewer eggs than nonviruliferous females, and both viruliferous and nonviruliferous white-backed planthopper females laid fewer eggs on infected than on noninfected plants. Longevity of white-backed planthopper females was also affected by the infection status of the rice plant and white-backed planthopper. Nonviruliferous white-backed planthopper females that fed on infected rice plants lived longer than the other three treatment groups. These results indicate that the performance of white-backed planthopper is affected by SRBSDV either directly (by infection of white-backed planthopper) or indirectly (by infection of rice plant). The extended development of viruliferous nymphs and the prolonged life span of nonviruliferous adults on infected plants may increase their likelihood of transmitting virus, which would increase virus spread. PMID:26309259

  9. High-performance, bare silver nanowire network transparent heaters.

    PubMed

    Ergun, Orcun; Coskun, Sahin; Yusufoglu, Yusuf; Unalan, Husnu Emrah

    2016-11-01

    Silver nanowire (Ag NW) networks are one of the most promising candidates for the replacement of indium tin oxide (ITO) thin films in many different applications. Recently, Ag-NW-based transparent heaters (THs) showed excellent heating performance. In order to overcome the instability issues of Ag NW networks, researchers have offered different hybrid structures. However, these approaches not only require extra processing, but also decrease the optical performance of Ag NW networks. So, it is important to investigate and determine the thermal performance limits of bare-Ag-NW-network-based THs. Herein, we report on the effect of NW density, contact geometry, applied bias, flexing and incremental bias application on the TH performance of Ag NW networks. Ag-NW-network-based THs with a sheet resistance and percentage transmittance of 4.3 Ω sq(-1) and 83.3%, respectively, and a NW density of 1.6 NW μm(-2) reached a maximum temperature of 275 °C under incremental bias application (5 V maximum). With this performance, our results provide a different perspective on bare-Ag-NW-network-based transparent heaters. PMID:27678197

  12. IBM SP high-performance networking with a GRF.

    SciTech Connect

    Navarro, J.P.

    1999-05-27

    Increasing use of highly distributed applications, demand for faster data exchange, and highly parallel applications can push the limits of conventional external networking for IBM SP sites. In technical computing applications we have observed a growing use of a pipeline of hosts and networks collaborating to collect, process, and visualize large amounts of real-time data. The GRF, a high-performance IP switch from Ascend and IBM, is the first backbone network switch to offer a media card that can directly connect to an SP Switch. This enables switch-attached hosts in an SP complex to communicate at near SP Switch speeds with other GRF-attached hosts and networks.

  13. Performance evaluation of optical cross-connected networks

    NASA Astrophysics Data System (ADS)

    Castanon Avila, Gerardo Antonio

    1998-07-01

    The transmission performance of regular two-connected multi-hop transparent optical networks under uniform traffic with hot-potato and single-buffer deflection routing schemes is evaluated. The Manhattan Street (MS) network and ShuffleNet (SN) are compared in terms of bit error rate (BER) and packet error rate (PER). We implement a novel strategy of analysis in which the transmission performance evaluation is linked to the traffic randomness of the networks. Amplifier spontaneous emission (ASE) noise and device-induced crosstalk severely limit network characteristics such as propagation distance, sustainable traffic, and bit rate. To improve the teletraffic/transmission performance of regular two-connected optical networks, a hybrid semi-transparent store-and-forward node architecture is presented. MS and SN are compared in terms of average queueing delay, queue size, propagation delay, throughput, and BER. Packets are stored only in the case of conflict to avoid deflection; otherwise they transparently traverse the node (transparent cut-through routing) without optical-electronic conversion. This architecture performs well in terms of throughput, propagation delay, and BER. It is also shown that by combining deflection routing with the store-and-forward scheme, the network can accommodate two different bit rates. This suggests that the proposed hybrid scheme may have good potential for future multimedia networks. In addition, the steady-state behavior of two-connected mesh packet-switched optical networks under a wavelength translation (WT) scheme is analyzed. It is shown that wavelength translation substantially mitigates the blocking of cells in cross-connected networks. By increasing the number of wavelengths and employing wavelength translation, the probability of deflection can be reduced, which in turn leads to a significant improvement in the teletraffic performance of the network.

  14. The performance analysis of linux networking - packet receiving

    SciTech Connect

    Wu, Wenji; Crawford, Matt; Bowden, Mark; /Fermilab

    2006-11-01

    The computing models for High-Energy Physics experiments are becoming ever more globally distributed and grid-based, both for technical reasons (e.g., to place computational and data resources near each other and the demand) and for strategic reasons (e.g., to leverage equipment investments). To support such computing models, the network and end systems, computing and storage, face unprecedented challenges. One of the biggest challenges is to transfer scientific data sets--now in the multi-petabyte (10^15 bytes) range and expected to grow to exabytes within a decade--reliably and efficiently among facilities and computation centers scattered around the world. Both the network and end systems should be able to provide the capabilities to support high bandwidth, sustained, end-to-end data transmission. Recent trends in technology are showing that although the raw transmission speeds used in networks are increasing rapidly, the rate of advancement of microprocessor technology has slowed down. Therefore, network protocol-processing overheads have risen sharply in comparison with the time spent in packet transmission, resulting in degraded throughput for networked applications. More and more, it is the network end system, instead of the network, that is responsible for degraded performance of network applications. In this paper, the Linux system's packet receive process is studied from NIC to application. We develop a mathematical model to characterize the Linux packet receiving process. Key factors that affect Linux system network performance are analyzed.

  15. Differentiation of several interstitial lung disease patterns in HRCT images using support vector machine: role of databases on performance

    NASA Astrophysics Data System (ADS)

    Kale, Mandar; Mukhopadhyay, Sudipta; Dash, Jatindra K.; Garg, Mandeep; Khandelwal, Niranjan

    2016-03-01

    Interstitial lung disease (ILD) is a complicated group of pulmonary disorders. High-resolution computed tomography (HRCT) is considered the best imaging technique for the analysis of different pulmonary disorders. HRCT findings can be categorized into several patterns, viz. consolidation, emphysema, ground glass opacity, nodular, normal, etc., based on their texture-like appearance. Clinicians often find it difficult to diagnose these patterns because of their complex nature. In such a scenario, a computer-aided diagnosis system could help clinicians identify the patterns. Several approaches have been proposed for the classification of ILD patterns, including the computation of textural features and the training/testing of classifiers such as artificial neural networks (ANN) and support vector machines (SVM). In this paper, wavelet features are calculated from two different ILD databases, the publicly available MedGIFT ILD database and a private ILD database, followed by performance evaluation of ANN and SVM classifiers in terms of average accuracy. It is found that the average classification accuracy of SVM is greater than that of ANN when trained and tested on the same database. The investigation was continued to test the variation in classifier accuracy when training and testing are performed with alternate databases, and when the classifiers are trained and tested with a database formed by merging samples from the same class from the two individual databases. The average classification accuracy drops when two independent databases are used for training and testing, respectively, and improves significantly when the classifiers are trained and tested with the merged database. This indicates the dependency of classification accuracy on the training data. It is observed that SVM outperforms ANN when the same database is used for training and testing.
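    The train/test protocol compared in this abstract can be illustrated with a toy sketch (a nearest-centroid classifier stands in for the SVM, and the two-feature "databases" are synthetic; all names and values here are hypothetical):

```python
import statistics

def centroid(rows):
    """Mean feature vector of a list of equal-length feature vectors."""
    return [statistics.fmean(col) for col in zip(*rows)]

def train(db):
    # db: {class_label: [feature_vector, ...]} -> one centroid per class
    return {label: centroid(rows) for label, rows in db.items()}

def accuracy(model, db):
    """Fraction of samples in db assigned to their true class."""
    total = correct = 0
    for label, rows in db.items():
        for row in rows:
            pred = min(model, key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(row, model[c])))
            correct += pred == label
            total += 1
    return correct / total

# Two synthetic "databases" with a deliberate feature shift between them.
db_a = {"GGO": [[1.0, 0.1], [1.1, 0.2]], "Normal": [[0.0, 1.0], [0.1, 1.1]]}
db_b = {"GGO": [[1.6, 0.5], [1.7, 0.4]], "Normal": [[0.5, 1.6], [0.4, 1.7]]}
merged = {k: db_a[k] + db_b[k] for k in db_a}

same = accuracy(train(db_a), db_a)       # train and test on one database
cross = accuracy(train(db_a), db_b)      # train on A, test on B
merged_acc = accuracy(train(merged), merged)
```

    The three accuracy figures correspond to the three evaluation settings the abstract compares: same database, alternate databases, and merged database.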

  16. Performance analysis of local area networks

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.; Hall, Mary Grace

    1990-01-01

    A simulation of the TCP/IP protocol running on a CSMA/CD data link layer is described. The simulation was implemented in Simula, an object-oriented discrete-event language. It allows the user to set the number of stations at run time, as well as some station parameters: the interrupt time and the DMA transfer rate for each station. In addition, the user may configure the network at run time with stations of differing characteristics. Two station types are available, and the parameters of both types are read from input files at run time. The parameters include the DMA transfer rate, interrupt time, data rate, average message size, maximum frame size, and the average interarrival time of messages per station. The information collected for the network is the throughput and the mean delay per packet. For each station, the number of messages attempted and the number of messages successfully transmitted are collected, in addition to the throughput and mean packet delay per station.
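    The kind of run-time statistics the simulation collects (throughput, mean delay) can be reproduced with a heavily simplified discrete-event sketch (one shared channel, no collisions or backoff, so this is not a CSMA/CD model; all parameters are illustrative):

```python
import random

def simulate(n_msgs, arrival_rate, service_time, seed=0):
    """Minimal discrete-event loop: messages arrive at random times,
    queue for one shared channel, and we collect throughput and mean delay."""
    rng = random.Random(seed)
    t = 0.0
    arrivals = []
    for _ in range(n_msgs):
        t += rng.expovariate(arrival_rate)   # Poisson message arrivals
        arrivals.append(t)
    free_at = 0.0                            # time the channel next becomes free
    delays = []
    for a in arrivals:
        start = max(a, free_at)              # wait if the channel is busy
        free_at = start + service_time
        delays.append(free_at - a)           # queueing wait + transmission time
    return {"throughput": n_msgs / free_at,
            "mean_delay": sum(delays) / n_msgs}

stats = simulate(1000, arrival_rate=5.0, service_time=0.1)
```

    A full station model would add per-station interrupt and DMA overheads and a collision/backoff procedure on the shared medium, as in the Simula program the abstract describes.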

  17. Leveraging Structure to Improve Classification Performance in Sparsely Labeled Networks

    SciTech Connect

    Gallagher, B; Eliassi-Rad, T

    2007-10-22

    We address the problem of classification in a partially labeled network (a.k.a. within-network classification), with an emphasis on tasks in which we have very few labeled instances to start with. Recent work has demonstrated the utility of collective classification (i.e., simultaneous inferences over class labels of related instances) in this general problem setting. However, the performance of collective classification algorithms can be adversely affected by the sparseness of labels in real-world networks. We show that on several real-world data sets, collective classification appears to offer little advantage in general and hurts performance in the worst cases. In this paper, we explore a complementary approach to within-network classification that takes advantage of network structure. Our approach is motivated by the observation that real-world networks often provide a great deal more structural information than attribute information (e.g., class labels). Through experiments on supervised and semi-supervised classifiers of network data, we demonstrate that a small number of structural features can lead to consistent and sometimes dramatic improvements in classification performance. We also examine the relative utility of individual structural features and show that, in many cases, it is a combination of both local and global network structure that is most informative.
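    A small sketch of what "structural features" can look like in practice (the specific features below, degree, average neighbor degree, and BFS eccentricity, are illustrative choices, not the paper's exact feature set):

```python
from collections import deque

def structural_features(adj, node):
    """Simple local and global structure descriptors for one node.
    adj: {node: set(neighbors)} for an undirected, connected graph."""
    degree = len(adj[node])                         # local: degree
    nbr_deg = [len(adj[n]) for n in adj[node]]
    avg_nbr_degree = sum(nbr_deg) / degree if degree else 0.0
    # global-ish: eccentricity via BFS (longest shortest path from node)
    dist = {node: 0}
    q = deque([node])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    ecc = max(dist.values())
    return {"degree": degree,
            "avg_nbr_degree": avg_nbr_degree,
            "eccentricity": ecc}

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
feats = structural_features(adj, 1)
```

    Feature vectors like `feats` can then be fed to an ordinary supervised classifier alongside (or instead of) sparse attribute labels.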

  18. Storage Area Networks and The High Performance Storage System

    SciTech Connect

    Hulen, H; Graf, O; Fitzgerald, K; Watson, R W

    2002-03-04

    The High Performance Storage System (HPSS) is a mature Hierarchical Storage Management (HSM) system that was developed around a network-centered architecture, with client access to storage provided through third-party controls. Because of this design, HPSS is able to leverage today's Storage Area Network (SAN) infrastructures to provide cost effective, large-scale storage systems and high performance global file access for clients. Key attributes of SAN file systems are found in HPSS today, and more complete SAN file system capabilities are being added. This paper traces the HPSS storage network architecture from the original implementation using HIPPI and IPI-3 technology, through today's local area network (LAN) capabilities, and to SAN file system capabilities now in development. At each stage, HPSS capabilities are compared with capabilities generally accepted today as characteristic of storage area networks and SAN file systems.

  19. Optical interconnection networks for high-performance computing systems.

    PubMed

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  20. Performance enhancement of OSPF protocol in the private network

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Lu, Yang; Lin, Xiaokang

    2005-11-01

    The private network serves as an information exchange platform supporting integrated services over microwave channels, and accordingly selects Open Shortest Path First (OSPF) as its IP routing protocol. However, standard OSPF does not fit the private network well because of the network's special characteristics. This paper presents our modifications to the standard protocol in such aspects as the single-area scheme, link state advertisement (LSA) types and formats, OSPF packet formats, important state machines, the setting of protocol parameters, and link-flap damping. Finally, simulations are performed in various scenarios, and the results indicate that our modifications can effectively enhance OSPF performance in the private network.

  1. Challenges for high-performance networking for exascale computing.

    SciTech Connect

    Barrett, Brian W.; Hemmert, K. Scott; Underwood, Keith Douglas; Brightwell, Ronald Brian

    2010-05-01

    Achieving the next three orders of magnitude of performance increase, moving from petascale to exascale computing, will require significant advancements in several fundamental areas. Recent studies have outlined many of the hardware and software challenges that must be addressed. In this paper, we examine these challenges with respect to high-performance networking. We describe the repercussions of anticipated changes to computing and networking hardware and discuss the impact that alternative parallel programming models will have on the network software stack. We also present some ideas on possible approaches that address some of these challenges.

  2. Arrhythmia Identification with Two-Lead Electrocardiograms Using Artificial Neural Networks and Support Vector Machines for a Portable ECG Monitor System

    PubMed Central

    Liu, Shing-Hong; Cheng, Da-Chuan; Lin, Chih-Ming

    2013-01-01

    An automatic configuration that can detect the position of R-waves and classify the normal sinus rhythm (NSR) and four other arrhythmic types from continuous ECG signals obtained from the MIT-BIH arrhythmia database is proposed. In this configuration, a support vector machine (SVM) is used to detect and mark the ECG heartbeats from the raw and differential signals of a single ECG lead. An algorithm based on the extracted markers segments the Lead II and V1 waveforms of the ECG to serve as the pattern classification features. A self-constructing neural fuzzy inference network (SoNFIN) is used to classify NSR and four arrhythmia types: premature ventricular contraction (PVC), premature atrial contraction (PAC), left bundle branch block (LBBB), and right bundle branch block (RBBB). In a real scenario, the classification results show that the accuracy achieved is 96.4%. This performance is suitable for a portable ECG monitor system for home care purposes. PMID:23303379
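    As a flavor of the first stage, a naive threshold-based R-wave marker is sketched below (a toy stand-in for the SVM-based detector used in the paper; the signal samples are made up for illustration):

```python
def detect_r_peaks(sig, thresh):
    """Naive R-wave marker: local maxima above a threshold on the raw
    signal (a toy stand-in for an SVM-based beat detector)."""
    peaks = []
    for i in range(1, len(sig) - 1):
        if sig[i] > thresh and sig[i] >= sig[i - 1] and sig[i] > sig[i + 1]:
            peaks.append(i)
    return peaks

# A made-up ECG fragment with two prominent R waves.
ecg = [0, 0.1, 0.0, 1.2, 0.2, 0.05, 0.0, 1.1, 0.1, 0]
beats = detect_r_peaks(ecg, thresh=0.8)   # -> [3, 7]
```

    In the paper's pipeline, the markers found at this stage drive the segmentation of the Lead II and V1 waveforms that feed the SoNFIN classifier.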

  3. Investigation of road network features and safety performance.

    PubMed

    Wang, Xuesong; Wu, Xingwei; Abdel-Aty, Mohamed; Tremont, Paul J

    2013-07-01

    The analysis of road network designs can provide useful information to transportation planners as they seek to improve the safety of road networks. The objectives of this study were to compare and define the effective road network indices and to analyze the relationship between road network structure and traffic safety at the level of the Traffic Analysis Zone (TAZ). One problem in comparing different road networks is establishing criteria that can be used to scale networks in terms of their structures. Based on data from Orange and Hillsborough Counties in Florida, road network structural properties within TAZs were scaled using 3 indices: Closeness Centrality, Betweenness Centrality, and Meshedness Coefficient. The Meshedness Coefficient performed best in capturing the structural features of the road network. Bayesian Conditional Autoregressive (CAR) models were developed to assess the safety of various network configurations as measured by total crashes, crashes on state roads, and crashes on local roads. The models' results showed that crash frequencies on local roads were closely related to factors within the TAZs (e.g., zonal network structure, TAZ population), while crash frequencies on state roads were closely related to the road and traffic features of state roads. For the safety effects of different networks, the Grid type was associated with the highest frequency of crashes, followed by the Mixed type, the Loops & Lollipops type, and the Sparse type. This study shows that it is possible to develop a quantitative scale for structural properties of a road network, and to use that scale to calculate the relationships between network structural properties and safety. PMID:23584537
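    The Meshedness Coefficient for a connected planar network can be computed directly from node and edge counts (using the common planar-graph definition, which may differ in detail from the index as operationalized in the paper):

```python
def meshedness(n_nodes, n_edges):
    """Meshedness of a connected planar graph: the number of independent
    cycles over the maximum possible for a planar graph on n_nodes."""
    cycles = n_edges - n_nodes + 1        # cyclomatic number
    max_cycles = 2 * n_nodes - 5          # planar upper bound, n_nodes >= 3
    return cycles / max_cycles

# A 2x2 grid of blocks: 9 intersections joined by 12 street segments.
grid = meshedness(9, 12)     # (12 - 9 + 1) / (2 * 9 - 5) = 4/13
tree = meshedness(9, 8)      # a tree has no cycles -> 0.0
```

    Grid-like zones score high on this index and tree-like ("sparse") zones score near zero, which is the structural scale the crash models are regressed against.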

  4. Performance Analysis of a NASA Integrated Network Array

    NASA Technical Reports Server (NTRS)

    Nessel, James A.

    2012-01-01

    The Space Communications and Navigation (SCaN) Program is planning to integrate its individual networks into a unified network that will function as a single entity to provide services to user missions. This integrated network architecture is expected to provide SCaN customers with the capability to seamlessly use any of the available SCaN assets to support their missions, efficiently meeting the collective needs of Agency missions. One potentially optimal application of these assets, based on this envisioned architecture, is arraying across the existing networks to significantly enhance data rates and/or link availabilities. As such, this document provides an analysis of the transmit and receive performance of a proposed SCaN inter-network antenna array. From the study, it is determined that a fully integrated inter-network array does not provide any significant advantage over an intra-network array, one in which the assets of an individual network are arrayed for enhanced performance. It is therefore the recommendation of this study that NASA proceed with an arraying concept, with a fundamental focus on network-centric arraying.

  5. Performance characteristics of a variable-area vane nozzle for vectoring an ASTOVL exhaust jet up to 45 deg

    NASA Technical Reports Server (NTRS)

    Mcardle, Jack G.; Esker, Barbara S.

    1993-01-01

    Many conceptual designs for advanced short-takeoff, vertical landing (ASTOVL) aircraft need exhaust nozzles that can vector the jet to provide forces and moments for controlling the aircraft's movement or attitude in flight near the ground. A type of nozzle that can both vector the jet and vary the jet flow area is called a vane nozzle. Basically, the nozzle consists of parallel, spaced-apart flow passages formed by pairs of vanes (vanesets) that can be rotated on axes perpendicular to the flow. Two important features of this type of nozzle are the abilities to vector the jet rearward up to 45 degrees and to produce less harsh pressure and velocity footprints during vertical landing than does an equivalent single jet. A one-third-scale model of a generic vane nozzle was tested with unheated air at the NASA Lewis Research Center's Powered Lift Facility. The model had three parallel flow passages. Each passage was formed by a vaneset consisting of a long and a short vane. The longer vanes controlled the jet vector angle, and the shorter controlled the flow area. Nozzle performance for three nominal flow areas (basic and plus or minus 21 percent of basic area), each at nominal jet vector angles from -20 deg (forward of vertical) to +45 deg (rearward of vertical), is presented. The tests were made with the nozzle mounted on a model tailpipe with a blind flange on the end to simulate a closed cruise nozzle, at tailpipe-to-ambient pressure ratios from 1.8 to 4.0. Also included are jet wake data, single-vaneset vector performance for long/short and equal-length vane designs, and pumping capability. The pumping capability arises from the subambient pressure developed in the cavities between the vanesets, which could be used to aspirate flow from a source such as the engine compartment. Some of the performance characteristics are compared with characteristics of a single-jet nozzle previously reported.

  6. High-Performance Satellite/Terrestrial-Network Gateway

    NASA Technical Reports Server (NTRS)

    Beering, David R.

    2005-01-01

    A gateway has been developed to enable digital communication between (1) the high-rate receiving equipment at NASA's White Sands complex and (2) a standard terrestrial digital communication network at data rates up to 622 Mb/s. The design of this gateway can also be adapted for use in commercial Earth/satellite and digital communication networks, and in terrestrial digital communication networks that include wireless subnetworks. Gateway as used here signifies an electronic circuit that serves as an interface between two electronic communication networks so that a computer (or other terminal) on one network can communicate with a terminal on the other network. The connection between this gateway and the high-rate receiving equipment is made via a synchronous serial data interface at the emitter-coupled-logic (ECL) level. The connection between this gateway and a standard asynchronous transfer mode (ATM) terrestrial communication network is made via a standard user network interface with a synchronous optical network (SONET) connector. The gateway contains circuitry that performs the conversion between the ECL and SONET interfaces. The data rate of the SONET interface can be either 155.52 or 622.08 Mb/s. The gateway derives its clock signal from a satellite modem in the high-rate receiving equipment and, hence, is agile in the sense that it adapts to the data rate of the serial interface.

  7. Urban traffic-network performance: flow theory and simulation experiments

    SciTech Connect

    Williams, J.C.

    1986-01-01

    Performance models for urban street networks were developed to describe the response of a traffic network to given travel-demand levels. The three basic traffic flow variables, speed, flow, and concentration, are defined at the network level, and three model systems are proposed. Each system consists of a series of interrelated, consistent functions between the three basic traffic-flow variables as well as the fraction of stopped vehicles in the network. These models are subsequently compared with the results of microscopic simulation of a small test network. The sensitivity of one of the model systems to a variety of network features was also explored. Three categories of features were considered, with the specific features tested listed in parentheses: network topology (block length and street width), traffic control (traffic signal coordination), and traffic characteristics (level of inter-vehicular interaction). Finally, a fundamental issue concerning the estimation of two network-level parameters (from a nonlinear relation in the two-fluid theory) was examined. The principal concern was that of comparability of these parameters when estimated with information from a single vehicle (or small group of vehicles), as done in conjunction with previous field studies, and when estimated with network-level information (i.e., all the vehicles), as is possible with simulation.

  9. Body Area Networks performance analysis using UWB.

    PubMed

    Fatehy, Mohammed; Kohno, Ryuji

    2013-01-01

The successful realization of a Wireless Body Area Network (WBAN) using Ultra Wideband (UWB) technology supports different medical and consumer electronics (CE) applications, but stands in need of an innovative solution to meet the differing requirements of these applications. Previously, we proposed the use of an adaptive processing gain (PG) to fulfill the different QoS requirements of these WBAN applications. In this paper, the interference occurring between two different BANs in a UWB-based system is analyzed in terms of the acceptable ratio of overlap between the BANs' PG that still provides the required QoS for each BAN. The first BAN, employed for a healthcare device (e.g., EEG or ECG), uses a relatively long spreading sequence; the second, customized for entertainment applications (e.g., a wireless headset or game pad), is assigned a shorter spreading code. Considering bandwidth utilization and the difference in the employed spreading sequences, the acceptable ratio of overlap between these BANs should fall between 0.05 and 0.5 in order to optimize the spreading sequences used while satisfying the required QoS for these applications. PMID:24109913

  10. Static internal performance of single-expansion-ramp nozzles with thrust-vectoring capability up to 60 deg

    NASA Technical Reports Server (NTRS)

    Berrier, B. L.; Leavitt, L. D.

    1984-01-01

    An investigation has been conducted at static conditions (wind off) in the static-test facility of the Langley 16-Foot Transonic Tunnel. The effects of geometric thrust-vector angle, sidewall containment, ramp curvature, lower-flap lip angle, and ramp length on the internal performance of nonaxisymmetric single-expansion-ramp nozzles were investigated. Geometric thrust-vector angle was varied from -20 deg. to 60 deg., and nozzle pressure ratio was varied from 1.0 (jet off) to approximately 10.0.

  11. Semi-Supervised Multimodal Relevance Vector Regression Improves Cognitive Performance Estimation from Imaging and Biological Biomarkers

    PubMed Central

    Cheng, Bo; Chen, Songcan; Kaufer, Daniel I.

    2013-01-01

Accurate estimation of cognitive scores for patients can help track the progress of neurological diseases. In this paper, we present a novel semi-supervised multimodal relevance vector regression (SM-RVR) method for predicting clinical scores of neurological diseases from multimodal imaging and biological biomarkers, to help evaluate the pathological stage and predict the progression of diseases, e.g., Alzheimer’s disease (AD). Unlike most existing methods, we predict clinical scores from multimodal (imaging and biological) biomarkers, including MRI, FDG-PET, and CSF. Considering that the clinical scores of mild cognitive impairment (MCI) subjects are often less stable than those of AD and normal control (NC) subjects due to the heterogeneity of MCI, we use only the multimodal data of MCI subjects, without the corresponding clinical scores, to train a semi-supervised model for enhancing the estimation of clinical scores for AD and NC subjects. We also develop a new strategy for selecting the most informative MCI subjects. We evaluate the performance of our approach on 202 subjects with all three modalities of data (MRI, FDG-PET and CSF) from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. The experimental results show that our SM-RVR method achieves a root-mean-square error (RMSE) of 1.91 and a correlation coefficient (CORR) of 0.80 for estimating the MMSE scores, and an RMSE of 4.45 and a CORR of 0.78 for estimating the ADAS-Cog scores, demonstrating very promising performance in AD studies. PMID:23504659
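
The two evaluation metrics reported above are standard; a minimal sketch (function names are mine, not the paper's):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between actual and estimated clinical scores."""
    d = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return float(np.sqrt(np.mean(d ** 2)))

def corr(y_true, y_pred):
    """Pearson correlation coefficient between actual and estimated scores."""
    return float(np.corrcoef(y_true, y_pred)[0, 1])
```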

  12. Performance evaluation of reactive and proactive routing protocol in IEEE 802.11 ad hoc network

    NASA Astrophysics Data System (ADS)

    Hamma, Salima; Cizeron, Eddy; Issaka, Hafiz; Guédon, Jean-Pierre

    2006-10-01

Wireless technology based on the IEEE 802.11 standard is widely deployed. This technology is used to support multiple types of communication services (data, voice, image) with different QoS requirements. A MANET (Mobile Ad hoc NETwork) does not require a fixed infrastructure. Mobile nodes communicate through multihop paths. The wireless communication medium has variable and unpredictable characteristics. Furthermore, node mobility creates a continuously changing communication topology in which paths break and new ones form dynamically. The routing table of each router in an ad hoc network must be kept up-to-date. MANETs use Distance Vector or Link State algorithms, which ensure that the route to every host is always known. However, this approach must take into account the specific characteristics of ad hoc networks: dynamic topologies, limited bandwidth, energy constraints, limited physical security, etc. Two main categories of routing protocols are studied in this paper: proactive protocols (e.g., Optimised Link State Routing - OLSR) and reactive protocols (e.g., Ad hoc On Demand Distance Vector - AODV, Dynamic Source Routing - DSR). Proactive protocols are based on periodic exchanges that update the routing tables for all possible destinations, even if no traffic goes through them. Reactive protocols are based on on-demand route discoveries that update routing tables only for destinations that have traffic going through them. The present paper focuses on the study and performance evaluation of these categories using NS2 simulations, considering both qualitative and quantitative criteria. The qualitative criteria concern distributed operation, loop freedom, security, and sleep-period operation. The quantitative criteria are used to assess the performance of the routing protocols: end-to-end data delay, jitter, packet delivery ratio, routing load, and activity distribution. A comparative study is presented that takes a number of networking contexts into consideration.
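
The quantitative criteria listed above can be computed directly from simulation traces; a minimal sketch over hypothetical `(send_time, recv_time)` records, with `None` marking a dropped packet (record format and names are illustrative, not the NS2 trace format):

```python
def routing_metrics(records):
    """Packet delivery ratio, mean end-to-end delay, and jitter from
    (send_time, recv_time) tuples; recv_time is None for dropped packets."""
    delivered = [(s, r) for s, r in records if r is not None]
    pdr = len(delivered) / len(records)            # packet delivery ratio
    delays = [r - s for s, r in delivered]         # end-to-end delays
    mean_delay = sum(delays) / len(delays)
    # Jitter as the mean absolute difference between consecutive delays.
    jitter = (sum(abs(b - a) for a, b in zip(delays, delays[1:]))
              / (len(delays) - 1)) if len(delays) > 1 else 0.0
    return pdr, mean_delay, jitter
```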

  13. Diversity improves performance in excitable networks

    PubMed Central

    Copelli, Mauro; Roberts, James A.

    2016-01-01

    As few real systems comprise indistinguishable units, diversity is a hallmark of nature. Diversity among interacting units shapes properties of collective behavior such as synchronization and information transmission. However, the benefits of diversity on information processing at the edge of a phase transition, ordinarily assumed to emerge from identical elements, remain largely unexplored. Analyzing a general model of excitable systems with heterogeneous excitability, we find that diversity can greatly enhance optimal performance (by two orders of magnitude) when distinguishing incoming inputs. Heterogeneous systems possess a subset of specialized elements whose capability greatly exceeds that of the nonspecialized elements. We also find that diversity can yield multiple percolation, with performance optimized at tricriticality. Our results are robust in specific and more realistic neuronal systems comprising a combination of excitatory and inhibitory units, and indicate that diversity-induced amplification can be harnessed by neuronal systems for evaluating stimulus intensities. PMID:27168961

  14. Performance analysis of a common-mode signal based low-complexity crosstalk cancelation scheme in vectored VDSL

    NASA Astrophysics Data System (ADS)

    Zafaruddin, SM; Prakriya, Shankar; Prasad, Surendra

    2012-12-01

In this article, we propose a vectored system using both common mode (CM) and differential mode (DM) signals in upstream VDSL. We first develop a multi-input multi-output (MIMO) CM channel from the single-pair CM and MIMO DM channels proposed recently, and study the characteristics of the resultant CM-DM channel matrix. We then propose a low-complexity receiver structure in which the CM and DM signals of each twisted pair (TP) are combined before the application of a MIMO zero forcing (ZF) receiver. We study the capacity of the proposed system and show that vectored CM-DM processing provides higher data rates at longer loop lengths. In the absence of alien crosstalk, application of the ZF receiver to the vectored CM-DM signals yields performance close to the single user bound (SUB). In the presence of alien crosstalk, we show that vectored CM-DM processing exploits the spatial correlation of CM and DM signals and provides higher data rates than DM processing alone. Simulation results validate the analysis and demonstrate the importance of CM-DM joint processing in vectored VDSL systems.
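
The receiver step described above — stacking the per-pair CM and DM observations and then applying MIMO zero forcing — reduces, in its ZF part, to a pseudo-inverse. A minimal sketch under an idealized linear model y = Hx + n (shapes and function names are illustrative, not the paper's exact combiner):

```python
import numpy as np

def zero_forcing(H, y):
    """Zero-forcing equalizer: x_hat = pinv(H) @ y for the model y = H x + n."""
    return np.linalg.pinv(H) @ y

def combine_cm_dm(H_dm, H_cm, y_dm, y_cm):
    """Stack DM and CM observations of the same transmit vector before ZF
    (illustrative of joint CM-DM processing)."""
    H = np.vstack([H_dm, H_cm])
    y = np.concatenate([y_dm, y_cm])
    return zero_forcing(H, y)
```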

  15. Performance limitations for networked control systems with plant uncertainty

    NASA Astrophysics Data System (ADS)

    Chi, Ming; Guan, Zhi-Hong; Cheng, Xin-Ming; Yuan, Fu-Shun

    2016-04-01

There has recently been significant interest in performance studies for networked control systems with communication constraints, but the existing work mainly assumes that the plant has an exact model. The goal of this paper is to investigate the optimal tracking performance for a networked control system in the presence of plant uncertainty. The plant under consideration is assumed to be non-minimum phase and unstable; a two-parameter controller is employed, the integral square criterion is adopted to measure the tracking error, and the uncertainty is formulated using stochastic embedding. An explicit expression for the tracking performance is obtained. The results show that network communication noise and model uncertainty, as well as unstable poles and non-minimum phase zeros, can worsen the tracking performance.
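
The integral square criterion mentioned above is, in its generic form (a standard statement, not necessarily this paper's exact expression, which also averages over the stochastic-embedding uncertainty):

```latex
J = \mathbb{E}\left\{ \int_{0}^{\infty} \lvert r(t) - y(t) \rvert^{2} \, dt \right\}
```

where r is the reference signal and y the plant output, and the expectation is taken over the noise and uncertainty processes.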

  16. Virulence Factors of Geminivirus Interact with MYC2 to Subvert Plant Resistance and Promote Vector Performance

    PubMed Central

    Li, Ran; Weldegergis, Berhane T.; Li, Jie; Jung, Choonkyun; Qu, Jing; Sun, Yanwei; Qian, Hongmei; Tee, ChuanSia; van Loon, Joop J.A.; Dicke, Marcel; Chua, Nam-Hai; Liu, Shu-Sheng

    2014-01-01

    A pathogen may cause infected plants to promote the performance of its transmitting vector, which accelerates the spread of the pathogen. This positive effect of a pathogen on its vector via their shared host plant is termed indirect mutualism. For example, terpene biosynthesis is suppressed in begomovirus-infected plants, leading to reduced plant resistance and enhanced performance of the whiteflies (Bemisia tabaci) that transmit these viruses. Although begomovirus-whitefly mutualism has long been recognized, the underlying mechanism remains elusive. Here, we identified βC1 of Tomato yellow leaf curl China virus, a monopartite begomovirus, as the viral genetic factor that suppresses plant terpene biosynthesis. βC1 directly interacts with the basic helix-loop-helix transcription factor MYC2 to compromise the activation of MYC2-regulated terpene synthase genes, thereby reducing whitefly resistance. MYC2 associates with the bipartite begomoviral protein BV1, suggesting that MYC2 is an evolutionarily conserved target of begomoviruses for the suppression of terpene-based resistance and the promotion of vector performance. Our findings describe how this viral pathogen regulates host plant metabolism to establish mutualism with its insect vector. PMID:25490915

  17. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  18. Portals 4 network API definition and performance measurement

    SciTech Connect

    Brightwell, R. B.

    2012-03-01

    Portals is a low-level network programming interface for distributed memory massively parallel computing systems designed by Sandia, UNM, and Intel. Portals has been designed to provide high message rates and to provide the flexibility to support a variety of higher-level communication paradigms. This project developed and analyzed an implementation of Portals using shared memory in order to measure and understand the impact of using general-purpose compute cores to handle network protocol processing functions. The goal of this study was to evaluate an approach to high-performance networking software design and hardware support that would enable important DOE modeling and simulation applications to perform well and to provide valuable input to Intel so they can make informed decisions about future network software and hardware products that impact DOE applications.

  19. Using AQM to improve TCP performance over wireless networks

    NASA Astrophysics Data System (ADS)

    Li, Victor H.; Liu, Zhi-Qiang; Low, Steven H.

    2002-07-01

    TCP flow control algorithms have been designed for wireline networks, where congestion is measured by packet loss due to buffer overflow. However, wireless networks also suffer from significant packet losses due to bit errors and handoffs. TCP responds to all of these packet losses by invoking congestion control and avoidance algorithms, and this results in degraded end-to-end performance in wireless networks. In this paper, we describe a Wireless Random Exponential Marking (WREM) scheme which effectively improves TCP performance over wireless networks by decoupling loss recovery from congestion control. Moreover, WREM is capable of handling the coexistence of both ECN-capable and non-ECN-capable routers. We present simulation results to show its effectiveness and compatibility.
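
WREM builds on Random Exponential Marking (REM). The exponential marking law at the heart of REM can be sketched as follows (a generic REM sketch under an assumed exponent base φ; this is not the wireless-specific WREM extension):

```python
PHI = 1.1  # assumed exponent base (> 1); in REM this is a configurable constant

def mark_probability(link_price):
    """Exponential marking: p = 1 - PHI**(-price).
    link_price >= 0 reflects congestion at a link; p grows toward 1."""
    return 1.0 - PHI ** (-link_price)

def path_mark_probability(link_prices):
    """Because marks compound multiplicatively along a path, the end-to-end
    marking probability depends only on the sum of link prices."""
    return 1.0 - PHI ** (-sum(link_prices))
```

This additivity is what lets an ECN-capable source infer total path congestion from the fraction of marked packets, independently of loss events.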

  20. Multimedia application performance on a WiMAX network

    NASA Astrophysics Data System (ADS)

    Halepovic, Emir; Ghaderi, Majid; Williamson, Carey

    2009-01-01

    In this paper, we use experimental measurements to study the performance of multimedia applications over a commercial IEEE 802.16 WiMAX network. Voice-over-IP (VoIP) and video streaming applications are tested. We observe that the WiMAX-based network solidly supports VoIP. The voice quality degradation compared to high-speed Ethernet is only moderate, despite higher packet loss and network delays. Despite different characteristics of the uplink and the downlink, call quality is comparable for both directions. On-demand video streaming performs well using UDP. Smooth playback of high-quality video/audio clips at aggregate rates exceeding 700 Kbps is achieved about 63% of the time, with low-quality playback periods observed only 7% of the time. Our results show that WiMAX networks can adequately support currently popular multimedia Internet applications.

  1. Molten carbonate fuel cell networks: Principles, analysis, and performance

    NASA Astrophysics Data System (ADS)

    Wimer, J. G.; Williams, M. C.

    1993-01-01

    The chemical reactions in an internally reforming molten carbonate fuel cell (IRMCFC) are described and combined into the overall IRMCFC reaction. Thermodynamic and electrochemical principles are discussed, and structure and operation of fuel cell stacks are explained. In networking, multiple fuel cell stacks are arranged so that reactant streams are fed and recycled through stacks in series for higher reactant utilization and increased system efficiency. Advantages and performance of networked and conventional systems are compared, using ASPEN simulations. The concept of networking can be applied to any electrochemical membrane, such as that developed for hot gas cleanup in future power plants.

  2. Performance Evaluation of Video Streaming in Vehicular Adhoc Network

    NASA Astrophysics Data System (ADS)

    Rahim, Aneel; Khan, Zeeshan Shafi; Bin Muhaya, Fahad

    In Vehicular Ad-Hoc Networks (VANETs), wireless-equipped vehicles form a temporary network for sharing information among vehicles. Secure multimedia communication enhances passenger safety by providing a visual picture of accidents and dangerous situations. In this paper we evaluate the performance of multimedia data in a VANET scenario and consider the impact of malicious nodes, using NS-2 and the EvalVid video evaluation tool.

  3. Hospital network performance: a survey of hospital stakeholders' perspectives.

    PubMed

    Bravi, F; Gibertoni, D; Marcon, A; Sicotte, C; Minvielle, E; Rucci, P; Angelastro, A; Carradori, T; Fantini, M P

    2013-02-01

    Hospital networks are an emerging organizational form designed to face the new challenges of public health systems. Although the benefits introduced by network models in terms of rationalization of resources are known, evidence about stakeholders' perspectives on hospital network performance in the literature is scant. Using the Competing Values Framework of organizational effectiveness and its subsequent adaptation by Minvielle et al., in 2009 we conducted a survey in five hospitals of an Italian network for oncological care to examine and compare the views on hospital network performance of internal stakeholders (physicians, nurses and administrative staff). 329 questionnaires exploring stakeholders' perspectives were completed, with a response rate of 65.8%. Using exploratory factor analysis of the 66 items of the questionnaire, we identified 4 factors, i.e., Centrality of relationships, Quality of care, Attractiveness/Reputation, and Staff empowerment and protection of workers' rights; 42 items were retained in the analysis. Factor scores proved to be high (mean score > 8 on a 10-point scale), except for Attractiveness/Reputation (mean score 6.79), indicating that stakeholders attach higher importance to relational and health care aspects. Comparison of factor scores among stakeholders did not reveal significant differences, suggesting a broadly shared view on hospital network performance.

  4. Sub-terahertz spectroscopy of magnetic resonance in BiFeO3 using a vector network analyzer

    NASA Astrophysics Data System (ADS)

    Caspers, Christian; Gandhi, Varun P.; Magrez, Arnaud; de Rijk, Emile; Ansermet, Jean-Philippe

    2016-06-01

    Detection of sub-THz spin cycloid resonances (SCRs) of stoichiometric BiFeO3 (BFO) was demonstrated using a vector network analyzer. Continuous wave absorption spectroscopy is possible, thanks to heterodyning and electronic sweep control using frequency extenders for frequencies from 480 to 760 GHz. High frequency resolution reveals SCR absorption peaks with a frequency precision in the ppm regime. Three distinct SCR features of BFO were observed and identified as Ψ1 and Φ2 modes, which are out-of-plane and in-plane modes of the spin cycloid, respectively. A spin reorientation transition at 200 K is evident in the frequency vs temperature study. The global minimum in linewidth for both Ψ modes at 140 K is ascribed to the critical slowing down of spin fluctuations.

  5. Efficient resting-state EEG network facilitates motor imagery performance

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Yao, Dezhong; Valdés-Sosa, Pedro A.; Li, Fali; Li, Peiyang; Zhang, Tao; Ma, Teng; Li, Yongjie; Xu, Peng

    2015-12-01

    Objective. Motor imagery-based brain-computer interface (MI-BCI) systems hold promise in motor function rehabilitation and assistance for motor function impaired people. But the ability to operate an MI-BCI varies across subjects, which becomes a substantial problem for practical BCI applications beyond the laboratory. Approach. Several previous studies have demonstrated that individual MI-BCI performance is related to the resting state of brain. In this study, we further investigate offline MI-BCI performance variations through the perspective of resting-state electroencephalography (EEG) network. Main results. Spatial topologies and statistical measures of the network have close relationships with MI classification accuracy. Specifically, mean functional connectivity, node degrees, edge strengths, clustering coefficient, local efficiency and global efficiency are positively correlated with MI classification accuracy, whereas the characteristic path length is negatively correlated with MI classification accuracy. The above results indicate that an efficient background EEG network may facilitate MI-BCI performance. Finally, a multiple linear regression model was adopted to predict subjects’ MI classification accuracy based on the efficiency measures of the resting-state EEG network, resulting in a reliable prediction. Significance. This study reveals the network mechanisms of the MI-BCI and may help to find new strategies for improving MI-BCI performance.
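
The final prediction step described above — a multiple linear regression from resting-state network efficiency measures to MI classification accuracy — can be sketched with ordinary least squares (a generic sketch; feature names and shapes are illustrative, not the paper's):

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares fit of y ~ X @ w + b; rows of X are subjects,
    columns are network measures (e.g., global efficiency, path length)."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])  # append intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1], coef[-1]                    # weights, intercept

def predict(X, w, b):
    return X @ w + b
```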

  6. On a vector space representation in genetic algorithms for sensor scheduling in wireless sensor networks.

    PubMed

    Martins, F V C; Carrano, E G; Wanner, E F; Takahashi, R H C; Mateus, G R; Nakamura, F G

    2014-01-01

    Recent works raised the hypothesis that the assignment of a geometry to the decision variable space of a combinatorial problem could be useful both for providing meaningful descriptions of the fitness landscape and for supporting the systematic construction of evolutionary operators (the geometric operators) that make a consistent usage of the space geometric properties in the search for problem optima. This paper introduces some new geometric operators that constitute the realization of searches along the combinatorial space versions of the geometric entities descent directions and subspaces. The new geometric operators are stated in the specific context of the wireless sensor network dynamic coverage and connectivity problem (WSN-DCCP). A genetic algorithm (GA) is developed for the WSN-DCCP using the proposed operators, being compared with a formulation based on integer linear programming (ILP) which is solved with exact methods. That ILP formulation adopts a proxy objective function based on the minimization of energy consumption in the network, in order to approximate the objective of network lifetime maximization, and a greedy approach for dealing with the system's dynamics. To the authors' knowledge, the proposed GA is the first algorithm to outperform the lifetime of networks as synthesized by the ILP formulation, also running in much smaller computational times for large instances. PMID:24102647

  8. Adaptive Optimization of Aircraft Engine Performance Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Long, Theresa W.

    1995-01-01

    Preliminary results are presented on the development of an adaptive neural network based control algorithm to enhance aircraft engine performance. This work builds upon a previous National Aeronautics and Space Administration (NASA) effort known as Performance Seeking Control (PSC). PSC is an adaptive control algorithm which contains a model of the aircraft's propulsion system that is updated on-line to match the operation of the aircraft's actual propulsion system. Information from the on-line model is used to adapt the control system during flight to allow optimal operation of the aircraft's propulsion system (inlet, engine, and nozzle), improving aircraft engine performance without compromising reliability or operability. Performance Seeking Control has been shown to yield reductions in fuel flow, increases in thrust, and reductions in engine fan turbine inlet temperature. The neural network based adaptive control, like PSC, will contain a model of the propulsion system which will be used to calculate optimal control commands on-line, and it is hoped that it will provide additional benefits beyond those of PSC. The PSC algorithm is computationally intensive, is valid only at near steady-state flight conditions, and has no way to adapt or learn on-line. These issues are being addressed in the development of the optimal neural controller: specialized neural network processing hardware is being developed to run the software, the algorithm will be valid at steady-state and transient conditions, and it will take advantage of the on-line learning capability of neural networks. Future plans include testing the neural network software and hardware prototype against an aircraft engine simulation. In this paper, the proposed neural network software and hardware are described and preliminary neural network training results are presented.

  9. Investigation into the relationship between the gravity vector and the flow vector to improve performance in two-phase continuous flow biodiesel reactor.

    PubMed

    Unker, S A; Boucher, M B; Hawley, K R; Midgette, A A; Stuart, J D; Parnas, R S

    2010-10-01

    This study analyzes the performance of a continuous flow biodiesel reactor/separator. The reactor achieves high conversion of vegetable oil triglycerides to biodiesel while simultaneously separating the co-product glycerol. The influence of the flow direction, relative to the gravity vector, on the reactor performance was measured. Reactor performance was assessed by both the conversion of vegetable oil triglycerides to biodiesel and the separation efficiency of removing the co-product glycerol. At slightly elevated temperatures of 40-50 degrees C, an overall feed of 1.2 L/min, a 6:1 molar ratio of methanol to vegetable oil triglycerides, and a 1-1.3 wt.% potassium hydroxide catalyst loading, the reactor converted more than 96% of the pretreated waste vegetable oil to biodiesel. The reactor also separated 36-95% of the glycerol that was produced. Tilting the reactor away from the vertical direction produced a large increase in glycerol separation efficiency and only a small decrease in conversion.

  10. Performance of Social Network Sensors during Hurricane Sandy

    PubMed Central

    Kryvasheyeu, Yury; Chen, Haohui; Moro, Esteban; Van Hentenryck, Pascal; Cebrian, Manuel

    2015-01-01

    Information flow during catastrophic events is a critical aspect of disaster management. Modern communication platforms, in particular online social networks, provide an opportunity to study such flow and derive early-warning sensors, thus improving emergency preparedness and response. Performance of the social networks sensor method, based on topological and behavioral properties derived from the “friendship paradox”, is studied here for over 50 million Twitter messages posted before, during, and after Hurricane Sandy. We find that differences in users’ network centrality effectively translate into moderate awareness advantage (up to 26 hours); and that geo-location of users within or outside of the hurricane-affected area plays a significant role in determining the scale of such an advantage. Emotional response appears to be universal regardless of the position in the network topology, and displays characteristic, easily detectable patterns, opening a possibility to implement a simple “sentiment sensing” technique that can detect and locate disasters. PMID:25692690
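
The "friendship paradox" sensor selection underlying this method can be sketched: a random friend of a random user tends to have higher degree (hence more central network position and earlier exposure) than the user. A minimal sketch on an adjacency-list graph (function names are mine):

```python
import random

def sensor_group(friends, rng):
    """Friendship-paradox sampling: take one random friend of every user.
    friends maps user -> list of friends in an undirected social graph."""
    return [rng.choice(friends[u]) for u in friends if friends[u]]

def mean_degree(friends, group):
    return sum(len(friends[u]) for u in group) / len(group)
```

On any graph with degree variance, the sensor group's mean degree exceeds the population's, which is what yields the awareness lead time reported above.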

  11. Investigation of Natural Draft Cooling Tower Performance Using Neural Network

    NASA Astrophysics Data System (ADS)

    Mahdi, Qasim S.; Saleh, Saad M.; Khalaf, Basima S.

    In the present work, an Artificial Neural Network (ANN) technique is used to investigate the performance of a Natural Draft Wet Cooling Tower (NDWCT). Many factors affect the range, approach, pressure drop, and effectiveness of the cooling tower: fill type, water flow rate, air flow rate, inlet water temperature, wet-bulb temperature of the air, and nozzle hole diameter. Experimental data covering the effects of these factors are used to train the network using the Back Propagation (BP) algorithm. The network includes seven input variables (Twi, hfill, mw, Taiwb, Taidb, vlow, vup) and five output variables (ma, Taowb, Two, Δp, ɛ), while the hidden layer differs for each case. Network results were compared with experimental results, and good agreement was observed between the experimental and theoretical results.
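
The network structure described — seven inputs, five outputs, one hidden layer, trained by back-propagation on squared error — can be sketched in generic form (the hidden-layer width, tanh activation, and learning rate are my assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions mirroring the abstract: 7 inputs, one hidden layer, 5 outputs.
W1 = rng.normal(0.0, 0.5, (7, 10)); b1 = np.zeros(10)
W2 = rng.normal(0.0, 0.5, (10, 5)); b2 = np.zeros(5)

def forward(X):
    h = np.tanh(X @ W1 + b1)        # hidden activations
    return h, h @ W2 + b2           # linear output layer

def train_step(X, Y, lr=0.05):
    """One back-propagation step on mean squared error; returns the loss."""
    global W1, b1, W2, b2
    h, out = forward(X)
    err = out - Y
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # tanh derivative
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
    return float((err ** 2).mean())
```

In practice the tower data would be normalized and split for validation; this sketch only shows the BP update itself.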

  12. Asynchronous transfer mode link performance over ground networks

    NASA Technical Reports Server (NTRS)

    Chow, E. T.; Markley, R. W.

    1993-01-01

    The results of an experiment to determine the feasibility of using asynchronous transfer mode (ATM) technology to support advanced spacecraft missions that require high-rate ground communications and, in particular, full-motion video are reported. Potential nodes in such a ground network include Deep Space Network (DSN) antenna stations, the Jet Propulsion Laboratory, and a set of national and international end users. The experiment simulated a lunar microrover, lunar lander, the DSN ground communications system, and distributed science users. The users were equipped with video-capable workstations. A key feature was an optical fiber link between two high-performance workstations equipped with ATM interfaces. Video was also transmitted through JPL's institutional network to a user 8 km from the experiment. Variations in video quality depending on the networks and computers were observed, and the results are reported.

  13. Performance of social network sensors during Hurricane Sandy.

    PubMed

    Kryvasheyeu, Yury; Chen, Haohui; Moro, Esteban; Van Hentenryck, Pascal; Cebrian, Manuel

    2015-01-01

    Information flow during catastrophic events is a critical aspect of disaster management. Modern communication platforms, in particular online social networks, provide an opportunity to study such flow and derive early-warning sensors, thus improving emergency preparedness and response. Performance of the social network sensor method, based on topological and behavioral properties derived from the "friendship paradox", is studied here for over 50 million Twitter messages posted before, during, and after Hurricane Sandy. We find that differences in users' network centrality effectively translate into a moderate awareness advantage (up to 26 hours), and that geo-location of users within or outside of the hurricane-affected area plays a significant role in determining the scale of such an advantage. Emotional response appears to be universal regardless of the position in the network topology, and displays characteristic, easily detectable patterns, opening the possibility of implementing a simple "sentiment sensing" technique that can detect and locate disasters.
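    The "friendship paradox" underlying the sensor method above states that, on average, a node's neighbours have more connections than the node itself, so monitoring "friends" yields earlier signals. A minimal sketch on a hypothetical toy graph (not data from the study):

```python
from statistics import mean

# Toy undirected follower graph as an adjacency list (hypothetical).
graph = {
    "a": ["b", "c", "d"],
    "b": ["a", "c"],
    "c": ["a", "b", "d", "e"],
    "d": ["a", "c"],
    "e": ["c"],
}

# Mean degree over all nodes.
mean_degree = mean(len(nbrs) for nbrs in graph.values())

# Mean degree of a randomly chosen *friend*: average the degrees of
# every node's neighbours.
mean_friend_degree = mean(
    len(graph[f]) for nbrs in graph.values() for f in nbrs
)

# Friendship paradox: your friends have more friends than you do.
print(mean_degree, mean_friend_degree)  # 2.4 2.8333...
```

The gap between the two averages is what makes friend-based sensor groups better positioned in the network than randomly chosen users.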

  14. Comparison of Support Vector Machine, Neural Network, and CART Algorithms for the Land-Cover Classification Using Limited Training Data Points

    EPA Science Inventory

    Support vector machine (SVM) was applied for land-cover characterization using MODIS time-series data. Classification performance was examined with respect to training sample size, sample variability, and landscape homogeneity (purity). The results were compared to two convention...

  15. A new integrated approach for characterizing the soil electromagnetic properties and detecting landmines using a hand-held vector network analyzer

    NASA Astrophysics Data System (ADS)

    Lopera, Olga; Lambot, Sebastien; Slob, Evert; Vanclooster, Marnik; Macq, Benoit; Milisavljevic, Nada

    2006-05-01

    The application of ground-penetrating radar (GPR) in humanitarian demining presents two major challenges: (1) the development of affordable and practical systems to detect metallic and non-metallic antipersonnel (AP) landmines under different conditions, and (2) the development of accurate soil characterization techniques to evaluate the effects of soil properties and determine the performance of these GPR-based systems. In this paper, we present a new integrated approach for characterizing the electromagnetic (EM) properties of mine-affected soils and detecting landmines using a low-cost hand-held vector network analyzer (VNA) connected to a highly directive antenna. Soil characterization is carried out using the radar-antenna-subsurface model of Lambot et al.1 and full-wave inversion of the radar signal, focused in the time domain on the surface reflection. This methodology is integrated with background subtraction (BS) and migration to enhance landmine detection. Numerical and laboratory experiments are performed to show the effect of the soil EM properties on the detectability of the landmines and how the proposed approach can improve GPR performance.

  16. Competitive Learning Neural Network Ensemble Weighted by Predicted Performance

    ERIC Educational Resources Information Center

    Ye, Qiang

    2010-01-01

    Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic for Neural Network classifiers. Introducing a secondary output unit that receives different…

  17. USING MULTIRAIL NETWORKS IN HIGH-PERFORMANCE CLUSTERS

    SciTech Connect

    Coll, S.; Fratchtenberg, E.; Petrini, F.; Hoisie, A.; Gurvits, L.

    2001-01-01

    Using multiple independent networks (also known as rails) is an emerging technique to overcome bandwidth limitations and enhance fault tolerance of current high-performance clusters. We present an extensive experimental comparison of the behavior of various allocation schemes in terms of bandwidth and latency. We show that striping messages over multiple rails can substantially reduce network latency, depending on average message size, network load, and allocation scheme. The compared methods include a basic round-robin rail allocation, a local-dynamic allocation based on local knowledge, and a dynamic rail allocation that reserves both communication endpoints of a message before sending it. The last method is shown to perform better than the others at higher loads: up to 49% better than local-knowledge allocation and 37% better than round-robin allocation. This allocation scheme also shows lower latency and saturates at higher loads (for sufficiently large messages). Most importantly, the proposed allocation scheme scales well with the number of rails and message sizes. In addition, we propose a hybrid algorithm that combines the benefits of the local-dynamic scheme for short messages with those of the dynamic algorithm for large messages. Keywords: Communication Protocols, High-Performance Interconnection Networks, Performance Evaluation, Routing, Communication Libraries, Parallel Architectures.
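    The contrast between the round-robin and local-dynamic schemes compared above can be sketched in a few lines; this is a minimal illustration under assumed load accounting, not the authors' implementation:

```python
import itertools

class RoundRobinAllocator:
    """Basic round-robin: cycle through rails, ignoring current load."""
    def __init__(self, n_rails):
        self._cycle = itertools.cycle(range(n_rails))

    def pick(self, rail_load):
        return next(self._cycle)

class LocalDynamicAllocator:
    """Local-dynamic: pick the least-loaded rail using local knowledge."""
    def pick(self, rail_load):
        return min(range(len(rail_load)), key=rail_load.__getitem__)

def send(allocator, loads, cost):
    """Allocate a rail for one message and account for its cost."""
    rail = allocator.pick(loads)
    loads[rail] += cost
    return rail

# Four rails; rail 1 starts out heavily loaded.
rr_loads, ld_loads = [0, 5, 0, 0], [0, 5, 0, 0]
rr, ld = RoundRobinAllocator(4), LocalDynamicAllocator()
rr_picks = [send(rr, rr_loads, 3) for _ in range(4)]
ld_picks = [send(ld, ld_loads, 3) for _ in range(4)]
print(rr_picks)  # [0, 1, 2, 3] -- still schedules onto the busy rail
print(ld_picks)  # [0, 2, 3, 0] -- avoids the busy rail entirely
```

The dynamic scheme in the paper goes one step further by reserving both endpoints before sending, which is what pays off at high load.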

  18. High Performance Computing and Networking for Science--Background Paper.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    The Office of Technology Assessment is conducting an assessment of the effects of new information technologies--including high performance computing, data networking, and mass data archiving--on research and development. This paper offers a view of the issues and their implications for current discussions about Federal supercomputer initiatives…

  19. Optical performance monitoring (OPM) in next-generation optical networks

    NASA Astrophysics Data System (ADS)

    Neuhauser, Richard E.

    2002-09-01

    DWDM transmission is the enabling technology currently pushing transmission bandwidths in core networks towards the multi-Tb/s regime, with unregenerated transmission distances of several thousand km. Such systems represent the basic platform for transparent DWDM networks, enabling both the transport of client signals with different data formats and bit rates (e.g. SDH/SONET, IP over WDM, Gigabit Ethernet, etc.) and dynamic provisioning of optical wavelength channels. Optical Performance Monitoring (OPM) will be one of the key elements providing the capabilities of link set-up/control, fault localization, protection/restoration and path supervision for stable network operation, becoming a major differentiator in next-generation networks. Currently, signal quality is usually characterized by DWDM power levels, spectrum-interpolated Optical Signal-to-Noise Ratio (OSNR), and channel wavelengths. On the other hand, there is an urgent need for new OPM technologies and strategies providing solutions for in-channel OSNR, signal quality measurement, fault localization and fault identification. Innovative research and product activities include polarization nulling, electrical and optical amplitude sampling, BER estimation, electrical spectrum analysis, and pilot tone technologies. This presentation focuses on reviewing the requirements and solution concepts in current and next-generation networks with respect to Optical Performance Monitoring.

  20. Performance Evaluation in Network-Based Parallel Computing

    NASA Technical Reports Server (NTRS)

    Dezhgosha, Kamyar

    1996-01-01

    Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require the use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing Sun SPARC network with PVM (Parallel Virtual Machine), a software system for linking clusters of machines. Second, a set of three basic applications was selected: a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the restricting factor to performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps), which will allow us to extend our study to newer applications, performance metrics, and configurations.
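    The speedup metric used above reduces to a ratio of elapsed times; a small sketch (the timings below are hypothetical, not measurements from the project):

```python
def speedup(t_serial, t_parallel):
    """Speedup: elapsed time on one machine over elapsed time on the cluster."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_workers):
    """Parallel efficiency: speedup normalized by the number of workers."""
    return speedup(t_serial, t_parallel) / n_workers

# Hypothetical timings for a parallel matrix multiplication on 4 workstations.
t1, t4 = 120.0, 40.0
print(speedup(t1, t4))        # 3.0
print(efficiency(t1, t4, 4))  # 0.75 -- sub-linear: communication latency eats the rest
```

Efficiency below 1.0 is exactly the communication-latency effect the abstract identifies: the coarser the grain, the closer efficiency gets to 1.0.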

  1. Performance analysis of wireless sensor networks in geophysical sensing applications

    NASA Astrophysics Data System (ADS)

    Uligere Narasimhamurthy, Adithya

    Performance is an important criterion to consider before switching from a wired network to a wireless sensing network. Performance is especially important in geophysical sensing, where the quality of the sensing system is measured by the precision of the acquired signal. Can a wireless sensing network maintain the same reliability and quality metrics that a wired system provides? Our work focuses on evaluating the wireless GeoMote sensor motes that were developed by previous computer science graduate students at Mines. Specifically, we conducted a set of experiments, namely WalkAway and Linear Array experiments, to characterize the performance of the wireless motes. The motes were also equipped with the Sticking Heartbeat Aperture Resynchronization Protocol (SHARP), a time synchronization protocol developed by a previous computer science graduate student at Mines. This protocol should automatically synchronize the motes' internal clocks and reduce time synchronization errors. We also collected passive data to evaluate the response of GeoMotes to various frequency components associated with seismic waves. With the data collected from these experiments, we evaluated the performance of the SHARP protocol and compared the performance of our GeoMote wireless system against the industry-standard wired seismograph system (Geometric-Geode). Using arrival time analysis and seismic velocity calculations, we set out to answer the following question: can our wireless sensing system (GeoMotes) perform similarly to a traditional wired system in a realistic scenario?

  2. Performance of Neural Networks Methods In Intrusion Detection

    SciTech Connect

    Dao, V N; Vemuri, R

    2001-07-09

    By accurately profiling users via their unique attributes, it is possible to view the intrusion detection problem as a classification of authorized users and intruders. This paper demonstrates that artificial neural network (ANN) techniques can be used to solve this classification problem. Furthermore, the paper compares the performance of three neural network methods in classifying authorized users and intruders using synthetically generated data. The three methods are gradient descent back propagation (BP) with momentum, conjugate gradient BP, and quasi-Newton BP.

  3. Network Performance Testing for the BaBar Event Builder

    SciTech Connect

    Pavel, Tomas J

    1998-11-17

    We present an overview of the design of event building in the BABAR Online, based upon TCP/IP and commodity networking technology. BABAR is a high-rate experiment to study CP violation in asymmetric e+e- collisions. In order to validate the event-builder design, an extensive program was undertaken to test the TCP performance delivered by various machine types with both ATM OC-3 and Fast Ethernet networks. The buffering characteristics of several candidate switches were examined and found to be generally adequate for our purposes. We highlight the results of this testing and present some of the more significant findings.

  4. Topology and computational performance of attractor neural networks.

    PubMed

    McGraw, Patrick N; Menzinger, Michael

    2003-10-01

    To explore the relation between network structure and function, we studied the computational performance of Hopfield-type attractor neural nets with regular lattice, random, small-world, and scale-free topologies. The random configuration is the most efficient for storage and retrieval of patterns by the network as a whole. However, in the scale-free case retrieval errors are not distributed uniformly among the nodes. The portion of a pattern encoded by the subset of highly connected nodes is more robust and efficiently recognized than the rest of the pattern. The scale-free network thus achieves a very strong partial recognition. The implications of these findings for brain function and social dynamics are suggestive.
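    The storage-and-retrieval task studied above can be reproduced with a minimal Hopfield network (Hebbian storage, synchronous sign updates). This sketch uses a fully connected topology rather than the lattice, small-world, or scale-free variants compared in the paper, and the sizes are chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def train(patterns):
    """Hebbian storage: W is the scaled sum of outer products of +/-1 patterns, zero diagonal."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def retrieve(w, state, steps=20):
    """Synchronous sign updates until a fixed point (or the step limit)."""
    for _ in range(steps):
        new = np.sign(w @ state)
        new[new == 0] = 1.0
        if np.array_equal(new, state):
            break
        state = new
    return state

n = 64
patterns = rng.choice([-1.0, 1.0], size=(3, n))  # 3 stored patterns, well below capacity
w = train(patterns)

# Corrupt 6 bits of the first pattern and let the network relax.
probe = patterns[0].copy()
probe[rng.choice(n, size=6, replace=False)] *= -1
overlap = float(retrieve(w, probe) @ patterns[0]) / n  # 1.0 = perfect recall
print(overlap)
```

With the load this far below capacity, retrieval from a lightly corrupted probe essentially always recovers the stored pattern; the paper's question is how this behaviour degrades or redistributes across nodes when the topology changes.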

  5. Enhanced memory performance thanks to neural network assortativity

    SciTech Connect

    Franciscis, S. de; Johnson, S.; Torres, J. J.

    2011-03-24

    The behaviour of many complex dynamical systems has been found to depend crucially on the structure of the underlying networks of interactions. An intriguing feature of empirical networks is their assortativity--i.e., the extent to which the degrees of neighbouring nodes are correlated. However, until very recently it was difficult to take this property into account analytically, most work being exclusively numerical. We get round this problem by considering ensembles of equally correlated graphs and apply this novel technique to the case of attractor neural networks. Assortativity turns out to be a key feature for memory performance in these systems, so much so that for sufficiently correlated topologies the critical temperature diverges. We predict that artificial and biological neural systems could significantly enhance their robustness to noise by developing positive correlations.

  6. Team Assembly Mechanisms Determine Collaboration Network Structure and Team Performance

    PubMed Central

    Guimerà, Roger; Uzzi, Brian; Spiro, Jarrett; Nunes Amaral, Luís A.

    2007-01-01

    Agents in creative enterprises are embedded in networks that inspire, support, and evaluate their work. Here, we investigate how the mechanisms by which creative teams self-assemble determine the structure of these collaboration networks. We propose a model for the self-assembly of creative teams that has its basis in three parameters: team size, the fraction of newcomers in new productions, and the tendency of incumbents to repeat previous collaborations. The model suggests that the emergence of a large connected community of practitioners can be described as a phase transition. We find that team assembly mechanisms determine both the structure of the collaboration network and team performance for teams derived from both artistic and scientific fields. PMID:15860629

  7. Team assembly mechanisms determine collaboration network structure and team performance.

    PubMed

    Guimerà, Roger; Uzzi, Brian; Spiro, Jarrett; Amaral, Luís A Nunes

    2005-04-29

    Agents in creative enterprises are embedded in networks that inspire, support, and evaluate their work. Here, we investigate how the mechanisms by which creative teams self-assemble determine the structure of these collaboration networks. We propose a model for the self-assembly of creative teams that has its basis in three parameters: team size, the fraction of newcomers in new productions, and the tendency of incumbents to repeat previous collaborations. The model suggests that the emergence of a large connected community of practitioners can be described as a phase transition. We find that team assembly mechanisms determine both the structure of the collaboration network and team performance for teams derived from both artistic and scientific fields.

  8. Performance evaluation of a routing algorithm based on Hopfield Neural Network for network-on-chip

    NASA Astrophysics Data System (ADS)

    Esmaelpoor, Jamal; Ghafouri, Abdollah

    2015-12-01

    Network on chip (NoC) has emerged as a solution to overcome the growing complexity and design challenges of system on chip. A proper routing algorithm is a key issue of NoC design. An appropriate routing method balances load across the network channels and keeps path length as short as possible. This survey investigates the performance of a routing algorithm based on the Hopfield Neural Network. It uses dynamic programming to provide the optimal path and network monitoring in real time. The aim of this article is to analyse the possibility of using a neural network as a router. The algorithm takes into account the path with the lowest delay (cost) from source to destination. In other words, the path a message takes from source to destination depends on the network traffic situation at the time, and it is the fastest one. The simulation results show that the proposed approach improves average delay, throughput and network congestion efficiently. At the same time, the increase in power consumption is almost negligible.
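    For reference, the lowest-delay objective that the Hopfield router targets is classically solved with Dijkstra's algorithm over measured link delays; a minimal sketch on a hypothetical 2x2 mesh (link delays invented for illustration, not the paper's method):

```python
import heapq

def lowest_delay_path(links, src, dst):
    """Dijkstra over per-link delays: returns (total_delay, path)."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    seen = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, delay in links.get(u, []):
            nd = d + delay
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk predecessors back from the destination to rebuild the path.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

# Hypothetical 2x2 mesh with current link delays as costs.
links = {
    "00": [("01", 1.0), ("10", 4.0)],
    "01": [("00", 1.0), ("11", 1.5)],
    "10": [("00", 4.0), ("11", 1.0)],
    "11": [("01", 1.5), ("10", 1.0)],
}
print(lowest_delay_path(links, "00", "11"))  # (2.5, ['00', '01', '11'])
```

The appeal of a neural router is that it can track such delay-dependent shortest paths continuously as traffic changes, rather than re-running a discrete search per update.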

  9. Evaluation of GPFS Connectivity Over High-Performance Networks

    SciTech Connect

    Srinivasan, Jay; Canon, Shane; Andrews, Matthew

    2009-02-17

    We present the results of an evaluation of new features of the latest release of IBM's GPFS filesystem (v3.2). We investigate different ways of connecting to a high-performance GPFS filesystem from a remote cluster using Infiniband (IB) and 10 Gigabit Ethernet. We also examine the performance of the GPFS filesystem with both serial and parallel I/O. Finally, we also present our recommendations for effective ways of utilizing high-bandwidth networks for high-performance I/O to parallel file systems.

  10. On the Performance of TCP Spoofing in Satellite Networks

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph; Allman, Mark

    2001-01-01

    In this paper, we analyze the performance of Transmission Control Protocol (TCP) in a network that consists of both satellite and terrestrial components. One method, proposed by outside research, to improve the performance of data transfers over satellites is to use a performance enhancing proxy often dubbed 'spoofing.' Spoofing involves the transparent splitting of a TCP connection between the source and destination by some entity within the network path. In order to analyze the impact of spoofing, we constructed a simulation suite based around the network simulator ns-2. The simulation reflects a host with a satellite connection to the Internet and allows the option to spoof connections just prior to the satellite. The methodology used in our simulation allows us to analyze spoofing over a large range of file sizes and under various congested conditions, while prior work on this topic has primarily focused on bulk transfers with no congestion. As a result of these simulations, we find that the performance of spoofing is dependent upon a number of conditions.

  11. Design and implementation of a high performance network security processor

    NASA Astrophysics Data System (ADS)

    Wang, Haixin; Bai, Guoqiang; Chen, Hongyi

    2010-03-01

    The last few years have seen significant progress in the field of application-specific processors. One example is network security processors (NSPs), which perform various cryptographic operations specified by network security protocols and help to offload the computation-intensive burdens from network processors (NPs). This article presents a high performance NSP system architecture implementation intended for both internet protocol security (IPSec) and secure socket layer (SSL) protocol acceleration, which are widely employed in virtual private network (VPN) and e-commerce applications. The efficient dual one-way pipelined data transfer skeleton and optimised integration scheme of the heterogeneous parallel crypto engine arrays lead to a Gbps-rate NSP, which is programmable with domain-specific descriptor-based instructions. The descriptor-based control flow fragments large data packets and distributes them to the crypto engine arrays, which fully utilises the parallel computation resources and improves the overall system data throughput. A prototyping platform for this NSP design is implemented with a Xilinx XC3S5000-based FPGA chip set. Results show that the design gives a peak throughput for the IPSec ESP tunnel mode of 2.85 Gbps, with over 2100 full SSL handshakes per second at a clock rate of 95 MHz.

  12. Comparison of Bayesian network and support vector machine models for two-year survival prediction in lung cancer patients treated with radiotherapy

    SciTech Connect

    Jayasurya, K.; Fung, G.; Yu, S.; Dehing-Oberije, C.; De Ruysscher, D.; Hope, A.; De Neve, W.; Lievens, Y.; Lambin, P.; Dekker, A. L. A. J.

    2010-04-15

    Purpose: Classic statistical and machine learning models such as support vector machines (SVMs) can be used to predict cancer outcome, but often only perform well if all the input variables are known, which is unlikely in the medical domain. Bayesian network (BN) models have a natural ability to reason under uncertainty and might handle missing data better. In this study, the authors hypothesize that a BN model can predict two-year survival in non-small cell lung cancer (NSCLC) patients as accurately as SVM, but will predict survival more accurately when data are missing. Methods: A BN and an SVM model were trained on 322 inoperable NSCLC patients treated with radiotherapy from Maastricht and validated in three independent data sets of 35, 47, and 33 patients from Ghent, Leuven, and Toronto. Missing variables occurred in the data sets, with only 37, 28, and 24 patients having a complete data set. Results: The BN model structure and parameter learning identified gross tumor volume size, performance status, and number of positive lymph nodes on a PET as prognostic factors for two-year survival. When validated in the full validation sets of Ghent, Leuven, and Toronto, the BN model had an AUC of 0.77, 0.72, and 0.70, respectively. An SVM model based on the same variables had an overall worse performance (AUC 0.71, 0.68, and 0.69), especially in the Ghent set, which had the highest percentage of missing values for the important GTV size variable. When only patients with complete data sets were considered, the BN and SVM models performed more alike. Conclusions: Within the limitations of this study, the hypothesis is supported that BN models are better at handling missing data than SVM models and are therefore more suitable for the medical domain. Future work has to focus on improving BN performance by including more patients, more variables, and more diversity.

  13. Copercolating Networks: An Approach for Realizing High-Performance Transparent Conductors using Multicomponent Nanostructured Networks

    NASA Astrophysics Data System (ADS)

    Das, Suprem R.; Sadeque, Sajia; Jeong, Changwook; Chen, Ruiyi; Alam, Muhammad A.; Janes, David B.

    2016-06-01

    Although transparent conductive oxides such as indium tin oxide (ITO) are widely employed as transparent conducting electrodes (TCEs) for applications such as touch screens and displays, new nanostructured TCEs are of interest for future applications, including emerging transparent and flexible electronics. A number of two-dimensional networks of nanostructured elements have been reported, including metallic nanowire networks consisting of silver nanowires, metallic carbon nanotubes (m-CNTs), copper nanowires or gold nanowires, and metallic mesh structures. In these single-component systems, it has generally been difficult to achieve sheet resistances that are comparable to ITO at a given broadband optical transparency. A relatively new third category of TCEs, consisting of networks of 1D-1D and 1D-2D nanocomposites (such as silver nanowires and CNTs, silver nanowires and polycrystalline graphene, or silver nanowires and reduced graphene oxide), has demonstrated TCE performance comparable to, or better than, ITO. In such hybrid networks, copercolation between the two components can lead to relatively low sheet resistances at nanowire densities corresponding to high optical transmittance. This review provides an overview of reported hybrid networks, including a comparison of the performance regimes achievable with those of ITO and single-component nanostructured networks. The performance is compared to that expected from bulk thin films and analyzed in terms of the copercolation model. In addition, performance characteristics relevant for flexible and transparent applications are discussed. The new TCEs are promising, but significant work must be done to ensure earth abundance, stability, and reliability so that they can eventually replace traditional ITO-based transparent conductors.

  14. On using multiple routing metrics with destination sequenced distance vector protocol for MultiHop wireless ad hoc networks

    NASA Astrophysics Data System (ADS)

    Mehic, M.; Fazio, P.; Voznak, M.; Partila, P.; Komosny, D.; Tovarek, J.; Chmelikova, Z.

    2016-05-01

    A mobile ad hoc network is a collection of mobile nodes which communicate without a fixed backbone or centralized infrastructure. Due to the frequent mobility of nodes, routes connecting two distant nodes may change. Therefore, it is not possible to establish a priori fixed paths for message delivery through the network. Because of its importance, routing is the most studied problem in mobile ad hoc networks. In addition, if Quality of Service (QoS) is demanded, one must guarantee the QoS not only over a single hop but over an entire wireless multi-hop path, which may not be a trivial task. In turn, this requires the propagation of QoS information within the network. The key to the support of QoS reporting is QoS routing, which provides path QoS information at each source. To support QoS for real-time traffic, one needs to know not only the minimum delay on the path to the destination but also the bandwidth available on it. Therefore, throughput, end-to-end delay, and routing overhead are the traditional performance metrics used to evaluate the performance of a routing protocol. To obtain additional information about a link, most quality-link metrics are based on calculating the loss probabilities of links by broadcasting probe packets. In this paper, we address the problem of including multiple routing metrics in existing routing packets that are broadcast through the network. We evaluate the efficiency of this approach with a modified version of the DSDV routing protocol in the ns-3 simulator.

  15. Sensor Networking Testbed with IEEE 1451 Compatibility and Network Performance Monitoring

    NASA Technical Reports Server (NTRS)

    Gurkan, Deniz; Yuan, X.; Benhaddou, D.; Figueroa, F.; Morris, Jonathan

    2007-01-01

    The design and implementation of a testbed for testing and verifying IEEE 1451-compatible sensor systems with network performance monitoring is of significant importance. The measurement of performance parameters, as well as the implementation of decision support systems, will enhance the understanding of sensor systems with plug-and-play capabilities. The paper presents the design aspects of such a testbed environment under development at the University of Houston in collaboration with NASA Stennis Space Center - SSST (Smart Sensor System Testbed).

  16. Simulation Modeling and Performance Evaluation of Space Networks

    NASA Technical Reports Server (NTRS)

    Jennings, Esther H.; Segui, John

    2006-01-01

    In space exploration missions, the coordinated use of spacecraft as communication relays increases the efficiency of the endeavors. To conduct trade-off studies of the performance and resource usage of different communication protocols and network designs, JPL designed a comprehensive extendable tool, the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE). The design and development of MACHETE began in 2000 and is constantly evolving. Currently, MACHETE contains Consultative Committee for Space Data Systems (CCSDS) protocol standards such as Proximity-1, Advanced Orbiting Systems (AOS), Packet Telemetry/Telecommand, Space Communications Protocol Specification (SCPS), and the CCSDS File Delivery Protocol (CFDP). MACHETE uses the Aerospace Corporation's Satellite Orbital Analysis Program (SOAP) to generate the orbital geometry information and contact opportunities. Matlab scripts provide the link characteristics. At the core of MACHETE is a discrete event simulator, QualNet. Delay Tolerant Networking (DTN) is an end-to-end architecture providing communication in and/or through highly stressed networking environments. Stressed networking environments include those with intermittent connectivity, large and/or variable delays, and high bit error rates. To provide its services, the DTN protocols reside at the application layer of the constituent internets, forming a store-and-forward overlay network. The key capabilities of the bundling protocols include custody-based reliability, the ability to cope with intermittent connectivity, the ability to take advantage of scheduled and opportunistic connectivity, and late binding of names to addresses. In this presentation, we report on the addition of MACHETE models needed to support DTN, namely the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance. We demonstrate the use of the DTN protocol.

  17. Performance analysis of FDDI network under frequent bidding requirements

    NASA Astrophysics Data System (ADS)

    Neo, L. K.; Cheng, T. H.; Subramanian, K. R.; Dubey, V. K.

    1993-05-01

    A new bidding scheme is described for the fiber distributed data interface (FDDI). An analysis of the throughput performance of an FDDI network under the assumption of heavy load is presented for a scheme that allows the target token rotation time (TTRT) to be bid for and adjusted frequently, as and when the access time requirements of synchronous traffic change. Our results show that better throughput performance is achievable under the new bidding scheme. It is also observed that although re-bidding is desirable, escalating and uncontrolled bidding intensity may incur undue overheads that result in unacceptable throughput degradation.

  18. Practical Performance Analysis for Multiple Information Fusion Based Scalable Localization System Using Wireless Sensor Networks.

    PubMed

    Zhao, Yubin; Li, Xiaofan; Zhang, Sha; Meng, Tianhui; Zhang, Yiwen

    2016-08-23

    In practical localization system design, researchers need to consider several aspects to make positioning efficient and effective, e.g., the available auxiliary information, sensing devices, equipment deployment and the environment. These practical concerns then translate into technical problems, e.g., the sequential position state propagation, the target-anchor geometry effect, Non-line-of-sight (NLOS) identification and the related prior information. It is necessary to construct an efficient framework that can exploit multiple sources of available information and guide the system design. In this paper, we propose a scalable method to analyze system performance based on the Cramér-Rao lower bound (CRLB), which can fuse all of the information adaptively. First, we use an abstract function to represent the entire wireless localization system model. The unknown vector of the CRLB then consists of two parts: the first part is the estimated vector, and the second part is the auxiliary vector, which helps improve the estimation accuracy. Accordingly, the Fisher information matrix is divided into two parts: the state matrix and the auxiliary matrix. Unlike a purely theoretical analysis, our CRLB can serve as a practical fundamental limit for a system that fuses multiple sources of information in a complicated environment, e.g., recursive Bayesian estimation based on the hidden Markov model, the map matching method and NLOS identification and mitigation methods. Thus, the theoretical results come closer to the real case. In addition, our method is more adaptable than other CRLBs when considering more unknown important factors. We use the proposed method to analyze a wireless sensor network-based indoor localization system. The influence of hybrid LOS/NLOS channels, building layout information and the relative height differences between the target and anchors is analyzed. It is demonstrated that our method exploits all of the available information for
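    The state/auxiliary partition of the Fisher information described above follows the standard block form; the notation below is assumed for illustration, not taken verbatim from the paper:

```latex
% Unknown vector split into estimated (state) and auxiliary parts:
%   \theta = [\theta_s^{\mathsf{T}}, \theta_a^{\mathsf{T}}]^{\mathsf{T}}
\[
\mathbf{J}(\boldsymbol{\theta}) =
\begin{bmatrix}
  \mathbf{J}_{ss} & \mathbf{J}_{sa} \\
  \mathbf{J}_{as} & \mathbf{J}_{aa}
\end{bmatrix},
\qquad
\mathrm{CRLB}(\boldsymbol{\theta}_s) =
\left( \mathbf{J}_{ss} - \mathbf{J}_{sa}\,\mathbf{J}_{aa}^{-1}\,\mathbf{J}_{as} \right)^{-1}.
\]
```

In this form, any informative auxiliary block (map matching, NLOS priors, height differences) tightens the bound on the state part through the Schur-complement term.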

  19. Practical Performance Analysis for Multiple Information Fusion Based Scalable Localization System Using Wireless Sensor Networks.

    PubMed

    Zhao, Yubin; Li, Xiaofan; Zhang, Sha; Meng, Tianhui; Zhang, Yiwen

    2016-01-01

    In practical localization system design, researchers need to consider several aspects to make the positioning efficient and effective, e.g., the available auxiliary information, sensing devices, equipment deployment and the environment. These practical concerns then turn into technical problems, e.g., the sequential position state propagation, the target-anchor geometry effect, the non-line-of-sight (NLOS) identification and the related prior information. It is necessary to construct an efficient framework that can exploit multiple sources of available information and guide the system design. In this paper, we propose a scalable method to analyze system performance based on the Cramér-Rao lower bound (CRLB), which can fuse all of the information adaptively. Firstly, we use an abstract function to represent all wireless localization system models. Then, the unknown vector of the CRLB consists of two parts: the first part is the estimated vector, and the second part is the auxiliary vector, which helps improve the estimation accuracy. Accordingly, the Fisher information matrix is divided into two parts: the state matrix and the auxiliary matrix. Unlike purely theoretical analyses, our CRLB can serve as a practical fundamental limit for systems that fuse multiple sources of information in complicated environments, e.g., recursive Bayesian estimation based on the hidden Markov model, the map matching method and the NLOS identification and mitigation methods. Thus, the theoretical results approach the real case more closely. In addition, our method is more adaptable than other CRLBs when more unknown important factors are considered. We use the proposed method to analyze a wireless sensor network-based indoor localization system. The influence of the hybrid LOS/NLOS channels, the building layout information and the relative height differences between the target and anchors are analyzed. It is demonstrated that our method exploits all of the available information for

  20. Practical Performance Analysis for Multiple Information Fusion Based Scalable Localization System Using Wireless Sensor Networks

    PubMed Central

    Zhao, Yubin; Li, Xiaofan; Zhang, Sha; Meng, Tianhui; Zhang, Yiwen

    2016-01-01

    In practical localization system design, researchers need to consider several aspects to make the positioning efficient and effective, e.g., the available auxiliary information, sensing devices, equipment deployment and the environment. These practical concerns then turn into technical problems, e.g., the sequential position state propagation, the target-anchor geometry effect, the non-line-of-sight (NLOS) identification and the related prior information. It is necessary to construct an efficient framework that can exploit multiple sources of available information and guide the system design. In this paper, we propose a scalable method to analyze system performance based on the Cramér–Rao lower bound (CRLB), which can fuse all of the information adaptively. Firstly, we use an abstract function to represent all wireless localization system models. Then, the unknown vector of the CRLB consists of two parts: the first part is the estimated vector, and the second part is the auxiliary vector, which helps improve the estimation accuracy. Accordingly, the Fisher information matrix is divided into two parts: the state matrix and the auxiliary matrix. Unlike purely theoretical analyses, our CRLB can serve as a practical fundamental limit for systems that fuse multiple sources of information in complicated environments, e.g., recursive Bayesian estimation based on the hidden Markov model, the map matching method and the NLOS identification and mitigation methods. Thus, the theoretical results approach the real case more closely. In addition, our method is more adaptable than other CRLBs when more unknown important factors are considered. We use the proposed method to analyze a wireless sensor network-based indoor localization system. The influence of the hybrid LOS/NLOS channels, the building layout information and the relative height differences between the target and anchors are analyzed. It is demonstrated that our method exploits all of the available information for
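The split of the unknown vector into an estimated part and an auxiliary part, as described in this abstract, corresponds in the standard CRLB formalism to a Schur complement of the partitioned Fisher information matrix. A minimal numerical sketch of that computation (the function name and the toy 3×3 matrix are illustrative, not taken from the paper):

```python
import numpy as np

def crlb_with_auxiliary(J, k):
    """CRLB sub-matrix for the first k (estimated) parameters when the
    remaining entries of the unknown vector are auxiliary variables.

    With J partitioned as [[J_ss, J_sa], [J_as, J_aa]], the equivalent
    Fisher information for the state block is the Schur complement
    J_ss - J_sa @ inv(J_aa) @ J_as, and the CRLB is its inverse."""
    J_ss, J_sa = J[:k, :k], J[:k, k:]
    J_as, J_aa = J[k:, :k], J[k:, k:]
    J_eq = J_ss - J_sa @ np.linalg.inv(J_aa) @ J_as
    return np.linalg.inv(J_eq)

# Toy FIM: 2 position coordinates plus 1 auxiliary parameter (e.g. an
# NLOS bias). Numbers are arbitrary but positive definite.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
J = A @ A.T + 3 * np.eye(3)

bound_unknown_aux = crlb_with_auxiliary(J, 2)   # auxiliary unknown
bound_known_aux = np.linalg.inv(J[:2, :2])      # auxiliary known
```

Having to estimate unknown auxiliary parameters can only inflate the bound relative to the case where they are known, which is why fusing prior information about them tightens the achievable accuracy.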

  1. Social value of high bandwidth networks: creative performance and education.

    PubMed

    Mansell, Robin; Foresta, Don

    2016-03-01

    This paper considers limitations of existing network technologies for distributed theatrical performance in the creative arts and for symmetrical real-time interaction in online learning environments. It examines the experience of a multidisciplinary research consortium that aimed to introduce a solution to latency and other network problems experienced by users in these sectors. The solution builds on the Multicast protocol, Access Grid, an environment supported by very high bandwidth networks. The solution is intended to offer high-quality image and sound, interaction with other network platforms, maximum user control of multipoint transmissions, and open programming tools that are flexible and modifiable for specific uses. A case study is presented drawing upon an extended period of participant observation by the authors. This provides a basis for an examination of the challenges of promoting technological innovation in a multidisciplinary project. We highlight the kinds of technical advances and cultural and organizational changes that would be required to meet demanding quality standards, the way a research consortium planned to engage in experimentation and learning, and factors making it difficult to achieve an open platform that is responsive to the needs of users in the creative arts and education sectors. PMID:26809576

  3. OPTIMAL CONFIGURATION OF A COMMAND AND CONTROL NETWORK: BALANCING PERFORMANCE AND RECONFIGURATION CONSTRAINTS

    SciTech Connect

    L. DOWELL

    1999-08-01

    The optimization of the configuration of communications and control networks is important for assuring the reliability and performance of the networks. This paper presents techniques for determining the optimal configuration for such a network in the presence of communication and connectivity constraints, including reconfiguration to restore connectivity to a data-fusion network following the failure of a network component.

  4. A case study using support vector machines, neural networks and logistic regression in a GIS to identify wells contaminated with nitrate-N

    NASA Astrophysics Data System (ADS)

    Dixon, Barnali

    2009-09-01

    Accurate and inexpensive identification of potentially contaminated wells is critical for water resources protection and management. The objectives of this study are to 1) assess the suitability of approximation tools such as neural networks (NN) and support vector machines (SVM) integrated in a geographic information system (GIS) for identifying contaminated wells and 2) use logistic regression and feature selection methods to identify significant variables for transporting contaminants in and through the soil profile to the groundwater. Fourteen GIS-derived soil hydrogeologic and land-use parameters were used as initial inputs in this study. Well water quality data (nitrate-N) from 6,917 wells provided by the Florida Department of Environmental Protection (USA) were used as an output target class. The use of the logistic regression and feature selection methods reduced the number of input variables to nine. Receiver operating characteristic (ROC) curves were used for evaluation of these approximation tools. Results showed superior performance with the NN as compared to SVM, especially on training data, while testing results were comparable. Feature selection did not improve accuracy; however, it helped increase the sensitivity or true positive rate (TPR). Thus, a higher TPR was obtainable with fewer variables.
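For readers who want to reproduce the flavor of this kind of comparison, a minimal scikit-learn sketch on synthetic data (the Florida well dataset is not reproduced here; the sample counts and feature dimensions below are stand-ins for the nine selected predictors and the contaminated/uncontaminated well label):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the 9 selected soil/land-use predictors
# and the binary contamination label.
X, y = make_classification(n_samples=600, n_features=9, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# ROC AUC on held-out wells, mirroring the ROC-based evaluation above.
auc_svm = roc_auc_score(y_te, svm.decision_function(X_te))
auc_logit = roc_auc_score(y_te, logit.decision_function(X_te))
```

Comparing held-out AUC, rather than training accuracy, is what makes the "comparable testing results" observation in the abstract meaningful.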

  5. Network DEA: an application to analysis of academic performance

    NASA Astrophysics Data System (ADS)

    Saniee Monfared, Mohammad Ali; Safi, Mahsa

    2013-05-01

    As governmental subsidies to universities have been declining in recent years, sustaining excellence in academic performance and more efficient use of resources have become important issues for university stakeholders. To assess academic performance and the utilization of resources, two important issues need to be addressed, i.e., a capable methodology and a set of good performance indicators, as we consider in this paper. We propose a set of performance indicators to enable efficiency analysis of academic activities and apply a novel network DEA structure to account for subfunctional efficiencies, such as teaching quality and research productivity, as well as the overall efficiency. We tested our approach on the efficiency analysis of academic colleges at Alzahra University in Iran.

  6. Performance analysis of reactive congestion control for ATM networks

    NASA Astrophysics Data System (ADS)

    Kawahara, Kenji; Oie, Yuji; Murata, Masayuki; Miyahara, Hideo

    1995-05-01

    In ATM networks, preventive congestion control is widely recognized for efficiently avoiding congestion, and it is implemented by a conjunction of connection admission control and usage parameter control. However, congestion may still occur because of unpredictable statistical fluctuation of traffic sources even when preventive control is performed in the network. In this paper, we study another kind of congestion control, i.e., reactive congestion control, in which each source changes its cell emitting rate adaptively to the traffic load at the switching node (or at the multiplexer). Our intention is that, by incorporating such a congestion control method in ATM networks, more efficient congestion control is established. We develop an analytical model, and carry out an approximate analysis of reactive congestion control algorithm. Numerical results show that the reactive congestion control algorithms are very effective in avoiding congestion and in achieving the statistical gain. Furthermore, the binary congestion control algorithm with pushout mechanism is shown to provide the best performance among the reactive congestion control algorithms treated here.

  7. Introducing Vectors.

    ERIC Educational Resources Information Center

    Roche, John

    1997-01-01

    Suggests an approach to teaching vectors that promotes active learning through challenging questions addressed to the class, as opposed to subtle explanations. Promotes introducing vector graphics with concrete examples, beginning with an explanation of the displacement vector. Also discusses artificial vectors, vector algebra, and unit vectors.…

  8. High-performance, scalable optical network-on-chip architectures

    NASA Astrophysics Data System (ADS)

    Tan, Xianfang

    The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, which is called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic wired electronic networks-on-chip (NoC) have become a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. The optical network-on-chip (ONoC) has been proposed as a promising alternative paradigm to the electronic NoC, with the benefits of optical signaling such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures, and the contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed, along with a method for developing a GWOR of any size. GWOR is a scalable non-blocking ONoC architecture with a simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. A redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of the GWOR unchanged by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed, in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses fewer electronic routers and links than its counterpart of an electronic BFT-based NoC. It takes the advantages of

  9. Vector Reflectometry in a Beam Waveguide

    NASA Technical Reports Server (NTRS)

    Eimer, J. R.; Bennett, C. L.; Chuss, D. T.; Wollack, E. J.

    2011-01-01

    We present a one-port calibration technique for characterization of beam waveguide components with a vector network analyzer. This technique involves using a set of known delays to separate the responses of the instrument and the device under test. We demonstrate this technique by measuring the reflected performance of a millimeter-wave variable-delay polarization modulator.

  10. Note: Vector reflectometry in a beam waveguide

    SciTech Connect

    Eimer, J. R.; Bennett, C. L.; Chuss, D. T.; Wollack, E. J.

    2011-08-15

    We present a one-port calibration technique for characterization of beam waveguide components with a vector network analyzer. This technique involves using a set of known delays to separate the responses of the instrument and the device under test. We demonstrate this technique by measuring the reflected performance of a millimeter-wave variable-delay polarization modulator.
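The one-port error model underlying calibrations of this kind is standard VNA practice: the raw measurement is a bilinear function of the device reflection coefficient, with three complex error terms that a set of known standards determines. A numerical sketch using three reflective "known delay" (offset-short) standards; the error terms, delays, and device values below are invented for illustration, not the paper's data:

```python
import numpy as np

def one_port_cal(gammas, measured):
    """Solve the 3-term one-port error model
        M = e00 + t*G / (1 - e11*G)
    from three standards with known reflection coefficients G.
    Returns (a, b, c) such that M = a + G*(b + c*M),
    where a = e00, c = e11, b = t - a*c."""
    A = np.array([[1, g, g * m] for g, m in zip(gammas, measured)])
    return np.linalg.solve(A, np.array(measured))

def correct(M, abc):
    """Recover the device reflection coefficient from a raw measurement."""
    a, b, c = abc
    return (M - a) / (b + c * M)

# Hypothetical error terms and a forward model standing in for the VNA.
e00, e11, t = 0.05 + 0.02j, 0.15 - 0.1j, 0.92 + 0.0j
def measure(G):
    return e00 + t * G / (1 - e11 * G)

# Three offset-short standards: known delays rotate the reflection phase.
thetas = [0.0, np.pi / 3, 2 * np.pi / 3]
stds = [-np.exp(-2j * th) for th in thetas]
abc = one_port_cal(stds, [measure(G) for G in stds])

G_dut = 0.3 * np.exp(1j * 0.8)        # "unknown" device reflection
G_rec = correct(measure(G_dut), abc)  # corrected measurement
```

With the three error terms solved, the instrument response is removed and the corrected value matches the device's true reflection coefficient.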

  11. A simulation study of TCP performance in ATM networks

    SciTech Connect

    Chien Fang; Chen, Helen; Hutchins, J.

    1994-08-01

    This paper presents a simulation study of TCP performance over congested ATM local area networks. We simulated a variety of schemes for congestion control for ATM LANs, including a simple cell-drop, a credit-based flow control scheme that back-pressures individual VCs, and two selective cell-drop schemes. Our simulation results for congested ATM LANs show the following: (1) TCP performance is poor under simple cell-drop, (2) the selective cell-drop schemes increase effective link utilization and result in higher TCP throughputs than the simple cell-drop scheme, and (3) the credit-based flow control scheme eliminates cell loss and achieves maximum performance and effective link utilization.

  12. High-Performance Tools: Nevada's Experiences Growing Network Capability

    NASA Astrophysics Data System (ADS)

    Biasi, G.; Smith, K. D.; Slater, D.; Preston, L.; Tibuleac, I.

    2007-05-01

    Like most regional seismic networks, the Nevada Seismic Network relies on a combination of software components to perform its mission. Core components for automatic network operation are from Antelope, a real-time environmental monitoring software system from Boulder Real-Time Technologies (BRTT). We configured the detector for multiple filtering bands, generally to distinguish local, regional, and teleseismic phases. The associator can use all or a subset of detections for each location grid. Presently we use detailed grids in the Reno-Carson City, Las Vegas, and Yucca Mountain areas, a large regional grid and a teleseismic grid, with a configurable order of precedence among solutions. Incorporating USArray stations into the network was straightforward. Locations for local events are available in 30-60 seconds, and relocations are computed every 20 seconds. Testing indicates that relocations could be computed every few seconds or less if desired on a modest Sun server. Successive locations may be kept in the database, or criteria applied to select a single preferred location. New code developed by BRTT partially in response to an NSL request automatically launches a gradient-based relocator to refine locations and depths. Locations are forwarded to QDDS and other notification mechanisms. We also use Antelope tools for earthquake picking and analysis and for database viewing and maintenance. We have found the programming interfaces supplied with Antelope instrumental as we work toward ANSS system performance requirements. For example, the Perl language interface to the real-time Object Ring Buffer (ORB) was used to reduce the time to produce ShakeMaps to the present value of ~3 minutes. Hypoinverse was incorporated into a real-time system with Perl ORB access tools. Using the Antelope PHP interface, we now have off-site review capabilities for events and ShakeMaps from hand-held internet devices. PHP and Perl tools were used to develop a remote capability, now

  13. Network and User-Perceived Performance of Web Page Retrievals

    NASA Technical Reports Server (NTRS)

    Kruse, Hans; Allman, Mark; Mallasch, Paul

    1998-01-01

    The development of the HTTP protocol has been driven by the need to improve the network performance of the protocol by allowing the efficient retrieval of multiple parts of a web page without the need for multiple simultaneous TCP connections between a client and a server. We suggest that the retrieval of multiple page elements sequentially over a single TCP connection may result in a degradation of the perceived performance experienced by the user. We attempt to quantify this perceived degradation through the use of a model which combines a web retrieval simulation and an analytical model of TCP operation. Starting with the current HTTP/1.1 specification, we first suggest a client-side heuristic to improve the perceived transfer performance. We show that the perceived speed of the page retrieval can be increased without sacrificing data transfer efficiency. We then propose a new client/server extension to the HTTP/1.1 protocol to allow for the interleaving of page element retrievals. We finally address the issue of the display of advertisements on web pages, and in particular suggest a number of mechanisms which can make efficient use of IP multicast to send advertisements to a number of clients within the same network.

  14. High-Performance, Semi-Interpenetrating Polymer Network

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H.; Lowther, Sharon E.; Smith, Janice Y.; Cannon, Michelle S.; Whitehead, Fred M.; Ely, Robert M.

    1992-01-01

    High-performance polymer made by new synthesis in which one or more easy-to-process, but brittle, thermosetting polyimides combined with one or more tough, but difficult-to-process, linear thermoplastics to yield semi-interpenetrating polymer network (semi-IPN) having combination of easy processability and high tolerance to damage. Two commercially available resins combined to form tough, semi-IPN called "LaRC-RP49." Displays improvements in toughness and resistance to microcracking. LaRC-RP49 has potential as high-temperature matrix resin, adhesive, and molding resin. Useful in aerospace, automotive, and electronic industries.

  15. Supporting Proactive Application Event Notification to Improve Sensor Network Performance

    NASA Astrophysics Data System (ADS)

    Merlin, Christophe J.; Heinzelman, Wendi B.

    As wireless sensor networks gain in popularity, many deployments are posing new challenges due to their diverse topologies and resource constraints. Previous work has shown the advantage of adapting protocols based on current network conditions (e.g., link status, neighbor status), in order to provide the best service in data transport. Protocols can similarly benefit from adaptation based on current application conditions. In particular, if proactively informed of the status of active queries in the network, protocols can adjust their behavior accordingly. In this paper, we propose a novel approach to provide such proactive application event notification to all interested protocols in the stack. Specifically, we use the existing interfaces and event signaling structure provided by the X-Lisa (Cross-layer Information Sharing Architecture) protocol architecture, augmenting this architecture with a Middleware Interpreter for managing application queries and performing event notification. Using this approach, we observe gains in Quality of Service of up to 40% in packet delivery ratios and a 75% decrease in packet delivery delay for the tested scenario.

  16. Deploying optical performance monitoring in TeliaSonera's network

    NASA Astrophysics Data System (ADS)

    Svensson, Torbjorn K.; Karlsson, Per-Olov E.

    2004-09-01

    This paper reports on the first steps taken by TeliaSonera towards deploying optical performance monitoring (OPM) in the company's transport network, in order to assure increasingly reliable communications on the physical layer. The big leap, a world-wide deployment of OPM, still awaits a breakthrough. Very obvious benefits from using OPM are required to change this stalemate. Reasons may be the anaemic economy of many telecom operators, shareholders' pushing for short-term payback, and reluctance to add complexity and to integrate new system management. Technically, legacy digital systems already have a proven ability of monitoring, so adding OPM to the dense wavelength division multiplexed (DWDM) systems in operation should be judged with care. Duly installed, today's DWDM systems do their job well, owing to rigorous rules for link design and a prosperous power budget, a power management inherent to the system, and reliable supplier support. So what may bring this stalemate to an end? A growing number of applications of OPM, for enhancing network operation and maintenance and enabling new customer services, will most certainly bring momentum to a change. The first employment of OPM in TeliaSonera's network is launched this year, 2004. The preparedness for future OPM-dependent services and transport technologies will thereby be granted.

  17. Support vector machine-an alternative to artificial neuron network for water quality forecasting in an agricultural nonpoint source polluted river?

    PubMed

    Liu, Mei; Lu, Jun

    2014-09-01

    Water quality forecasting in agricultural drainage river basins is difficult because of the complicated nonpoint source (NPS) pollution transport processes and river self-purification processes involved in highly nonlinear problems. Artificial neural network (ANN) and support vector machine (SVM) models were developed to predict total nitrogen (TN) and total phosphorus (TP) concentrations for any location of the river polluted by agricultural NPS pollution in eastern China. River flow, water temperature, flow travel time, rainfall, dissolved oxygen, and upstream TN or TP concentrations were selected as initial inputs of the two models. Monthly, bimonthly, and trimonthly datasets were selected to train the two models, respectively, and the same monthly dataset which had not been used for training was chosen to test the models in order to compare their generalization performance. Trial-and-error analysis and genetic algorithms (GA) were employed to optimize the parameters of the ANN and SVM models, respectively. The results indicated that the proposed SVM models showed better generalization ability by avoiding overtraining and optimizing fewer parameters based on the structural risk minimization (SRM) principle. Furthermore, both the TN and TP SVM models trained by trimonthly datasets achieved greater forecasting accuracy than the corresponding ANN models. Thus, SVM models can be a powerful alternative method, offering an efficient and economic tool to accurately predict water quality with low risk. The sensitivity analyses of the two models indicated that decreasing upstream input concentrations during the dry season and NPS emissions along the reach during the average or flood season should be an effective way to improve Changle River water quality. If the necessary water quality and hydrology data, even only trimonthly data, are available, the SVM methodology developed here can easily be applied to other NPS-polluted rivers.
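A rough sketch of the SVM-regression workflow this abstract describes, with a plain grid search standing in for the paper's genetic-algorithm parameter optimization and synthetic data standing in for the Changle River measurements (all variable names and numbers below are invented for illustration):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(1)
# Synthetic stand-in: 6 drivers (flow, temperature, travel time, rainfall,
# dissolved oxygen, upstream TN) -> downstream TN with noise.
X = rng.uniform(0, 1, size=(300, 6))
y = (2.0 * X[:, 5] + 0.5 * X[:, 0] - 0.8 * X[:, 1]
     + 0.05 * rng.standard_normal(300))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Grid search over the SVR hyperparameters (the paper tunes these with GA).
grid = GridSearchCV(
    SVR(kernel="rbf"),
    {"C": [1, 10, 100], "gamma": [0.1, 1.0], "epsilon": [0.01, 0.1]},
    cv=5,
)
grid.fit(X_tr, y_tr)
r2 = grid.score(X_te, y_te)   # R^2 on the held-out "monthly" test set
```

Scoring only on data never seen during tuning mirrors the paper's use of a withheld monthly dataset to compare generalization.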

  18. A Generic Framework of Performance Measurement in Networked Enterprises

    NASA Astrophysics Data System (ADS)

    Kim, Duk-Hyun; Kim, Cheolhan

    Performance measurement (PM) is essential for managing networked enterprises (NEs) because it greatly affects the effectiveness of collaboration among members of NE.PM in NE requires somewhat different approaches from PM in a single enterprise because of heterogeneity, dynamism, and complexity of NE’s. This paper introduces a generic framework of PM in NE (we call it NEPM) based on the Balanced Scorecard (BSC) approach. In NEPM key performance indicators and cause-and-effect relationships among them are defined in a generic strategy map. NEPM could be applied to various types of NEs after specializing KPIs and relationships among them. Effectiveness of NEPM is shown through a case study of some Korean NEs.

  19. The Deep Space Network: Noise temperature concepts, measurements, and performance

    NASA Technical Reports Server (NTRS)

    Stelzried, C. T.

    1982-01-01

    The use of higher operational frequencies is being investigated for improved performance of the Deep Space Network. Noise temperature and noise figure concepts are used to describe the noise performance of these receiving systems. The ultimate sensitivity of a linear receiving system is limited by the thermal noise of the source and the quantum noise of the receiver amplifier. The atmosphere, antenna and receiver amplifier of an Earth station receiving system are analyzed separately and as a system. Performance evaluation and error analysis techniques are investigated. System noise temperature and antenna gain parameters are combined to give an overall system figure of merit G/T. Radiometers are used to perform radio "star" antenna and system sensitivity calibrations. These are analyzed and the performance of several types compared to an idealized total power radiometer. The theory of radiative transfer is applicable to the analysis of transmission medium loss. A power series solution in terms of the transmission medium loss is given for the solution of the noise temperature contribution.
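The system noise temperature and G/T figure of merit mentioned here follow the standard Friis cascade formula, in which each stage's noise contribution is divided by the gain ahead of it. A small sketch with illustrative numbers (these are generic Earth-station values, not DSN specifications):

```python
import math

def cascaded_noise_temperature(stages):
    """Friis cascade: stages is a list of (T_e_kelvin, gain_linear),
    each stage's noise referred back to the system input."""
    total, gain = 0.0, 1.0
    for t_e, g in stages:
        total += t_e / gain
        gain *= g
    return total

# Illustrative receive chain:
T_antenna = 25.0                        # K: antenna + atmosphere contribution
stages = [(12.0, 10 ** (30 / 10)),      # LNA: 12 K noise temp, 30 dB gain
          (150.0, 10 ** (20 / 10))]     # downconverter: 150 K, 20 dB gain
T_sys = T_antenna + cascaded_noise_temperature(stages)   # 37.15 K

G_dBi = 60.0                            # antenna gain in dBi
g_over_t = G_dBi - 10 * math.log10(T_sys)   # figure of merit G/T in dB/K
```

Note how the downconverter's 150 K contributes only 0.15 K once divided by the LNA's gain, which is why the first amplifier dominates receiving-system sensitivity.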

  20. Runtime Performance and Virtual Network Control Alternatives in VM-Based High-Fidelity Network Simulations

    SciTech Connect

    Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J

    2012-01-01

    In prior work (Yoginath and Perumalla, 2011; Yoginath, Perumalla and Henz, 2012), the motivation, challenges and issues were articulated in favor of virtual time ordering of Virtual Machines (VMs) in network simulations hosted on multi-core machines. Two major components in the overall virtualization challenge are (1) virtual timeline establishment and scheduling of VMs, and (2) virtualization of inter-VM communication. Here, we extend prior work by presenting scaling results for the first component, with experiment results on up to 128 VMs scheduled in virtual time order on a single 12-core host. We also explore the solution space of design alternatives for the second component, and present performance results from a multi-threaded, multi-queue implementation of inter-VM network control for synchronized execution with VM scheduling, incorporated in our NetWarp simulation system.

  1. A high-performance MPI implementation on a shared-memory vector supercomputer.

    SciTech Connect

    Gropp, W.; Lusk, E.; Mathematics and Computer Science

    1997-01-01

    In this article we recount the sequence of steps by which MPICH, a high-performance, portable implementation of the Message-Passing Interface (MPI) standard, was ported to the NEC SX-4, a high-performance parallel supercomputer. Each step in the sequence raised issues that are important for shared-memory programming in general and shed light on both MPICH and the SX-4. The result is a low-latency, very high bandwidth implementation of MPI for the NEC SX-4. In the process, MPICH was also improved in several general ways.

  2. An easily fabricated high performance ionic polymer based sensor network

    NASA Astrophysics Data System (ADS)

    Zhu, Zicai; Wang, Yanjie; Hu, Xiaopin; Sun, Xiaofei; Chang, Longfei; Lu, Pin

    2016-08-01

    Ionic polymer materials can generate an electrical potential from ion migration under an external force. For traditional ionic polymer metal composite sensors, the output voltage is very small (a few millivolts), and the fabrication process is complex and time-consuming. This letter presents an ionic polymer based network of pressure sensors which is easily and quickly constructed, and which can generate high voltage. A 3 × 3 sensor array was prepared by casting Nafion solution directly over copper wires. Under applied pressure, two different levels of voltage response were observed among the nine nodes in the array. For the group producing the higher level, peak voltages reached as high as 25 mV. Computational stress analysis revealed the physical origin of the different responses. High voltages resulting from the stress concentration and asymmetric structure can be further utilized to modify subsequent designs to improve the performance of similar sensors.

  3. Improving the performance of algorithms to find communities in networks.

    PubMed

    Darst, Richard K; Nussinov, Zohar; Fortunato, Santo

    2014-03-01

    Most algorithms to detect communities in networks typically work without any information on the cluster structure to be found, as one has no a priori knowledge of it, in general. Not surprisingly, knowing some features of the unknown partition could help its identification, yielding an improvement of the performance of the method. Here we show that, if the number of clusters was known beforehand, standard methods, like modularity optimization, would considerably gain in accuracy, mitigating the severe resolution bias that undermines the reliability of the results of the original unconstrained version. The number of clusters can be inferred from the spectra of the recently introduced nonbacktracking and flow matrices, even in benchmark graphs with realistic community structure. The limit of such a two-step procedure is the overhead of the computation of the spectra.
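The two-step procedure this abstract outlines can be illustrated numerically: build the nonbacktracking (Hashimoto) matrix of the graph and count the real eigenvalues lying outside the bulk radius sqrt(λ1), which estimates the number of communities. A toy sketch under that criterion (the graph, tolerances, and function names are illustrative, not from the paper):

```python
import numpy as np
from itertools import combinations

def nonbacktracking_matrix(edges):
    """Nonbacktracking (Hashimoto) matrix over directed edges (darts)."""
    darts = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    index = {d: i for i, d in enumerate(darts)}
    B = np.zeros((len(darts), len(darts)))
    for u, v in darts:
        for v2, w in darts:
            if v2 == v and w != u:      # continue the walk, no U-turn
                B[index[(u, v)], index[(v2, w)]] = 1.0
    return B

def estimate_num_clusters(edges):
    """Count real eigenvalues outside the bulk radius sqrt(lambda_1);
    this count estimates the number of communities."""
    ev = np.linalg.eigvals(nonbacktracking_matrix(edges))
    lam1 = ev.real.max()                # Perron eigenvalue (real, largest)
    real_ev = ev[np.abs(ev.imag) < 1e-6].real
    return int(np.sum(real_ev > np.sqrt(lam1)))

# Two 8-cliques joined by a single bridge edge: two planted communities.
edges = (list(combinations(range(8), 2))
         + list(combinations(range(8, 16), 2))
         + [(0, 8)])
k_est = estimate_num_clusters(edges)
```

The inferred count could then be handed to a constrained method such as modularity optimization with a fixed number of clusters, which is the two-step procedure the abstract evaluates.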

  4. Implementation and performance evaluation of mobile ad hoc network for Emergency Telemedicine System in disaster areas.

    PubMed

    Kim, J C; Kim, D Y; Jung, S M; Lee, M H; Kim, K S; Lee, C K; Nah, J Y; Lee, S H; Kim, J H; Choi, W J; Yoo, S K

    2009-01-01

    We have previously developed an Emergency Telemedicine System (ETS), a robust system using heterogeneous networks. In disaster areas, however, the ETS cannot be used if the primary network channel is disabled due to damage to the network infrastructure. We therefore designed network management software for a disaster communication network combining a Mobile Ad hoc Network (MANET) and a Wireless LAN (WLAN). This software maintains routes to a Backbone Gateway Node in dynamic network topologies. In this paper, we introduce the proposed disaster communication network with its management software and evaluate its performance using the ETS between a Medical Center and simulated disaster areas. We also present the results of a network performance analysis which establishes the feasibility of actual telemedicine service in disaster areas via MANET and mobile networks (e.g., HSDPA, WiBro).

  5. Do plant viruses facilitate their aphid vectors by inducing symptoms that alter behavior and performance?

    PubMed

    Hodge, Simon; Powell, Glen

    2008-12-01

    Aphids can respond both positively and negatively to virus-induced modifications of the shared host plant. It can be speculated that viruses dependent on aphids for their transmission might evolve to induce changes in the host plant that attract aphids and improve their performance, subsequently enhancing the success of the pathogen itself. We studied how pea aphids [Acyrthosiphon pisum (Harris)] responded to infection of tic beans (Vicia faba L.) by three viruses with varying degrees of dependence on this aphid for their transmission: pea enation mosaic virus (PEMV), bean yellow mosaic virus (BYMV), and broad bean mottle virus (BBMV). BYMV has a nonpersistent mode of transmission by aphids, whereas PEMV is transmitted in a circulative-persistent manner. BBMV is not aphid transmitted. When reared on plants infected by PEMV, no changes in aphid survival, growth, or reproductive performance were observed, whereas infection of beans by the other aphid-dependent virus, BYMV, actually caused a reduction in aphid survival in some assays. None of the viruses induced A. pisum to increase production of winged progeny, and aphids settled preferentially on leaf tissue from plants infected by all three viruses, the likely mechanism being visual responses to yellowing of foliage. Thus, in this system, the attractiveness of an infected host plant and its quality in terms of aphid growth and reproduction were not related to the pathogen's dependence on the aphid for transmission to new hosts.

  6. Preliminary performance of a vertical-attitude takeoff and landing, supersonic cruise aircraft concept having thrust vectoring integrated into the flight control system

    NASA Technical Reports Server (NTRS)

    Robins, A. W.; Beissner, F. L., Jr.; Domack, C. S.; Swanson, E. E.

    1985-01-01

    A performance study was made of a vertical-attitude takeoff and landing (VATOL) supersonic cruise aircraft concept having thrust vectoring integrated into the flight control system. The characteristics considered were aerodynamics, weight, balance, and performance. Preliminary results indicate that high levels of supersonic aerodynamic performance can be achieved. Further, with the assumption of an advanced (1985 technology readiness) low-bypass-ratio turbofan engine and advanced structures, excellent mission performance capability is indicated.

  7. Social Networks Use, Loneliness and Academic Performance among University Students

    ERIC Educational Resources Information Center

    Stankovska, Gordana; Angelkovska, Slagana; Grncarovska, Svetlana Pandiloska

    2016-01-01

    The world is extensively changed by Social Networks Sites (SNSs) on the Internet. A large number of children and adolescents in the world have access to the internet and are exposed to the internet at a very early age. Most of them use the Social Networks Sites with the purpose of exchanging academic activities and developing a social network all…

  8. Networks: A Route to Improving Performance in Manufacturing SMEs

    ERIC Educational Resources Information Center

    Coleman, J.

    2003-01-01

    Perceived as important contributors to economic growth, network and cluster groups are currently receiving much attention. The same may be said of SMEs. But practical and theoretical perspectives indicate that SMEs, and particularly the owner-managers, place little value on networks and have only limited networking resources. Consequently, they do…

  9. Network's cardiology data help member groups benchmark performance, market services.

    PubMed

    1997-11-01

    Data-driven cardiac network improves outcomes, reduces costs. Thirty-seven high-volume network member hospitals are using detailed demographic, procedure, and outcomes data in benchmarking and marketing efforts, and network physicians are using the aggregate data on 120,000 angioplasty and bypass procedures in research studies. Here are the details, plus sample reports.

  10. Performance of demand assignment TDMA and multicarrier TDMA satellite networks

    NASA Astrophysics Data System (ADS)

    Jabbari, Bijan; McDysan, David

    1992-02-01

    The authors develop an analytical model of satellite communication networks using time-division multiple-access (TDMA) and multiple-carrier TDMA (MC-TDMA) systems to support circuit-switched traffic. The model defines the functions required to implement fixed assignments (FAs), variable destinations (VDs), and demand assignments (DAs). The authors describe a general system model for the various allocation schemes and traffic activity. They define analytical expressions for the blocking and freeze-out probabilities. This is followed by the derivation of the satellite capacity requirements at a specified performance level for FA, VD, and DA systems with and without digital speech interpolation. The analysis of disjoint pools (DP) and combined pools (CP) in DA systems is presented and attention is given to MC-TDMA with limited connectivity demand assignment. Expressions for the required satellite capacity for specified traffic and performance are derived along with numerical results. The degree of complexity and implementation alternatives for the various allocation schemes are considered.
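For a fixed-assignment circuit pool with Poisson arrivals, the blocking probability analyzed above is classically given by the Erlang B formula. A minimal sketch using the standard stable recursion (not the authors' full model, which also covers freeze-out probabilities and digital speech interpolation):

```python
def erlang_b(traffic_erlangs: float, channels: int) -> float:
    """Erlang B blocking probability via the standard stable recursion:
    B(A, 0) = 1;  B(A, n) = A*B(A, n-1) / (n + A*B(A, n-1)).
    """
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

# 2 erlangs offered to a 2-channel pool:
# B = (2**2/2!) / (1 + 2 + 2**2/2!) = 0.4
print(round(erlang_b(2.0, 2), 3))
```

The recursion is preferred over the closed-form ratio of factorials because it stays numerically stable for large channel counts, which matters when dimensioning satellite capacity against a target blocking level.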

  11. Performance Improvement in Geographic Routing for Vehicular Ad Hoc Networks

    PubMed Central

    Kaiwartya, Omprakash; Kumar, Sushil; Lobiyal, D. K.; Abdullah, Abdul Hanan; Hassan, Ahmed Nazar

    2014-01-01

    Geographic routing is one of the most investigated themes by researchers for reliable and efficient dissemination of information in Vehicular Ad Hoc Networks (VANETs). Recently, different Geographic Distance Routing (GEDIR) protocols have been suggested in the literature. These protocols focus on reducing the forwarding region towards destination to select the Next Hop Vehicles (NHV). Most of these protocols suffer from the problem of elevated one-hop link disconnection, high end-to-end delay and low throughput even at normal vehicle speed in high vehicle density environment. This paper proposes a Geographic Distance Routing protocol based on Segment vehicle, Link quality and Degree of connectivity (SLD-GEDIR). The protocol selects a reliable NHV using the criteria segment vehicles, one-hop link quality and degree of connectivity. The proposed protocol has been simulated in NS-2 and its performance has been compared with the state-of-the-art protocols: P-GEDIR, J-GEDIR and V-GEDIR. The empirical results clearly reveal that SLD-GEDIR has lower link disconnection and end-to-end delay, and higher throughput as compared to the state-of-the-art protocols. It should be noted that the performance of the proposed protocol is preserved irrespective of vehicle density and speed. PMID:25429415
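The next-hop selection criteria described above can be illustrated with a toy scoring function that combines progress toward the destination, link quality, and degree of connectivity. The weights and neighbor encoding below are hypothetical illustrations, not taken from SLD-GEDIR:

```python
import math

def select_next_hop(current, destination, neighbors, w=(0.5, 0.3, 0.2)):
    """Score one-hop neighbors by geographic progress toward the
    destination, link quality, and degree of connectivity (each
    normalized to [0, 1]); return the highest-scoring neighbor.
    Each neighbor is a dict with 'pos' (x, y), 'link_quality', 'degree'.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    d_cur = dist(current, destination)
    max_deg = max(nb["degree"] for nb in neighbors) or 1
    best, best_score = None, -1.0
    for nb in neighbors:
        progress = max(0.0, (d_cur - dist(nb["pos"], destination)) / d_cur)
        score = (w[0] * progress
                 + w[1] * nb["link_quality"]
                 + w[2] * nb["degree"] / max_deg)
        if score > best_score:
            best, best_score = nb, score
    return best

neighbors = [
    {"id": "v1", "pos": (50, 0), "link_quality": 0.9, "degree": 4},
    {"id": "v2", "pos": (80, 0), "link_quality": 0.4, "degree": 2},
]
print(select_next_hop((0, 0), (100, 0), neighbors)["id"])  # prints "v1"
```

Note how the well-connected, reliable vehicle v1 wins over v2 even though v2 makes more geographic progress; pure distance-greedy GEDIR variants would pick v2 and risk the one-hop link disconnections the paper targets.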

  14. Clinical Performance of a New Biomimetic Double Network Material

    PubMed Central

    Dirxen, Christine; Blunck, Uwe; Preissner, Saskia

    2013-01-01

    Background: Dental ceramics have developed rapidly in recent years. However, the focus was placed on the hardness and strength of the restorative materials, resulting in high antagonistic tooth wear. This is critical for patients with bruxism. Objectives: The purpose of this study was to evaluate the clinical performance of a new double-network hybrid material for non-invasive treatment approaches. Material and Methods: The new approach of the material tested was to modify ceramics to create a biomimetic material that has physical properties similar to those of dentin and enamel and is still as strong as conventional ceramics. Results: The produced crowns had a thickness ranging from 0.5 to 1.5 mm. To evaluate the clinical performance and durability of the crowns, the patient was examined half a year later. The crowns were still intact, the soft tissues appeared healthy, and this was achieved without any loss of tooth structure. Conclusions: The material can be milled to thin layers but is still strong enough to resist cracks, which are stopped by the interpenetrating polymer within the network. Depending on the clinical situation, minimally invasive or even non-invasive restorations can be milled. Clinical Relevance: Dentistry aims at the preservation of tooth structure. Patients suffering from loss of tooth structure (dental erosion, amelogenesis imperfecta) and even young patients could benefit from minimally invasive crowns. Because its Vickers hardness lies between those of dentin and enamel, antagonistic tooth wear is very low. This may be of interest for treating patients with bruxism. PMID:24167534

  15. Sandia's research network for Supercomputing '93: A demonstration of advanced technologies for building high-performance networks

    SciTech Connect

    Gossage, S.A.; Vahle, M.O.

    1993-12-01

    Supercomputing '93, a high-performance computing and communications conference, was held November 15th through 19th, 1993 in Portland, Oregon. For the past two years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1993 conference, the results of Sandia's efforts in exploring and utilizing Asynchronous Transfer Mode (ATM) and Synchronous Optical Network (SONET) technologies were vividly demonstrated by building and operating three distinct networks. The networks encompassed a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second, an ATM network running on a SONET circuit at the Optical Carrier (OC) rate of 155.52 megabits per second, and a High Performance Parallel Interface (HIPPI) network running over a 622.08 megabits per second SONET circuit. The SMDS and ATM networks extended from Albuquerque, New Mexico to the showroom floor, while the HIPPI/SONET network extended from Beaverton, Oregon to the showroom floor. This paper documents and describes these networks.

  16. Effects of Infection by Trypanosoma cruzi and Trypanosoma rangeli on the Reproductive Performance of the Vector Rhodnius prolixus

    PubMed Central

    Fellet, Maria Raquel; Lorenzo, Marcelo Gustavo; Elliot, Simon Luke; Carrasco, David; Guarneri, Alessandra Aparecida

    2014-01-01

    The insect Rhodnius prolixus is responsible for the transmission of Trypanosoma cruzi, which is the etiological agent of Chagas disease in areas of Central and South America. Besides this, it can be infected by other trypanosomes such as Trypanosoma rangeli. The effects of these parasites on vectors are poorly understood and are often controversial so here we focussed on possible negative effects of these parasites on the reproductive performance of R. prolixus, specifically comparing infected and uninfected couples. While T. cruzi infection did not delay pre-oviposition time of infected couples at either temperature tested (25 and 30°C) it did, at 25°C, increase the e-value in the second reproductive cycle, as well as hatching rates. Meanwhile, at 30°C, T. cruzi infection decreased the e-value of insects during the first cycle and also the fertility of older insects. When couples were instead infected with T. rangeli, pre-oviposition time was delayed, while reductions in the e-value and hatching rate were observed in the second and third cycles. We conclude that both T. cruzi and T. rangeli can impair reproductive performance of R. prolixus, although for T. cruzi, this is dependent on rearing temperature and insect age. We discuss these reproductive costs in terms of potential consequences on triatomine behavior and survival. PMID:25136800

  17. Assessing Infrasound Network Performance Using the Ambient Ocean Noise

    NASA Astrophysics Data System (ADS)

    Stopa, J. E.; Cheung, K.; Garces, M. A.; Williams, B.; Le Pichon, A.

    2013-12-01

    Infrasonic microbarom signals are attributed to the nonlinear resonant interaction of ocean surface waves. IMS stations around the globe routinely detect microbaroms with a dominant frequency of ~0.2 Hz from regions of marine storminess. We have produced the predicted global microbarom source field for 2000-2010 using the spectral wave model WAVEWATCH III in hindcast mode. The wave hindcast utilizes NCEP's Climate Forecast System Reanalysis (CFSR) winds to drive the ocean waves. CFSR is a coupled global modeling system built from state-of-the-art numerical models and assimilation techniques to construct a dataset homogeneous in time and space at 0.5° resolution. The microbarom source model of Waxler and Gilbert (2005) is implemented to estimate the ocean noise created by counter-propagating waves with similar wave frequencies. Comparisons between predicted and observed global microbarom fields suggest the model results are reasonable; however, further error analysis between the predicted and observed infrasound signals is required to quantitatively assess the predictions. The 11-year hindcast suggests global sources are stable in both magnitude and spatial distribution. These statistically stable features represent the climatology of the ambient ocean noise. This supports the use of numerical forecast models to assess the IMS infrasound network performance and explosion detection capabilities in the 0.1-0.4 Hz frequency band above the ambient ocean noise. Figure caption: Theoretical/modeled microbarom source strength (colors) versus infrasonic observations from the IMS network (directional histograms); the contours represent the maximum intersections from the recorded acoustic signals for a large extra-tropical event on December 7, 2009.

  18. Is functional integration of resting state brain networks an unspecific biomarker for working memory performance?

    PubMed

    Alavash, Mohsen; Doebler, Philipp; Holling, Heinz; Thiel, Christiane M; Gießing, Carsten

    2015-03-01

    Is there one optimal topology of functional brain networks at rest from which our cognitive performance would profit? Previous studies suggest that functional integration of resting state brain networks is an important biomarker for cognitive performance. However, it is still unknown whether higher network integration is an unspecific predictor for good cognitive performance or, alternatively, whether specific network organization during rest predicts only specific cognitive abilities. Here, we investigated the relationship between network integration at rest and cognitive performance using two tasks that measured different aspects of working memory; one task assessed visual-spatial and the other numerical working memory. Network clustering, modularity and efficiency were computed to capture network integration on different levels of network organization, and to statistically compare their correlations with the performance in each working memory test. The results revealed that each working memory aspect profits from a different resting state topology, and the tests showed significantly different correlations with each of the measures of network integration. While higher global network integration and modularity predicted significantly better performance in visual-spatial working memory, both measures showed no significant correlation with numerical working memory performance. In contrast, numerical working memory was superior in subjects with highly clustered brain networks, predominantly in the intraparietal sulcus, a core brain region of the working memory network. Our findings suggest that a specific balance between local and global functional integration of resting state brain networks facilitates special aspects of cognitive performance. In the context of working memory, while visual-spatial performance is facilitated by globally integrated functional resting state brain networks, numerical working memory profits from increased capacities for local processing.
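The network measures compared in this study (clustering, modularity, efficiency) are standard graph-theoretic quantities. As one example, global efficiency, the average inverse shortest-path length over all node pairs, can be computed with a plain BFS; the function name and graph encoding below are illustrative:

```python
from collections import deque

def global_efficiency(adj):
    """Global efficiency of an unweighted graph: the average of 1/d(u, v)
    over all ordered node pairs, where d is the shortest-path length
    found by BFS (disconnected pairs contribute 0).
    `adj` maps each node to an iterable of its neighbors.
    """
    nodes = list(adj)
    total, pairs = 0.0, 0
    for src in nodes:
        # BFS shortest-path lengths from src
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for dst in nodes:
            if dst != src:
                pairs += 1
                total += 1.0 / dist[dst] if dst in dist else 0.0
    return total / pairs if pairs else 0.0

# 3-node path a-b-c: d(a,b)=d(b,c)=1, d(a,c)=2 -> (1 + 1 + 0.5)/3 ≈ 0.833
path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(round(global_efficiency(path), 3))  # 0.833
```

Higher values indicate a more globally integrated network; clustering coefficients capture the opposite, locally dense, end of the spectrum discussed in the abstract.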

  19. Performance evaluation of NASA/KSC CAD/CAE graphics local area network

    NASA Technical Reports Server (NTRS)

    Zobrist, George

    1988-01-01

    The objective of this study was the performance evaluation of the existing CAD/CAE graphics network at NASA/KSC. This evaluation will also aid in projecting planned expansions, such as the Space Station project, on the existing CAD/CAE network. The objectives were achieved by collecting packet traffic on the various integrated sub-networks. This included items such as the total number of packets on the various subnetworks, source/destination of packets, percent utilization of network capacity, peak traffic rates, and packet size distribution. The NASA/KSC LAN was stressed to determine the usable bandwidth of the Ethernet network, and an average design-station workload was used to project the increased traffic on the existing network and the planned T1 link. This performance evaluation of the network will aid the NASA/KSC network managers in planning for the integration of future workload requirements into the existing network.

  20. Disentangling vector-borne transmission networks: a universal DNA barcoding method to identify vertebrate hosts from arthropod bloodmeals.

    PubMed

    Alcaide, Miguel; Rico, Ciro; Ruiz, Santiago; Soriguer, Ramón; Muñoz, Joaquín; Figuerola, Jordi

    2009-01-01

    Emerging infectious diseases represent a challenge for global economies and public health. About one fourth of recent pandemics have originated from the spread of vector-borne pathogens. In this sense, the advent of modern molecular techniques has enhanced our capabilities to understand vector-host interactions and disease ecology. However, host identification protocols have profited little from international DNA barcoding initiatives and/or have focused exclusively on a limited array of vector species. Therefore, ascertaining the potential afforded by DNA barcoding tools in other vector-host systems of human and veterinary importance would represent a major advance in tracking pathogen life cycles and hosts. Here, we show the applicability of a novel and efficient molecular method for the identification of the vertebrate host's DNA contained in the midgut of blood-feeding arthropods. To this end, we designed a eukaryote-universal forward primer and a vertebrate-specific reverse primer to selectively amplify 758 base pairs (bp) of the vertebrate mitochondrial Cytochrome c Oxidase Subunit I (COI) gene. Our method was validated using both extensive sequence surveys from the public domain and Polymerase Chain Reaction (PCR) experiments carried out on specimens from different classes of vertebrates (Mammalia, Aves, Reptilia and Amphibia) and invertebrate ectoparasites (Arachnida and Insecta). The analysis of mosquito, culicoid, phlebotomine, sucking bug, and tick bloodmeals revealed up to 40 vertebrate hosts, including 23 avian, 16 mammalian and one reptilian species. Importantly, the inspection and analysis of direct sequencing electropherograms also assisted the resolving of mixed bloodmeals. We therefore provide a universal and high-throughput diagnostic tool for the study of the ecology of haematophagous invertebrates in relation to their vertebrate hosts. Such information is crucial to support the efficient management of initiatives aimed at reducing

  1. Design and performance of a robust WDM network

    NASA Astrophysics Data System (ADS)

    Vaishnav, Chintan H.; Nieberger, Matt; Jayasumana, Anura P.; Sauer, Jon R.

    1996-11-01

    The increasing number of bandwidth-intensive applications demands concurrency among multiple user transmissions, using techniques such as time division multiplexing (TDM) or wavelength division multiplexing (WDM), to achieve higher aggregate bandwidth. Commercial exploitation of WDM networks, despite their immense potential and promise, has been held back by their vulnerability to the wavelength instability of transmitter sources and by the inordinate cost of stabilizing them. The robust WDM network uses an intelligent multichannel wavelength-tracking receiver that dynamically accommodates the manufacturing and operating imperfections of the transmitter sources, making robust WDM a lucrative option for cost-sensitive computer networking markets. In this paper we present the token-based medium access control protocol and network node architecture designed for the demonstration testbed of the robust WDM network. The behavior of the average waiting time for a typical connection request on the network is presented using computer simulation.

  2. Performance of the SwissQuantum network over 21 months

    NASA Astrophysics Data System (ADS)

    Stucki, Damien; Legré, Matthieu; Monat, Laurent; Robyr, Samuel; Trinkler, Patrick; Ribordy, Grégoire; Thew, Rob; Walenta, Nino; Gisin, Nicolas; Buntschu, François; Perroud, Didier; Litzistorf, Gerald; Tavares, Jose; Ventura, Stefano; Junod, Pascal; Voirol, Raphael; Monbaron, Patrick

    2011-11-01

    In this paper, we present the architecture and results of the SwissQuantum quantum key distribution (QKD) network. This three-node triangular quantum network ran from March 2009 to January 2011 in the Geneva metropolitan area. The three trusted nodes were located at the University of Geneva (Unige), the CERN and the University of Applied Sciences Western Switzerland in Geneva (hepia Geneva). This quantum network was deployed to prove the reliability of QKD in telecommunication networks over a long period. To facilitate the integration of QKD into telecommunication networks, the quantum network was composed of three layers: a quantum layer, a key management layer, and an application layer. The keys are distributed in the first layer, handled in the second layer, and used in the third layer.

  3. Models of logistic regression analysis, support vector machine, and back-propagation neural network based on serum tumor markers in colorectal cancer diagnosis.

    PubMed

    Zhang, B; Liang, X L; Gao, H Y; Ye, L S; Wang, Y G

    2016-05-13

    We evaluated the application of three machine learning algorithms, including logistic regression, support vector machine, and back-propagation neural network, for diagnosing congenital heart disease and colorectal cancer. By inspecting related serum tumor marker levels in colorectal cancer patients and healthy subjects, early diagnosis models for colorectal cancer were built using the three machine learning algorithms to assess their corresponding diagnostic values. Except for serum alpha-fetoprotein, the levels of 11 other serum markers of patients in the colorectal cancer group were higher than those in the benign colorectal disease group (P < 0.05). The results of logistic regression analysis indicated that individual detection of serum carcinoembryonic antigen, CA199, CA242, CA125, and CA153, as well as their combined detection, was effective for diagnosing colorectal cancer. Combined detection had a better diagnostic effect, with a sensitivity of 94.2% and a specificity of 97.7%. Combining serum carcinoembryonic antigen, CA199, CA242, CA125, and CA153, support vector machine and back-propagation neural network diagnosis models were built with diagnostic accuracies of 82 and 75%, sensitivities of 85 and 80%, and specificities of 80 and 70%, respectively. Colorectal cancer diagnosis models based on the three machine learning algorithms showed high diagnostic value and can help obtain evidence for the early diagnosis of colorectal cancer.
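The modeling approach can be sketched on synthetic data. The sketch below trains a plain logistic regression by batch gradient descent on made-up "marker level" features and reports sensitivity and specificity; the data, learning rate, and step count are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, illustrative data: 5 "marker levels" per subject; cancer
# subjects (y=1) drawn with elevated means, healthy subjects (y=0) lower.
X_pos = rng.normal(3.0, 1.0, size=(100, 5))
X_neg = rng.normal(0.0, 1.0, size=(100, 5))
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 100 + [0] * 100)

def train_logistic(X, y, lr=0.1, steps=500):
    """Plain logistic regression fit by batch gradient descent."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))          # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)          # cross-entropy gradient
    return w

def predict(X, w):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5).astype(int)

w = train_logistic(X, y)
pred = predict(X, w)
sensitivity = (pred[y == 1] == 1).mean()   # true positive rate
specificity = (pred[y == 0] == 0).mean()   # true negative rate
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```

Swapping in an SVM or a back-propagation network, as the paper does, changes only the model-fitting step; the sensitivity/specificity evaluation is identical.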

  4. First manufactured diamond AGPM vector vortex for the L- and N-bands: metrology and expected performances

    NASA Astrophysics Data System (ADS)

    Delacroix, C.; Forsberg, P.; Karlsson, M.; Mawet, D.; Lenaerts, C.; Habraken, S.; Absil, O.; Hanot, C.; Surdej, J.

    2010-10-01

    The AGPM (Annular Groove Phase Mask, Mawet et al. 2005) is an optical vectorial vortex coronagraph (or vector vortex) synthesized by a circular subwavelength grating, that is, a grating with a period smaller than λ/n (λ being the observed wavelength and n the refractive index of the grating substrate). Since it is a phase mask, it makes it possible to reach a high contrast with a small inner working angle. Moreover, its subwavelength structure provides good achromatization over wide spectral bands. Recently, we manufactured and measured our first N-band prototypes, which allowed us to validate the reproducibility of the microfabrication process. Here, we present newly produced mid-IR diamond AGPMs in the N-band (˜10 μm) and in the much-requested L-band (˜3.5 μm). We first give an extrapolation of the expected coronagraphic performance. We then present the manufacturing and measurement results, using diamond-optimized microfabrication techniques such as nano-imprint lithography (NIL) and reactive ion etching (RIE). Finally, the subwavelength grating profile metrology combines surface metrology (scanning electron microscopy, atomic force microscopy, white light interferometry) with diffractometry on an optical polarimetric bench and cross-correlation with theoretical simulations using rigorous coupled wave analysis (RCWA).

  5. Static internal performance of a thrust vectoring and reversing two-dimensional convergent-divergent nozzle with an aft flap

    NASA Technical Reports Server (NTRS)

    Re, R. J.; Leavitt, L. D.

    1986-01-01

    The static internal performance of a multifunction nozzle having some of the geometric characteristics of both two-dimensional convergent-divergent and single-expansion-ramp nozzles has been investigated in the static-test facility of the Langley 16-Foot Transonic Tunnel. The internal expansion portion of the nozzle consisted of two symmetrical flat surfaces of equal length, and the external expansion portion consisted of a single aft flap. The aft flap could be varied in angle independently of the upper internal expansion surface to which it was attached. The effects of internal expansion ratio, nozzle thrust-vector angle (-30 deg to 30 deg), aft flap shape, aft flap angle, and sidewall containment were determined for dry and afterburning power settings. In addition, a partial afterburning power setting nozzle, a fully deployed thrust reverser, and four vertical takeoff or landing nozzle configurations were investigated. Nozzle pressure ratio was varied up to 10 for the dry power nozzles and 7 for the afterburning power nozzles.

  6. Index Sets and Vectorization

    SciTech Connect

    Keasler, J A

    2012-03-27

    Vectorization is data parallelism (SIMD, SIMT, etc.): an extension of the ISA enabling the same instruction to be performed on multiple data items simultaneously. Most CPUs support vectorization in some form. Vectorization is difficult to enable but can yield large efficiency gains. Extra programmer effort is required because: (1) not all algorithms can be vectorized (a regular algorithm structure and fine-grain parallelism must be used); (2) most CPUs have data alignment restrictions for load/store operations (obey them or risk incorrect code); (3) special directives are often needed to enable vectorization; and (4) vector instructions are architecture-specific. Vectorization is the best way to optimize for power and performance due to reduced clock cycles. When data is organized properly, a vector load instruction (e.g., movaps) can replace several 'normal' load instructions (e.g., movsd). Vector operations can potentially have a smaller footprint in the instruction cache when fewer instructions need to be executed. Hybrid index sets insulate users from architecture-specific details. We have applied hybrid index sets to achieve optimal vectorization, and we can extend this concept to handle other programming models.
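The abstract concerns CPU-level SIMD; as a language-level analogy (not the hybrid index sets of the report), a NumPy sketch shows the same idea: one whole-array expression replaces an element-wise loop, much as one vector instruction replaces several scalar ones:

```python
import numpy as np

def saxpy_loop(a, x, y):
    """Scalar loop: one multiply-add per iteration (analogous to movsd)."""
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_vectorized(a, x, y):
    """Whole-array expression: the runtime applies the operation across
    contiguous data, much as a SIMD unit applies one instruction to
    multiple lanes (analogous to movaps plus packed arithmetic)."""
    return a * x + y

x = np.arange(4, dtype=np.float64)
y = np.ones(4)
print(saxpy_loop(2.0, x, y))        # [1. 3. 5. 7.]
print(saxpy_vectorized(2.0, x, y))  # [1. 3. 5. 7.]
```

Note that the vectorized form also depends on the data being laid out contiguously, mirroring the abstract's point that alignment and regular structure are prerequisites for vectorization.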

  7. OPTIMAL CONFIGURATION OF A COMMAND AND CONTROL NETWORK: BALANCING PERFORMANCE AND RECONFIGURATION CONSTRAINTS

    SciTech Connect

    L. DOWELL

    1999-07-01

    The optimization of the configuration of communications and control networks is important for assuring the reliability and performance of the networks. This paper presents techniques for determining the optimal configuration for such a network in the presence of communication and connectivity constraints.

  8. A Bayesian Network Approach to Modeling Learning Progressions and Task Performance. CRESST Report 776

    ERIC Educational Resources Information Center

    West, Patti; Rutstein, Daisy Wise; Mislevy, Robert J.; Liu, Junhui; Choi, Younyoung; Levy, Roy; Crawford, Aaron; DiCerbo, Kristen E.; Chappel, Kristina; Behrens, John T.

    2010-01-01

    A major issue in the study of learning progressions (LPs) is linking student performance on assessment tasks to the progressions. This report describes the challenges faced in making this linkage using Bayesian networks to model LPs in the field of computer networking. The ideas are illustrated with exemplar Bayesian networks built on Cisco…

  9. Evaluation of Techniques to Detect Significant Network Performance Problems using End-to-End Active Network Measurements

    SciTech Connect

    Cottrell, R.Les; Logg, Connie; Chhaparia, Mahesh; Grigoriev, Maxim; Haro, Felipe; Nazir, Fawad; Sandford, Mark

    2006-01-25

    End-to-end fault and performance problem detection in wide-area production networks is becoming increasingly hard as the complexity of the paths, the diversity of the performance, and the dependency on the network increase. Several monitoring infrastructures have been built to monitor different network metrics and collect monitoring information from thousands of hosts around the globe. Typically there are hundreds to thousands of time-series plots of network metrics which need to be looked at to identify network performance problems or anomalous variations in the traffic. Furthermore, most commercial products rely on a comparison with user-configured static thresholds and often require access to SNMP-MIB information, to which a typical end-user does not usually have access. In this paper we propose new techniques to detect network performance problems proactively in close to real time, without relying on static thresholds or SNMP-MIB information. We describe and compare several different algorithms that we have implemented to detect persistent network problems using anomalous-variation analysis of real end-to-end Internet performance measurements. We also provide methods and/or guidance for how to set the user-settable parameters. The measurements are based on active probes running on 40 production network paths with bottlenecks varying from 0.5 Mbit/s to 1000 Mbit/s. For well-behaved data (no missed measurements and no very large outliers) with small seasonal changes, most algorithms identify similar events. We compare the algorithms' robustness with respect to false positives and missed events, especially when there are large seasonal effects in the data. Our proposed techniques cover a wide variety of network paths and traffic patterns. We also discuss the applicability of the algorithms in terms of their intuitiveness, their speed of execution as implemented, and areas of applicability.
Our encouraging results compare and evaluate the accuracy of our detection
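
As a hedged sketch only (the paper's actual algorithms are not reproduced here), one simple threshold-free detector of this general kind keeps an exponentially weighted moving mean and variance of a metric and flags deviations beyond k adaptive standard deviations; `alpha`, `k` and `warmup` stand in for the user-settable parameters the authors discuss:

```python
def ewma_anomalies(series, alpha=0.1, k=3.0, warmup=10):
    """Flag points deviating by more than k adaptive standard deviations
    from an exponentially weighted moving baseline -- no static threshold."""
    mean, var, flags = series[0], 0.0, []
    for i, x in enumerate(series):
        resid = x - mean
        if i >= warmup and var > 0 and abs(resid) > k * var ** 0.5:
            flags.append(i)               # anomaly: freeze the baseline
        else:
            mean += alpha * resid         # update baseline with normal points
            var = (1 - alpha) * (var + alpha * resid * resid)
    return flags

# throughput oscillating around 101 Mbit/s, then a persistent drop
data = [100.0, 102.0] * 15 + [40.0] * 5
print(ewma_anomalies(data))   # [30, 31, 32, 33, 34]
```

Because the baseline adapts to each path's own variability, the same parameters can cover paths with very different bottleneck capacities, which is the property the static-threshold products lack.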

  10. Molten carbonate fuel cell networks: Principles, analysis and performance

    SciTech Connect

    Wimer, J.G.; Williams, M.C.; Archer, D.H.; Osterle, J.F.

    1993-09-01

    Key to the concept of networking is the arrangement of multiple fuel cell stacks with regard to the flow of reactant streams. In a fuel cell network, reactant streams are ducted so that they are fed and recycled through stacks in series. Stacks networked in series more closely approach a reversible process, which increases efficiency. Higher total reactant utilizations can be achieved by stacks networked in series. Placing stacks in series also allows reactant streams to be conditioned at different stages of utilization. Between stacks, heat can be consumed or removed (methane injection, heat exchange), which improves thermal balance. The composition of streams can be adjusted between stacks by mixing exhaust streams or by injecting reactant streams. Computer simulations demonstrated that a combined-cycle system with MCFC stacks networked in series is more efficient than an identical system with MCFC stacks in parallel.

  11. Support vector machine to predict diesel engine performance and emission parameters fueled with nano-particles additive to diesel fuel

    NASA Astrophysics Data System (ADS)

    Ghanbari, M.; Najafi, G.; Ghobadian, B.; Mamat, R.; Noor, M. M.; Moosavian, A.

    2015-12-01

    This paper studies the use of an adaptive Support Vector Machine (SVM) to predict the performance parameters and exhaust emissions of a diesel engine operating on nano-diesel blended fuels. In order to predict the engine parameters, the whole experimental data set was randomly divided into training and testing data. For SVM modelling, different values of the radial basis function (RBF) kernel width and the penalty parameter (C) were considered and the optimum values were then found. The results demonstrate that SVM is capable of predicting the diesel engine performance and emissions. In the experimental step, carbon nanotubes (CNT) (40, 80 and 120 ppm) and nano silver particles (40, 80 and 120 ppm) were prepared and added as additives to the diesel fuel. A six-cylinder, four-stroke diesel engine was fuelled with these new blended fuels and operated at different engine speeds. Experimental test results indicated that adding nano particles to diesel fuel increased the engine's power and torque output. For nano-diesel, the brake specific fuel consumption (bsfc) was lower than that of the neat diesel fuel. The results showed that as the nano particle concentration in the diesel fuel increased (from 40 ppm to 120 ppm), CO2 emission increased. CO emission with nano-particle fuels was significantly lower than with pure diesel fuel. UHC emission decreased with the silver nano-diesel blended fuel but increased with the fuels containing CNT nano particles. The trend of NOx emission was the inverse of the UHC emission: with the addition of nano particles to the blended fuels, NOx increased compared to the neat diesel fuel. The tests revealed that silver and CNT nano particles can be used as additives in diesel fuel to improve complete combustion of the fuel and reduce the exhaust emissions significantly.
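
Fitting an SVM proper requires a QP solver; as a rough stand-in that uses the same RBF kernel machinery, the kernel ridge regressor below illustrates how the kernel width (`gamma`, analogous to the RBF width above) and the regularization strength (`lam`, loosely playing the role of 1/C) enter the model. The engine-speed/dose/torque numbers are invented for illustration:

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    # squared Euclidean distances between all row pairs
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_krr(X, y, gamma, lam):
    """Kernel ridge regression: alpha = (K + lam*I)^-1 y."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ alpha

# invented data: [engine speed (rpm), additive dose (ppm)] -> torque (N*m)
X = np.array([[1500, 40], [1500, 120], [2500, 40], [2500, 120]], float)
X = (X - X.mean(0)) / X.std(0)     # scale features before kernelizing
y = np.array([210.0, 225.0, 240.0, 255.0])
predict = fit_krr(X, y, gamma=0.5, lam=1e-6)
```

With `lam` very small the model nearly interpolates the training points; in practice `gamma` and `lam` would be tuned on held-out data, just as the RBF width and C were tuned in the paper.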

  12. The Current State of Human Performance Technology: A Citation Network Analysis of "Performance Improvement Quarterly," 1988-2010

    ERIC Educational Resources Information Center

    Cho, Yonjoo; Jo, Sung Jun; Park, Sunyoung; Kang, Ingu; Chen, Zengguan

    2011-01-01

    This study conducted a citation network analysis (CNA) of human performance technology (HPT) to examine the current state of the field. Previous reviews of the field have used traditional research methods, such as content analysis, survey, Delphi, and citation analysis. The distinctive features of CNA come from using a social network analysis…

  13. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    PubMed Central

    2011-01-01

    Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but has presently a limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, like Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven non-parametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (Median (Me) = 0.76) and an area under the ROC curve of Me = 0.90. However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with a high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with an acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64).
The remaining classifiers showed
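
For readers who want to reproduce the style of evaluation (not the study's data, which is not available here), the sketch below shows the 5-fold splitting and the sensitivity/specificity bookkeeping behind the reported medians; the contiguous fold scheme is deliberately simple:

```python
def kfold_indices(n, k=5):
    """Split indices 0..n-1 into k contiguous folds
    (shuffle the samples beforehand in real use)."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

print(kfold_indices(12, 5))   # fold sizes 3, 3, 2, 2, 2
print(sensitivity_specificity([1, 1, 0, 0], [1, 0, 0, 1]))   # (0.5, 0.5)
```

Each classifier is trained on four folds and scored on the held-out fold; the distribution of the per-fold scores is what the Friedman test then compares.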

  14. The Global Seismographic Network (GSN): Challenges and Methods for Maintaining High Quality Network Performance

    NASA Astrophysics Data System (ADS)

    Hafner, Katrin; Davis, Peter; Wilson, David; Sumy, Danielle; Woodward, Bob

    2016-04-01

    The Global Seismographic Network (GSN) is a 152-station, globally distributed, permanent network of state-of-the-art seismological and geophysical sensors. The GSN has been operating for over 20 years via an ongoing successful partnership between IRIS, the USGS, the University of California at San Diego, NSF and numerous host institutions worldwide. The central design goal of the GSN may be summarized as "to record with full fidelity and bandwidth all seismic signals above the Earth noise, accompanied by some efforts to reduce Earth noise by deployment strategies". While many of the technical design goals have been met, we continue to strive for higher data quality with a combination of new sensors and improved installation techniques designed to achieve the lowest noise possible under existing site conditions. Data from the GSN are used not only for research, but on a daily basis as part of the operational missions of the USGS NEIC, NOAA tsunami warning centers, the Comprehensive Nuclear-Test-Ban Treaty Organization as well as other organizations. In the recent period of very tight funding budgets, the primary challenges for the GSN include maintaining these operational capabilities while simultaneously developing and replacing the primary sensors, maintaining high quality data and repairing station infrastructure. Aging of GSN equipment and station infrastructure has resulted in renewed emphasis on developing, evaluating and implementing quality control tools such as MUSTANG and DQA to maintain the high data quality from the GSN stations. These tools allow the network operators to routinely monitor and analyze waveform data to detect and track problems and develop action plans as issues are found. We will present summary data quality metrics for the GSN as obtained via these quality assurance tools. In recent years, the GSN has standardized dataloggers to the Quanterra Q330HR data acquisition system at all but three stations resulting in significantly improved

  15. Performance evaluation of a burst-mode EDFA in an optical packet and circuit integrated network.

    PubMed

    Shiraiwa, Masaki; Awaji, Yoshinari; Furukawa, Hideaki; Shinada, Satoshi; Puttnam, Benjamin J; Wada, Naoya

    2013-12-30

    We experimentally investigate the performance of a burst-mode EDFA in an optical packet and circuit integrated system. In such networks, packets and light paths can be dynamically assigned to the same fibers, resulting in gain transients in EDFAs throughout the network that can limit network performance. Here, we compare the performance of a 'burst-mode' EDFA (BM-EDFA), employing transient suppression techniques and optical feedback, with conventional EDFAs, and those using automatic gain control and previous BM-EDFA implementations. We first measure gain transients and other impairments in a simplified set-up before making frame error-rate measurements in a network demonstration.

  16. Vector computer implementation of power flow outage studies

    SciTech Connect

    Granelli, G.P.; Montagna, M.; Pasini, G.L. ); Marannino, P. )

    1992-05-01

    This paper presents an application of vector and parallel processing to power flow outage studies on large-scale networks. Standard sparsity programming is not well suited to the capabilities of vector and parallel computers because of the extremely short vectors processed in load flow studies. In order to improve computation efficiency, the operations required to perform both forward/backward solution and power residual calculation are gathered in the form of long FORTRAN DO loops. Two algorithms are proposed and compared with the results of a program written for scalar processing. Simulations for the outage studies on IEEE standard networks and some different configurations of the Italian and European (UCPTE) EHV systems are run on a CRAY Y-MP8/432 vector computer (and partially on an IBM 3090/200S VF). The multitasking facility of the CRAY computer is also exploited in order to shorten the wall clock time required by a complete outage simulation.

  17. Performance evaluation of the IBM RISC (reduced instruction set computer) System/6000: Comparison of an optimized scalar processor with two vector processors

    SciTech Connect

    Simmons, M.L.; Wasserman, H.J.

    1990-01-01

    RISC System/6000 computers are workstations with a reduced instruction set processor recently developed by IBM. This report details the performance of the 6000-series computers as measured using a set of portable, standard-Fortran, computationally-intensive benchmark codes that represent the scientific workload at the Los Alamos National Laboratory. On all but three of our benchmark codes, the 40-ns RISC System was able to perform as well as a single Convex C-240 processor, a vector processor that also has a 40-ns clock cycle, and on these same codes, it performed as well as the FPS-500, a vector processor with a 30-ns clock cycle. 17 refs., 2 figs., 6 tabs.

  18. The performance evaluation of a new neural network based traffic management scheme for a satellite communication network

    NASA Technical Reports Server (NTRS)

    Ansari, Nirwan; Liu, Dequan

    1991-01-01

    A neural-network-based traffic management scheme for a satellite communication network is described. The scheme consists of two levels of management. The front end of the scheme is a derivation of Kohonen's self-organization model to configure maps for the satellite communication network dynamically. The model consists of three stages. The first stage is the pattern recognition task, in which an exemplar map that best meets the current network requirements is selected. The second stage is the analysis of the discrepancy between the chosen exemplar map and the state of the network, and the adaptive modification of the chosen exemplar map to conform closely to the network requirement (input data pattern) by means of Kohonen's self-organization. On the basis of certain performance criteria, the third stage decides whether a new map is generated to replace the originally chosen map. A state-dependent routing algorithm, which assigns each incoming call to a proper path, is used to make the network more efficient and to lower the call block rate. Simulation results demonstrate that the scheme, which combines self-organization and the state-dependent routing mechanism, provides better performance in terms of call block rate than schemes that have only the self-organization mechanism or only the routing mechanism.
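
A minimal sketch of the Kohonen self-organization step such a scheme builds on (a toy 1-D map on scalar inputs; all parameters and data here are illustrative, not the paper's):

```python
import math, random

def som_step(weights, x, lr, radius):
    """One Kohonen update on a 1-D map: pull the best-matching unit
    (and, more weakly, its map neighbours) toward input x."""
    bmu = min(range(len(weights)), key=lambda i: abs(weights[i] - x))
    for i in range(len(weights)):
        h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
        weights[i] += lr * h * (x - weights[i])
    return bmu

rng = random.Random(0)
w = [rng.random() for _ in range(4)]
for _ in range(50):                      # repeated presentation of inputs
    for x in (0.0, 0.1, 0.9, 1.0):      # two clusters of "requirements"
        som_step(w, x, lr=0.2, radius=0.5)
# each update is a convex combination, so weights stay inside [0, 1]
assert all(0.0 <= wi <= 1.0 for wi in w)
```

In the scheme above, the inputs would be network-state patterns rather than scalars, and the trained units correspond to the candidate exemplar maps.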

  19. Performance Evolution of IEEE 802.11b Wireless Local Area Network

    NASA Astrophysics Data System (ADS)

    Malik, Deepak; Singhal, Ankur

    2011-12-01

    A wireless network can be employed to connect a wired network to wireless clients. Wireless local area networks (WLANs) are more bandwidth-limited than wired networks because they rely on an inexpensive but error-prone physical medium (air). Hence it is important to evaluate their performance. This paper presents a study of the IEEE 802.11b wireless LAN (WLAN). The performance evaluation is presented via a series of tests with different parameters such as data rate, number of nodes and physical characteristics. The quality-of-service parameters chosen are throughput, media access delay and dropped data packets. The simulation results show that an IEEE 802.11b WLAN can support up to 60 clients with modest throughput. Finally, the results are compared to evaluate the performance of wireless local area networks.

  20. DISCRETE EVENT SIMULATION OF OPTICAL SWITCH MATRIX PERFORMANCE IN COMPUTER NETWORKS

    SciTech Connect

    Imam, Neena; Poole, Stephen W

    2013-01-01

    In this paper, we present an application of a Discrete Event Simulator (DES) for performance modeling of optical switching devices in computer networks. Network simulators are valuable tools in situations where one cannot investigate the system directly. This situation may arise if the system under study does not exist yet or the cost of studying the system directly is prohibitive. Most available network simulators are based on the paradigm of discrete-event-based simulation. As computer networks become increasingly larger and more complex, sophisticated DES tool chains have become available for both commercial and academic research. Some well-known simulators are NS2, NS3, OPNET, and OMNEST. For this research, we have applied OMNEST for the purpose of simulating the multi-wavelength performance of optical switch matrices in computer interconnection networks. Our results suggest that the application of DES to computer interconnection networks provides valuable insight into device performance and aids in topology and system optimization.
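
The discrete-event paradigm itself fits in a few lines: a simulator is essentially a clock plus a time-ordered event queue. A toy sketch (not OMNEST, whose API is entirely different; the packet scenario is invented):

```python
import heapq

class Sim:
    """Minimal discrete-event simulator: a clock plus a time-ordered
    event queue, the core paradigm behind tools such as OMNEST."""
    def __init__(self):
        self.now, self._queue, self._seq = 0.0, [], 0
    def schedule(self, delay, action):
        self._seq += 1     # tie-breaker so actions are never compared
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
    def run(self):
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action()

log = []
sim = Sim()

def arrive():
    log.append(('arrive', sim.now))
    sim.schedule(0.5, lambda: log.append(('switched', sim.now)))

# a packet arrives at t=1.0 and clears the switch matrix 0.5 later
sim.schedule(1.0, arrive)
sim.run()
print(log)   # [('arrive', 1.0), ('switched', 1.5)]
```

Handlers schedule further events, so the clock jumps straight from one event to the next instead of stepping through idle time; this is what makes DES efficient for large networks.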

  1. International network for capacity building for the control of emerging viral vector-borne zoonotic diseases: ARBO-ZOONET.

    PubMed

    Ahmed, J; Bouloy, M; Ergonul, O; Fooks, Ar; Paweska, J; Chevalier, V; Drosten, C; Moormann, R; Tordo, N; Vatansever, Z; Calistri, P; Estrada-Pena, A; Mirazimi, A; Unger, H; Yin, H; Seitzer, U

    2009-03-26

    Arboviruses are arthropod-borne viruses, which include West Nile fever virus (WNFV), a mosquito-borne virus, Rift Valley fever virus (RVFV), a mosquito-borne virus, and Crimean-Congo haemorrhagic fever virus (CCHFV), a tick-borne virus. These arthropod-borne viruses can cause disease in different domestic and wild animals and in humans, posing a threat to public health because of their epidemic and zoonotic potential. In recent decades, the geographical distribution of these diseases has expanded. Outbreaks of WNF have already occurred in Europe, especially in the Mediterranean basin. Moreover, CCHF is endemic in many European countries and serious outbreaks have occurred, particularly in the Balkans, Turkey and Southern Federal Districts of Russia. In 2000, RVF was reported for the first time outside the African continent, with cases being confirmed in Saudi Arabia and Yemen. This spread was probably caused by ruminant trade and highlights that there is a threat of expansion of the virus into other parts of Asia and Europe. In the light of global warming and globalisation of trade and travel, public interest in emerging zoonotic diseases has increased. This is especially evident regarding the geographical spread of vector-borne diseases. A multi-disciplinary approach is now imperative, and groups need to collaborate in an integrated manner that includes vector control, vaccination programmes, improved therapy strategies, diagnostic tools and surveillance, public awareness, capacity building and improvement of infrastructure in endemic regions. PMID:19341603

  3. Support vector machine regression (SVR/LS-SVM)--an alternative to neural networks (ANN) for analytical chemistry? Comparison of nonlinear methods on near infrared (NIR) spectroscopy data.

    PubMed

    Balabin, Roman M; Lomakina, Ekaterina I

    2011-04-21

    In this study, we make a general comparison of the accuracy and robustness of five multivariate calibration models: partial least squares (PLS) regression or projection to latent structures, polynomial partial least squares (Poly-PLS) regression, artificial neural networks (ANNs), and two novel techniques based on support vector machines (SVMs) for multivariate data analysis: support vector regression (SVR) and least-squares support vector machines (LS-SVMs). The comparison is based on fourteen (14) different datasets: seven sets of gasoline data (density, benzene content, and fractional composition/boiling points), two sets of ethanol gasoline fuel data (density and ethanol content), one set of diesel fuel data (total sulfur content), three sets of petroleum (crude oil) macromolecules data (weight percentages of asphaltenes, resins, and paraffins), and one set of petroleum resins data (resins content). Vibrational (near-infrared, NIR) spectroscopic data are used to predict the properties and quality coefficients of gasoline, biofuel/biodiesel, diesel fuel, and other samples of interest. The four systems presented here range greatly in composition, properties, strength of intermolecular interactions (e.g., van der Waals forces, H-bonds), colloid structure, and phase behavior. Due to the high diversity of chemical systems studied, general conclusions about SVM regression methods can be made. We try to answer the following question: to what extent can SVM-based techniques replace ANN-based approaches in real-world (industrial/scientific) applications? The results show that both SVR and LS-SVM methods are comparable to ANNs in accuracy. Due to the much higher robustness of the former, the SVM-based approaches are recommended for practical (industrial) application. This has been shown to be especially true for complicated, highly nonlinear objects.

  5. Impact of Network Activity Levels on the Performance of Passive Network Service Dependency Discovery

    SciTech Connect

    Carroll, Thomas E.; Chikkagoudar, Satish; Arthur-Durett, Kristine M.

    2015-11-02

    Network services often do not operate alone but instead depend on other services distributed throughout a network to function correctly. If a service fails, is disrupted, or is degraded, it is likely to impair other services. The web of dependencies can be surprisingly complex---especially within a large enterprise network---and evolves with time. Acquiring, maintaining, and understanding dependency knowledge is critical for many network management and cyber defense activities. While automation can improve situation awareness for network operators and cyber practitioners, poor detection accuracy reduces their confidence and can complicate their roles. In this paper we rigorously study the effects of network activity levels on the detection accuracy of passive network-based service dependency discovery methods. The accuracy of all but one method was inversely proportional to network activity levels. Our proposed cross-correlation method was particularly robust to the influence of network activity. The proposed experimental treatment will further advance a more scientific evaluation of methods and provide the ability to determine their operational boundaries.
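
The paper's cross-correlation method is not specified in detail here; as a hedged illustration of the general idea, the sketch below finds the lag at which two services' activity series are most correlated, which hints that one service's activity drives the other's (the series and the DNS/web scenario are invented):

```python
def best_lag(a, b, max_lag):
    """Lag of series b relative to a with maximal normalized
    cross-correlation (Pearson r over the overlapping window)."""
    def xcorr(lag):
        pairs = [(a[i], b[i + lag]) for i in range(len(a))
                 if 0 <= i + lag < len(b)]
        n = len(pairs)
        ma = sum(x for x, _ in pairs) / n
        mb = sum(y for _, y in pairs) / n
        num = sum((x - ma) * (y - mb) for x, y in pairs)
        da = sum((x - ma) ** 2 for x, _ in pairs) ** 0.5
        db = sum((y - mb) ** 2 for _, y in pairs) ** 0.5
        return num / (da * db) if da and db else 0.0
    return max(range(-max_lag, max_lag + 1), key=xcorr)

# hypothetical per-interval activity: DNS lookups (a) precede the web
# server's outbound requests (b) by two intervals
a = [1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0]
b = [0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0]
print(best_lag(a, b, 3))   # 2
```

A consistent positive lag across many observation windows is the kind of evidence a passive dependency-discovery method accumulates; unrelated background activity adds spurious correlation, which is why overall network activity levels matter.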

  6. Distinguishing Parkinson's disease from atypical parkinsonian syndromes using PET data and a computer system based on support vector machines and Bayesian networks.

    PubMed

    Segovia, Fermín; Illán, Ignacio A; Górriz, Juan M; Ramírez, Javier; Rominger, Axel; Levin, Johannes

    2015-01-01

    Differentiating between Parkinson's disease (PD) and atypical parkinsonian syndromes (APS) is still a challenge, especially at early stages when the patients show similar symptoms. In recent years, several computer systems have been proposed in order to improve the diagnosis of PD, but their accuracy is still limited. In this work we demonstrate a fully automatic computer system to assist the diagnosis of PD using 18F-DMFP PET data. First, a few regions of interest are selected by means of a two-sample t-test. The accuracy of the selected regions to separate PD from APS patients is then computed using a support vector machine classifier. The accuracy values are finally used to train a Bayesian network that can be used to predict the class of new, unseen data. This methodology was evaluated using a database with 87 neuroimages, achieving accuracy rates over 78%. A fair comparison with other similar approaches is also provided. PMID:26594165
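
The region-selection step can be illustrated with a Welch two-sample t statistic; the toy numbers below are invented (two "regions" of mean uptake, only the first of which separates the PD and APS groups):

```python
def t_stat(x, y):
    """Welch two-sample t statistic."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    return (mx - my) / (vx / nx + vy / ny) ** 0.5

# hypothetical mean uptake per region: region 0 separates the groups,
# region 1 is uninformative
pd_group  = [[2.0, 5.1], [2.2, 4.9], [1.9, 5.0], [2.1, 5.2]]
aps_group = [[0.9, 5.0], [1.1, 5.1], [1.0, 4.8], [0.8, 5.1]]
scores = [abs(t_stat([r[j] for r in pd_group], [r[j] for r in aps_group]))
          for j in range(2)]
# only region 0 would pass a selection threshold on |t|
assert scores[0] > 3.0 > scores[1]
```

Regions passing the threshold would then feed the SVM classifier, whose per-region accuracies in turn parameterize the Bayesian network described in the abstract.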

  8. Characterization and comparative performance of lentiviral vector preparations concentrated by either one-step ultrafiltration or ultracentrifugation.

    PubMed

    Papanikolaou, Eleni; Kontostathi, Georgia; Drakopoulou, Ekati; Georgomanoli, Maria; Stamateris, Evangelos; Vougas, Kostas; Vlahou, Antonia; Maloy, Andrew; Ware, Mark; Anagnou, Nicholas P

    2013-07-01

    Gene therapy utilizing lentiviral vectors (LVs) constitutes a real therapeutic alternative for many inherited monogenic diseases. Therefore, the generation of functional vectors using fast, non-laborious and cost-effective strategies is imperative. Among the available concentration methods for VSV-G pseudotyped lentiviruses to achieve high therapeutic titers, ultracentrifugation represents the most common approach. However, the procedure requires special handling and access to special instrumentation, it is time-consuming, and most importantly, it is cost-ineffective due to the high maintenance expenses and consumables of the ultracentrifuge apparatus. Here we describe an improved protocol in which vector stocks are prepared by transient transfection using standard cell culture media and are then concentrated by ultrafiltration, resulting in functional vector titers of up to 6×10^9 transducing units per millilitre (TU/ml) without the involvement of any purification step. Although ultrafiltration per se for concentrating viruses is not a new procedure, our work displays one major novelty: we characterized the nature and the constituents of the viral batches produced by ultrafiltration using peptide mass fingerprint analysis. We also determined the functional viral titer by employing flow cytometry and evaluated the actual viral particle size and concentration in real time by using laser-based nanoparticle tracking analysis based on Brownian motion. Vectors generated by this production method are contained in intact virions and, when tested to transduce in vitro either murine total bone marrow or human CD34+ hematopoietic stem cells, resulted in equal transduction efficiency and reduced toxicity compared to lentiviral vectors produced using standard ultracentrifugation-based methods. The data from this study can eventually lead to the improvement of protocols and technical modifications for clinical gene therapy trials.

  9. Issues in performing a network meta-analysis.

    PubMed

    Senn, Stephen; Gavini, Francois; Magrez, David; Scheen, André

    2013-04-01

    The example of the analysis of a collection of trials in diabetes, consisting of a sparsely connected network of 10 treatments, is used to make some points about approaches to analysis. In particular, various graphical and tabular presentations, both of the network and of the results, are provided, and the connection to the literature of incomplete blocks is made. It is clear from this example that it is inappropriate to treat the main effect of trial as random, and the implications of this for analysis are discussed. It is also argued that the generalisation from a classic random-effect meta-analysis to one applied to a network usually involves strong assumptions about the variance components involved. Despite this, it is concluded that such an analysis can be a useful way of exploring a set of trials.
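    The adjusted indirect comparison that underlies such network analyses can be sketched in a few lines (the Bucher method; the effect estimates below are hypothetical, not taken from the diabetes network above):

```python
# Adjusted indirect comparison (Bucher method): estimate the A-vs-C effect
# from independent A-vs-B and B-vs-C trial results. Effects are on a common
# additive scale (e.g. log odds ratios), so variances simply add.

def indirect_comparison(d_ab, var_ab, d_bc, var_bc):
    """Return the indirect A-vs-C effect estimate and its variance."""
    d_ac = d_ab + d_bc
    var_ac = var_ab + var_bc
    return d_ac, var_ac

# Hypothetical estimates: A lowers the outcome vs B, B lowers it vs C.
effect, variance = indirect_comparison(-0.30, 0.04, -0.20, 0.09)
print(effect, variance)
```

    Note how the indirect estimate is less precise than either direct one: the variances accumulate, which is one reason sparsely connected networks demand care.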

  10. Computer systems for laboratory networks and high-performance NMR.

    PubMed

    Levy, G C; Begemann, J H

    1985-08-01

    Modern computer technology is significantly enhancing the associated tasks of spectroscopic data acquisition and data reduction and analysis. Distributed data processing techniques, particularly laboratory computer networking, are rapidly changing the scientist's ability to optimize results from complex experiments. Optimization of nuclear magnetic resonance spectroscopy (NMR) and magnetic resonance imaging (MRI) experimental results requires use of powerful, large-memory (virtual memory preferred) computers with integrated (and supported) high-speed links to magnetic resonance instrumentation. Laboratory architectures with larger computers, in order to extend data reduction capabilities, have facilitated the transition to NMR laboratory computer networking. Examples of a polymer microstructure analysis and in vivo 31P metabolic analysis are given. This paper also discusses laboratory data processing trends anticipated over the next 5-10 years. Full networking of NMR laboratories is just now becoming a reality. PMID:3840171

  12. Performance Impacts of Lower-Layer Cryptographic Methods in Mobile Wireless Ad Hoc Networks

    SciTech Connect

    VAN LEEUWEN, BRIAN P.; TORGERSON, MARK D.

    2002-10-01

    In high consequence systems, all layers of the protocol stack need security features. If network and data-link layer control messages are not secured, a network may be open to adversarial manipulation. The open nature of the wireless channel makes mobile wireless ad hoc networks (MANETs) especially vulnerable to control plane manipulation. The objective of this research is to investigate MANET performance issues when cryptographic processing delays are applied at the data-link layer. The results of analysis are combined with modeling and simulation experiments to show that network performance in MANETs is highly sensitive to the cryptographic overhead.

  13. Statistical modelling of networked human-automation performance using working memory capacity.

    PubMed

    Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja

    2014-01-01

    This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models; and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models. PMID:24308716

  14. Visualizing weighted networks: a performance comparison of adjacency matrices versus node-link diagrams

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Osesina, O. Isaac; Bartley, Cecilia; Tudoreanu, M. Eduard; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    Ensuring the proper and effective ways to visualize network data is important for many areas of academia, applied sciences, the military, and the public. Fields such as social network analysis, genetics, biochemistry, intelligence, cybersecurity, neural network modeling, transit systems, communications, etc. often deal with large, complex network datasets that can be difficult to interact with, study, and use. There have been surprisingly few human factors performance studies on the relative effectiveness of different graph drawings or network diagram techniques to convey information to a viewer. This is particularly true for weighted networks which include the strength of connections between nodes, not just information about which nodes are linked to other nodes. We describe a human factors study in which participants performed four separate network analysis tasks (finding a direct link between given nodes, finding an interconnected node between given nodes, estimating link strengths, and estimating the most densely interconnected nodes) on two different network visualizations: an adjacency matrix with a heat-map versus a node-link diagram. The results should help shed light on effective methods of visualizing network data for some representative analysis tasks, with the ultimate goal of improving usability and performance for viewers of network data displays.

  15. Vectorized Monte Carlo

    SciTech Connect

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups of about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes.
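    The paper's specific discrete-sampling method is not reproduced here, but the general idea of replacing branchy collision logic with a precomputed-table lookup per particle, which is what makes batch processing vectorizable, can be sketched in pure Python (illustrative event probabilities only):

```python
import bisect
import itertools
import random

def build_cdf(probs):
    """Precompute a cumulative table for a discrete distribution."""
    return list(itertools.accumulate(probs))

def sample_batch(cdf, rng, n):
    """Draw n outcomes with one branch-free table lookup per particle,
    the kind of kernel that maps well onto batch-oriented codes."""
    total = cdf[-1]
    return [bisect.bisect_left(cdf, rng.random() * total) for _ in range(n)]

rng = random.Random(42)
cdf = build_cdf([0.2, 0.5, 0.3])   # e.g. scatter / absorb / fission
counts = [0, 0, 0]
for outcome in sample_batch(cdf, rng, 100_000):
    counts[outcome] += 1
print([c / 100_000 for c in counts])  # roughly [0.2, 0.5, 0.3]
```

    On a true vector machine the loop body would become a single gathered table lookup over the whole particle batch; the cumulative-table structure is what removes the per-particle branching.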

  16. Algorithms for Performance, Dependability, and Performability Evaluation using Stochastic Activity Networks

    NASA Technical Reports Server (NTRS)

    Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.

    1997-01-01

    Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling with Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions, while still obtaining solutions in a reasonable amount of time; and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state-space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that can tolerate such large state spaces and do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on-the-fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally-efficient technique for Gauss-Seidel-based solvers that avoids the need for generating rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
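    As a rough illustration of the on-the-fly idea (not the modified adaptive Gauss-Seidel of the paper), here is a plain Gauss-Seidel sweep in which each matrix row is regenerated from a rule every time it is needed, so the matrix itself is never stored:

```python
def row(i, n):
    """Generate row i of a 1-D Laplacian-like matrix on demand:
    2 on the diagonal, -1 on the off-diagonals. Nothing is stored."""
    entries = {i: 2.0}
    if i > 0:
        entries[i - 1] = -1.0
    if i < n - 1:
        entries[i + 1] = -1.0
    return entries

def gauss_seidel(n, b, sweeps=2000):
    """Plain Gauss-Seidel where rows are regenerated every sweep,
    trading extra computation for not holding the matrix in memory."""
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            entries = row(i, n)
            s = sum(a * x[j] for j, a in entries.items() if j != i)
            x[i] = (b[i] - s) / entries[i]
    return x

n = 5
b = [1.0] * n
x = gauss_seidel(n, b)
print(x)  # converges to the exact solution [2.5, 4.0, 4.5, 4.0, 2.5]
```

    For a Markov model the row-generation rule would come from the high-level stochastic activity network description rather than a stencil, but the memory trade-off is the same.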

  17. Social Networks and Performance in Distributed Learning Communities

    ERIC Educational Resources Information Center

    Cadima, Rita; Ojeda, Jordi; Monguet, Josep M.

    2012-01-01

    Social networks play an essential role in learning environments as a key channel for knowledge sharing and students' support. In distributed learning communities, knowledge sharing does not occur as spontaneously as when a working group shares the same physical space; knowledge sharing depends even more on student informal connections. In this…

  18. Support vector machines

    NASA Technical Reports Server (NTRS)

    Garay, Michael J.; Mazzoni, Dominic; Davies, Roger; Wagstaff, Kiri

    2004-01-01

    Support Vector Machines (SVMs) are a type of supervised learning algorithm; other examples include Artificial Neural Networks (ANNs), decision trees, and naive Bayesian classifiers. Supervised learning algorithms are used to classify objects labeled by a 'supervisor', typically a human 'expert.'
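    A minimal linear SVM can be trained by stochastic sub-gradient descent on the hinge loss (a Pegasos-style sketch on a hypothetical toy dataset; real applications would use a mature library):

```python
import random

def train_linear_svm(data, lam=0.01, epochs=2000, seed=0):
    """Hinge-loss sub-gradient descent for a linear classifier w.x + b."""
    rng = random.Random(seed)
    w = [0.0, 0.0]
    b = 0.0
    t = 0
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:                      # y is +1 or -1
            t += 1
            eta = 1.0 / (lam * t)              # decaying step size
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            # Sub-gradient step on lam/2*|w|^2 plus the hinge loss.
            w = [wi - eta * lam * wi for wi in w]
            if margin < 1:
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                b += eta * y
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# Toy 2-D data: class +1 in the upper right, class -1 in the lower left.
data = [((2.0, 2.0), 1), ((3.0, 2.5), 1), ((2.5, 3.0), 1),
        ((0.0, 0.0), -1), ((-1.0, 0.5), -1), ((0.5, -1.0), -1)]
w, b = train_linear_svm(list(data))
print(sum(predict(w, b, x) == y for x, y in data), "of", len(data), "correct")
```

    The hinge loss is what distinguishes this from a perceptron: points classified correctly but with margin below 1 still pull on the weights, which is what drives the solution toward the maximum-margin separator.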

  19. Singular Vectors' Subtle Secrets

    ERIC Educational Resources Information Center

    James, David; Lachance, Michael; Remski, Joan

    2011-01-01

    Social scientists use adjacency tables to discover influence networks within and among groups. Building on work by Moler and Morrison, we use ordered pairs from the components of the first and second singular vectors of adjacency matrices as tools to distinguish these groups and to identify particularly strong or weak individuals.
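    For a symmetric adjacency matrix, the first singular vector coincides (up to sign) with the dominant eigenvector, which power iteration recovers; a sketch on a hypothetical five-person network:

```python
import math

def matvec(adj, v):
    return [sum(a * x for a, x in zip(row, v)) for row in adj]

def power_iteration(adj, iters=100):
    """Dominant eigenvector of a symmetric adjacency matrix (also its
    first singular vector, up to sign), by repeated multiplication."""
    v = [1.0] * len(adj)
    for _ in range(iters):
        w = matvec(adj, v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Hypothetical 5-person network: persons 0-2 form a tightly knit
# triangle, while persons 3-4 share only a single tie.
adj = [[0, 1, 1, 0, 0],
       [1, 0, 1, 0, 0],
       [1, 1, 0, 0, 0],
       [0, 0, 0, 0, 1],
       [0, 0, 0, 1, 0]]
v = power_iteration(adj)
print([round(x, 3) for x in v])  # [0.577, 0.577, 0.577, 0.0, 0.0]
```

    The first singular vector concentrates its weight on the strongest group; distinguishing a second group, as in the article, requires the second singular vector as well, which can be obtained by deflating the first.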

  20. Performance Modeling of Network-Attached Storage Device Based Hierarchical Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Menasce, Daniel A.; Pentakalos, Odysseas I.

    1995-01-01

    Network attached storage devices improve I/O performance by separating control and data paths and eliminating host intervention during the data transfer phase. Devices are attached to both a high speed network for data transfer and to a slower network for control messages. Hierarchical mass storage systems use disks to cache the most recently used files and a combination of robotic and manually mounted tapes to store the bulk of the files in the file system. This paper shows how queuing network models can be used to assess the performance of hierarchical mass storage systems that use network attached storage devices as opposed to host attached storage devices. Simulation was used to validate the model. The analytic model presented here can be used, among other things, to evaluate the protocols involved in I/O over network attached devices.
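    The flavor of such queuing network models can be shown with exact Mean Value Analysis for a closed product-form network (the service demands below are hypothetical, not the storage parameters of the paper):

```python
def mva(demands, customers):
    """Exact Mean Value Analysis for a closed, single-class,
    product-form network of queueing stations.
    demands[k] is the average service demand at station k."""
    q = [0.0] * len(demands)            # mean queue lengths
    for n in range(1, customers + 1):
        # Residence time each arriving customer sees at each station.
        r = [d * (1 + qk) for d, qk in zip(demands, q)]
        x = n / sum(r)                  # system throughput
        q = [x * rk for rk in r]        # Little's law per station
    return x, r

# Hypothetical system: a fast disk cache and a slow tape robot.
throughput, residence = mva([0.05, 0.30], customers=8)
print(round(throughput, 3), [round(t, 3) for t in residence])
```

    The slow station's demand (0.30) bounds throughput at 1/0.30 ≈ 3.3 requests per unit time no matter how many customers circulate, which is exactly the kind of bottleneck analysis such models are used for.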

  1. A high-performance feedback neural network for solving convex nonlinear programming problems.

    PubMed

    Leung, Yee; Chen, Kai-Zhou; Gao, Xing-Bao

    2003-01-01

    Based on a new idea of successive approximation, this paper proposes a high-performance feedback neural network model for solving convex nonlinear programming problems. Differing from existing neural network optimization models, no dual variables, penalty parameters, or Lagrange multipliers are involved in the proposed network. It has the least number of state variables and is very simple in structure. In particular, the proposed network has better asymptotic stability. For an arbitrarily given initial point, the trajectory of the network converges to an optimal solution of the convex nonlinear programming problem under no more than the standard assumptions. In addition, the network can also solve linear programming and convex quadratic programming problems, and the new idea of a feedback network may be used to solve other optimization problems. Feasibility and efficiency are also substantiated by simulation examples.
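    The paper's feedback network is not reproduced here, but the core idea, that the trajectory of a dynamical system settles at the optimum of a convex problem, can be sketched with an Euler-discretized gradient flow on a toy convex quadratic:

```python
def grad_flow(grad, x0, step=0.01, iters=5000):
    """Euler discretization of the gradient flow dx/dt = -grad f(x):
    for convex f, the trajectory converges to a minimizer."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Convex quadratic f(x) = (x0 - 1)^2 + (x1 + 2)^2, minimum at (1, -2).
grad = lambda x: [2 * (x[0] - 1), 2 * (x[1] + 2)]
print([round(v, 4) for v in grad_flow(grad, [5.0, 5.0])])  # [1.0, -2.0]
```

    A feedback neural network of the kind described replaces this unconstrained flow with dynamics whose equilibria are the constrained optima, but the convergence-of-trajectories picture is the same.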

  2. Moving Large Data Sets Over High-Performance Long Distance Networks

    SciTech Connect

    Hodson, Stephen W; Poole, Stephen W; Ruwart, Thomas; Settlemyer, Bradley W

    2011-04-01

    In this project we look at the performance characteristics of three tools used to move large data sets over dedicated long distance networking infrastructure. Although performance studies of wide area networks have been a frequent topic of interest, performance analyses have tended to focus on network latency characteristics and peak throughput using network traffic generators. In this study we instead perform an end-to-end long distance networking analysis that includes reading large data sets from a source file system and committing large data sets to a destination file system. An evaluation of end-to-end data movement is also an evaluation of the system configurations employed and the tools used to move the data. For this paper, we have built several storage platforms and connected them with a high performance long distance network configuration. We use these systems to analyze the capabilities of three data movement tools: BBcp, GridFTP, and XDD. Our studies demonstrate that existing data movement tools do not provide efficient performance levels or exercise the storage devices in their highest performance modes. We describe the device information required to achieve high levels of I/O performance and discuss how this data is applicable in use cases beyond data movement performance.

  3. G-NetMon: a GPU-accelerated network performance monitoring system

    SciTech Connect

    Wu, Wenji; DeMar, Phil; Holmgren, Don; Singh, Amitoj; /Fermilab

    2011-06-01

    At Fermilab, we have prototyped a GPU-accelerated network performance monitoring system, called G-NetMon, to support large-scale scientific collaborations. In this work, we explore new opportunities in network traffic monitoring and analysis with GPUs. Our system exploits the data parallelism that exists within network flow data to provide fast analysis of bulk data movement between Fermilab and collaboration sites. Experiments demonstrate that our G-NetMon can rapidly detect sub-optimal bulk data movements.

  4. Communication, Opponents, and Clan Performance in Online Games: A Social Network Approach

    PubMed Central

    Lee, Hong Joo; Choi, Jaewon; Park, Sung Joo; Gloor, Peter

    2013-01-01

    Online gamers form clans voluntarily to play together and to discuss their real and virtual lives. Although these clans have diverse goals, they seek to increase their rank in the game community by winning more battles. Communications among clan members and battles with other clans may influence the performance of a clan. In this study, we compared the effects of communication structure inside a clan, and battle networks among clans, with the performance of the clans. We collected battle histories, posts, and comments on clan pages from a Korean online game, and measured social network indices for communication and battle networks. Communication structures in terms of density and group degree centralization index had no significant association with clan performance. However, the centrality of clans in the battle network was positively related to the performance of the clan. If a clan had many battle opponents, the performance of the clan improved. PMID:23745617

  5. Communication, opponents, and clan performance in online games: a social network approach.

    PubMed

    Lee, Hong Joo; Choi, Jaewon; Kim, Jong Woo; Park, Sung Joo; Gloor, Peter

    2013-12-01

    Online gamers form clans voluntarily to play together and to discuss their real and virtual lives. Although these clans have diverse goals, they seek to increase their rank in the game community by winning more battles. Communications among clan members and battles with other clans may influence the performance of a clan. In this study, we compared the effects of communication structure inside a clan, and battle networks among clans, with the performance of the clans. We collected battle histories, posts, and comments on clan pages from a Korean online game, and measured social network indices for communication and battle networks. Communication structures in terms of density and group degree centralization index had no significant association with clan performance. However, the centrality of clans in the battle network was positively related to the performance of the clan. If a clan had many battle opponents, the performance of the clan improved.
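    The two communication-structure indices mentioned, density and group degree centralization, are standard social-network measures; a sketch for a small hypothetical clan network:

```python
def density(edges, n):
    """Fraction of possible undirected ties that are present."""
    return 2 * len(edges) / (n * (n - 1))

def degree_centralization(edges, n):
    """Freeman's group degree centralization: how much the network is
    dominated by its single most connected member (1.0 for a star)."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    dmax = max(deg)
    return sum(dmax - d for d in deg) / ((n - 1) * (n - 2))

# Hypothetical 5-member clan: member 0 talks to everyone,
# the others talk only to member 0 (a star).
star = [(0, 1), (0, 2), (0, 3), (0, 4)]
print(density(star, 5), degree_centralization(star, 5))  # 0.4 1.0
```

    A ring of the same five members has the same number of ties arranged evenly, giving a higher density but a centralization of 0.0, which is why the two indices capture different aspects of communication structure.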

  6. A national laboratory network for bioterrorism: evolution from a prototype network of laboratories performing routine surveillance.

    PubMed

    Gilchrist, M J

    2000-07-01

    The need for an enhanced network of laboratories to respond to a bioterrorism attack has been realized. Therefore, the Association of Public Health Laboratories and the Centers for Disease Control are developing a system involving civilian public health and private laboratories that builds on the existing network for routine disease surveillance. It is anticipated that most bioterrorist attacks will not be immediately recognized, so increased laboratory capabilities and communications are necessary. The laboratory network has four categories with different biosafety levels assigned to clearly delineate the correct referral route. Improving communications through World Wide Web-based systems will allow test results, surge capacity, and training and identification algorithms to be shared instantly. There are plans to expand the network to include standard public health surveillance and emerging infectious diseases.

  7. Hydrogen Bond Nanoscale Networks Showing Switchable Transport Performance

    NASA Astrophysics Data System (ADS)

    Long, Yong; Hui, Jun-Feng; Wang, Peng-Peng; Xiang, Guo-Lei; Xu, Biao; Hu, Shi; Zhu, Wan-Cheng; Lü, Xing-Qiang; Zhuang, Jing; Wang, Xun

    2012-08-01

    The hydrogen bond is a typical noncovalent bond, with roughly one-tenth the strength of a typical covalent bond. Because hydrogen bonds break and re-form easily, materials based on them can behave reversibly in their assembly and other properties, which offers advantages in fabrication and recyclability. In this paper, hydrogen bond nanoscale networks are utilized to separate water and oil at the macroscale. This is realized using nanowire macro-membranes with pore sizes of tens of nanometers, which can form hydrogen bonds with water molecules on their surfaces. It is also found that gradual replacement of the water by ethanol molecules endows the film with tunable transport properties. It is proposed that a hydrogen bond network in the membrane is responsible for this switching effect. Significant application potential is demonstrated by the successful separation of oil and water, especially in emulsion form.

  8. Optimizing performance of hybrid FSO/RF networks in realistic dynamic scenarios

    NASA Astrophysics Data System (ADS)

    Llorca, Jaime; Desai, Aniket; Baskaran, Eswaran; Milner, Stuart; Davis, Christopher

    2005-08-01

    Hybrid Free Space Optical (FSO) and Radio Frequency (RF) networks promise highly available wireless broadband connectivity and quality of service (QoS), particularly suitable for emerging network applications involving extremely high data rate transmissions such as high quality video-on-demand and real-time surveillance. FSO links are prone to atmospheric obscuration (fog, clouds, snow, etc) and are difficult to align over long distances due to the use of narrow laser beams and the effect of atmospheric turbulence. These problems can be mitigated by using adjunct directional RF links, which provide backup connectivity. In this paper, methodologies for modeling and simulation of hybrid FSO/RF networks are described. Individual link propagation models are derived using scattering theory, as well as experimental measurements. MATLAB is used to generate realistic atmospheric obscuration scenarios, including moving cloud layers at different altitudes. These scenarios are then imported into a network simulator (OPNET) to emulate mobile hybrid FSO/RF networks. This framework allows accurate analysis of the effects of node mobility, atmospheric obscuration and traffic demands on network performance, and precise evaluation of topology reconfiguration algorithms as they react to dynamic changes in the network. Results show how topology reconfiguration algorithms, together with enhancements to TCP/IP protocols which reduce the network response time, enable the network to rapidly detect and act upon link state changes in highly dynamic environments, ensuring optimized network performance and availability.

  9. The effects of malicious nodes on performance of mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Li, Fanzhi; Shi, Xiyu; Jassim, Sabah; Adams, Christopher

    2006-05-01

    Wireless ad hoc networking offers convenient infrastructureless communication over the shared wireless channel. However, the nature of ad hoc networks makes them vulnerable to security attacks. Unlike their wired counterpart, infrastructureless ad hoc networks do not have a clear line of defense, their topology is dynamically changing, and every mobile node can receive messages from its neighbors and can be contacted by all other nodes in its neighborhood. This poses a great danger to network security if some nodes behave in a malicious manner. The immediate concern about the security in this type of networks is how to protect the network and the individual mobile nodes against malicious act of rogue nodes from within the network. This paper is concerned with security aspects of wireless ad hoc networks. We shall present results of simulation experiments on ad hoc network's performance in the presence of malicious nodes. We shall investigate two types of attacks and the consequences will be simulated and quantified in terms of loss of packets and other factors. The results show that network performance, in terms of successful packet delivery ratios, significantly deteriorates when malicious nodes act according to the defined misbehaving characteristics.
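    A back-of-the-envelope model (not the paper's simulation) already shows why delivery ratios deteriorate: if each relay on an h-hop path is malicious with probability m and silently drops packets, the expected delivery ratio is (1 - m)^h:

```python
def delivery_ratio(malicious_fraction, hops):
    """Expected packet delivery ratio when each independent relay on
    the path silently drops packets with the given probability."""
    return (1 - malicious_fraction) ** hops

# Deterioration over a hypothetical 4-hop path.
for m in (0.0, 0.1, 0.3):
    print(m, round(delivery_ratio(m, hops=4), 3))
```

    Even a modest fraction of misbehaving relays compounds multiplicatively over multi-hop paths, which is consistent with the significant deterioration the simulations report.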

  10. Traffic Dimensioning and Performance Modeling of 4G LTE Networks

    ERIC Educational Resources Information Center

    Ouyang, Ye

    2011-01-01

    Rapid changes in mobile techniques have always been evolutionary, and the deployment of 4G Long Term Evolution (LTE) networks will be the same. It will be another transition from Third Generation (3G) to Fourth Generation (4G) over a period of several years, as is the case still with the transition from Second Generation (2G) to 3G. As a result,…

  11. Performance analysis of Integrated Communication and Control System networks

    NASA Technical Reports Server (NTRS)

    Halevi, Y.; Ray, A.

    1990-01-01

    This paper presents statistical analysis of delays in Integrated Communication and Control System (ICCS) networks that are based on asynchronous time-division multiplexing. The models are obtained in closed form for analyzing control systems with randomly varying delays. The results of this research are applicable to ICCS design for complex dynamical processes like advanced aircraft and spacecraft, autonomous manufacturing plants, and chemical and processing plants.

  12. Enhancing End-to-End Performance of Information Services Over Ka-Band Global Satellite Networks

    NASA Technical Reports Server (NTRS)

    Bhasin, Kul B.; Glover, Daniel R.; Ivancic, William D.; vonDeak, Thomas C.

    1997-01-01

    The Internet has been growing at a rapid rate as the key medium for providing information services such as e-mail, the WWW, and multimedia; however, its global reach is limited. Ka-band communication satellite networks are being developed to increase the accessibility of information services via the Internet on a global scale. There is a need to assess the ability of satellite networks to provide these services and to interconnect seamlessly with existing and proposed terrestrial telecommunication networks. In this paper, the significant issues and requirements in providing end-to-end high performance for the delivery of information services over satellite networks are identified, based on the various layers in the OSI reference model. Key experiments have been performed to evaluate the performance of digital video and Internet traffic over satellite-like testbeds. The results of early developments in ATM and TCP protocols over satellite networks are summarized.

  13. High performance interconnection between high data rate networks

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Maly, K.; Overstreet, C. M.; Zhang, L.; Sun, W.

    1992-01-01

    The bridge/gateway system needed to interconnect a wide range of computer networks to support a wide range of user quality-of-service requirements is discussed. The bridge/gateway must handle a wide range of message types, including synchronous and asynchronous traffic, large, bursty messages, short, self-contained messages, time-critical messages, etc. It is shown that messages can be classified into three basic classes: synchronous messages, and large and small asynchronous messages. The first two require call setup so that packet identification, buffer handling, etc. can be supported in the bridge/gateway; identification also enables resequencing across differences in packet size. The third class is for messages which do not require call setup. Resequencing hardware designed to handle two types of resequencing problems is presented. The first handles a virtual parallel circuit which can scramble channel bytes. The second system is effective in handling both synchronous and asynchronous traffic between networks with highly differing packet sizes and data rates. The two other major needs for the bridge/gateway are congestion and error control. A dynamic, lossless congestion control scheme which can easily support effective error correction is presented. Results indicate that the congestion control scheme provides close to optimal capacity under congested conditions. Under conditions where errors may develop due to intervening networks which are not lossless, intermediate error recovery and correction takes one-third less time than equivalent end-to-end error correction under similar conditions.
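    The resequencing function described can be sketched as a minimal reorder buffer: packets that arrive early are held until the next expected sequence number appears (a generic illustration, not the paper's hardware design):

```python
import heapq

class ResequencingBuffer:
    """Release packets in sequence order, buffering any that arrive early."""
    def __init__(self):
        self.next_seq = 0
        self.heap = []

    def push(self, seq, payload):
        """Accept one packet; return the list of packets now deliverable."""
        heapq.heappush(self.heap, (seq, payload))
        released = []
        while self.heap and self.heap[0][0] == self.next_seq:
            released.append(heapq.heappop(self.heap)[1])
            self.next_seq += 1
        return released

buf = ResequencingBuffer()
out = []
for seq, payload in [(1, "b"), (0, "a"), (3, "d"), (2, "c")]:
    out.extend(buf.push(seq, payload))
print(out)  # ['a', 'b', 'c', 'd']
```

    A hardware implementation would bound the buffer and time out missing sequence numbers; the heap here simply makes the in-order release logic explicit.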

  14. Functional Connectivity in Multiple Cortical Networks Is Associated with Performance Across Cognitive Domains in Older Adults

    PubMed Central

    Shaw, Emily E.; Schultz, Aaron P.; Sperling, Reisa A.

    2015-01-01

    Intrinsic functional connectivity MRI has become a widely used tool for measuring integrity in large-scale cortical networks. This study examined multiple cortical networks using Template-Based Rotation (TBR), a method that applies a priori network and nuisance component templates defined from an independent dataset to test datasets of interest. A priori templates were applied to a test dataset of 276 older adults (ages 65-90) from the Harvard Aging Brain Study to examine the relationship between multiple large-scale cortical networks and cognition. Factor scores derived from neuropsychological tests represented processing speed, executive function, and episodic memory. Resting-state BOLD data were acquired in two 6-min acquisitions on a 3-Tesla scanner and processed with TBR to extract individual-level metrics of network connectivity in multiple cortical networks. All results controlled for data quality metrics, including motion. Connectivity in multiple large-scale cortical networks was positively related to all cognitive domains, with a composite measure of general connectivity positively associated with general cognitive performance. Controlling for the correlations between networks, the frontoparietal control network (FPCN) and executive function demonstrated the only significant association, suggesting specificity in this relationship. Further analyses found that the FPCN mediated the relationships of the other networks with cognition, suggesting that this network may play a central role in understanding individual variation in cognition during aging. PMID:25827242

  15. Social Networks and Students' Performance in Secondary Schools: Lessons from an Open Learning Centre, Kenya

    ERIC Educational Resources Information Center

    Muhingi, Wilkins Ndege; Mutavi, Teresia; Kokonya, Donald; Simiyu, Violet Nekesa; Musungu, Ben; Obondo, Anne; Kuria, Mary Wangari

    2015-01-01

    Given the known positive and negative effects of uncontrolled social networking among secondary school students worldwide, it is necessary to establish the relationship between social network sites and academic performances among secondary school students. This study, therefore, aimed at establishing the relationship between secondary school…

  16. Social Networks, Communication Styles, and Learning Performance in a CSCL Community

    ERIC Educational Resources Information Center

    Cho, Hichang; Gay, Geri; Davidson, Barry; Ingraffea, Anthony

    2007-01-01

    The aim of this study is to empirically investigate the relationships between communication styles, social networks, and learning performance in a computer-supported collaborative learning (CSCL) community. Using social network analysis (SNA) and longitudinal survey data, we analyzed how 31 distributed learners developed collaborative learning…

  17. A Technique for Moving Large Data Sets over High-Performance Long Distance Networks

    SciTech Connect

    Settlemyer, Bradley W; Dobson, Jonathan D; Hodson, Stephen W; Kuehn, Jeffery A; Poole, Stephen W; Ruwart, Thomas

    2011-01-01

    In this paper we look at the performance characteristics of three tools used to move large data sets over dedicated long distance networking infrastructure. Although performance studies of wide area networks have been a frequent topic of interest, performance analyses have tended to focus on network latency characteristics and peak throughput using network traffic generators. In this study we instead perform an end-to-end long distance networking analysis that includes reading large data sets from a source file system and committing the data to a remote destination file system. An evaluation of end-to-end data movement is also an evaluation of the system configurations employed and the tools used to move the data. For this paper, we have built several storage platforms and connected them with a high performance long distance network configuration. We use these systems to analyze the capabilities of three data movement tools: BBcp, GridFTP, and XDD. Our studies demonstrate that existing data movement tools do not provide efficient performance levels or exercise the storage devices in their highest performance modes.

  18. Performance of the Birmingham Solar-Oscillations Network (BiSON)

    NASA Astrophysics Data System (ADS)

    Hale, S. J.; Howe, R.; Chaplin, W. J.; Davies, G. R.; Elsworth, Y. P.

    2016-01-01

    The Birmingham Solar-Oscillations Network (BiSON) has been operating with a full complement of six stations since 1992. Over 20 years later, we look back on the network history. The meta-data from the sites have been analysed to assess performance in terms of site insolation, with a brief look at the challenges that have been encountered over the years. We explain how the international community can gain easy access to the ever-growing dataset produced by the network, and finally look to the future of the network and the potential impact of nearly 25 years of technology miniaturisation.

  19. OSI Network-layer Abstraction: Analysis of Simulation Dynamics and Performance Indicators

    NASA Astrophysics Data System (ADS)

    Lawniczak, Anna T.; Gerisch, Alf; Di Stefano, Bruno

    2005-06-01

    The Open Systems Interconnection (OSI) reference model provides a conceptual framework for communication among computers in a data communication network. The Network Layer of this model is responsible for the routing and forwarding of packets of data. We investigate the OSI Network Layer and develop an abstraction suitable for the study of various network performance indicators, e.g. throughput, average packet delay, average packet speed, average packet path-length, etc. We investigate how the network dynamics and the network performance indicators are affected by various routing algorithms and by the addition of randomly generated links into a regular network connection topology of fixed size. We observe that the network dynamics is not simply the sum of effects resulting from adding individual links to the connection topology but rather is governed nonlinearly by the complex interactions caused by the existence of all randomly added and already existing links in the network. Data for our study was gathered using Netzwerk-1, a C++ simulation tool that we developed for our abstraction.
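    The kind of network-layer abstraction described above can be illustrated with a toy model. The sketch below is an illustrative assumption, not the Netzwerk-1 tool itself: packets hop around a ring of store-and-forward nodes (one forward per node per step, shortest-direction routing), and two of the performance indicators mentioned, throughput and average packet delay, are reported.

```python
import random
from collections import deque

def simulate_ring(n_nodes=20, arrival_rate=0.1, steps=2000, seed=1):
    """Minimal network-layer abstraction: nodes on a ring forward packets
    one hop per time step in the shortest direction to their destination.
    Returns (throughput in delivered packets/step, mean packet delay)."""
    random.seed(seed)
    queues = [deque() for _ in range(n_nodes)]   # one FIFO per node
    delivered, total_delay = 0, 0
    for t in range(steps):
        # packet injection: each queued item is (destination, creation_time)
        for node in range(n_nodes):
            if random.random() < arrival_rate:
                dest = random.randrange(n_nodes)
                if dest != node:
                    queues[node].append((dest, t))
        # each node forwards at most one packet per step
        moves = []
        for node in range(n_nodes):
            if queues[node]:
                dest, t0 = queues[node].popleft()
                cw = (dest - node) % n_nodes      # clockwise hop distance
                nxt = (node + 1) % n_nodes if cw <= n_nodes - cw else (node - 1) % n_nodes
                moves.append((nxt, dest, t0))
        for nxt, dest, t0 in moves:
            if nxt == dest:
                delivered += 1
                total_delay += t + 1 - t0
            else:
                queues[nxt].append((dest, t0))
    return delivered / steps, (total_delay / delivered if delivered else float("inf"))

tp, delay = simulate_ring()
print(f"throughput={tp:.3f} packets/step, mean delay={delay:.2f} steps")
```

    Adding random shortcut links to the adjacency and rerunning would reproduce the kind of topology experiment the abstract describes.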

  20. Can artificial neural networks provide an "expert's" view of medical students performances on computer based simulations?

    PubMed

    Stevens, R H; Najafi, K

    1992-01-01

    Artificial neural networks were trained to recognize the test selection patterns of students' successful solutions to seven immunology computer based simulations. When new students' test selections were presented to the trained neural network, their problem solutions were correctly classified as successful or non-successful > 90% of the time. Examination of the neural network's output weights after each test selection revealed a progressive increase for the relevant problem, suggesting that a successful solution was represented by the neural network as the accumulation of relevant tests. Unsuccessful problem solutions revealed two patterns of student performance. The first pattern was characterized by low neural network output weights for all seven problems, reflecting extensive searching and a lack of recognition of relevant information. In the second pattern, the output weights from the neural network were biased towards one of the remaining six incorrect problems, suggesting that the student misrepresented the current problem as an instance of a previous problem.

  1. Performance analysis of a scheme for concurrency/synchronization using queueing network models

    SciTech Connect

    Almeida, V.A.F.; Dowdy, L.W.

    1986-12-01

    Queueing network models have been used extensively to analyze the performance of computer systems. However, queueing network models with product form solutions are not directly applicable to systems that process programs with internal concurrency/synchronization. An exact solution for such systems is often not feasible because of the large state space. Approximation techniques, based on queueing network theory, are presented which analyze the performance of closed systems with a specific scheme of concurrency/synchronization. The techniques are applicable to multitasking systems, distributed database systems, packet routing environments, and fork/join situations.

  2. Performance analysis of QAM-VADSL systems for FTTC networks

    NASA Astrophysics Data System (ADS)

    Crespo, Pedro M.; Garcia-Frias, Javier

    1995-11-01

    Digital transport capabilities to a home served by a fiber-to-the-curb network are limited by the transmission characteristics of the twisted-pair drop cable. However, advanced digital signal processing techniques can substantially increase the data transmission capability over the relatively short lengths of these metallic sections. The purpose of this study is to estimate the maximum achievable information rate versus drop cable length (between 100 to 500 meters), when very high rate asymmetric digital subscriber line (VADSL) modems, with a QAM modulation technique, are used. Different QAM constellations have been analyzed and two types of disturbances have been considered: far-end crosstalk (FEXT) and additive white Gaussian noise (AWGN). Simulation results show that FEXT is a greater impairment than AWGN, and that a 16-QAM constellation outperforms the other constellation sizes considered.

  3. Study on multiple-hops performance of MOOC sequences-based optical labels for OPS networks

    NASA Astrophysics Data System (ADS)

    Zhang, Chongfu; Qiu, Kun; Ma, Chunli

    2009-11-01

    In this paper, we use a new analytical approach, based on the assumption of independent multiple optical orthogonal codes, to derive the probability function of MOOCS-OPS networks, discuss their performance characteristics for a variety of parameters, and compare some characteristics of systems employing optical labels based on a single optical orthogonal code versus multiple optical orthogonal code sequences. The performance of the system is also calculated, and our results verify that the method is effective. Additionally, it is found that the performance of MOOCS-OPS networks is worse than that of optical packet switching based on single optical orthogonal code labels (SOOC-OPS); however, MOOCS-OPS networks can greatly enlarge the scalability of optical packet switching networks.

  4. Performance Evaluation Analysis of Group Mobility in Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Irshad, Ehtsham; Noshairwan, Wajahat; Shafiq, Muhammad; Khurram, Shahzada; Irshad, Azeem; Usman, Muhammad

    Mobility of nodes is an important issue in mobile ad hoc networks (MANETs). Nodes in a MANET move from one network to another both individually and in groups. In the single node mobility scheme, every node performs registration individually in the new MANET, whereas in the group mobility scheme only one node in a group, the group representative (GR), performs registration on behalf of all other nodes in the group and is assigned a Care of Address (CoA); the Internet Protocol (IP) addresses of all other nodes in the group remain unchanged. Our simulation results show that the group mobility scheme requires fewer messages and less time for registration of nodes than the single node mobility scheme, thereby reducing network load. This paper evaluates the performance of the group mobility scheme against the single node mobility scheme, using a test bed based on the Network Simulator 2 (NS-2) environment.
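    The registration-cost argument can be made concrete with a back-of-the-envelope model. The figure of two messages per registration exchange is an illustrative assumption, not a number from the paper's NS-2 test bed:

```python
MSGS_PER_REGISTRATION = 2   # assumed: one request + one reply per registration

def single_node_messages(group_size):
    """Single node mobility: every node registers individually in the new MANET."""
    return group_size * MSGS_PER_REGISTRATION

def group_messages(group_size):
    """Group mobility: only the group representative (GR) registers and is
    assigned a CoA; the other members keep their IP addresses."""
    return MSGS_PER_REGISTRATION

for size in (5, 20, 50):
    print(f"group of {size}: single-node={single_node_messages(size)} msgs, "
          f"group={group_messages(size)} msgs")
```

    The gap grows linearly with group size, which is the mechanism behind the reduced network load the abstract reports.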

  5. Hybrid Neural-Network: Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics Developed and Demonstrated

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2002-01-01

    As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.

  6. Cloning vector

    DOEpatents

    Guilfoyle, R.A.; Smith, L.M.

    1994-12-27

    A vector comprising a filamentous phage sequence containing a first copy of filamentous phage gene X and other sequences necessary for the phage to propagate is disclosed. The vector also contains a second copy of filamentous phage gene X downstream from a promoter capable of promoting transcription in a bacterial host. In a preferred form of the present invention, the filamentous phage is M13 and the vector additionally includes a restriction endonuclease site located in such a manner as to substantially inactivate the second gene X when a DNA sequence is inserted into the restriction site. 2 figures.

  7. Cloning vector

    DOEpatents

    Guilfoyle, Richard A.; Smith, Lloyd M.

    1994-01-01

    A vector comprising a filamentous phage sequence containing a first copy of filamentous phage gene X and other sequences necessary for the phage to propagate is disclosed. The vector also contains a second copy of filamentous phage gene X downstream from a promoter capable of promoting transcription in a bacterial host. In a preferred form of the present invention, the filamentous phage is M13 and the vector additionally includes a restriction endonuclease site located in such a manner as to substantially inactivate the second gene X when a DNA sequence is inserted into the restriction site.

  8. Static thrust-vectoring performance of nonaxisymmetric convergent-divergent nozzles with post-exit yaw vanes. M.S. Thesis - George Washington Univ., Aug. 1988

    NASA Technical Reports Server (NTRS)

    Foley, Robert J.; Pendergraft, Odis C., Jr.

    1991-01-01

    A static (wind-off) test was conducted in the Static Test Facility of the 16-ft transonic tunnel to determine the performance and turning effectiveness of post-exit yaw vanes installed on two-dimensional convergent-divergent nozzles. One nozzle design that was previously tested was used as a baseline, simulating dry power and afterburning power nozzles at both 0 and 20 degree pitch vectoring conditions. Vanes were installed on these four nozzle configurations to study the effects of vane deflection angle, longitudinal and lateral location, size, and camber. All vanes were hinged at the nozzle sidewall exit, and in addition, some were also hinged at the vane quarter chord (double-hinged). The vane concepts tested generally produced yaw thrust vectoring angles much smaller than the geometric vane angles, with resultant thrust losses of up to 8 percent. When the nozzles were pitch vectored, yawing effectiveness decreased as the vanes were moved downstream. Thrust penalties and yawing effectiveness both decreased rapidly as the vanes were moved outboard (laterally). Vane length and height changes increased yawing effectiveness and thrust ratio losses, while the use of vane camber and double-hinged vanes increased resultant yaw angles by 50 to 100 percent.

  9. A Method for Integrating Thrust-Vectoring and Actuated Forebody Strakes with Conventional Aerodynamic Controls on a High-Performance Fighter Airplane

    NASA Technical Reports Server (NTRS)

    Lallman, Frederick J.; Davidson, John B.; Murphy, Patrick C.

    1998-01-01

    A method, called pseudo controls, of integrating several airplane controls to achieve cooperative operation is presented. The method eliminates conflicting control motions, minimizes the number of feedback control gains, and reduces the complication of feedback gain schedules. The method is applied to the lateral/directional controls of a modified high-performance airplane. The airplane has a conventional set of aerodynamic controls, an experimental set of thrust-vectoring controls, and an experimental set of actuated forebody strakes. The experimental controls give the airplane additional control power for enhanced stability and maneuvering capabilities while flying over an expanded envelope, especially at high angles of attack. The flight controls are scheduled to generate independent body-axis control moments. These control moments are coordinated to produce stability-axis angular accelerations. Inertial coupling moments are compensated. Thrust-vectoring controls are engaged according to their effectiveness relative to that of the aerodynamic controls. Vane-relief logic removes steady and slowly varying commands from the thrust-vectoring controls to alleviate heating of the thrust turning devices. The actuated forebody strakes are engaged at high angles of attack. This report presents the forward-loop elements of a flight control system that positions the flight controls according to the desired stability-axis accelerations. This report does not include the generation of the required angular acceleration commands by means of pilot controls or the feedback of sensed airplane motions.

  10. Analysis of latency performance of bluetooth low energy (BLE) networks.

    PubMed

    Cho, Keuchul; Park, Woojin; Hong, Moonki; Park, Gisu; Cho, Wooseong; Seo, Jihoon; Han, Kijun

    2014-12-23

    Bluetooth Low Energy (BLE) is a short-range wireless communication technology aiming at low-cost and low-power communication. The performance evaluation of classical Bluetooth device discovery has been intensively studied using analytical modeling and simulative methods, but these techniques are not applicable to BLE, since BLE has a fundamental change in the design of the discovery mechanism, including the usage of three advertising channels. Recently, several works have analyzed the topic of BLE device discovery, but these studies are still far from thorough. It is thus necessary to develop a new, accurate model for the BLE discovery process. In particular, the wide range settings of the parameters introduce lots of potential for BLE devices to customize their discovery performance. This motivates our study of modeling the BLE discovery process and performing intensive simulation. This paper is focused on building an analytical model to investigate the discovery probability, as well as the expected discovery latency, which are then validated via extensive experiments. Our analysis considers both continuous and discontinuous scanning modes. We analyze the sensitivity of these performance metrics to parameter settings to quantitatively examine to what extent parameters influence the performance metric of the discovery processes.
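    A rough Monte Carlo sketch of the discovery process described above, under simplifying assumptions (continuous scanning, a fixed packet length, an assumed 0.5 ms spacing between the three per-event channel packets, a scanner that rotates channels each interval), might look like this. The parameter names mirror the BLE advInterval/scanInterval/scanWindow settings, but the model is illustrative, not the paper's:

```python
import random

ADV_LEN = 376e-6   # ~376 us advertising packet duration (assumed payload size)

def heard(t, ch, scan_interval, scan_window):
    """Scanner listens on channel (k % 3) for the first scan_window seconds of
    scan interval k; the packet must fall entirely inside the open window."""
    k = int(t // scan_interval)
    return k % 3 == ch and (t % scan_interval) + ADV_LEN <= scan_window

def discovery_latency(adv_interval=0.1, scan_interval=0.1, scan_window=0.05,
                      trials=2000, seed=7):
    """Monte Carlo estimate of mean discovery latency. Each advertising event
    sends one packet per channel 37-39; advDelay is the 0-10 ms random slack
    the BLE spec adds between consecutive events."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        phase = rng.uniform(0, 3 * scan_interval)      # device clock offset
        start = phase + rng.uniform(0, adv_interval)   # first advertising event
        event = start
        while True:
            hit = any(heard(event + ch * 5e-4, ch, scan_interval, scan_window)
                      for ch in range(3))
            if hit or event - start > 50:              # safety cap
                total += event - start
                break
            event += adv_interval + rng.uniform(0, 0.01)   # advDelay
    return total / trials

print("mean discovery latency:", round(discovery_latency(), 3), "s")
```

    Sweeping `adv_interval` or `scan_window` in this sketch reproduces the kind of parameter-sensitivity study the abstract motivates.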

  11. Social learning strategies modify the effect of network structure on group performance

    PubMed Central

    Barkoczi, Daniel; Galesic, Mirta

    2016-01-01

    The structure of communication networks is an important determinant of the capacity of teams, organizations and societies to solve policy, business and science problems. Yet, previous studies reached contradictory results about the relationship between network structure and performance, finding support for the superiority of both well-connected efficient and poorly connected inefficient network structures. Here we argue that understanding how communication networks affect group performance requires taking into consideration the social learning strategies of individual team members. We show that efficient networks outperform inefficient networks when individuals rely on conformity by copying the most frequent solution among their contacts. However, inefficient networks are superior when individuals follow the best member by copying the group member with the highest payoff. In addition, groups relying on conformity based on a small sample of others excel at complex tasks, while groups following the best member achieve greatest performance for simple tasks. Our findings reconcile contradictory results in the literature and have broad implications for the study of social learning across disciplines. PMID:27713417
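    The two social learning strategies can be sketched in a toy simulation. Everything below (the payoff function, the 10% exploration rate, the network sizes) is an illustrative assumption rather than the paper's actual task environment:

```python
import random
from collections import Counter

def step(solutions, payoff, neighbors, strategy, rng):
    """One round of social learning: each agent inspects its network contacts
    and adopts a contact's solution only if it pays more than its own;
    otherwise it occasionally explores a random solution."""
    new = list(solutions)
    for i, nbrs in enumerate(neighbors):
        views = [solutions[j] for j in nbrs]
        if strategy == "best-member":
            cand = max(views, key=payoff)                 # copy highest payoff
        else:                                             # "conformity"
            cand = Counter(views).most_common(1)[0][0]    # copy most frequent
        if payoff(cand) > payoff(solutions[i]):
            new[i] = cand
        elif rng.random() < 0.1:                          # individual learning
            new[i] = rng.randrange(100)
    return new

payoff = lambda s: ((s * 37) % 100) / 100                 # toy rugged payoff
n = 30
efficient   = [[j for j in range(n) if j != i] for i in range(n)]  # fully connected
inefficient = [[(i - 1) % n, (i + 1) % n] for i in range(n)]       # ring

for strategy in ("best-member", "conformity"):
    for name, net in (("efficient", efficient), ("inefficient", inefficient)):
        rng = random.Random(3)
        sols = [rng.randrange(100) for _ in range(n)]
        for _ in range(50):
            sols = step(sols, payoff, net, strategy, rng)
        print(strategy, name, "mean payoff:", round(sum(map(payoff, sols)) / n, 2))
```

    The interaction the paper reports (which network wins depends on the strategy) only emerges on harder landscapes than this toy payoff, but the mechanics of the two strategies are as shown.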

  12. High-performance parallel interface to synchronous optical network gateway

    DOEpatents

    St. John, Wallace B.; DuBois, David H.

    1996-01-01

    A system of sending and receiving gateways interconnects high speed data interfaces, e.g., HIPPI interfaces, through fiber optic links, e.g., a SONET network. An electronic stripe distributor distributes bytes of data from a first interface at the sending gateway onto parallel fiber optics of the fiber optic link to form transmitted data. An electronic stripe collector receives the transmitted data on the parallel fiber optics and reforms the data into a format effective for input to a second interface at the receiving gateway. Preferably, an error correcting syndrome is constructed at the sending gateway and sent with a data frame so that transmission errors can be detected and corrected on a real-time basis. Since the high speed data interface operates faster than any of the fiber optic links, the transmission rate must be adapted to match the available number of fiber optic links, so the sending and receiving gateways monitor the availability of fiber links and adjust the data throughput accordingly. In another aspect, the receiving gateway must have sufficient available buffer capacity to accept an incoming data frame. A credit-based flow control system provides for continuously updating the sending gateway on the available buffer capacity at the receiving gateway.

  13. High-performance parallel interface to synchronous optical network gateway

    DOEpatents

    St. John, W.B.; DuBois, D.H.

    1996-12-03

    Disclosed is a system of sending and receiving gateways that interconnects high speed data interfaces, e.g., HIPPI interfaces, through fiber optic links, e.g., a SONET network. An electronic stripe distributor distributes bytes of data from a first interface at the sending gateway onto parallel fiber optics of the fiber optic link to form transmitted data. An electronic stripe collector receives the transmitted data on the parallel fiber optics and reforms the data into a format effective for input to a second interface at the receiving gateway. Preferably, an error correcting syndrome is constructed at the sending gateway and sent with a data frame so that transmission errors can be detected and corrected on a real-time basis. Since the high speed data interface operates faster than any of the fiber optic links, the transmission rate must be adapted to match the available number of fiber optic links, so the sending and receiving gateways monitor the availability of fiber links and adjust the data throughput accordingly. In another aspect, the receiving gateway must have sufficient available buffer capacity to accept an incoming data frame. A credit-based flow control system provides for continuously updating the sending gateway on the available buffer capacity at the receiving gateway. 7 figs.
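    Credit-based flow control of the kind both patent records describe can be sketched in a few lines: the sender may transmit only while it holds credits, and the receiver returns credits as it drains its buffer. This is a minimal illustration of the general technique, not the patented HIPPI/SONET implementation:

```python
from collections import deque

class Receiver:
    def __init__(self, buffer_slots):
        self.free = buffer_slots           # available buffer capacity
        self.queue = deque()

    def accept(self, frame):
        assert self.free > 0, "sender violated its credit"
        self.free -= 1
        self.queue.append(frame)

    def drain(self, n):
        """Consume up to n buffered frames; freed slots become new credits."""
        done = min(n, len(self.queue))
        for _ in range(done):
            self.queue.popleft()
        self.free += done
        return done                        # credits to advertise to the sender

class Sender:
    def __init__(self, credits):
        self.credits = credits             # frames we may send without overflow

    def send(self, frames, rx):
        sent = 0
        for f in frames:
            if self.credits == 0:
                break                      # must wait for a credit update
            self.credits -= 1
            rx.accept(f)
            sent += 1
        return sent

rx, tx = Receiver(buffer_slots=4), Sender(credits=4)
print(tx.send(range(10), rx))   # prints 4: credits exhausted
tx.credits += rx.drain(2)       # receiver frees 2 slots -> 2 new credits
print(tx.send(range(10), rx))   # prints 2
```

    Because credits are granted only for free buffer slots, the receiver can never be overrun regardless of the sender's speed.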

  14. System for Automated Calibration of Vector Modulators

    NASA Technical Reports Server (NTRS)

    Lux, James; Boas, Amy; Li, Samuel

    2009-01-01

    Vector modulators are used to impose baseband modulation on RF signals, but non-ideal behavior limits the overall performance. The non-ideal behavior of the vector modulator is compensated using data collected with the use of an automated test system driven by a LabVIEW program that systematically applies thousands of control-signal values to the device under test and collects RF measurement data. The technology innovation automates several steps in the process. First, an automated test system, using computer-controlled digital-to-analog converters (DACs) and a computer-controlled vector network analyzer (VNA), can systematically apply different I and Q signals (which represent the complex number by which the RF signal is multiplied) to the vector modulator under test (VMUT), while measuring the RF performance, specifically gain and phase. The automated test system uses the LabVIEW software to control the test equipment, collect the data, and write it to a file. The input to the LabVIEW program is either user input for systematic variation, or is provided in a file containing specific test values that should be fed to the VMUT. The output file contains both the control signals and the measured data. The second step is to post-process the file to determine the correction functions as needed. The result of the entire process is a tabular representation, which allows translation of a desired I/Q value to the required analog control signals to produce a particular RF behavior. In some applications, corrected performance is needed only for a limited range. If the vector modulator is being used as a phase shifter, there is only a need to correct I and Q values that represent points on a circle, not the entire plane. This innovation has been used to calibrate 2-GHz MMIC (monolithic microwave integrated circuit) vector modulators in the High EIRP Cluster Array project (EIRP is high effective isotropic radiated power). These calibrations were then used to create
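    The tabular-correction idea can be sketched as follows. The `measure` function here is a stand-in for the VNA measurement (its gain-imbalance and offset numbers are invented for illustration); the table is inverted by a nearest-neighbour lookup:

```python
import cmath
import math

def measure(i_ctl, q_ctl):
    """Stand-in for a VNA measurement of the modulator's complex gain.
    A real setup would drive the DACs and read the VNA; here we fake a
    mildly non-ideal response (gain imbalance plus carrier leakage)."""
    return complex(0.9 * i_ctl + 0.02, 1.1 * q_ctl - 0.03)

# Step 1: sweep the control values and record the measured response.
grid = [k / 10 for k in range(-10, 11)]
table = [((i, q), measure(i, q)) for i in grid for q in grid]

# Step 2: to realize a desired complex multiplier, look up the control pair
# whose measured response is closest (nearest-neighbour correction).
def controls_for(target):
    return min(table, key=lambda rec: abs(rec[1] - target))[0]

want = cmath.rect(0.5, math.radians(45))   # gain 0.5 at 45 degrees
ctl = controls_for(want)
print("controls:", ctl, "residual error:", abs(measure(*ctl) - want))
```

    For the phase-shifter case mentioned in the abstract, the sweep would cover only targets on a circle of fixed magnitude instead of the whole I/Q plane.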

  15. Implementation and Performance Evaluation Using the Fuzzy Network Balanced Scorecard

    ERIC Educational Resources Information Center

    Tseng, Ming-Lang

    2010-01-01

    The balanced scorecard (BSC) is a multi-criteria evaluation concept that highlights the importance of performance measurement. However, although there is an abundance of literature on the BSC framework, there is a scarcity of literature regarding how the framework with dependence and interactive relationships should be properly implemented in…

  16. Dynamic Social Networks in High Performance Football Coaching

    ERIC Educational Resources Information Center

    Occhino, Joseph; Mallett, Cliff; Rynne, Steven

    2013-01-01

    Background: Sports coaching is largely a social activity where engagement with athletes and support staff can enhance the experiences for all involved. This paper examines how high performance football coaches develop knowledge through their interactions with others within a social learning theory framework. Purpose: The key purpose of this study…

  17. Vector quantization

    NASA Technical Reports Server (NTRS)

    Gray, Robert M.

    1989-01-01

    During the past ten years Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments made on the state-of-the-art and current research efforts.

  18. INCITE: Edge-based Traffic Processing and Inference for High-Performance Networks

    SciTech Connect

    Baraniuk, Richard G.; Feng, Wu-chun; Cottrell, Les; Knightly, Edward; Nowak, Robert; Riedi, Rolf

    2005-06-20

    The INCITE (InterNet Control and Inference Tools at the Edge) Project developed on-line tools to characterize and map host and network performance as a function of space, time, application, protocol, and service. In addition to their utility for trouble-shooting problems, these tools will enable a new breed of applications and operating systems that are network aware and resource aware. Launching from the foundation provided by our recent leading-edge research on network measurement, multifractal signal analysis, multiscale random fields, and quality of service, our effort consisted of three closely integrated research thrusts that directly attack several key networking challenges of DOE's SciDAC program: Thrust 1, multiscale traffic analysis and modeling techniques; Thrust 2, inference and control algorithms for network paths, links, and routers; and Thrust 3, data collection tools.

  19. Theoretical Prediction of Hydrogen Separation Performance of Two-Dimensional Carbon Network of Fused Pentagon.

    PubMed

    Zhu, Lei; Xue, Qingzhong; Li, Xiaofang; Jin, Yakang; Zheng, Haixia; Wu, Tiantian; Guo, Qikai

    2015-12-30

    Using the van-der-Waals-corrected density functional theory (DFT) and molecular dynamics (MD) simulations, we theoretically predict the H2 separation performance of a new two-dimensional sp(2) carbon allotrope, the fused pentagon network. The DFT calculations demonstrate that the fused pentagon network with proper pore sizes presents a surmountable energy barrier (0.18 eV) for an H2 molecule passing through. Furthermore, the fused pentagon network shows an exceptionally high selectivity for H2 over other gases (CO, CH4, CO2, N2, etc.) at 300 and 450 K. In addition, using MD simulations we demonstrate that the fused pentagon network exhibits a H2 permeance of 4 × 10(7) GPU at 450 K, which is much higher than the value (20 GPU) in current industrial applications. With high selectivity and excellent permeability, the fused pentagon network should be an excellent candidate for H2 separation. PMID:26632974

  20. Results of computer network experiment via the Japanese communication satellite CS - Performance evaluation of communication protocols

    NASA Astrophysics Data System (ADS)

    Ito, A.; Kakinuma, Y.; Uchida, K.; Matsumoto, K.; Takahashi, H.

    1984-03-01

    Computer network experiments have been performed using the Japanese communication satellite CS. The network is of a centralized (star) type, consisting of one center station and many user stations. The protocols are designed to take into consideration the long round trip delay of a satellite channel. This paper treats the communication protocol aspects of the experiments. The performance of the burst level and link protocols (which correspond roughly to the data link layer of the OSI 7-layer model) is evaluated. System performance in terms of throughput, delay, and link level overhead is measured using statistically generated traffic.

  1. Performance Analysis of Receive Diversity in Wireless Sensor Networks over GBSBE Models

    PubMed Central

    Goel, Shivali; Abawajy, Jemal H.; Kim, Tai-hoon

    2010-01-01

    Wireless sensor networks have attracted a lot of attention recently. In this paper, we develop a channel model based on the elliptical model for multipath components involving randomly placed scatterers in the scattering region with sensors deployed on a field. We verify that in a sensor network, the use of receive diversity techniques improves the performance of the system. Extensive performance analysis of the system is carried out for both single and multiple antennas with the applied receive diversity techniques. Performance analyses based on variations in receiver height, maximum multipath delay, and transmit power have been performed considering different numbers of antenna elements present in the receiver array. Our results show that increasing the number of antenna elements for a wireless sensor network does indeed improve the BER rates that can be obtained. PMID:22163510
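    Receive diversity of the kind evaluated above can be illustrated with a small Monte Carlo experiment: BPSK over independent flat Rayleigh fading branches, combined with maximal-ratio combining (MRC). This is a generic textbook setup for illustration, not the paper's GBSBE elliptical channel model:

```python
import math
import random

def ber_mrc(n_ant, snr_db, bits=20000, seed=42):
    """Monte Carlo BER of BPSK on flat Rayleigh fading with maximal-ratio
    combining across n_ant independent receive antennas."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    errors = 0
    for _ in range(bits):
        s = rng.choice((-1.0, 1.0))
        z = 0.0
        for _ in range(n_ant):
            # unit-power Rayleigh branch gain and complex Gaussian noise
            h = complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))
            n = complex(rng.gauss(0, math.sqrt(0.5 / snr)),
                        rng.gauss(0, math.sqrt(0.5 / snr)))
            r = h * s + n
            z += (h.conjugate() * r).real   # MRC: weight by conjugate channel
        errors += (z < 0) != (s < 0)
    return errors / bits

for n_ant in (1, 2, 4):
    print(n_ant, "antennas: BER =", ber_mrc(n_ant, snr_db=10))
```

    The BER drops sharply as antennas are added, which is the diversity gain the abstract reports.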

  2. Developing Statistics and Performance Measures for the Networked Environment: Final Report.

    ERIC Educational Resources Information Center

    Bertot, John Carlo; McClure, Charles R.; Ryan, Joe

    This report summarizes the findings, issues, and lessons learned from the Developing National Public Library and Statewide Network Statistics and Performance Measures study conducted between January 1999 and August 2000. The overall goal of the study was to develop a core set of national statistics and performance measures that librarians,…

  3. Learning in feed-forward neural networks by improving the performance

    NASA Astrophysics Data System (ADS)

    Gordon, Mirta B.; Pereto, Pierre; Rodriguez-Girones, Miguel

    1992-06-01

    Statistical mechanics is used to derive a new learning rule for a feed-forward neural network with one hidden layer. Generalization to multilayer neural networks is straightforward, and proceeds in the same way as backpropagation. We consider a neural network as a physical system, that can be in different states. There are as many possible states as patterns in the learning set. The energy of each level is proportional to the stability of the corresponding pattern. The statistical mechanics free energy of the system, which we call performance, is a maximum for the synaptic strengths that stabilize all the patterns. We propose a learning algorithm that looks for synaptic strengths that maximize the network's performance. Patterns with lower stabilities are more effective in driving the learning process because they have a higher statistical weight. Taking different temperatures for different layers improves the results.
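    A minimal sketch of this style of learning rule for a single perceptron, assuming the energy of pattern μ is proportional to minus its stability and update weights are the resulting Boltzmann factors. The step below is a simplified weighted Hebbian update (it ignores the gradient of the weight-norm term), so it illustrates the idea of letting low-stability patterns drive learning rather than reproducing the paper's exact rule:

```python
import math
import random

def train_perceptron(patterns, labels, beta=2.0, lr=0.1, epochs=300, seed=0):
    """Learning by 'improving the performance': patterns with low stability
    get exponentially larger statistical weight in each Hebbian update."""
    rng = random.Random(seed)
    n = len(patterns[0])
    w = [rng.gauss(0, 1) for _ in range(n)]
    for _ in range(epochs):
        norm = math.sqrt(sum(x * x for x in w))
        stab = [y * sum(wi * xi for wi, xi in zip(w, x)) / norm
                for x, y in zip(patterns, labels)]
        m = min(stab)
        weight = [math.exp(-beta * (s - m)) for s in stab]   # Boltzmann weights
        z = sum(weight)
        for (x, y), p in zip(zip(patterns, labels), weight):
            for i in range(n):
                w[i] += lr * (p / z) * y * x[i]              # weighted Hebb step
    return w

# toy separable task: labels given by a random teacher vector
rng = random.Random(1)
pats = [[rng.choice((-1, 1)) for _ in range(20)] for _ in range(10)]
teacher = [rng.choice((-1, 1)) for _ in range(20)]
labs = [1 if sum(t * x for t, x in zip(teacher, p)) >= 0 else -1 for p in pats]
w = train_perceptron(pats, labs)
acc = sum((sum(wi * xi for wi, xi in zip(w, p)) >= 0) == (y == 1)
          for p, y in zip(pats, labs)) / len(pats)
print("training accuracy:", acc)
```

    Raising `beta` concentrates the updates on the single least-stable pattern, playing the role of the temperature choice discussed in the abstract.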

  4. Bursting dynamics remarkably improve the performance of neural networks on liquid computing.

    PubMed

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2016-10-01

    Burst firings are functionally important behaviors displayed by neural circuits and play a primary role in the reliable transmission of electrical signals for neuronal communication. However, with respect to the computational capability of neural networks, most relevant studies are based on the spiking dynamics of individual neurons, while burst firing is seldom considered. In this paper, we carry out a comprehensive study to compare the performance of spiking and bursting dynamics on the capability of liquid computing, which is an effective approach for intelligent computation of neural networks. The results show that neural networks with bursting dynamics have much better computational performance than those with spiking dynamics, especially for complex computational tasks. Further analysis demonstrates that the fast firing pattern of bursting dynamics can markedly enhance the efficiency of synaptic integration from pre-neurons both temporally and spatially. This indicates that bursting dynamics can significantly enhance the complexity of network activity, implying high efficiency in information processing. PMID:27668020

  6. Human brain functional network changes associated with enhanced and impaired attentional task performance.

    PubMed

    Giessing, Carsten; Thiel, Christiane M; Alexander-Bloch, Aaron F; Patel, Ameera X; Bullmore, Edward T

    2013-04-01

    How is the cognitive performance of the human brain related to its topological and spatial organization as a complex network embedded in anatomical space? To address this question, we used nicotine replacement and duration of attentionally demanding task performance (time-on-task), as experimental factors expected, respectively, to enhance and impair cognitive function. We measured resting-state fMRI data, performance and brain activation on a go/no-go task demanding sustained attention, and subjective fatigue in n = 18 healthy, briefly abstinent, cigarette smokers scanned repeatedly in a placebo-controlled, crossover design. We tested the main effects of drug (placebo vs Nicorette gum) and time-on-task on behavioral performance and brain functional network metrics measured in binary graphs of 477 regional nodes (efficiency, a measure of integrative topology; clustering, a measure of segregated topology; and the Euclidean physical distance between connected nodes, a proxy marker of wiring cost). Nicotine enhanced attentional task performance behaviorally and increased efficiency, decreased clustering, and increased connection distance of brain networks. Greater behavioral benefits of nicotine were correlated with stronger drug effects on integrative and distributed network configuration and with greater frequency of cigarette smoking. Greater time-on-task had opposite effects: it impaired attentional accuracy, decreased efficiency, increased clustering, and decreased connection distance of networks. These results are consistent with hypothetical predictions that superior cognitive performance should be supported by more efficient, integrated (high capacity) brain network topology at greater connection distance (high cost). They also demonstrate that brain network analysis can provide novel and theoretically principled pharmacodynamic biomarkers of pro-cognitive drug effects in humans. PMID:23554472
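The two topological metrics used here, global efficiency and clustering of a binary graph, are straightforward to compute directly. A small numpy sketch on a toy ring lattice (synthetic, not brain data) shows how a single long-range connection raises efficiency, the direction of the reported nicotine effect:

```python
import numpy as np
from collections import deque

def shortest_paths(adj):
    # All-pairs BFS distances for a binary undirected graph.
    n = len(adj)
    D = np.full((n, n), np.inf)
    for s in range(n):
        D[s, s] = 0.0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(adj[u]):
                if np.isinf(D[s, v]):
                    D[s, v] = D[s, u] + 1.0
                    q.append(v)
    return D

def global_efficiency(adj):
    # Mean inverse shortest-path length over all node pairs.
    D = shortest_paths(adj)
    mask = ~np.eye(len(adj), dtype=bool)
    return float((1.0 / D[mask]).mean())

def mean_clustering(adj):
    # Average fraction of closed triangles around each node.
    cc = []
    for i in range(len(adj)):
        nb = np.flatnonzero(adj[i])
        k = len(nb)
        if k < 2:
            cc.append(0.0)
            continue
        links = adj[np.ix_(nb, nb)].sum() / 2.0
        cc.append(2.0 * links / (k * (k - 1)))
    return float(np.mean(cc))

# Ring lattice (each node linked to its 2 nearest neighbours per side):
# highly clustered but topologically inefficient.
n = 20
ring = np.zeros((n, n), dtype=int)
for i in range(n):
    for d in (1, 2):
        ring[i, (i + d) % n] = ring[(i + d) % n, i] = 1

# One long-range shortcut raises global efficiency.
shortcut = ring.copy()
shortcut[0, n // 2] = shortcut[n // 2, 0] = 1

e_ring, e_short = global_efficiency(ring), global_efficiency(shortcut)
```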

  7. HPNAIDM: The High-Performance Network Anomaly/Intrusion Detection and Mitigation System

    SciTech Connect

    Chen, Yan

    2013-12-05

    Identifying traffic anomalies and attacks rapidly and accurately is critical for large network operators. With the rapid growth of network bandwidth, such as the next-generation DOE UltraScience Network, and the fast emergence of new attacks/viruses/worms, existing network intrusion detection systems (IDS) are insufficient because they: • Are mostly host-based and not scalable to high-performance networks; • Are mostly signature-based and unable to adaptively recognize flow-level unknown attacks; • Cannot differentiate malicious events from unintentional anomalies. To address these challenges, we proposed and developed a new paradigm called the high-performance network anomaly/intrusion detection and mitigation (HPNAIDM) system. The new paradigm is significantly different from existing IDSes, with the following features (research thrusts): • Online traffic recording and analysis on high-speed networks; • Online adaptive flow-level anomaly/intrusion detection and mitigation; • An integrated approach for false-positive reduction. Our research prototype and evaluation demonstrate that the HPNAIDM system is highly effective and economically feasible. Beyond satisfying the pre-set goals, we exceeded them significantly (see more details in the next section). Overall, our project harvested 23 publications (2 book chapters, 6 journal papers and 15 peer-reviewed conference/workshop papers). In addition, we built a website for technique dissemination, which hosts two system prototype releases for the research community. We also filed a patent application and developed strong international and domestic collaborations spanning both academia and industry.

  8. Kalman filters improve LSTM network performance in problems unsolvable by traditional recurrent nets.

    PubMed

    Pérez-Ortiz, Juan Antonio; Gers, Felix A; Eck, Douglas; Schmidhuber, Jürgen

    2003-03-01

    The long short-term memory (LSTM) network trained by gradient descent solves difficult problems which traditional recurrent neural networks in general cannot. We have recently observed that the decoupled extended Kalman filter training algorithm allows for even better performance, significantly reducing the number of training steps when compared to the original gradient descent training algorithm. In this paper we present a set of experiments which are unsolvable by classical recurrent networks but which are solved elegantly, robustly, and quickly by LSTM combined with Kalman filters.
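The Kalman-filter idea treats the network weights as the state of a filter and the training targets as measurements. A toy numpy illustration on a single linear neuron (a deliberate simplification of the paper's LSTM setting; all constants are illustrative) shows why it converges in far fewer steps than gradient descent:

```python
import numpy as np

rng = np.random.default_rng(7)

# Estimate the weights of a linear neuron from a one-pass data stream,
# by EKF over the weights versus plain stochastic gradient descent.
w_true = np.array([1.5, -2.0, 0.5])
X = rng.normal(size=(100, 3))
y = X @ w_true + 0.01 * rng.normal(size=100)

def ekf_train(X, y, q=1e-6, r=0.01):
    n = X.shape[1]
    w = np.zeros(n)
    P = 10.0 * np.eye(n)              # weight-estimate covariance
    for x, target in zip(X, y):
        H = x                         # output Jacobian (linear model)
        S = H @ P @ H + r             # innovation variance
        K = P @ H / S                 # Kalman gain
        w = w + K * (target - x @ w)
        P = P - np.outer(K, H @ P) + q * np.eye(n)
    return w

def sgd_train(X, y, eta=0.01):
    w = np.zeros(X.shape[1])
    for x, target in zip(X, y):
        w += eta * (target - x @ w) * x
    return w

err_ekf = np.linalg.norm(ekf_train(X, y) - w_true)
err_sgd = np.linalg.norm(sgd_train(X, y) - w_true)
```

For a linear model this EKF reduces to recursive least squares; the "decoupled" variant of the paper applies the same update to the nonlinear LSTM with a block-diagonal covariance.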

  9. Molten carbonate fuel cell networks: Principles, analysis, and performance. Technical note

    SciTech Connect

    Wimer, J.G.; Williams, M.C.

    1993-01-01

    The chemical reactions in an internally reforming molten carbonate fuel cell (IRMCFC) are described and combined into the overall IRMCFC reaction. Thermodynamic and electrochemical principles are discussed, and structure and operation of fuel cell stacks are explained. In networking, multiple fuel cell stacks are arranged so that reactant streams are fed and recycled through stacks in series, for higher reactant utilization and increased system efficiency. Advantages and performance of networked and conventional systems are compared, using ASPEN simulations. The concept of networking can be applied to any electrochemical membrane, such as that developed for hot gas cleanup in future power plants. 2 tabs, 16 figs, 9 refs.
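The efficiency argument for networking can be seen with a one-line fuel-utilization model (an illustrative assumption, not the ASPEN simulation): if each stack consumes a fixed fraction u of the fuel reaching it, stacks fed in series compound their utilizations.

```python
def series_utilization(stage_utilizations):
    # Fraction of fuel consumed after passing through stacks in series,
    # assuming each stack consumes a fixed fraction of what reaches it.
    remaining = 1.0
    for u in stage_utilizations:
        remaining *= (1.0 - u)
    return 1.0 - remaining

single = series_utilization([0.75])           # one conventional stack
networked = series_utilization([0.75, 0.75])  # two stacks fed in series
```

Two 75%-utilization stacks in series reach 1 - 0.25 x 0.25 = 93.75% overall utilization under this model, illustrating why recycling reactant streams through networked stacks raises system efficiency.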

  10. Laser ranging network performance and routine orbit determination at D-PAF

    NASA Technical Reports Server (NTRS)

    Massmann, Franz-Heinrich; Reigber, C.; Li, H.; Koenig, Rolf; Raimondo, J. C.; Rajasenan, C.; Vei, M.

    1993-01-01

    ERS-1 is now about 8 months in orbit and has been tracked by the global laser network from the very beginning of the mission. The German processing and archiving facility for ERS-1 (D-PAF) is coordinating and supporting the network and performing the different routine orbit determination tasks. This paper presents details about the global network status, the communication to D-PAF and the tracking data and orbit processing system at D-PAF. The quality of the preliminary and precise orbits are shown and some problem areas are identified.

  11. Software sensors for biomass concentration in a SSC process using artificial neural networks and support vector machine.

    PubMed

    Acuña, Gonzalo; Ramirez, Cristian; Curilem, Millaray

    2014-01-01

    The lack of sensors for some relevant state variables in fermentation processes can be addressed by developing appropriate software sensors. In this work, NARX-ANN, NARMAX-ANN, NARX-SVM and NARMAX-SVM models are compared when acting as software sensors of biomass concentration for a solid substrate cultivation (SSC) process. Results show that NARMAX-SVM outperforms the other models, with an SMAPE index under 9 for 20% amplitude noise. In addition, NARMAX models perform better than NARX models under the same noise conditions because of their better predictive capabilities, as they include prediction errors as inputs. In the case of perturbation of the initial conditions of the autoregressive variable, NARX models exhibited better convergence capabilities. This work also confirms that a difficult-to-measure variable, like biomass concentration, can be estimated on-line from easy-to-measure variables like CO₂ and O₂ using an adequate software sensor based on computational intelligence techniques.
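A NARX software sensor regresses the current value of the hard-to-measure variable on lagged values of itself and of the easy-to-measure inputs. A numpy sketch with ordinary least squares standing in for the SVM/ANN regressor (the toy process below is assumed, not the paper's SSC data):

```python
import numpy as np

def narx_features(u, y, nu=2, ny=2):
    # NARX regressor rows: [y[t-1..t-ny], u[t-1..t-nu]] -> target y[t].
    lag = max(nu, ny)
    rows, targets = [], []
    for t in range(lag, len(y)):
        rows.append(np.concatenate([y[t - ny:t][::-1], u[t - nu:t][::-1]]))
        targets.append(y[t])
    return np.array(rows), np.array(targets)

# Toy "process": a biomass-like state driven by a measurable input
# (playing the role of the CO2/O2 measurements).
rng = np.random.default_rng(1)
u = rng.uniform(size=300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.8 * y[t - 1] + 0.3 * u[t - 1] + 0.01 * rng.normal()

Phi, target = narx_features(u, y)
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
rmse = float(np.sqrt(np.mean((Phi @ theta - target) ** 2)))
```

A NARMAX variant would additionally feed past prediction errors back into the feature rows, which is what the abstract credits for the better noise robustness.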

  12. A Holistic Approach to ZigBee Performance Enhancement for Home Automation Networks

    PubMed Central

    Betzler, August; Gomez, Carles; Demirkol, Ilker; Paradells, Josep

    2014-01-01

    Wireless home automation networks are gaining importance for smart homes. In this ambit, ZigBee networks play an important role. The ZigBee specification defines a default set of protocol stack parameters and mechanisms that is further refined by the ZigBee Home Automation application profile. In a holistic approach, we analyze how the network performance is affected with the tuning of parameters and mechanisms across multiple layers of the ZigBee protocol stack and investigate possible performance gains by implementing and testing alternative settings. The evaluations are carried out in a testbed of 57 TelosB motes. The results show that considerable performance improvements can be achieved by using alternative protocol stack configurations. From these results, we derive two improved protocol stack configurations for ZigBee wireless home automation networks that are validated in various network scenarios. In our experiments, these improved configurations yield a relative packet delivery ratio increase of up to 33.6%, a delay decrease of up to 66.6% and an improvement of the energy efficiency for battery powered devices of up to 48.7%, obtainable without incurring any overhead to the network. PMID:25196004

  14. Experimental validation of optical layer performance monitoring using an all-optical network testbed

    NASA Astrophysics Data System (ADS)

    Vukovic, Alex; Savoie, Michel J.; Hua, Heng

    2004-11-01

    Communication transmission systems continue to evolve towards higher data rates, increased wavelength densities, longer transmission distances and more intelligence. Further development of dense wavelength division multiplexing (DWDM) and all-optical networks (AONs) will demand ever-tighter monitoring to assure a specified quality of service (QoS). Traditional monitoring methods have proven insufficient. The higher degree of self-control, intelligence and optimization required of functions within next-generation networks demands that new monitoring schemes be developed and deployed. The prospects and challenges of performance monitoring, along with its techniques, requirements and drivers, are discussed. Optical layer monitoring is a key enabler of self-control in next-generation optical networks. Aside from providing real-time feedback and safeguarding neighbouring channels, optical performance monitoring ensures the ability to build and control complex network topologies while maintaining an efficiently high QoS. Within an all-optical network testbed environment, key performance monitoring parameters are identified, assessed through real-time proof-of-concept, and proposed for network applications for the safeguarding of neighbouring channels in WDM systems.

  15. Performance of wavelet analysis and neural networks for pathological voices identification

    NASA Astrophysics Data System (ADS)

    Salhi, Lotfi; Talbi, Mourad; Abid, Sabeur; Cherif, Adnane

    2011-09-01

    Within the medical environment, diverse techniques exist to assess the state of a patient's voice. The inspection technique is inconvenient for a number of reasons, such as its high cost, the duration of the inspection and, above all, the fact that it is an invasive technique. This study focuses on a robust, rapid and accurate system for automatic identification of pathological voices. The system employs a non-invasive, inexpensive and fully automated method based on a hybrid approach: wavelet transform analysis and a neural network classifier. First, we present the results obtained in our previous study using classic feature parameters; these results allow visual identification of pathological voices. Second, quantified parameters derived from the wavelet analysis are proposed to characterise the speech sample. In addition, a system of multilayer neural networks (MNNs) has been developed to carry out the automatic detection of pathological voices. The developed method was evaluated using a voice database composed of recorded voice samples (continuous speech) from normophonic or dysphonic speakers. The dysphonic speakers were patients of the National Hospital 'RABTA' of Tunis, Tunisia and a University Hospital in Brussels, Belgium. Experimental results indicate a success rate ranging between 75% and 98.61% for discrimination of normal and pathological voices using the proposed parameters and neural network classifier. We also compared the average classification rate based on the MNN, a Gaussian mixture model and support vector machines.
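The wavelet feature-extraction step can be sketched with a hand-rolled Haar transform: the relative energy of the detail coefficients per decomposition level forms the vector handed to the neural classifier. The signals below are synthetic stand-ins for voice recordings, and the parameters are illustrative:

```python
import numpy as np

def haar_dwt(x):
    # One level of the Haar wavelet transform: approximation and detail.
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def wavelet_energy_features(x, levels=4):
    # Relative detail-coefficient energy per level: a compact feature
    # vector of the kind fed to an MLP classifier.
    energies = []
    a = x
    for _ in range(levels):
        a, d = haar_dwt(a)
        energies.append(float(np.sum(d ** 2)))
    energies = np.array(energies)
    return energies / energies.sum()

rng = np.random.default_rng(6)
t = np.linspace(0.0, 1.0, 1024)
modal = np.sin(2 * np.pi * 120 * t)                 # steady "voicing"
dysphonic = modal + 0.5 * rng.normal(size=t.size)   # noisy, irregular

f_modal = wavelet_energy_features(modal)
f_dys = wavelet_energy_features(dysphonic)
```

The noisy signal spreads energy into the high-frequency detail levels, so the two feature vectors separate cleanly, which is what makes them discriminative inputs for a classifier.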

  16. Design and analysis of a novel chaotic diagonal recurrent neural network

    NASA Astrophysics Data System (ADS)

    Wang, Libiao; Meng, Zhuo; Sun, Yize; Guo, Lei; Zhou, Mingxing

    2015-09-01

    A chaotic neural network model with logistic mapping is proposed to improve the performance of the conventional diagonal recurrent neural network. The network shows rich dynamic behaviors that help it escape local minima and reach the global minimum easily. A simple parameter-modulated chaos controller is then adopted to enhance the convergence speed of the network. Furthermore, an adaptive learning algorithm with a robust adaptive dead-zone vector is designed to improve the generalization performance of the network, and weight convergence for the network with the adaptive dead-zone vectors is proven in the sense of Lyapunov functions. Finally, a numerical simulation is carried out to demonstrate the correctness of the theory.
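The roles of the logistic map and the annealed chaos controller can be illustrated on a toy global-minimization task: the fully chaotic logistic orbit first explores the whole interval, then a shrinking perturbation around the best point mimics the parameter-modulated controller. The objective and constants here are invented for illustration, not taken from the paper:

```python
import numpy as np

def logistic(z, mu=4.0):
    # Fully chaotic logistic map on [0, 1] at mu = 4.
    return mu * z * (1.0 - z)

# Multimodal toy objective on [-1, 1]; its global minimum is near x ~ 0.94.
f = lambda x: np.sin(5.0 * x) + 0.5 * (x - 0.6) ** 2

z = 0.3
x_best, f_best = 0.0, f(0.0)

# Phase 1: the ergodic chaotic orbit samples the whole interval,
# letting the search escape local minima.
for _ in range(500):
    z = logistic(z)
    x = 2.0 * z - 1.0
    if f(x) < f_best:
        x_best, f_best = x, f(x)

# Phase 2: parameter-modulated control -- shrink the chaotic
# perturbation around the best point to refine it.
amp = 0.2
for _ in range(1500):
    z = logistic(z)
    x = float(np.clip(x_best + amp * (2.0 * z - 1.0), -1.0, 1.0))
    if f(x) < f_best:
        x_best, f_best = x, f(x)
    amp *= 0.998
```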

  17. Classification of fault location and the degree of performance degradation of a rolling bearing based on an improved hyper-sphere-structured multi-class support vector machine

    NASA Astrophysics Data System (ADS)

    Wang, Yujing; Kang, Shouqiang; Jiang, Yicheng; Yang, Guangxue; Song, Lixin; Mikulovich, V. I.

    2012-05-01

    Effective classification of a rolling bearing fault location and especially its degree of performance degradation provides an important basis for appropriate fault judgment and processing. Two methods are introduced to extract features of the rolling bearing vibration signal—one combining empirical mode decomposition (EMD) with the autoregressive model, whose model parameters and variances of the remnant can be obtained using the Yule-Walker or Ulrych-Clayton method, and the other combining EMD with singular value decomposition. Feature vector matrices obtained are then regarded as the input of the improved hyper-sphere-structured multi-class support vector machine (HSSMC-SVM) for classification. Thereby, multi-status intelligent diagnosis of normal rolling bearings and faulty rolling bearings at different locations and the degrees of performance degradation of the faulty rolling bearings can be achieved simultaneously. Experimental results show that EMD combined with singular value decomposition and the improved HSSMC-SVM intelligent method requires less time and has a higher recognition rate.
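The singular-value feature idea can be sketched without the EMD step: embed the vibration signal in a trajectory (Hankel) matrix and use its normalized leading singular values as the feature vector that would feed the HSSMC-SVM. The signals and parameters below are synthetic and illustrative:

```python
import numpy as np

def svd_features(signal, n_rows=20, n_keep=5):
    # Embed the 1-D vibration signal in a trajectory (Hankel) matrix and
    # keep its normalized leading singular values as the feature vector.
    H = np.lib.stride_tricks.sliding_window_view(signal, n_rows).T
    s = np.linalg.svd(H, compute_uv=False)
    return s[:n_keep] / s.sum()

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 400)
healthy = np.sin(2 * np.pi * 30 * t) + 0.1 * rng.normal(size=t.size)
faulty = healthy + 0.8 * np.sin(2 * np.pi * 90 * t)  # added fault harmonic

f_healthy = svd_features(healthy)
f_faulty = svd_features(faulty)
```

A fault harmonic raises the effective rank of the trajectory matrix, redistributing the singular-value spectrum, which is why these vectors separate fault classes and degradation degrees.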

  18. On the Improvement of Convergence Performance for Integrated Design of Wind Turbine Blade Using a Vector Dominating Multi-objective Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, L.; Wang, T. G.; Wu, J. H.; Cheng, G. P.

    2016-09-01

    A novel multi-objective optimization algorithm incorporating evolution strategies and vector mechanisms, referred to as VD-MOEA, is proposed and applied to the aerodynamic-structural integrated design of wind turbine blades. In the algorithm, a set of uniformly distributed vectors is constructed to guide the population rapidly toward the Pareto front while maintaining population diversity with high efficiency. Two- and three-objective designs of a 1.5 MW wind turbine blade are then carried out for the optimization objectives of maximum annual energy production, minimum blade mass, and minimum extreme root thrust. The results show that the Pareto optimal solutions can be obtained in a single simulation run and are uniformly distributed in the objective space, maximally maintaining population diversity. In comparison to conventional evolution algorithms, VD-MOEA displays a dramatic improvement in both convergence and diversity preservation when handling complex problems with many variables, objectives and constraints. This provides a reliable, high-performance optimization approach for the aerodynamic-structural integrated design of wind turbine blades.
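At the core of any MOEA sits the dominance test that VD-MOEA's guide vectors steer. A minimal numpy sketch of extracting the non-dominated set from a population, with random points standing in for blade designs:

```python
import numpy as np

def pareto_front(F):
    # Indices of non-dominated rows of F (all objectives minimized).
    keep = []
    for i in range(len(F)):
        dominators = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if not dominators.any():
            keep.append(i)
    return np.array(keep)

# Random 2-objective population standing in for (blade mass, -AEP) designs.
rng = np.random.default_rng(5)
P = rng.uniform(size=(200, 2))
front = pareto_front(P)
```

VD-MOEA's contribution is in how candidates are generated and retained: each uniformly distributed vector pulls part of the population toward its own region of this front, keeping the solutions evenly spread.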

  19. Performance Analysis of TCP Enhancements in Satellite Data Networks

    NASA Technical Reports Server (NTRS)

    Broyles, Ren H.

    1999-01-01

    This research examines two proposed enhancements to the well-known Transport Control Protocol (TCP) in the presence of noisy communication links. The Multiple Pipes protocol is an application-level adaptation of the standard TCP protocol, where several TCP links cooperate to transfer data. The Space Communication Protocol Standard - Transport Protocol (SCPS-TP) modifies TCP to optimize performance in a satellite environment. While SCPS-TP has inherent advantages that allow it to deliver data more rapidly than Multiple Pipes, the protocol, when optimized for operation in a high-error environment, is not compatible with legacy TCP systems, and requires changes to the TCP specification. This investigation determines the level of improvement offered by SCPS-TP's Corruption Mode, which will help determine if migration to the protocol is appropriate in different environments. As the percentage of corrupted packets approaches 5 %, Multiple Pipes can take over five times longer than SCPS-TP to deliver data. At high error rates, SCPS-TP's advantage is primarily caused by Multiple Pipes' use of congestion control algorithms. The lack of congestion control, however, limits the systems in which SCPS-TP can be effectively used.

  20. Lentiviral vectors.

    PubMed

    Giry-Laterrière, Marc; Verhoeyen, Els; Salmon, Patrick

    2011-01-01

    Lentiviral vectors have evolved over the last decade as powerful, reliable, and safe tools for stable gene transfer in a wide variety of mammalian cells. Contrary to other vectors derived from oncoretroviruses, they allow for stable gene delivery into most nondividing primary cells. In particular, lentivectors (LVs) derived from HIV-1 have gradually evolved to display many desirable features aimed at increasing both their safety and their versatility. This is why lentiviral vectors are becoming the most useful and promising tools for genetic engineering, to generate cells that can be used for research, diagnosis, and therapy. This chapter describes protocols and guidelines, for production and titration of LVs, which can be implemented in a research laboratory setting, with an emphasis on standardization in order to improve transposability of results between laboratories. We also discuss latest designs in LV technology.

  1. Windows NT 4.0 Asynchronous Transfer Mode network interface card performance

    SciTech Connect

    Tolendino, L.F.

    1997-02-18

    Windows NT desktop and server systems are becoming increasingly important to Sandia. These systems are capable of network performance considerably in excess of the 10 Mbps Ethernet data rate. As alternatives to conventional Ethernet, 155 Mbps Asynchronous Transfer Mode, ATM, and 100 Mbps Ethernet network interface cards were tested and compared to conventional 10 Mbps Ethernet cards in a typical Windows NT system. The results of the tests were analyzed and compared to show the advantages of the alternative technologies. Both 155 Mbps ATM and 100 Mbps Ethernet offer significant performance improvements over conventional 10 Mbps shared media Ethernet.

  2. Support vector machine regression (LS-SVM)--an alternative to artificial neural networks (ANNs) for the analysis of quantum chemistry data?

    PubMed

    Balabin, Roman M; Lomakina, Ekaterina I

    2011-06-28

    A multilayer feed-forward artificial neural network (MLP-ANN) with a single hidden layer that contains a finite number of neurons can be regarded as a universal non-linear approximator. Today, the ANN method and linear regression (MLR) model are widely used for quantum chemistry (QC) data analysis (e.g., thermochemistry) to improve their accuracy (e.g., Gaussian G2-G4, B3LYP/B3-LYP, X1, or W1 theoretical methods). In this study, an alternative approach based on support vector machines (SVMs) is used, the least squares support vector machine (LS-SVM) regression. It has been applied to ab initio (first-principles) and density functional theory (DFT) quantum chemistry data. Thus, the QC + SVM methodology is an alternative to the QC + ANN one. The task of the study was to estimate the Møller-Plesset (MPn) or DFT (B3LYP, BLYP, BMK) energies calculated with large basis sets (e.g., 6-311G(3df,3pd)) using smaller ones (6-311G, 6-311G*, 6-311G**) plus molecular descriptors. A molecular set (BRM-208) containing a total of 208 organic molecules was constructed and used for the LS-SVM training, cross-validation, and testing. MP2, MP3, MP4(DQ), MP4(SDQ), and MP4/MP4(SDTQ) ab initio methods were tested. Hartree-Fock (HF/SCF) results were also reported for comparison. Furthermore, constitutional (CD: total number of atoms and mole fractions of different atoms) and quantum-chemical (QD: HOMO-LUMO gap, dipole moment, average polarizability, and quadrupole moment) molecular descriptors were used for building the LS-SVM calibration model. Prediction accuracies (MADs) of 1.62 ± 0.51 and 0.85 ± 0.24 kcal mol(-1) (1 kcal mol(-1) = 4.184 kJ mol(-1)) were reached for SVM-based approximations of ab initio and DFT energies, respectively. The LS-SVM model was more accurate than the MLR model. A comparison with the artificial neural network approach shows that the accuracy of the LS-SVM method is similar to the accuracy of ANN. The extrapolation and interpolation results show that LS-SVM is
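LS-SVM regression replaces the SVM's quadratic program with a single linear system. A numpy sketch of the standard dual formulation, fitted to a synthetic smooth function standing in for the energy-correction task (the kernel width, γ, and data are illustrative, not the paper's settings):

```python
import numpy as np

def rbf(A, B, sigma=0.5):
    # Gaussian RBF kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=0.5):
    # LS-SVM dual: one (n+1) x (n+1) linear system instead of a QP:
    # [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=0.5):
    return rbf(X_new, X_train, sigma) @ alpha + b

# Synthetic stand-in for the energy-correction task: predict a smooth
# nonlinear target from two cheap descriptors.
rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, size=(80, 2))
y = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2

b, alpha = lssvm_fit(X, y)
mad = float(np.mean(np.abs(lssvm_predict(X, b, alpha, X) - y)))
```

Because training reduces to one dense solve, LS-SVM is attractive for moderate-sized calibration sets like the 208-molecule BRM-208 set described above.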

  4. Visualizing Network Traffic to Understand the Performance of Massively Parallel Simulations.

    PubMed

    Landge, A G; Levine, J A; Bhatele, A; Isaacs, K E; Gamblin, T; Schulz, M; Langer, S H; Bremer, Peer-Timo; Pascucci, V

    2012-12-01

    The performance of massively parallel applications is often heavily impacted by the cost of communication among compute nodes. However, determining how to best use the network is a formidable task, made challenging by the ever increasing size and complexity of modern supercomputers. This paper applies visualization techniques to aid parallel application developers in understanding the network activity by enabling a detailed exploration of the flow of packets through the hardware interconnect. In order to visualize this large and complex data, we employ two linked views of the hardware network. The first is a 2D view, that represents the network structure as one of several simplified planar projections. This view is designed to allow a user to easily identify trends and patterns in the network traffic. The second is a 3D view that augments the 2D view by preserving the physical network topology and providing a context that is familiar to the application developers. Using the massively parallel multi-physics code pF3D as a case study, we demonstrate that our tool provides valuable insight that we use to explain and optimize pF3D's performance on an IBM Blue Gene/P system. PMID:26357155

  5. The tendon network of the fingers performs anatomical computation at a macroscopic scale.

    PubMed

    Valero-Cuevas, Francisco J; Yi, Jae-Woong; Brown, Daniel; McNamara, Robert V; Paul, Chandana; Lipson, Hood

    2007-06-01

    Current thinking attributes information processing for neuromuscular control exclusively to the nervous system. Our cadaveric experiments and computer simulations show, however, that the tendon network of the fingers performs logic computation to preferentially change torque production capabilities. How this tendon network propagates tension to enable manipulation has been debated since the time of Vesalius and DaVinci and remains an unanswered question. We systematically changed the proportion of tension to the tendons of the extensor digitorum versus the two dorsal interosseous muscles of two cadaver fingers and measured the tension delivered to the proximal and distal interphalangeal joints. We find that the distribution of input tensions in the tendon network itself regulates how tensions propagate to the finger joints, acting like the switching function of a logic gate that nonlinearly enables different torque production capabilities. Computer modeling reveals that the deformable structure of the tendon networks is responsible for this phenomenon; and that this switching behavior is an effective evolutionary solution permitting a rich repertoire of finger joint actuation not possible with simpler tendon paths. We conclude that the structural complexity of this tendon network, traditionally oversimplified or ignored, may in fact be critical to understanding brain-body coevolution and neuromuscular control. Moreover, this form of information processing at the macroscopic scale is a new instance of the emerging principle of nonneural "somatic logic" found to perform logic computation such as in cellular networks. PMID:17549909

  6. Comparative analysis of fuzzy ART and ART-2A network clustering performance.

    PubMed

    Frank, T; Kraiss, K F; Kuhlen, T

    1998-01-01

    Adaptive resonance theory (ART) describes a family of self-organizing neural networks capable of clustering arbitrary sequences of input patterns into stable recognition codes. Many different types of ART networks have been developed to improve clustering capabilities. In this paper we compare the clustering performance of different types of ART networks: Fuzzy ART, ART 2A with and without complement-encoded input patterns, and a Euclidean ART 2A variation. All types are tested with two- and high-dimensional input patterns in order to illustrate general capabilities and characteristics in different system environments. Based on our simulation results, Fuzzy ART seems to be less appropriate whenever input signals are corrupted by additive noise, while ART 2A-type networks remain stable in all inspected environments. Together with the other examined features, this allows ART architectures suited to particular applications to be selected. PMID:18252478
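Fuzzy ART itself is compact enough to sketch: complement coding, a choice function, a vigilance test, and fast learning by component-wise minimum. The parameters and the toy two-cluster data below are illustrative:

```python
import numpy as np

def fuzzy_art(patterns, rho=0.75, alpha=0.001, beta=1.0):
    # Minimal Fuzzy ART: complement coding, choice function, vigilance
    # test, and (fast) learning by component-wise minimum.
    X = np.hstack([patterns, 1.0 - patterns])     # complement coding
    weights, labels = [], []
    for x in X:
        chosen = None
        if weights:
            W = np.array(weights)
            match = np.minimum(x, W).sum(axis=1)  # |x ^ w_j|
            T = match / (alpha + W.sum(axis=1))   # choice function
            for j in np.argsort(-T):              # search by activation
                if match[j] / x.sum() >= rho:     # vigilance test
                    chosen = int(j)
                    break
        if chosen is None:
            weights.append(x.copy())              # commit a new category
            chosen = len(weights) - 1
        else:
            weights[chosen] = (beta * np.minimum(x, weights[chosen])
                               + (1.0 - beta) * weights[chosen])
        labels.append(chosen)
    return labels

# Two well-separated 2-D clusters should end up in two stable categories.
rng = np.random.default_rng(4)
data = np.vstack([rng.uniform(0.0, 0.2, size=(20, 2)),
                  rng.uniform(0.8, 1.0, size=(20, 2))])
labels = fuzzy_art(data, rho=0.6)
```

Raising the vigilance `rho` forces tighter categories and more of them; this sensitivity, together with the min-based learning, is what makes Fuzzy ART fragile under the additive noise discussed above.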

  7. Distinct Aging Effects on Functional Networks in Good and Poor Cognitive Performers

    PubMed Central

    Lee, Annie; Tan, Mingzhen; Qiu, Anqi

    2016-01-01

    Brain network hubs are susceptible to normal aging processes, and disruptions of their functional connectivity are detrimental to cognitive function in older adults. However, it remains unclear how the functional connectivity of network hubs copes with cognitive heterogeneity in an aging population. This study utilized cognitive and resting-state functional magnetic resonance imaging data, cluster analysis, and graph network analysis to examine age-related alterations in the network hubs' functional connectivity of good and poor cognitive performers. Our results revealed that poor cognitive performers showed age-dependent disruptions in the functional connectivity of the right insula and posterior cingulate cortex (PCC), while good cognitive performers showed age-related disruptions in the functional connectivity of the left insula and PCC. Additionally, the left PCC had age-related declines in its functional connectivity with the left medial prefrontal cortex (mPFC) and anterior cingulate cortex (ACC). Most interestingly, good cognitive performers showed age-related declines in the functional connectivity of the left insula and PCC with their right homotopic structures. These results may provide insights into the neuronal correlates of individual differences in aging. In particular, our study suggests prominent protective roles of the left insula and PCC and bilateral ACC in good performers.

  10. Advanced Communication Technology Satellite (ACTS) Very Small Aperture Terminal (VSAT) Network Control Performance

    NASA Technical Reports Server (NTRS)

    Coney, T. A.

    1996-01-01

    This paper discusses the performance of the network control function for the Advanced Communications Technology Satellite (ACTS) very small aperture terminal (VSAT) full mesh network. This includes control of all operational activities, such as acquisition, synchronization, timing, and rain fade compensation, as well as control of all communications activities, such as on-demand integrated services (voice, video, and data) connects and disconnects. Operations control is provided by an in-band orderwire carried in the baseband processor (BBP) control burst, the orderwire burst, the reference burst, and the uplink traffic burst. Communication services are provided by demand assigned multiple access (DAMA) protocols. The ACTS implementation of DAMA protocols ensures both on-demand and integrated voice, video, and data services. Communications services control is also provided by the in-band orderwire but uses only the reference burst and the uplink traffic burst. The performance of the ACTS network control functions has been successfully tested during on-orbit checkout and in various VSAT networks in day-to-day operations. This paper discusses the network operations and services control performance.

  11. Vectorized garbage collection

    SciTech Connect

    Appel, A.W.; Bendiksen, A.

    1988-01-01

    Garbage collection can be done in vector mode on supercomputers like the Cray-2 and the Cyber 205. Both copying collection and mark-and-sweep can be expressed as breadth-first searches in which the queue can be processed in parallel. The authors have designed a copying garbage collector whose inner loop works entirely in vector mode. The only significant limitation of the algorithm is that if the size of the records is not constant, the implementation becomes much more complicated. The authors give performance measurements of the algorithm as implemented for Lisp CONS cells on the Cyber 205. Vector-mode garbage collection performs up to 9 times faster than scalar-mode collection.
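The breadth-first copying collection the abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' Cray-2/Cyber 205 implementation: the heap is a dict of fixed-size records (id -> child ids), and the explicit scan queue is exactly the structure a vectorizing collector would process in parallel.

```python
from collections import deque

def copy_collect(heap, roots):
    """Return a to-space heap containing only objects reachable from roots."""
    forward = {}       # old id -> new id (forwarding pointers)
    new_heap = {}
    queue = deque()    # to-space records not yet scanned

    def forward_obj(old):
        if old not in forward:
            new = len(forward)                   # allocate next to-space slot
            forward[old] = new
            new_heap[new] = list(heap[old])      # children still old ids here
            queue.append(new)
        return forward[old]

    for r in roots:
        forward_obj(r)
    while queue:                                 # breadth-first scan
        obj = queue.popleft()
        new_heap[obj] = [forward_obj(c) for c in new_heap[obj]]
    return new_heap

heap = {0: [1, 2], 1: [2], 2: [], 3: [0]}       # object 3 is garbage given roots = [0]
print(copy_collect(heap, [0]))                  # only the 3 reachable objects survive
```

Each pass over the queue touches many records independently, which is what makes the inner loop amenable to vector mode on constant-size records.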

  12. Support vector machine for automatic pain recognition

    NASA Astrophysics Data System (ADS)

    Monwar, Md Maruf; Rezaei, Siamak

    2009-02-01

    Facial expressions are a key index of emotion and the interpretation of such expressions of emotion is critical to everyday social functioning. In this paper, we present an efficient video analysis technique for recognition of a specific expression, pain, from human faces. We employ an automatic face detector which detects face from the stored video frame using skin color modeling technique. For pain recognition, location and shape features of the detected faces are computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the results with neural network based and eigenimage based automatic pain recognition systems. The experiment results indicate that using support vector machine as classifier can certainly improve the performance of automatic pain recognition system.
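The SVM classification stage can be illustrated with a minimal linear SVM trained by subgradient descent on the hinge loss (Pegasos-style). This is a hedged stand-in: the paper's face detection and location/shape feature extraction are out of scope, the 2-D feature vectors below are synthetic, and the bias term is omitted since the toy data is centered.

```python
import random

def train_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Linear SVM via Pegasos-style hinge-loss subgradient descent (no bias)."""
    rng = random.Random(seed)
    w = [0.0, 0.0]
    t = 0
    for _ in range(epochs):
        idx = list(range(len(X)))
        rng.shuffle(idx)
        for i in idx:
            t += 1
            eta = 1.0 / (lam * t)                    # decaying step size
            margin = y[i] * (w[0] * X[i][0] + w[1] * X[i][1])
            w = [wj * (1 - eta * lam) for wj in w]   # regularization shrink
            if margin < 1:                           # hinge-loss subgradient step
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    return 1 if w[0] * x[0] + w[1] * x[1] >= 0 else -1

# Synthetic, linearly separable "pain" (+1) vs "no pain" (-1) features
X = [(2.0, 2.5), (2.5, 2.0), (3.0, 3.0), (-2.0, -2.5), (-2.5, -2.0), (-3.0, -3.0)]
y = [1, 1, 1, -1, -1, -1]
w = train_svm(X, y)
assert all(predict(w, x) == yi for x, yi in zip(X, y))
```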

  13. Structure and Topology Dynamics of Hyper-Frequency Networks during Rest and Auditory Oddball Performance

    PubMed Central

    Müller, Viktor; Perdikis, Dionysios; von Oertzen, Timo; Sleimen-Malkoun, Rita; Jirsa, Viktor; Lindenberger, Ulman

    2016-01-01

    Resting-state and task-related recordings are characterized by oscillatory brain activity and widely distributed networks of synchronized oscillatory circuits. Electroencephalographic recordings (EEG) were used to assess network structure and network dynamics during resting state with eyes open and closed, and auditory oddball performance through phase synchronization between EEG channels. For this assessment, we constructed a hyper-frequency network (HFN) based on within- and cross-frequency coupling (WFC and CFC, respectively) at 10 oscillation frequencies ranging between 2 and 20 Hz. We found that CFC generally differentiates between task conditions better than WFC. CFC was the highest during resting state with eyes open. Using a graph-theoretical approach (GTA), we found that HFNs possess small-world network (SWN) topology with a slight tendency to random network characteristics. Moreover, analysis of the temporal fluctuations of HFNs revealed specific network topology dynamics (NTD), i.e., temporal changes of different graph-theoretical measures such as strength, clustering coefficient, characteristic path length (CPL), local, and global efficiency determined for HFNs at different time windows. The different topology metrics showed significant differences between conditions in the mean and standard deviation of these metrics both across time and nodes. In addition, using an artificial neural network approach, we found stimulus-related dynamics that varied across the different network topology metrics. We conclude that functional connectivity dynamics (FCD), or NTD, which was found using the HFN approach during rest and stimulus processing, reflects temporal and topological changes in the functional organization and reorganization of neuronal cell assemblies. PMID:27799906
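The within- and cross-frequency coupling edges of such a hyper-frequency network are typically built from a phase-synchronization index. A common choice, sketched here with synthetic phases (in the paper they would come from narrow-band-filtered EEG), is the n:m phase-locking value |⟨exp(i(n·φ1 − m·φ2))⟩|; n = m = 1 gives within-frequency coupling.

```python
import cmath
import math

def plv(phi1, phi2, n=1, m=1):
    """n:m phase-locking value between two phase time series."""
    s = sum(cmath.exp(1j * (n * p1 - m * p2)) for p1, p2 in zip(phi1, phi2))
    return abs(s) / len(phi1)

T = 1000
t = [k * 0.01 for k in range(T)]
phi_2hz = [2 * math.pi * 2 * tk for tk in t]          # 2 Hz oscillation phase
phi_4hz = [2 * math.pi * 4 * tk + 0.3 for tk in t]    # 4 Hz, constant phase lag

print(round(plv(phi_2hz, phi_4hz, n=2, m=1), 3))      # 2:1 coupling -> 1.0
print(round(plv(phi_2hz, phi_4hz, n=1, m=1), 3))      # 1:1 -> near 0.0
```

A 2:1-locked pair of oscillations yields a PLV of 1 for n=2, m=1 but washes out at n=m=1, which is why CFC can carry information that WFC misses.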

  14. Vector Magnetograph Design

    NASA Technical Reports Server (NTRS)

    Chipman, Russell A.

    1996-01-01

    This report covers work performed during the period of November 1994 through March 1996 on the design of a Space-borne Solar Vector Magnetograph. This work has been performed as part of a design team under the supervision of Dr. Mona Hagyard and Dr. Alan Gary of the Space Science Laboratory. Many tasks were performed and this report documents the results from some of those tasks, each contained in the corresponding appendix. Appendices are organized in chronological order.

  15. Performance Analysis of Cooperative Wireless Backhaul Networks Operating at Extremely High Frequencies

    NASA Astrophysics Data System (ADS)

    Sakarellos, Vasileios K.; Chortatou, Maria; Skraparlis, Dimitrios; Panagopoulos, Athanasios D.; Kanellopoulos, John D.

    2011-04-01

    Extremely high frequency (EHF) bands above 50 GHz have been proposed for use as backhaul links of modern cellular mobile networks. They provide interconnectivity between the base stations and the core network. In this paper, we propose the employment of cooperative techniques in backhaul networks. More specifically, the outage performance analysis of a simple cooperative diversity system operating at EHF bands is presented. The destination node combines the direct link with the signal received through a regenerative relay using selection combining. A combined stratiform and convective rainfall-rate model is considered for the rain attenuation prediction. The correlation properties and the joint statistics among the microwave paths are also calculated. Numerical results present the impact of the geometrical parameters and the climatic conditions on the outage performance.
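The qualitative gain from selection combining can be shown with a generic Monte Carlo sketch. This is only an illustration: the paper's rain attenuation statistics and path correlation are replaced by an i.i.d. exponential-SNR (Rayleigh-fading) stand-in, and the threshold value is arbitrary.

```python
import random

def outage_prob(threshold, n_links, trials=100_000, seed=1):
    """Outage probability when the best of n_links i.i.d. branches is selected."""
    rng = random.Random(seed)
    out = 0
    for _ in range(trials):
        best = max(rng.expovariate(1.0) for _ in range(n_links))
        if best < threshold:          # selection combining fails only if all branches fail
            out += 1
    return out / trials

p_direct = outage_prob(0.5, 1)        # direct link only
p_sc = outage_prob(0.5, 2)            # direct + relayed link, selection combining
print(p_direct, p_sc)                 # for independent branches, p_sc ~ p_direct**2
```

With independent branches the diversity order doubles: the two-branch outage is roughly the square of the single-link outage, which is the effect rain-path correlation erodes in practice.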

  16. A novel approach for short-term load forecasting using support vector machines.

    PubMed

    Tian, Liang; Noore, Afzel

    2004-10-01

    A support vector machine (SVM) modeling approach for short-term load forecasting is proposed. The SVM learning scheme is applied to the power load data, forcing the network to learn the inherent internal temporal property of power load sequence. We also study the performance when other related input variables such as temperature and humidity are considered. The performance of our proposed SVM modeling approach has been tested and compared with feed-forward neural network and cosine radial basis function neural network approaches. Numerical results show that the SVM approach yields better generalization capability and lower prediction error compared to those neural network approaches.

  17. Support Vector Machines in Fault Tolerance Control

    NASA Astrophysics Data System (ADS)

    Ribeiro, Bernardete

    2002-09-01

    This paper presents a new approach for quality monitoring of on-line molded parts in the context of an injection molding problem using Support Vector Machines (SVMs). While the main goal in the industrial framework is to automatically calculate the setpoints, a secondary task is to classify plastic molded part defects efficiently in order to assess multiple quality characteristics. The paper presents a comparison of the performance of SVMs and RBF neural networks as part-quality monitoring tools by analyzing complete data patterns. Results show that the classification model using SVMs performs slightly better than RBF neural networks, mainly due to the superior generalization of SVMs in high-dimensional spaces. In particular, when RBF kernels are used, accuracy increases, leading to smaller error rates. Moreover, the optimization method is constrained quadratic programming, a well-studied and well-understood mathematical programming technique.

  18. Data-flow Performance Optimisation on Unreliable Networks: the ATLAS Data-Acquisition Case

    NASA Astrophysics Data System (ADS)

    Colombo, Tommaso; ATLAS Collaboration

    2015-05-01

    The ATLAS detector at CERN records proton-proton collisions delivered by the Large Hadron Collider (LHC). The ATLAS Trigger and Data-Acquisition (TDAQ) system identifies, selects, and stores interesting collision data. These are received from the detector readout electronics at an average rate of 100 kHz. The typical event data size is 1 to 2 MB. Overall, the ATLAS TDAQ system can be seen as a distributed software system executed on a farm of roughly 2000 commodity PCs. The worker nodes are interconnected by an Ethernet network that at the restart of the LHC in 2015 is expected to experience a sustained throughput of several tens of GB/s. A particular type of challenge posed by this system, and by DAQ systems in general, is the inherently bursty nature of the data traffic from the readout buffers to the worker nodes. This can cause instantaneous network congestion and therefore performance degradation. The effect is particularly pronounced for unreliable network interconnections, such as Ethernet. In this paper we report on the design of the data-flow software for the 2015-2018 data-taking period of the ATLAS experiment. This software will be responsible for transporting the data across the distributed Data-Acquisition system. We will focus on the strategies employed to manage the network congestion and therefore minimise the data-collection latency and maximise the system performance. We will discuss the results of systematic measurements performed on different types of networking hardware. These results highlight the causes of network congestion and the effects on the overall system performance.

  19. Quality Performance Assessment as a Source of Motivation for Lecturers: A Teaching Network Experience

    ERIC Educational Resources Information Center

    Andreu, R.; Canos, L.; de Juana, S.; Manresa, E.; Rienda, L.; Tari, J. J.

    2006-01-01

    Purpose: The purpose of this paper is to present findings derived from research work carried out by a team of six university lecturers who are members of a teaching quality improvement network. The aim is to increase the motivation of the lecturers involved, so that better performance can be achieved, and the teaching-learning process enriched.…

  20. A Public-Private Partnership Improves Clinical Performance In A Hospital Network In Lesotho.

    PubMed

    McIntosh, Nathalie; Grabowski, Aria; Jack, Brian; Nkabane-Nkholongo, Elizabeth Limakatso; Vian, Taryn

    2015-06-01

    Health care public-private partnerships (PPPs) between a government and the private sector are based on a business model that aims to leverage private-sector expertise to improve clinical performance in hospitals and other health facilities. Although the financial implications of such partnerships have been analyzed, few studies have examined the partnerships' impact on clinical performance outcomes. Using quantitative measures that reflected capacity, utilization, clinical quality, and patient outcomes, we compared a government-managed hospital network in Lesotho, Africa, and the new PPP-managed hospital network that replaced it. In addition, we used key informant interviews to help explain differences in performance. We found that the PPP-managed network delivered more and higher-quality services and achieved significant gains in clinical outcomes, compared to the government-managed network. We conclude that health care public-private partnerships may improve hospital performance in developing countries and that changes in management and leadership practices might account for differences in clinical outcomes. PMID:26056200

  1. Performance evaluation of a multi-granularity and multi-connectivity circuit switched network

    NASA Astrophysics Data System (ADS)

    Guo, Naixing; Xin, Maoqing; Sun, Weiqiang; Jin, Yaohui; Zhu, Yi; Zhang, Chunlei; Hu, Weisheng; Xie, Guowu

    2007-11-01

    This paper introduces a novel notion of multi-granularity and multi-connectivity circuit switched network. Based on this notion, four routing schemes - Fixed Routing (FR), Maximum Remain (MR), Secured Maximum Remain (SMR) and Premium/Punishment Modification (PPM) are proposed. Numerical simulation results about the performance of these four schemes are also presented in this paper.

  3. Early detection monitoring of aquatic invasive species: Measuring performance success in a Lake Superior pilot network

    EPA Science Inventory

    The Great Lakes Water Quality Agreement, Annex 6 calls for a U.S.-Canada, basin-wide aquatic invasive species early detection network by 2015. The objective of our research is to explore survey design strategies that can improve detection efficiency, and to develop performance me...

  4. Performance of a random access packet network with time-capture capability

    NASA Technical Reports Server (NTRS)

    Lin, Y. H.

    1983-01-01

    The Joint Tactical Information Distribution System (JTIDS) is applied to a digital network supporting the command, control and communication requirements of 105 highly mobile users. User data traffic is bursty and the slotted ALOHA channel access scheme is therefore employed. This paper focuses on the determination of JTIDS system performance in this particular application. Emphasis is directed at the specific time-capture capability of JTIDS. Significant system performance parameters are quantified with analysis and simulation.

  5. A Case Study of Performance Degradation Attributable to Run-Time Bounds Checks on C++ Vector Access

    PubMed Central

    Flater, David; Guthrie, William F

    2013-01-01

    Programmers routinely omit run-time safety checks from applications because they assume that these safety checks would degrade performance. The simplest example is the use of arrays or array-like data structures that do not enforce the constraint that indices must be within bounds. This report documents an attempt to measure the performance penalty incurred by two different implementations of bounds-checking in C and C++ using a simple benchmark and a desktop PC with a modern superscalar CPU. The benchmark consisted of a loop that wrote to array elements in sequential order. With this configuration, relative to the best performance observed for any access method in C or C++, mean degradation of only (0.881 ± 0.009) % was measured for a standard bounds-checking access method in C++. This case study showed the need for further work to develop and refine measurement methods and to perform more comparisons of this type. Comparisons across different use cases, configurations, programming languages, and environments are needed to determine under what circumstances (if any) the performance advantage of unchecked access is actually sufficient to outweigh the negative consequences for security and software quality. PMID:26401432

  6. Architecture Modeling and Performance Characterization of Space Communications and Navigation (SCaN) Network Using MACHETE

    NASA Technical Reports Server (NTRS)

    Jennings, Esther; Heckman, David

    2008-01-01

    As future space exploration missions will involve larger numbers of spacecraft and more complex systems, theoretical analysis alone may have limitations on characterizing system performance and interactions among the systems. Simulation tools can be useful for system performance characterization through detailed modeling and simulation of the systems and their environment. ... This paper reports the simulation of the Orion (Crew Exploration Vehicle) to the International Space Station (ISS) mission, where Orion is launched by Ares into orbit on a 14-day mission to rendezvous with the ISS. Communications services for the mission are provided by the Space Communication and Navigation (SCaN) network infrastructure, which includes the NASA Space Network (SN), Ground Network (GN), and NASA Integrated Services Network (NISN). The objectives of the simulation are to determine whether SCaN can meet the communications needs of the mission, to demonstrate the benefit of using QoS prioritization, and to evaluate key network parameters of interest such as delay and throughput.

  7. Neural network predictions of acoustical parameters in multi-purpose performance halls.

    PubMed

    Cheung, L Y; Tang, S K

    2013-09-01

    A detailed binaural sound measurement was carried out in two multi-purpose performance halls of different seating capacities and designs in Hong Kong in the present study. The effectiveness of using neural network in the predictions of the acoustical properties using a limited number of measurement points was examined. The root-mean-square deviation from measurements, statistical parameter distribution matching, and the results of a t-test for vanishing mean difference between simulations and measurements were adopted as the evaluation criteria for the neural network performance. The audience locations relative to the sound source were used as the inputs to the neural network. Results show that the neural network training scheme using nine uniformly located measurement points in each specific hall area is the best choice regardless of the hall setting and design. It is also found that the neural network prediction of hall spaciousness does not require a large amount of training data, but the accuracy of the reverberance related parameter predictions increases with increasing volume of training data.

  8. Changes in Brain Network Efficiency and Working Memory Performance in Aging

    PubMed Central

    Stanley, Matthew L.; Simpson, Sean L.; Dagenbach, Dale; Lyday, Robert G.; Burdette, Jonathan H.; Laurienti, Paul J.

    2015-01-01

    Working memory is a complex psychological construct referring to the temporary storage and active processing of information. We used functional connectivity brain network metrics quantifying local and global efficiency of information transfer for predicting individual variability in working memory performance on an n-back task in both young (n = 14) and older (n = 15) adults. Individual differences in both local and global efficiency during the working memory task were significant predictors of working memory performance in addition to age (and an interaction between age and global efficiency). Decreases in local efficiency during the working memory task were associated with better working memory performance in both age cohorts. In contrast, increases in global efficiency were associated with much better working performance for young participants; however, increases in global efficiency were associated with a slight decrease in working memory performance for older participants. Individual differences in local and global efficiency during resting-state sessions were not significant predictors of working memory performance. Significant group whole-brain functional network decreases in local efficiency also were observed during the working memory task compared to rest, whereas no significant differences were observed in network global efficiency. These results are discussed in relation to recently developed models of age-related differences in working memory. PMID:25875001
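The two predictors used above are standard graph metrics, sketched here on a toy unweighted graph: global efficiency is the mean inverse shortest-path length over node pairs, and local efficiency averages the global efficiency of each node's neighbourhood subgraph. In the study these would be computed on networks built from thresholded functional-connectivity matrices, which is out of scope here.

```python
from collections import deque

def shortest_paths(adj, src):
    """BFS distances from src in an unweighted graph (dict: node -> neighbours)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_efficiency(adj):
    nodes = list(adj)
    if len(nodes) < 2:
        return 0.0
    total = 0.0
    for u in nodes:
        d = shortest_paths(adj, u)
        total += sum(1.0 / d[v] for v in nodes if v != u and v in d)
    return total / (len(nodes) * (len(nodes) - 1))

def local_efficiency(adj):
    effs = []
    for u in adj:
        nbrs = adj[u]
        sub = {v: [w for w in adj[v] if w in nbrs] for v in nbrs}
        effs.append(global_efficiency(sub))    # efficiency of u's neighbourhood
    return sum(effs) / len(adj)

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}   # triangle plus a pendant node
print(global_efficiency(adj))    # 5/6: pairs at distance 1 or 2
print(local_efficiency(adj))     # 7/12: pendant node contributes 0
```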

  10. Study of Synthetic Vision Systems (SVS) and Velocity-vector Based Command Augmentation System (V-CAS) on Pilot Performance

    NASA Technical Reports Server (NTRS)

    Liu, Dahai; Goodrich, Ken; Peak, Bob

    2006-01-01

    This study investigated the effects of synthetic vision system (SVS) concepts and advanced flight controls on single pilot performance (SPP). Specifically, we evaluated the benefits and interactions of two levels of terrain portrayal, guidance symbology, and control-system response type on SPP in the context of lower-landing minima (LLM) approaches. Performance measures consisted of flight technical error (FTE) and pilot perceived workload. In this study, pilot rating, control type, and guidance symbology were not found to significantly affect FTE or workload. It is likely that transfer from prior experience, limited scope of the evaluation task, specific implementation limitations, and limited sample size were major factors in obtaining these results.

  11. Effects of Sample Size and Dimensionality on the Performance of Four Algorithms for Inference of Association Networks in Metabonomics.

    PubMed

    Suarez-Diez, Maria; Saccenti, Edoardo

    2015-12-01

    We investigated the effect of sample size and dimensionality on the performance of four algorithms (ARACNE, CLR, CORR, and PCLRC) when they are used for the inference of metabolite association networks. We report that as many as 100-400 samples may be necessary to obtain stable network estimations, depending on the algorithm and the number of measured metabolites. The CLR and PCLRC methods produce similar results, whereas network inference based on correlations provides sparse networks; we found ARACNE to be unsuitable for this application, being unable to recover the underlying metabolite association network. We recommend the PCLRC algorithm for the inference on metabolite association networks.
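The simplest of the four inference approaches, correlation-based inference (the CORR baseline), can be sketched as follows: compute pairwise Pearson correlations between metabolite profiles across samples and keep edges whose absolute correlation exceeds a threshold. The profiles and the threshold below are illustrative, not the study's data.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def corr_network(profiles, threshold=0.8):
    """Edges between metabolites whose |correlation| exceeds the threshold."""
    names = list(profiles)
    edges = set()
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if abs(pearson(profiles[a], profiles[b])) > threshold:
                edges.add((a, b))
    return edges

profiles = {                                 # 5 samples per metabolite (synthetic)
    "m1": [1.0, 2.0, 3.0, 4.0, 5.0],
    "m2": [2.1, 3.9, 6.2, 7.8, 10.1],        # roughly 2x m1 -> strongly correlated
    "m3": [5.0, 1.0, 4.0, 2.0, 3.0],         # unrelated
}
print(corr_network(profiles))                # {("m1", "m2")}
```

With few samples, each estimated correlation is noisy, which is one reason the study finds that on the order of 100-400 samples are needed before the thresholded network stabilizes.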

  12. Co-scheduling of network resource provisioning and host-to-host bandwidth reservation on high-performance network and storage systems

    DOEpatents

    Yu, Dantong; Katramatos, Dimitrios; Sim, Alexander; Shoshani, Arie

    2014-04-22

    A cross-domain network resource reservation scheduler configured to schedule a path from at least one end-site includes a management plane device configured to monitor and provide information representing at least one of functionality, performance, faults, and fault recovery associated with a network resource; a control plane device configured to at least one of schedule the network resource, provision local area network quality of service, provision local area network bandwidth, and provision wide area network bandwidth; and a service plane device configured to interface with the control plane device to reserve the network resource based on a reservation request and the information from the management plane device. Corresponding methods and computer-readable media are also disclosed.

  13. A three-dimensional carbon nano-network for high performance lithium ion batteries

    DOE PAGES

    Tian, Miao; Wang, Wei; Liu, Yang; Jungjohann, Katherine L.; Thomas Harris, C.; Lee, Yung -Cheng; Yang, Ronggui

    2014-11-20

    Three-dimensional (3D) network structure has been envisioned as a superior architecture for lithium ion battery (LIB) electrodes, which enhances both ion and electron transport to significantly improve battery performance. Herein, a 3D carbon nano-network is fabricated through chemical vapor deposition of carbon on a scalably manufactured 3D porous anodic alumina (PAA) template. As a demonstration of the applicability of the 3D carbon nano-network for LIB electrodes, the low-conductivity active material, TiO2, is then uniformly coated on the 3D carbon nano-network using atomic layer deposition. High power performance is demonstrated in the 3D C/TiO2 electrodes, where the parallel tubes and gaps in the 3D carbon nano-network facilitate fast Li ion transport. A large areal capacity of ~0.37 mAh·cm–2 is achieved due to the large TiO2 mass loading in the 60 µm-thick 3D C/TiO2 electrodes. At a test rate of C/5, the 3D C/TiO2 electrode with 18 nm-thick TiO2 delivers a high gravimetric capacity of ~240 mAh g–1, calculated with the mass of the whole electrode. A long cycle life of over 1000 cycles with a capacity retention of 91% is demonstrated at 1C. In this study, the effects of the electrical conductivity of the carbon nano-network, ion diffusion, and the electrolyte permeability on the rate performance of these 3D C/TiO2 electrodes are systematically studied.

  15. A framework for performance measurement in university using extended network data envelopment analysis (DEA) structures

    NASA Astrophysics Data System (ADS)

    Kashim, Rosmaini; Kasim, Maznah Mat; Rahman, Rosshairy Abd

    2015-12-01

    Measuring university performance is essential for efficient allocation and utilization of educational resources. In most previous studies, performance measurement in universities emphasized operational efficiency and resource utilization without investigating the university's ability to fulfill the needs of its stakeholders and society. Therefore, assessment of university performance should be separated into two stages, namely efficiency and effectiveness. In conventional DEA analysis, a decision making unit (DMU), or in this context a university, is generally treated as a black box, which ignores the operation and interdependence of the internal processes. When this happens, the results obtained can be misleading. Thus, this paper suggests an alternative framework for measuring the overall performance of a university by incorporating both efficiency and effectiveness, and applies a network DEA model. Network DEA models are recommended because this approach takes into account the interrelationship between the processes of efficiency and effectiveness in the system. The framework also focuses on the university structure, which is expanded from the hierarchical form into a series of horizontal relationships between subordinate units by assuming that both an intermediate unit and its subordinate units can generate output(s). Three conceptual models are proposed to evaluate the performance of a university. An efficiency model is developed at the first stage by using a hierarchical network model. It is followed by an effectiveness model which takes output(s) from the hierarchical structure at the first stage as input(s) at the second stage. As a result, a new overall performance model is proposed by combining both efficiency and effectiveness models. Thus, once this overall model is realized and utilized, the university's top management can determine the overall performance of each unit more accurately and systematically. Besides that, the result from the network
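The series (two-stage) decomposition described above can be illustrated with a toy ratio model. This is only a sketch: real network DEA chooses the weights per DMU by solving linear programs, whereas the weights `w_in`, `w_mid`, `w_out` below are fixed and hypothetical.

```python
def two_stage_performance(inputs, intermediates, outputs, w_in, w_mid, w_out):
    """Overall score as the product of the two stage efficiencies; the
    weighted intermediate measure cancels, mirroring the series structure."""
    stage1 = (sum(w * v for w, v in zip(w_mid, intermediates)) /
              sum(w * v for w, v in zip(w_in, inputs)))          # efficiency
    stage2 = (sum(w * v for w, v in zip(w_out, outputs)) /
              sum(w * v for w, v in zip(w_mid, intermediates)))  # effectiveness
    return stage1 * stage2
```

With unit weights, doubling final outputs doubles the overall score while leaving the first-stage efficiency untouched, which is the point of separating the two stages.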

  16. Is a Responsive Default Mode Network Required for Successful Working Memory Task Performance?

    PubMed Central

    Čeko, Marta; Gracely, John L.; Fitzcharles, Mary-Ann; Seminowicz, David A.; Schweinhardt, Petra

    2015-01-01

    In studies of cognitive processing using tasks with externally directed attention, regions showing increased (external-task-positive) and decreased or “negative” [default-mode network (DMN)] fMRI responses during task performance are dynamically responsive to increasing task difficulty. Responsiveness (modulation of fMRI signal by increasing load) has been linked directly to successful cognitive task performance in external-task-positive regions but not in DMN regions. To investigate whether a responsive DMN is required for successful cognitive performance, we compared healthy human subjects (n = 23) with individuals shown to have decreased DMN engagement (chronic pain patients, n = 28). Subjects performed a multilevel working-memory task (N-back) during fMRI. If a responsive DMN is required for successful performance, patients having reduced DMN responsiveness should show worsened performance; if performance is not reduced, their brains should show compensatory activation in external-task-positive regions or elsewhere. All subjects showed decreased accuracy and increased reaction times with increasing task level, with no significant group differences on either measure at any level. Patients had significantly reduced negative fMRI response (deactivation) of DMN regions (posterior cingulate/precuneus, medial prefrontal cortex). Controls showed expected modulation of DMN deactivation with increasing task difficulty. Patients showed significantly reduced modulation of DMN deactivation by task difficulty, despite their successful task performance. We found no evidence of compensatory neural recruitment in external-task-positive regions or elsewhere. Individual responsiveness of the external-task-positive ventrolateral prefrontal cortex, but not of DMN regions, correlated with task accuracy. These findings suggest that a responsive DMN may not be required for successful cognitive performance; a responsive external-task-positive network may be sufficient.

  17. Application of artificial neural network for prediction of marine diesel engine performance

    NASA Astrophysics Data System (ADS)

    Mohd Noor, C. W.; Mamat, R.; Najafi, G.; Nik, W. B. Wan; Fadhil, M.

    2015-12-01

    This study deals with artificial neural network (ANN) modelling of a marine diesel engine to predict the brake power, output torque, brake specific fuel consumption, brake thermal efficiency and volumetric efficiency. The input data for network training were gathered from engine laboratory testing at various engine speeds. The prediction model was developed based on the standard back-propagation Levenberg-Marquardt training algorithm. The performance of the model was validated by comparing the prediction data sets with the measured experimental data. Results showed that the ANN model provided good agreement with the experimental data, with high accuracy.

  18. Improving the Unsteady Aerodynamic Performance of Transonic Turbines using Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri K.; Huber, Frank W.

    1999-01-01

    A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits are constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The procedure yielded a modified design that improves the aerodynamic performance through small changes to the reference design geometry. These results demonstrate the capabilities of the neural net-based design procedure, and also show the advantages of including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.

  19. Analysis of integrated healthcare networks' performance: a contingency-strategic management perspective.

    PubMed

    Lin, B Y; Wan, T T

    1999-12-01

    Few empirical analyses have been done in organizational research on integrated healthcare networks (IHNs) or integrated healthcare delivery systems. Using a contingency-derived context-process-performance model, this study attempts to explore the relationships among an IHN's strategic direction, structural design, and performance. A cross-sectional analysis of 100 IHNs suggests that certain contextual factors, such as market competition and network age and tax status, have statistically significant effects on the implementation of an IHN's service differentiation strategy, which addresses coordination and control in the market. An IHN's service differentiation strategy is positively related to its integrated structural design, which is characterized as integration of administration, patient care, and information systems across different settings. However, no evidence was found that the development of an integrated structural design benefits an IHN's performance in terms of clinical efficiency and financial viability.

  20. Dual Arm Work Package performance estimates and telerobot task network simulation

    SciTech Connect

    Draper, J.V.; Blair, L.M.

    1997-02-01

    This paper describes the methodology and results of a network simulation study of the Dual Arm Work Package (DAWP), to be employed for dismantling the Argonne National Laboratory CP-5 reactor. The development of the simulation model was based upon the results of a task analysis for the same system. This study was performed by the Oak Ridge National Laboratory (ORNL), in the Robotics and Process Systems Division. Funding was provided by the US Department of Energy's Office of Technology Development, Robotics Technology Development Program (RTDP). The RTDP is developing methods of computer simulation to estimate telerobotic system performance. Data were collected to provide point estimates to be used in a task network simulation model. Three skilled operators performed six repetitions of a pipe cutting task representative of typical teleoperation cutting operations.

  1. Benchmarking the IBM 3090 with Vector Facility

    SciTech Connect

    Brickner, R.G.; Wasserman, H.J.; Hayes, A.H.; Moore, J.W.

    1986-01-01

    The IBM 3090 with Vector Facility is an extremely interesting machine because it combines very good scalar performance with enhanced vector and multitasking performance. For many IBM installations with a large scientific workload, the 3090/vector/MTF combination may be an ideal means of increasing throughput at minimum cost. However, neither the vector nor multitasking capabilities are sufficiently developed to make the 3090 competitive with our current worker machines for our large-scale scientific codes.

  2. Personal and Network Dynamics in Performance of Knowledge Workers: A Study of Australian Breast Radiologists

    PubMed Central

    Tavakoli Taba, Seyedamir; Hossain, Liaquat; Heard, Robert; Brennan, Patrick; Lee, Warwick; Lewis, Sarah

    2016-01-01

    Materials and Methods: In this paper, we propose a theoretical model based upon previous studies of the personal and social network dynamics of job performance. We provide empirical support for this model using real-world data within the context of the Australian radiology profession. An examination of radiologists' professional network topology through structural-positional and relational dimensions and radiologists' personal characteristics in terms of knowledge, experience and self-esteem is provided. Thirty-one breast imaging radiologists completed a purpose-designed questionnaire regarding their network characteristics and personal attributes. These radiologists also independently read a test set of 60 mammographic cases: 20 cases with cancer and 40 normal cases. A jackknife free-response operating characteristic (JAFROC) method was used to measure the radiologists' performance in detecting breast cancers. Results: Correlational analyses showed that reader performance was positively correlated with the social network variables of degree centrality and effective size, but negatively correlated with constraint and hierarchy. For personal characteristics, the number of mammograms read per year and self-esteem (self-evaluation) positively correlated with reader performance. Hierarchical multiple regression analysis indicated that the combination of the number of mammograms read per year and the network's effective size, hierarchy and tie strength was the best-fitting model, explaining 63.4% of the variance in reader performance. The results from this study indicate a positive relationship between radiologists reading high volumes of cases and expertise development, but also strongly emphasise the association between effective social/professional interactions and informal knowledge sharing with high performance. PMID:26918644

  3. Performance Analysis of Node-Disjoint Paths in Multipath Routing for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Murthy, G. Shiva; D'Souza, R. J.; Varaprasad, G.

    2011-12-01

    Routing in resource-constrained networks is still a challenge. Increasing the operational lifetime of wireless sensor networks is the major objective of energy-efficient routing protocols. Multipath routing protocols increase QoS, network reliability, and lifetime. This work analyses the node-disjoint paths that contribute to realising the objectives of multipath routing. It proposes three different criteria for selecting node-disjoint paths between the source and sink node: (i) minimum hop, (ii) maximum residual energy, and (iii) maximum path cost. End-to-end delay, residual energy and throughput are the metrics considered to evaluate the performance of the three criteria.
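The minimum-hop criterion above can be sketched as a greedy search that repeatedly takes the shortest remaining path and then blocks its interior nodes. This is an illustration, not the paper's protocol, and a greedy pass does not guarantee the maximum number of disjoint paths (a max-flow formulation would):

```python
from collections import deque

def shortest_path(adj, src, dst, blocked):
    """BFS minimum-hop path from src to dst, avoiding `blocked` nodes."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:                      # reconstruct path back to src
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, ()):
            if v not in prev and v not in blocked:
                prev[v] = u
                queue.append(v)
    return None

def node_disjoint_min_hop_paths(adj, src, dst):
    """Greedily collect node-disjoint paths, shortest first."""
    blocked, paths = set(), []
    while True:
        p = shortest_path(adj, src, dst, blocked)
        if p is None:
            return paths
        paths.append(p)
        blocked.update(p[1:-1])           # interior nodes may not be reused
```

The same skeleton accommodates the other two criteria by replacing the BFS with a search that maximizes residual energy or path cost along candidate paths.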

  4. Predicting the performance of local seismic networks using Matlab and Google Earth.

    SciTech Connect

    Chael, Eric Paul

    2009-11-01

    We have used Matlab and Google Earth to construct a prototype application for modeling the performance of local seismic networks for monitoring small, contained explosions. Published equations based on refraction experiments provide estimates of peak ground velocities as a function of event distance and charge weight. Matlab routines implement these relations to calculate the amplitudes across a network of stations from sources distributed over a geographic grid. The amplitudes are then compared to ambient noise levels at the stations, and scaled to determine the smallest yield that could be detected at each source location by a specified minimum number of stations. We use Google Earth as the primary user interface, both for positioning the stations of a hypothetical local network, and for displaying the resulting detection threshold contours.
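The thresholding idea can be sketched by inverting a generic peak-ground-velocity scaling law v = k·W^a / r^b for the smallest charge weight W each station can detect, then taking the m-th smallest over stations. The coefficients and the SNR factor below are illustrative assumptions, not the published refraction-experiment values:

```python
import math

def min_detectable_yield(stations, source, m, k=1.0, a=0.7, b=1.5, snr=2.0):
    """Smallest charge weight W detectable by at least m stations, assuming
    an illustrative peak-ground-velocity law v = k * W**a / r**b.

    stations: list of (x, y, noise_level); source: (x, y)."""
    per_station = []
    for x, y, noise in stations:
        r = math.hypot(x - source[0], y - source[1])
        # invert v = k * W**a / r**b for the W that gives v = snr * noise
        w = (snr * noise * r**b / k) ** (1.0 / a)
        per_station.append(w)
    per_station.sort()
    return per_station[m - 1]   # the m-th smallest: m stations can see this yield
```

Evaluating this over a geographic grid of source locations yields exactly the detection-threshold contours the tool displays in Google Earth.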

  5. A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2001-01-01

    In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.

  6. Modelling and temporal performances evaluation of networked control systems using (max, +) algebra

    NASA Astrophysics Data System (ADS)

    Ammour, R.; Amari, S.

    2015-01-01

    In this paper, we address the problem of temporal performances evaluation of producer/consumer networked control systems. The aim is to develop a formal method for evaluating the response time of this type of control systems. Our approach consists on modelling, using Petri nets classes, the behaviour of the whole architecture including the switches that support multicast communications used by this protocol. (max, +) algebra formalism is then exploited to obtain analytical formulas of the response time and the maximal and minimal bounds. The main novelty is that our approach takes into account all delays experienced at the different stages of networked automation systems. Finally, we show how to apply the obtained results through an example of networked control system.
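The (max, +) formalism replaces the ordinary sum/product pair with max/sum, so the timing of a discrete-event system evolves as x(k+1) = A ⊗ x(k). A minimal sketch of the two core operations (generic max-plus algebra, not the paper's specific Petri net model):

```python
NEG_INF = float('-inf')   # the (max, +) "zero": identity for max, absorbing for +

def maxplus_matmul(A, B):
    """(A ⊗ B)[i][j] = max_k (A[i][k] + B[k][j])."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def maxplus_vec(A, x):
    """One firing step of a timed event graph: x(k+1) = A ⊗ x(k)."""
    return [max(A[i][k] + x[k] for k in range(len(x))) for i in range(len(A))]
```

Iterating `maxplus_vec` gives event completion times, from which response-time bounds of the kind derived in the paper can be read off.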

  7. Designing optimal greenhouse gas observing networks that consider performance and cost

    DOE PAGES

    Lucas, D. D.; Yver Kwok, C.; Cameron-Smith, P.; Graven, H.; Bergmann, D.; Guilderson, T. P.; Weiss, R.; Keeling, R.

    2014-12-23

    Emission rates of greenhouse gases (GHGs) entering into the atmosphere can be inferred using mathematical inverse approaches that combine observations from a network of stations with forward atmospheric transport models. Some locations for collecting observations are better than others for constraining GHG emissions through the inversion, but the best locations for the inversion may be inaccessible or limited by economic and other non-scientific factors. We present a method to design an optimal GHG observing network in the presence of multiple objectives that may be in conflict with each other. As a demonstration, we use our method to design a prototype network of six stations to monitor summertime emissions in California of the potent GHG 1,1,1,2-tetrafluoroethane (CH2FCF3, HFC-134a). We use a multiobjective genetic algorithm to evolve network configurations that seek to jointly maximize the scientific accuracy of the inferred HFC-134a emissions and minimize the associated costs of making the measurements. The genetic algorithm effectively determines a set of "optimal" observing networks for HFC-134a that satisfy both objectives (i.e., the Pareto frontier). The Pareto frontier is convex, and clearly shows the tradeoffs between performance and cost, and the diminishing returns in trading one for the other. Without difficulty, our method can be extended to design optimal networks to monitor two or more GHGs with different emissions patterns, or to incorporate other objectives and constraints that are important in the practical design of atmospheric monitoring networks.
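With two objectives to minimize (say, inversion error and measurement cost), the Pareto frontier the authors describe can be extracted from any finite set of candidate networks with a standard sweep. This is a generic sketch, independent of their genetic algorithm:

```python
def pareto_frontier(points):
    """Non-dominated subset of (objective1, objective2) pairs, both to be
    minimized: sort by the first objective, then keep each point that
    strictly improves the best second objective seen so far."""
    frontier, best_second = [], float('inf')
    for p in sorted(points):
        if p[1] < best_second:
            frontier.append(p)
            best_second = p[1]
    return frontier
```

Plotting the surviving points shows the convex tradeoff curve and the diminishing returns the abstract mentions.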

  8. A New Approach in Advance Network Reservation and Provisioning for High-Performance Scientific Data Transfers

    SciTech Connect

    Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex

    2010-01-28

    Scientific applications already generate many terabytes and even petabytes of data from supercomputer runs and large-scale experiments. The need for transferring data chunks of ever-increasing sizes through the network shows no sign of abating. Hence, we need high-bandwidth, high-speed networks such as ESnet (Energy Sciences Network). Network reservation systems, e.g. ESnet's OSCARS (On-demand Secure Circuits and Advance Reservation System), establish guaranteed bandwidth of secure virtual circuits at a certain time, for a certain bandwidth and length of time. OSCARS checks network availability and capacity for the specified period of time, and allocates the requested bandwidth for that user if it is available. If the requested reservation cannot be granted, no further suggestion is returned to the user, and there is no way for the user to make an optimal choice. We report a new algorithm, where the user specifies the total volume that needs to be transferred, a maximum bandwidth that he/she can use, and a desired time period within which the transfer should be done. The algorithm can find alternate allocation possibilities, including the earliest time for completion or the shortest transfer duration, leaving the choice to the user. We present a novel approach for path finding in time-dependent networks, and a new polynomial algorithm to find possible reservation options according to given constraints. We have implemented our algorithm for testing and incorporation into a future version of ESnet's OSCARS. Our approach provides a basis for provisioning end-to-end high performance data transfers over storage and network resources.
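The "earliest completion" option can be sketched as a scan over a time-dependent bandwidth profile on a single link. This is illustrative only; the paper's polynomial algorithm performs path finding in a general time-dependent network:

```python
def earliest_completion(avail, volume, max_bw):
    """Return the index of the time slot in which a transfer of `volume`
    completes, moving min(available bandwidth, max_bw) per slot, or None
    if it cannot finish within the profile."""
    moved = 0.0
    for t, bw in enumerate(avail):
        moved += min(bw, max_bw)   # the user's cap limits each slot
        if moved >= volume:
            return t
    return None
```

The shortest-duration variant would instead slide a window over the profile and pick the densest feasible interval.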

  9. Designing optimal greenhouse gas observing networks that consider performance and cost

    DOE PAGES

    Lucas, D. D.; Yver Kwok, C.; Cameron-Smith, P.; Graven, H.; Bergmann, D.; Guilderson, T. P.; Weiss, R.; Keeling, R.

    2015-06-16

    Emission rates of greenhouse gases (GHGs) entering into the atmosphere can be inferred using mathematical inverse approaches that combine observations from a network of stations with forward atmospheric transport models. Some locations for collecting observations are better than others for constraining GHG emissions through the inversion, but the best locations for the inversion may be inaccessible or limited by economic and other non-scientific factors. We present a method to design an optimal GHG observing network in the presence of multiple objectives that may be in conflict with each other. As a demonstration, we use our method to design a prototype network of six stations to monitor summertime emissions in California of the potent GHG 1,1,1,2-tetrafluoroethane (CH2FCF3, HFC-134a). We use a multiobjective genetic algorithm to evolve network configurations that seek to jointly maximize the scientific accuracy of the inferred HFC-134a emissions and minimize the associated costs of making the measurements. The genetic algorithm effectively determines a set of "optimal" observing networks for HFC-134a that satisfy both objectives (i.e., the Pareto frontier). The Pareto frontier is convex, and clearly shows the tradeoffs between performance and cost, and the diminishing returns in trading one for the other. Without difficulty, our method can be extended to design optimal networks to monitor two or more GHGs with different emissions patterns, or to incorporate other objectives and constraints that are important in the practical design of atmospheric monitoring networks.

  10. Performance-Based Adaptive Fuzzy Tracking Control for Networked Industrial Processes.

    PubMed

    Wang, Tong; Qiu, Jianbin; Yin, Shen; Gao, Huijun; Fan, Jialu; Chai, Tianyou

    2016-08-01

    In this paper, the performance-based control design problem for double-layer networked industrial processes is investigated. At the device layer, the prescribed performance functions are first given to describe the output tracking performance, and then by using backstepping technique, new adaptive fuzzy controllers are designed to guarantee the tracking performance under the effects of input dead-zone and the constraint of prescribed tracking performance functions. At operation layer, by considering the stochastic disturbance, actual index value, target index value, and index prediction simultaneously, an adaptive inverse optimal controller in discrete-time form is designed to optimize the overall performance and stabilize the overall nonlinear system. Finally, a simulation example of continuous stirred tank reactor system is presented to show the effectiveness of the proposed control method.

  11. GPU Accelerated Vector Median Filter

    NASA Technical Reports Server (NTRS)

    Aras, Rifat; Shen, Yuzhong

    2011-01-01

    Noise reduction is an important step for most image processing tasks. For three-channel color images, a widely used technique is the vector median filter, in which the color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive: for a window size of n x n, each of the n^2 vectors has to be compared in distance with the other n^2 - 1 vectors. General-purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which to the best of our knowledge has not been done before. The performance of the GPU-accelerated vector median filter is compared to that of the CPU and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x performance improvement of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimization of the GPU algorithm.
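The filter itself is simple to state: within each window, the output pixel is the vector whose summed distance to all window pixels is smallest. A reference (CPU-side) sketch of that per-window step:

```python
import math

def vector_median(window):
    """Vector median of a list of 3-component pixel tuples: the pixel
    minimizing the summed Euclidean distance to every pixel in the window."""
    return min(window, key=lambda p: sum(math.dist(p, q) for q in window))
```

On a GPU, one thread per output pixel would evaluate exactly this reduction over its n x n neighborhood, which is what makes the problem a natural CUDA fit.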

  12. High performances CNTFETs achieved using CNT networks for selective gas sensing

    NASA Astrophysics Data System (ADS)

    Gorintin, Louis; Bondavalli, Paolo; Legagneux, Pierre; Pribat, Didier

    2009-08-01

    Our study deals with the use of carbon nanotube network based transistors with different metal electrodes for highly selective gas sensing. Indeed, carbon nanotube networks can be used as semiconducting materials to achieve transistors with good performance. These devices are extremely sensitive to changes in the Schottky barrier heights between single wall carbon nanotubes (SWCNTs) and the drain/source metal electrodes: gas adsorption creates an interfacial dipole that modifies the metal work function and thus the bending and height of the Schottky barrier at the contacts. Moreover, each gas interacts specifically with each metal, yielding a sort of electronic fingerprint. Using an airbrush technique for deposition, we have been able to achieve uniform random networks of carbon nanotubes suitable for large-area applications and mass production, such as the fabrication of CNT based gas sensors. These networks enable us to achieve transistors with on/off ratios of more than 5 orders of magnitude. To reach these characteristics, the density of the CNT network has been adjusted to reach the percolation threshold only for semiconducting nanotubes. These optimized devices have allowed us to improve the sensitivity of our sensors for highly selective detection of dimethyl methylphosphonate (DMMP, a sarin simulant), and even volatile drug precursors, using Pd, Au and Mo electrodes.

  13. A Workflow-based Intelligent Network Data Movement Advisor with End-to-end Performance Optimization

    SciTech Connect

    Zhu, Michelle M.; Wu, Chase Q.

    2013-11-07

    Next-generation eScience applications often generate large amounts of simulation, experimental, or observational data that must be shared and managed by collaborative organizations. Advanced networking technologies and services have been rapidly developed and deployed to facilitate such massive data transfer. However, these technologies and services have not been fully utilized, mainly because their use typically requires significant domain knowledge and in many cases application users are not even aware of their existence. By leveraging the functionalities of an existing Network-Aware Data Movement Advisor (NADMA) utility, we propose a new Workflow-based Intelligent Network Data Movement Advisor (WINDMA) with end-to-end performance optimization for this DOE funded project. This WINDMA system integrates three major components: resource discovery, data movement, and status monitoring, and supports the sharing of common data movement workflows through account and database management. This system provides a web interface and interacts with existing data/space management and discovery services such as Storage Resource Management, transport methods such as GridFTP and GlobusOnline, and network resource provisioning brokers such as ION and OSCARS. We demonstrate the efficacy of the proposed transport-support workflow system in several use cases based on its implementation and deployment in DOE wide-area networks.

  14. Optimizing the ASC WAN: evaluating network performance tools for comparing transport protocols.

    SciTech Connect

    Lydick, Christopher L.

    2007-07-01

    The Advanced Simulation & Computing Wide Area Network (ASC WAN), which is a high delay-bandwidth network connection between US Department of Energy National Laboratories, is constantly being examined and evaluated for efficiency. One of the current transport-layer protocols which is used, TCP, was developed for traffic demands which are different from that on the ASC WAN. The Stream Control Transport Protocol (SCTP), on the other hand, has shown characteristics which make it more appealing to networks such as these. Most important, before considering a replacement for TCP on any network, a testing tool that performs well against certain criteria needs to be found. In order to try to find such a tool, two popular networking tools (Netperf v.2.4.3 & v.2.4.6 (OpenSS7 STREAMS), and Iperf v.2.0.6) were tested. These tools implement both TCP and SCTP and were evaluated using four metrics: (1) How effectively can the tool reach a throughput near the bandwidth? (2) How much of the CPU does the tool utilize during operation? (3) Is the tool freely and widely available? And, (4) Is the tool actively developed? Following the analysis of those tools, this paper goes further into explaining some recommendations and ideas for future work.

  15. ER-TCP (Exponential Recovery-TCP): High-Performance TCP for Satellite Networks

    NASA Astrophysics Data System (ADS)

    Park, Mankyu; Shin, Minsu; Oh, Deockgil; Ahn, Doseob; Kim, Byungchul; Lee, Jaeyong

    A transmission control protocol (TCP) using an additive increase multiplicative decrease (AIMD) algorithm for congestion control plays a leading role in advanced Internet services. However, the AIMD method shows only low link utilization in lossy networks with long delay, such as satellite networks. This is because the cwnd dynamics of TCP are slowed by the long propagation delay, and TCP uses an inadequate congestion control algorithm, which does not distinguish packet loss due to wireless errors from loss due to congestion of the wireless networks. To overcome these problems, we propose an exponential recovery (ER) TCP that uses an exponential recovery function for rapidly occupying available bandwidth during the congestion avoidance period, and an adaptive congestion window decrease scheme using timestamp-based available bandwidth estimation (TABE) to cope with wireless channel errors. We simulate the proposed ER-TCP under various test scenarios using the ns-2 network simulator to verify its performance enhancement. Simulation results show that the proposal is a more suitable TCP than several TCP variants for the long-delay and heavy-loss-probability environments of satellite networks.
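The contrast between AIMD's linear congestion avoidance and an exponential recovery toward an estimated available window can be sketched as follows. The gain `alpha` and the per-RTT update are assumptions for illustration, not the paper's exact control law:

```python
def aimd_recovery(cwnd, rtts):
    """Linear congestion avoidance: cwnd grows by one segment per RTT."""
    return [cwnd + i for i in range(rtts)]

def exponential_recovery(cwnd, target, rtts, alpha=0.5):
    """Each RTT, close a fixed fraction `alpha` of the gap to the estimated
    available window `target`, so the gap shrinks geometrically."""
    trace = []
    for _ in range(rtts):
        trace.append(cwnd)
        cwnd += alpha * (target - cwnd)
    return trace
```

Over a 500 ms satellite RTT, the difference between "+1 per RTT" and "halve the remaining gap per RTT" is exactly the utilization gap the abstract describes.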

  16. Integration and the performance of healthcare networks: do integration strategies enhance efficiency, profitability, and image?

    PubMed Central

    Wan, Thomas T. H.; Ma, Allen; Lin, Blossom Y. J.

    2001-01-01

    Purpose: This study examines the integration effects on efficiency and financial viability of the top 100 integrated healthcare networks (IHNs) in the United States. Theory: A contingency-strategic theory is used to identify the relationship of IHNs' performance to their structural and operational characteristics and integration strategies. Methods: The lists of the top 100 IHNs ranked in two years, 1998 and 1999, by the SMG Marketing Group were merged to create a database for the study. Multiple indicators were used to examine the relationship between IHNs' characteristics and their performance in efficiency and financial viability. A path analytical model was developed and validated by the Mplus statistical program. Factors influencing the top 100 IHNs' images, represented by attaining a ranking among the top 100 in two consecutive years, were analysed. Results and conclusion: No positive associations were found between integration and network performance in efficiency or profits. Longitudinal data are needed to investigate the effect of integration on healthcare networks' financial performance. PMID:16896405

  17. A high performance long-reach passive optical network with a novel excess bandwidth distribution scheme

    NASA Astrophysics Data System (ADS)

    Chao, I.-Fen; Zhang, Tsung-Min

    2015-06-01

    Long-reach passive optical networks (LR-PONs) have been considered to be promising solutions for future access networks. In this paper, we propose a distributed medium access control (MAC) scheme over an advantageous LR-PON network architecture that reroutes the control information from and back to all ONUs through an (N + 1) × (N + 1) star coupler (SC) deployed near the ONUs, thereby overcoming the extremely long propagation delay problem in LR-PONs. In the network, the control slot is designed to contain all bandwidth requirements of all ONUs and is in-band time-division-multiplexed with a number of data slots within a cycle. In the proposed MAC scheme, a novel profit-weight-based dynamic bandwidth allocation (P-DBA) scheme is presented. The algorithm is designed to efficiently and fairly distribute the amount of excess bandwidth based on a profit value derived from the excess bandwidth usage of each ONU, which resolves the problems of previously reported DBA schemes that are either unfair or inefficient. The simulation results show that the proposed decentralized algorithms exhibit a nearly three-order-of-magnitude improvement in delay performance compared to the centralized algorithms over LR-PONs. Moreover, the newly proposed P-DBA scheme guarantees low delay performance and fairness even under attack by a malevolent ONU, irrespective of traffic loads and burstiness.
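The excess-distribution idea can be sketched as a single proportional pass: each ONU gets its guaranteed share, and the excess is split in proportion to a per-ONU profit weight. The field names are hypothetical, and the actual P-DBA derives the profit value from past excess-bandwidth usage and handles reallocation of unclaimed excess, which this one-pass sketch does not:

```python
def distribute_excess(requests, guaranteed, excess, profits):
    """One proportional pass: guaranteed share first, then the excess split
    by profit weight among ONUs still requesting more (hypothetical fields)."""
    demand = {o: max(0, requests[o] - guaranteed[o]) for o in requests}
    total_profit = sum(profits[o] for o in demand if demand[o] > 0) or 1
    grants = {}
    for o in requests:
        extra = min(demand[o], excess * profits[o] / total_profit) if demand[o] else 0
        grants[o] = min(requests[o], guaranteed[o]) + extra
    return grants
```

An ONU that historically hoards excess bandwidth would see its profit weight fall, which is how the scheme punishes a malevolent ONU without a central arbiter.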

  18. Data Analysis for the SOLIS Vector Spectromagnetograph

    NASA Technical Reports Server (NTRS)

    Jones, Harrison P.; Harvey, John W.; Oegerle, William (Technical Monitor)

    2002-01-01

    The National Solar Observatory's SOLIS Vector Spectromagnetograph (VSM), which will produce three or more full-disk maps of the Sun's photospheric vector magnetic field every day for at least one solar magnetic cycle, is in the final stages of assembly. Initial observations, including cross-calibration with the current NASA/NSO spectromagnetograph (SPM), will soon be carried out at a test site in Tucson. This paper discusses data analysis techniques for reducing the raw data, calculation of line-of-sight magnetograms, and both quick-look and high-precision inference of vector fields from Stokes spectral profiles. Existing SPM algorithms, suitably modified to accommodate the cameras, scanning pattern, and polarization calibration optics of the VSM, will be used to "clean" the raw data and to produce line-of-sight magnetograms. A recent version of the High Altitude Observatory Milne-Eddington (HAO-ME) inversion code (Skumanich and Lites 1987, ApJ 322, p. 473) will be used for high-precision vector fields, since the algorithm has been extensively tested, is well understood, and is fast enough to complete data analysis within 24 hours of data acquisition. The simplified inversion algorithm of Auer, Heasley, and House (1977, Sol. Phys. 55, p. 47) forms the initial guess for this version of the HAO-ME code and will be used for quick-look vector analysis of VSM data, since its performance on simulated Stokes profiles is better than that of other candidate methods. Improvements (e.g., principal components analysis or neural networks) are under consideration and will be straightforward to implement. However, current resources are sufficient to store the original Stokes profiles only long enough for high-precision analysis. Retrospective reduction of Stokes data with improved methods will not be possible, and modifications will only be introduced when the advantages of doing so are compelling enough to justify a discontinuity in the long-term data stream.

  19. Automated image segmentation using support vector machines

    NASA Astrophysics Data System (ADS)

    Powell, Stephanie; Magnotta, Vincent A.; Andreasen, Nancy C.

    2007-03-01

    Neurodegenerative and neurodevelopmental diseases demonstrate problems associated with brain maturation and aging. Automated methods to delineate brain structures of interest are required to analyze large amounts of imaging data like that being collected in several ongoing multi-center studies. We have previously reported on using artificial neural networks (ANN) to define subcortical brain structures including the thalamus (0.88), caudate (0.85) and the putamen (0.81). In this work, a priori probability information was generated using Thirion's demons registration algorithm. The input vector consisted of the a priori probability, spherical coordinates, and an iris of surrounding signal intensity values. We have applied the support vector machine (SVM) machine learning algorithm to automatically segment subcortical and cerebellar regions using the same input vector information. The SVM architecture was derived from the ANN framework. Training was completed using a radial-basis function kernel with gamma equal to 5.5. Training was performed using 15,000 vectors collected from 15 training images in approximately 10 minutes. The resulting support vectors were applied to delineate 10 images not part of the training set. Relative overlap calculated for the subcortical structures was 0.87 for the thalamus, 0.84 for the caudate, 0.84 for the putamen, and 0.72 for the hippocampus. Relative overlap for the cerebellar lobes ranged from 0.76 to 0.86. The reliability of the SVM based algorithm was similar to the inter-rater reliability between manual raters and can be achieved without rater intervention.
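
    The segmentation quality figures quoted above are relative overlap values. A common definition (assumed here; the abstract does not restate it) is the intersection-over-union of the automated and manual label masks:

```python
def relative_overlap(auto_mask, manual_mask):
    """Relative overlap |A ∩ B| / |A ∪ B| between two binary
    segmentations, given as equal-length 0/1 voxel label sequences."""
    inter = sum(1 for a, m in zip(auto_mask, manual_mask) if a and m)
    union = sum(1 for a, m in zip(auto_mask, manual_mask) if a or m)
    return inter / union if union else 1.0  # two empty masks agree fully
```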

  20. A Dynamic Network Model to Explain the Development of Excellent Human Performance.

    PubMed

    Den Hartigh, Ruud J R; Van Dijk, Marijn W G; Steenbeek, Henderien W; Van Geert, Paul L C

    2016-01-01

    Across different domains, from sports to science, some individuals accomplish excellent levels of performance. For over 150 years, researchers have debated the roles of specific nature and nurture components to develop excellence. In this article, we argue that the key to excellence does not reside in specific underlying components, but rather in the ongoing interactions among the components. We propose that excellence emerges out of dynamic networks consisting of idiosyncratic mixtures of interacting components such as genetic endowment, motivation, practice, and coaching. Using computer simulations we demonstrate that the dynamic network model accurately predicts typical properties of excellence reported in the literature, such as the idiosyncratic developmental trajectories leading to excellence and the highly skewed distributions of productivity present in virtually any achievement domain. Based on this novel theoretical perspective on excellent human performance, this article concludes by suggesting policy implications and directions for future research. PMID:27148140
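
    The qualitative behavior described above can be reproduced with a toy simulation. The sketch below is not the authors' model; it merely couples a few logistically growing "ability" components with random interactions per simulated individual, which is enough to produce idiosyncratic trajectories and a wide spread of final levels across a population:

```python
import random

def simulate(n_people=500, n_comp=4, steps=100, seed=1):
    """Toy dynamic-network growth model: each person's ability
    components grow logistically and influence one another through
    random couplings. Illustrative only; not the authors' equations."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_people):
        x = [0.1] * n_comp                                  # initial levels
        r = [rng.uniform(0.01, 0.1) for _ in range(n_comp)]  # growth rates
        c = [[rng.uniform(-0.02, 0.05) for _ in range(n_comp)]
             for _ in range(n_comp)]                         # couplings
        for _ in range(steps):
            x = [min(10.0, max(0.0,
                 xi + r[i] * xi * (1.0 - xi)
                 + xi * sum(c[i][j] * x[j] for j in range(n_comp) if j != i)))
                 for i, xi in enumerate(x)]
        finals.append(sum(x))
    return finals
```

    Plotting a histogram of the returned final levels shows the heavy right tail typical of achievement distributions.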

  1. A Dynamic Network Model to Explain the Development of Excellent Human Performance

    PubMed Central

    Den Hartigh, Ruud J. R.; Van Dijk, Marijn W. G.; Steenbeek, Henderien W.; Van Geert, Paul L. C.

    2016-01-01

    Across different domains, from sports to science, some individuals accomplish excellent levels of performance. For over 150 years, researchers have debated the roles of specific nature and nurture components to develop excellence. In this article, we argue that the key to excellence does not reside in specific underlying components, but rather in the ongoing interactions among the components. We propose that excellence emerges out of dynamic networks consisting of idiosyncratic mixtures of interacting components such as genetic endowment, motivation, practice, and coaching. Using computer simulations we demonstrate that the dynamic network model accurately predicts typical properties of excellence reported in the literature, such as the idiosyncratic developmental trajectories leading to excellence and the highly skewed distributions of productivity present in virtually any achievement domain. Based on this novel theoretical perspective on excellent human performance, this article concludes by suggesting policy implications and directions for future research. PMID:27148140

  2. Evaluating the performances of statistical and neural network based control charts

    NASA Astrophysics Data System (ADS)

    Teoh, Kok Ban; Ong, Hong Choon

    2015-10-01

    Control charts are used widely in many fields, and the traditional control chart is no longer adequate for detecting a sudden change in a particular process. Run rules built into the Shewhart X̄ control chart, the Exponentially Weighted Moving Average (EWMA) control chart, the Cumulative Sum (CUSUM) control chart, and neural network based control charts have therefore been introduced to overcome the limited sensitivity of the traditional control chart. In this study, the average run length (ARL) and median run length (MRL) under shifts in the process mean are computed for the control charts mentioned. We show that interpretations based only on the ARL can be misleading; thus, the MRL is also used to evaluate the performance of the control charts. From this study, the neural network based control chart is found to possess better performance than the run rules of the Shewhart X̄ control chart, the EWMA and the CUSUM control charts.
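
    ARL and MRL are easy to estimate by simulation. A minimal sketch for an EWMA chart follows (the parameters λ = 0.2 and L = 2.86 are illustrative choices, not those used in the study); the run length is the number of samples until the EWMA statistic first leaves the control limits:

```python
import random, statistics

def run_length_ewma(shift, lam=0.2, L=2.86, n_runs=200, max_n=5000, seed=0):
    """Monte Carlo run lengths of a two-sided EWMA chart monitoring a
    N(shift, 1) process (in-control mean 0). Returns (ARL, MRL).
    Steady-state control limit: L * sqrt(lam / (2 - lam))."""
    rng = random.Random(seed)
    limit = L * (lam / (2 - lam)) ** 0.5
    lengths = []
    for _ in range(n_runs):
        z, t = 0.0, 0
        while t < max_n:
            t += 1
            z = lam * rng.gauss(shift, 1.0) + (1 - lam) * z
            if abs(z) > limit:   # out-of-control signal
                break
        lengths.append(t)
    return statistics.mean(lengths), statistics.median(lengths)
```

    Comparing the two statistics for the same chart shows why the ARL alone can mislead: run-length distributions are strongly right-skewed, so the mean and median can differ substantially.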

  4. Team performance and collective efficacy in the dynamic psychology of competitive team: a Bayesian network analysis.

    PubMed

    Fuster-Parra, P; García-Mas, A; Ponseti, F J; Leo, F M

    2015-04-01

    The purpose of this paper was to discover the relationships among 22 relevant psychological features in semi-professional football players in order to study team performance and collective efficacy via a Bayesian network (BN). The paper optimizes team performance and collective efficacy using the intercausal reasoning pattern, a very common pattern in human reasoning. The BN is used to make inferences regarding our problem, from which we draw several conclusions; among them: maximizing team performance causes a decrease in collective efficacy, and when team performance reaches its minimum value it causes an increase in moderate/high values of collective efficacy. Similarly, we may instead reason by optimizing team collective efficacy. The BN also allows us to determine which features have the strongest influence on performance and which on collective efficacy. From the BN, two different coaching styles were differentiated taking into account the local Markov property: training leadership and autocratic leadership.
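
    Intercausal ("explaining away") reasoning can be demonstrated on a three-node toy BN by exact enumeration. This is purely illustrative and far smaller than the paper's 22-feature network: A and B are two independent binary causes of an effect E with a noisy-OR link; observing E = 1 raises the belief in A, and additionally observing B = 1 lowers it again:

```python
def posterior_explaining_away():
    """Exact inference on a toy BN A -> E <- B with a noisy-OR CPT.
    Returns (prior P(A), P(A | E=1), P(A | E=1, B=1))."""
    pA, pB = 0.3, 0.3
    def pE(a, b):  # noisy-OR with leak 0.05, per-cause failure prob 0.4
        return 1 - 0.95 * (0.4 if a else 1.0) * (0.4 if b else 1.0)
    def post_A(evidence_B=None):
        num = den = 0.0
        for a in (0, 1):
            for b in (0, 1):
                if evidence_B is not None and b != evidence_B:
                    continue
                w = (pA if a else 1 - pA) * (pB if b else 1 - pB) * pE(a, b)
                den += w
                if a:
                    num += w
        return num / den
    return pA, post_A(), post_A(1)
```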

  6. Predicting Subcontractor Performance Using Web-Based Evolutionary Fuzzy Neural Networks

    PubMed Central

    2013-01-01

    Subcontractor performance directly affects project success. The use of inappropriate subcontractors may result in individual work delays, cost overruns, and quality defects throughout the project. This study develops web-based Evolutionary Fuzzy Neural Networks (EFNNs) to predict subcontractor performance. EFNNs are a fusion of Genetic Algorithms (GAs), Fuzzy Logic (FL), and Neural Networks (NNs). FL is primarily used to mimic high-level decision-making processes and deal with uncertainty in the construction industry. NNs are used to identify the association between previous performance and future status when predicting subcontractor performance. GAs optimize the parameters required by FL and NNs. EFNNs encode FL and NNs using floating-point numbers to shorten the length of a string. A multi-cut-point crossover operator is used to explore the parameter space while retaining solution legality. Finally, the applicability of the proposed EFNNs is validated using real subcontractors. The EFNNs are evolved using 22 historical patterns and tested using 12 unseen cases. Application results show that the proposed EFNNs surpass FL and NNs in predicting subcontractor performance. The proposed approach improves prediction accuracy and reduces the effort required to predict subcontractor performance, providing field operators with web-based remote access to a reliable, scientific prediction mechanism. PMID:23864830
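
    The multi-cut-point crossover operator named above can be sketched in a few lines: cut points are drawn at random, sorted, and the child alternates parent segments between them. This is a generic illustration of the operator, not the paper's exact implementation:

```python
import random

def multi_cut_crossover(p1, p2, n_cuts=2, rng=None):
    """Multi-cut-point crossover on equal-length floating-point
    chromosomes: alternate parent segments between sorted random cuts."""
    rng = rng or random.Random(0)
    cuts = sorted(rng.sample(range(1, len(p1)), n_cuts))
    child, take_first, prev = [], True, 0
    for c in cuts + [len(p1)]:
        child.extend((p1 if take_first else p2)[prev:c])
        take_first = not take_first
        prev = c
    return child
```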

  7. Performance of the GLOBALink/HF Network during the Halloween Storm Period of 2003

    NASA Astrophysics Data System (ADS)

    Goodman, J. M.; Patterson, J. D.

    2004-12-01

    The GLOBALink/HF system, developed and managed by ARINC, is a global high frequency data link communications network providing service to commercial aviation worldwide. It consists of 14 ground stations located around the globe, and a network control center located in Annapolis. The system was designed to provide reliable aircraft communications through the use of multi-station accessibility, quasi-dynamic frequency management, and a robust time-diversity modem with equalization. Although HF (i.e., 3-30 MHz) signaling has a poor reputation when considering individual circuits, it has been shown that near-real time channel evaluation and/or adaptive frequency management can improve performance considerably. Moreover, multi-station network operation provides an additional form of diversity, which is probably the most valuable design strategy. Our paper briefly describes the system, but the major discussion will be about performance metrics derived during super storms. The Halloween storm period of October-November 2003 was a period of significant ionospheric effects. Large geomagnetic storms were observed. We have examined the impact on HFDL of the various phenomena observed during this period. We have found some impact on HFDL performance for the October 29-31 period, but it is minimal in amplitude. While HFDL is based upon HF propagation, a medium known for its vulnerability to ionospheric variability, the system performance metric does not reflect this vulnerability to a significant degree. This is thought to be the result of the substantial amount of diversity built into the system, especially the adaptive frequency management system, Dynacastr, a system developed by RPSI. The adaptive frequency management system involves the use of active frequency tables (or AFTs) that are based upon space weather observables. During the stormy weeks of October and November, ARINC issued over seven changes to the AFTs used by every HFDL station. These changes helped the HFDL…

  8. Scalable Memory Registration for High-Performance Networks Using Helper Threads

    SciTech Connect

    Li, Dong; Cameron, Kirk W.; Nikolopoulos, Dimitrios; de Supinski, Bronis R.; Schulz, Martin

    2011-01-01

    Remote DMA (RDMA) enables high performance networks to reduce data copying between an application and the operating system (OS). However, RDMA operations in some high performance networks require communication memory to be explicitly registered with the network adapter and pinned by the OS. Memory registration and pinning limit the flexibility of the memory system and reduce the amount of memory that user processes can allocate. These issues become more significant on multicore platforms, since registered memory demand grows linearly with the number of processor cores. In this paper we propose a new memory registration/deregistration strategy to reduce registered memory on multicore architectures for HPC applications. We hide the cost of dynamic memory management by offloading all dynamic memory registration and deregistration requests to a dedicated memory management helper thread. We investigate design policies and performance implications of the helper thread approach. We evaluate our framework with the NAS parallel benchmarks, for which our registration scheme significantly reduces the registered memory (23.62% on average and up to 49.39%) and avoids memory registration/deregistration costs for reused communication memory. We show that our system enables the execution of problem sizes that could not complete under existing memory registration strategies.
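
    The helper-thread idea, decoupling registration requests from the communication path, can be sketched with a worker thread draining a request queue. The sketch below is a toy analogue: the method names are illustrative, and the set operations merely stand in for the actual registration calls made in an RDMA stack (e.g. pinning pages with the NIC driver):

```python
import queue, threading

class RegistrationHelper:
    """Offload (de)registration requests to a dedicated helper thread
    so the requesting thread never blocks on them. Toy sketch only."""
    def __init__(self):
        self.registered = set()
        self.tasks = queue.Queue()
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def _run(self):
        while True:
            op, buf = self.tasks.get()
            if op == "stop":
                break
            if op == "reg":
                self.registered.add(buf)      # stands in for registering memory
            else:
                self.registered.discard(buf)  # stands in for deregistering
            self.tasks.task_done()

    def register_async(self, buf):
        self.tasks.put(("reg", buf))

    def deregister_async(self, buf):
        self.tasks.put(("dereg", buf))

    def close(self):
        self.tasks.join()                 # wait for queued requests to drain
        self.tasks.put(("stop", None))
        self.worker.join()
```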

  9. Performance of Thorup's Shortest Path Algorithm for Large-Scale Network Simulation

    NASA Astrophysics Data System (ADS)

    Sakumoto, Yusuke; Ohsaki, Hiroyuki; Imase, Makoto

    In this paper, we investigate the performance of Thorup's algorithm by comparing it to Dijkstra's algorithm for large-scale network simulations. One of the challenges toward the realization of large-scale network simulations is the efficient execution to find shortest paths in a graph with N vertices and M edges. The time complexity for solving a single-source shortest path (SSSP) problem with Dijkstra's algorithm with a binary heap (DIJKSTRA-BH) is O((M+N)log N). A sophisticated alternative, Thorup's algorithm, has been proposed. The original version of Thorup's algorithm (THORUP-FR) has the time complexity of O(M+N). A simplified version of Thorup's algorithm (THORUP-KL) has the time complexity of O(Mα(N)+N) where α(N) is the functional inverse of the Ackermann function. In this paper, we compare the performances (i.e., execution time and memory consumption) of THORUP-KL and DIJKSTRA-BH, since it is known that THORUP-FR is at least ten times slower than Dijkstra's algorithm with a Fibonacci heap. We find that (1) THORUP-KL is almost always faster than DIJKSTRA-BH for large-scale network simulations, and (2) the performances of THORUP-KL and DIJKSTRA-BH deviate from their time complexities due to the presence of the memory cache in the microprocessor.
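
    The DIJKSTRA-BH baseline is the textbook algorithm below: a binary heap (Python's heapq) holds tentative distances, and stale heap entries are skipped on pop, giving the O((M+N) log N) bound:

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths with a binary heap.
    adj: {u: [(v, weight), ...]} with non-negative weights.
    Returns {vertex: distance} for vertices reachable from src."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale entry, a shorter path was already found
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```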

  10. Impact of Crosstalk Channel Estimation on the DSM Performance for DSL Networks

    NASA Astrophysics Data System (ADS)

    Lindqvist, Neiva; Lindqvist, Fredrik; Monteiro, Marcio; Dortschy, Boris; Pelaes, Evaldo; Klautau, Aldebaro

    2010-12-01

    The development and assessment of spectrum management methods for the copper access network are usually conducted under the assumption of accurate channel information. Acquiring such information implies, in practice, estimation of the crosstalk coupling functions between the twisted-pair lines in the access network. This type of estimation is not supported or required by current digital subscriber line (DSL) standards. In this work, we investigate the impact of the inaccuracies in crosstalk estimation on the performance of dynamic spectrum management (DSM) algorithms. A recently proposed crosstalk channel estimator is considered and a statistical sensitivity analysis is conducted to investigate the effects of the crosstalk estimation error on the bitloading and on the achievable data rate for a transmission line. The DSM performance is then evaluated based on the achievable data rates obtained through experiments with DSL setups and computer simulations. Since these experiments assume network scenarios consisting of real twisted-pair cables, both crosstalk channel estimates and measurements (for a reference comparison) are considered. The results indicate that the error introduced by the adopted estimation procedure does not compromise the performance of the DSM techniques, that is, the considered crosstalk channel estimator provides enough means for a practical implementation of DSM.
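
    The link between crosstalk (SNR) estimation error and achievable bitloading can be illustrated with the standard gap approximation b = log2(1 + SNR/Γ). The gap value below is an illustrative figure, not one taken from the paper:

```python
import math

def bits_per_tone(snr_linear, gamma_db=9.8):
    """Gap-approximation bitloading b = log2(1 + SNR/Γ), with the SNR
    gap Γ given in dB (illustrative value; the paper's loading rules
    and gap are not restated in the abstract)."""
    gamma = 10 ** (gamma_db / 10)
    return math.log2(1 + snr_linear / gamma)

def rate_sensitivity(snr_linear, err_db):
    """Change in bits/tone when the estimated SNR is off by err_db."""
    snr_err = snr_linear * 10 ** (err_db / 10)
    return bits_per_tone(snr_err) - bits_per_tone(snr_linear)
```

    At high SNR each dB of estimation error shifts the loading by roughly log2(10)/10 ≈ 0.33 bits per tone, which is why moderate crosstalk estimation errors translate into only small rate changes.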

  11. GENE NETWORK EFFECTS ON BRAIN MICROSTRUCTURE AND INTELLECTUAL PERFORMANCE IDENTIFIED IN 472 TWINS

    PubMed Central

    Chiang, Ming-Chang; Barysheva, Marina; McMahon, Katie L.; de Zubicaray, Greig I.; Johnson, Kori; Montgomery, Grant W.; Martin, Nicholas G.; Toga, Arthur W.; Wright, Margaret J.; Shapshak, Paul; Thompson, Paul M.

    2012-01-01

    A major challenge in neuroscience is finding which genes affect brain integrity, connectivity, and intellectual function. Discovering influential genes holds vast promise for neuroscience, but typical genome-wide searches assess around one million genetic variants one-by-one, leading to intractable false positive rates, even with vast samples of subjects. Even more intractable is the question of which genes interact and how they work together to affect brain connectivity. Here we report a novel approach that discovers which genes contribute to brain wiring and fiber integrity at all pairs of points in a brain scan. We studied genetic correlations between thousands of points in human brain images from 472 twins and their non-twin siblings (mean age: 23.7±2.1 SD years; 193 M/279 F). We combined clustering with genome-wide scanning to find brain systems with common genetic determination. We then filtered the image in a new way to boost power to find causal genes. Using network analysis, we found a network of genes that affect brain wiring in healthy young adults. Our new strategy makes it more computationally tractable to discover genes that affect brain integrity. The gene network showed small-world and scale-free topologies, suggesting efficiency in genetic interactions, and resilience to network disruption. Genetic variants at hubs of the network influence intellectual performance by modulating associations between performance intelligence quotient (IQ) and the integrity of major white matter tracts, such as the callosal genu and splenium, cingulum, optic radiations, and the superior longitudinal fasciculus. PMID:22723713

  12. Modeling of Abrasion Resistance Performance of Persian Handmade Wool Carpets Using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Gupta, Shravan Kumar; Goswami, Kamal Kanti

    2015-10-01

    This paper presents the application of Artificial Neural Network (ANN) modeling for the prediction of the abrasion resistance of Persian handmade wool carpets. Four carpet constructional parameters, namely knot density, pile height, number of plies in the pile yarn and pile yarn twist, have been used as input parameters for the ANN model. The prediction performance was judged in terms of statistical parameters like the correlation coefficient (R) and Mean Absolute Percentage Error (MAPE). Though the training performance of the ANN was very good, the generalization ability was not up to the mark. This implies that a larger number of training samples should be used for the adequate training of ANN models.
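
    The two evaluation measures are standard and can be computed directly; a self-contained sketch:

```python
def correlation_and_mape(actual, predicted):
    """Pearson correlation coefficient R and Mean Absolute Percentage
    Error (MAPE, in %), the goodness-of-fit statistics named above."""
    n = len(actual)
    ma = sum(actual) / n
    mp = sum(predicted) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
    sd_a = sum((a - ma) ** 2 for a in actual) ** 0.5
    sd_p = sum((p - mp) ** 2 for p in predicted) ** 0.5
    r = cov / (sd_a * sd_p)
    mape = 100.0 / n * sum(abs((a - p) / a) for a, p in zip(actual, predicted))
    return r, mape
```

    Note that a high R with a high MAPE is possible (systematic over- or under-prediction), which is why both measures are reported together.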

  13. Performance Analysis of AODV Routing Protocol for Wireless Sensor Network based Smart Metering

    NASA Astrophysics Data System (ADS)

    Hasan Farooq; Low Tang Jung

    2013-06-01

    Today no one can deny the need for the Smart Grid, and upgrading the outdated electric infrastructure is considered of utmost importance to cope with the ever increasing electric load demand. Wireless Sensor Network (WSN) technology is considered a promising candidate for internetworking smart meters with the gateway using a mesh topology. This paper investigates the performance of the AODV routing protocol for WSN based smart metering deployment. Three case studies are presented to analyze its performance based on four metrics: (i) Packet Delivery Ratio, (ii) Average Energy Consumption of Nodes, (iii) Average End-to-End Delay and (iv) Normalized Routing Load.
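
    The four metrics reduce to simple ratios over simulation counters. A sketch with illustrative argument names (the paper's trace format is not specified in the abstract):

```python
def wsn_metrics(sent, received, delays, energy_used, routing_pkts):
    """Compute the four evaluation metrics named above from simulation
    counters: packets sent/received, per-packet delays, per-node energy
    consumption, and the number of routing-control packets."""
    pdr = received / sent if sent else 0.0                   # (i) packet delivery ratio
    avg_energy = sum(energy_used) / len(energy_used)         # (ii) mean energy per node
    avg_delay = sum(delays) / len(delays) if delays else 0.0 # (iii) end-to-end delay
    nrl = routing_pkts / received if received else float("inf")  # (iv) routing load
    return pdr, avg_energy, avg_delay, nrl
```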

  14. Analysing the Correlation between Social Network Analysis Measures and Performance of Students in Social Network-Based Engineering Education

    ERIC Educational Resources Information Center

    Putnik, Goran; Costa, Eric; Alves, Cátia; Castro, Hélio; Varela, Leonilde; Shah, Vaibhav

    2016-01-01

    Social network-based engineering education (SNEE) is designed and implemented as a model of Education 3.0 paradigm. SNEE represents a new learning methodology, which is based on the concept of social networks and represents an extended model of project-led education. The concept of social networks was applied in the real-life experiment,…

  15. Distributed semi-supervised support vector machines.

    PubMed

    Scardapane, Simone; Fierimonte, Roberto; Di Lorenzo, Paolo; Panella, Massimo; Uncini, Aurelio

    2016-08-01

    The semi-supervised support vector machine (S(3)VM) is a well-known algorithm for performing semi-supervised inference under the large margin principle. In this paper, we are interested in the problem of training a S(3)VM when the labeled and unlabeled samples are distributed over a network of interconnected agents. In particular, the aim is to design a distributed training protocol over networks, where communication is restricted only to neighboring agents and no coordinating authority is present. Using a standard relaxation of the original S(3)VM, we formulate the training problem as the distributed minimization of a non-convex social cost function. To find a (stationary) solution in a distributed manner, we employ two different strategies: (i) a distributed gradient descent algorithm; (ii) a recently developed framework for In-Network Nonconvex Optimization (NEXT), which is based on successive convexifications of the original problem, interleaved by state diffusion steps. Our experimental results show that the proposed distributed algorithms have performance comparable to a centralized implementation, while highlighting the pros and cons of the proposed solutions. To date, this is the first work that paves the way toward the broad field of distributed semi-supervised learning over networks. PMID:27179615
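
    The first of the two strategies, distributed gradient descent with diffusion, has a compact generic form: each agent takes a gradient step on its local loss and then averages its estimate with its neighbors'. The sketch below uses a scalar least-squares loss as a stand-in for the (non-convex) S(3)VM objective, so it only illustrates the communication pattern, not the actual cost function:

```python
def distributed_gradient_descent(data, neighbors, steps=300, lr=0.05):
    """Decentralized gradient descent sketch: each agent holds local
    (x, y) samples, takes a gradient step on its local least-squares
    loss f_a(w) = mean((w*x - y)^2), then averages its estimate with
    its neighbors' (uniform weights). No central coordinator is used."""
    w = {a: 0.0 for a in data}
    for _ in range(steps):
        # local gradient step on each agent's own data
        stepped = {a: w[a] - lr * sum(2 * (w[a] * x - y) * x
                                      for x, y in pts) / len(pts)
                   for a, pts in data.items()}
        # diffusion: average with neighboring agents
        w = {a: (stepped[a] + sum(stepped[b] for b in neighbors[a]))
                / (1 + len(neighbors[a]))
             for a in data}
    return w
```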

  16. Three-dimensional interconnected nickel phosphide networks with hollow microstructures and desulfurization performance

    SciTech Connect

    Zhang, Shuna; Zhang, Shujuan; Song, Limin; Wu, Xiaoqing; Fang, Sheng

    2014-05-01

    Graphical abstract: Three-dimensional interconnected nickel phosphide networks with hollow microstructures and desulfurization performance. - Highlights: • Three-dimensional Ni₂P has been prepared using foam nickel as a template. • The microstructures interconnect and form sponge-like porous networks. • Three-dimensional Ni₂P shows superior hydrodesulfurization activity. - Abstract: Three-dimensional microstructured nickel phosphide (Ni₂P) was fabricated by the reaction between nickel (Ni) foam and red phosphorus. The as-prepared Ni₂P samples, as interconnected networks, maintained the original mesh structure of the nickel foam. The crystal structure and morphology of the as-synthesized Ni₂P were characterized by X-ray diffraction, scanning electron microscopy, automatic mercury porosimetry and X-ray photoelectron spectroscopy. The SEM study showed that adjacent hollow branches were mutually interconnected to form sponge-like networks. The investigation of the pore structure provided detailed information on the hollow microstructures. The growth mechanism of the three-dimensionally structured Ni₂P is postulated and discussed in detail. To investigate its catalytic properties, SiO₂-supported three-dimensional Ni₂P was prepared and evaluated for the hydrodesulfurization (HDS) of dibenzothiophene (DBT). DBT molecules were mostly hydrogenated and then desulfurized by Ni₂P/SiO₂.

  17. A High Performance Computing Network and System Simulator for the Power Grid: NGNS^2

    SciTech Connect

    Villa, Oreste; Tumeo, Antonino; Ciraci, Selim; Daily, Jeffrey A.; Fuller, Jason C.

    2012-11-11

    Designing and planning next generation power grid systems composed of large power distribution networks, monitoring and control networks, autonomous generators and consumers of power requires advanced simulation infrastructures. The objective is to predict and analyze in time the behavior of networks of systems under unexpected events such as loss of connectivity, malicious attacks and power loss scenarios. This ultimately allows one to answer questions such as: “What could happen to the power grid if ...”. We want to be able to answer as many questions as possible in the shortest possible time for the largest possible systems. In this paper we present a new High Performance Computing (HPC) oriented simulation infrastructure named Next Generation Network and System Simulator (NGNS2). NGNS2 allows the distribution of a single simulation among multiple computing elements by using MPI and OpenMP threads. NGNS2 provides the extensive configuration, fault tolerance and load balancing capabilities needed to simulate large and dynamic systems for long periods of time. We show preliminary results of the simulator running approximately two million simulated entities, both on a 64-node commodity Infiniband cluster and on a 48-core SMP workstation.

  18. Quantifying individual performance in Cricket — A network analysis of batsmen and bowlers

    NASA Astrophysics Data System (ADS)

    Mukherjee, Satyam

    2014-01-01

    Quantifying individual performance in the game of Cricket is critical for team selection in international matches. The number of runs scored by batsmen and wickets taken by bowlers serves as a natural way of quantifying the performance of a cricketer. Traditionally, batsmen and bowlers are rated on their batting or bowling average respectively. However, in a game like Cricket, the manner in which one scores runs or claims a wicket is always important. Scoring runs against a strong bowling line-up or delivering a brilliant performance against a team with a strong batting line-up deserves more credit. A player’s average is not able to capture this aspect of the game. In this paper we present a refined method to quantify the ‘quality’ of runs scored by a batsman or wickets taken by a bowler. We explore the application of Social Network Analysis (SNA) to rate the players’ contributions to team performance. We generate a directed and weighted network of batsmen-bowlers using the player-vs-player information available for Test cricket and ODI cricket. Additionally, we generate a network of batsmen and bowlers based on the dismissal records of batsmen in the history of cricket: Test (1877-2011) and ODI (1971-2011). Our results show that M. Muralitharan is the most successful bowler in the history of Cricket. Our approach could potentially be applied in domestic matches to judge a player’s performance, which in turn paves the way for a balanced team selection for international matches.
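
    One standard SNA rating for such a directed dismissal network is PageRank: a bowler inherits credit from the batsmen he dismisses, weighted by how highly those batsmen are themselves rated. The sketch below (with made-up example data in the usage) illustrates the idea; the paper's exact weighting scheme is not reproduced:

```python
def rate_bowlers(dismissals, damping=0.85, iters=100):
    """PageRank on a directed batsman -> bowler dismissal network,
    built from (batsman, bowler) pairs. Dangling nodes spread their
    rank evenly; ranks sum to 1."""
    nodes = set()
    for batsman, bowler in dismissals:
        nodes.update((batsman, bowler))
    out = {n: [] for n in nodes}
    for batsman, bowler in dismissals:
        out[batsman].append(bowler)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            if out[n]:
                share = damping * rank[n] / len(out[n])
                for m in out[n]:
                    new[m] += share
            else:  # dangling node (e.g. a bowler with no out-links)
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
        rank = new
    return rank
```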

  19. Further result on guaranteed H∞ performance state estimation of delayed static neural networks.

    PubMed

    Huang, He; Huang, Tingwen; Chen, Xiaoping

    2015-06-01

    This brief considers the guaranteed H∞ performance state estimation problem of delayed static neural networks. An Arcak-type state estimator, which is more general than the widely adopted Luenberger-type one, is chosen to tackle this issue. A delay-dependent criterion is derived under which the estimation error system is globally asymptotically stable with a prescribed H∞ performance. It is shown that the design of suitable gain matrices and the optimal performance index are accomplished by solving a convex optimization problem subject to two linear matrix inequalities. Compared with some previous results, much better performance is achieved by our approach, which is greatly benefited from introducing an additional gain matrix in the domain of activation function. An example is finally given to demonstrate the advantage of the developed result.

  20. REMOTE, a Wireless Sensor Network Based System to Monitor Rowing Performance

    PubMed Central

    Llosa, Jordi; Vilajosana, Ignasi; Vilajosana, Xavier; Navarro, Nacho; Suriñach, Emma; Marquès, Joan Manuel

    2009-01-01

    In this paper, we take a hard look at the performance of REMOTE, a sensor-network-based application that provides a detailed picture of boat movement and individual rower performance, including comparisons with other crew members. The application analyzes data gathered with a WSN strategically deployed over a boat to obtain information on the boat and oar movements. Functionalities of REMOTE are compared to those of the RowX [1] outdoor instrument, a commercial wired sensor instrument designed for similar purposes. This study demonstrates that, with a smart geometrical configuration of the sensors, rotation and translation of the oars and boat can be obtained. Three different tests were performed: laboratory calibration, which allowed us to become familiar with the accelerometer readings and validate the theory; ergometer tests, which helped us to set the acquisition parameters; and on-boat tests, which show the application potential of these technologies in sports. PMID:22423204

  1. Production of lentiviral vectors

    PubMed Central

    Merten, Otto-Wilhelm; Hebben, Matthias; Bovolenta, Chiara

    2016-01-01

    Lentiviral vectors (LV) have seen a considerable increase in use as gene therapy vectors for the treatment of acquired and inherited diseases. This review presents the state of the art of the production of these vectors, with particular emphasis on their large-scale production for clinical purposes. In contrast to oncoretroviral vectors, which are produced using stable producer cell lines, clinical-grade LV are in most cases produced by transient transfection of 293 or 293T cells grown in cell factories. However, more recent developments tend toward hollow-fiber reactors, suspension culture processes, and the implementation of stable producer cell lines. As is customary for the biotech industry, rather sophisticated downstream processing protocols have been established to remove any undesirable process-derived contaminants, such as plasmid or host cell DNA or host cell proteins. This review compares published large-scale production and purification processes of LV and presents their process performances. Furthermore, developments in the domain of stable cell lines and their path toward use as production vehicles for clinical material are presented. PMID:27110581

  2. Performance analysis of Wald-statistic based network detection methods for radiation sources

    SciTech Connect

    Sen, Satyabrata; Rao, Nageswara S; Wu, Qishi; Barry, M. L..; Grieme, M.; Brooks, Richard R; Cordone, G.

    2016-01-01

    There have been increasingly large deployments of radiation detection networks that require computationally fast algorithms to produce prompt results over ad-hoc sub-networks of mobile devices, such as smart-phones. These algorithms stand in sharp contrast to complex network algorithms that require all measurements to be sent to powerful central servers. In this work, at individual sensors, we employ Wald-statistic based detection algorithms, which are computationally very fast and are implemented as one of three Z-tests and four chi-square tests. At the fusion center, we apply K-out-of-N fusion to combine the sensors' hard decisions. We characterize the performance of the detection methods by deriving analytical expressions for the distributions of the underlying test statistics, and by analyzing the fusion performance in terms of K, N, and the false-alarm rates of individual detectors. We experimentally validate our methods using measurements from indoor and outdoor characterization tests of the Intelligence Radiation Sensors Systems (IRSS) program. In particular, utilizing the outdoor measurements, we construct two important real-life scenarios, boundary surveillance and portal monitoring, and present the results of our algorithms.
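
    For independent detectors, the system-level false-alarm rate of a K-out-of-N fusion rule follows the standard binomial tail; a minimal sketch with an illustrative per-sensor false-alarm rate (not the IRSS measurement values):

```python
from math import comb

def k_out_of_n_prob(p, k, n):
    """Probability that at least k of n independent detectors fire,
    each firing with probability p (binomial tail sum)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# System-level false alarm for N = 5 sensors, per-sensor rate 0.05 (illustrative)
pfa_sensor = 0.05
for k in range(1, 6):
    print(k, k_out_of_n_prob(pfa_sensor, k, 5))
```

    Increasing K trades sensitivity for a sharply lower system false-alarm rate, which is the tuning knob the abstract analyzes alongside N.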

  3. I/O performance evaluation of a Linux-based network-attached storage device

    NASA Astrophysics Data System (ADS)

    Sun, Zhaoyan; Dong, Yonggui; Wu, Jinglian; Jia, Huibo; Feng, Guanping

    2002-09-01

    In a Local Area Network (LAN), clients are permitted to access the files on high-density optical disks via a network server. However, the quality of read service offered by a conventional server is unsatisfactory because the server performs multiple functions and serves too many callers. This paper develops a Linux-based Network-Attached Storage (NAS) server. The Operating System (OS), composed of an optimized kernel and a miniaturized file system, is stored in a flash memory. After initialization, the NAS device is connected to the LAN. The administrator and users can configure and access the server through web pages, respectively. In order to enhance the quality of access, the management of the buffer cache in the file system is optimized. Some benchmark programs were performed to evaluate the I/O performance of the NAS device. Since data recorded on optical disks are usually for read access, our attention is focused on the reading throughput of the device. The experimental results indicate that the I/O performance of our NAS device is excellent.

  4. Vector soliton fission.

    PubMed

    Lu, F; Lin, Q; Knox, W H; Agrawal, Govind P

    2004-10-29

    We investigate the vectorial nature of soliton fission in an isotropic nonlinear medium both theoretically and experimentally. As a specific example, we show that supercontinuum generation in a tapered fiber is extremely sensitive to the input state of polarization. Multiple vector solitons generated through soliton fission exhibit different states of elliptical polarization while emitting nonsolitonic radiation with complicated polarization features. Experiments performed with a tapered fiber agree with our theoretical description.

  5. EFFECT OF MOBILITY ON PERFORMANCE OF WIRELESS AD-HOC NETWORK PROTOCOLS.

    SciTech Connect

    Barrett, C. L.; Drozda, M.; Marathe, M. V.; Marathe, A.

    2001-01-01

    We empirically study the effect of mobility on the performance of protocols designed for wireless ad-hoc networks. An important objective is to study the interaction of the routing and MAC layer protocols under different mobility parameters. We use three basic mobility models: grid mobility model, random waypoint model, and exponential correlated random model. The performance of protocols was measured in terms of (i) latency, (ii) throughput, (iii) number of packets received, (iv) long term fairness and (v) number of control packets at the MAC layer level. Three different commonly studied routing protocols were used: AODV, DSR and LAR1. Similarly, three well known MAC protocols were used: MACA, 802.11 and CSMA. The main conclusions of our study include the following: 1. The performance of the network varies widely with varying mobility models, packet injection rates and speeds, and can in fact be characterized as fair to poor depending on the specific situation. Nevertheless, in general, it appears that the combination of AODV and 802.11 is far better than other combinations of routing and MAC protocols. 2. MAC layer protocols interact with routing layer protocols. This concept, which is formalized using statistics, implies that in general it is not meaningful to speak about a MAC or a routing protocol in isolation. Such an interaction leads to trade-offs between the amount of control packets generated by each layer. More interestingly, the results raise the possibility of improving the performance of a particular MAC layer protocol by using a cleverly designed routing protocol, or vice-versa. 3. Routing protocols with distributed knowledge about routes are more suitable for networks with mobility. This is seen by comparing the performance of AODV with DSR or LAR scheme 1. In DSR and LAR scheme 1, information about a computed path is stored in the route query control packet. 4. MAC layer protocols have varying performance with varying mobility models. 
It is
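
    Of the mobility models listed above, the random waypoint model is the easiest to sketch; the area, speed range, and pause time below are illustrative, not the study's simulator settings:

```python
import random

def random_waypoint(steps, area=(1000.0, 1000.0), speed_range=(1.0, 20.0),
                    pause=0.0, dt=1.0, seed=0):
    """Generate one node's trajectory: pick a random destination and speed,
    move toward it, pause on arrival, repeat."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
    path = [(x, y)]
    dest, speed, wait = None, 0.0, 0.0
    for _ in range(steps):
        if wait > 0:
            wait -= dt                 # pausing at a waypoint
        else:
            if dest is None:           # choose a new waypoint and speed
                dest = (rng.uniform(0, area[0]), rng.uniform(0, area[1]))
                speed = rng.uniform(*speed_range)
            dx, dy = dest[0] - x, dest[1] - y
            dist = (dx * dx + dy * dy) ** 0.5
            step = speed * dt
            if dist <= step:           # reached the waypoint
                x, y = dest
                dest, wait = None, pause
            else:                      # move straight toward the waypoint
                x += step * dx / dist
                y += step * dy / dist
        path.append((x, y))
    return path

path = random_waypoint(100)
print(len(path))  # 101 positions: the start plus one per time step
```

    Feeding such trajectories to the network simulator is what lets studies like this vary mobility independently of the routing and MAC layers.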

  6. Optimal Routing in General Finite Multi-Server Queueing Networks

    PubMed Central

    van Woensel, Tom; Cruz, Frederico R. B.

    2014-01-01

    The design of general finite multi-server queueing networks is a challenging problem that arises in many real-life situations, including computer networks, manufacturing systems, and telecommunication networks. In this paper, we examine the optimal routing problem in arbitrarily configured acyclic queueing networks. The performance of the finite queueing network is evaluated with a known approximate performance evaluation method and the optimization is done by means of a heuristic based on Powell's algorithm. The proposed methodology is then applied to determine the optimal routing probability vector that maximizes the throughput of the queueing network. We show numerical results for some networks to quantify the quality of the routing vector approximations obtained. PMID:25010660
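
    The routing-vector idea can be illustrated on a deliberately small example: two parallel M/M/1/K queues with a Bernoulli split, searched by brute force rather than the paper's Powell-based heuristic and approximate evaluation method; all rates are illustrative:

```python
def mm1k_throughput(lam, mu, K):
    """Throughput of an M/M/1/K queue: accepted arrival rate lam*(1 - P_block)."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        p_block = 1.0 / (K + 1)
    else:
        p_block = (1 - rho) * rho**K / (1 - rho**(K + 1))
    return lam * (1 - p_block)

def best_split(lam, mu1, mu2, K, grid=1001):
    """Grid-search the routing probability p of sending a job to queue 1."""
    best_p, best_tp = 0.0, -1.0
    for i in range(grid):
        p = i / (grid - 1)
        tp = mm1k_throughput(p * lam, mu1, K) + mm1k_throughput((1 - p) * lam, mu2, K)
        if tp > best_tp:
            best_p, best_tp = p, tp
    return best_p, best_tp

p, tp = best_split(lam=10.0, mu1=6.0, mu2=4.0, K=5)
print(round(p, 2))  # the faster server receives more than half the traffic
```

    Real instances replace the grid search with a derivative-free optimizer (such as Powell's method) because each throughput evaluation of a general network is itself approximate and expensive.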

  7. Comparison of the Performances of Five Primer Sets for the Detection and Quantification of Plasmodium in Anopheline Vectors by Real-Time PCR

    PubMed Central

    Chaumeau, V.; Andolina, C.; Fustec, B.; Tuikue Ndam, N.; Brengues, C.; Herder, S.; Cerqueira, D.; Chareonviriyaphap, T.; Nosten, F.; Corbel, V.

    2016-01-01

    Quantitative real-time polymerase chain reaction (qrtPCR) has significantly improved the detection of Plasmodium in anopheline vectors. A wide variety of primers has been used in different assays, mostly adapted from the molecular diagnosis of malaria in humans. However, such an adaptation can impact the sensitivity of the PCR. Therefore, we compared the sensitivity of five primer sets with different molecular targets on blood-stage, sporozoite and oocyst standards of Plasmodium falciparum (Pf) and P. vivax (Pv). Dilution series of standard DNA were used to discriminate between methods at low parasite concentrations and to generate standard curves suitable for the absolute quantification of Plasmodium sporozoites. Our results showed that the best primers for detecting blood stages were not necessarily the best ones for detecting sporozoites. The absolute detection threshold of our qrtPCR assay varied between 3.6 and 360 Pv sporozoites and between 6 and 600 Pf sporozoites per mosquito, according to the primer set used in the reaction mix. In this paper, we discuss the general performance of each primer set and highlight the need to use efficient detection methods for transmission studies. PMID:27441839
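
    Absolute quantification against a dilution series reduces to a linear fit of Ct versus log10(standard quantity), then inverting the fit for an unknown sample; the Ct values below are made up for illustration, not the paper's data:

```python
from math import log10

def fit_standard_curve(quantities, cts):
    """Least-squares fit of Ct = slope * log10(quantity) + intercept."""
    xs = [log10(q) for q in quantities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(cts) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, cts))
    slope = sxy / sxx
    return slope, my - slope * mx

def quantify(ct, slope, intercept):
    """Invert the standard curve: estimated quantity for an observed Ct."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical 10-fold dilution series of sporozoite standards
quantities = [3.6, 36, 360, 3600]   # sporozoite equivalents per reaction
cts = [35.1, 31.8, 28.4, 25.0]      # made-up Ct values
slope, intercept = fit_standard_curve(quantities, cts)
print(round(slope, 2))  # a slope near -3.3 per decade indicates ~100% PCR efficiency
```

    The detection threshold quoted in the abstract corresponds to the lowest standard that still amplifies reproducibly on such a curve.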

  9. Design and Performance of the Acts Gigabit Satellite Network High Data-Rate Ground Station

    NASA Technical Reports Server (NTRS)

    Hoder, Doug; Kearney, Brian

    1995-01-01

    The ACTS High Data-Rate Ground Stations were built to support the ACTS Gigabit Satellite Network (GSN). The ACTS GSN was designed to provide fiber-compatible SONET service to remote nodes and networks through a wideband satellite system. The ACTS satellite is unique in its extremely wide bandwidth and electronically controlled spot beam antennas. This paper discusses the requirements, design and performance of the RF section of the ACTS High Data-Rate Ground Stations and constituent hardware. The ACTS transponder systems incorporate highly nonlinear hard limiting, which introduced a major complexity into the design and subsequent modification of the ground stations. A discussion of the peculiarities of the ACTS spacecraft transponder system and their impact is included.

  10. Structural influence of the inorganic network in the laser performance of dye-doped hybrid materials

    NASA Astrophysics Data System (ADS)

    Costela, A.; García-Moreno, I.; García, O.; del Agua, D.; Sastre, R.

    2005-05-01

    We report a systematic study of the influence on the laser action of Rhodamine 6G (Rh6G) of the composition and structure of new hybrid matrices based on 2-hydroxyethyl methacrylate (HEMA) as organic monomer and different weight proportions of dimethyldiethoxysilane (DEOS) and tetraethoxysilane (TEOS) as inorganic part. We selected mixtures of di- and tetra-functionalized alkoxides trying to decrease, in a controlled way, the rigidity of the three-dimensional network by making use of the flexibility provided by the linear chains acting as a spacer of the inorganic domains. The organization of the molecular units in these nanomaterials was studied through a structural analysis by solid-state NMR. The different reactivity exhibited by di- and tetra-functionalized silanols generates a non-homogeneous tri-dimensional network. Thus, the laser performance in dye-doped hybrid materials is improved when the inorganic phase is composed of a unique alkoxide.

  11. A model to compare performance of space and ground network support of low-Earth orbiters

    NASA Technical Reports Server (NTRS)

    Posner, E. C.

    1992-01-01

    This article compares the downlink performance, in a gross average sense, between space and ground network support of low-Earth orbiters. The purpose is to assess what the demand for DSN support of future small, low-cost missions might be, if spacecraft data storage becomes reliable enough and small enough to meet the storage requirements of missions supported only a fraction of the time. It is shown that the link advantage of the DSN over space reception is, in an average sense, enormous for low-Earth orbiters. The much shorter distances needed to communicate with the ground network more than make up for the speedup in data rate required to compensate for low-Earth orbiters' short contact times with the DSN. The result is that more and more requests for DSN-only support of low-Earth orbiters can be expected.
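
    The distance-versus-contact-time trade-off can be sketched with a back-of-the-envelope inverse-square scaling of achievable data rate; all distances, reference rates, and contact fractions below are illustrative, not mission or DSN values:

```python
# Toy link comparison: for fixed transmit power and antenna gains, achievable
# data rate scales as 1/distance^2 (free-space loss). Numbers are illustrative.

def data_volume(rate_ref, d_ref, d, contact_fraction, seconds=86400.0):
    """Daily data volume when rate scales as (d_ref/d)^2 from a reference link."""
    rate = rate_ref * (d_ref / d) ** 2
    return rate * contact_fraction * seconds

d_ground = 2000e3   # assumed slant range of a LEO pass over a ground station (m)
d_relay = 40000e3   # assumed LEO-to-geosynchronous-relay range (m)
ground = data_volume(rate_ref=1e6, d_ref=d_ground, d=d_ground, contact_fraction=0.05)
relay = data_volume(rate_ref=1e6, d_ref=d_ground, d=d_relay, contact_fraction=1.0)
print(ground / relay)  # the distance advantage outweighs the short contact time
```

    Even with only 5% contact time, the 20x shorter range gives the ground link a 400x rate advantage in this toy case, a net factor of 20 in daily volume, which mirrors the abstract's conclusion.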

  12. Improving TCP Network Performance by Detecting and Reacting to Packet Reordering

    NASA Technical Reports Server (NTRS)

    Kruse, Hans; Ostermann, Shawn; Allman, Mark

    2003-01-01

    There are many factors governing the performance of TCP-based applications traversing satellite channels. The end-to-end performance of TCP is known to be degraded by the reordering, delay, noise and asymmetry inherent in geosynchronous systems. This result has largely been based on experiments that evaluate the performance of TCP in single-flow tests. While single-flow tests are useful for deriving information on the theoretical behavior of TCP and allow for easy diagnosis of problems, they do not represent a broad range of realistic situations and therefore cannot be used to authoritatively comment on performance issues. The experiments discussed in this report test TCP's performance in a more dynamic environment, with competing traffic flows from hundreds of TCP connections running simultaneously across the satellite channel. Another aspect we investigate is TCP's reaction to bit errors on satellite channels. TCP interprets loss as a sign of network congestion. This causes TCP to reduce its transmission rate, leading to reduced performance when loss is due to corruption. We allowed the bit error rate on our satellite channel to vary widely and tested the performance of TCP as a function of these bit error rates. Our results show that the average performance of TCP on satellite channels is good even under loss conditions with bit error rates as high as 10^-5.
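
    Why corruption loss hurts TCP on long-delay paths can be illustrated with the well-known Mathis steady-state model, throughput ≈ (MSS/RTT)·sqrt(3/2)/sqrt(p); the RTT and loss rates below are illustrative of a GEO path, and packet loss rate is taken as given rather than derived from BER (that conversion depends on frame size):

```python
from math import sqrt

def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. steady-state TCP throughput bound, in bytes/s."""
    return (mss_bytes / rtt_s) * sqrt(1.5) / sqrt(loss_rate)

# Assumed GEO satellite path: ~550 ms RTT, 1460-byte segments
for loss in (1e-7, 1e-6, 1e-5):
    bps = 8 * mathis_throughput(1460, 0.55, loss)
    print(f"loss {loss:.0e}: {bps/1e6:.2f} Mbit/s")
```

    The 1/sqrt(p) dependence means every 100x increase in loss costs a factor of 10 in throughput, regardless of whether the loss came from congestion or corruption.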

  13. Investigating the performance of neural network backpropagation algorithms for TEC estimations using South African GPS data

    NASA Astrophysics Data System (ADS)

    Habarulema, J. B.; McKinnell, L.-A.

    2012-05-01

    In this work, results obtained by investigating the application of different neural network backpropagation training algorithms are presented. This was done to assess the performance accuracy of each training algorithm in total electron content (TEC) estimations using identical datasets in the model development and verification processes. Investigated training algorithms are standard backpropagation (SBP), backpropagation with weight delay (BPWD), backpropagation with momentum (BPM) term, backpropagation with chunkwise weight update (BPC) and backpropagation for batch (BPB) training. These five algorithms are inbuilt functions within the Stuttgart Neural Network Simulator (SNNS) and the main objective was to find out the training algorithm that generates the minimum error between the TEC derived from Global Positioning System (GPS) observations and the modelled TEC data. Another investigated algorithm is the MatLab based Levenberg-Marquardt backpropagation (L-MBP), which achieves convergence after the least number of iterations during training. In this paper, neural network (NN) models were developed using hourly TEC data (for 8 years: 2000-2007) derived from GPS observations over a receiver station located at Sutherland (SUTH) (32.38° S, 20.81° E), South Africa. Verification of the NN models for all algorithms considered was performed on both "seen" and "unseen" data. Hourly TEC values over SUTH for 2003 formed the "seen" dataset. The "unseen" dataset consisted of hourly TEC data for 2002 and 2008 over Cape Town (CPTN) (33.95° S, 18.47° E) and SUTH, respectively. The models' verification showed that all algorithms investigated provide comparable results statistically, but differ significantly in terms of time required to achieve convergence during input-output data training/learning. This paper therefore provides a guide to neural network users for choosing appropriate algorithms based on the availability of computation capabilities used for research.
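
    The difference between standard backpropagation and the momentum variant compared above comes down to the weight-update rule; the sketch below fits a single linear neuron on made-up data, not the SNNS implementations or the TEC dataset:

```python
# Compare standard gradient descent (as in SBP) with a momentum term (as in BPM)
# on a single linear neuron y = w*x fitted to the target rule y = 2x.
# Data, learning rate, and momentum coefficient are illustrative.

def train(momentum=0.0, lr=0.05, epochs=200):
    data = [(x, 2.0 * x) for x in (-2.0, -1.0, 1.0, 2.0)]
    w, velocity = 0.0, 0.0
    for _ in range(epochs):
        # gradient of the mean squared error with respect to w
        grad = sum((w * x - y) * x for x, y in data) / len(data)
        velocity = momentum * velocity - lr * grad   # momentum accumulates past steps
        w += velocity
    return w

w_plain = train(momentum=0.0)
w_mom = train(momentum=0.9)
print(round(w_plain, 3), round(w_mom, 3))  # both approach the true weight 2.0
```

    The algorithms reach the same answer; what differs, exactly as the abstract reports for TEC modelling, is the trajectory and the number of iterations needed to get there.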

  14. Control mechanism to prevent correlated message arrivals from degrading signaling no. 7 network performance

    NASA Astrophysics Data System (ADS)

    Kosal, Haluk; Skoog, Ronald A.

    1994-04-01

    Signaling System No. 7 (SS7) is designed to provide a connection-less transfer of signaling messages of reasonable length. Customers having access to user signaling bearer capabilities as specified in the ANSI T1.623 and CCITT Q.931 standards can send bursts of correlated messages (e.g., by doing a file transfer that results in the segmentation of a block of data into a number of consecutive signaling messages) through SS7 networks. These message bursts with short interarrival times could have an adverse impact on the delay performance of the SS7 networks. A control mechanism, Credit Manager, is investigated in this paper to regulate incoming traffic to the SS7 network by imposing appropriate time separation between messages when the incoming stream is too bursty. The credit manager has a credit bank where credits accrue at a fixed rate up to a prespecified credit bank capacity. When a message arrives, the number of octets in that message is compared to the number of credits in the bank. If the number of credits is greater than or equal to the number of octets, then the message is accepted for transmission and the number of credits in the bank is decremented by the number of octets. If the number of credits is less than the number of octets, then the message is delayed until enough credits are accumulated. This paper presents simulation results showing delay performance of the SS7 ISUP and TCAP message traffic with a range of correlated message traffic, and control parameters of the credit manager (i.e., credit generation rate and bank capacity) are determined that ensure the traffic entering the SS7 network is acceptable. The results show that control parameters can be set so that for any incoming traffic stream there is no detrimental impact on the SS7 ISUP and TCAP message delay, and the credit manager accepts a wide range of traffic patterns without causing significant delay.
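
    The credit manager described above is essentially a token-bucket regulator; a minimal sketch with illustrative rate and capacity (the paper's tuned SS7 parameters are not given here):

```python
class CreditManager:
    """Token-bucket style regulator: credits accrue at `rate` octets/s up to
    `capacity`; a message of `octets` is sent when enough credits exist,
    otherwise it is delayed until they accumulate (FIFO order)."""

    def __init__(self, rate, capacity):
        self.rate = float(rate)
        self.capacity = float(capacity)
        self.credits = float(capacity)  # bank starts full
        self.clock = 0.0                # virtual time of the last departure

    def submit(self, arrival_time, octets):
        """Return the earliest time the message may enter the network."""
        now = max(self.clock, arrival_time)
        # accrue credits for the time elapsed since the last event
        self.credits = min(self.capacity,
                           self.credits + (now - self.clock) * self.rate)
        if self.credits < octets:
            wait = (octets - self.credits) / self.rate
            now += wait                     # message is delayed
            self.credits += wait * self.rate
        self.credits -= octets
        self.clock = now
        return now

cm = CreditManager(rate=1000, capacity=500)  # 1000 octets/s, 500-octet bank
print(cm.submit(0.0, 400))  # 0.0 -> enough credits, sent immediately
print(cm.submit(0.0, 400))  # 0.3 -> only 100 credits left, wait for 300 more
```

    A short burst within the bank capacity passes untouched, while a sustained burst is smoothed to the credit generation rate, which is exactly the behavior the simulation study evaluates.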

  15. Comparative investigation of multiplane thrust vectoring nozzles

    NASA Technical Reports Server (NTRS)

    Capone, F.; Smereczniak, P.; Spetnagel, D.; Thayer, E.

    1992-01-01

    The inflight aerodynamic performance of multiplane vectoring nozzles is critical to development of advanced aircraft and flight control systems utilizing thrust vectoring. To investigate vectoring nozzle performance, subscale models of two second-generation thrust vectoring nozzle concepts currently under development for advanced fighters were integrated into an axisymmetric test pod. Installed drag and vectoring performance characteristics of both concepts were experimentally determined in wind tunnel testing. CFD analyses were conducted to understand the impact of internal flow turning on thrust vectoring characteristics. Both nozzles exhibited drag comparable with current nonvectoring axisymmetric nozzles. During vectored-thrust operations, forces produced by external flow effects amounted to about 25 percent of the total force measured.

  16. Performance of an Abbreviated Version of the Lubben Social Network Scale among Three European Community-Dwelling Older Adult Populations

    ERIC Educational Resources Information Center

    Lubben, James; Blozik, Eva; Gillmann, Gerhard; Iliffe, Steve; von Renteln-Kruse, Wolfgang; Beck, John C.; Stuck, Andreas E.

    2006-01-01

    Purpose: There is a need for valid and reliable short scales that can be used to assess social networks and social supports and to screen for social isolation in older persons. Design and Methods: The present study is a cross-national and cross-cultural evaluation of the performance of an abbreviated version of the Lubben Social Network Scale…

  17. Network Performance and Coordination in the Health, Education, Telecommunications System. Satellite Technology Demonstration, Technical Report No. 0422.

    ERIC Educational Resources Information Center

    Braunstein, Jean; Janky, James M.

    This paper describes the network coordination for the Health, Education, Telecommunications (HET) system. Specifically, it discusses HET network performance as a function of a specially-developed coordination system which was designed to link terrestrial equipment to satellite operations centers. Because all procedures and equipment developed for…

  18. Recurrent fuzzy neural network backstepping control for the prescribed output tracking performance of nonlinear dynamic systems.

    PubMed

    Han, Seong-Ik; Lee, Jang-Myung

    2014-01-01

    This paper proposes a backstepping control system that uses a tracking error constraint and recurrent fuzzy neural networks (RFNNs) to achieve a prescribed tracking performance for a strict-feedback nonlinear dynamic system. A new constraint variable was defined to generate the virtual control that forces the tracking error to fall within prescribed boundaries. An adaptive RFNN was also used to obtain the required improvement on the approximation performances in order to avoid calculating the explosive number of terms generated by the recursive steps of traditional backstepping control. The boundedness and convergence of the closed-loop system was confirmed based on the Lyapunov stability theory. The prescribed performance of the proposed control scheme was validated by using it to control the prescribed error of a nonlinear system and a robot manipulator.

  19. Analytic network process model for sustainable lean and green manufacturing performance indicator

    NASA Astrophysics Data System (ADS)

    Aminuddin, Adam Shariff Adli; Nawawi, Mohd Kamal Mohd; Mohamed, Nik Mohd Zuki Nik

    2014-09-01

    Sustainable manufacturing is regarded as the most complex manufacturing paradigm to date, as it holds the widest scope of requirements. In addition, its three major pillars of economy, environment and society, though distinct, have some overlap among their elements. Even though the concept of sustainability is not new, the development of the performance indicator still needs much improvement due to its multifaceted nature, which requires an integrated approach. This paper proposes the best combination of criteria for forming a robust sustainable manufacturing performance indicator via the Analytic Network Process (ANP). The integrated lean, green and sustainable ANP model can be used to comprehend the complex decision system of the sustainability assessment. The finding shows that green manufacturing is more sustainable than lean manufacturing. It also illustrates that procurement practice is the most important criterion in the sustainable manufacturing performance indicator.

  20. Unsupervised learning of binary vectors

    NASA Astrophysics Data System (ADS)

    Copelli Lopes da Silva, Mauro

    In this thesis, unsupervised learning of binary vectors from data is studied using methods from Statistical Mechanics of disordered systems. In the model, data vectors are distributed according to a single symmetry-breaking direction. The aim of unsupervised learning is to provide a good approximation to this direction. The difference with respect to previous studies is the knowledge that this preferential direction has binary components. It is shown that sampling from the posterior distribution (Gibbs learning) leads, for general smooth distributions, to an exponentially fast approach to perfect learning in the asymptotic limit of large number of examples. If the distribution is non-smooth, then first order phase transitions to perfect learning are expected. In the limit of poor performance, a second order phase transition ("retarded learning") is predicted to occur if the data distribution is not biased. Using concepts from Bayesian inference, the center of mass of the Gibbs ensemble is shown to have maximal average (Bayes-optimal) performance. This upper bound for continuous vectors is extended to a discrete space, resulting in the clipped center of mass of the Gibbs ensemble having maximal average performance among the binary vectors. To calculate the performance of this best binary vector, the geometric properties of the center of mass of binary vectors are studied. The surprising result is found that the center of mass of infinite binary vectors which obey some simple constraints, is again a binary vector. When disorder is taken into account in the calculation, however, a vector with continuous components is obtained. The performance of the best binary vector is calculated and shown to always lie above that of Gibbs learning and below the Bayes-optimal performance. Making use of a variational approach under the replica symmetric ansatz, an optimal potential is constructed in the limits of zero temperature and mutual overlap 1. 
Minimization of this potential

  1. Resting spontaneous activity in the default mode network predicts performance decline during prolonged attention workload.

    PubMed

    Gui, Danyang; Xu, Sihua; Zhu, Senhua; Fang, Zhuo; Spaeth, Andrea M; Xin, Yuanyuan; Feng, Tingyong; Rao, Hengyi

    2015-10-15

    After continuous and prolonged cognitive workload, people typically show reduced behavioral performance and increased feelings of fatigue, which are known as "time-on-task (TOT) effects". Although TOT effects are pervasive in modern life, their underlying neural mechanisms remain elusive. In this study, we induced TOT effects by administering a 20-min continuous psychomotor vigilance test (PVT) to a group of 16 healthy adults and used resting-state blood oxygen level-dependent (BOLD) functional magnetic resonance imaging (fMRI) to examine spontaneous brain activity changes associated with fatigue and performance. Behaviorally, subjects displayed robust TOT effects, as reflected by increasingly slower reaction times as the test progressed and higher self-reported mental fatigue ratings after the 20-min PVT. Compared to pre-test measurements, subjects exhibited reduced amplitudes of low-frequency fluctuation (ALFF) in the default mode network (DMN) and increased ALFF in the thalamus after the test. Subjects also exhibited reduced anti-correlations between the posterior cingulate cortex (PCC) and right middle prefrontal cortex after the test. Moreover, pre-test resting ALFF in the PCC and medial prefrontal cortex (MePFC) predicted subjects' subsequent performance decline; individuals with higher ALFF in these regions exhibited more stable reaction times throughout the 20-min PVT. These results support the important role of both task-positive and task-negative networks in mediating TOT effects and suggest that spontaneous activity measured by resting-state BOLD fMRI may be a marker of mental fatigue.

  2. Changes in whole-brain functional networks and memory performance in aging.

    PubMed

    Sala-Llonch, Roser; Junqué, Carme; Arenaza-Urquijo, Eider M; Vidal-Piñeiro, Dídac; Valls-Pedret, Cinta; Palacios, Eva M; Domènech, Sara; Salvà, Antoni; Bargalló, Nuria; Bartrés-Faz, David

    2014-10-01

    We used resting-state functional magnetic resonance imaging data from 98 healthy older adults to analyze how local and global measures of functional brain connectivity are affected by age, and whether they are related to differences in memory performance. Whole-brain networks were created individually by parcellating the brain into 90 cerebral regions and obtaining pairwise connectivity. First, we studied age associations in interregional connectivity and their relationship with the length of the connections. Aging was associated with less connectivity in the long-range connections of fronto-parietal and fronto-occipital systems and with higher connectivity of the short-range connections within frontal, parietal, and occipital lobes. We also used graph theory to measure functional integration and segregation. The pattern of the overall age-related correlations presented positive correlations of average minimum path length (r = 0.380, p = 0.008) and of global clustering coefficients (r = 0.454, p < 0.001), leading to less integrated and more segregated global networks. Main correlations in clustering coefficients were located in the frontal and parietal lobes. Higher clustering coefficients of some areas were related to lower performance in verbal and visual memory functions. In conclusion, we found that older participants showed lower connectivity of long-range connections together with higher functional segregation of these same connections, which appeared to indicate a more local clustering of information processing. Higher local clustering in older participants was negatively related to memory performance.
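
    The two graph measures used above, the clustering coefficient and the average shortest (minimum) path length, can be computed directly; a minimal sketch on a toy undirected graph, not the 90-region brain networks of the study:

```python
from collections import deque
from itertools import combinations

def clustering(adj, n):
    """Local clustering coefficient: fraction of a node's neighbor pairs
    that are themselves connected."""
    nbrs = adj[n]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2.0 * links / (k * (k - 1))

def avg_shortest_path(adj):
    """Average shortest path length over all connected node pairs, via BFS."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += len(dist) - 1
    return total / pairs

# Toy graph: a triangle (0, 1, 2) with a tail node 3 attached to node 2
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(clustering(adj, 2))  # 1/3: only one of node 2's three neighbor pairs is linked
print(avg_shortest_path(adj))
```

    Higher clustering with longer average paths is the "more segregated, less integrated" signature the abstract associates with aging.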

  3. Free-Standing Copper Nanowire Network Current Collector for Improving Lithium Anode Performance.

    PubMed

    Lu, Lei-Lei; Ge, Jin; Yang, Jun-Nan; Chen, Si-Ming; Yao, Hong-Bin; Zhou, Fei; Yu, Shu-Hong

    2016-07-13

    Lithium metal is one of the most attractive anode materials for next-generation lithium batteries due to its high specific capacity and low electrochemical potential. However, the poor cycling performance and serious safety hazards caused by the growth of dendritic and mossy lithium have long hindered the application of lithium-metal-based batteries. Herein, we report a rationally designed free-standing Cu nanowire (CuNW) network that suppresses the growth of dendritic lithium by accommodating the lithium metal in three-dimensional (3D) nanostructures. We demonstrate that as much as 7.5 mA h cm(-2) of lithium can be plated into the free-standing CuNW current collector without the growth of dendritic lithium. The lithium metal anode based on the CuNW exhibited high Coulombic efficiency (averaging 98.6% over 200 cycles) and outstanding rate performance owing to the suppression of lithium dendrite growth and the high conductivity of the CuNW network. Our results demonstrate that rational nanostructural design of the current collector could be a promising strategy for improving the performance of lithium metal anodes, enabling their application in next-generation lithium-metal-based batteries. PMID:27253417

  4. Improving the Performance of the Structure-Based Connectionist Network for Diagnosis of Helicopter Gearboxes

    NASA Technical Reports Server (NTRS)

    Jammu, Vinay B.; Danai, Koroush; Lewicki, David G.

    1996-01-01

    A diagnostic method is introduced for helicopter gearboxes that uses knowledge of the gearbox structure and characteristics of the 'features' of vibration to define the influences of faults on features. The 'structural influences' in this method are defined based on the root mean square value of vibration obtained from a simplified lumped-mass model of the gearbox. The structural influences are then converted to fuzzy variables, to account for the approximate nature of the lumped-mass model, and used as the weights of a connectionist network. Diagnosis in this Structure-Based Connectionist Network (SBCN) is performed by propagating the abnormal vibration features through the weights of the SBCN to obtain fault possibility values for each component in the gearbox. Upon occurrence of misdiagnoses, the SBCN can also improve its diagnostic performance. For this, a supervised training method is presented which adapts the weights of the SBCN to minimize the number of misdiagnoses. For experimental evaluation of the SBCN, vibration data from an OH-58A helicopter gearbox collected at NASA Lewis Research Center are used. Diagnostic results indicate that the SBCN is able to diagnose about 80% of the faults without training, and improves its performance to nearly 100% after training.
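
    The abstract does not spell out how abnormal features are propagated through the fuzzy weights; a common choice in fuzzy diagnosis, shown here as an illustrative sketch, is max-min composition. The component names, weights and feature values below are hypothetical, not taken from the OH-58A data.

```python
def fault_possibility(weights, features):
    """Max-min fuzzy composition: the possibility that a component is
    faulty is the strongest (max) of its feature evidences, each capped
    (min) by the structural influence weight.

    weights[c][f]: fuzzy influence of component c's fault on feature f (0..1)
    features[f]:   degree of abnormality of vibration feature f (0..1)
    """
    return {c: max(min(w, features[f]) for f, w in fw.items())
            for c, fw in weights.items()}

# Hypothetical two-component, three-feature example (illustrative values)
weights = {"gear": {"rms1": 0.9, "rms2": 0.4},
           "bearing": {"rms2": 0.8, "rms3": 0.7}}
features = {"rms1": 0.2, "rms2": 0.9, "rms3": 0.1}
print(fault_possibility(weights, features))
```

    Under these toy values the bearing receives the highest fault possibility, since its strongly weighted feature is also the most abnormal one.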

  5. A Mountain-Scale Monitoring Network for Yucca Mountain Performance Confirmation

    SciTech Connect

    Freifeld, Barry; Tsang, Yvonne

    2006-01-20

    Confirmation of the performance of Yucca Mountain is required by 10 CFR Part 63.131 to indicate, where practicable, that the natural system acts as a barrier, as intended. Hence, performance confirmation monitoring and testing would provide data for continued assessment during the pre-closure period. In general, it is always important to carry out testing at a relevant scale, and in the case of performance confirmation it is particularly important to be able to test at the scale of the repository. We view the large perturbation caused by construction of the repository at Yucca Mountain as a unique opportunity to study the large-scale behavior of the natural barrier system. Repository construction would necessarily introduce traced fluids and result in the creation of leachates. A program to monitor traced fluids and construction leachates permits evaluation of transport through the unsaturated zone and potentially downgradient through the saturated zone. A robust sampling and monitoring network for continuous measurement of important parameters, and for periodic collection of geochemical samples, is proposed to observe thermo-hydrogeochemical changes near the repository horizon and down to the water table. The sampling and monitoring network can be used to provide data (1) to assess subsurface conditions encountered and changes in those conditions during construction and waste emplacement operations; and (2) for modeling to determine that the natural system is functioning as intended.

  6. Evaluation of UltraBattery™ performance in comparison with a battery-supercapacitor parallel network

    NASA Astrophysics Data System (ADS)

    Fairweather, A. J.; Stone, D. A.; Foster, M. P.

    2013-03-01

    This paper examines the emerging technology of batteries incorporating carbon in the negative plate to effect a parallel capacitance within the battery itself. Using a frequency-domain approach in conjunction with low-frequency static tests and step responses, an UltraBattery™ is examined. Initial examinations using the Randles model lead to the development of a modified model that better represents the battery parameters. These findings are then extended to a similar conventional battery connected in a parallel network with a supercapacitor bank, allowing comparisons to be made and performance criteria to be established.
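
    The Randles model mentioned above is a small equivalent circuit: a series resistance in front of a charge-transfer resistance paralleled by the double-layer capacitance. As a hedged illustration (parameter values are hypothetical and the Warburg diffusion element is omitted), its impedance can be evaluated at any test frequency:

```python
import cmath
import math

def randles_impedance(f, r0, rct, cdl):
    """Complex impedance of a simple Randles cell: series resistance R0
    plus charge-transfer resistance Rct in parallel with the
    double-layer capacitance Cdl (no Warburg element in this sketch)."""
    w = 2 * math.pi * f
    zc = 1 / (1j * w * cdl)                 # capacitor impedance
    return r0 + (rct * zc) / (rct + zc)

# Hypothetical parameters (ohms / farads), for illustration only
z = randles_impedance(1.0, r0=0.01, rct=0.05, cdl=1.5)
print(abs(z), math.degrees(cmath.phase(z)))
```

    At very low frequency the magnitude tends to R0 + Rct, and at very high frequency to R0 alone, which is how impedance spectra are used to separate the two resistances.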

  7. Performance analysis of bi-directional broadband passive optical network using erbium-doped fiber amplifier

    NASA Astrophysics Data System (ADS)

    Almalaq, Yasser; Matin, Mohammad A.

    2014-09-01

    The broadband passive optical network (BPON) can deliver high-speed data, voice, and video services to home and small-business customers. In this work, the performance of a bi-directional BPON is analyzed for both downstream and upstream traffic with the help of an erbium-doped fiber amplifier (EDFA). A key advantage of BPON is reduced cost: because BPON uses a passive splitter, maintenance costs between the provider and customer sides are modest. In the proposed research, the BPON has been tested using a bit error rate (BER) analyzer, which reports the maximum Q factor, minimum bit error rate, and eye height.
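
    The Q factor and bit error rate reported by a BER analyzer are linked, under the usual Gaussian-noise approximation, by BER = 0.5 · erfc(Q/√2). This is a generic relation, not a result from the paper:

```python
import math

def ber_from_q(q):
    """Standard Gaussian-noise approximation relating the eye-diagram
    Q factor to the bit error rate: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

# Q = 6 corresponds to roughly a 1e-9 BER, a common optical-link target
print(ber_from_q(6))
```

    The relation explains why the analyzer's "maximum Q factor" and "minimum BER" move together: each extra unit of Q drives the error rate down by orders of magnitude.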

  8. Digital audio broadcasting: Measuring techniques and coverage performance for a medium power VHF single frequency network

    NASA Astrophysics Data System (ADS)

    Maddocks, M. C. D.; Eng, C.; Pullen, I. R.; Green, J. A.

    1995-02-01

    The advent of digital formats such as CD has created demand for uniformly high audio quality from radio. In order to provide such high-quality stereo reception, a Digital Audio Broadcasting (DAB) system capable of reliable reception in vehicles and on portables has been developed by the European EUREKA 147 Project. As a VHF frequency allocation would appear most suitable for the introduction of terrestrial broadcasting of DAB in the United Kingdom, the BBC is undertaking a major experiment to test the EUREKA DAB system and to generate data to allow efficient planning of its transmitter network. A network of four, 1 kW e.r.p., VHF transmitters has been installed to cover the London area in England. This Report describes the experimental program and the rationale and measurement techniques behind it. The results show a wide-area coverage from the transmitter network which is in reasonable agreement with computer predictions. This indicates that the current transmitting and receiving equipment (built to the EUREKA specification) is operating in the way that would be expected from theoretical studies and simulation. The results also provide quantitative values which can be used for coverage prediction and for international co-ordination of services. Finally, the performance of the system demonstrates a number of the benefits of the EUREKA DAB system for mobile and portable reception.

  9. A New Security Architecture for Personal Networks and Its Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Shin, Seonghan; Fathi, Hanane; Kobara, Kazukuni; Prasad, Neeli R.; Imai, Hideki

    The concept of personal networks is very user-centric and representative of next-generation networks. However, present security mechanisms do not consider what happens when a mobile node (device) is compromised, lost or stolen. A compromised, lost or stolen mobile node is a major source of leaked stored secrets, and such leakage remains a great danger in communication security since it can lead to the complete breakdown of the intended security level. In order to solve this problem, we propose a 3-way Leakage-Resilient and Forward-Secure Authenticated Key Exchange (3LRFS-AKE) protocol and a security architecture suitable for personal networks. The 3LRFS-AKE protocol not only guarantees forward secrecy of the key shared between a device and its server but also provides a new additional layer of security against the leakage of stored secrets. The proposed security architecture includes two different types of communications: PN-wide communication and communication between the P-PANs of two different users. In addition, we give a performance evaluation and numerical results for the delay introduced by the proposed security architecture.

  10. Networks and the fiscal performance of rural hospitals in Oklahoma: are they associated?

    PubMed

    Broyles, R W; Brandt, E N; Biard-Holmes, D

    1998-01-01

    This paper uses regression analysis to explore the relation of network membership to the financial performance of rural hospitals in Oklahoma during fiscal year 1995. After adjusting for the scope of service, as measured by the number of facilities or services offered by the hospital, indicators of fiscal status are (1) the cash receipts derived from net patient revenue; (2) the cash disbursements related to operating costs, net of interest and depreciation expense, labor costs and nonlabor costs; and (3) net cash flow, defined as the difference between cash receipts and disbursements. Controlling for the effects of the hospital's structural attributes, operating characteristics and market conditions, the results indicate that members of a network reported lower net operating costs, labor costs and nonlabor expenses per service than nonmembers. Hence, the analysis seems to suggest that the membership of rural hospitals in a network is associated with lower cash disbursements and an improved net cash flow, outcomes that may preserve their fiscal viability and the access of the population at risk to service.

  11. Hellenic Unified Seismological Network: an evaluation of its performance through SNES method

    NASA Astrophysics Data System (ADS)

    D'Alessandro, Antonino; Papanastassiou, Dimitris; Baskoutas, Ioannis

    2011-06-01

    In this paper, we analyse the location performance of the Hellenic (Greek) Unified Seismological Network (HUSN) using the Seismic Network Evaluation through Simulation (SNES) method. This method gives, as a function of magnitude, hypocentral depth and confidence level, the spatial distribution of the number of stations active in the location procedure, their relative azimuthal gaps, and the confidence intervals in hypocentral parameters, accounting for both the geometry of the seismic network and the use of an inadequate velocity model. Greece is located on a tectonically active plate boundary at the convergence of the Eurasian and African lithospheric plates and exhibits a high level of seismicity. The HUSN has monitored seismicity in Greek territory since 2007. At present it is composed of 88 seismic stations appropriately distributed across Greece. The application of the SNES method permitted us to evaluate the background noise levels recorded by the network stations and to estimate an empirical law that links the variance of P and S traveltime residuals to hypocentral distance. The statistical analysis of the P and S traveltime residuals allowed us to assess the appropriateness of the velocity model used by the HUSN in the routine location process. We constructed SNES maps for magnitudes of 2, 2.5 and 3, fixing the hypocentral depth to 10 km and the confidence level to 95 per cent. We also investigated, in two different vertical sections, the behaviour of the errors in hypocentral parameter estimates as a function of depth. Finally, fixing the hypocentral depth to 10 km and the confidence level to 95 per cent, we evaluated the magnitude of completeness. Through the application of the SNES method, we demonstrate that the HUSN provides the best monitoring coverage in western Greece, with errors that, for magnitude 2.5, are less than 2 and 5 km for epicentre and hypocentral depth, respectively. At magnitude 2.5, this seismic network is capable of constraining earthquake

  12. Optimizing the Reliability and Performance of Service Composition Applications with Fault Tolerance in Wireless Sensor Networks.

    PubMed

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang

    2015-01-01

    Services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The optimization methods for reliability and performance used for traditional software systems are mostly based on instantiations of software components, which are inapplicable and inefficient for the ever-changing SCAs in WSNs. In this paper, we consider SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF), we propose a reliability and performance model of SCAs in WSNs that generalizes a redundancy optimization problem to a multi-state system. Building on this model, an efficient Genetic Algorithm (GA)-based optimization method is developed to find the optimal structure of fault-tolerant SCAs in WSNs. We evaluate the algorithm's performance to examine its feasibility, investigate the interrelationships between reliability, performance and cost, and propose a distinct approach to determine the most suitable parameters of the suggested algorithm.
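
    As an illustrative sketch of GA-based redundancy optimization of the kind described above: the paper's UGF model, encoding and operators are not given in the abstract, so the reliabilities, costs, budget and GA settings below are all hypothetical. Each genome selects which redundant instances of each service to deploy; a service works if any selected instance works, and the SCA works only if every service does.

```python
import random

random.seed(1)

# Hypothetical per-instance reliabilities and costs for 3 services,
# each with 3 candidate redundant instances (not from the paper's data)
REL  = [[0.85, 0.80, 0.75], [0.90, 0.70, 0.65], [0.80, 0.78, 0.60]]
COST = [[5, 3, 2], [6, 2, 2], [4, 3, 1]]
BUDGET = 20

def fitness(genome):
    """Series system of parallel (redundant) instances: reliability is
    the product over services of 1 - prod(1 - r_i) over chosen instances,
    with a hard budget constraint on total cost."""
    rel, cost = 1.0, 0
    for s, picks in enumerate(genome):
        if not any(picks):
            return 0.0                      # a service with no instance fails
        fail = 1.0
        for i, used in enumerate(picks):
            if used:
                fail *= 1 - REL[s][i]
                cost += COST[s][i]
        rel *= 1 - fail
    return rel if cost <= BUDGET else 0.0

def evolve(pop_size=30, gens=40):
    pop = [[[random.randint(0, 1) for _ in row] for row in REL]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]        # keep the better half
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            child = [[random.choice(p) for p in zip(ra, rb)]
                     for ra, rb in zip(a, b)]   # uniform crossover
            s = random.randrange(len(child))
            i = random.randrange(len(child[s]))
            child[s][i] ^= 1                    # point mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

    The same skeleton extends naturally to a multi-objective fitness trading reliability against cost, which is the kind of interrelationship the paper investigates.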

  14. Utah's Regional/Urban ANSS Seismic Network---Strategies and Tools for Quality Performance

    NASA Astrophysics Data System (ADS)

    Burlacu, R.; Arabasz, W. J.; Pankow, K. L.; Pechmann, J. C.; Drobeck, D. L.; Moeinvaziri, A.; Roberson, P. M.; Rusho, J. A.

    2007-05-01

    The University of Utah's regional/urban seismic network (224 stations recorded: 39 broadband, 87 strong-motion, 98 short-period) has become a model for locally implementing the Advanced National Seismic System (ANSS) because of successes in integrating weak- and strong-motion recording and in developing an effective real-time earthquake information system. Early achievements included implementing ShakeMap, ShakeCast, point-to-multipoint digital telemetry, and an Earthworm Oracle database, as well as in-situ calibration of all broadband and strong-motion stations and submission of all data and metadata into the IRIS DMC. Regarding quality performance, our experience as a medium-size regional network affirms the fundamental importance of basics such as the following: for data acquisition, deliberate attention to high-quality field installations, signal quality, and computer operations; for operational efficiency, a consistent focus on professional project management and human resources; and for customer service, healthy partnerships---including constant interactions with emergency managers, engineers, public policy-makers, and other stakeholders as part of an effective state earthquake program. (Operational cost efficiencies almost invariably involve trade-offs between personnel costs and the quality of hardware and software.) Software tools that we currently rely on for quality performance include those developed by UUSS (e.g., SAC and shell scripts for estimating local magnitudes) and software developed by other organizations such as: USGS (Earthworm), University of Washington (interactive analysis software), ISTI (SeisNetWatch), and IRIS (PDCC, BUD tools). Although there are many pieces, there is little integration. One of the main challenges we face is the availability of a complete and coherent set of tools for automatic and post-processing to assist in achieving the goals/requirements set forth by ANSS. Taking our own network---and ANSS---to the next level

  15. A simplified adaptive neural network prescribed performance controller for uncertain MIMO feedback linearizable systems.

    PubMed

    Theodorakopoulos, Achilles; Rovithakis, George A

    2015-03-01

    In this paper, the problem of deriving a continuous state-feedback controller for a class of multi-input multi-output feedback linearizable systems is considered, with special emphasis on controller simplification and reduction of the overall design complexity relative to the current state of the art. The proposed scheme achieves prescribed bounds on the transient and steady-state performance of the output tracking errors despite uncertainty in the system nonlinearities. Contrary to the current state of the art, however, only a single neural network is utilized, to approximate a scalar function that partly incorporates the system nonlinearities. Furthermore, the loss-of-controllability problem, typically introduced by approximation-model singularities, is avoided without attaching additional complexity to the control or adaptive law. Simulations are performed to verify and clarify the theoretical findings.
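
    The abstract does not give the paper's exact performance functions; a construction commonly used in prescribed performance control is an exponentially decaying bound on the tracking error plus a logarithmic error transformation, sketched here with illustrative parameters:

```python
import math

def rho(t, rho0=1.0, rho_inf=0.05, decay=2.0):
    """Exponentially decaying performance bound: the tracking error is
    required to stay inside (-rho(t), rho(t)) for all t, which encodes
    both transient (decay rate) and steady-state (rho_inf) specs."""
    return (rho0 - rho_inf) * math.exp(-decay * t) + rho_inf

def transform(e, bound):
    """Logarithmic error transformation: maps the constrained error
    e in (-bound, bound) onto the whole real line, so an ordinary
    unconstrained adaptive law can act on the transformed error."""
    x = e / bound
    return math.log((1 + x) / (1 - x))

t = 0.5
print(rho(t), transform(0.3, rho(t)))
```

    Keeping the transformed error bounded automatically keeps the raw error inside the shrinking envelope, which is how prescribed transient and steady-state bounds are enforced without solving for the bounds explicitly.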

  16. Leasing-Based Performance Analysis in Energy Harvesting Cognitive Radio Networks

    PubMed Central

    Zeng, Fanzi; Xu, Jisheng

    2016-01-01

    In this paper, we consider an energy harvesting cognitive radio network (CRN) in which both the primary user (PU) and the secondary user (SU) operate in time-slotted mode, and the SU is powered exclusively by energy harvested from the radio signal of the PU. The SU can perform either energy harvesting or data transmission, but not both, due to hardware limitations, so the entire time slot is segmented into two non-overlapping fractions. During the first sub-timeslot, the SU harvests energy from the ambient radio signal while the PU is transmitting. To obtain more revenue, the PU leases a portion of its time to the SU, and the SU transmits its own data using the harvested energy. Using convex optimization, we obtain the optimal leasing time that maximizes the SU's throughput while guaranteeing the quality of service (QoS) of the PU. To evaluate the performance of the proposed spectrum leasing scheme, we compare the utility of the PU and the energy efficiency ratio of the entire network under our framework with those of conventional strategies. The numerical simulation results demonstrate the superiority of the proposed spectrum leasing scheme. PMID:26927131
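
    A minimal numerical sketch of the time-split trade-off described above, under assumed harvesting and rate models (not the paper's exact formulation); a grid search stands in for the convex-optimization solution:

```python
import math

def su_throughput(tau, eta=0.6, p_pu=1.0, noise=0.1):
    """SU throughput for a slot split tau (harvest) / 1-tau (transmit):
    energy eta*P*tau harvested from the PU signal is spent uniformly
    over the remaining 1-tau, giving a log-rate weighted by 1-tau.
    Illustrative model with hypothetical parameters."""
    if not 0 < tau < 1:
        return 0.0
    power = eta * p_pu * tau / (1 - tau)
    return (1 - tau) * math.log2(1 + power / noise)

# Grid search over the slot split stands in for the convex solver
best_tau = max((i / 1000 for i in range(1, 1000)), key=su_throughput)
print(best_tau, su_throughput(best_tau))
```

    The trade-off is visible directly: more harvesting time raises transmit power but shrinks transmission time, so the throughput is maximized at an interior split.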

  17. Leasing-Based Performance Analysis in Energy Harvesting Cognitive Radio Networks.

    PubMed

    Zeng, Fanzi; Xu, Jisheng

    2016-02-27

    In this paper, we consider an energy harvesting cognitive radio network (CRN) in which both the primary user (PU) and the secondary user (SU) operate in time-slotted mode, and the SU is powered exclusively by energy harvested from the radio signal of the PU. The SU can perform either energy harvesting or data transmission, but not both, due to hardware limitations, so the entire time slot is segmented into two non-overlapping fractions. During the first sub-timeslot, the SU harvests energy from the ambient radio signal while the PU is transmitting. To obtain more revenue, the PU leases a portion of its time to the SU, and the SU transmits its own data using the harvested energy. Using convex optimization, we obtain the optimal leasing time that maximizes the SU's throughput while guaranteeing the quality of service (QoS) of the PU. To evaluate the performance of the proposed spectrum leasing scheme, we compare the utility of the PU and the energy efficiency ratio of the entire network under our framework with those of conventional strategies. The numerical simulation results demonstrate the superiority of the proposed spectrum leasing scheme.

  19. Using multi-class queuing network to solve performance models of e-business sites.

    PubMed

    Zheng, Xiao-ying; Chen, De-ren

    2004-01-01

    Due to e-business's variety of customers with different navigational patterns and demands, a multi-class queuing network (QN) is a natural performance model for it. Open multi-class QN models are based on the assumption that no service center is saturated under the combined load of all the classes. Several formulas are used to calculate performance measures, including throughput, residence time, queue length, response time and the average number of requests. The solution technique for closed multi-class QN models is an approximate mean value analysis (MVA) algorithm based on three key equations, because the exact algorithm has enormous time and space requirements. Mixed multi-class QN models include both open and closed classes; the open classes are first eliminated to create a closed multi-class QN so that the closed-model algorithm can be applied. Corresponding examples show how to apply the algorithms mentioned in this article, and indicate that a multi-class QN is a reasonably accurate model of an e-business site that can be solved efficiently.
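
    The open multi-class formulas referred to above can be sketched with the standard product-form results: per-center utilization summed over classes, residence time D/(1-U) at queueing centers, and Little's law for the average number of requests. The two-class, two-center workload below is hypothetical.

```python
def open_qn(arrival_rates, demands):
    """Per-class metrics for an open multi-class queueing network.
    arrival_rates[c]: class-c arrival rate (req/s)
    demands[c][k]:    class-c service demand at center k (s)
    Uses U_k = sum_c lam_c * D_ck, R_ck = D_ck / (1 - U_k) for
    queueing centers, and N_c = lam_c * R_c (Little's law)."""
    classes = range(len(arrival_rates))
    centers = range(len(demands[0]))
    util = [sum(arrival_rates[c] * demands[c][k] for c in classes)
            for k in centers]
    assert all(u < 1 for u in util), "a service center is saturated"
    resp = [sum(demands[c][k] / (1 - util[k]) for k in centers)
            for c in classes]
    n = [arrival_rates[c] * resp[c] for c in classes]   # Little's law
    return util, resp, n

# Hypothetical 2-class, 2-center e-business site (e.g., CPU and disk)
util, resp, n = open_qn([2.0, 1.0], [[0.10, 0.20], [0.15, 0.05]])
print(util, resp, n)
```

    The assertion makes the no-saturation assumption from the text explicit: if any center's combined utilization reaches 1, the open-model formulas no longer apply.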

  20. Sea water level forecasting using genetic programming and comparing the performance with Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Ali Ghorbani, Mohammad; Khatibi, Rahman; Aytek, Ali; Makarynskyy, Oleg; Shiri, Jalal

    2010-05-01

    Water level forecasting at various time intervals using records of past time series is of importance in water resources engineering and management. In the last 20 years, approaches emerging beyond conventional harmonic analysis techniques have been based on Genetic Programming (GP) and Artificial Neural Networks (ANNs). In the present study, GP is used to forecast sea level variations three time steps ahead for a set of time intervals comprising 12 h, 24 h, 5 day and 10 day intervals, using observed sea levels. The measurements from a single tide gauge at Hillarys Boat Harbor, Western Australia, were used to train and validate the employed GP for the period from December 1991 to December 2002. Statistical parameters, namely the root mean square error, correlation coefficient and scatter index, are used to measure performance. These were compared with a corresponding set of published results using an Artificial Neural Network model. The results show that both artificial intelligence methodologies perform satisfactorily and may be considered as alternatives to harmonic analysis.
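
    The three evaluation statistics named above (root mean square error, correlation coefficient, and scatter index, i.e. RMSE normalised by the mean observation) can be computed as follows; the observed/forecast values are illustrative, not the Hillarys tide-gauge data.

```python
import math

def rmse(obs, pred):
    """Root mean square error between observed and forecast series."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def correlation(obs, pred):
    """Pearson correlation coefficient."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

def scatter_index(obs, pred):
    """RMSE normalised by the mean of the observations."""
    return rmse(obs, pred) / (sum(obs) / len(obs))

# Hypothetical observed vs forecast sea levels (metres)
obs  = [1.00, 1.20, 0.90, 1.10, 1.05]
pred = [1.02, 1.15, 0.95, 1.08, 1.00]
print(rmse(obs, pred), correlation(obs, pred), scatter_index(obs, pred))
```

    Because the scatter index is dimensionless, it is the usual statistic for comparing forecast quality across the different sampling intervals (12 h through 10 day) used in the study.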