Sample records for network analysis procedure

  1. Communication Network Analysis Methods.

    ERIC Educational Resources Information Center

    Farace, Richard V.; Mabee, Timothy

    This paper reviews a variety of analytic procedures that can be applied to network data, discussing the assumptions and usefulness of each procedure when applied to the complexity of human communication. Special attention is paid to the network properties measured or implied by each procedure. Factor analysis and multidimensional scaling are among…

  2. Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach.

    DTIC Science & Technology

    1998-05-01

    Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach, by Biing T. Guan, George Z. Gertner, and Alan B… The aim was to model coverage based on past coverage. Approach: a literature survey was conducted to identify artificial neural network analysis techniques applicable for…

  3. A combined Bodian-Nissl stain for improved network analysis in neuronal cell culture.

    PubMed

    Hightower, M; Gross, G W

    1985-11-01

    Bodian and Nissl procedures were combined to stain dissociated mouse spinal cord cells cultured on coverslips. The Bodian technique stains fine neuronal processes in great detail as well as an intracellular fibrillar network concentrated around the nucleus and in proximal neurites. The Nissl stain clearly delimits neuronal cytoplasm in somata and in large dendrites. A combination of these techniques allows the simultaneous depiction of neuronal perikarya and all afferent and efferent processes. Costaining with little background staining by either procedure suggests high specificity for neurons. This procedure could be exploited for routine network analysis of cultured neurons.

  4. Evaluation of shoulder function in clavicular fracture patients after six surgical procedures based on a network meta-analysis.

    PubMed

    Huang, Shou-Guo; Chen, Bo; Lv, Dong; Zhang, Yong; Nie, Feng-Feng; Li, Wei; Lv, Yao; Zhao, Huan-Li; Liu, Hong-Mei

    2017-01-01

    Purpose: Using a network meta-analysis approach, our study aims to rank six surgical procedures, namely plate fixation (Plate), titanium elastic nail (TEN), tension band wire (TBW), hook plate (HP), reconstruction plate (RP) and Knowles pin, by comparing post-surgery Constant shoulder scores in patients with clavicular fracture (CF). Methods: A comprehensive search of electronic scientific literature databases was performed to retrieve publications investigating surgical procedures in CF under stringent eligibility criteria, and clinical experimental studies of high quality and relevance to our area of interest were selected for network meta-analysis. Statistical analyses were conducted using Stata 12.0. Results: A total of 19 studies meeting our inclusion criteria were enrolled into our network meta-analysis, representing 1164 patients who had undergone surgical procedures for CF (TEN group = 240; Plate group = 164; TBW group = 180; RP group = 168; HP group = 245; Knowles pin group = 167). The network meta-analysis revealed that RP significantly improved the Constant shoulder score in patients with CF when compared with TEN, while the post-operative Constant shoulder scores after Plate, TBW, HP, Knowles pin and TEN were similar, with no statistically significant differences. In the relative ranking of treatments by predictive probabilities of Constant shoulder scores after surgery, the surface under the cumulative ranking curve (SUCRA) value was highest for RP. Conclusion: The current network meta-analysis suggests that RP may be the optimal surgical treatment among the six interventions for patients with CF, as it can improve the shoulder score of patients with CF. Implications for Rehabilitation: RP improves shoulder joint function after the surgical procedure; it achieves stability with minimal complications after surgery and may be the optimal surgical treatment for rehabilitation of patients with CF.
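The SUCRA statistic used to rank treatments in this record can be illustrated with a short sketch. The rank probabilities below are hypothetical, not values from the study; SUCRA averages each treatment's cumulative probability of being among the top ranks.

```python
# SUCRA (surface under the cumulative ranking curve) from a rank-probability
# matrix. The probabilities below are illustrative, not taken from the study.

def sucra(rank_probs):
    """rank_probs[j][b] = P(treatment j is ranked (b+1)-th best); rows sum to 1."""
    a = len(rank_probs[0])          # number of treatments / possible ranks
    scores = []
    for row in rank_probs:
        cum = 0.0
        partial = 0.0
        for b in range(a - 1):      # cumulative probabilities for ranks 1..a-1
            cum += row[b]
            partial += cum
        scores.append(partial / (a - 1))
    return scores

# Hypothetical 3-treatment example: treatment 0 is almost surely the best.
probs = [
    [0.80, 0.15, 0.05],
    [0.15, 0.60, 0.25],
    [0.05, 0.25, 0.70],
]
print([round(s, 3) for s in sucra(probs)])   # [0.875, 0.45, 0.175]
```

A SUCRA of 1 would mean the treatment is certainly the best; 0 means certainly the worst, which is why the study reports the highest SUCRA for RP.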

  5. Communications network design and costing model users manual

    NASA Technical Reports Server (NTRS)

    Logan, K. P.; Somes, S. S.; Clark, C. A.

    1983-01-01

    The information and procedures needed to exercise the communications network design and costing model for performing network analysis are presented. Specific procedures are included for executing the model on the NASA Lewis Research Center IBM 3033 computer. The concepts, functions, and data bases relating to the model are described. Model parameters and their format specifications for running the model are detailed.

  6. Patent Network Analysis and Quadratic Assignment Procedures to Identify the Convergence of Robot Technologies

    PubMed Central

    Lee, Woo Jin; Lee, Won Kyung

    2016-01-01

    Because of the remarkable developments in robotics in recent years, technological convergence has been active in this area. We focused on finding patterns of convergence within robot technology using network analysis of patents in both the USPTO and KIPO. To identify the variables that affect convergence, we used quadratic assignment procedures (QAP). From our analysis, we observed the patent network ecology related to convergence and found technologies that have great potential to converge with other robotics technologies. The results of our study are expected to contribute to setting up convergence-based R&D policies for robotics, which can lead to new innovation. PMID:27764196
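QAP tests the association between two network matrices while preserving their row/column dependence. A minimal QAP correlation permutation test, simplified relative to the study's analysis and using made-up adjacency matrices, might look like:

```python
import random

def qap_correlation(A, B, n_perm=2000, seed=0):
    """Correlate off-diagonal entries of matrices A and B, then estimate a
    p-value by re-correlating after jointly permuting B's rows and columns."""
    n = len(A)
    def offdiag(M, perm=None):
        p = perm or list(range(n))
        return [M[p[i]][p[j]] for i in range(n) for j in range(n) if i != j]
    def corr(x, y):
        mx, my = sum(x) / len(x), sum(y) / len(y)
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
        return num / den
    x = offdiag(A)
    obs = corr(x, offdiag(B))
    rng = random.Random(seed)
    count = 0
    for _ in range(n_perm):
        perm = list(range(n))
        rng.shuffle(perm)                       # joint row/column permutation
        if corr(x, offdiag(B, perm)) >= obs:
            count += 1
    return obs, count / n_perm

# Toy 4-node networks; B identical to A, so the observed correlation is 1.0.
A = [[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]]
obs, p = qap_correlation(A, A)
print(round(obs, 3))   # 1.0
```

The joint permutation of rows and columns is what distinguishes QAP from an ordinary permutation test: it keeps each network's structure intact while breaking the correspondence between the two.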

  7. Fault Analysis of Space Station DC Power Systems-Using Neural Network Adaptive Wavelets to Detect Faults

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Wang, Yanchun; Dolce, James L.

    1997-01-01

    This paper describes the application of neural network adaptive wavelets to fault diagnosis of the space station power system. The method combines the wavelet transform with a neural network by incorporating daughter wavelets into the weights. Therefore, the wavelet transform and the neural network training procedure become one stage, which avoids the complex computation of wavelet parameters and makes the procedure more straightforward. The simulation results show that the proposed method is very efficient for the identification of fault locations.

  8. NetIntel: A Database for Manipulation of Rich Social Network Data

    DTIC Science & Technology

    2005-03-03

    between entities in a social or organizational system. For most of its history, social network analysis has operated on a notion of a dataset - a clearly… and procedural), as well as stored procedure and trigger capabilities. For the current implementation, we have chosen the PostgreSQL [1] database. Of the… data and easy-to-use facilities for export of data into analysis tools, as well as online browsing and data entry. References: [1] PostgreSQL

  9. An Exploratory Study Examining the Feasibility of Using Bayesian Networks to Predict Circuit Analysis Understanding

    ERIC Educational Resources Information Center

    Chung, Gregory K. W. K.; Dionne, Gary B.; Kaiser, William J.

    2006-01-01

    Our research question was whether we could develop a feasible technique, using Bayesian networks, to diagnose gaps in student knowledge. Thirty-four college-age participants completed tasks designed to measure conceptual knowledge, procedural knowledge, and problem-solving skills related to circuit analysis. A Bayesian network was used to model…
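A minimal sketch of the kind of inference a Bayesian network supports for diagnosing knowledge gaps, reduced here to a single latent skill node with an observed item response; the slip and guess parameters are illustrative assumptions, not values from the study:

```python
# Minimal two-node Bayesian network: a latent skill node K and an observed
# item response R, with slip and guess parameters. All numbers here are
# illustrative assumptions, not values from the study.

def posterior_mastery(prior, slip, guess, correct):
    """P(K = mastered | observed response), by Bayes' rule."""
    p_correct_given_k = 1 - slip       # a master answers correctly unless slipping
    p_correct_given_notk = guess       # a non-master is correct only by guessing
    if correct:
        num = prior * p_correct_given_k
        den = num + (1 - prior) * p_correct_given_notk
    else:
        num = prior * slip
        den = num + (1 - prior) * (1 - guess)
    return num / den

# A correct answer raises the mastery estimate from a neutral prior.
p = posterior_mastery(0.5, slip=0.1, guess=0.2, correct=True)
print(round(p, 3))   # 0.818
```

A full diagnostic network chains many such skill nodes to many items; the update rule at each observed node is the same Bayes step shown here.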

  10. Neural networks for structural design - An integrated system implementation

    NASA Technical Reports Server (NTRS)

    Berke, Laszlo; Hafez, Wassim; Pao, Yoh-Han

    1992-01-01

    The development of powerful automated procedures to aid the creative designer is becoming increasingly critical for complex design tasks. In the work described here Artificial Neural Nets are applied to acquire structural analysis and optimization domain expertise. Based on initial instructions from the user an automated procedure generates random instances of structural analysis and/or optimization 'experiences' that cover a desired domain. It extracts training patterns from the created instances, constructs and trains an appropriate network architecture and checks the accuracy of net predictions. The final product is a trained neural net that can estimate analysis and/or optimization results instantaneously.

  11. The role of simulation in the design of a neural network chip

    NASA Technical Reports Server (NTRS)

    Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.

    1993-01-01

    An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.

  12. On the efficacy of using the transfer-controlled procedure during periods of STP processor overloads in SS7 networks

    NASA Astrophysics Data System (ADS)

    Rumsewicz, Michael

    1994-04-01

    In this paper, we examine call completion performance, rather than message throughput, in a Common Channel Signaling network in which the processing resources, not the transmission resources, of a Signaling Transfer Point (STP) are overloaded. Specifically, we perform a transient analysis, via simulation, of a network consisting of a single Central Processor-based STP connecting many local exchanges. We consider the efficacy of using the Transfer Controlled (TFC) procedure when the network call attempt rate exceeds the processing capability of the STP. We find the following: (1) the success of the control depends critically on the rate at which TFCs are sent; (2) use of the TFC procedure in the event of processor overload can provide reasonable call completion rates.

  13. Decimal Fraction Arithmetic: Logical Error Analysis and Its Validation.

    ERIC Educational Resources Information Center

    Standiford, Sally N.; And Others

    This report illustrates procedures of item construction for addition and subtraction examples involving decimal fractions. Using a procedural network of skills required to solve such examples, an item characteristic matrix of skills analysis was developed to describe the characteristics of the content domain by projected student difficulties. Then…

  14. Assessment of Matrix Multiplication Learning with a Rule-Based Analytical Model--"A Bayesian Network Representation"

    ERIC Educational Resources Information Center

    Zhang, Zhidong

    2016-01-01

    This study explored an alternative assessment procedure to examine learning trajectories of matrix multiplication. It took rule-based analytical and cognitive task analysis methods specifically to break down operation rules for a given matrix multiplication. Based on the analysis results, a hierarchical Bayesian network, an assessment model,…

  15. INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Synchronization in Complex Networks with Multiple Connections

    NASA Astrophysics Data System (ADS)

    Wu, Qing-Chu; Fu, Xin-Chu; Sun, Wei-Gang

    2010-01-01

    In this paper a class of networks with multiple connections is discussed. The multiple connections comprise two different types of links between nodes in complex networks. For this new model, we give a simple generating procedure. Furthermore, we investigate dynamical synchronization behavior in a delayed two-layer network, giving corresponding theoretical analysis and numerical examples.

  16. A Model of Network Porosity

    DTIC Science & Technology

    2016-11-09

    the model does not become a full probabilistic attack graph analysis of the network, whose data requirements are currently unrealistic. The second… flow. - Untrustworthy persons may intentionally try to exfiltrate known sensitive data to external networks. People may also unintentionally leak… section will provide details on the components, procedures, data requirements, and parameters required to instantiate the network porosity model. These

  17. Span graphics display utilities handbook, first edition

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L.; Green, J. L.; Newman, R.

    1985-01-01

    The Space Physics Analysis Network (SPAN) is a computer network connecting scientific institutions throughout the United States. This network provides an avenue for timely, correlative research between investigators in a multidisciplinary approach to space physics studies. An objective in the development of SPAN is to make available direct and simplified procedures that scientists can use, without specialized training, to exchange information over the network. Information exchanges include raw and processed data, analysis programs, correspondence, documents, and graphic images. This handbook details procedures that can be used to exchange graphic images over SPAN. The intent is to periodically update this handbook to reflect the constantly changing facilities available on SPAN. The utilities described within reflect an earnest attempt to provide useful descriptions of working utilities that can be used to transfer graphic images across the network. Whether the graphic images represent satellite observations or theoretical modeling, and whether they are of device-dependent or device-independent type, the SPAN graphics display utilities handbook will be the user's guide to graphic image exchange.

  18. Vehicle Signal Analysis Using Artificial Neural Networks for a Bridge Weigh-in-Motion System

    PubMed Central

    Kim, Sungkon; Lee, Jungwhee; Park, Min-Seok; Jo, Byung-Wan

    2009-01-01

    This paper describes the procedures for development of signal analysis algorithms using artificial neural networks for Bridge Weigh-in-Motion (B-WIM) systems. Through the analysis procedure, the extraction of information concerning heavy traffic vehicles such as weight, speed, and number of axles from the time domain strain data of the B-WIM system was attempted. As one of the several possible pattern recognition techniques, an Artificial Neural Network (ANN) was employed since it could effectively include dynamic effects and bridge-vehicle interactions. A number of vehicle traveling experiments with sufficient load cases were executed on two different types of bridges, a simply supported pre-stressed concrete girder bridge and a cable-stayed bridge. Different types of WIM systems such as high-speed WIM or low-speed WIM were also utilized during the experiments for cross-checking and to validate the performance of the developed algorithms. PMID:22408487

  19. Vehicle Signal Analysis Using Artificial Neural Networks for a Bridge Weigh-in-Motion System.

    PubMed

    Kim, Sungkon; Lee, Jungwhee; Park, Min-Seok; Jo, Byung-Wan

    2009-01-01

    This paper describes the procedures for development of signal analysis algorithms using artificial neural networks for Bridge Weigh-in-Motion (B-WIM) systems. Through the analysis procedure, the extraction of information concerning heavy traffic vehicles such as weight, speed, and number of axles from the time domain strain data of the B-WIM system was attempted. As one of the several possible pattern recognition techniques, an Artificial Neural Network (ANN) was employed since it could effectively include dynamic effects and bridge-vehicle interactions. A number of vehicle traveling experiments with sufficient load cases were executed on two different types of bridges, a simply supported pre-stressed concrete girder bridge and a cable-stayed bridge. Different types of WIM systems such as high-speed WIM or low-speed WIM were also utilized during the experiments for cross-checking and to validate the performance of the developed algorithms.

  20. Development of analytic intermodal freight networks for use within a GIS

    DOT National Transportation Integrated Search

    1997-05-01

    The paper discusses the practical issues involved in constructing intermodal freight networks that can be used within GIS platforms to support inter-regional freight routing and subsequent (for example, commodity flow) analysis. The procedures descri...

  1. Network analysis for the visualization and analysis of qualitative data.

    PubMed

    Pokorny, Jennifer J; Norman, Alex; Zanesco, Anthony P; Bauer-Wu, Susan; Sahdra, Baljinder K; Saron, Clifford D

    2018-03-01

    We present a novel manner in which to visualize the coding of qualitative data that enables representation and analysis of connections between codes using graph theory and network analysis. Network graphs are created from codes applied to a transcript or audio file using the code names and their chronological location. The resulting network is a representation of the coding data that characterizes the interrelations of codes. This approach enables quantification of qualitative codes using network analysis and facilitates examination of associations of network indices with other quantitative variables using common statistical procedures. Here, as a proof of concept, we applied this method to a set of interview transcripts that had been coded in 2 different ways and the resultant network graphs were examined. The creation of network graphs allows researchers an opportunity to view and share their qualitative data in an innovative way that may provide new insights and enhance transparency of the analytical process by which they reach their conclusions. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
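The construction the authors describe, building a network from code names and their chronological location, can be sketched under simple assumptions (hypothetical codes; adjacency in time taken as the linking rule):

```python
from collections import Counter

# Sketch of the idea under simple assumptions: codes applied to a transcript,
# in chronological order, are linked when they occur adjacently; edge weights
# count how often each pair of codes follows one another.
coded_transcript = ["calm", "focus", "distraction", "focus", "calm", "focus"]

edges = Counter()
for a, b in zip(coded_transcript, coded_transcript[1:]):
    if a != b:                               # ignore self-loops
        edges[tuple(sorted((a, b)))] += 1

for (a, b), w in edges.most_common():
    print(f"{a} -- {b} (weight {w})")

# Network indices (here, weighted degree) can then be correlated with other
# quantitative variables using common statistical procedures.
degree = Counter()
for (a, b), w in edges.items():
    degree[a] += w
    degree[b] += w
print(degree["focus"])   # 5
```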

  2. Visibility Graph Based Time Series Analysis.

    PubMed

    Stephen, Mutua; Gu, Changgui; Yang, Huijie

    2015-01-01

    Network-based time series analysis has made considerable achievements in recent years. By mapping mono- or multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states, and the successively occurring states are linked. This procedure converts a time series into a temporal network and, at the same time, a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks.
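A minimal sketch of the natural visibility criterion that underlies this family of methods (the paper additionally maps series segments to separate graphs and links successive states, which is not reproduced here):

```python
# Natural visibility graph: two samples (i, y_i) and (j, y_j) are linked if
# every sample strictly between them lies below the straight line joining them.
def visibility_graph(series):
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < series[i] + (series[j] - series[i]) * (k - i) / (j - i)
                   for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

# The peak at index 1 blocks visibility between samples 0 and 2, 0 and 3.
print(sorted(visibility_graph([1.0, 4.0, 2.0, 3.0])))
# [(0, 1), (1, 2), (1, 3), (2, 3)]
```

Adjacent samples are always mutually visible, so the resulting graph is always connected; peaks become hubs, which is what makes the mapping sensitive to series structure.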

  3. Impact parameter determination in experimental analysis using a neural network

    NASA Astrophysics Data System (ADS)

    Haddad, F.; Hagel, K.; Li, J.; Mdeiwayeh, N.; Natowitz, J. B.; Wada, R.; Xiao, B.; David, C.; Freslier, M.; Aichelin, J.

    1997-03-01

    A neural network is used to determine the impact parameter in 40Ca+40Ca reactions. The effect of the detection efficiency as well as the model dependence of the training procedure has been studied carefully. An overall improvement of the impact parameter determination of 25% is obtained using this technique. The analysis of Amphora 40Ca+40Ca data at 35 MeV per nucleon using a neural network shows two well-separated classes of events among the selected ``complete'' events.

  4. Reconstruction of network topology using status-time-series data

    NASA Astrophysics Data System (ADS)

    Pandey, Pradumn Kumar; Badarla, Venkataramana

    2018-01-01

    Uncovering the heterogeneous connection pattern of a networked system from the available status-time-series (STS) data of a dynamical process on the network is of great interest in network science and is known as a reverse engineering problem. Dynamical processes on a network are affected by the structure of the network. The dependency between the diffusion dynamics and the structure of the network can be utilized to retrieve the connection pattern from the diffusion data. Information about the network structure can help to devise the control of dynamics on the network. In this paper, we consider the problem of network reconstruction from the available STS data using matrix analysis. The proposed method of network reconstruction is tested successfully under susceptible-infected-susceptible (SIS) diffusion dynamics on real-world and computer-generated benchmark networks. The high accuracy and efficiency of the proposed reconstruction procedure define the novelty of the method. Our proposed method outperforms the compressed sensing theory (CST) based method of network reconstruction using STS data. Further, the same procedure of network reconstruction is applied to weighted networks, where the ordering of the edges is identified with high accuracy.
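The paper's matrix-analysis algorithm is not reproduced here; as a generic illustration of the reverse-engineering idea, this sketch scores candidate directed edges by how often a susceptible node becomes infected immediately after another node was infected (an illustrative heuristic, not the authors' method):

```python
# Illustrative reverse-engineering heuristic (not the paper's matrix method):
# score an ordered pair (j, i) by how often i turns infected at t+1 while
# j was infected at t, normalized by the number of such opportunities.
def edge_scores(sts):
    """sts[t][i] is 1 if node i is infected at time t, else 0."""
    n = len(sts[0])
    hits = [[0] * n for _ in range(n)]
    opps = [[0] * n for _ in range(n)]
    for t in range(len(sts) - 1):
        for i in range(n):
            if sts[t][i] == 0:                 # i is susceptible: could be infected
                for j in range(n):
                    if j != i and sts[t][j] == 1:
                        opps[j][i] += 1
                        if sts[t + 1][i] == 1:
                            hits[j][i] += 1
    return [[hits[j][i] / opps[j][i] if opps[j][i] else 0.0
             for i in range(n)] for j in range(n)]

# Toy STS: node 0's infection is always followed by node 1's, never node 2's.
sts = [
    [1, 0, 0],
    [1, 1, 0],
    [1, 1, 0],
    [0, 1, 0],
]
scores = edge_scores(sts)
print(scores[0][1], scores[0][2])   # 1.0 0.0
```

Thresholding such a score matrix yields a candidate adjacency matrix; with longer series the scores separate true edges from coincidental co-infections more cleanly.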

  5. Analysis of large power systems

    NASA Technical Reports Server (NTRS)

    Dommel, H. W.

    1975-01-01

    Computer-oriented power systems analysis procedures in the electric utilities are surveyed. The growth of electric power systems is discussed along with the solution of sparse network equations, power flow, and stability studies.

  6. A Network Analysis of Concept Maps of Triangle Concepts

    ERIC Educational Resources Information Center

    Haiyue, Jin; Khoon Yoong, Wong

    2010-01-01

    Mathematics educators and mathematics curriculum standards have emphasised the importance of constructing interconnectedness among mathematical concepts ("conceptual understanding") instead of only the ability to carry out standard procedures in an isolated fashion. Researchers have attempted to assess the knowledge networks in…

  7. Functional Interaction Network Construction and Analysis for Disease Discovery.

    PubMed

    Wu, Guanming; Haw, Robin

    2017-01-01

    Network-based approaches project seemingly unrelated genes or proteins onto a large-scale network context, thereby providing a holistic visualization and analysis platform for genomic data generated from high-throughput experiments, reducing the dimensionality of the data via network modules and increasing statistical power. Based on the Reactome database, the most popular and comprehensive open-source biological pathway knowledgebase, we have developed a highly reliable protein functional interaction network covering around 60% of total human genes, and an app called ReactomeFIViz for Cytoscape, the most popular biological network visualization and analysis platform. In this chapter, we describe the detailed procedures by which this functional interaction network is constructed: integrating multiple external data sources, extracting functional interactions from human-curated pathway databases, building a machine learning classifier called a Naïve Bayesian Classifier, predicting interactions with the trained classifier, and finally constructing the functional interaction database. We also provide an example of how to use ReactomeFIViz for network-based data analysis of a list of genes.

  8. Evaluation of the streamflow-gaging network of Alaska in providing regional streamflow information

    USGS Publications Warehouse

    Brabets, Timothy P.

    1996-01-01

    In 1906, the U.S. Geological Survey (USGS) began operating a network of streamflow-gaging stations in Alaska. The primary purpose of the streamflow- gaging network has been to provide peak flow, average flow, and low-flow characteristics to a variety of users. In 1993, the USGS began a study to evaluate the current network of 78 stations. The objectives of this study were to determine the adequacy of the existing network in predicting selected regional flow characteristics and to determine if providing additional streamflow-gaging stations could improve the network's ability to predict these characteristics. Alaska was divided into six distinct hydrologic regions: Arctic, Northwest, Southcentral, Southeast, Southwest, and Yukon. For each region, historical and current streamflow data were compiled. In Arctic, Northwest, and Southwest Alaska, insufficient data were available to develop regional regression equations. In these areas, proposed locations of streamflow-gaging stations were selected by using clustering techniques to define similar areas within a region and by spatial visual analysis using the precipitation, physiographic, and hydrologic unit maps of Alaska. Sufficient data existed in Southcentral and Southeast Alaska to use generalized least squares (GLS) procedures to develop regional regression equations to estimate the 50-year peak flow, annual average flow, and a low-flow statistic. GLS procedures were also used for Yukon Alaska but the results should be used with caution because the data do not have an adequate spatial distribution. Network analysis procedures were used for the Southcentral, Southeast, and Yukon regions. Network analysis indicates the reduction in the sampling error of the regional regression equation that can be obtained given different scenarios. For Alaska, a 10-year planning period was used. 
One scenario showed the results of continuing the current network with no additional gaging stations and another showed the results of adding gaging stations to the network. With the exception of the annual average discharge equation for Southeast Alaska, adding gaging stations in all three regions reduced the sampling error more than not adding them did. The proposed streamflow-gaging network for Alaska consists of 308 gaging stations, of which 32 are designated as index stations. If the proposed network cannot be implemented in its entirety, a lower-cost alternative would be to establish the index stations and to implement the network for a particular region.

  9. Manufacturing error sensitivity analysis and optimal design method of cable-network antenna structures

    NASA Astrophysics Data System (ADS)

    Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye

    2016-03-01

    Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and can even result in failure of the structure in service. In this paper, an analytical sensitivity analysis method for the shape precision and cable tensions with respect to the parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is verified against those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that slender front net cables, thick tension ties, relatively slender boundary cables and a high tension level can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on antenna performance.
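The finite-difference validation step mentioned in this record can be sketched generically; the response function below is a stand-in for illustration, not the antenna model:

```python
# Generic sanity check of an analytic sensitivity against a central finite
# difference, in the spirit of the validation described above. The response
# function is a stand-in, not the cable-network antenna model.
def response(x):
    return x**3 + 2.0 * x              # stand-in performance measure

def analytic_sensitivity(x):
    return 3.0 * x**2 + 2.0            # d(response)/dx, derived by hand

def fd_sensitivity(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2.0 * h)   # central difference, O(h^2) error

x0 = 1.5
print(abs(analytic_sensitivity(x0) - fd_sensitivity(response, x0)) < 1e-5)  # True
```

The central difference converges quadratically in the step size, so agreement to several digits at a moderate step is strong evidence that the analytic sensitivities are correct.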

  10. Application of the Intuitionistic Fuzzy InterCriteria Analysis Method with Triples to a Neural Network Preprocessing Procedure

    PubMed Central

    Atanassova, Vassia; Sotirova, Evdokia; Doukovska, Lyubka; Bureva, Veselina; Mavrov, Deyan; Tomov, Jivko

    2017-01-01

    The approach of InterCriteria Analysis (ICA) was applied with the aim of reducing the set of variables at the input of a neural network, taking into account the fact that their large number increases the number of neurons in the network, making such networks unusable for hardware implementation. Here, for the first time, with the help of the ICA method, correlations between triples of the input parameters for training the neural networks were obtained. In this case, we use the ICA approach for data preprocessing, which may reduce the total time for training the neural networks and, hence, the time for the network's processing of data and images. PMID:28874908

  11. Structure-function analysis of genetically defined neuronal populations.

    PubMed

    Groh, Alexander; Krieger, Patrik

    2013-10-01

    Morphological and functional classification of individual neurons is a crucial aspect of the characterization of neuronal networks. Systematic structural and functional analysis of individual neurons is now possible using transgenic mice with genetically defined neurons that can be visualized in vivo or in brain slice preparations. Genetically defined neurons are useful for studying a particular class of neurons and also for more comprehensive studies of the neuronal content of a network. Specific subsets of neurons can be identified by fluorescence imaging of enhanced green fluorescent protein (eGFP) or another fluorophore expressed under the control of a cell-type-specific promoter. The advantages of such genetically defined neurons are not only their homogeneity and suitability for systematic descriptions of networks, but also their tremendous potential for cell-type-specific manipulation of neuronal networks in vivo. This article describes a selection of procedures for visualizing and studying the anatomy and physiology of genetically defined neurons in transgenic mice. We provide information about basic equipment, reagents, procedures, and analytical approaches for obtaining three-dimensional (3D) cell morphologies and determining the axonal input and output of genetically defined neurons. We exemplify with genetically labeled cortical neurons, but the procedures are applicable to other brain regions with little or no alterations.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Townsend, D.W.; Linnhoff, B.

    In Part I, criteria for heat engine and heat pump placement in chemical process networks were derived, based on the "temperature interval" (T.I.) analysis of the heat exchanger network problem. Using these criteria, this paper gives a method for identifying the best outline design for any combined system of chemical process, heat engines, and heat pumps. The method eliminates inferior alternatives early and positively leads on to the most appropriate solution. A graphical procedure based on the T.I. analysis forms the heart of the approach, and the calculations involved are simple enough to be carried out on, say, a programmable calculator. Application to a case study is demonstrated. Optimization methods based on this procedure are currently under research.
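The temperature-interval analysis referred to here rests on the heat cascade of the "problem table" algorithm; a minimal sketch with assumed interval loads (illustrative numbers, in, say, MW):

```python
# Sketch of the temperature-interval heat cascade underlying T.I. analysis
# (the "problem table" algorithm); the interval loads are assumed values.
def heat_cascade(interval_loads):
    """interval_loads: net heat surplus (+) or deficit (-) per temperature
    interval, hottest interval first. Returns (min hot utility, cascade)."""
    cascade, running = [], 0.0
    for load in interval_loads:
        running += load
        cascade.append(running)
    hot_utility = max(0.0, -min(cascade))   # lift the cascade so no flow goes negative
    return hot_utility, [hot_utility + c for c in cascade]

q_hot, flows = heat_cascade([-2.0, 3.0, -4.0, 1.5])
print(q_hot)    # 3.0  (minimum hot utility)
print(flows)    # [1.0, 4.0, 0.0, 1.5]  -> the pinch sits where the flow is zero
```

The zero-flow boundary is the pinch; the placement criteria derived in Part I amount to running heat engines entirely above or entirely below that boundary, and heat pumps across it.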

  13. Progression In The Concepts Of Cognitive Sense Wireless Networks - An Analysis Report

    NASA Astrophysics Data System (ADS)

    Ajay, V. P.; Nesasudha, M.

    2017-10-01

    This paper illustrates the conception of networks, their primary goals (from day one to the present), the changes they had to endure to reach their present form, and the developments in progress and in store for further standardization. The analysis gives particular attention to the specifics of Cognitive Radio Networks, which make use of dynamic spectrum access procedures framed for better utilization of the available spectrum resources. The main conceptual difficulties and current research trends are also discussed in terms of real-time implementation.

  14. Visibility Graph Based Time Series Analysis

    PubMed Central

    Stephen, Mutua; Gu, Changgui; Yang, Huijie

    2015-01-01

    Network-based time series analysis has made considerable achievements in recent years. By mapping mono- or multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, providing only limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series into a temporal network and, at the same time, a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks. PMID:26571115
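    The series-to-graph mapping this family of methods builds on (the natural visibility criterion of Lacasa et al.) can be sketched as follows; segmenting the series and linking the successive state graphs, as the paper proposes, would be layered on top of this basic step.

```python
def visibility_graph(series):
    """Map a time series to a natural visibility graph: points i and j
    are linked if every intermediate point lies strictly below the
    straight line connecting (i, y_i) and (j, y_j)."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            # Consecutive points are always mutually visible
            # (the inner generator is empty, so all(...) is True).
            visible = all(
                series[k] < series[j]
                + (series[i] - series[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.add((i, j))
    return edges
```

For the series [1, 4, 2, 3], the peak at index 1 blocks the view between index 0 and indices 2 and 3, so those pairs get no edge.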

  15. Self-organization in neural networks - Applications in structural optimization

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat; Fu, B.; Berke, Laszlo

    1993-01-01

    The present paper discusses the applicability of ART (Adaptive Resonance Theory) networks, and of Hopfield and Elastic networks, to problems of structural analysis and design. A characteristic of these network architectures is the ability to classify patterns presented as inputs into specific categories. The categories may themselves represent distinct procedural solution strategies. The paper shows how this property can be adapted to the structural analysis and design problem. A second application is the use of Hopfield and Elastic networks in optimization problems. Of particular interest are problems characterized by the presence of discrete and integer design variables. The parallel computing architecture that is typical of neural networks is shown to be effective in such problems. Results of preliminary implementations in structural design problems are also included in the paper.

  16. Communication Network Integration and Group Uniformity in a Complex Organization.

    ERIC Educational Resources Information Center

    Danowski, James A.; Farace, Richard V.

    This paper contains a discussion of the limitations of research on group processes in complex organizations and the manner in which a procedure for network analysis in on-going systems can reduce problems. The research literature on group uniformity processes and on theoretical models of these processes from an information processing perspective…

  17. Equilibrium paths analysis of materials with rheological properties by using the chaos theory

    NASA Astrophysics Data System (ADS)

    Bednarek, Paweł; Rządkowski, Jan

    2018-01-01

    Numerical equilibrium-path analysis of a material with random rheological properties using standard procedures and specialist computer programs was not successful. A proper solution for the analysed heuristic model of the material was obtained using elements of chaos theory and neural networks. The paper discusses the mathematical basis of the computer programs used and elaborates on the properties of the attractor employed in the analysis. Results of the numerical analysis are presented in both numerical and graphical form for the procedures used.

  18. Improvements of the Regional Seismic network of Northwestern Italy in the framework of ALCoTra program activities

    NASA Astrophysics Data System (ADS)

    Bosco, Fabrizio

    2014-05-01

    Arpa Piemonte (Regional Agency for Environmental Protection), in partnership with the University of Genoa, manages the regional seismic network, which is part of the Regional Seismic network of Northwestern Italy (RSNI). The network has been operating since the 1980s and, over the years, it has developed in technological features, analysis procedures and geographical coverage. In particular, in recent years the network has been further enhanced through the integration of Swiss and French stations installed in the cross-border area. The environmental context enables the installation of sensors in sites with good conditions as regards ambient noise and limited local amplification effects (as proved by PSD analysis, signal quality monitoring via PQLX, and H/V analysis). The instrumental equipment consists of broadband and very-broadband sensors (Nanometrics Trillium 40" and 240") and different technological solutions for real-time signal transmission (cable, satellite, GPRS), according to the local environment, with redundant connections and experimental innovative systems. Digital transmission and acquisition systems operate through standard protocols (Nanometrics, SeedLink), with redundancy across data centers (Genoa, Turin, Rome). Both real-time automatic and manual operational procedures are in use for signal analysis (event detection, picking, and determination of focal parameters and ground shaking). In the framework of the cross-border cooperation program ALCoTra (http://www.interreg-alcotra.org), approved by the European Commission, several projects have been developed to improve the performance of the seismic monitoring systems used by the partners (Arpa Piemonte, Aosta Valley Region, CNRS, Joseph Fourier University). The cross-border context highlights, first of all, the importance of signal sharing (from 14 to 23 stations in the narrow French-Italian border area, an increase of over 50%) and of coordination during the planning and installation of new stations in the area.
In the ongoing ALCoTra project "CASSAT" (Coordination and Analysis of Alpine Trans-border Seismic Surveillance), we evaluate the improvement of monitoring system performance in terms of localization precision and number of detections. Furthermore, we update the procedures for the production of ground shaking maps, with installation of accelerometers and integration of newly available data for site-effects assessment (VS30 map, FA-VS30 correlations by numerical simulations of seismic response), determined for the specific regional context from geophysical survey data and geological analysis. As a consequence of the increase in available data due to new station installations and recently recorded events, a new local magnitude scaling law is calibrated for the area. We also develop a parametric methodology to improve the network's real-time localization procedures in Northwestern Italy. The area, surrounded by the Western Alps and Northern Apennines, presents a complex system of lithospheric structures characterized by strong heterogeneities in various physical parameters (Ivrea Body, subducting European lithosphere, Ligurian Sea Moho, Po Valley deposits). We work with a localization algorithm (Hypoinverse-2000) suitable for such a heterogeneous context, adopting multi-1D crustal velocity models linked to epicentral coordinates. In this analysis, we first build velocity models integrating several available geophysical and geo-structural datasets; we then test both the models and the algorithm parameters jointly with specifically developed automatic iterative procedures, through batch scripting, database, GIS and statistical analysis tools.

  19. Influence of the time scale on the construction of financial networks.

    PubMed

    Emmert-Streib, Frank; Dehmer, Matthias

    2010-09-30

    In this paper we investigate the definition and formation of financial networks. Specifically, we study the influence of the time scale on their construction. For our analysis we use correlation-based networks obtained from the daily closing prices of stock market data. More precisely, we use the stocks that currently comprise the Dow Jones Industrial Average (DJIA) and estimate financial networks where nodes correspond to stocks and edges correspond to non-vanishing correlation coefficients; that is, we include an edge in the network only if a correlation coefficient is statistically significantly different from zero. This construction procedure results in unweighted, undirected networks. By separating the time series of stock prices into non-overlapping intervals, we obtain one network per interval. The length of these intervals corresponds to the time scale of the data, whose influence on the construction of the networks is studied in this paper. Numerical analysis of four different measures as a function of the time scale used to construct the networks allows us to gain insights into the intrinsic time scale of the stock market with respect to a meaningful graph-theoretical analysis.
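    A minimal sketch of this construction procedure might look as follows. The price series are hypothetical, and the Fisher z-transform significance test is an assumption for illustration; the paper does not specify its exact test.

```python
import math

def correlation_network(prices, z_crit=1.96):
    """Build an unweighted, undirected network from price series:
    stocks are nodes, and an edge is added only when the Pearson
    correlation is significantly different from zero
    (Fisher z-transform, |z| > z_crit)."""
    names = list(prices)
    n_obs = len(next(iter(prices.values())))

    def pearson(x, y):
        mx, my = sum(x) / len(x), sum(y) / len(y)
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return sxy / (sx * sy)

    edges = set()
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            r = pearson(prices[u], prices[v])
            z = math.atanh(r) * math.sqrt(n_obs - 3)  # Fisher transform
            if abs(z) > z_crit:
                edges.add((u, v))
    return edges
```

Repeating this on each non-overlapping window of the price series, as the paper does, yields one such network per interval.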

  20. Review: visual analytics of climate networks

    NASA Astrophysics Data System (ADS)

    Nocke, T.; Buschmann, S.; Donges, J. F.; Marwan, N.; Schulz, H.-J.; Tominski, C.

    2015-09-01

    Network analysis has become an important approach in studying complex spatiotemporal behaviour within geophysical observation and simulation data. This new field produces increasing numbers of large geo-referenced networks to be analysed. Particular focus lies currently on the network analysis of the complex statistical interrelationship structure within climatological fields. The standard procedure for such network analyses is the extraction of network measures in combination with static standard visualisation methods. Existing interactive visualisation methods and tools for geo-referenced network exploration are often either not known to the analyst or their potential is not fully exploited. To fill this gap, we illustrate how interactive visual analytics methods in combination with geovisualisation can be tailored for visual climate network investigation. Therefore, the paper provides a problem analysis relating the multiple visualisation challenges to a survey undertaken with network analysts from the research fields of climate and complex systems science. Then, as an overview for the interested practitioner, we review the state-of-the-art in climate network visualisation and provide an overview of existing tools. As a further contribution, we introduce the visual network analytics tools CGV and GTX, providing tailored solutions for climate network analysis, including alternative geographic projections, edge bundling, and 3-D network support. Using these tools, the paper illustrates the application potentials of visual analytics for climate networks based on several use cases including examples from global, regional, and multi-layered climate networks.

  2. A TOTP-based enhanced route optimization procedure for mobile IPv6 to reduce handover delay and signalling overhead.

    PubMed

    Shah, Peer Azmat; Hasbullah, Halabi B; Lawal, Ibrahim A; Aminu Mu'azu, Abubakar; Tang Jung, Low

    2014-01-01

    Due to the proliferation of handheld mobile devices, multimedia applications like Voice over IP (VoIP), video conferencing, network music, and online gaming have gained popularity in recent years. These applications are well known to be delay sensitive and resource demanding. The mobility of mobile devices running these applications across different networks causes delay and service disruption. Mobile IPv6 was proposed to provide mobility support to IPv6-based mobile nodes for continuous communication when they roam across different networks. However, the Route Optimization procedure in Mobile IPv6 involves the verification of the mobile node's reachability at the home address and at the care-of address (home test and care-of test), which results in higher handover delays and signalling overhead. This paper presents an enhanced procedure, time-based one-time password Route Optimization (TOTP-RO), for Mobile IPv6 Route Optimization that uses a shared secret Token and a time-based one-time password (TOTP), along with verification of the mobile node via direct communication and maintenance of the correspondent node's compatibility status. TOTP-RO was implemented in the network simulator NS-2, and an analytical evaluation was also performed. The analysis showed that TOTP-RO has lower handover delays, packet loss, and signalling overhead, with an increased level of security, as compared to the standard Mobile IPv6 Return-Routability-based Route Optimization (RR-RO).
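    The TOTP primitive that the protocol builds on is standardized in RFC 6238. A minimal sketch of the one-time-password computation itself (not the full TOTP-RO signalling) is:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: float, step: int = 30, digits: int = 8) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then
    dynamic truncation to a short numeric one-time password."""
    counter = int(unix_time // step)
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both ends derive the same password from a shared secret and the current time step, the correspondent node can verify the mobile node without the home-test/care-of-test round trips, which is the source of the reported delay savings.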

  3. A new automated spectral feature extraction method and its application in spectral classification and defective spectra recovery

    NASA Astrophysics Data System (ADS)

    Wang, Ke; Guo, Ping; Luo, A.-Li

    2017-03-01

    Spectral feature extraction is a crucial procedure in automated spectral analysis. This procedure starts from the spectral data and produces informative and non-redundant features, facilitating the subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with application in spectral classification and defective-spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra at different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way, without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method is superior in overall performance, and its computational cost is significantly lower than that of other methods. The proposed method can be regarded as a valid new general-purpose feature extraction method for various tasks in spectral data analysis.

  4. An ANOVA approach for statistical comparisons of brain networks.

    PubMed

    Fraiman, Daniel; Fraiman, Ricardo

    2018-03-16

    The study of brain networks has developed extensively over the last couple of decades. By contrast, techniques for the statistical analysis of these networks are less developed. In this paper, we focus on the statistical comparison of brain networks in a nonparametric framework and discuss the associated detection and identification problems. We tested network differences between groups with an analysis of variance (ANOVA) test we developed specifically for networks. We also propose and analyse the behaviour of a new statistical procedure designed to identify different subnetworks. As an example, we show the application of this tool in resting-state fMRI data obtained from the Human Connectome Project. We identify, among other variables, that the amount of sleep the days before the scan is a relevant variable that must be controlled. Finally, we discuss the potential bias in neuroimaging findings that is generated by some behavioural and brain structure variables. Our method can also be applied to other kind of networks such as protein interaction networks, gene networks or social networks.
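    The ANOVA statistic itself is the paper's contribution; as a generic illustration of nonparametric network comparison in the same spirit, a permutation test on the distance between group-average adjacency matrices might look like the sketch below. The function name and the L1 statistic are assumptions for illustration, not the authors' test.

```python
import random

def network_permutation_test(group_a, group_b, n_perm=2000, seed=0):
    """Permutation test for a difference between two groups of networks
    (adjacency matrices as nested lists). The test statistic is the L1
    distance between the group-average adjacency matrices."""
    def mean_adj(group):
        n = len(group[0])
        return [[sum(g[i][j] for g in group) / len(group) for j in range(n)]
                for i in range(n)]

    def stat(a, b):
        ma, mb = mean_adj(a), mean_adj(b)
        n = len(ma)
        return sum(abs(ma[i][j] - mb[i][j]) for i in range(n) for j in range(n))

    observed = stat(group_a, group_b)
    pooled = list(group_a) + list(group_b)
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # reassign networks to groups at random
        if stat(pooled[:len(group_a)], pooled[len(group_a):]) >= observed:
            exceed += 1
    # Permutation p-value with the standard +1 correction
    return observed, (exceed + 1) / (n_perm + 1)
```

Identifying *which* subnetworks differ, the paper's second problem, would require localizing the statistic to subsets of edges rather than the whole matrix.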

  5. Inferring topological features of proteins from amino acid residue networks

    NASA Astrophysics Data System (ADS)

    Alves, Nelson Augusto; Martinez, Alexandre Souto

    2007-02-01

    Topological properties of native folds are obtained from statistical analysis of 160 low homology proteins covering the four structural classes. This is done analyzing one, two and three-vertex joint distribution of quantities related to the corresponding network of amino acid residues. Emphasis on the amino acid residue hydrophobicity leads to the definition of their center of mass as vertices in this contact network model with interactions represented by edges. The network analysis helps us to interpret experimental results such as hydrophobic scales and fraction of buried accessible surface area in terms of the network connectivity. Moreover, those networks show assortative mixing by degree. To explore the vertex-type dependent correlations, we build a network of hydrophobic and polar vertices. This procedure presents the wiring diagram of the topological structure of globular proteins leading to the following attachment probabilities between hydrophobic-hydrophobic 0.424(5), hydrophobic-polar 0.419(2) and polar-polar 0.157(3) residues.

  6. Coulometric Analysis Experiment for the Undergraduate Chemistry Laboratory

    ERIC Educational Resources Information Center

    Dabke, Rajeev B.; Gebeyehu, Zewdu; Thor, Ryan

    2011-01-01

    An undergraduate experiment on coulometric analysis of four commercial household products is presented. A special type of coulometry cell made of polydimethylsiloxane (PDMS) polymer is utilized. The PDMS cell consists of multiple analyte compartments and an internal network of salt bridges. Experimental procedure for the analysis of the acid in a…

  7. Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler.

    PubMed

    Jin, Ick Hoon; Yuan, Ying; Liang, Faming

    2013-10-01

    Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint because of their intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.

  8. Multiplex network analysis of employee performance and employee social relationships

    NASA Astrophysics Data System (ADS)

    Cai, Meng; Wang, Wei; Cui, Ying; Stanley, H. Eugene

    2018-01-01

    In human resource management, employee performance is strongly affected by both formal and informal employee networks. Most previous research on employee performance has focused on monolayer networks that can represent only single categories of employee social relationships. We study employee performance by taking into account the entire multiplex structure of underlying employee social networks. We collect three datasets consisting of five different employee relationship categories in three firms, and predict employee performance using degree centrality and eigenvector centrality in a superimposed multiplex network (SMN) and an unfolded multiplex network (UMN). We use a quadratic assignment procedure (QAP) analysis and a regression analysis to demonstrate that the different categories of relationship are mutually embedded and that the strength of their impact on employee performance differs. We also use weighted/unweighted SMN/UMN to measure the predictive accuracy of this approach and find that employees with high centrality in a weighted UMN are more likely to perform well. Our results shed new light on how social structures affect employee performance.
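    The superimposed-multiplex centralities used as predictors can be sketched as follows. This is a simplified stand-in under stated assumptions: layers are summed element-wise into the SMN, and eigenvector centrality is computed by plain power iteration on the resulting weighted adjacency matrix.

```python
def superimpose(layers):
    """Superimposed multiplex network (SMN): element-wise sum of the
    layer adjacency matrices."""
    n = len(layers[0])
    return [[sum(layer[i][j] for layer in layers) for j in range(n)]
            for i in range(n)]

def degree_centrality(adj):
    """Weighted degree: row sums of the adjacency matrix."""
    return [sum(row) for row in adj]

def eigenvector_centrality(adj, iters=200):
    """Eigenvector centrality by power iteration, normalized so the
    largest entry is 1."""
    n = len(adj)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(w) or 1.0
        v = [x / norm for x in w]
    return v
```

With two toy layers over three employees, the two nodes that are well connected in both layers receive equal, higher centrality than the third.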

  9. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  10. Different approaches in Partial Least Squares and Artificial Neural Network models applied for the analysis of a ternary mixture of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2014-03-01

    Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in a ternary mixture, namely, Partial Least Squares (PLS) as a traditional chemometric model and Artificial Neural Networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and in a pharmaceutical dosage form through handling of the UV spectral data. A 3-factor, 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as a validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.

  11. Efficacy of Laparoscopic Nissen Fundoplication vs Transoral Incisionless Fundoplication or Proton Pump Inhibitors in Patients With Gastroesophageal Reflux Disease: A Systematic Review and Network Meta-analysis.

    PubMed

    Richter, Joel E; Kumar, Ambuj; Lipka, Seth; Miladinovic, Branko; Velanovich, Vic

    2018-04-01

    The effects of transoral incisionless fundoplication (TIF) and laparoscopic Nissen fundoplication (LNF) have been compared with those of proton pump inhibitors (PPIs) or a sham procedure in patients with gastroesophageal reflux disease (GERD), but there has been no direct comparison of TIF vs LNF. We performed a systematic review and network meta-analysis of randomized controlled trials to compare the relative efficacies of TIF vs LNF in patients with GERD. We searched publication databases and conference abstracts through May 10, 2017 for randomized controlled trials that compared the efficacy of TIF or LNF with that of a sham procedure or PPIs in patients with GERD. We performed a network meta-analysis using Bayesian methods under random-effects multiple treatment comparisons. We assessed ranking probability by surface under the cumulative ranking curve. Our search identified 7 trials comprising 1128 patients. Surface under the cumulative ranking curve ranking indicated that TIF had the highest probability of increasing patients' health-related quality of life (0.96), followed by LNF (0.66), a sham procedure (0.35), and PPIs (0.042). LNF had the highest probability of increasing percent time at pH <4 (0.99), followed by PPIs (0.64), TIF (0.32), and the sham procedure (0.05). LNF also had the highest probability of increasing LES pressure (0.78), followed by TIF (0.72) and PPIs (0.01). Patients who underwent the sham procedure had the highest probability of persistent esophagitis (0.74), followed by those receiving TIF (0.69), LNF (0.38), and PPIs (0.19). Meta-regression showed a shorter follow-up time to be a significant confounder for the outcome of health-related quality of life in studies of TIF. In this systematic review and network meta-analysis of trials of patients with GERD, we found LNF to have the greatest ability to improve physiologic parameters of GERD, including increased LES pressure and decreased percent time at pH <4. Although TIF produced the largest increase in health-related quality of life, this could be due to the shorter follow-up time of patients treated with TIF vs LNF or PPIs. TIF is a minimally invasive endoscopic procedure, yet based on evaluation of benefits vs risks, we do not recommend it as a long-term alternative to PPI or LNF treatment of GERD. Copyright © 2018 AGA Institute. Published by Elsevier Inc. All rights reserved.

  12. Topology design and performance analysis of an integrated communication network

    NASA Technical Reports Server (NTRS)

    Li, V. O. K.; Lam, Y. F.; Hou, T. C.; Yuen, J. H.

    1985-01-01

    A research study on the topology design and performance analysis of the Space Station Information System (SSIS) network is conducted. It begins with a survey of existing research efforts in network topology design. A new approach for topology design is then presented. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. The algorithm for generating subsets is described in detail, and various aspects of the overall design procedure are discussed. Two more efficient versions of this algorithm (applicable in specific situations) are also given. Next, two important aspects of network performance analysis are discussed: network reliability and message delays. A new model is introduced to study the reliability of a network with dependent failures. For message delays, a collection of formulas from existing research results is given to compute or estimate the delays of messages in a communication network without making the independence assumption. The design algorithm, coded in PASCAL, is included as an appendix.
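    The core idea of enumerating candidate designs in increasing order of total cost can be sketched with a heap-based generator. This is a generic reconstruction under the assumption of nonnegative component costs, not the authors' exact algorithm.

```python
import heapq

def subsets_by_cost(costs):
    """Yield index subsets of `costs` in nondecreasing order of total
    cost. The caller checks each candidate design for feasibility, so
    the first acceptable subset found is cost-optimal."""
    order = sorted(range(len(costs)), key=costs.__getitem__)
    c = [costs[i] for i in order]
    yield frozenset()
    if not c:
        return
    heap = [(c[0], (0,))]  # (total cost, sorted tuple of sorted-order indices)
    while heap:
        total, subset = heapq.heappop(heap)
        yield frozenset(order[i] for i in subset)
        last = subset[-1]
        if last + 1 < len(c):
            # Successor 1: extend with the next-cheapest component.
            heapq.heappush(heap, (total + c[last + 1], subset + (last + 1,)))
            # Successor 2: swap the last component for the next-cheapest one.
            heapq.heappush(heap,
                           (total - c[last] + c[last + 1],
                            subset[:-1] + (last + 1,)))
    # Each nonempty subset has exactly one predecessor under these two
    # moves, so every subset is produced exactly once.
```

Because both successor moves can only increase the total (costs are sorted ascending), the heap pops subsets in cost order, matching the paper's "generate in increasing order of total cost, then check acceptability" scheme.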

  15. Bayesian network meta-analysis of root coverage procedures: ranking efficacy and identification of best treatment.

    PubMed

    Buti, Jacopo; Baccini, Michela; Nieri, Michele; La Marca, Michele; Pini-Prato, Giovan P

    2013-04-01

    The aim of this work was to conduct a Bayesian network meta-analysis (NM) of randomized controlled trials (RCTs) to rank the efficacy of coronally advanced flap (CAF)-based root coverage procedures and identify the best technique. A literature search of PubMed, the Cochrane libraries, EMBASE, and hand-searched journals up to June 2012 was conducted to identify RCTs on treatments of Miller Class I and II gingival recessions with at least 6 months of follow-up. The treatment outcomes were recession reduction (RecRed), clinical attachment gain (CALgain), keratinized tissue gain (KTgain), and complete root coverage (CRC). Twenty-nine studies met the inclusion criteria, 20 of which were classified as at high risk of bias. The CAF+connective tissue graft (CTG) combination ranked highest in effectiveness for RecRed (probability of being the best = 40%) and CALgain (Pr = 33%); CAF+enamel matrix derivative (EMD) was slightly better for CRC; CAF+collagen matrix (CM) appeared effective for KTgain (Pr = 69%). Network inconsistency was low for all outcomes except CALgain. CAF+CTG might be considered the gold standard in root coverage procedures. The low amount of inconsistency supports the reliability of the present findings. © 2012 John Wiley & Sons A/S.

  16. Symbolic dynamic filtering and language measure for behavior identification of mobile robots.

    PubMed

    Mallapragada, Goutham; Ray, Asok; Jin, Xin

    2012-06-01

    This paper presents a procedure for behavior identification of mobile robots, which requires limited or no domain knowledge of the underlying process. While the features of robot behavior are extracted by symbolic dynamic filtering of the observed time series, the behavior patterns are classified based on language measure theory. The behavior identification procedure has been experimentally validated on a networked robotic test bed by comparison with commonly used tools, namely, principal component analysis for feature extraction and Bayesian risk analysis for pattern classification.

  17. Internet protocol network mapper

    DOEpatents

    Youd, David W.; Colon III, Domingo R.; Seidl, Edward T.

    2016-02-23

    A network mapper for performing tasks on targets is provided. The mapper generates a map of a network that specifies the overall configuration of the network. The mapper inputs a procedure that defines how the network is to be mapped. The procedure specifies what, when, and in what order the tasks are to be performed. Each task specifies processing that is to be performed for a target to produce results. The procedure may also specify input parameters for a task. The mapper inputs initial targets that specify a range of network addresses to be mapped. The mapper maps the network by, for each target, executing the procedure to perform the tasks on the target. The results of the tasks represent the mapping of the network defined by the initial targets.

  18. Financial Analysis of Hastily-Formed Networks

    DTIC Science & Technology

    2006-09-01

    …as well as support the goals of the new National Strategy by developing new plans and procedures to improve the coordination, communications, and operations between DoD and other entities…

  19. Optimal routing of hazardous substances in time-varying, stochastic transportation networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, A.L.; Miller-Hooks, E.; Mahmassani, H.S.

    This report is concerned with the selection of routes in a network along which to transport hazardous substances, taking into consideration several key factors pertaining to the cost of transport and the risk of population exposure in the event of an accident. Furthermore, the fact that travel time and the risk measures are not constant over time is explicitly recognized in the routing decisions. Existing approaches typically assume static conditions, possibly resulting in inefficient route selection and unnecessary risk exposure. The report describes the application of recent advances in network analysis methodologies to the problem of routing hazardous substances. Several specific problem formulations are presented, reflecting different degrees of risk aversion on the part of the decision-maker, as well as different possible operational scenarios. All procedures explicitly consider travel times and travel costs (including risk measures) to be stochastic time-varying quantities. The procedures include both exact algorithms, which may require extensive computational effort in some situations, as well as more efficient heuristics that may not guarantee a Pareto-optimal solution. All procedures are systematically illustrated for an example application using the Texas highway network, for both normal and incident condition scenarios. The application illustrates the trade-offs between the information obtained in the solution and computational efficiency, and highlights the benefits of incorporating these procedures in a decision-support system for hazardous substance shipment routing decisions.

  20. A network of automatic atmospherics analyzer

    NASA Technical Reports Server (NTRS)

    Schaefer, J.; Volland, H.; Ingmann, P.; Eriksson, A. J.; Heydt, G.

    1980-01-01

    The design and function of an atmospheric analyzer which uses a computer are discussed. Mathematical models which show the method of measurement are presented. The data analysis and recording procedures of the analyzer are discussed.

  1. The use of artificial neural networks in experimental data acquisition and aerodynamic design

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J., Jr.

    1991-01-01

    It is proposed that an artificial neural network be used to construct an intelligent data acquisition system. The artificial neural network (ANN) model has potential for replacing traditional procedures as well as for use in computational fluid dynamics validation. Potential advantages of the ANN model are listed. As a proof of concept, the author modeled a NACA 0012 airfoil at specific conditions, using the neural network simulator NETS, developed by James Baffes of the NASA Johnson Space Center. The neural network predictions were compared to the actual data. It is concluded that artificial neural networks can provide an elegant and valuable class of mathematical tools for data analysis.

  2. Analysis of wireless sensor network topology and estimation of optimal network deployment by deterministic radio channel characterization.

    PubMed

    Aguirre, Erik; Lopez-Iturri, Peio; Azpilicueta, Leire; Astrain, José Javier; Villadangos, Jesús; Falcone, Francisco

    2015-02-05

    One of the main challenges in the implementation and design of context-aware scenarios is the adequate deployment strategy for Wireless Sensor Networks (WSNs), mainly due to the strong dependence of the radiofrequency physical layer on the surrounding media, which can lead to non-optimal network designs. In this work, radioplanning analysis for WSN deployment is proposed, employing a deterministic 3D ray launching technique to provide insight into complex wireless channel behavior in context-aware indoor scenarios. The proposed radioplanning procedure is validated with a testbed implemented with a Mobile Ad Hoc Network WSN following a chain configuration, enabling the analysis and assessment of a rich variety of parameters, such as received signal level, signal quality, and estimation of power consumption. The adoption of deterministic radio channel techniques allows the design and subsequent deployment of WSNs in heterogeneous wireless scenarios with optimized behavior in terms of coverage, capacity, quality of service, and energy consumption.

  3. Extensive cross-talk and global regulators identified from an analysis of the integrated transcriptional and signaling network in Escherichia coli.

    PubMed

    Antiqueira, Lucas; Janga, Sarath Chandra; Costa, Luciano da Fontoura

    2012-11-01

    To understand the regulatory dynamics of transcription factors (TFs) and their interplay with other cellular components, we integrated the transcriptional, protein-protein, and allosteric (or equivalent) interactions that mediate the physiological activity of TFs in Escherichia coli. To study this integrated network we computed a set of network measurements followed by principal component analysis (PCA), investigated the correlations between network structure and dynamics, and carried out a procedure for motif detection. In particular, we show that outliers identified in the integrated network on the basis of their network properties correspond to previously characterized global transcriptional regulators. Furthermore, outliers are highly and widely expressed across conditions, supporting their global role in controlling many genes in the cell. Motifs revealed that TFs not only interact physically with each other but also receive feedback from signals delivered by signaling proteins, supporting the extensive cross-talk between different types of networks. Our analysis can lead to the development of a general framework for detecting and understanding global regulatory factors in regulatory networks and reinforces the importance of integrating multiple types of interactions in underpinning the interrelationships between them.

  4. Robust neural network with applications to credit portfolio data analysis.

    PubMed

    Feng, Yijia; Li, Runze; Sudjianto, Agus; Zhang, Yiyun

    2010-01-01

    In this article, we study nonparametric conditional quantile estimation via a neural network structure. We propose an estimation method that combines quantile regression and neural networks (robust neural network, RNN). It provides good smoothing performance in the presence of outliers and can be used to construct prediction bands. A Majorization-Minimization (MM) algorithm was developed for optimization. A Monte Carlo simulation study was conducted to assess the performance of RNN. Comparison with other nonparametric regression methods (e.g., local linear regression and regression splines) in a real data application demonstrates the advantage of the newly proposed procedure.
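    The check (pinball) loss that quantile regression minimizes can be sketched directly. The grid-search helper below is only for illustration of why minimizing this loss recovers a quantile; the paper instead fits a neural network with an MM algorithm:

    ```python
    import numpy as np

    def pinball_loss(y, c, tau):
        """Check (pinball) loss averaged over residuals r = y - c:
        tau * r where r >= 0, and (tau - 1) * r where r < 0."""
        r = np.asarray(y, dtype=float) - c
        return float(np.mean(np.where(r >= 0, tau * r, (tau - 1) * r)))

    def empirical_quantile(y, tau):
        """Grid-search the constant minimizing the pinball loss; the
        minimizer is an empirical tau-quantile of y."""
        grid = np.sort(np.asarray(y, dtype=float))
        losses = [pinball_loss(y, c, tau) for c in grid]
        return grid[int(np.argmin(losses))]
    ```

    For y = 1, 2, …, 100, the tau = 0.5 minimizer is the median (50 or 51) and the tau = 0.9 minimizer sits at the 90th percentile; replacing the constant with a network's output gives conditional quantile estimation.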

  5. 76 FR 62072 - Center for Devices and Radiological Health; Standard Operating Procedures for Network of Experts...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-06

    ...] Center for Devices and Radiological Health; Standard Operating Procedures for Network of Experts; Request... procedures (SOPs) for a new ``Network of Experts.'' The draft SOPs describe a new process for staff at the... FDA is announcing the availability of two draft SOPs, one entitled, ``Network of Experts--Expert...

  6. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. Aerodynamic Design Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri K.

    2003-01-01

    The design of aerodynamic components of aircraft, such as wings or engines, involves a process of obtaining the optimal component shape that can deliver the desired level of component performance, subject to various constraints (e.g., total weight or cost) that the component must satisfy. Aerodynamic design can thus be formulated as an optimization problem that involves the minimization of an objective function subject to constraints. A new aerodynamic design optimization procedure based on neural networks and response surface methodology (RSM) incorporates the advantages of both traditional RSM and neural networks. The procedure uses a strategy, denoted parameter-based partitioning of the design space, to construct a sequence of response surfaces based on both neural networks and polynomial fits to traverse the design space in search of the optimal solution. Some desirable characteristics of the new design optimization procedure include the ability to handle a variety of design objectives, easily impose constraints, and incorporate design guidelines and rules of thumb. It provides an infrastructure for variable-fidelity analysis and reduces the cost of computation by using less expensive, lower fidelity simulations in the early stages of the design evolution. The initial or starting design can be far from optimal. The procedure is easy and economical to use in a large-dimensional design space and can be used to perform design trade-off studies rapidly. Designs involving multiple disciplines can also be optimized. Some practical applications of the design procedure that have demonstrated its capabilities include the inverse design of an optimal turbine airfoil starting from a generic shape and the redesign of transonic turbines to improve their unsteady aerodynamic characteristics.

  8. X-Graphs: Language and Algorithms for Heterogeneous Graph Streams

    DTIC Science & Technology

    2017-09-01

    INTRODUCTION; METHODS, ASSUMPTIONS, AND PROCEDURES: Software Abstractions for Graph Analytic Applications; High Performance Platforms for Graph Processing…data is stored in a distributed file system…implementations of novel methods for network analysis: several methods for detection of overlapping communities, personalized PageRank, node embeddings into a d…

  9. Exploiting parallel computing with limited program changes using a network of microcomputers

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.

    1985-01-01

    Network computing and multiprocessor computers are two discernible trends in parallel processing. The computational behavior of an iterative distributed process, in which some subtasks are completed later than others because of an imbalance in computational requirements, is of significant interest. The effects of asynchronous processing were studied. A small existing program was converted to perform finite element analysis by distributing substructure analysis over a network of four Apple IIe microcomputers connected to a shared disk, simulating a parallel computer. The substructure analysis uses an iterative, fully stressed, structural resizing procedure. A framework of beams divided into three substructures is used as the finite element model. The effects of asynchronous processing on the convergence of the design variables are determined by not resizing particular substructures on various iterations.

  10. Voting procedures from the perspective of theory of neural networks

    NASA Astrophysics Data System (ADS)

    Suleimenov, Ibragim; Panchenko, Sergey; Gabrielyan, Oleg; Pak, Ivan

    2016-11-01

    It is shown that the voting procedure in any authority can be treated as an analogue of a Hopfield neural network. It was revealed that the weight coefficients of a neural network whose outputs take the discrete values -1 and 1 can be replaced by coefficients from the discrete set (-1, 0, 1). This gives us the opportunity to qualitatively analyze the voting procedure on the basis of limited data about the mutual influence of members. It also shows that the result of a voting procedure is actually determined by the network formed by the voting members.
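    A toy illustration of the idea, assuming a synchronous sign-update rule over ternary influence weights. The function name, the update rule, and the tie-handling convention (a member keeps their vote when influences cancel) are illustrative choices, not taken from the paper:

    ```python
    import numpy as np

    def vote_outcome(W, s0, max_iter=50):
        """Iterate a Hopfield-style update s <- sign(W @ s) to a fixed
        point. Entries of W come from {-1, 0, 1} (influence against,
        none, or for) and states are +/-1 votes."""
        s = np.array(s0, dtype=int)
        for _ in range(max_iter):
            h = W @ s
            new = np.where(h > 0, 1, np.where(h < 0, -1, s))  # keep vote on tie
            if (new == s).all():
                break
            s = new
        return s
    ```

    For three mutually supportive members starting at [1, 1, -1], the iteration settles at unanimous agreement: the dissenting member is pulled to +1 by the other two, mirroring the claim that the outcome is determined by the network of mutual influence.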

  11. Exploring Wound-Healing Genomic Machinery with a Network-Based Approach

    PubMed Central

    Vitali, Francesca; Marini, Simone; Balli, Martina; Grosemans, Hanne; Sampaolesi, Maurilio; Lussier, Yves A.; Cusella De Angelis, Maria Gabriella; Bellazzi, Riccardo

    2017-01-01

    The molecular mechanisms underlying tissue regeneration and wound healing are still poorly understood despite their importance. In this paper we develop a bioinformatics approach, combining biology and network theory, to drive experiments for better understanding the genetic underpinnings of wound healing mechanisms and for selecting potential drug targets. We start by selecting literature-relevant genes in murine wound healing and inferring from them a Protein-Protein Interaction (PPI) network. Then, we analyze the network to rank wound healing-related genes according to their topological properties. Lastly, we perform a procedure for in-silico simulation of a treatment action in a biological pathway. The findings obtained by applying the developed pipeline, including gene expression analysis, confirm that a network-based bioinformatics method is able to prioritize candidate genes for in vitro analysis, thus speeding up the understanding of molecular mechanisms and supporting the discovery of potential drug targets. PMID:28635674

  12. Cost-effectiveness of a European ST-segment elevation myocardial infarction network: results from the Catalan Codi Infart network

    PubMed Central

    Bosch, Julia; Martín-Yuste, Victoria; Rosas, Alba; Faixedas, Maria Teresa; Gómez-Hospital, Joan Antoni; Figueras, Jaume; Curós, Antoni; Cequier, Angel; Goicolea, Javier; Fernández-Ortiz, Antonio; Macaya, Carlos; Tresserras, Ricard; Pellisé, Laura; Sabaté, Manel

    2015-01-01

    Objectives To evaluate the cost-effectiveness of the ST-segment elevation myocardial infarction (STEMI) network of Catalonia (Codi Infart). Design Cost-utility analysis. Setting The analysis was from the Catalonian Autonomous Community in Spain, with a population of about 7.5 million people. Participants Patients with STEMI treated within the autonomous community of Catalonia (Spain) included in the IAM CAT II-IV and Codi Infart registries. Outcome measures Costs included hospitalisation, procedures and additional personnel and were obtained according to the reperfusion strategy. Clinical outcomes were defined as 30-day avoided mortality and quality-adjusted life-years (QALYs), before (N=356) and after network implementation (N=2140). Results A substitution effect and a technology effect were observed; aggregate costs increased by 2.6%. The substitution effect resulted from increased use of primary coronary angioplasty, a relatively expensive procedure and a decrease in fibrinolysis. Primary coronary angioplasty increased from 31% to 89% with the network, and fibrinolysis decreased from 37% to 3%. Rescue coronary angioplasty declined from 11% to 4%, and no reperfusion from 21% to 4%. The technological effect was related to improvements in the percutaneous coronary intervention procedure that increased efficiency, reducing the average length of the hospital stay. Mean costs per patient decreased from €8306 to €7874 for patients with primary coronary angioplasty. Clinical outcomes in patients treated with primary coronary angioplasty did not change significantly, although 30-day mortality decreased from 7.5% to 5.6%. The incremental cost-effectiveness ratio resulted in an extra cost of €4355 per life saved (30-day mortality) and €495 per QALY. Below a cost threshold of €30 000, results were sensitive to variations in costs and outcomes. Conclusions The Catalan STEMI network (Codi Infart) is cost-efficient. 
Further studies are needed in different geopolitical scenarios. PMID:26656019

  13. Response of power systems to the San Fernando Valley earthquake of 9 February 1971. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiff, A.J.; Yao, J.T.P.

    1972-01-01

    The impact of the San Fernando Valley earthquake on electric power systems is discussed. Particular attention is focused on the following three areas: (1) the effects of an earthquake on the power network in the Western States, (2) the failure of subsystems and components of the power system, and (3) the loss of power to hospitals. The report includes sections on the description and functions of major components of a power network, existing procedures to protect the network, safety devices within the system which influence the network, a summary of the effects of the San Fernando Valley earthquake on the Western States Power Network, and present efforts to reduce the network's vulnerability to faults. Also included in the report are a review of design procedures and practices prior to the San Fernando Valley earthquake and descriptions of types of damage to electrical equipment, dynamic analysis of equipment failures, equipment surviving the San Fernando Valley earthquake, and new seismic design specifications. In addition, some observations and insights gained during the study which are not directly related to power systems are discussed.

  14. The QAP weighted network analysis method and its application in international services trade

    NASA Astrophysics Data System (ADS)

    Xu, Helian; Cheng, Long

    2016-04-01

    Based on QAP (Quadratic Assignment Procedure) correlation and complex network theory, this paper puts forward a new method named the QAP Weighted Network Analysis Method. The core idea of the method is to analyze influences among relations in a social or economic group by building a QAP weighted network of networks of relations. In the QAP weighted network, a node depicts a relation, and an undirected edge exists between a pair of nodes if there is significant correlation between the corresponding relations. As an application of the QAP weighted network, we study international services trade, with nodes depicting 10 kinds of services trade relations. Analysis of international services trade by the QAP weighted network, using distance indicators, a hierarchy tree, and a minimum spanning tree, leads to the following conclusions. Firstly, significant correlation exists among all services trade, and the development of any one services trade will stimulate the other nine. Secondly, as economic globalization deepens, correlations in all services trade have been strengthened continually, and clustering effects exist in those services trade. Thirdly, transportation services trade, computer and information services trade, and communication services trade have the most influence and are at the core of all services trade.
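    The QAP correlation underlying the method can be sketched as a permutation test: correlate the off-diagonal entries of two relation matrices, then rebuild the correlation after jointly permuting the rows and columns of one matrix. The matrix contents, permutation count, and two-sided test here are illustrative, not the paper's data:

    ```python
    import numpy as np

    def qap_correlation(A, B, n_perm=1000, seed=0):
        """QAP test between two relation matrices A and B: the observed
        correlation of off-diagonal entries is compared against a null
        distribution from joint row/column permutations of B."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        mask = ~np.eye(n, dtype=bool)           # ignore self-relations
        obs = np.corrcoef(A[mask], B[mask])[0, 1]
        extreme = 0
        for _ in range(n_perm):
            p = rng.permutation(n)
            Bp = B[np.ix_(p, p)]                # permute rows and columns together
            if abs(np.corrcoef(A[mask], Bp[mask])[0, 1]) >= abs(obs):
                extreme += 1
        return obs, extreme / n_perm
    ```

    Running this test over every pair of the 10 services trade relations, and keeping an edge wherever the p-value is small, yields exactly the kind of QAP weighted network the paper analyzes.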

  15. Data Quality Assurance and Control for AmeriFlux Network at CDIAC, ORNL

    NASA Astrophysics Data System (ADS)

    Shem, W.; Boden, T.; Krassovski, M.; Yang, B.

    2014-12-01

    The Carbon Dioxide Information Analysis Center (CDIAC) at Oak Ridge National Laboratory (ORNL) serves as the long-term data repository for the AmeriFlux network. Datasets currently available include hourly or half-hourly meteorological and flux observations, biological measurement records, and synthesis data products. Currently there is a lack of standardized nomenclature and specifically designed procedures for data quality assurance/control in processing and handling micrometeorological and ecological data at individual flux sites. CDIAC has bridged this gap by providing efficient and accurate procedures for data quality control and standardization of the results for easier assimilation by the models used in climate science. In this presentation we highlight the procedures we have put in place to scrutinize continuous flux and meteorological data within the AmeriFlux network. We itemize some basic data quality issues that we have observed over the past years and include some examples of typical data quality issues. Such issues, e.g., incorrect time-stamping, poor calibration or maintenance of instruments, and missing or incomplete metadata, are commonly overlooked by PIs yet invariably impact the time-series observations.
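    Checks of the kind itemized above can be sketched as simple flagging rules over a half-hourly record. These thresholds and checks are generic examples of the listed issue classes (time-stamping, out-of-range values, missing values), not CDIAC's actual procedures:

    ```python
    import numpy as np

    def qc_flags(timestamps, values, vmin, vmax, step=1800):
        """Flag common data-quality issues in a half-hourly record:
        irregular time stamps (gaps or duplicates), out-of-range
        values, and missing values."""
        ts = np.asarray(timestamps)
        vals = np.asarray(values, dtype=float)
        bad_time = np.diff(ts) != step              # gap or duplicate time stamp
        out_of_range = (vals < vmin) | (vals > vmax)
        missing = np.isnan(vals)
        return bad_time, out_of_range, missing
    ```

    In practice such flags would be attached to the record rather than used to delete data, so that site PIs can diagnose calibration or logging problems.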

  16. Patent Citation Networks

    NASA Astrophysics Data System (ADS)

    Strandburg, Katherine; Tobochnik, Jan; Csardi, Gabor

    2005-03-01

    Patent applications contain citations which are similar to but different from those found in published scientific papers. In particular, patent citations are governed by legal rules. Moreover, a large fraction of citations are made not by the patent inventor, but by a patent examiner during the application procedure. Using a patent database, which contains the patent citations, assignees and inventors, we have applied network analysis and built network models. Our work includes determining the structure of the patent citation network and comparing it to existing results for scientific citation networks; identifying differences between various technological fields and comparing the observed differences to expectations based on anecdotal evidence about patenting practice; and developing models to explain the results.

  17. Evaluation of prediction capability, robustness, and sensitivity in non-linear landslide susceptibility models, Guantánamo, Cuba

    NASA Astrophysics Data System (ADS)

    Melchiorre, C.; Castellanos Abella, E. A.; van Westen, C. J.; Matteucci, M.

    2011-04-01

    This paper describes a procedure for landslide susceptibility assessment based on artificial neural networks, and focuses on the estimation of the prediction capability, robustness, and sensitivity of susceptibility models. The study is carried out in the Guantánamo Province of Cuba, where 186 landslides were mapped using photo-interpretation. Twelve conditioning factors were mapped, including geomorphology, geology, soils, land use, slope angle, slope direction, internal relief, drainage density, distance from roads and faults, rainfall intensity, and ground peak acceleration. A methodology was used that subdivided the database into three subsets. A training set was used for updating the weights. A validation set was used to stop the training procedure when the network started losing generalization capability, and a test set was used to calculate the performance of the network. A 10-fold cross-validation was performed in order to show that the results are repeatable. The prediction capability, the robustness analysis, and the sensitivity analysis were tested on 10 mutually exclusive datasets. The results show that by means of artificial neural networks it is possible to obtain models with high prediction capability and high robustness, and that an exploration of the effect of the individual variables is possible, even if they are considered as a black-box model.

  18. Performance Analysis of Different Backoff Algorithms for WBAN-Based Emerging Sensor Networks

    PubMed Central

    Khan, Pervez; Ullah, Niamat; Ali, Farman; Ullah, Sana; Hong, Youn-Sik; Lee, Ki-Young; Kim, Hoon

    2017-01-01

    The Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) procedure of the IEEE 802.15.6 Medium Access Control (MAC) protocol for the Wireless Body Area Network (WBAN) uses an Alternative Binary Exponential Backoff (ABEB) procedure. The backoff algorithm plays an important role in avoiding collisions in wireless networks. The Binary Exponential Backoff (BEB) algorithm used in different standards does not obtain optimum performance due to the enormous Contention Window (CW) gaps induced by packet collisions. Therefore, the IEEE 802.15.6 CSMA/CA introduced the ABEB procedure to avoid large CW gaps upon each collision. However, the ABEB algorithm may lead to a high collision rate (as the CW size is incremented on every alternative collision) and poor utilization of the channel due to the gap between subsequent CWs. To minimize the gap between subsequent CW sizes, we adopted the Prioritized Fibonacci Backoff (PFB) procedure. This procedure leads to a smooth and gradual increase in the CW size after each collision, which eventually decreases the waiting time, so the contending node can access the channel promptly with little delay; ABEB, by contrast, leads to irregular and fluctuating CW values, which eventually increase collisions and the waiting time before a re-transmission attempt. We analytically approach this problem by employing a Markov chain to design the PFB scheme for the CSMA/CA procedure of the IEEE 802.15.6 standard. The performance of the PFB algorithm is compared against the ABEB function of WBAN CSMA/CA. The results show that the PFB procedure adopted for IEEE 802.15.6 CSMA/CA outperforms the ABEB procedure. PMID:28257112
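    A simplified sketch contrasting the two contention-window growth rules discussed above. The doubling-on-alternate-collisions rule for ABEB and the Fibonacci recurrence for PFB are idealized here; the standard's per-priority CW bounds and the paper's Markov-chain analysis are omitted:

    ```python
    def backoff_windows(cw_min, cw_max, n_collisions, scheme="abeb"):
        """Contention-window size after each successive collision.
        'abeb': double CW only on every second (alternative) collision;
        'pfb': Fibonacci-style growth, next CW = sum of previous two,
        giving a smoother, more gradual increase."""
        cws = []
        if scheme == "abeb":
            cw = cw_min
            for k in range(1, n_collisions + 1):
                if k % 2 == 0:                      # double on even-numbered collisions
                    cw = min(cw * 2, cw_max)
                cws.append(cw)
        else:  # pfb
            prev, cur = cw_min, cw_min
            for _ in range(n_collisions):
                cws.append(cur)
                prev, cur = cur, min(prev + cur, cw_max)
        return cws
    ```

    With cw_min = 8 and cw_max = 64, six collisions give 8, 16, 16, 32, 32, 64 under the ABEB rule but 8, 16, 24, 40, 64, 64 under the Fibonacci rule, showing the smaller jumps between subsequent CW sizes that the paper attributes to PFB.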

  19. Knowledge engineering for temporal dependency networks as operations procedures. [in space communication

    NASA Technical Reports Server (NTRS)

    Fayyad, Kristina E.; Hill, Randall W., Jr.; Wyatt, E. J.

    1993-01-01

    This paper presents a case study of the knowledge engineering process employed to support the Link Monitor and Control Operator Assistant (LMCOA). The LMCOA is a prototype system which automates the configuration, calibration, test, and operation (referred to as precalibration) of the communications, data processing, metric data, antenna, and other equipment used to support space-ground communications with deep space spacecraft in NASA's Deep Space Network (DSN). The primary knowledge base in the LMCOA is the Temporal Dependency Network (TDN), a directed graph which provides a procedural representation of the precalibration operation. The TDN incorporates precedence, temporal, and state constraints and uses several supporting knowledge bases and data bases. The paper provides a brief background on the DSN, and describes the evolution of the TDN and supporting knowledge bases, the process used for knowledge engineering, and an analysis of the successes and problems of the knowledge engineering effort.

  20. Measuring Large-Scale Social Networks with High Resolution

    PubMed Central

    Stopczynski, Arkadiusz; Sekara, Vedran; Sapiezynski, Piotr; Cuttone, Andrea; Madsen, Mette My; Larsen, Jakob Eg; Lehmann, Sune

    2014-01-01

    This paper describes the deployment of a large-scale study designed to measure human interactions across a variety of communication channels, with high temporal resolution and spanning multiple years: the Copenhagen Networks Study. Specifically, we collect data on face-to-face interactions, telecommunication, social networks, location, and background information (personality, demographics, health, politics) for a densely connected population of 1 000 individuals, using state-of-the-art smartphones as social sensors. Here we provide an overview of the related work and describe the motivation and research agenda driving the study. Additionally, the paper details the data types measured and the technical infrastructure, in terms of both backend and phone software, as well as an outline of the deployment procedures. We document the participant privacy procedures and their underlying principles. The paper concludes with early results from data analysis, illustrating the importance of a multi-channel, high-resolution approach to data collection. PMID:24770359

  1. Recommendations for a Standardized Program Management Office (PMO) Time Compliance Network Order (TCNO) Patching Process

    DTIC Science & Technology

    2007-03-01

    self-reporting. The interview process and resulting data analysis may be impacted by research bias since both were conducted by the same individual...the processes you employ? Answer: 97 MAJCOM CONTACTS RESPONSIBLE FOR GENERAL TCNO PROCEDURES SECTION 1: INTERVIEWEE INFO Question 1: Please...BASE-LEVEL NCC CONTACTS RESPONSIBLE FOR GENERAL TCNO PROCEDURES SECTION 1: INTERVIEWEE INFO Question 1: Please provide your general job description

  2. Network-Based Management Procedures.

    ERIC Educational Resources Information Center

    Buckner, Allen L.

    Network-based management procedures serve as valuable aids in organizational management, achievement of objectives, problem solving, and decisionmaking. Network techniques especially applicable to educational management systems are the program evaluation and review technique (PERT) and the critical path method (CPM). Other network charting…

  3. The Father Christmas worm

    NASA Technical Reports Server (NTRS)

    Green, James L.; Sisson, Patricia L.

    1989-01-01

    Given here is an overview analysis of the Father Christmas Worm, a computer worm that was released onto the DECnet Internet three days before Christmas 1988. The purpose behind the worm was to send an electronic mail message to all users on the computer system running the worm. The message was a Christmas greeting and was signed 'Father Christmas'. From the investigation, it was determined that the worm was released from a computer (node number 20597::) at a university in Switzerland. The worm was designed to travel quickly. Estimates are that it was copied to over 6,000 computer nodes. However, it was believed to have executed on only a fraction of those computers. Within ten minutes after it was released, the worm was detected at the Space Physics Analysis Network (SPAN), NASA's largest space and Earth science network. Once the source program was captured, a procedural cure, using the existing functionality of the computer operating systems, was quickly devised and distributed. A combination of existing computer security measures, the quick and accurate procedures devised to stop copies of the worm from executing, and the network itself, were used to rapidly provide the cure. These were the main reasons why the worm executed on such a small percentage of nodes. This overview of the analysis of the events concerning the worm is based on an investigation made by the SPAN Security Team and provides some insight into future security measures that will be taken to handle computer worms and viruses that may hit similar networks.

  4. Stochastic simulation and analysis of biomolecular reaction networks

    PubMed Central

    Frazier, John M; Chushak, Yaroslav; Foy, Brent

    2009-01-01

    Background In recent years, several stochastic simulation algorithms have been developed to generate Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks. However, the effects of various stochastic simulation and data analysis conditions on the observed dynamics of complex biomolecular reaction networks have not received much attention. In order to investigate these issues, we employed a software package developed in our group, called Biomolecular Network Simulator (BNS), to simulate and analyze the behavior of such systems. The behavior of a hypothetical two-gene in vitro transcription-translation reaction network is investigated using the Gillespie exact stochastic algorithm to illustrate some of the factors that influence the analysis and interpretation of these data. Results Specific issues affecting the analysis and interpretation of simulation data are investigated, including: (1) the effect of time interval on data presentation and time-weighted averaging of molecule numbers, (2) effect of time averaging interval on reaction rate analysis, (3) effect of number of simulations on precision of model predictions, and (4) implications of stochastic simulations on optimization procedures. Conclusion The two main factors affecting the analysis of stochastic simulations are: (1) the selection of time intervals to compute or average state variables and (2) the number of simulations generated to evaluate the system behavior. PMID:19534796
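
    The Gillespie exact algorithm named above can be sketched in a few lines. The birth-death network and all rate constants below are illustrative stand-ins, not the two-gene transcription-translation system simulated by BNS; the second function addresses issue (1), time-weighted averaging of molecule numbers.

```python
import math
import random

def gillespie_birth_death(k_birth=5.0, k_death=0.1, x0=50, t_end=50.0, seed=1):
    """Gillespie exact SSA for 0 -> X (rate k_birth), X -> 0 (rate k_death*x)."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    traj = [(t, x)]
    while t < t_end:
        a_birth, a_death = k_birth, k_death * x
        a_total = a_birth + a_death
        # Exponential waiting time to the next reaction event.
        t += -math.log(1.0 - rng.random()) / a_total
        # Pick the reaction with probability proportional to its propensity.
        x += 1 if rng.random() * a_total < a_birth else -1
        traj.append((t, x))
    return traj

def time_weighted_mean(traj, t_end):
    """Average molecule number weighted by residence time in each state;
    naively averaging the unevenly spaced event-time samples is biased."""
    total = 0.0
    for (t0, x), (t1, _) in zip(traj, traj[1:]):
        total += x * (min(t1, t_end) - t0)
    return total / t_end
```

    With these parameters the process fluctuates around its steady state k_birth/k_death = 50, so the time-weighted mean lands near 50.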

  5. Analysis of the Factors Affecting Men's Attitudes Toward Cosmetic Surgery: Body Image, Media Exposure, Social Network Use, Masculine Gender Role Stress and Religious Attitudes.

    PubMed

    Abbas, Ozan Luay; Karadavut, Ufuk

    2017-12-01

    Cosmetic surgery is no longer just for females. More men are opting for cosmetic procedures, with marked increases seen in both minimally invasive and surgical options over the last decade. Compared to females, relatively little work has specifically focused on factors predicting males' attitudes toward cosmetic surgery. We therefore evaluated a number of variables that, according to evidence reported in the literature, may predict some facet of men's attitudes toward cosmetic surgery. A total of 151 male patients who applied for a surgical or minimally invasive cosmetic surgery procedure (patient group) and 151 healthy male volunteers who did not desire any type of cosmetic procedure (control group) were asked to fill out questionnaires about measures of body image, media exposure (television and magazine), social network site use, masculine gender role stress and religious attitudes. Our findings showed that lower ratings of body image satisfaction, increased time spent watching television, more frequent social network site use and higher degrees of masculine gender role stress were all significant predictors of attitudes toward cosmetic surgery among males. The current study confirmed the importance of body image dissatisfaction as a predictor of the choice to undergo a cosmetic procedure. More importantly, a new predictor of cosmetic procedure attitudes was identified, namely masculine gender role stress. Finally, we demonstrated the effects of television exposure and social network site use in promoting acceptance of surgical and nonsurgical routes to appearance enhancement.

  6. Fishing in the Amazonian forest: a gendered social network puzzle

    PubMed Central

    Díaz-Reviriego, I.; Fernández-Llamazares, Á.; Howard, P.L; Molina, JL; Reyes-García, V

    2016-01-01

    We employ social network analysis (SNA) to describe the structure of subsistence fishing social networks and to explore the relation between fishers’ emic perceptions of fishing expertise and their position in networks. Participant observation and quantitative methods were employed among the Tsimane’ Amerindians of the Bolivian Amazonia. A multiple regression quadratic assignment procedure was used to explore the extent to which gender, kinship, and age homophilies influence the formation of fishing networks. Logistic regressions were performed to determine the association between the fishers’ expertise, their socio-demographic identities, and network centrality. We found that fishing networks are gendered and that there is a positive association between fishers’ expertise and centrality in networks, an association that is more striking for women than for men. We propose that a social network perspective broadens understanding of the relations that shape the intracultural distribution of fishing expertise as well as natural resource access and use. PMID:28479670

  7. Fishing in the Amazonian forest: a gendered social network puzzle.

    PubMed

    Díaz-Reviriego, I; Fernández-Llamazares, Á; Howard, P L; Molina, J L; Reyes-García, V

    2017-01-01

    We employ social network analysis (SNA) to describe the structure of subsistence fishing social networks and to explore the relation between fishers' emic perceptions of fishing expertise and their position in networks. Participant observation and quantitative methods were employed among the Tsimane' Amerindians of the Bolivian Amazonia. A multiple regression quadratic assignment procedure was used to explore the extent to which gender, kinship, and age homophilies influence the formation of fishing networks. Logistic regressions were performed to determine the association between the fishers' expertise, their socio-demographic identities, and network centrality. We found that fishing networks are gendered and that there is a positive association between fishers' expertise and centrality in networks, an association that is more striking for women than for men. We propose that a social network perspective broadens understanding of the relations that shape the intracultural distribution of fishing expertise as well as natural resource access and use.

  8. Smoking Behavior and Friendship Formation: The Importance of Time Heterogeneity in Studying Social Network Dynamics

    DTIC Science & Technology

    2010-01-01

    Smoking Behavior and Friendship Formation: The Importance of Time Heterogeneity in Studying Social Network Dynamics Joshua A. Lospinoso Department of...djsatchell@gmail.com Abstract—This study illustrates the importance of assessing and accounting for time heterogeneity in longitudinal social network analysis. We apply the time heterogeneity model selection procedure of [1] to a dataset collected on social tie formation for university freshmen in the

  9. Randomizing bipartite networks: the case of the World Trade Web.

    PubMed

    Saracco, Fabio; Di Clemente, Riccardo; Gabrielli, Andrea; Squartini, Tiziano

    2015-06-01

    Within the last fifteen years, network theory has been successfully applied both to natural sciences and to socioeconomic disciplines. In particular, bipartite networks have been recognized to provide a particularly insightful representation of many systems, ranging from mutualistic networks in ecology to trade networks in economy, whence the need of a pattern detection-oriented analysis in order to identify statistically-significant structural properties. Such an analysis rests upon the definition of suitable null models, i.e. upon the choice of the portion of network structure to be preserved while randomizing everything else. However, quite surprisingly, little work has been done so far to define null models for real bipartite networks. The aim of the present work is to fill this gap, extending a recently-proposed method to randomize monopartite networks to bipartite networks. While the proposed formalism is perfectly general, we apply our method to the binary, undirected, bipartite representation of the World Trade Web, comparing the observed values of a number of structural quantities of interest with the expected ones, calculated via our randomization procedure. Interestingly, the behavior of the World Trade Web in this new representation is strongly different from the monopartite analogue, showing highly non-trivial patterns of self-organization.
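
    A minimal sketch of a bipartite null model: the paper above extends a maximum-entropy (canonical) randomization, whereas the double-edge swap below is the simpler microcanonical alternative that preserves both degree sequences exactly. Node names are illustrative.

```python
import random

def bipartite_edge_swap(edges, n_swaps=1000, seed=0):
    """Degree-preserving randomization of a bipartite edge list by
    double-edge swaps: (u1,v1),(u2,v2) -> (u1,v2),(u2,v1).
    This is a microcanonical null model (exact degrees); the referenced
    method instead samples a maximum-entropy canonical ensemble."""
    rng = random.Random(seed)
    edges = list(edges)
    edge_set = set(edges)
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(edges)), 2)
        (u1, v1), (u2, v2) = edges[i], edges[j]
        if (u1, v2) in edge_set or (u2, v1) in edge_set:
            continue  # swap would create a duplicate edge; skip it
        edge_set -= {(u1, v1), (u2, v2)}
        edge_set |= {(u1, v2), (u2, v1)}
        edges[i], edges[j] = (u1, v2), (u2, v1)
    return edges
```

    Observed structural quantities (e.g. country-product nestedness in the World Trade Web) are then compared against their distribution over many such randomized copies.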

  10. Data Mining of Network Logs

    NASA Technical Reports Server (NTRS)

    Collazo, Carlimar

    2011-01-01

    The statement of purpose is to analyze network monitoring logs to support the computer incident response team. Specifically: gain a clear understanding of the Uniform Resource Locator (URL) and its structure, and provide a way to break down a URL by protocol, host name, domain name, path, and other attributes. Finally, provide a method to perform data reduction by identifying the different types of advertisements shown on a webpage for incident data analysis. The procedure used for analysis and data reduction is a computer program that analyzes the URL and distinguishes advertisement links from actual content links.
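
    The URL breakdown described above can be sketched with the standard library. The two-label domain heuristic is a simplification (production code would consult the Public Suffix List), and the ad-host prefix list is an illustrative assumption, not taken from the report.

```python
from urllib.parse import urlparse

def breakdown(url):
    """Split a URL into the attributes named above: protocol, host name,
    domain name, path, and query string."""
    p = urlparse(url)
    host = p.hostname or ""
    # Simplification: treat the last two labels as the registered domain.
    domain = ".".join(host.split(".")[-2:]) if "." in host else host
    return {"protocol": p.scheme, "host": host, "domain": domain,
            "path": p.path, "query": p.query}

def looks_like_ad(url, ad_prefixes=("ads.", "doubleclick.", "adserver.")):
    """Crude data-reduction filter: flag links served from ad-style hosts
    (the prefix list here is hypothetical)."""
    return breakdown(url)["host"].startswith(ad_prefixes)
```

    For example, `breakdown("https://ads.example.com/banner/img.gif?id=42")` separates scheme, host, domain, path, and query, and `looks_like_ad` then flags the link for reduction.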

  11. [The structural functional analysis of functioning of day-hospitals of the Russian Federation].

    PubMed

    2012-01-01

    The article deals with the results of structural functional analysis of functioning of day-hospitals in the Russian Federation. The dynamic analysis is presented concerning day-hospitals' network, capacity; financial support, beds stock structure, treated patients structure, volumes of diagnostic tests and curative procedures. The need in developing of population medical care in conditions of day-hospitals is demonstrated.

  12. Using Social Network Analysis to Better Understand Compulsive Exercise Behavior Among a Sample of Sorority Members.

    PubMed

    Patterson, Megan S; Goodson, Patricia

    2017-05-01

    Compulsive exercise, a form of unhealthy exercise often associated with prioritizing exercise and feeling guilty when exercise is missed, is a common precursor to and symptom of eating disorders. College-aged women are at high risk of exercising compulsively compared with other groups. Social network analysis (SNA) is a theoretical perspective and methodology allowing researchers to observe the effects of relational dynamics on the behaviors of people. SNA was used to assess the relationship between compulsive exercise and body dissatisfaction, physical activity, and network variables. Descriptive statistics were conducted using SPSS, and quadratic assignment procedure (QAP) analyses were conducted using UCINET. QAP regression analysis revealed a statistically significant model (R² = .375, P < .0001) predicting compulsive exercise behavior. Physical activity, body dissatisfaction, and network variables were statistically significant predictor variables in the QAP regression model. In our sample, women who are connected to "important" or "powerful" people in their network are likely to have higher compulsive exercise scores. This result provides healthcare practitioners key target points for intervention within similar groups of women. For scholars researching eating disorders and associated behaviors, this study supports looking into group dynamics and network structure in conjunction with body dissatisfaction and exercise frequency.
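
    UCINET's QAP routines are not reproduced here, but the core idea, correlating two network matrices and building the null distribution by jointly permuting the rows and columns of one matrix with the same node relabelling, can be sketched (the matrices below are toy data):

```python
import random

def qap_correlation_test(X, Y, n_perm=500, seed=0):
    """Quadratic assignment procedure: Pearson correlation over the
    off-diagonal cells of two n x n network matrices, with a permutation
    null distribution that relabels the nodes of Y."""
    rng = random.Random(seed)
    n = len(X)
    def cells(M, perm=None):
        p = perm or list(range(n))
        return [M[p[i]][p[j]] for i in range(n) for j in range(n) if i != j]
    def corr(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        return cov / (va * vb) ** 0.5
    obs = corr(cells(X), cells(Y))
    hits = 0
    for _ in range(n_perm):
        perm = list(range(n))
        rng.shuffle(perm)  # one relabelling applied to rows AND columns
        if abs(corr(cells(X), cells(Y, perm))) >= abs(obs):
            hits += 1
    return obs, (hits + 1) / (n_perm + 1)  # permutation p-value
```

    Permuting rows and columns together preserves the dyadic dependence structure within each matrix, which is why QAP is preferred over ordinary significance tests for network data.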

  13. Data Extraction and Management in Networks of Observational Health Care Databases for Scientific Research: A Comparison of EU-ADR, OMOP, Mini-Sentinel and MATRICE Strategies

    PubMed Central

    Gini, Rosa; Schuemie, Martijn; Brown, Jeffrey; Ryan, Patrick; Vacchi, Edoardo; Coppola, Massimo; Cazzola, Walter; Coloma, Preciosa; Berni, Roberto; Diallo, Gayo; Oliveira, José Luis; Avillach, Paul; Trifirò, Gianluca; Rijnbeek, Peter; Bellentani, Mariadonata; van Der Lei, Johan; Klazinga, Niek; Sturkenboom, Miriam

    2016-01-01

    Introduction: We see increased use of existing observational data in order to achieve fast and transparent production of empirical evidence in health care research. Multiple databases are often used to increase power, to assess rare exposures or outcomes, or to study diverse populations. For privacy and sociological reasons, original data on individual subjects can’t be shared, requiring a distributed network approach where data processing is performed prior to data sharing. Case Descriptions and Variation Among Sites: We created a conceptual framework distinguishing three steps in local data processing: (1) data reorganization into a data structure common across the network; (2) derivation of study variables not present in original data; and (3) application of study design to transform longitudinal data into aggregated data sets for statistical analysis. We applied this framework to four case studies to identify similarities and differences in the United States and Europe: Exploring and Understanding Adverse Drug Reactions by Integrative Mining of Clinical Records and Biomedical Knowledge (EU-ADR), Observational Medical Outcomes Partnership (OMOP), the Food and Drug Administration’s (FDA’s) Mini-Sentinel, and the Italian network—the Integration of Content Management Information on the Territory of Patients with Complex Diseases or with Chronic Conditions (MATRICE). Findings: National networks (OMOP, Mini-Sentinel, MATRICE) all adopted shared procedures for local data reorganization. The multinational EU-ADR network needed locally defined procedures to reorganize its heterogeneous data into a common structure. Derivation of new data elements was centrally defined in all networks but the procedure was not shared in EU-ADR. Application of study design was a common and shared procedure in all the case studies. Computer procedures were embodied in different programming languages, including SAS, R, SQL, Java, and C++. 
Conclusion: Using our conceptual framework, we found several areas that would benefit from research to identify optimal standards for production of empirical knowledge from existing databases. PMID:27014709

  14. EnzDP: improved enzyme annotation for metabolic network reconstruction based on domain composition profiles.

    PubMed

    Nguyen, Nam-Ninh; Srihari, Sriganesh; Leong, Hon Wai; Chong, Ket-Fah

    2015-10-01

    Determining the entire complement of enzymes and their enzymatic functions is a fundamental step for reconstructing the metabolic network of cells. High-quality enzyme annotation helps in enhancing metabolic networks reconstructed from the genome, especially by reducing gaps and increasing the enzyme coverage. Currently, structure-based and network-based approaches can only cover a limited number of enzyme families, and the accuracy of homology-based approaches can be further improved. The bottom-up homology-based approach improves coverage by rebuilding Hidden Markov Model (HMM) profiles for all known enzymes. However, its clustering procedure relies heavily on the BLAST similarity score, ignoring protein domains/patterns, and is sensitive to changes in cut-off thresholds. Here, we use functional domain architecture to score the association between domain families and enzyme families (Domain-Enzyme Association Scoring, DEAS). The DEAS score is used to calculate the similarity between proteins, which is then used in the clustering procedure instead of the sequence similarity score. We improve the enzyme annotation protocol using a stringent classification procedure, by choosing optimal threshold settings, and by checking for active sites. Our analysis shows that our stringent protocol EnzDP can cover up to 90% of enzyme families available in Swiss-Prot. It achieves a high accuracy of 94.5% based on five-fold cross-validation. EnzDP outperforms existing methods across several testing scenarios. Thus, EnzDP serves as a reliable automated tool for enzyme annotation and metabolic network reconstruction. Available at: www.comp.nus.edu.sg/~nguyennn/EnzDP .
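
    The abstract does not give the DEAS formula, but the key design choice, comparing proteins by domain composition rather than by BLAST score, can be illustrated with a hypothetical stand-in as simple as a set overlap:

```python
def domain_similarity(domains_a, domains_b):
    """Hypothetical stand-in for a DEAS-style protein similarity: compare
    two proteins by their domain composition (Jaccard index over
    domain-family identifiers) instead of a BLAST sequence score."""
    a, b = set(domains_a), set(domains_b)
    return len(a & b) / len(a | b) if a or b else 0.0
```

    Clustering on such a score groups proteins sharing domain architecture even when overall sequence similarity is low, which is the failure mode of BLAST-based clustering the abstract describes.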

  15. Estimation procedure of the efficiency of the heat network segment

    NASA Astrophysics Data System (ADS)

    Polivoda, F. A.; Sokolovskii, R. I.; Vladimirov, M. A.; Shcherbakov, V. P.; Shatrov, L. A.

    2017-07-01

    An extensive city heat network contains many segments, each operating with a different heat-transfer efficiency. This work proposes an original technical approach: the energy-efficiency function of a heat network segment is evaluated by interpreting two hyperbolic functions in the form of a transcendental equation. In essence, the problem studied is how the efficiency of the heat network changes with ambient temperature. Using methods of functional analysis, criteria dependences were derived for evaluating the efficiency of a given segment of the heat network and for finding the parameters that give the most nearly optimal control of heat supply to remote users. In general, the efficiency function of the heat network segment is interpreted as a multidimensional surface, which allows it to be illustrated graphically. It was shown that the inverse problem can also be solved: the required heating-agent flow rate and temperature may be found from the specified segment efficiency and ambient temperature, and requirements on heat insulation and pipe diameters may be formulated as well. The calculation results were obtained in a strict analytical form, which makes it possible to examine the derived functional dependences for extremums (maximums) under the given external parameters. It is concluded that this calculation procedure is expedient in two practically important cases: for an existing (built) network, where only the heating-agent flow rate and pipe temperatures can be changed, and for a network under design, where the material parameters of the network can still be changed. The procedure allows refinement of pipe diameters and lengths, types of insulation, etc. Pipe length may be treated as an independent parameter for the calculations; it is optimized according to other, economic, criteria specific to the project.
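
    The abstract does not give the paper's transcendental equation, so the sketch below is illustrative only: it uses the classic fin-type efficiency eta(L) = tanh(mL)/(mL), a different but related hyperbolic efficiency law, and inverts it for segment length by bisection, the kind of numerical solve such equations require.

```python
import math

def length_for_efficiency(eta_target, m=0.05, lo=1e-6, hi=1000.0):
    """Find the segment length L with eta(L) = tanh(m*L)/(m*L) = eta_target.
    eta decreases monotonically in L, so bisection converges; m and the
    efficiency law itself are illustrative assumptions, not the paper's."""
    def f(L):
        return math.tanh(m * L) / (m * L) - eta_target
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:      # efficiency still above target: go longer
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```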

  16. Generalised power graph compression reveals dominant relationship patterns in complex networks

    PubMed Central

    Ahnert, Sebastian E.

    2014-01-01

    We introduce a framework for the discovery of dominant relationship patterns in complex networks, by compressing the networks into power graphs with overlapping power nodes. When paired with enrichment analysis of node classification terms, the most compressible sets of edges provide a highly informative sketch of the dominant relationship patterns that define the network. In addition, this procedure also gives rise to a novel, link-based definition of overlapping node communities in which nodes are defined by their relationships with sets of other nodes, rather than through connections within the community. We show that this completely general approach can be applied to undirected, directed, and bipartite networks, yielding valuable insights into the large-scale structure of real-world networks, including social networks and food webs. Our approach therefore provides a novel way in which network architecture can be studied, defined and classified. PMID:24663099

  17. Intraoperative and postoperative feasibility and safety of total tubeless, tubeless, small-bore tube, and standard percutaneous nephrolithotomy: a systematic review and network meta-analysis of 16 randomized controlled trials.

    PubMed

    Lee, Joo Yong; Jeh, Seong Uk; Kim, Man Deuk; Kang, Dong Hyuk; Kwon, Jong Kyou; Ham, Won Sik; Choi, Young Deuk; Cho, Kang Su

    2017-06-27

    Percutaneous nephrolithotomy (PCNL) is performed to treat relatively large renal stones. Recent publications indicate that tubeless and total tubeless (stentless) PCNL is safe in selected patients. We performed a systematic review and network meta-analysis to evaluate the feasibility and safety of different PCNL procedures, including total tubeless, tubeless with stent, small-bore tube, and large-bore tube PCNLs. PubMed, Cochrane Central Register of Controlled Trials, and EMBASE™ databases were searched to identify randomized controlled trials published before December 30, 2013. One researcher examined all titles and abstracts found by the searches. Two investigators independently evaluated the full-text articles to determine whether those met the inclusion criteria. The quality of the included studies was rated with Cochrane's risk-of-bias assessment tool. Sixteen studies were included in the final syntheses, including pairwise and network meta-analyses. Operation time, pain scores, and transfusion rates were not significantly different between PCNL procedures. Network meta-analyses demonstrated that for hemoglobin changes, total tubeless PCNL may be superior to standard PCNL (mean difference [MD] 0.65, 95% CI 0.14 to 1.13) and tubeless PCNL with stent (MD -1.14, 95% CI -1.65 to -0.62), and small-bore PCNL may be superior to tubeless PCNL with stent (MD 1.30, 95% CI 0.27 to 2.26). Network meta-analyses also showed that for length of hospital stay, total tubeless (MD 1.33, 95% CI 0.23 to 2.43) and tubeless PCNLs with stent (MD 0.99, 95% CI 0.19 to 1.79) may be superior to standard PCNL. In rank probability tests, small-bore tube and total tubeless PCNLs were superior for operation time, pain scores, and hemoglobin changes. For hemoglobin changes, total tubeless and small-bore PCNLs may be superior to other methods. For hospital stay, total tubeless and tubeless PCNLs with stent may be superior to other procedures.
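
    Network meta-analysis combines direct and indirect evidence. Its simplest building block, the Bucher adjusted indirect comparison through a common comparator, can be sketched as follows; the numbers in the usage note are toy values, not the trial data above.

```python
def bucher_indirect(md_ab, se_ab, md_cb, se_cb):
    """Adjusted indirect comparison of treatments A vs C through a common
    comparator B: MD_AC = MD_AB - MD_CB, with the variances adding."""
    md_ac = md_ab - md_cb
    se_ac = (se_ab ** 2 + se_cb ** 2) ** 0.5
    return md_ac, se_ac
```

    For instance, `bucher_indirect(1.0, 0.5, 0.4, 0.5)` yields an indirect mean difference of 0.6 with a wider standard error than either direct estimate, reflecting the extra uncertainty of indirect evidence.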

  18. ICA-based artefact removal and accelerated fMRI acquisition for improved Resting State Network imaging

    PubMed Central

    Griffanti, Ludovica; Salimi-Khorshidi, Gholamreza; Beckmann, Christian F.; Auerbach, Edward J.; Douaud, Gwenaëlle; Sexton, Claire E.; Zsoldos, Enikő; Ebmeier, Klaus P; Filippini, Nicola; Mackay, Clare E.; Moeller, Steen; Xu, Junqian; Yacoub, Essa; Baselli, Giuseppe; Ugurbil, Kamil; Miller, Karla L.; Smith, Stephen M.

    2014-01-01

    The identification of resting state networks (RSNs) and the quantification of their functional connectivity in resting-state fMRI (rfMRI) are seriously hindered by the presence of artefacts, many of which overlap spatially or spectrally with RSNs. Moreover, recent developments in fMRI acquisition yield data with higher spatial and temporal resolutions, but may increase artefacts both spatially and/or temporally. Hence the correct identification and removal of non-neural fluctuations is crucial, especially in accelerated acquisitions. In this paper we investigate the effectiveness of three data-driven cleaning procedures, compare standard against higher (spatial and temporal) resolution accelerated fMRI acquisitions, and investigate the combined effect of different acquisitions and different cleanup approaches. We applied single-subject independent component analysis (ICA), followed by automatic component classification with FMRIB’s ICA-based X-noiseifier (FIX) to identify artefactual components. We then compared two first-level (within-subject) cleaning approaches for removing those artefacts and motion-related fluctuations from the data. The effectiveness of the cleaning procedures were assessed using timeseries (amplitude and spectra), network matrix and spatial map analyses. For timeseries and network analyses we also tested the effect of a second-level cleaning (informed by group-level analysis). Comparing these approaches, the preferable balance between noise removal and signal loss was achieved by regressing out of the data the full space of motion-related fluctuations and only the unique variance of the artefactual ICA components. Using similar analyses, we also investigated the effects of different cleaning approaches on data from different acquisition sequences. 
With the optimal cleaning procedures, functional connectivity results from accelerated data were statistically comparable or significantly better than the standard (unaccelerated) acquisition, and, crucially, with higher spatial and temporal resolution. Moreover, we were able to perform higher dimensionality ICA decompositions with the accelerated data, which is very valuable for detailed network analyses. PMID:24657355

  19. ICA-based artefact removal and accelerated fMRI acquisition for improved resting state network imaging.

    PubMed

    Griffanti, Ludovica; Salimi-Khorshidi, Gholamreza; Beckmann, Christian F; Auerbach, Edward J; Douaud, Gwenaëlle; Sexton, Claire E; Zsoldos, Enikő; Ebmeier, Klaus P; Filippini, Nicola; Mackay, Clare E; Moeller, Steen; Xu, Junqian; Yacoub, Essa; Baselli, Giuseppe; Ugurbil, Kamil; Miller, Karla L; Smith, Stephen M

    2014-07-15

    The identification of resting state networks (RSNs) and the quantification of their functional connectivity in resting-state fMRI (rfMRI) are seriously hindered by the presence of artefacts, many of which overlap spatially or spectrally with RSNs. Moreover, recent developments in fMRI acquisition yield data with higher spatial and temporal resolutions, but may increase artefacts both spatially and/or temporally. Hence the correct identification and removal of non-neural fluctuations is crucial, especially in accelerated acquisitions. In this paper we investigate the effectiveness of three data-driven cleaning procedures, compare standard against higher (spatial and temporal) resolution accelerated fMRI acquisitions, and investigate the combined effect of different acquisitions and different cleanup approaches. We applied single-subject independent component analysis (ICA), followed by automatic component classification with FMRIB's ICA-based X-noiseifier (FIX) to identify artefactual components. We then compared two first-level (within-subject) cleaning approaches for removing those artefacts and motion-related fluctuations from the data. The effectiveness of the cleaning procedures was assessed using time series (amplitude and spectra), network matrix and spatial map analyses. For time series and network analyses we also tested the effect of a second-level cleaning (informed by group-level analysis). Comparing these approaches, the preferable balance between noise removal and signal loss was achieved by regressing out of the data the full space of motion-related fluctuations and only the unique variance of the artefactual ICA components. Using similar analyses, we also investigated the effects of different cleaning approaches on data from different acquisition sequences. 
With the optimal cleaning procedures, functional connectivity results from accelerated data were statistically comparable or significantly better than the standard (unaccelerated) acquisition, and, crucially, with higher spatial and temporal resolution. Moreover, we were able to perform higher dimensionality ICA decompositions with the accelerated data, which is very valuable for detailed network analyses.

  20. Fluxes through plant metabolic networks: measurements, predictions, insights and challenges.

    PubMed

    Kruger, Nicholas J; Ratcliffe, R George

    2015-01-01

    Although the flows of material through metabolic networks are central to cell function, they are not easy to measure other than at the level of inputs and outputs. This is particularly true in plant cells, where the network spans multiple subcellular compartments and where the network may function either heterotrophically or photoautotrophically. For many years, kinetic modelling of pathways provided the only method for describing the operation of fragments of the network. However, more recently, it has become possible to map the fluxes in central carbon metabolism using the stable isotope labelling techniques of metabolic flux analysis (MFA), and to predict intracellular fluxes using constraints-based modelling procedures such as flux balance analysis (FBA). These approaches were originally developed for the analysis of microbial metabolism, but over the last decade, they have been adapted for the more demanding analysis of plant metabolic networks. Here, the principal features of MFA and FBA as applied to plants are outlined, followed by a discussion of the insights that have been gained into plant metabolic networks through the application of these time-consuming and non-trivial methods. The discussion focuses on how a system-wide view of plant metabolism has increased our understanding of network structure, metabolic perturbations and the provision of reducing power and energy for cell function. Current methodological challenges that limit the scope of plant MFA are discussed and particular emphasis is placed on the importance of developing methods for cell-specific MFA.
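
    Constraint-based FBA, as outlined above, reduces to a linear program: maximize an objective flux subject to steady-state mass balance and flux bounds. Production tools use LP solvers (e.g. through COBRApy); the brute-force toy below, with an invented two-branch network and made-up coefficients, only illustrates the structure of the problem.

```python
def fba_toy(uptake_max=10.0, maintenance_ratio=0.25, step=0.01):
    """Toy flux balance analysis by exhaustive search.
    Mass balance (steady state): v_uptake = v_biomass + v_byproduct
    Constraint:                  v_byproduct >= maintenance_ratio * v_biomass
    Objective:                   maximize v_biomass, with uptake at its bound.
    All numbers are illustrative assumptions, not a real network."""
    best = 0.0
    n_steps = int(uptake_max / step)
    for i in range(n_steps + 1):
        v_biomass = i * step
        v_byproduct = uptake_max - v_biomass
        # Small tolerance guards against floating-point round-off.
        if v_byproduct + 1e-9 >= maintenance_ratio * v_biomass:
            best = max(best, v_biomass)
    return round(best, 2)
```

    With these coefficients the optimum is v_biomass = uptake_max / (1 + maintenance_ratio) = 8.0; a real FBA replaces the scan with an LP over hundreds of reactions.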

  1. Design of surface-water data networks for regional information

    USGS Publications Warehouse

    Moss, Marshall E.; Gilroy, E.J.; Tasker, Gary D.; Karlinger, M.R.

    1982-01-01

    This report describes a technique, Network Analysis of Regional Information (NARI), and the existing computer procedures that have been developed for the specification of the regional information-cost relation for several statistical parameters of streamflow. The measure of information used is the true standard error of estimate of a regional logarithmic regression. The cost is a function of the number of stations at which hydrologic data are collected and the number of years for which the data are collected. The technique can be used to obtain either (1) a minimum cost network that will attain a prespecified accuracy and reliability or (2) a network that maximizes information given a set of budgetary and time constraints.

  2. Development of analytic intermodal freight networks for use within a GIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Southworth, F.; Xiong, D.; Middendorf, D.

    1997-05-01

The paper discusses the practical issues involved in constructing intermodal freight networks that can be used within GIS platforms to support inter-regional freight routing and subsequent (for example, commodity flow) analysis. The procedures described can be used to create freight-routable and traffic flowable interstate and intermodal networks using some combination of highway, rail, water and air freight transportation. Keys to realistic freight routing are the identification of intermodal transfer locations and associated terminal functions, a proper handling of carrier-owned and operated sub-networks within each of the primary modes of transport, and the ability to model the types of carrier services being offered.

  3. Artificial neural networks applied to quantitative elemental analysis of organic material using PIXE

    NASA Astrophysics Data System (ADS)

    Correa, R.; Chesta, M. A.; Morales, J. R.; Dinator, M. I.; Requena, I.; Vila, I.

    2006-08-01

An artificial neural network (ANN) was trained with real-sample PIXE (particle-induced X-ray emission) spectra of organic substances. Following the training stage, the ANN was applied to a subset of similar samples to obtain the elemental concentrations in muscle, liver and gills of Cyprinus carpio. Concentrations obtained with the ANN method are in full agreement with results from a standard analytical procedure, demonstrating the high potential of ANNs in quantitative PIXE analysis.
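As a rough sketch of this kind of ANN calibration (the peak positions, network size, and training schedule below are invented, not the paper's actual spectra or architecture), one can train a small feed-forward network to map synthetic spectra to concentrations:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic "PIXE-like" spectra: each of three elements contributes one
# Gaussian peak, so a spectrum is linear in the (hypothetical) concentrations.
channels = np.arange(64)
templates = np.array([np.exp(-0.5 * ((channels - c) / 2.5) ** 2)
                      for c in (12, 30, 50)])
conc = rng.uniform(0.1, 1.0, size=(200, 3))          # training concentrations
spectra = conc @ templates + 0.01 * rng.normal(size=(200, 64))

# One-hidden-layer network trained by full-batch gradient descent to map
# spectrum -> concentrations (the calibration task the abstract describes).
W1 = 0.1 * rng.normal(size=(64, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.normal(size=(16, 3));  b2 = np.zeros(3)
lr, losses = 0.05, []
for _ in range(4000):
    H = np.tanh(spectra @ W1 + b1)                   # hidden activations
    pred = H @ W2 + b2
    err = pred - conc
    losses.append(float((err ** 2).mean()))
    dpred = 2.0 * err / err.size                     # backpropagation
    dW2, db2 = H.T @ dpred, dpred.sum(axis=0)
    dH = (dpred @ W2.T) * (1.0 - H ** 2)
    dW1, db1 = spectra.T @ dH, dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
```

The mean-squared calibration error should fall by well over an order of magnitude during training; a real application would also hold out validation samples, as the abstract's comparison against a standard analytical procedure implies.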

  4. Facilitating relational framing in children and individuals with developmental delay using the relational completion procedure.

    PubMed

    Walsh, Sinead; Horgan, Jennifer; May, Richard J; Dymond, Simon; Whelan, Robert

    2014-01-01

The Relational Completion Procedure is effective for establishing same, opposite and comparative derived relations in verbally able adults, but to date it has not been used to establish relational frames in young children or those with developmental delay. In Experiment 1, the Relational Completion Procedure was used with the goal of establishing two 3-member sameness networks in nine individuals with Autism Spectrum Disorder (eight with language delay). A multiple exemplar intervention was employed to facilitate derived relational responding when required. Seven of nine participants in Experiment 1 passed tests for derived relations. In Experiment 2, eight participants (all of whom, except one, had a verbal repertoire) were given training with the aim of establishing two 4-member sameness networks. Three of these participants were typically developing children aged 5 to 6 years, all of whom demonstrated derived relations, as did four of the five participants with developmental delay. These data demonstrate that it is possible to reliably establish derived relations in young children and those with developmental delay using an automated procedure. © Society for the Experimental Analysis of Behavior.

  5. Network analysis: A new way of understanding psychopathology?

    PubMed

    Fonseca-Pedrero, Eduardo

Current taxonomic systems are based on a descriptive and categorical approach in which psychopathological symptoms and signs are caused by a hypothetical underlying mental disorder. To circumvent the limitations of these classification systems, it is necessary to incorporate new conceptual and psychometric models that allow us to understand, analyze, and intervene in psychopathological phenomena from another perspective. The main goal was to present a new approach, network analysis, for application in the field of psychopathology. First, psychopathological disorders are briefly introduced as complex dynamic systems. Key concepts, as well as the different types of networks and the procedures for their estimation, are discussed. Next, centrality measures are addressed, which are important both for understanding the network and for examining the relevance of the variables within it. These concepts are then exemplified by estimating a network of self-reported psychopathological symptoms in a representative sample of adolescents. Finally, a brief recapitulation is made and future lines of research are discussed. Copyright © 2017 SEP y SEPB. Publicado por Elsevier España, S.L.U. All rights reserved.
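A common way to estimate such a psychopathology network from symptom scores is via partial correlations, read off the standardized inverse covariance (precision) matrix. The sketch below uses simulated symptom data; the chain structure, sample size, and strength centrality measure are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated scores for 500 respondents on 4 symptoms: symptom 1 depends on
# symptom 0, symptom 2 depends on symptom 1, symptom 3 is independent.
n = 500
s0 = rng.normal(size=n)
s1 = 0.8 * s0 + rng.normal(scale=0.6, size=n)
s2 = 0.8 * s1 + rng.normal(scale=0.6, size=n)
s3 = rng.normal(size=n)
X = np.column_stack([s0, s1, s2, s3])

# Partial-correlation network: invert the covariance matrix (precision K)
# and standardize; edge (i, j) = -K_ij / sqrt(K_ii * K_jj).
K = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.diag(K))
pcorr = -K / np.outer(d, d)
np.fill_diagonal(pcorr, 0.0)

# Strength centrality: sum of absolute edge weights incident on each node.
strength = np.abs(pcorr).sum(axis=1)
```

The estimated network shows a strong edge between directly coupled symptoms, a near-zero edge between symptoms 0 and 2 (conditionally independent given symptom 1), and low centrality for the isolated symptom.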

  6. Error monitoring issues for common channel signaling

    NASA Astrophysics Data System (ADS)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far and includes the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion of their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS), as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
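The SS7 signal-unit error-rate monitor analyzed here behaves like a leaky-bucket counter. The simulation below sketches that mechanism under the paper's random-error case; the threshold of 64 and leak interval of 256 signal units follow the commonly cited Q.703 values, but treat them (and the reset-on-failure behavior) as assumptions of this sketch:

```python
import random

def suerm_failures(error_prob, n_units, threshold=64, leak_interval=256, seed=1):
    """Leaky-bucket error monitor: the counter increments on each errored
    signal unit, leaks by one every `leak_interval` units, and a link failure
    (triggering changeover) is declared when it reaches `threshold`."""
    rng = random.Random(seed)
    counter, failures = 0, 0
    for i in range(1, n_units + 1):
        if rng.random() < error_prob:
            counter += 1
        if i % leak_interval == 0 and counter > 0:
            counter -= 1
        if counter >= threshold:
            failures += 1
            counter = 0              # link realigned; monitoring restarts
    return failures

# Below roughly 1 error per 256 units the bucket drains faster than it fills,
# so failures are rare; well above it, changeovers become frequent.
low = suerm_failures(0.001, 500_000)
high = suerm_failures(0.02, 500_000)
```

This sharp transition in failure rate as the error probability crosses the leak rate is exactly the kind of behavior behind the paper's observation that one parameter set cannot suit all link conditions.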

  7. Calibration Testing of Network Tap Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popovsky, Barbara; Chee, Brian; Frincke, Deborah A.

    2007-11-14

Abstract: Understanding the behavior of network forensic devices is important to support prosecutions of malicious conduct on computer networks as well as legal remedies for false accusations of network management negligence. Individuals who seek to establish the credibility of network forensic data must speak competently about how the data was gathered and the potential for data loss. Unfortunately, manufacturers rarely provide information about the performance of low-layer network devices at a level that will survive legal challenges. This paper proposes a first step toward an independent calibration standard by establishing a validation testing methodology for evaluating forensic taps against manufacturer specifications. The methodology and the theoretical analysis that led to its development are offered as a conceptual framework for developing a standard and to "operationalize" network forensic readiness. This paper also provides details of an exemplar test, testing environment, procedures and results.

  8. A systematic approach to infer biological relevance and biases of gene network structures.

    PubMed

    Antonov, Alexey V; Tetko, Igor V; Mewes, Hans W

    2006-01-10

The development of high-throughput technologies has generated the need for bioinformatics approaches to assess the biological relevance of gene networks. Although several tools have been proposed for analysing the enrichment of functional categories in a set of genes, none of them is suitable for evaluating the biological relevance of a gene network. We propose a procedure and have developed a web-based resource (BIOREL) to estimate the functional bias (biological relevance) of any given genetic network by integrating different sources of biological information. The weights of the edges in the network may be either binary or continuous. These essential features make our web tool unique among many similar services. BIOREL provides standardized estimations of the network biases extracted from independent data. Through analyses of real data we demonstrate that the potential applications of BIOREL range from various benchmarking purposes to systematic analysis of network biology.
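The core of any functional-bias estimate of this sort is an over-representation test. A minimal version uses the hypergeometric tail probability; the gene counts below are hypothetical, and BIOREL's actual scoring additionally incorporates edge weights and multiple annotation sources:

```python
from scipy.stats import hypergeom

def enrichment_p(n_genome, n_category, n_network, n_overlap):
    """P(overlap >= observed) when n_network genes are drawn from a genome of
    n_genome genes, n_category of which carry the functional annotation."""
    return hypergeom.sf(n_overlap - 1, n_genome, n_category, n_network)

# Hypothetical numbers: 20,000 genes, 200 annotated, a 100-gene network
# containing 10 annotated genes (ten times the expected count of one).
p = enrichment_p(20_000, 200, 100, 10)
```

A tenfold excess over the expected overlap yields a vanishingly small p-value, flagging the network as strongly biased toward that functional category.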

  9. Earth Observing System (EOS) Advanced Microwave Sounding Unit-A (AMSU-A) schedule plan

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This report describes Aerojet's methods and procedures used to control and administer contractual schedules for the EOS/AMSU-A program. Included are the following: the master, intermediate, and detail schedules; critical path analysis; and the total program logic network diagrams.

  10. FDI based on Artificial Neural Network for Low-Voltage-Ride-Through in DFIG-based Wind Turbine.

    PubMed

    Adouni, Amel; Chariag, Dhia; Diallo, Demba; Ben Hamed, Mouna; Sbita, Lassaâd

    2016-09-01

As per modern electrical grid rules, a wind turbine needs to operate continually even in the presence of severe grid faults, a requirement known as Low Voltage Ride Through (LVRT). Hence, a new LVRT Fault Detection and Identification (FDI) procedure has been developed to take the appropriate decision and devise a suitable control strategy. To improve decision-making and FDI during grid faults, the proposed procedure analyzes voltage indicators using a new Artificial Neural Network (ANN) architecture. Two features are extracted: the amplitude and the phase angle. The procedure is divided into two steps: fault-indicator generation and indicator analysis for fault diagnosis. The first step comprises six ANNs that describe the three phases of the grid (three amplitudes and three phase angles). The second step comprises a single ANN that analyzes the indicators and generates a decision signal describing the operating mode (healthy or faulty). The decision signal also identifies the fault type, distinguishing between the four fault types. The diagnosis procedure was tested in simulation and on an experimental prototype. The results confirm its efficiency, rapidity, robustness, and immunity to noise and unknown inputs. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
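The two features the first-stage networks consume, per-phase voltage amplitude and phase angle, can be extracted from a sampled waveform with a single-bin DFT at the grid frequency. The dip depth, phase shift, and sampling rate below are hypothetical, and the paper's ANNs would operate on such features rather than compute them this way:

```python
import numpy as np

fs, f0 = 10_000.0, 50.0              # sampling rate and grid frequency (Hz)
t = np.arange(0.0, 0.2, 1.0 / fs)    # exactly ten 50 Hz cycles

# Simulated phase-a voltage during a dip: amplitude 0.4 p.u., phase -0.3 rad.
v = 0.4 * np.cos(2 * np.pi * f0 * t - 0.3)

# Single-bin DFT at the grid frequency yields the two features the diagnosis
# networks consume: the amplitude and the phase angle of the voltage phasor.
phasor = 2.0 / len(t) * np.sum(v * np.exp(-2j * np.pi * f0 * t))
amplitude, angle = np.abs(phasor), np.angle(phasor)
```

Repeating this for all three phases gives the six indicators (three amplitudes, three angles) that the decision network uses to classify the operating mode and fault type.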

  11. Safety assessment, detection and traceability, and societal aspects of genetically modified foods. European Network on Safety Assessment of Genetically Modified Food Crops (ENTRANSFOOD). Concluding remarks.

    PubMed

    Kuiper, H A; König, A; Kleter, G A; Hammes, W P; Knudsen, I

    2004-07-01

    The most important results from the EU-sponsored ENTRANSFOOD Thematic Network project are reviewed, including the design of a detailed step-wise procedure for the risk assessment of foods derived from genetically modified crops based on the latest scientific developments, evaluation of topical risk assessment issues, and the formulation of proposals for improved risk management and public involvement in the risk analysis process. Copyright 2004 Elsevier Ltd.

  12. Prediction of Aerodynamic Characteristics of Fighter Wings at High Angles of Attack.

    DTIC Science & Technology

    1984-03-01

potential distribution throughout the network of four points on a body surface greatly facilitates the flow analysis procedure. Tangential velocity...expensive of computer time. For example, as quoted by McLean, using this coarsest grid network, each surface of the 727-200 wing required 10 minutes of...1980. 19. Le Balleur, J.C. and Neron, M., "Calcul d'Ecoulements Visqueux Decolles sur Profils d'Ailes par une Approche de Couplage", AGARD CP-291

  13. Identification of functional modules that correlate with phenotypic difference: the influence of network topology

    PubMed Central

    2010-01-01

    One of the important challenges to post-genomic biology is relating observed phenotypic alterations to the underlying collective alterations in genes. Current inferential methods, however, invariably omit large bodies of information on the relationships between genes. We present a method that takes account of such information - expressed in terms of the topology of a correlation network - and we apply the method in the context of current procedures for gene set enrichment analysis. PMID:20187943

  14. Nuevas tecnicas basadas en redes neuronales para el diseno de filtros de microondas multicapa apantallados

    NASA Astrophysics Data System (ADS)

    Pascual Garcia, Juan

In this PhD thesis, a neural-network-based method for the analysis of shielded multilayer circuits has been developed. One of the most successful analysis procedures for this kind of structure is the Integral Equation (IE) technique solved by the Method of Moments (MoM). To solve the IE in the formulation that uses the potentials of the media, the Green's functions associated with those potentials must be available, and the main computational burden in solving the IE lies in their numerical evaluation. In this work, circuit analysis has been drastically accelerated by approximating the Green's functions with neural networks; once trained, the networks substitute for the Green's functions in the IE. Two types of neural networks have been used: radial basis function neural networks (RBFNNs) and Chebyshev neural networks. Two operations in particular make a correct approximation of the Green's functions possible. First, a very effective partition of the input space has been developed. Second, eliminating the singularity makes it feasible to approximate slowly varying functions. Two singularity-elimination strategies have been developed: the first is based on multiplication by the distance between the source and observation points (rho); the second, which outperforms the first, consists of extracting two layers of spatial images from the full summation of images. For the Chebyshev neural networks, the OLS training algorithm has been applied in a novel fashion, allowing optimum design of this kind of network; their performance greatly exceeds that of the RBFNNs. For both network types, the time savings achieved make the neural method worthwhile: the time invested in the input-space division and in training is recouped after only a few circuit analyses.
To demonstrate the practical ability of the neural-network-based analysis method, two new design procedures have been developed. The first uses genetic algorithms to optimize an initial filter that does not fulfill the established specifications; a new fitness function, especially well suited to filter design, has been defined to ensure correct convergence of the optimization and to prevent premature convergence. The second is based on approximating, by means of neural networks, the relations between the electrical parameters that define the circuit response and the physical dimensions that synthesize those parameters; networks trained with these data can be used to design many circuits in a given structure. Both methods have demonstrated their ability in the design of practical filters.
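The thesis's two key ingredients, singularity extraction followed by RBF approximation, can be sketched on a stand-in kernel. The 1/rho-plus-smooth "Green's function", the centre count, and the width below are all invented; the real functions come from the spatial-images series of the shielded structure:

```python
import numpy as np

# Stand-in for a potential Green's function: a 1/rho singularity plus a
# smooth part (the real functions come from the spatial-images series).
def green(rho):
    return 1.0 / rho + np.sin(3.0 * rho)

rho_train = np.linspace(0.05, 2.0, 200)
# Singularity extraction: multiply by rho so the target varies slowly.
target = rho_train * green(rho_train)          # = 1 + rho*sin(3*rho)

# RBF network: fixed Gaussian centres; output weights fitted by least squares
# (a common shortcut for RBFNN training when the centres are preassigned).
centres = np.linspace(0.0, 2.0, 25)
width = 0.15
def design(rho):
    return np.exp(-((rho[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

w, *_ = np.linalg.lstsq(design(rho_train), target, rcond=None)
fit_err = np.max(np.abs(design(rho_train) @ w - target))

# Divide the singularity back in to evaluate the approximated Green's function.
rho_test = np.linspace(0.1, 1.9, 50)
approx = (design(rho_test) @ w) / rho_test
max_abs_err = np.max(np.abs(approx - green(rho_test)))
```

Fitting the rho-multiplied target is easy because it is smooth; dividing the singularity back in afterwards recovers the singular kernel accurately, which is exactly why the extraction step matters.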

  15. Space station data system analysis/architecture study. Task 3: Trade studies, DR-5, volume 1

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The primary objective of Task 3 is to provide additional analysis and insight necessary to support key design/programmatic decision for options quantification and selection for system definition. This includes: (1) the identification of key trade study topics; (2) the definition of a trade study procedure for each topic (issues to be resolved, key inputs, criteria/weighting, methodology); (3) conduct tradeoff and sensitivity analysis; and (4) the review/verification of results within the context of evolving system design and definition. The trade study topics addressed in this volume include space autonomy and function automation, software transportability, system network topology, communications standardization, onboard local area networking, distributed operating system, software configuration management, and the software development environment facility.

  16. Verification of mesoscale objective analyses of VAS and rawinsonde data using the March 1982 AVE/VAS special network data

    NASA Technical Reports Server (NTRS)

    Doyle, James D.; Warner, Thomas T.

    1987-01-01

    Various combinations of VAS (Visible and Infrared Spin Scan Radiometer Atmospheric Sounder) data, conventional rawinsonde data, and gridded data from the National Weather Service's (NWS) global analysis, were used in successive-correction and variational objective-analysis procedures. Analyses are produced for 0000 GMT 7 March 1982, when the VAS sounding distribution was not greatly limited by the existence of cloud cover. The successive-correction (SC) procedure was used with VAS data alone, rawinsonde data alone, and both VAS and rawinsonde data. Variational techniques were applied in three ways. Each of these techniques was discussed.
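The successive-correction (SC) analysis used here can be sketched in one dimension: repeatedly blend observation increments into a background field with Cressman-type weights and a shrinking influence radius. The field, observation count, noise level, and radii below are hypothetical, and the real analyses are two-dimensional and multivariate:

```python
import numpy as np

rng = np.random.default_rng(7)

# Truth field and noisy observations at scattered points (all hypothetical).
grid = np.linspace(0.0, 100.0, 101)
truth = np.sin(grid / 15.0)
obs_x = rng.uniform(0.0, 100.0, 60)
obs = np.sin(obs_x / 15.0) + 0.05 * rng.normal(size=60)

# Successive correction: on each pass, interpolate the current analysis to
# the observation points, form increments, and spread them back to the grid
# with Cressman weights w = (R^2 - r^2)/(R^2 + r^2), shrinking the radius R.
analysis = np.zeros_like(grid)                 # flat first-guess background
for R in (30.0, 15.0, 8.0):
    inc = obs - np.interp(obs_x, grid, analysis)
    r2 = (grid[:, None] - obs_x[None, :]) ** 2
    w = np.clip((R ** 2 - r2) / (R ** 2 + r2), 0.0, None)
    analysis += (w @ inc) / np.maximum(w.sum(axis=1), 1e-12)

baseline = np.sqrt(np.mean(truth ** 2))        # error of the flat background
rmse = np.sqrt(np.mean((analysis - truth) ** 2))
```

The early wide-radius passes capture the large-scale pattern and the later narrow passes sharpen local detail, which is the basic trade-off the SC and variational techniques in the paper are compared on.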

  17. The internet worm

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

In November 1988 a worm program invaded several thousand UNIX-operated Sun workstations and VAX computers attached to the Research Internet, seriously disrupting service for several days but damaging no files. An analysis of the worm's decompiled code revealed a battery of attacks by a knowledgeable insider, and demonstrated a number of security weaknesses. The attack occurred in an open network, and little can be inferred about the vulnerabilities of closed networks used for critical operations. The attack showed that password protection procedures need review and strengthening. It showed that sets of mutually trusting computers need to be carefully controlled. Sharp public reaction crystallized into a demand for user awareness and accountability in a networked world.

  18. Logic-Based Models for the Analysis of Cell Signaling Networks†

    PubMed Central

    2010-01-01

    Computational models are increasingly used to analyze the operation of complex biochemical networks, including those involved in cell signaling networks. Here we review recent advances in applying logic-based modeling to mammalian cell biology. Logic-based models represent biomolecular networks in a simple and intuitive manner without describing the detailed biochemistry of each interaction. A brief description of several logic-based modeling methods is followed by six case studies that demonstrate biological questions recently addressed using logic-based models and point to potential advances in model formalisms and training procedures that promise to enhance the utility of logic-based methods for studying the relationship between environmental inputs and phenotypic or signaling state outputs of complex signaling networks. PMID:20225868
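A minimal logic-based model of the kind reviewed here: species are Boolean, update rules encode the wiring, and synchronous iteration finds the attractor. The cascade below (ligand, receptor, kinase, inhibitor, transcription factor) is a made-up example, not one of the review's six case studies:

```python
# Minimal synchronous logic-based model (hypothetical cascade): ligand L
# activates receptor R; R activates kinase K unless inhibitor I is present;
# K activates transcription factor TF. L and I are held fixed as inputs.
def step(s):
    return {"L": s["L"], "I": s["I"],
            "R": s["L"],
            "K": s["R"] and not s["I"],
            "TF": s["K"]}

def attractor(s, max_steps=10):
    """Iterate synchronous updates until the state stops changing."""
    for _ in range(max_steps):
        nxt = step(s)
        if nxt == s:
            return s
        s = nxt
    return s

start = {"R": False, "K": False, "TF": False}
on = attractor({**start, "L": True, "I": False})
blocked = attractor({**start, "L": True, "I": True})
```

Even this tiny model maps an environmental input (ligand with or without inhibitor) to a phenotypic output (transcription factor state) without any kinetic parameters, which is the appeal of the logic-based formalism.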

  19. Next-Generation WDM Network Design and Routing

    NASA Astrophysics Data System (ADS)

    Tsang, Danny H. K.; Bensaou, Brahim

    2003-08-01

    Call for Papers The Editors of JON are soliciting papers on WDM Network Design and Routing. The aim in this focus issue is to publish original research on topics including - but not limited to - the following: - WDM network architectures and protocols - GMPLS network architectures - Wavelength converter placement in WDM networks - Routing and wavelength assignment (RWA) in WDM networks - Protection and restoration strategies and algorithms in WDM networks - Traffic grooming in WDM networks - Dynamic routing strategies and algorithms - Optical Burst Switching - Support of Multicast - Protection and restoration in WDM networks - Performance analysis and optimization in WDM networks Manuscript Submission To submit to this special issue, follow the normal procedure for submission to JON, indicating "WDM Network Design" in the "Comments" field of the online submission form. For all other questions relating to this focus issue, please send an e-mail to jon@osa.org, subject line "WDM Network Design." Additional information can be found on the JON website: http://www.osa-jon.org/submission/. Schedule Paper Submission Deadline: November 1, 2003 Notification to Authors: January 15, 2004 Final Manuscripts to Publisher: February 15, 2004 Publication of Focus Issue: February/March 2004

  20. Next-Generation WDM Network Design and Routing

    NASA Astrophysics Data System (ADS)

    Tsang, Danny H. K.; Bensaou, Brahim

    2003-10-01

    Call for Papers The Editors of JON are soliciting papers on WDM Network Design and Routing. The aim in this focus issue is to publish original research on topics including - but not limited to - the following: - WDM network architectures and protocols - GMPLS network architectures - Wavelength converter placement in WDM networks - Routing and wavelength assignment (RWA) in WDM networks - Protection and restoration strategies and algorithms in WDM networks - Traffic grooming in WDM networks - Dynamic routing strategies and algorithms - Optical burst switching - Support of multicast - Protection and restoration in WDM networks - Performance analysis and optimization in WDM networks Manuscript Submission To submit to this special issue, follow the normal procedure for submission to JON, indicating "WDM Network Design" in the "Comments" field of the online submission form. For all other questions relating to this focus issue, please send an e-mail to jon@osa.org, subject line "WDM Network Design." Additional information can be found on the JON website: http://www.osa-jon.org/submission/. Schedule - Paper Submission Deadline: November 1, 2003 - Notification to Authors: January 15, 2004 - Final Manuscripts to Publisher: February 15, 2004 - Publication of Focus Issue: February/March 2004

  1. Next-Generation WDM Network Design and Routing

    NASA Astrophysics Data System (ADS)

    Tsang, Danny H. K.; Bensaou, Brahim

    2003-09-01

    Call for Papers The Editors of JON are soliciting papers on WDM Network Design and Routing. The aim in this focus issue is to publish original research on topics including - but not limited to - the following: - WDM network architectures and protocols - GMPLS network architectures - Wavelength converter placement in WDM networks - Routing and wavelength assignment (RWA) in WDM networks - Protection and restoration strategies and algorithms in WDM networks - Traffic grooming in WDM networks - Dynamic routing strategies and algorithms - Optical burst switching - Support of multicast - Protection and restoration in WDM networks - Performance analysis and optimization in WDM networks Manuscript Submission To submit to this special issue, follow the normal procedure for submission to JON, indicating "WDM Network Design" in the "Comments" field of the online submission form. For all other questions relating to this focus issue, please send an e-mail to jon@osa.org, subject line "WDM Network Design." Additional information can be found on the JON website: http://www.osa-jon.org/submission/. Schedule - Paper Submission Deadline: November 1, 2003 - Notification to Authors: January 15, 2004 - Final Manuscripts to Publisher: February 15, 2004 - Publication of Focus Issue: February/March 2004

  2. Inferring Single Neuron Properties in Conductance Based Balanced Networks

    PubMed Central

    Pool, Román Rossi; Mato, Germán

    2011-01-01

Balanced states in large networks are a common hypothesis for explaining the variability of neural activity in cortical systems. In this regime the statistics of the inputs are characterized by static and dynamic fluctuations, and the dynamic fluctuations have a Gaussian distribution. Such statistics allow the use of reverse correlation methods, by recording synaptic inputs and the spike trains of ongoing spontaneous activity without any additional input. With this method, properties of the single-neuron dynamics that are masked by the balanced state can be quantified. To show the feasibility of this approach we apply it to large networks of conductance-based neurons. The networks are classified as Type I or Type II according to the bifurcations which neurons of the different populations undergo near the firing onset. We also analyze mixed networks, in which each population has a mixture of different neuronal types. We determine under which conditions the intrinsic noise generated by the network can be used to apply reverse correlation methods. We find that under realistic conditions we can ascertain with low error the types of neurons present in the network. We also find that data from neurons with similar firing rates can be combined to perform covariance analysis. We compare the results of these methods (which do not require any external input) to the standard procedure (which requires the injection of Gaussian noise into a single neuron), and find good agreement between the two procedures. PMID:22016730
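The reverse-correlation idea, recovering a neuron's effective filter by averaging the input preceding each spontaneous spike, can be sketched with a toy threshold-crossing neuron driven by Gaussian noise. The boxcar filter, threshold, and window length are assumptions of this sketch; the paper uses conductance-based model neurons and network-generated input:

```python
import numpy as np

rng = np.random.default_rng(2)

# Gaussian input current: a surrogate for the balanced state's dynamic
# fluctuations, which the abstract argues are approximately Gaussian.
n = 200_000
current = rng.normal(size=n)

# Hypothetical neuron: fires when a 20-sample running mean of the input
# crosses a threshold (a crude stand-in for integrate-and-fire dynamics).
kernel_len, threshold = 20, 0.5
filtered = np.convolve(current, np.ones(kernel_len) / kernel_len)[:n]
spikes = np.flatnonzero((filtered[1:] > threshold)
                        & (filtered[:-1] <= threshold)) + 1

# Reverse correlation: the spike-triggered average of the preceding input
# recovers the neuron's effective filter without any injected stimulus.
window = 30
sta = np.mean([current[s - window:s] for s in spikes if s >= window], axis=0)
```

The spike-triggered average is flat far before a spike and elevated over roughly the last 20 samples, recovering the neuron's integration window purely from spontaneous activity.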

  3. Application of artificial neural network to fMRI regression analysis.

    PubMed

    Misaki, Masaya; Miyauchi, Satoru

    2006-01-15

    We used an artificial neural network (ANN) to detect correlations between event sequences and fMRI (functional magnetic resonance imaging) signals. The layered feed-forward neural network, given a series of events as inputs and the fMRI signal as a supervised signal, performed a non-linear regression analysis. This type of ANN is capable of approximating any continuous function, and thus this analysis method can detect any fMRI signals that correlated with corresponding events. Because of the flexible nature of ANNs, fitting to autocorrelation noise is a problem in fMRI analyses. We avoided this problem by using cross-validation and an early stopping procedure. The results showed that the ANN could detect various responses with different time courses. The simulation analysis also indicated an additional advantage of ANN over non-parametric methods in detecting parametrically modulated responses, i.e., it can detect various types of parametric modulations without a priori assumptions. The ANN regression analysis is therefore beneficial for exploratory fMRI analyses in detecting continuous changes in responses modulated by changes in input values.

  4. A high-capacity model for one shot association learning in the brain

    PubMed Central

    Einarsson, Hafsteinn; Lengler, Johannes; Steger, Angelika

    2014-01-01

    We present a high-capacity model for one-shot association learning (hetero-associative memory) in sparse networks. We assume that basic patterns are pre-learned in networks and associations between two patterns are presented only once and have to be learned immediately. The model is a combination of an Amit-Fusi like network sparsely connected to a Willshaw type network. The learning procedure is palimpsest and comes from earlier work on one-shot pattern learning. However, in our setup we can enhance the capacity of the network by iterative retrieval. This yields a model for sparse brain-like networks in which populations of a few thousand neurons are capable of learning hundreds of associations even if they are presented only once. The analysis of the model is based on a novel result by Janson et al. on bootstrap percolation in random graphs. PMID:25426060

  5. A high-capacity model for one shot association learning in the brain.

    PubMed

    Einarsson, Hafsteinn; Lengler, Johannes; Steger, Angelika

    2014-01-01

    We present a high-capacity model for one-shot association learning (hetero-associative memory) in sparse networks. We assume that basic patterns are pre-learned in networks and associations between two patterns are presented only once and have to be learned immediately. The model is a combination of an Amit-Fusi like network sparsely connected to a Willshaw type network. The learning procedure is palimpsest and comes from earlier work on one-shot pattern learning. However, in our setup we can enhance the capacity of the network by iterative retrieval. This yields a model for sparse brain-like networks in which populations of a few thousand neurons are capable of learning hundreds of associations even if they are presented only once. The analysis of the model is based on a novel result by Janson et al. on bootstrap percolation in random graphs.
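The Willshaw component of the model, one-shot clipped Hebbian storage of sparse pattern pairs in a binary weight matrix, can be sketched as follows. The network size, sparsity, pair count, and winner-take-all retrieval rule are illustrative choices, much smaller than the populations (and simpler than the iterative retrieval) in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

n, k, n_pairs = 1000, 30, 100       # neurons, active units/pattern, stored pairs

def sparse_pattern():
    p = np.zeros(n, dtype=bool)
    p[rng.choice(n, size=k, replace=False)] = True
    return p

pairs = [(sparse_pattern(), sparse_pattern()) for _ in range(n_pairs)]

# One-shot storage: each pair is presented once and the binary weights are
# set (clipped Hebbian learning) wherever pre- and post-units are co-active.
W = np.zeros((n, n), dtype=bool)
for x, y in pairs:
    W |= np.logical_and.outer(y, x)

def recall(x):
    """Retrieve the associated pattern: the k most-driven output units fire."""
    drive = (W & x).sum(axis=1)      # set synapses from active inputs
    out = np.zeros(n, dtype=bool)
    out[np.argsort(drive)[-k:]] = True
    return out

correct = sum(np.array_equal(recall(x), y) for x, y in pairs)
```

At this loading, every true output unit receives the full drive of k while spurious units receive far less, so all one hundred once-seen associations are retrieved exactly, which is the high-capacity behavior the analysis quantifies.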

  6. Improving the Unsteady Aerodynamic Performance of Transonic Turbines using Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri K.; Huber, Frank W.

    1999-01-01

    A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits are constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The procedure yielded a modified design that improves the aerodynamic performance through small changes to the reference design geometry. These results demonstrate the capabilities of the neural net-based design procedure, and also show the advantages of including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.
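The response-surface half of such a procedure can be illustrated in one dimension: fit a low-order polynomial to a handful of expensive "simulations" and move the design toward the surface's optimum. The analytic simulator below is a hypothetical stand-in for the Navier-Stokes solver, and a real pass would alternate neural-network and polynomial surfaces over partitions of the design parameters:

```python
import numpy as np

# Hypothetical stand-in for the expensive solver: a smooth loss over one
# design parameter with a small high-frequency ripple.
def simulate(x):
    return (x - 0.7) ** 2 + 0.01 * np.sin(20.0 * x)

# Sample a few "simulations", fit a low-order polynomial response surface,
# and move the design to the surface's minimum.
x_samples = np.linspace(0.0, 1.0, 7)
y_samples = simulate(x_samples)
coeffs = np.polyfit(x_samples, y_samples, 2)
x_opt = -coeffs[1] / (2.0 * coeffs[0])     # vertex of the fitted parabola
```

Seven solver calls locate the optimum near 0.7 despite the ripple, illustrating the economy of low-order surrogates that the abstract emphasizes; the next iteration would resample around `x_opt` with a refined surface.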

  7. Microseismic monitoring of Chocolate Bayou, Texas: the Pleasant Bayou No. 2 geopressured/geothermal energy test well program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mauk, F.J.; Kimball, B.; Davis, R.A.

    1984-01-01

    The Brazoria seismic network, instrumentation, design, and specifications are described. The data analysis procedures are presented. Seismicity is described in relation to the Pleasant Bayou production history. Seismicity originating near the chemical plant east of the geopressured/geothermal well is discussed. (MHR)

  8. Microseismic monitoring of Chocolate Bayou, Texas: The Pleasant Bayou no. 2 geopressured/geothermal energy test well program

    NASA Astrophysics Data System (ADS)

    Mauk, F. J.; Kimball, B.; Davis, R. A.

    The Brazoria seismic network, instrumentation, design, and specifications are described. The data analysis procedures are presented. Seismicity is described in relation to the Pleasant Bayou production history. Seismicity originating near the chemical plant east of the geopressured/geothermal well is discussed.

  9. Organizational Communication Studies: The LTT and OCD Procedures.

    ERIC Educational Resources Information Center

    Wiio, Osmo A.

    Poor results in organizational communication research may stem from a lack of comparative research and a reliance on small, unrepresentative samples from one or two organizations. Measurement techniques such as the communication audit and network analysis make possible the comparison of different organizations and the collection of data…

  10. INVESTIGATING DIFFERENCES IN BRAIN FUNCTIONAL NETWORKS USING HIERARCHICAL COVARIATE-ADJUSTED INDEPENDENT COMPONENT ANALYSIS.

    PubMed

    Shi, Ran; Guo, Ying

    2016-12-01

    Human brains perform tasks via complex functional networks consisting of separated brain regions. A popular approach to characterizing brain functional networks in fMRI studies is independent component analysis (ICA), a powerful method for reconstructing latent source signals from their linear mixtures. In many fMRI studies, an important goal is to investigate how brain functional networks change with specific clinical and demographic covariates. Existing ICA methods, however, cannot directly incorporate covariate effects in the ICA decomposition, and heuristic post-ICA analyses that address this need can be inaccurate and inefficient. In this paper, we propose a hierarchical covariate-adjusted ICA (hc-ICA) model that provides a formal statistical framework for estimating covariate effects and testing differences between brain functional networks. Our method provides a more reliable and powerful statistical tool for evaluating group differences in brain functional networks while appropriately controlling for potential confounding factors. We present an analytically tractable EM algorithm to obtain maximum likelihood estimates of our model, along with a subspace-based approximate EM that runs significantly faster while retaining high accuracy. To test differences in functional networks, we introduce a voxel-wise approximate inference procedure that eliminates the need for computationally expensive covariance matrix estimation and inversion. We demonstrate the advantages of our methods over existing approaches via simulation studies, and we apply our method to an fMRI study to investigate differences in brain functional networks associated with post-traumatic stress disorder (PTSD).
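
    The hc-ICA model itself is beyond a short sketch, but the simpler post-hoc idea it improves on can be illustrated: estimate network loadings first, then regress them on a covariate voxel by voxel. Everything below is synthetic and illustrative; it is not the authors' estimation procedure.

```python
import numpy as np

# Synthetic example: one voxel's network loading for 20 subjects,
# with a true group effect of 0.5 (e.g. control = 0 vs PTSD = 1).
rng = np.random.default_rng(0)
group = np.repeat([0, 1], 10)
loading = 0.5 * group + rng.normal(0.0, 0.1, 20)

# Post-hoc covariate adjustment: ordinary least squares of the
# loading on an intercept plus the group indicator.
X = np.column_stack([np.ones(20), group])
beta, *_ = np.linalg.lstsq(X, loading, rcond=None)
print(round(beta[1], 2))  # estimated group effect, near the true 0.5
```

hc-ICA's contribution, per the abstract, is to fold this covariate effect into the ICA decomposition itself rather than estimating it in a second stage as above.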

  11. Environmentally Friendly Procedure Based on Supercritical Fluid Chromatography and Tandem Mass Spectrometry Molecular Networking for the Discovery of Potent Antiviral Compounds from Euphorbia semiperfoliata.

    PubMed

    Nothias, Louis-Félix; Boutet-Mercey, Stéphanie; Cachet, Xavier; De La Torre, Erick; Laboureur, Laurent; Gallard, Jean-François; Retailleau, Pascal; Brunelle, Alain; Dorrestein, Pieter C; Costa, Jean; Bedoya, Luis M; Roussi, Fanny; Leyssen, Pieter; Alcami, José; Paolini, Julien; Litaudon, Marc; Touboul, David

    2017-10-27

    A supercritical fluid chromatography-based targeted purification procedure using tandem mass spectrometry and molecular networking was developed to analyze, annotate, and isolate secondary metabolites from complex plant extract mixture. This approach was applied for the targeted isolation of new antiviral diterpene esters from Euphorbia semiperfoliata whole plant extract. The analysis of bioactive fractions revealed that unknown diterpene esters, including jatrophane esters and phorbol esters, were present in the samples. The purification procedure using semipreparative supercritical fluid chromatography led to the isolation and identification of two new jatrophane esters (13 and 14) and one known (15) and three new 4-deoxyphorbol esters (16-18). The structure and absolute configuration of compound 16 were confirmed by X-ray crystallography. This compound was found to display antiviral activity against Chikungunya virus (EC 50 = 0.45 μM), while compound 15 proved to be a potent and selective inhibitor of HIV-1 replication in a recombinant virus assay (EC 50 = 13 nM). This study showed that a supercritical fluid chromatography-based protocol and molecular networking can facilitate and accelerate the discovery of bioactive small molecules by targeting molecules of interest, while minimizing the use of toxic solvents.

  12. Backbone of complex networks of corporations: the flow of control.

    PubMed

    Glattfelder, J B; Battiston, S

    2009-09-01

    We present a methodology to extract the backbone of complex networks based on the weight and direction of links, as well as on nontopological properties of nodes. We show how the methodology can be applied in general to networks in which mass or energy is flowing along the links. In particular, the procedure enables us to address important questions in economics, namely, how control and wealth are structured and concentrated across national markets. We report on the first cross-country investigation of ownership networks, focusing on the stock markets of 48 countries around the world. On the one hand, our analysis confirms results expected on the basis of the literature on corporate control, namely, that in Anglo-Saxon countries control tends to be dispersed among numerous shareholders. On the other hand, it also reveals that in the same countries, control is found to be highly concentrated at the global level, namely, lying in the hands of very few important shareholders. Interestingly, the exact opposite is observed for European countries. These results have previously not been reported as they are not observable without the kind of network analysis developed here.
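
    A backbone-extraction step of the kind described above can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: for each node, keep only the strongest incoming weighted links needed to account for a fixed fraction of that node's total incoming weight.

```python
def backbone(edges, coverage=0.8):
    """edges: list of (source, target, weight) tuples; returns kept edges."""
    # Group links by target node (e.g. ownership stakes in a firm).
    incoming = {}
    for s, t, w in edges:
        incoming.setdefault(t, []).append((s, t, w))
    kept = []
    for t, links in incoming.items():
        links.sort(key=lambda e: -e[2])          # strongest links first
        total = sum(w for _, _, w in links)
        acc = 0.0
        for link in links:
            if acc >= coverage * total:          # enough weight covered
                break
            kept.append(link)
            acc += link[2]
    return kept

# Three shareholders of firm C; the weakest link is pruned.
edges = [("A", "C", 0.6), ("B", "C", 0.3), ("D", "C", 0.1)]
print(backbone(edges))  # keeps the 0.6 and 0.3 links only
```

The coverage threshold plays the role of the nontopological node properties mentioned in the abstract: it decides how much of the flow of control each node's backbone must retain.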

  13. Network Analysis of Foramen Ovale Electrode Recordings in Drug-resistant Temporal Lobe Epilepsy Patients

    PubMed Central

    Sanz-García, Ancor; Vega-Zelaya, Lorena; Pastor, Jesús; Torres, Cristina V.; Sola, Rafael G.; Ortega, Guillermo J.

    2016-01-01

    Approximately 30% of epilepsy patients are refractory to antiepileptic drugs. In these cases, surgery is the only alternative for eliminating or controlling seizures. However, a significant minority of patients continue to exhibit post-operative seizures, even when the suspected source of seizures has been correctly localized and resected. The protocol presented here combines a clinical procedure routinely employed during the pre-operative evaluation of temporal lobe epilepsy (TLE) patients with a novel technique for network analysis. The method allows for the evaluation of the temporal evolution of mesial network parameters. The bilateral insertion of foramen ovale electrodes (FOE) into the ambient cistern simultaneously records electrocortical activity at several mesial areas of the temporal lobe. Furthermore, network methodology applied to the recorded time series tracks the temporal evolution of the mesial networks both interictally and during seizures. In this way, the presented protocol offers a unique way to visualize and quantify measures that consider the relationships between several mesial areas rather than a single area. PMID:28060326
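
    The general idea of tracking a mesial network measure over time can be sketched as below: build a connectivity network per time window from the multichannel recordings and follow one summary measure (here mean absolute correlation, a simple stand-in for the paper's network parameters) across windows. The data are synthetic noise, not FOE recordings.

```python
import numpy as np

rng = np.random.default_rng(1)
signals = rng.normal(size=(6, 1000))   # 6 synthetic FOE channels x 1000 samples
win = 200                              # non-overlapping window length

density = []
for start in range(0, signals.shape[1] - win + 1, win):
    seg = signals[:, start:start + win]
    corr = np.corrcoef(seg)                       # channel-by-channel network
    off_diag = corr[~np.eye(6, dtype=bool)]       # drop self-connections
    density.append(np.abs(off_diag).mean())       # one network measure per window

print(len(density))  # 5 windows -> 5 values of the tracked measure
```

Plotting such a per-window measure against seizure onset times is what lets the temporal evolution of the mesial network be visualized interictally and ictally.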

  14. Backbone of complex networks of corporations: The flow of control

    NASA Astrophysics Data System (ADS)

    Glattfelder, J. B.; Battiston, S.

    2009-09-01

    We present a methodology to extract the backbone of complex networks based on the weight and direction of links, as well as on nontopological properties of nodes. We show how the methodology can be applied in general to networks in which mass or energy is flowing along the links. In particular, the procedure enables us to address important questions in economics, namely, how control and wealth are structured and concentrated across national markets. We report on the first cross-country investigation of ownership networks, focusing on the stock markets of 48 countries around the world. On the one hand, our analysis confirms results expected on the basis of the literature on corporate control, namely, that in Anglo-Saxon countries control tends to be dispersed among numerous shareholders. On the other hand, it also reveals that in the same countries, control is found to be highly concentrated at the global level, namely, lying in the hands of very few important shareholders. Interestingly, the exact opposite is observed for European countries. These results have previously not been reported as they are not observable without the kind of network analysis developed here.

  15. Efficient and accurate Greedy Search Methods for mining functional modules in protein interaction networks.

    PubMed

    He, Jieyue; Li, Chaojun; Ye, Baoliu; Zhong, Wei

    2012-06-25

    Most computational algorithms mainly focus on detecting highly connected subgraphs in PPI networks as protein complexes but ignore their inherent organization; furthermore, many of these algorithms are computationally expensive. Recent analysis indicates that experimentally detected protein complexes generally contain core/attachment structures. In this paper, a Greedy Search Method based on Core-Attachment structure (GSM-CA) is proposed. The GSM-CA method detects densely connected regions in large protein-protein interaction networks based on edge weight and two criteria for determining core nodes and attachment nodes. GSM-CA improves prediction accuracy compared to similar module detection approaches, but it is computationally expensive. Many module detection approaches are based on traditional hierarchical methods, which are also computationally inefficient because the hierarchical tree structure they produce cannot provide adequate information to identify whether a network has a module structure or not. To speed up the computation, the Greedy Search Method based on Fast Clustering (GSM-FC) is proposed in this work. The edge-weight-based GSM-FC method uses a greedy procedure to traverse all edges just once and separate the network into a suitable set of modules. The proposed methods are applied to the protein interaction network of S. cerevisiae. Experimental results indicate that many significant functional modules are detected, most of which match known complexes, and that GSM-FC is faster and more accurate than competing algorithms. Based on the new edge weight definition, the proposed algorithm takes advantage of the greedy search procedure to separate the network into a suitable set of modules. Experimental analysis shows that the identified modules are statistically significant, and the algorithm reduces computational time significantly while maintaining high prediction accuracy.
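
    The single-pass greedy traversal described for GSM-FC can be illustrated with a toy sketch (not the published implementation): visit edges once in order of decreasing weight, and merge nodes into modules via union-find whenever an edge clears a hypothetical weight threshold.

```python
def greedy_modules(nodes, edges, threshold=0.5):
    """Single-pass greedy clustering of a weighted network into modules."""
    parent = {n: n for n in nodes}          # union-find forest

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]   # path compression
            n = parent[n]
        return n

    # Traverse every edge exactly once, heaviest first.
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if w >= threshold:
            parent[find(u)] = find(v)       # merge the two modules

    modules = {}
    for n in nodes:
        modules.setdefault(find(n), set()).add(n)
    return sorted(map(sorted, modules.values()))

nodes = ["a", "b", "c", "d"]
edges = [("a", "b", 0.9), ("b", "c", 0.8), ("c", "d", 0.2)]
print(greedy_modules(nodes, edges))  # [['a', 'b', 'c'], ['d']]
```

Because each edge is examined once and union-find operations are near-constant time, this style of procedure runs in roughly O(E log E) (dominated by the sort), which is the kind of speedup over hierarchical tree construction the abstract alludes to.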

  16. A soil sampling intercomparison exercise for the ALMERA network.

    PubMed

    Belli, Maria; de Zorzi, Paolo; Sansone, Umberto; Shakhashiro, Abduhlghani; Gondin da Fonseca, Adelaide; Trinkl, Alexander; Benesch, Thomas

    2009-11-01

    Soil sampling and analysis for radionuclides after an accidental or routine release is a key factor in dose calculations for members of the public and in the establishment of possible countermeasures. The IAEA organized a Soil Sampling Intercomparison Exercise (IAEA/SIE/01) for selected laboratories of the ALMERA (Analytical Laboratories for the Measurement of Environmental Radioactivity) network, with the objective of comparing the soil sampling procedures used by different laboratories. The ALMERA network is a world-wide network of analytical laboratories located in IAEA member states capable of providing reliable and timely analysis of environmental samples in the event of an accidental or intentional release of radioactivity. Ten ALMERA laboratories were selected to participate in the sampling exercise, which took place in November 2005 in an agricultural area qualified as a "reference site": a site intended for assessing the uncertainties associated with soil sampling in agricultural, semi-natural, urban and contaminated environments and suitable for performing sampling intercomparisons. In this paper, the laboratories' sampling performance is evaluated.

  17. A Network Selection Algorithm Considering Power Consumption in Hybrid Wireless Networks

    NASA Astrophysics Data System (ADS)

    Joe, Inwhee; Kim, Won-Tae; Hong, Seokjoon

    In this paper, we propose a novel network selection algorithm that considers power consumption in hybrid wireless networks for vertical handover. CDMA, WiBro, and WLAN networks are the candidate networks for this selection algorithm, which is composed of a power consumption prediction algorithm and a final network selection algorithm. The power consumption prediction algorithm estimates the expected lifetime of the mobile station based on the current battery level, traffic class, and power consumption of each network interface card of the mobile station. If the expected lifetime of the mobile station in a certain network is not long enough compared to the handover delay, that network is removed from the candidate network list, thereby preventing unnecessary handovers in the preprocessing procedure. The final network selection algorithm consists of AHP (Analytic Hierarchy Process) and GRA (Grey Relational Analysis). The global factors of the network selection structure are QoS, cost, and lifetime. If the user preference is lifetime, our selection algorithm selects the network that offers the longest service duration due to low power consumption. We also conduct simulations using the OPNET simulation tool; the results show that the proposed algorithm provides longer lifetime in the hybrid wireless network environment.
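
    The two-stage selection idea can be sketched as below. The lifetime filter follows the abstract's description; the simple weighted score stands in for the AHP/GRA ranking stage, and all current-draw numbers and weights are invented for illustration.

```python
def select_network(battery_mah, candidates, handover_delay_s, weights):
    """candidates: {name: {"draw_ma": .., "qos": .., "cost": ..}}"""
    # Stage 1 (preprocessing): predict lifetime per interface and drop
    # any network whose lifetime is too short relative to handover delay.
    viable = {}
    for name, c in candidates.items():
        lifetime_s = battery_mah / c["draw_ma"] * 3600.0
        if lifetime_s > handover_delay_s:
            viable[name] = c | {"lifetime": lifetime_s}

    # Stage 2: weighted score standing in for the AHP/GRA ranking.
    def score(c):
        return (weights["qos"] * c["qos"]
                - weights["cost"] * c["cost"]
                + weights["lifetime"] * c["lifetime"] / 3600.0)

    return max(viable, key=lambda n: score(viable[n]))

candidates = {
    "WLAN":  {"draw_ma": 300, "qos": 0.9, "cost": 0.2},
    "CDMA":  {"draw_ma": 150, "qos": 0.6, "cost": 0.5},
    "WiBro": {"draw_ma": 250, "qos": 0.8, "cost": 0.4},
}
weights = {"qos": 1.0, "cost": 1.0, "lifetime": 0.1}
print(select_network(1000, candidates, handover_delay_s=5, weights=weights))
```

With a lifetime-heavy weight vector the low-draw CDMA interface would win instead, which is the user-preference behavior the abstract describes.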

  18. Proactive Alleviation Procedure to Handle Black Hole Attack and Its Version

    PubMed Central

    Babu, M. Rajesh; Dian, S. Moses; Chelladurai, Siva; Palaniappan, Mathiyalagan

    2015-01-01

    The world is moving towards a new realm of computing such as the Internet of Things, which envisions connecting almost all objects in the world to the Internet by recognizing them as smart objects. In doing so, the existing networks, including wired, wireless, and ad hoc networks, should be utilized. Among these, the ad hoc network presents particular security challenges. For instance, the MANET (mobile ad hoc network) is susceptible to various attacks, among which black hole attacks and their variants do serious damage to the entire MANET infrastructure. The severity of this attack increases when the compromised MANET nodes work in cooperation with each other to mount a cooperative black hole attack. This paper therefore proposes an alleviation procedure consisting of a timely mandate procedure, a hole detection algorithm, and a sensitive guard procedure to detect maliciously behaving nodes. It has been observed that the proposed procedure is cost-effective and ensures QoS guarantees by assuring resource availability, thus making the MANET appropriate for the Internet of Things. PMID:26495430

  19. Proactive Alleviation Procedure to Handle Black Hole Attack and Its Version.

    PubMed

    Babu, M Rajesh; Dian, S Moses; Chelladurai, Siva; Palaniappan, Mathiyalagan

    2015-01-01

    The world is moving towards a new realm of computing such as the Internet of Things, which envisions connecting almost all objects in the world to the Internet by recognizing them as smart objects. In doing so, the existing networks, including wired, wireless, and ad hoc networks, should be utilized. Among these, the ad hoc network presents particular security challenges. For instance, the MANET (mobile ad hoc network) is susceptible to various attacks, among which black hole attacks and their variants do serious damage to the entire MANET infrastructure. The severity of this attack increases when the compromised MANET nodes work in cooperation with each other to mount a cooperative black hole attack. This paper therefore proposes an alleviation procedure consisting of a timely mandate procedure, a hole detection algorithm, and a sensitive guard procedure to detect maliciously behaving nodes. It has been observed that the proposed procedure is cost-effective and ensures QoS guarantees by assuring resource availability, thus making the MANET appropriate for the Internet of Things.

  20. Connectionist Learning Procedures.

    ERIC Educational Resources Information Center

    Hinton, Geoffrey E.

    A major goal of research on networks of neuron-like processing units is to discover efficient learning procedures that allow these networks to construct complex internal representations of their environment. The learning procedures must be capable of modifying the connection strengths in such a way that internal units which are not part of the…

  1. Design of Neural Networks for Fast Convergence and Accuracy

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Sparks, Dean W., Jr.

    1998-01-01

    A novel procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed to provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between spacecraft component design changes and measures of spacecraft performance. A training algorithm based on statistical sampling theory is presented which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals: at each step in the sequence, a new network is trained to minimize the error of the previous network. The design algorithm attempts to avoid the local-minima phenomenon that hampers traditional network training. A numerical example on a spacecraft application demonstrates the feasibility of the proposed approach.
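
    The sequential idea of training each new approximator on the residual error of the previous one can be sketched as below, with polynomial least-squares fits standing in for the two-layer feedforward networks (the target function and stage degrees are illustrative, not the paper's spacecraft example).

```python
import numpy as np

x = np.linspace(-1, 1, 50)
y = np.sin(2 * x)                  # stand-in design-to-performance map

residual = y.copy()
stages = []
for degree in (1, 3, 5):           # each stage is a richer approximator
    coeffs = np.polyfit(x, residual, degree)   # train on current residual
    stages.append(coeffs)
    residual = residual - np.polyval(coeffs, x)

# The final approximation is the sum of all trained stages.
approx = sum(np.polyval(c, x) for c in stages)
print(np.abs(y - approx).max() < 0.01)  # True: staged fits converge
```

Each stage only has to model what its predecessors missed, which is the convergence mechanism the abstract attributes to the sequential design algorithm.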

  2. A method for independent component graph analysis of resting-state fMRI.

    PubMed

    Ribeiro de Paula, Demetrius; Ziegler, Erik; Abeyasinghe, Pubuditha M; Das, Tushar K; Cavaliere, Carlo; Aiello, Marco; Heine, Lizette; di Perri, Carol; Demertzi, Athena; Noirhomme, Quentin; Charland-Verville, Vanessa; Vanhaudenhuyse, Audrey; Stender, Johan; Gomez, Francisco; Tshibanda, Jean-Flory L; Laureys, Steven; Owen, Adrian M; Soddu, Andrea

    2017-03-01

    Independent component analysis (ICA) has been extensively used to reduce task-free BOLD fMRI recordings into spatial maps and their associated time-courses. The spatially identified independent components can be considered intrinsic connectivity networks (ICNs) of non-contiguous regions. To date, the spatial patterns of these networks have been analyzed with techniques developed for volumetric data. Here, we detail a graph-building technique that allows ICNs to be analyzed with graph theory. First, ICA was performed at the single-subject level in 15 healthy volunteers using a 3T MRI scanner. Nine networks were identified by a multiple-template matching procedure and a subsequent component classification based on the networks' "neuronal" properties. Second, for each identified network, the nodes were defined as 1,015 anatomically parcellated regions. Third, between-node functional connectivity was established by building edge weights for each network. Group-level graph analysis was finally performed for each network and compared to the classical network. Graph comparison between the classically constructed network and the nine networks showed significant differences in the auditory and visual medial networks with regard to average degree and number of edges, while the visual lateral network showed a significant difference in small-worldness. This novel approach permits us to take advantage of the well-recognized power of ICA in BOLD signal decomposition and, at the same time, to use well-established graph measures to evaluate connectivity differences. Moreover, by providing a graph for each separate network, it offers the possibility of extracting graph measures in a network-specific way. This increased specificity could be relevant for studying pathological brain activity or altered states of consciousness induced by anesthesia or sleep, where specific networks are known to be altered to different degrees.
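
    The graph measures compared above (number of edges, average degree) can be sketched on a synthetic connectivity matrix; the matrix and the binarization threshold here are illustrative, not the paper's parcellated fMRI data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8                               # toy node count (1,015 regions in the paper)
conn = rng.uniform(0, 1, (n, n))
conn = (conn + conn.T) / 2          # symmetric functional connectivity
np.fill_diagonal(conn, 0)           # no self-connections

adj = conn > 0.5                    # binarize with an edge-weight threshold
n_edges = int(adj.sum()) // 2       # each undirected edge counted twice
avg_degree = adj.sum(axis=1).mean()

# For any undirected graph, average degree = 2E / n.
print(n_edges, avg_degree == 2 * n_edges / n)
```

Computing these per-ICN (rather than once for a single whole-brain graph) is exactly the network-specific extraction the approach enables.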

  3. mizuRoute version 1: A river network routing tool for a continental domain water resources applications

    USGS Publications Warehouse

    Mizukami, Naoki; Clark, Martyn P.; Sampson, Kevin; Nijssen, Bart; Mao, Yixin; McMillan, Hilary; Viger, Roland; Markstrom, Steven; Hay, Lauren E.; Woods, Ross; Arnold, Jeffrey R.; Brekke, Levi D.

    2016-01-01

    This paper describes the first version of a stand-alone runoff routing tool, mizuRoute. The mizuRoute tool post-processes runoff outputs from any distributed hydrologic model or land surface model to produce spatially distributed streamflow at various spatial scales from headwater basins to continental-wide river systems. The tool can utilize both traditional grid-based river network and vector-based river network data. Both types of river network include river segment lines and the associated drainage basin polygons, but the vector-based river network can represent finer-scale river lines than the grid-based network. Streamflow estimates at any desired location in the river network can be easily extracted from the output of mizuRoute. The routing process is simulated as two separate steps. First, hillslope routing is performed with a gamma-distribution-based unit-hydrograph to transport runoff from a hillslope to a catchment outlet. The second step is river channel routing, which is performed with one of two routing scheme options: (1) a kinematic wave tracking (KWT) routing procedure; and (2) an impulse response function – unit-hydrograph (IRF-UH) routing procedure. The mizuRoute tool also includes scripts (python, NetCDF operators) to pre-process spatial river network data. This paper demonstrates mizuRoute's capabilities to produce spatially distributed streamflow simulations based on river networks from the United States Geological Survey (USGS) Geospatial Fabric (GF) data set in which over 54 000 river segments and their contributing areas are mapped across the contiguous United States (CONUS). A brief analysis of model parameter sensitivity is also provided. The mizuRoute tool can assist model-based water resources assessments including studies of the impacts of climate change on streamflow.
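
    The gamma-distribution-based unit-hydrograph step of the hillslope routing can be sketched as below: runoff is convolved with a normalized gamma-shaped transfer function. The shape and scale values are illustrative, not mizuRoute defaults.

```python
import math

def gamma_uh(shape, scale, n_steps):
    """Discrete gamma-shaped unit hydrograph, normalized to unit volume."""
    uh = [t ** (shape - 1) * math.exp(-t / scale) for t in range(1, n_steps + 1)]
    total = sum(uh)
    return [u / total for u in uh]

def route(runoff, uh):
    """Convolve a runoff series with the unit hydrograph."""
    q = [0.0] * (len(runoff) + len(uh) - 1)
    for i, r in enumerate(runoff):
        for j, u in enumerate(uh):
            q[i + j] += r * u
    return q

uh = gamma_uh(shape=2.5, scale=1.0, n_steps=10)
flow = route([10.0, 0.0, 0.0, 0.0], uh)       # a single runoff pulse
print(abs(sum(flow) - 10.0) < 1e-9)           # True: volume is conserved
```

Because the unit hydrograph sums to one, the convolution delays and spreads the pulse from hillslope to catchment outlet without creating or losing water, after which a channel scheme (KWT or IRF-UH) would carry it downstream.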

  4. Fuzzy Adaptive Control for Intelligent Autonomous Space Exploration Problems

    NASA Technical Reports Server (NTRS)

    Esogbue, Augustine O.

    1998-01-01

    The principal objective of the research reported here is the redesign, analysis, and optimization of our newly developed neural-network fuzzy adaptive controller model for complex processes, capable of learning fuzzy control rules from process data and improving its control through on-line adaptation. The learned improvement follows a performance objective function that provides evaluative feedback; this performance objective is broadly defined to meet long-range goals over time. Although fuzzy control had proven effective for complex, nonlinear, imprecisely defined processes for which standard models and controls are inefficient, impractical, or cannot be derived, the state of the art prior to our work was that procedures for deriving fuzzy control were mostly ad hoc heuristics. The learning ability of neural networks was exploited to systematically derive fuzzy control, permit on-line adaptation, and in the process optimize control; the operation of neural networks integrates very naturally with fuzzy logic. The neural networks, designed and tested using simulation software with simulated and then realistic industrial data, were reconfigured for application on several platforms as well as for the employment of improved algorithms. The statistical properties of the learning process were investigated and evaluated with standard statistical procedures (such as ANOVA and graphical analysis of residuals). The computational advantage of dynamic-programming-like methods of optimal control was used to permit on-line fuzzy adaptive control. Tests for the consistency, completeness, and interaction of the control rules were applied, and comparisons to other methods and controllers were made to identify the major advantages of the resulting controller model. Several specific modifications and extensions were made to the original controller; additional modifications and explorations have been proposed for further study. Some of these are in progress in our laboratory while others await additional support. All of these enhancements will improve the attractiveness of the controller as an effective tool for the on-line control of an array of complex process environments.

  5. Video networking of cardiac catheterization laboratories.

    PubMed

    Tobis, J; Aharonian, V; Mansukhani, P; Kasaoka, S; Jhandyala, R; Son, R; Browning, R; Youngblood, L; Thompson, M

    1999-02-01

    The purpose of this study was to assess the feasibility and accuracy of a video telecommunication network for transmitting coronary images, providing on-line interaction between personnel in a cardiac catheterization laboratory and a remote core laboratory. A telecommunication system was installed in the cardiac catheterization laboratory at Kaiser Hospital, Los Angeles, and the core laboratory at the University of California, Irvine, approximately 40 miles away. Cineangiograms, live fluoroscopy, intravascular ultrasound studies, and images of the catheterization laboratory were transmitted in real time over a dedicated T1 line at 768 kilobits/second and 15 frames/second. These cases were performed during a clinical study of angiographic versus intravascular ultrasound (IVUS) guidance of stent deployment. During the cases, the core laboratory performed quantitative analysis of the angiograms and ultrasound images; selected images were then annotated and transmitted back to the catheterization laboratory to facilitate discussion during the procedure. A successful communication hookup was obtained in 39 (98%) of 40 cases. Measurements of angiographic parameters were very close between the original cinefilm and the transmitted images, and quantitative analysis of the ultrasound images showed no significant difference in any diameter or cross-sectional area measurement between the original ultrasound tape and the transmitted images. The telecommunication link during the interventional procedures had a significant impact in 23 (58%) of 40 cases, affecting the area to be treated, the size of the inflation balloon, recognition of stent underdeployment, or recognition of disease in other areas that was not noted on the original studies. Current video telecommunication systems provide high-quality images on-line, with accurate representation of cineangiograms and intravascular ultrasound images. This system had a significant impact on 58% of the cases in this small clinical trial. Telecommunication networks between hospitals and a central core laboratory may facilitate physician training and improve technical skills and judgement during interventional procedures. This project has implications for how multicenter clinical trials could be operated through telecommunication networks to ensure conformity with the protocol.

  6. SU-E-T-419: Workflow and FMEA in a New Proton Therapy (PT) Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, C; Wessels, B; Hamilton, H

    2014-06-01

    Purpose: Workflow is an important component in the operational planning of a new proton facility. By integrating the concept of failure mode and effects analysis (FMEA) with traditional QA requirements, a workflow for a proton therapy treatment course is set up. This workflow serves as the blueprint for planning computer hardware/software requirements and network flow; a slight modification of the workflow generates a process map (PM) for FMEA and for planning the QA program in PT. Methods: A flowchart is first developed outlining the sequence of processes involved in a PT treatment course. Each process consists of a number of sub-processes to encompass a broad scope of treatment and QA procedures. For each sub-process, the personnel involved, the equipment needed, and the computer hardware/software as well as network requirements are defined by a team of clinical staff, administrators, and IT personnel. Results: Eleven intermediate processes with a total of 70 sub-processes involved in a PT treatment course are identified. The number of sub-processes per process varies, ranging from 2 to 12. The sub-processes within each process are used for operational planning. For example, the CT-Sim process has 12 sub-processes: three involve data entry/retrieval from a record-and-verify system, two are controlled by the CT computer, two require the department/hospital network, and the other five are setup procedures. IT then decides the number of computers needed and the software and network requirements. By removing the traditional QA procedures from the workflow, a PM is generated for FMEA analysis to design a QA program for PT. Conclusion: Significant effort is involved in developing the workflow for a PT treatment course. Our hybrid model combining FMEA and a traditional QA program serves the dual purpose of efficient operational planning and design of a QA program in PT.
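
    Once sub-processes are enumerated in a process map like this, FMEA conventionally prioritizes them by a risk priority number (RPN = severity x occurrence x detection). The sub-process names and scores below are invented for illustration; they are not from this abstract.

```python
# Toy FMEA scoring over hypothetical PT sub-processes.
failure_modes = [
    {"step": "CT-Sim data entry", "severity": 7, "occurrence": 3, "detection": 4},
    {"step": "Network transfer",  "severity": 5, "occurrence": 2, "detection": 2},
    {"step": "Patient setup",     "severity": 9, "occurrence": 2, "detection": 3},
]
for fm in failure_modes:
    # Conventional risk priority number.
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

ranked = sorted(failure_modes, key=lambda fm: -fm["rpn"])
print([fm["step"] for fm in ranked])  # highest-risk sub-process first
```

The highest-RPN sub-processes are where the QA program derived from the process map would concentrate its checks.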

  7. 47 CFR 25.261 - Procedures for avoidance of in-line interference events for Non Geostationary Satellite Orbit...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... interference events for Non Geostationary Satellite Orbit (NGSO) Satellite Network Operations in the Fixed... avoidance of in-line interference events for Non Geostationary Satellite Orbit (NGSO) Satellite Network... procedures in this section apply to non-Federal-Government NGSO FSS satellite networks operating in the...

  8. 47 CFR 25.261 - Procedures for avoidance of in-line interference events for Non Geostationary Satellite Orbit...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... interference events for Non Geostationary Satellite Orbit (NGSO) Satellite Network Operations in the Fixed... avoidance of in-line interference events for Non Geostationary Satellite Orbit (NGSO) Satellite Network... procedures in this section apply to non-Federal-Government NGSO FSS satellite networks operating in the...

  9. Experiments on Learning by Back Propagation.

    ERIC Educational Resources Information Center

    Plaut, David C.; And Others

    This paper describes further research on a learning procedure for layered networks of deterministic, neuron-like units, described by Rumelhart et al. The units, the way they are connected, the learning procedure, and the extension to iterative networks are presented. In one experiment, a network learns a set of filters, enabling it to discriminate…

  10. Postoperative infection in spine surgery: does the month matter?

    PubMed

    Durkin, Michael J; Dicks, Kristen V; Baker, Arthur W; Moehring, Rebekah W; Chen, Luke F; Sexton, Daniel J; Lewis, Sarah S; Anderson, Deverick J

    2015-07-01

    The relationship between time of year and surgical site infection (SSI) following neurosurgical procedures is poorly understood. Authors of previous reports have demonstrated that rates of SSI following neurosurgical procedures performed during the summer months were higher compared with rates during other seasons. It is unclear, however, if this difference was related to climatological changes or inexperienced medical trainees (the July effect). The aim of this study was to evaluate for seasonal variation of SSI following spine surgery in a network of nonteaching community hospitals. The authors analyzed 6 years of prospectively collected surveillance data (January 1, 2007, to December 31, 2012) from all laminectomies and spinal fusions from 20 hospitals in the Duke Infection Control Outreach Network of community hospitals. Surgical site infections were defined using National Healthcare Safety Network criteria and identified using standardized methods across study hospitals. Regression models were then constructed using Poisson distribution to evaluate for seasonal trends by month. Each analysis was first performed for all SSIs and then for SSIs caused by specific organisms or classes of organisms. Categorical analysis was performed using two separate definitions of summer: June through September (definition 1), and July through September (definition 2). The prevalence rate of SSIs during the summer was compared with the prevalence rate during the remainder of the year by calculating prevalence rate ratios and 95% confidence intervals. The authors identified 642 SSIs following 57,559 neurosurgical procedures (overall prevalence rate = 1.11/100 procedures); 215 occurred following 24,466 laminectomies (prevalence rate = 0.88/100 procedures), and 427 following 33,093 spinal fusions (prevalence rate = 1.29/100 procedures). 
Common causes of SSI were Staphylococcus aureus (n = 380; 59%), coagulase-negative staphylococci (n = 90; 14%), and Escherichia coli (n = 41; 6.4%). Poisson regression models demonstrated increases in the rates of SSI during each of the summer months for all SSIs and SSIs due to gram-positive cocci, S. aureus, and methicillin-sensitive S. aureus. Categorical analysis confirmed that the rate of SSI during the 4-month summer period was higher than the rate during the remainder of the year, regardless of which definition for summer was used (definition 1, p = 0.008; definition 2, p = 0.003). Similarly, the rates of SSI due to gram-positive cocci and S. aureus were higher during the summer months than the remainder of the year regardless of which definition of summer was used. However, the rate of SSI due to gram-negative bacilli was not. The rate of SSI following fusion or spinal laminectomy/laminoplasty was higher during the summer in this network of community hospitals. The increase appears to be related to increases in SSIs caused by gram-positive cocci and, more specifically, S. aureus. Given the nonteaching nature of these hospitals, the findings demonstrate that increases in the rate of SSI during the summer are more likely related to ecological and/or environmental factors than the July effect.
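    The categorical comparison described above reduces to a prevalence rate ratio with a 95% confidence interval from the standard log-rate-ratio normal approximation. A minimal sketch with invented summer/non-summer counts (not the study's actual data):

    ```python
    import math

    def rate_ratio_ci(events1, n1, events2, n2, z=1.96):
        """Prevalence rate ratio with a normal-approximation CI on the log scale."""
        rr = (events1 / n1) / (events2 / n2)
        se = math.sqrt(1 / events1 + 1 / events2)   # SE of ln(RR)
        lo = rr * math.exp(-z * se)
        hi = rr * math.exp(z * se)
        return rr, lo, hi

    # Hypothetical counts: SSIs and procedures in summer vs the rest of the year
    rr, lo, hi = rate_ratio_ci(events1=260, n1=19000, events2=382, n2=38559)
    ```

    A lower confidence bound above 1.0 would indicate a statistically significant excess of infections in the summer period.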

  11. Poisson statistics of PageRank probabilities of Twitter and Wikipedia networks

    NASA Astrophysics Data System (ADS)

    Frahm, Klaus M.; Shepelyansky, Dima L.

    2014-04-01

    We use the methods of quantum chaos and Random Matrix Theory for the analysis of statistical fluctuations of PageRank probabilities in directed networks. In this approach, the effective energy levels are given by the logarithm of the PageRank probability at a given node. After the standard energy-level unfolding procedure, we establish that the nearest-spacing distribution of PageRank probabilities is described by the Poisson law typical of integrable quantum systems. Our studies are done for the Twitter network and three networks of Wikipedia editions in English, French, and German. We argue that, due to the absence of level repulsion, the PageRank order of nearby nodes can be easily interchanged. The obtained Poisson law implies that nearby PageRank probabilities fluctuate as random independent variables.
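    The construction can be sketched end-to-end on a toy directed graph: power-iteration PageRank, then "energy levels" E_i = -ln p_i, then nearest-neighbour spacings of the sorted levels. The graph and damping factor are illustrative, and the unfolding step that rescales spacings to unit mean density is omitted:

    ```python
    import math

    # Toy directed graph: node -> list of out-links (every node has out-links)
    links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2], 4: [0]}
    n = 5
    damping = 0.85

    # Power iteration for the PageRank vector
    p = [1.0 / n] * n
    for _ in range(200):
        new = [(1 - damping) / n] * n
        for i, outs in links.items():
            share = damping * p[i] / len(outs)
            for j in outs:
                new[j] += share
        p = new

    # Effective "energy levels" and their nearest-neighbour spacings
    energies = sorted(-math.log(pi) for pi in p)
    spacings = [b - a for a, b in zip(energies, energies[1:])]
    ```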

  12. A statistical method for measuring activation of gene regulatory networks.

    PubMed

    Esteves, Gustavo H; Reis, Luiz F L

    2018-06-13

    Gene expression data analysis is of great importance for modern molecular biology, given our ability to measure the expression profiles of thousands of genes, enabling studies rooted in systems biology. In this work, we propose a simple statistical model for measuring the activation of gene regulatory networks, instead of the traditional gene co-expression networks. We present the mathematical construction of a statistical procedure for testing hypotheses regarding gene regulatory network activation. The real probability distribution of the test statistic is evaluated by a permutation-based study. To illustrate the functionality of the proposed methodology, we also present a simple example based on a small hypothetical network and the activation measurement of two KEGG networks, both based on gene expression data collected from gastric and esophageal samples. The two KEGG networks were also analyzed for a public database, available through NCBI-GEO, presented as Supplementary Material. This method was implemented in an R package that is available at the BioConductor project website under the name maigesPack.
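    The permutation logic behind such a test can be sketched generically; here the statistic is a simple difference of group means over invented expression values, not the paper's network-activation statistic:

    ```python
    import random

    random.seed(1)

    def mean(xs):
        return sum(xs) / len(xs)

    # Hypothetical expression values for a gene in two sample groups
    group_a = [2.1, 2.5, 2.4, 2.8, 2.2]
    group_b = [1.6, 1.9, 1.7, 2.0, 1.8]
    observed = mean(group_a) - mean(group_b)

    # Build the null distribution by shuffling group labels
    pooled = group_a + group_b
    count = 0
    n_perm = 10000
    for _ in range(n_perm):
        random.shuffle(pooled)
        stat = mean(pooled[:5]) - mean(pooled[5:])
        if abs(stat) >= abs(observed):
            count += 1

    # Add-one correction keeps the p-value strictly positive
    p_value = (count + 1) / (n_perm + 1)
    ```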

  13. Consciousness, cognition and brain networks: New perspectives.

    PubMed

    Aldana, E M; Valverde, J L; Fábregas, N

    2016-10-01

    A detailed analysis of the literature on the mechanisms of consciousness and cognition, based on neural network theory, is presented. The immune and inflammatory response to the anesthetic-surgical procedure induces modulation of neuronal plasticity by influencing higher cognitive functions. Anesthetic drugs can cause unconsciousness by producing a functional disruption of the cortical and thalamocortical integration complex. External and internal perceptions are processed through an intricate network of neural connections involving the higher nervous activity centers, especially the cerebral cortex. This requires an integrated model, formed by neural networks and their interactions with highly specialized regions, through large-scale networks distributed throughout the brain that collect the information flow of these perceptions. Functional and effective connectivity between large-scale networks is essential for consciousness, unconsciousness and cognition. This is what is called the "human connectome", or map of neural networks. Copyright © 2014 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Published by Elsevier España, S.L.U. All rights reserved.

  14. Verification of mesoscale objective analyses of VAS and rawinsonde data using the March 1982 AVE/VAS special network data. [Atmospheric Variability Experiment/Visible-Infrared Spin-Scan Radiometer Atmospheric Sounder]

    NASA Technical Reports Server (NTRS)

    Doyle, James D.; Warner, Thomas T.

    1988-01-01

    Various combinations of VAS (Visible and Infrared Spin Scan Radiometer Atmospheric Sounder) data, conventional rawinsonde data, and gridded data from the National Weather Service's (NWS) global analysis were used in successive-correction and variational objective-analysis procedures. Analyses were produced for 0000 GMT 7 March 1982, when the VAS sounding distribution was not greatly limited by cloud cover. The successive-correction (SC) procedure was used with VAS data alone, rawinsonde data alone, and both VAS and rawinsonde data. Variational techniques were applied in three ways. Each of these techniques is discussed.
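    A successive-correction pass can be sketched in one dimension: each grid value is nudged by a distance-weighted mean of observation-minus-background increments, using the classic Cressman weight w = (R² - r²)/(R² + r²) inside an influence radius R. Grid, observations, and radius below are invented:

    ```python
    # One pass of a Cressman-type successive-correction analysis on a 1-D grid.
    grid_x = [0.0, 1.0, 2.0, 3.0, 4.0]
    background = [10.0] * 5                # first-guess field (e.g. a global analysis)
    obs = [(1.5, 12.0), (3.2, 9.0)]        # (location, value) pairs
    R = 1.5                                # influence radius

    def interp(x):
        """Linear interpolation of the background to an observation location."""
        i = min(int(x), len(grid_x) - 2)
        frac = x - grid_x[i]
        return background[i] * (1 - frac) + background[i + 1] * frac

    analysis = []
    for gx, bg in zip(grid_x, background):
        num = den = 0.0
        for ox, ov in obs:
            r2, R2 = (gx - ox) ** 2, R ** 2
            if r2 < R2:                    # observation influences this grid point
                w = (R2 - r2) / (R2 + r2)
                num += w * (ov - interp(ox))
                den += w
        analysis.append(bg + num / den if den > 0 else bg)
    ```

    In practice the pass is repeated with shrinking radii so that large scales are corrected first and finer detail is added on later sweeps.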

  15. Social network properties and self-rated health in later life: comparisons from the Korean social life, health, and aging project and the national social life, health and aging project.

    PubMed

    Youm, Yoosik; Laumann, Edward O; Ferraro, Kenneth F; Waite, Linda J; Kim, Hyeon Chang; Park, Yeong-Ran; Chu, Sang Hui; Joo, Won-Tak; Lee, Jin A

    2014-09-14

    This paper has two objectives. Firstly, it provides an overview of the social network module, data collection procedures, and measurement of ego-centric and complete-network properties in the Korean Social Life, Health, and Aging Project (KSHAP). Secondly, it directly compares the KSHAP structure and results to the ego-centric network structure and results of the National Social Life, Health, and Aging Project (NSHAP), which conducted in-home interviews with 3,005 persons 57 to 85 years of age in the United States. The structure of the complete social network of 814 KSHAP respondents living in Township K was measured and examined at two levels of networks. Ego-centric network properties include network size, composition, volume of contact with network members, density, and bridging potential. Complete-network properties are degree centrality, closeness centrality, betweenness centrality, and brokerage role. We found that KSHAP respondents with a smaller number of social network members were more likely to be older and tended to have poorer self-rated health. Compared to the NSHAP, the KSHAP respondents maintained a smaller network size with a greater network density among their members and lower bridging potential. Further analysis of the complete network properties of KSHAP respondents revealed that more brokerage roles inside the same neighborhood (Ri) were significantly associated with better self-rated health. Socially isolated respondents identified by network components had the worst self-rated health. The findings demonstrate the importance of social network analysis for the study of older adults' health status in Korea. The study also highlights the importance of complete-network data and its ability to reveal mechanisms beyond ego-centric network data.
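    The ego-centric measures named above (network size, density) and degree centrality can be sketched on a toy undirected network; the edge list is invented:

    ```python
    from itertools import combinations

    # Toy undirected network as a set of edges
    edges = {("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")}
    nodes = sorted({v for e in edges for v in e})

    def neighbors(v):
        return {b if a == v else a for a, b in edges if v in (a, b)}

    def degree_centrality(v):
        """Degree normalized by the maximum possible degree."""
        return len(neighbors(v)) / (len(nodes) - 1)

    def ego_density(v):
        """Fraction of possible ties among v's alters that are present."""
        alters = neighbors(v)
        if len(alters) < 2:
            return 0.0
        possible = len(alters) * (len(alters) - 1) / 2
        present = sum(1 for a, b in combinations(sorted(alters), 2)
                      if (a, b) in edges or (b, a) in edges)
        return present / possible
    ```

    Complete-network measures such as betweenness and brokerage require the full sociomatrix, which is what distinguishes the KSHAP design from a purely ego-centric survey.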

  16. Social network properties and self-rated health in later life: comparisons from the Korean social life, health, and aging project and the national social life, health and aging project

    PubMed Central

    2014-01-01

    Background This paper has two objectives. Firstly, it provides an overview of the social network module, data collection procedures, and measurement of ego-centric and complete-network properties in the Korean Social Life, Health, and Aging Project (KSHAP). Secondly, it directly compares the KSHAP structure and results to the ego-centric network structure and results of the National Social Life, Health, and Aging Project (NSHAP), which conducted in-home interviews with 3,005 persons 57 to 85 years of age in the United States. Methods The structure of the complete social network of 814 KSHAP respondents living in Township K was measured and examined at two levels of networks. Ego-centric network properties include network size, composition, volume of contact with network members, density, and bridging potential. Complete-network properties are degree centrality, closeness centrality, betweenness centrality, and brokerage role. Results We found that KSHAP respondents with a smaller number of social network members were more likely to be older and tended to have poorer self-rated health. Compared to the NSHAP, the KSHAP respondents maintained a smaller network size with a greater network density among their members and lower bridging potential. Further analysis of the complete network properties of KSHAP respondents revealed that more brokerage roles inside the same neighborhood (Ri) were significantly associated with better self-rated health. Socially isolated respondents identified by network components had the worst self-rated health. Conclusions The findings demonstrate the importance of social network analysis for the study of older adults’ health status in Korea. The study also highlights the importance of complete-network data and its ability to reveal mechanisms beyond ego-centric network data. PMID:25217892

  17. Trends in surgical treatment of Chiari malformation Type I in the United States.

    PubMed

    Wilkinson, D Andrew; Johnson, Kyle; Garton, Hugh J L; Muraszko, Karin M; Maher, Cormac O

    2017-02-01

    OBJECTIVE The goal of this analysis was to define temporal and geographic trends in the surgical treatment of Chiari malformation Type I (CM-I) in a large, privately insured health care network. METHODS The authors examined de-identified insurance claims data from a large, privately insured health care network of over 58 million beneficiaries throughout the United States for the period between 2001 and 2014 for all patients undergoing surgical treatment of CM-I. Using a combination of International Classification of Diseases (ICD) diagnosis codes and Current Procedural Terminology (CPT) codes, the authors identified CM-I and associated diagnoses and procedures over a 14-year period, highlighting temporal and geographic trends in the performance of CM-I decompression (CMD) surgery as well as commonly associated procedures. RESULTS There were 2434 surgical procedures performed for CMD among the beneficiaries during the 14-year interval; 34% were performed in patients younger than 20 years of age. The rate of CMD increased 51% from the first half to the second half of the study period among younger patients (p < 0.001) and increased 28% among adult patients between 20 and 65 years of age (p < 0.001). A large sex difference was noted among adult patients; 78% of adult patients undergoing CMD were female compared with only 53% of the children. Pediatric patients undergoing CMD were more likely to be white with a higher household net worth. Regional variability was identified among rates of CMD as well. The average annual rate of surgery ranged from 0.8 surgeries per 100,000 insured person-years in the Pacific census division to 2.0 surgeries per 100,000 insured person-years in the East South Central census division. CONCLUSIONS Analysis of a large nationwide health care network showed recently increasing rates of CMD in children and adults over the past 14 years.

  18. 77 FR 4698 - Energy Conservation Program: Test Procedure and Energy Conservation Standard for Set-Top Boxes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-31

    ... Conservation Program: Test Procedure and Energy Conservation Standard for Set-Top Boxes and Network Equipment... comments on the request for information pertaining to the development of test procedures and energy conservation standards for set-top boxes and network equipment. The comment period is extended to March 15...

  19. Reference hydrologic networks II. Using reference hydrologic networks to assess climate-driven changes in streamflow

    USGS Publications Warehouse

    Burn, Donald H.; Hannaford, Jamie; Hodgkins, Glenn A.; Whitfield, Paul H.; Thorne, Robin; Marsh, Terry

    2012-01-01

    Reference hydrologic networks (RHNs) can play an important role in monitoring for changes in the hydrological regime related to climate variation and change. Currently, the literature concerning hydrological response to climate variations is complex and confounded by the combinations of many methods of analysis, wide variations in hydrology, and the inclusion of data series that include changes in land use, storage regulation and water use in addition to those of climate. Three case studies that illustrate a variety of approaches to the analysis of data from RHNs are presented and used, together with a summary of studies from the literature, to develop approaches for the investigation of changes in the hydrological regime at a continental or global scale, particularly for international comparison. We present recommendations for an analysis framework and the next steps to advance such an initiative. There is a particular focus on the desirability of establishing standardized procedures and methodologies for both the creation of new national RHNs and the systematic analysis of data derived from a collection of RHNs.

  20. A Communication Audit of a State Mental Health Institution.

    ERIC Educational Resources Information Center

    Eadie, William F.; And Others

    An adaptation of "communication audit" procedures was used to evaluate the communication patterns at a mental health center (MHC). The evaluation included initial interviews with 28 MHC workers/administrators, a survey of 215 staff members for a communication network analysis, and followup interviews with another 28 persons. The data produced four…

  1. Movement of Fuel Ashore: Storage, Capacity, Throughput, and Distribution Analysis

    DTIC Science & Technology

    2015-12-01

    [Front-matter residue from the report's lists of figures and tables: Figure 1, "Principles of Operational Maneuver from the Sea"; Figure 2, "Compositing and..."; Table 2, "Force Mix Composition."] From the abstract: "...procedures, and force composition. Such alterations represent an acceptance of operational risk to buy down the foundational risk that the logistics network..."

  2. A systematic review of nurse-related social network analysis studies.

    PubMed

    Benton, D C; Pérez-Raya, F; Fernández-Fernández, M P; González-Jurado, M A

    2015-09-01

    Nurses frequently work as part of both uni- and multidisciplinary teams. Communication between team members is critical in the delivery of quality care. Social network analysis is increasingly being used to explore such communication. To explore the use of social network analysis involving nurses either as subjects of the study or as researchers. Standard systematic review procedures were applied to identify nurse-related studies that utilize social network analysis. A comparative thematic approach to synthesis was used. Both published and grey literature written in English, Spanish and Portuguese between January 1965 and December 2013 were identified via a structured search of CINAHL, SciELO and PubMed. In addition, Google and Yahoo search engines were used to identify additional grey literature using the same search strategy. Forty-three primary studies were identified, with literature from North America dominating the published work. So far, it would appear that no author or group of authors has developed a programme of research in the nursing field using the social network analysis approach, although several authors may be in the process of doing so. The dominance of literature from North America may be viewed as problematic, as the underlying structures and themes may be an artefact of cultural communication norms from this region. The use of social network analysis in relation to nursing and by nurse researchers has increased rapidly over the past two decades. The lack of longitudinal studies and the absence of replication across multiple sites should be seen as an opportunity for further research. This analytical approach is relatively new in the field of nursing but does show considerable promise in offering insights into the way information flows between individuals, teams, institutions and other structures. An understanding of these structures provides a means of improving communication. © 2014 International Council of Nurses.

  3. Network module detection: Affinity search technique with the multi-node topological overlap measure

    PubMed Central

    Li, Ai; Horvath, Steve

    2009-01-01

    Background Many clustering procedures only allow the user to input a pairwise dissimilarity or distance measure between objects. We propose a clustering method that can input a multi-point dissimilarity measure d(i1, i2, ..., iP) where the number of points P can be larger than 2. The work is motivated by gene network analysis where clusters correspond to modules of highly interconnected nodes. Here, we define modules as clusters of network nodes with high multi-node topological overlap. The topological overlap measure is a robust measure of interconnectedness which is based on shared network neighbors. In previous work, we have shown that the multi-node topological overlap measure yields biologically meaningful results when used as input of network neighborhood analysis. Findings We adapt network neighborhood analysis for the use of module detection. We propose the Module Affinity Search Technique (MAST), which is a generalized version of the Cluster Affinity Search Technique (CAST). MAST can accommodate a multi-node dissimilarity measure. Clusters grow around user-defined or automatically chosen seeds (e.g. hub nodes). We propose both local and global cluster growth stopping rules. We use several simulations and a gene co-expression network application to argue that the MAST approach leads to biologically meaningful results. We compare MAST with hierarchical clustering and partitioning around medoid clustering. Conclusion Our flexible module detection method is implemented in the MTOM software which can be downloaded from the following webpage: PMID:19619323

  4. Network module detection: Affinity search technique with the multi-node topological overlap measure.

    PubMed

    Li, Ai; Horvath, Steve

    2009-07-20

    Many clustering procedures only allow the user to input a pairwise dissimilarity or distance measure between objects. We propose a clustering method that can input a multi-point dissimilarity measure d(i1, i2, ..., iP) where the number of points P can be larger than 2. The work is motivated by gene network analysis where clusters correspond to modules of highly interconnected nodes. Here, we define modules as clusters of network nodes with high multi-node topological overlap. The topological overlap measure is a robust measure of interconnectedness which is based on shared network neighbors. In previous work, we have shown that the multi-node topological overlap measure yields biologically meaningful results when used as input of network neighborhood analysis. We adapt network neighborhood analysis for the use of module detection. We propose the Module Affinity Search Technique (MAST), which is a generalized version of the Cluster Affinity Search Technique (CAST). MAST can accommodate a multi-node dissimilarity measure. Clusters grow around user-defined or automatically chosen seeds (e.g. hub nodes). We propose both local and global cluster growth stopping rules. We use several simulations and a gene co-expression network application to argue that the MAST approach leads to biologically meaningful results. We compare MAST with hierarchical clustering and partitioning around medoid clustering. Our flexible module detection method is implemented in the MTOM software which can be downloaded from the following webpage: http://www.genetics.ucla.edu/labs/horvath/MTOM/
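    The pairwise (two-node) topological overlap measure that MAST generalizes can be sketched directly from an unweighted adjacency matrix; the small example matrix is invented, and the multi-node extension described in the paper is not shown:

    ```python
    # Pairwise topological overlap:
    # TOM_ij = (l_ij + a_ij) / (min(k_i, k_j) + 1 - a_ij),
    # where l_ij = sum_u a_iu * a_uj counts shared neighbours.

    A = [
        [0, 1, 1, 0],
        [1, 0, 1, 0],
        [1, 1, 0, 1],
        [0, 0, 1, 0],
    ]
    n = len(A)
    k = [sum(row) for row in A]   # node degrees

    def tom(i, j):
        if i == j:
            return 1.0
        l = sum(A[i][u] * A[u][j] for u in range(n))
        return (l + A[i][j]) / (min(k[i], k[j]) + 1 - A[i][j])

    TOM = [[tom(i, j) for j in range(n)] for i in range(n)]
    ```

    Nodes sharing many neighbours score close to 1 even without a direct link, which is why topological overlap is more robust than raw adjacency for module detection.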

  5. Training radial basis function networks for wind speed prediction using PSO enhanced differential search optimizer

    PubMed Central

    2018-01-01

    This paper presents an integrated hybrid optimization algorithm for training the radial basis function neural network (RBF NN). Training neural networks remains a challenging exercise in the machine learning domain. Traditional training algorithms generally become trapped in local optima and suffer premature convergence, which makes them ineffective when applied to datasets with diverse features. Training algorithms based on evolutionary computation are becoming popular due to their robustness in overcoming the drawbacks of traditional algorithms. Accordingly, this paper proposes a hybrid training procedure in which a differential search (DS) algorithm is functionally integrated with particle swarm optimization (PSO). To surmount local trapping of the search procedure, a new population initialization scheme is proposed using a logistic chaotic sequence, which enhances population diversity and aids the search capability. To demonstrate the effectiveness of the proposed RBF hybrid training algorithm, experimental analyses on 7 publicly available benchmark datasets are performed. Subsequently, experiments were conducted on a practical application case of wind speed prediction to demonstrate the superiority of the proposed RBF training algorithm in terms of prediction accuracy. PMID:29768463

  6. Training radial basis function networks for wind speed prediction using PSO enhanced differential search optimizer.

    PubMed

    Rani R, Hannah Jessie; Victoire T, Aruldoss Albert

    2018-01-01

    This paper presents an integrated hybrid optimization algorithm for training the radial basis function neural network (RBF NN). Training neural networks remains a challenging exercise in the machine learning domain. Traditional training algorithms generally become trapped in local optima and suffer premature convergence, which makes them ineffective when applied to datasets with diverse features. Training algorithms based on evolutionary computation are becoming popular due to their robustness in overcoming the drawbacks of traditional algorithms. Accordingly, this paper proposes a hybrid training procedure in which a differential search (DS) algorithm is functionally integrated with particle swarm optimization (PSO). To surmount local trapping of the search procedure, a new population initialization scheme is proposed using a logistic chaotic sequence, which enhances population diversity and aids the search capability. To demonstrate the effectiveness of the proposed RBF hybrid training algorithm, experimental analyses on 7 publicly available benchmark datasets are performed. Subsequently, experiments were conducted on a practical application case of wind speed prediction to demonstrate the superiority of the proposed RBF training algorithm in terms of prediction accuracy.
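    The logistic-chaotic population initialization can be sketched as follows; population size, dimensionality, and the seeding interval are arbitrary choices, and the surrounding DS/PSO machinery is omitted:

    ```python
    import random

    random.seed(42)

    def logistic_chaotic_population(pop_size, dim, r=4.0):
        """Initial population in [0, 1]^dim drawn from a logistic map x <- r*x*(1-x)."""
        pop = []
        x = random.uniform(0.1, 0.9)       # seed the chaotic sequence
        for _ in range(pop_size):
            vec = []
            for _ in range(dim):
                x = r * x * (1.0 - x)      # chaotic for r = 4 on [0, 1]
                vec.append(x)
            pop.append(vec)
        return pop

    population = logistic_chaotic_population(pop_size=20, dim=5)
    ```

    The chaotic sequence covers the unit interval more evenly than independent uniform draws tend to in small populations, which is the claimed diversity benefit.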

  7. Emerging late adolescent friendship networks and Big Five personality traits: a social network approach.

    PubMed

    Selfhout, Maarten; Burk, William; Branje, Susan; Denissen, Jaap; van Aken, Marcel; Meeus, Wim

    2010-04-01

    The current study focuses on the emergence of friendship networks among just-acquainted individuals, investigating the effects of Big Five personality traits on friendship selection processes. Sociometric nominations and self-ratings on personality traits were gathered from 205 late adolescents (mean age=19 years) at 5 time points during the first year of university. SIENA, a novel multilevel statistical procedure for social network analysis, was used to examine effects of Big Five traits on friendship selection. Results indicated that friendship networks between just-acquainted individuals became increasingly more cohesive within the first 3 months and then stabilized. Whereas individuals high on Extraversion tended to select more friends than those low on this trait, individuals high on Agreeableness tended to be selected more as friends. In addition, individuals tended to select friends with similar levels of Agreeableness, Extraversion, and Openness.

  8. Modular analysis of the probabilistic genetic interaction network.

    PubMed

    Hou, Lin; Wang, Lin; Qian, Minping; Li, Dong; Tang, Chao; Zhu, Yunping; Deng, Minghua; Li, Fangting

    2011-03-15

    Epistatic Miniarray Profiles (EMAP) has enabled the mapping of large-scale genetic interaction networks; however, the quantitative information gained from EMAP cannot be fully exploited since the data are usually interpreted as a discrete network based on an arbitrary hard threshold. To address such limitations, we adopted a mixture modeling procedure to construct a probabilistic genetic interaction network and then implemented a Bayesian approach to identify densely interacting modules in the probabilistic network. Mixture modeling has been demonstrated as an effective soft-threshold technique of EMAP measures. The Bayesian approach was applied to an EMAP dataset studying the early secretory pathway in Saccharomyces cerevisiae. Twenty-seven modules were identified, and 14 of those were enriched by gold standard functional gene sets. We also conducted a detailed comparison with state-of-the-art algorithms, hierarchical cluster and Markov clustering. The experimental results show that the Bayesian approach outperforms others in efficiently recovering biologically significant modules.
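    The soft-threshold idea can be sketched as a posterior responsibility under a two-component Gaussian mixture: instead of a hard cutoff, each EMAP score receives a probability of belonging to the "interaction" component. All mixture parameters below are invented, not fitted values from the paper:

    ```python
    import math

    def gauss(x, mu, sigma):
        """Gaussian density."""
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def posterior_interaction(score, pi1=0.1, mu1=-3.0, s1=1.5, mu0=0.0, s0=1.0):
        """P(interaction | score) under a two-component Gaussian mixture
        with illustrative weight pi1 and component parameters."""
        num = pi1 * gauss(score, mu1, s1)
        den = num + (1 - pi1) * gauss(score, mu0, s0)
        return num / den

    # A strongly negative score gets a high probabilistic edge weight
    edge_weight = posterior_interaction(-4.0)
    ```

    These posterior probabilities become the edge weights of the probabilistic network fed to the Bayesian module search.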

  9. Network model and short circuit program for the Kennedy Space Center electric power distribution system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Assumptions made and techniques used in modeling the power network to the 480 volt level are discussed. Basic computational techniques used in the short circuit program are described along with a flow diagram of the program and operational procedures. Procedures for incorporating network changes are included in this user's manual.

  10. Link Prediction in Criminal Networks: A Tool for Criminal Intelligence Analysis

    PubMed Central

    Berlusconi, Giulia; Calderoni, Francesco; Parolini, Nicola; Verani, Marco; Piccardi, Carlo

    2016-01-01

    The problem of link prediction has recently received increasing attention from scholars in network science. In social network analysis, one of its aims is to recover missing links, namely connections among actors which are likely to exist but have not been reported because data are incomplete or subject to various types of uncertainty. In the field of criminal investigations, problems of incomplete information are encountered almost by definition, given the obvious anti-detection strategies set up by criminals and the limited investigative resources. In this paper, we work on a specific dataset obtained from a real investigation, and we propose a strategy to identify missing links in a criminal network on the basis of the topological analysis of the links classified as marginal, i.e. removed during the investigation procedure. The main assumption is that missing links should have opposite features with respect to marginal ones. Measures of node similarity turn out to provide the best characterization in this sense. The inspection of the judicial source documents confirms that the predicted links, in most instances, do relate actors with large likelihood of co-participation in illicit activities. PMID:27104948
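    Node-similarity scoring of non-adjacent pairs, one common basis for such link prediction, can be sketched with the Jaccard coefficient on a toy network. The edge list is invented, and the paper's actual criterion built on marginal links differs:

    ```python
    from itertools import combinations

    # Toy undirected network
    edges = {("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d"), ("d", "e")}
    nodes = sorted({v for e in edges for v in e})
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    def jaccard(u, v):
        """Jaccard similarity of the two nodes' neighbourhoods."""
        inter = adj[u] & adj[v]
        union = adj[u] | adj[v]
        return len(inter) / len(union) if union else 0.0

    # Rank currently unconnected pairs as missing-link candidates
    candidates = [(u, v, jaccard(u, v))
                  for u, v in combinations(nodes, 2)
                  if v not in adj[u]]
    candidates.sort(key=lambda t: t[2], reverse=True)
    ```

    The top-ranked pairs share many neighbours without being directly linked, exactly the profile the paper's inspection of judicial sources found for plausible co-participants.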

  11. Sensitivity Analysis for Probabilistic Neural Network Structure Reduction.

    PubMed

    Kowalski, Piotr A; Kusy, Maciej

    2018-05-01

    In this paper, we propose the use of local sensitivity analysis (LSA) for the structure simplification of the probabilistic neural network (PNN). Three algorithms are introduced. The first algorithm applies LSA to the PNN input layer reduction by selecting significant features of input patterns. The second algorithm utilizes LSA to remove redundant pattern neurons of the network. The third algorithm combines the proposed two and constitutes the solution of how they can work together. PNN with a product kernel estimator is used, where each multiplicand computes a one-dimensional Cauchy function. Therefore, the smoothing parameter is separately calculated for each dimension by means of the plug-in method. The classification qualities of the reduced and full structure PNN are compared. Furthermore, we evaluate the performance of PNN, for which global sensitivity analysis (GSA) and the common reduction methods are applied, both in the input layer and the pattern layer. The models are tested on the classification problems of eight repository data sets. A 10-fold cross validation procedure is used to determine the prediction ability of the networks. Based on the obtained results, it is shown that the LSA can be used as an alternative PNN reduction approach.
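    The PNN's product kernel with one-dimensional Cauchy factors and a separate smoothing parameter per dimension can be sketched as follows; the patterns and smoothing values are invented, and the sensitivity-analysis reduction itself is not shown:

    ```python
    import math

    def cauchy_product_kernel(x, center, h):
        """prod_d 1 / (pi * h_d * (1 + ((x_d - c_d)/h_d)^2)) -- one Cauchy factor per dimension."""
        k = 1.0
        for xd, cd, hd in zip(x, center, h):
            k *= 1.0 / (math.pi * hd * (1.0 + ((xd - cd) / hd) ** 2))
        return k

    def pnn_classify(x, patterns_by_class, h):
        """Return the class whose mean kernel response to x is largest."""
        best, best_score = None, -1.0
        for cls, patterns in patterns_by_class.items():
            score = sum(cauchy_product_kernel(x, p, h) for p in patterns) / len(patterns)
            if score > best_score:
                best, best_score = cls, score
        return best

    # Hypothetical two-class pattern layer
    patterns = {"A": [(0.0, 0.0), (0.2, 0.1)], "B": [(2.0, 2.0), (1.8, 2.1)]}
    label = pnn_classify((0.1, 0.0), patterns, h=(0.5, 0.5))
    ```

    Structure reduction then amounts to dropping input dimensions or pattern neurons whose removal barely changes these kernel responses.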

  12. Extraction of Martian valley networks from digital topography

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Collier, M. L.

    2004-01-01

    We have developed a novel method for delineating valley networks on Mars. The valleys are inferred from digital topography by an autonomous computer algorithm as drainage networks, instead of being manually mapped from images. Individual drainage basins are precisely defined and reconstructed to restore flow continuity disrupted by craters. Drainage networks are extracted from their underlying basins using the contributing area threshold method. We demonstrate that such drainage networks coincide with mapped valley networks, verifying that valley networks are indeed drainage systems. Our procedure is capable of delineating and analyzing valley networks with unparalleled speed and consistency. We have applied this method to 28 Noachian locations on Mars exhibiting prominent valley networks. All extracted networks have a planar morphology similar to that of terrestrial river networks. They are characterized by a drainage density of approximately 0.1/km, which is low in comparison to the drainage density of terrestrial river networks. Slopes of "streams" in Martian valley networks decrease downstream at a slower rate than slopes of streams in terrestrial river networks. This analysis, based on a sizable data set of valley networks, reveals that although valley networks have some features pointing to their origin by precipitation-fed runoff erosion, their quantitative characteristics suggest that precipitation intensity and/or longevity of past pluvial climate were inadequate to develop mature drainage basins on Mars.
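
    The contributing-area threshold step can be illustrated on a toy elevation grid: route each cell to its steepest-descent (D8) neighbour, accumulate contributing area from high to low, and flag cells above a threshold as network cells. The grid and threshold are hypothetical, not MOLA topography.

```python
# D8 flow accumulation and contributing-area threshold on a toy grid.
# Elevation values and the threshold are hypothetical.
dem = [
    [9, 8, 7],
    [8, 6, 5],
    [7, 5, 3],
]
R, C = len(dem), len(dem[0])
nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def downstream(r, c):
    """Steepest-descent (D8) neighbour, or None for a pit/outlet."""
    best, drop = None, 0
    for dr, dc in nbrs:
        rr, cc = r + dr, c + dc
        if 0 <= rr < R and 0 <= cc < C and dem[r][c] - dem[rr][cc] > drop:
            best, drop = (rr, cc), dem[r][c] - dem[rr][cc]
    return best

# Process cells from highest to lowest so every donor is counted first.
acc = [[1] * C for _ in range(R)]   # each cell contributes its own unit area
for r, c in sorted(((r, c) for r in range(R) for c in range(C)),
                   key=lambda p: -dem[p[0]][p[1]]):
    d = downstream(r, c)
    if d:
        acc[d[0]][d[1]] += acc[r][c]

threshold = 4
streams = {(r, c) for r in range(R) for c in range(C) if acc[r][c] >= threshold}
print(acc)
print(streams)
```

    On this grid all flow converges to the low corner, which is the only cell exceeding the area threshold; on real topography the thresholded cells trace out the drainage (valley) network.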

  13. A Fast MAC-Layer Handover for an IEEE 802.16e-Based WMAN

    NASA Astrophysics Data System (ADS)

    Ray, Sayan K.; Pawlikowski, Krzysztof; Sirisena, Harsha

    We propose a modification of the IEEE 802.16e hard handover (HHO) procedure, which significantly reduces the handover latency of the original HHO procedure in IEEE 802.16e networks. It allows better handling of delay-sensitive traffic by avoiding unnecessary time-consuming scanning and synchronization activity, and it simplifies the network re-entry procedure. With the help of the backhaul network, it reduces the number of control messages in the original handover policy, making the handover latency acceptable also for real-time streaming traffic. Preliminary performance evaluation studies show that the modified handover procedure is able to reduce the total handover latency by about 50%.

  14. Training a Network of Electronic Neurons for Control of a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Vromen, T. G. M.; Steur, E.; Nijmeijer, H.

    An adaptive training procedure is developed for a network of electronic neurons, which controls a mobile robot driving around in an unknown environment while avoiding obstacles. The neuronal network controls the angular velocity of the wheels of the robot based on the sensor readings. The nodes in the neuronal network controller are clusters of neurons rather than single neurons. The adaptive training procedure ensures that the input-output behavior of the clusters is identical, even though the constituting neurons are nonidentical and have, in isolation, nonidentical responses to the same input. In particular, we let the neurons interact via a diffusive coupling, and the proposed training procedure modifies the diffusion interaction weights such that the neurons behave synchronously with a predefined response. The working principle of the training procedure is experimentally validated, and results are presented of an experiment with a mobile robot driving completely autonomously in an unknown environment with obstacles.
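
    The effect of diffusive coupling can be sketched with two nonidentical first-order nodes. This is a conceptual toy only: the dynamics, gains, and input below are illustrative assumptions, not the paper's electronic neuron circuits or its adaptive weight-update law, but it shows how sufficiently strong diffusive coupling makes nonidentical nodes behave nearly synchronously.

```python
# Diffusive coupling of two nonidentical first-order nodes x' = -a_i*x + I.
# Parameters are illustrative, not taken from the paper's circuits.
def simulate(k, steps=5000, dt=0.001):
    a = (1.0, 1.4)          # nonidentical node dynamics
    I = 1.0                 # common input
    x = [0.0, 2.0]          # different initial conditions
    for _ in range(steps):  # forward-Euler integration
        d0 = -a[0] * x[0] + I + k * (x[1] - x[0])
        d1 = -a[1] * x[1] + I + k * (x[0] - x[1])
        x = [x[0] + dt * d0, x[1] + dt * d1]
    return abs(x[0] - x[1])

print(simulate(k=0.0))   # uncoupled: a steady-state mismatch remains
print(simulate(k=20.0))  # strong diffusive coupling: states nearly agree
```

    The paper's training procedure goes further by adapting the coupling weights online so the synchronized response matches a predefined target.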

  15. Material Characterization for the Analysis of Skin/Stiffener Separation

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Leone, Frank A.; Song, Kyongchan; Ratcliffe, James G.; Rose, Cheryl A.

    2017-01-01

    Test results show that separation failure in co-cured skin/stiffener interfaces is characterized by dense networks of interacting cracks and crack path migrations that are not present in standard characterization tests for delamination. These crack networks result in measurable large-scale and sub-ply-scale R curve toughening mechanisms, such as fiber bridging, crack migration, and crack delving. Consequently, a number of unknown issues exist regarding the level of analysis detail that is required for sufficient predictive fidelity. The objective of the present paper is to examine some of the difficulties associated with modeling separation failure in stiffened composite structures. A procedure to characterize the interfacial material properties is proposed and the use of simplified models based on empirical interface properties is evaluated.

  16. CIRCAL-2 - General-purpose on-line circuit design.

    NASA Technical Reports Server (NTRS)

    Dertouzos, M. L.; Jessel, G. P.; Stinger, J. R.

    1972-01-01

    CIRCAL-2 is a second-generation general-purpose on-line circuit-design program with the following main features: (1) multiple-analysis capability; (2) uniform and general data structures for handling text editing, network representations, and output results, regardless of analysis; (3) special techniques and structures for minimizing and controlling user-program interaction; (4) use of functionals for the description of hysteresis and heat effects; and (5) ability to define optimization procedures that 'replace' the user. The paper discusses the organization of CIRCAL-2, the aforementioned main features, and their consequences, such as a set of network elements and models general enough for most analyses and a set of functions tailored to circuit-design requirements. The presentation is descriptive, concentrating on conceptual rather than on program implementation details.

  17. Novel approaches to pin cluster synchronization on complex dynamical networks in Lur'e forms

    NASA Astrophysics Data System (ADS)

    Tang, Ze; Park, Ju H.; Feng, Jianwen

    2018-04-01

    This paper investigates the cluster synchronization of complex dynamical networks consisting of identical or nonidentical Lur'e systems. Due to the special topology structure of the complex networks and the existence of stochastic perturbations, a kind of randomly occurring pinning controller is designed which not only synchronizes all Lur'e systems in the same cluster but also decreases the negative influence among different clusters. Firstly, based on an extended integral inequality, the convex combination theorem and the S-procedure, the conditions for cluster synchronization of identical Lur'e networks are derived in a convex domain. Secondly, randomly occurring adaptive pinning controllers with two independent Bernoulli stochastic variables are designed, and sufficient conditions are then obtained for cluster synchronization on complex networks consisting of nonidentical Lur'e systems. In addition, suitable control gains for successful cluster synchronization of nonidentical Lur'e networks are acquired by designing some adaptive updating laws. Finally, we present two numerical examples to demonstrate the validity of the control scheme and the theoretical analysis.

  18. High-performance parallel analysis of coupled problems for aircraft propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Lanteri, S.; Maman, N.; Piperno, S.; Gumaste, U.

    1994-01-01

    This research program deals with the application of high-performance computing methods for the analysis of complete jet engines. We initiated this program by applying two-dimensional parallel aeroelastic codes to the interior gas flow problem of a bypass jet engine. The fluid mesh generation, domain decomposition, and solution capabilities were successfully tested. We then focused attention on methodology for the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion that results from these structural displacements. This is treated by a new arbitrary Lagrangian-Eulerian (ALE) technique that models the fluid mesh motion as that of a fictitious mass-spring network. New partitioned analysis procedures to treat this coupled three-component problem are developed. These procedures involve delayed corrections and subcycling. Preliminary results on the stability, accuracy, and MPP computational efficiency are reported.

  19. Modeling of a pitching and plunging airfoil using experimental flow field and load measurements

    NASA Astrophysics Data System (ADS)

    Troshin, Victor; Seifert, Avraham

    2018-01-01

    The main goal of the current paper is to outline a low-order modeling procedure of a heaving airfoil in a still fluid using experimental measurements. Due to its relative simplicity, the proposed procedure is applicable for the analysis of flow fields within complex and unsteady geometries and it is suitable for analyzing the data obtained by experimentation. Currently, this procedure is used to model and predict the flow field evolution using a small number of low profile load sensors and flow field measurements. A time delay neural network is used to estimate the flow field. The neural network estimates the amplitudes of the most energetic modes using four sensory inputs. The modes are calculated using proper orthogonal decomposition of the flow field data obtained experimentally by time-resolved, phase-locked particle imaging velocimetry. To permit the use of proper orthogonal decomposition, the measured flow field is mapped onto a stationary domain using volume preserving transformation. The analysis performed by the model showed good estimation quality within the parameter range used in the training procedure. However, the performance deteriorates for cases out of this range. This situation indicates that, to improve the robustness of the model, both the decomposition and the training data sets must be diverse in terms of input parameter space. In addition, the results suggest that the property of volume preservation of the mapping does not affect the model quality as long as the model is not based on the Galerkin approximation. Thus, it may be relaxed for cases with more complex geometry and kinematics.

  20. Particle identification with neural networks using a rotational invariant moment representation

    NASA Astrophysics Data System (ADS)

    Sinkus, Ralph; Voss, Thomas

    1997-02-01

    A feed-forward neural network is used to identify electromagnetic particles based upon their showering properties within a segmented calorimeter. A preprocessing procedure is applied to the spatial energy distribution of the particle shower in order to account for the varying geometry of the calorimeter. The novel feature is the expansion of the energy distribution in terms of moments of the so-called Zernike functions which are invariant under rotation. The distributions of moments exhibit very different scales, thus the multidimensional input distribution for the neural network is transformed via a principal component analysis and rescaled by its respective variances to ensure input values of the order of one. This increases the sensitivity of the network and thus results in better performance in identifying and separating electromagnetic from hadronic particles, especially at low energies.
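
    The rescaling step described above (transforming the moment distributions so all network inputs are of order one) can be sketched as per-feature standardization; the PCA rotation is omitted here for brevity, and the sample values are hypothetical magnitudes on very different scales, as with Zernike moments.

```python
# Per-feature standardization: centre each column and divide by its
# standard deviation so all neural network inputs are of order one.
import math

def standardize(rows):
    """Z-score each column of a list of feature vectors."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    stds = [math.sqrt(sum((v - m) ** 2 for v in c) / len(c)) or 1.0
            for c, m in zip(cols, means)]
    return [[(v - m) / s for v, m, s in zip(row, means, stds)] for row in rows]

# Two features on wildly different scales (hypothetical moment magnitudes):
data = [[1000.0, 0.001], [1200.0, 0.003], [800.0, 0.002]]
scaled = standardize(data)
print(scaled)
```

    After the transform every column has zero mean and unit variance, so no single moment dominates the network's input by scale alone.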

  1. CHARACTERIZATION OF THE COMPLETE FIBER NETWORK TOPOLOGY OF PLANAR FIBROUS TISSUES AND SCAFFOLDS

    PubMed Central

    D'Amore, Antonio; Stella, John A.; Wagner, William R.; Sacks, Michael S.

    2010-01-01

    Understanding how engineered tissue scaffold architecture affects cell morphology, metabolism, phenotypic expression, as well as predicting material mechanical behavior have recently received increased attention. In the present study, an image-based analysis approach that provides an automated tool to characterize engineered tissue fiber network topology is presented. Micro-architectural features that fully defined fiber network topology were detected and quantified, which include fiber orientation, connectivity, intersection spatial density, and diameter. Algorithm performance was tested using scanning electron microscopy (SEM) images of electrospun poly(ester urethane)urea (ES-PEUU) scaffolds. SEM images of rabbit mesenchymal stem cell (MSC) seeded collagen gel scaffolds and decellularized rat carotid arteries were also analyzed to further evaluate the ability of the algorithm to capture fiber network morphology regardless of scaffold type and the evaluated size scale. The image analysis procedure was validated qualitatively and quantitatively, comparing fiber network topology manually detected by human operators (n=5) with that automatically detected by the algorithm. Correlation values between manually detected and algorithm-detected results for the fiber angle distribution and for the fiber connectivity distribution were 0.86 and 0.93, respectively. Algorithm-detected fiber intersections and fiber diameter values were comparable (within the mean ± standard deviation) with those detected by human operators. This automated approach identifies and quantifies fiber network morphology as demonstrated for three relevant scaffold types and provides a means to: (1) guarantee objectivity, (2) significantly reduce analysis time, and (3) potentiate broader analysis of scaffold architecture effects on cell behavior and tissue development both in vitro and in vivo. PMID:20398930

  2. Neural Net-Based Redesign of Transonic Turbines for Improved Unsteady Aerodynamic Performance

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Rai, Man Mohan; Huber, Frank W.

    1998-01-01

    A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology (RSM) and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits are constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The optimization procedure yields a modified design that improves the aerodynamic performance through small changes to the reference design geometry. The computed results demonstrate the capabilities of the neural net-based design procedure, and also show the tremendous advantages that can be gained by including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.

  3. Optimal tree-stem bucking of northeastern species of China

    Treesearch

    Jingxin Wang; Chris B. LeDoux; Joseph McNeel

    2004-01-01

    An application of optimal tree-stem bucking to the northeastern tree species of China is reported. The bucking procedures used in this region are summarized, which are the basic guidelines for the optimal bucking design. The directed graph approach was adopted to generate the bucking patterns by using the network analysis labeling algorithm. A computer-based bucking...
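
    The directed-graph bucking idea can be sketched as a dynamic program: positions along the stem are nodes, admissible log cuts are arcs, and the labeling finds the highest-value cutting pattern. The log lengths and prices below are hypothetical.

```python
# Optimal tree-stem bucking as a longest-path dynamic program over
# cut positions. Log lengths and values are hypothetical.
def optimal_bucking(stem_len, logs):
    """logs: {log_length: value}. Returns (best_value, list_of_cut_lengths)."""
    best = [0.0] * (stem_len + 1)   # best[i] = max value of the first i units
    choice = [0] * (stem_len + 1)   # last log length used at position i
    for i in range(1, stem_len + 1):
        for length, value in logs.items():
            if length <= i and best[i - length] + value > best[i]:
                best[i] = best[i - length] + value
                choice[i] = length
    cuts, i = [], stem_len
    while i > 0 and choice[i]:
        cuts.append(choice[i])
        i -= choice[i]
    return best[stem_len], sorted(cuts)

print(optimal_bucking(10, {2: 3.0, 3: 5.0, 5: 9.0}))
```

    For a 10-unit stem with these prices the labeling prefers two 5-unit logs (value 18.0) over any mix of shorter pieces.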

  4. An Analysis of Television's Coverage of the "Iran Crisis": 5 November 1979 to 15 January 1980.

    ERIC Educational Resources Information Center

    Miller, Christine

    The three television networks, acting under severe restrictions imposed by the Iranian government, all provided comprehensive coverage of the hostage crisis. A study was conducted to examine what, if any, salient differences arose or existed in this coverage from November 5, 1979, until January 15, 1980. A research procedure combining qualitative…

  5. A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.

    PubMed

    Röhl, Annika; Bockmayr, Alexander

    2017-01-03

    Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. The minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks when several exist. This allows identifying common reactions that are present in all subnetworks, and reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice. Thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose an MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but even all minimum subnetworks satisfying the required properties.
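
    The notion of "all minimum subnetworks satisfying a requirement" can be illustrated by brute force on a toy network (illustrative reactions and a simple reachability test, not the paper's MILP formulation or flux-balance constraints; genome-scale models require a real MILP solver).

```python
# Brute-force enumeration of all minimum reaction subsets that still
# allow producing a target metabolite. The network is hypothetical.
from itertools import combinations

# reaction name -> (consumed metabolites, produced metabolites)
reactions = {
    "r1": ({"A"}, {"B"}),
    "r2": ({"B"}, {"C"}),
    "r3": ({"A"}, {"C"}),
    "r4": ({"C"}, {"biomass"}),
}

def produces_target(active, seed={"A"}, target="biomass"):
    """Fire active reactions until no new metabolite appears (reachability)."""
    pool = set(seed)
    changed = True
    while changed:
        changed = False
        for r in active:
            need, make = reactions[r]
            if need <= pool and not make <= pool:
                pool |= make
                changed = True
    return target in pool

def minimum_subnetworks():
    """All reaction subsets of minimum cardinality preserving the requirement."""
    for k in range(1, len(reactions) + 1):
        found = [set(s) for s in combinations(reactions, k)
                 if produces_target(s)]
        if found:
            return found
    return []

print(minimum_subnetworks())
```

    Here the single minimum subnetwork is {r3, r4}; with several minima, intersecting the returned sets exposes the common reactions the abstract mentions.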

  6. Error modelling of quantum Hall array resistance standards

    NASA Astrophysics Data System (ADS)

    Marzano, Martina; Oe, Takehiko; Ortolano, Massimo; Callegaro, Luca; Kaneko, Nobu-Hisa

    2018-04-01

    Quantum Hall array resistance standards (QHARSs) are integrated circuits composed of interconnected quantum Hall effect elements that allow the realization of virtually arbitrary resistance values. In recent years, techniques were presented to efficiently design QHARS networks. An open problem is that of the evaluation of the accuracy of a QHARS, which is affected by contact and wire resistances. In this work, we present a general and systematic procedure for the error modelling of QHARSs, which is based on modern circuit analysis techniques and Monte Carlo evaluation of the uncertainty. As a practical example, this method of analysis is applied to the characterization of a 1 MΩ QHARS developed by the National Metrology Institute of Japan. Software tools are provided to apply the procedure to other arrays.
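
    The Monte Carlo step can be sketched for the simplest possible array: a series string of quantum Hall elements, each perturbed by a small random contact/wire resistance. The element count and perturbation level are illustrative, not the NMIJ 1 MΩ design.

```python
# Monte Carlo propagation of random contact resistances to the total
# resistance of a series array of quantum Hall elements (illustrative values).
import random
import statistics

R_H = 12906.4035  # ohms: von Klitzing resistance R_K/2 (i = 2 plateau)

def array_resistance(n_elements, rng):
    """n elements in series, each with a small random contact resistance."""
    return sum(R_H + rng.gauss(0.0, 0.001) for _ in range(n_elements))

rng = random.Random(42)
samples = [array_resistance(8, rng) for _ in range(10000)]
mean = statistics.mean(samples)
u = statistics.stdev(samples)
print(mean, u)  # value and standard uncertainty of the array
```

    Real QHARS topologies use multiple-series/parallel interconnections that suppress contact-resistance errors, which is exactly what the paper's circuit-analysis model quantifies; this sketch only shows how the Monte Carlo uncertainty evaluation itself works.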

  7. An enhanced performance through agent-based secure approach for mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Bisen, Dhananjay; Sharma, Sanjeev

    2018-01-01

    This paper proposes an agent-based secure enhanced performance approach (AB-SEP) for mobile ad hoc network. In this approach, agent nodes are selected through optimal node reliability as a factor. This factor is calculated on the basis of node performance features such as degree difference, normalised distance value, energy level, mobility and optimal hello interval of node. After selection of agent nodes, a procedure of malicious behaviour detection is performed using fuzzy-based secure architecture (FBSA). To evaluate the performance of the proposed approach, comparative analysis is done with conventional schemes using performance parameters such as packet delivery ratio, throughput, total packet forwarding, network overhead, end-to-end delay and percentage of malicious detection.

  8. NETWORK SYNTHESIS OF CASCADED THRESHOLD ELEMENTS.

    DTIC Science & Technology

    A threshold function is a switching function which can be simulated by a single, simplified, idealized neuron, or threshold element. In this report... threshold functions are examined in the context of abstract set theory and linear algebra for the purpose of obtaining practical synthesis procedures...for networks of threshold elements. A procedure is described by which, for any given switching function, a cascade network of these elements can be
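
    A threshold element and a small two-stage cascade can be sketched directly; the weights, thresholds, and the particular switching functions below are illustrative choices, not the report's synthesis procedure.

```python
# A threshold element outputs 1 iff the weighted input sum reaches the
# threshold. Cascading elements realizes switching functions; the example
# functions (3-input majority and its negation) are illustrative.
def threshold(inputs, weights, theta):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def majority3(a, b, c):
    """True when at least 2 of 3 inputs are 1: a single threshold element."""
    return threshold((a, b, c), (1, 1, 1), 2)

def not_majority3(a, b, c):
    """Second stage of a cascade: negate the first element's output."""
    return threshold((majority3(a, b, c),), (-1,), 0)

for bits in [(0, 0, 0), (1, 0, 1), (1, 1, 1)]:
    print(bits, majority3(*bits), not_majority3(*bits))
```

    Not every switching function is a threshold function (XOR is the classic counterexample), which is why cascades of several elements are needed in general.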

  9. State feedback controller design for the synchronization of Boolean networks with time delays

    NASA Astrophysics Data System (ADS)

    Li, Fangfei; Li, Jianning; Shen, Lijuan

    2018-01-01

    State feedback control design to make the response Boolean network synchronize with the drive Boolean network is far from being solved in the literature. Motivated by this, this paper studies the feedback control design for the complete synchronization of two coupled Boolean networks with time delays. A necessary condition for the existence of a state feedback controller is derived first. Then the feedback control design procedure for the complete synchronization of two coupled Boolean networks is provided based on the necessary condition. Finally, an example is given to illustrate the proposed design procedure.
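
    The drive-response setup can be illustrated with a toy two-node Boolean network and a mismatch-cancelling state feedback (a simplified sketch: the paper's design additionally handles time delays, which are omitted here, and the update rule is a hypothetical example).

```python
# Drive-response Boolean network synchronization via state feedback:
# the control u(t) cancels the response/drive mismatch so the response
# copies the drive after one step. The update rule f is illustrative.
def f(state):
    """Boolean network update rule shared by drive and response (two nodes)."""
    x1, x2 = state
    return (x2, x1 ^ x2)

def xor_vec(a, b):
    return tuple(ai ^ bi for ai, bi in zip(a, b))

x = (1, 0)   # drive initial state
y = (0, 1)   # response initial state (unsynchronized)
history = []
for t in range(5):
    u = xor_vec(f(y), f(x))          # state feedback built from both states
    x, y = f(x), xor_vec(f(y), u)    # y(t+1) = f(y) XOR u = f(x) = x(t+1)
    history.append(x == y)
print(history)
```

    With this feedback the two networks agree from the first step onward; the interest of the paper lies in deciding when such a controller exists and designing it systematically under time delays.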

  10. Network Operations Support Plan for the Spot 2 mission (revision 1)

    NASA Technical Reports Server (NTRS)

    Werbitzky, Victor

    1989-01-01

    The purpose of this Network Operations Support Plan (NOSP) is to indicate operational procedures and ground equipment configurations for the SPOT 2 mission. The provisions in this document take precedence over procedures or configurations in other documents.

  11. MicroRNA-Target Network Inference and Local Network Enrichment Analysis Identify Two microRNA Clusters with Distinct Functions in Head and Neck Squamous Cell Carcinoma

    PubMed Central

    Sass, Steffen; Pitea, Adriana; Unger, Kristian; Hess, Julia; Mueller, Nikola S.; Theis, Fabian J.

    2015-01-01

    MicroRNAs represent ~22 nt long endogenous small RNA molecules that have been experimentally shown to regulate gene expression post-transcriptionally. One main interest in miRNA research is the investigation of their functional roles, which can typically be accomplished by identification of mi-/mRNA interactions and functional annotation of target gene sets. We here present a novel method “miRlastic”, which infers miRNA-target interactions using transcriptomic data as well as prior knowledge and performs functional annotation of target genes by exploiting the local structure of the inferred network. For the network inference, we applied linear regression modeling with elastic net regularization on matched microRNA and messenger RNA expression profiling data to perform feature selection on prior knowledge from sequence-based target prediction resources. The novelty of miRlastic inference originates in predicting data-driven intra-transcriptome regulatory relationships through feature selection. With synthetic data, we showed that miRlastic outperformed commonly used methods and was suitable even for low sample sizes. To gain insight into the functional role of miRNAs and to determine joint functional properties of miRNA clusters, we introduced a local enrichment analysis procedure. The principle of this procedure lies in identifying regions of high functional similarity by evaluating the shortest paths between genes in the network. We can finally assign functional roles to the miRNAs by taking their regulatory relationships into account. We thoroughly evaluated miRlastic on a cohort of head and neck cancer (HNSCC) patients provided by The Cancer Genome Atlas. We inferred an mi-/mRNA regulatory network for human papilloma virus (HPV)-associated miRNAs in HNSCC. Compared to common methods, the resulting network showed the strongest enrichment for experimentally validated miRNA-target interactions.
Finally, the local enrichment step identified two functional clusters of miRNAs that were predicted to mediate HPV-associated dysregulation in HNSCC. Our novel approach was able to characterize distinct pathway regulations from matched miRNA and mRNA data. An R package of miRlastic was made available through: http://icb.helmholtz-muenchen.de/mirlastic. PMID:26694379
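
    The shortest-path step of the local enrichment procedure can be sketched with breadth-first search on a toy gene network; the edge list and the neighbourhood radius below are hypothetical, not the inferred HNSCC network.

```python
# Shortest paths in an inferred gene network identify neighbourhoods of
# high functional similarity. Edges and the radius are hypothetical.
from collections import deque

edges = [("g1", "g2"), ("g2", "g3"), ("g3", "g4"), ("g4", "g5"), ("g2", "g4")]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def shortest_path_len(src, dst):
    """Unweighted shortest-path length via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None

def neighbourhood(src, radius=2):
    """Genes within `radius` hops: a candidate region for joint annotation."""
    return {g for g in adj if g != src and shortest_path_len(src, g) <= radius}

print(shortest_path_len("g1", "g5"))
print(sorted(neighbourhood("g1")))
```

    Regions whose member genes share annotations more often than expected would then be flagged as functionally coherent, which is the enrichment part of the procedure.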

  12. MicroRNA-Target Network Inference and Local Network Enrichment Analysis Identify Two microRNA Clusters with Distinct Functions in Head and Neck Squamous Cell Carcinoma.

    PubMed

    Sass, Steffen; Pitea, Adriana; Unger, Kristian; Hess, Julia; Mueller, Nikola S; Theis, Fabian J

    2015-12-18

    MicroRNAs represent ~22 nt long endogenous small RNA molecules that have been experimentally shown to regulate gene expression post-transcriptionally. One main interest in miRNA research is the investigation of their functional roles, which can typically be accomplished by identification of mi-/mRNA interactions and functional annotation of target gene sets. We here present a novel method "miRlastic", which infers miRNA-target interactions using transcriptomic data as well as prior knowledge and performs functional annotation of target genes by exploiting the local structure of the inferred network. For the network inference, we applied linear regression modeling with elastic net regularization on matched microRNA and messenger RNA expression profiling data to perform feature selection on prior knowledge from sequence-based target prediction resources. The novelty of miRlastic inference originates in predicting data-driven intra-transcriptome regulatory relationships through feature selection. With synthetic data, we showed that miRlastic outperformed commonly used methods and was suitable even for low sample sizes. To gain insight into the functional role of miRNAs and to determine joint functional properties of miRNA clusters, we introduced a local enrichment analysis procedure. The principle of this procedure lies in identifying regions of high functional similarity by evaluating the shortest paths between genes in the network. We can finally assign functional roles to the miRNAs by taking their regulatory relationships into account. We thoroughly evaluated miRlastic on a cohort of head and neck cancer (HNSCC) patients provided by The Cancer Genome Atlas. We inferred an mi-/mRNA regulatory network for human papilloma virus (HPV)-associated miRNAs in HNSCC. Compared to common methods, the resulting network showed the strongest enrichment for experimentally validated miRNA-target interactions.
Finally, the local enrichment step identified two functional clusters of miRNAs that were predicted to mediate HPV-associated dysregulation in HNSCC. Our novel approach was able to characterize distinct pathway regulations from matched miRNA and mRNA data. An R package of miRlastic was made available through: http://icb.helmholtz-muenchen.de/mirlastic.

  13. Integrated workflow for characterizing and modeling fracture network in unconventional reservoirs using microseismic data

    NASA Astrophysics Data System (ADS)

    Ayatollahy Tafti, Tayeb

    We develop a new method for integrating information and data from different sources. We also construct a comprehensive workflow for characterizing and modeling a fracture network in unconventional reservoirs using microseismic data. The methodology is based on a combination of several mathematical and artificial intelligence techniques, including geostatistics, fractal analysis, fuzzy logic, and neural networks. The study contributes to the scholarly knowledge base on the characterization and modeling of fractured reservoirs in several ways, including a versatile workflow with novel objective functions. Some of the characteristics of the methods are listed below: 1. The new method is an effective fracture characterization procedure that estimates different fracture properties. Unlike existing methods, the new approach does not depend on the location of events and is able to integrate all multi-scaled and diverse fracture information from different methodologies. 2. It offers an improved procedure to create compressional and shear velocity models as a preamble for delineating anomalies, mapping structures of interest, and correlating velocity anomalies with fracture swarms and other reservoir properties of interest. 3. It offers an effective way to obtain the fractal dimension of microseismic events and to identify the pattern complexity, connectivity, and mechanism of the created fracture network. 4. It offers an innovative method for monitoring fracture movement in different stages of stimulation that can be used to optimize the process. 5. Our newly developed MDFN approach allows a discrete fracture network model to be created using only microseismic data, with potential cost reduction. It also imposes fractal dimension as a constraint on other fracture modeling approaches, which increases the visual similarity between the modeled networks and the real network over the simulated volume.

  14. Spatial-Temporal Dynamics of High-Resolution Animal Networks: What Can We Learn from Domestic Animals?

    PubMed

    Chen, Shi; Ilany, Amiyaal; White, Brad J; Sanderson, Michael W; Lanzas, Cristina

    2015-01-01

    Animal social networks are key to understanding many ecological and epidemiological processes. We used a real-time location system (RTLS) to accurately track cattle position, analyzed their proximity networks, and tested the hypotheses of temporal stationarity and spatial homogeneity in these networks during different daily time periods and in different areas of the pen. The network structure was analyzed at the hourly level using global network characteristics (network density), subgroup clustering (modularity), triadic properties (transitivity), and dyadic interactions (the correlation coefficient from a quadratic assignment procedure). We demonstrated substantial spatial-temporal heterogeneity in these networks and a potential link between indirect animal-environment contact and direct animal-animal contact, but such heterogeneity diminished if data were collected at lower spatial (aggregated at the entire pen level) or temporal (aggregated at the daily level) resolution. The network structure (described by characteristics such as density, modularity, and transitivity) also changed substantially at different times and locations. There were certain times (feeding) and locations (hay) at which the proximity network structures were more consistent, based on the dyadic interaction analysis. These results reveal new insights into animal network structure and spatial-temporal dynamics, provide more accurate descriptions of animal social networks, and allow more accurate modeling of multiple (both direct and indirect) disease transmission pathways.
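
    Two of the hourly network summaries mentioned above, density and transitivity, can be computed directly from an undirected contact edge list. The animals and contacts below are hypothetical.

```python
# Network density (share of possible dyads present) and transitivity
# (closed triads over connected triples) from a contact edge list.
from itertools import combinations

def density(nodes, edges):
    possible = len(nodes) * (len(nodes) - 1) / 2
    return len(edges) / possible if possible else 0.0

def transitivity(nodes, edges):
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    triangles = triples = 0
    for n in nodes:
        for u, v in combinations(adj[n], 2):   # connected triple centred at n
            triples += 1
            if v in adj[u]:
                triangles += 1                 # the triple is closed
    return triangles / triples if triples else 0.0

nodes = ["c1", "c2", "c3", "c4"]
edges = [("c1", "c2"), ("c2", "c3"), ("c1", "c3"), ("c3", "c4")]
print(density(nodes, edges), transitivity(nodes, edges))
```

    Recomputing these summaries per hour and per pen area is what exposes the spatial-temporal heterogeneity the study reports; aggregating contacts to the day or the whole pen flattens it out.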

  15. Automated Quantification and Integrative Analysis of 2D and 3D Mitochondrial Shape and Network Properties

    PubMed Central

    Nikolaisen, Julie; Nilsson, Linn I. H.; Pettersen, Ina K. N.; Willems, Peter H. G. M.; Lorens, James B.; Koopman, Werner J. H.; Tronstad, Karl J.

    2014-01-01

    Mitochondrial morphology and function are coupled in healthy cells, during pathological conditions and (adaptation to) endogenous and exogenous stress. In this sense mitochondrial shape can range from small globular compartments to complex filamentous networks, even within the same cell. Understanding how mitochondrial morphological changes (i.e. “mitochondrial dynamics”) are linked to cellular (patho) physiology is currently the subject of intense study and requires detailed quantitative information. During the last decade, various computational approaches have been developed for automated 2-dimensional (2D) analysis of mitochondrial morphology and number in microscopy images. Although these strategies are well suited for analysis of adhering cells with a flat morphology they are not applicable for thicker cells, which require a three-dimensional (3D) image acquisition and analysis procedure. Here we developed and validated an automated image analysis algorithm allowing simultaneous 3D quantification of mitochondrial morphology and network properties in human endothelial cells (HUVECs). Cells expressing a mitochondria-targeted green fluorescence protein (mitoGFP) were visualized by 3D confocal microscopy and mitochondrial morphology was quantified using both the established 2D method and the new 3D strategy. We demonstrate that both analyses can be used to characterize and discriminate between various mitochondrial morphologies and network properties. However, the results from 2D and 3D analysis were not equivalent when filamentous mitochondria in normal HUVECs were compared with circular/spherical mitochondria in metabolically stressed HUVECs treated with rotenone (ROT). 2D quantification suggested that metabolic stress induced mitochondrial fragmentation and loss of biomass. In contrast, 3D analysis revealed that the mitochondrial network structure was dissolved without affecting the amount and size of the organelles. 
Thus, our results demonstrate that 3D imaging and quantification are crucial for proper understanding of mitochondrial shape and topology in non-flat cells. In summary, we here present an integrative method for unbiased 3D quantification of mitochondrial shape and network properties in mammalian cells. PMID:24988307
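    The 3D quantification described above hinges on identifying each mitochondrion as a connected component of foreground voxels in the segmented image stack. The following is only an illustrative sketch of that one step (the authors' validated pipeline is not reproduced here): pure-Python 26-connected 3D component labeling, from which object counts and sizes, two of the morphology descriptors mentioned, can be read off.

```python
from collections import deque

def label_components_3d(mask):
    """Label 26-connected foreground voxels in a binary 3-D mask.

    mask: nested list [z][y][x] of 0/1. Returns (labels, sizes), where
    labels mirrors mask with component ids (0 = background) and sizes
    maps each id to its voxel count.
    """
    nz, ny, nx = len(mask), len(mask[0]), len(mask[0][0])
    labels = [[[0] * nx for _ in range(ny)] for _ in range(nz)]
    sizes = {}
    next_id = 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if mask[z][y][x] and not labels[z][y][x]:
                    next_id += 1
                    labels[z][y][x] = next_id
                    queue, count = deque([(z, y, x)]), 1
                    while queue:  # breadth-first flood fill
                        cz, cy, cx = queue.popleft()
                        for dz in (-1, 0, 1):
                            for dy in (-1, 0, 1):
                                for dx in (-1, 0, 1):
                                    wz, wy, wx = cz + dz, cy + dy, cx + dx
                                    if (0 <= wz < nz and 0 <= wy < ny
                                            and 0 <= wx < nx
                                            and mask[wz][wy][wx]
                                            and not labels[wz][wy][wx]):
                                        labels[wz][wy][wx] = next_id
                                        queue.append((wz, wy, wx))
                                        count += 1
                    sizes[next_id] = count
    return labels, sizes

# Tiny 2x3x3 stack with two separate "mitochondria".
mask = [
    [[1, 1, 0], [0, 0, 0], [0, 0, 1]],
    [[1, 0, 0], [0, 0, 0], [0, 0, 1]],
]
labels, sizes = label_components_3d(mask)
```

    In practice this labeling would be applied to thresholded confocal stacks, with per-component voxel counts converted to volumes via the voxel dimensions.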

  16. Communications processor for C3 analysis and wargaming

    NASA Astrophysics Data System (ADS)

    Clark, L. N.; Pless, L. D.; Rapp, R. L.

    1982-03-01

    This thesis developed the software capability to allow the investigation of C3 problems, procedures, and methodologies. The resultant communications model, while independent of a specific wargame, is currently implemented in conjunction with the McClintic Theater Model. It provides a computerized message-handling system (C3 Model) that allows simulation of communication links (circuits) with user-definable delays; garble and loss rates; and multiple circuit types, addresses, and levels of command. It is designed for test and evaluation of command and control problems in the areas of organizational relationships, communication networks and procedures, and combat doctrine or tactics.

  17. Design of Neural Networks for Fast Convergence and Accuracy: Dynamics and Control

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Sparks, Dean W., Jr.

    1997-01-01

    A procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed such that, once properly trained, they provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component/spacecraft design changes and measures of its performance or the nonlinear dynamics of the system/components. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistics-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence a new network is trained to minimize the error of the previous network. The proposed method should work for applications wherein an arbitrarily large source of training data can be generated. Two numerical examples are performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.
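    The sequential scheme described, in which each new network is trained on the residual error left by its predecessors, can be sketched in miniature. The code below is an illustrative stand-in, not the authors' algorithm: each "network" here is a two-layer net with fixed random tanh hidden units whose output weights are solved in closed form, and all sizes, seeds, and the target function are arbitrary choices.

```python
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting (A x = b)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_stage(xs, targets, hidden=8, seed=0):
    """One 'network': fixed random tanh hidden layer; output weights
    found by (lightly ridge-regularized) least squares."""
    rng = random.Random(seed)
    units = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(hidden)]
    H = [[math.tanh(a * x + b) for a, b in units] + [1.0] for x in xs]
    m = hidden + 1
    A = [[sum(H[k][i] * H[k][j] for k in range(len(xs))) for j in range(m)]
         for i in range(m)]
    for i in range(m):
        A[i][i] += 1e-6  # tiny ridge term for numerical safety
    rhs = [sum(H[k][i] * t for k, t in enumerate(targets)) for i in range(m)]
    beta = solve(A, rhs)
    return lambda x: sum(w * math.tanh(a * x + b)
                         for w, (a, b) in zip(beta[:-1], units)) + beta[-1]

def sequential_fit(xs, targets, stages=3):
    """Each stage fits the residual error left by the earlier stages;
    the final predictor is the sum of all stage outputs."""
    models, resid = [], list(targets)
    for s in range(stages):
        model = fit_stage(xs, resid, seed=s)
        models.append(model)
        resid = [r - model(x) for x, r in zip(xs, resid)]
    return lambda x: sum(m(x) for m in models)

# Fit sin(2x) on [-1, 1]; the stacked residual fit should beat the
# trivial zero predictor by a wide margin on the training data.
xs = [i / 20.0 for i in range(-20, 21)]
targets = [math.sin(2.0 * x) for x in xs]
model = sequential_fit(xs, targets)
train_error = sum((t - model(x)) ** 2 for x, t in zip(xs, targets))
baseline = sum(t * t for t in targets)
```

    Because each stage's least-squares fit can always reproduce the zero function, the training error is non-increasing across stages, which is the convergence property the sequential design is after.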

  18. Design of neural networks for fast convergence and accuracy: dynamics and control.

    PubMed

    Maghami, P G; Sparks, D R

    2000-01-01

    A procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed such that, once properly trained, they provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component/spacecraft design changes and measures of its performance or the nonlinear dynamics of the system/components. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistics-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence a new network is trained to minimize the error of the previous network. The proposed method should work for applications wherein an arbitrarily large source of training data can be generated. Two numerical examples are performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.

  19. New methodologies for multi-scale time-variant reliability analysis of complex lifeline networks

    NASA Astrophysics Data System (ADS)

    Kurtz, Nolan Scot

    The cost of maintaining existing civil infrastructure is enormous. Since the livelihood of the public depends on such infrastructure, its state must be managed appropriately using quantitative approaches. Practitioners must consider not only which components are most fragile to hazards, e.g. seismicity, storm surge, hurricane winds, etc., but also how they participate at the network level, using network analysis. Focusing on particularly damaged components does not necessarily increase network functionality, which is what matters most to the people who depend on such infrastructure. Several network analyses, e.g. S-RDA, LP-bounds, and crude MCS, and performance metrics, e.g. disconnection bounds and component importance, are available for such purposes. Because these networks are already in service, their evolution over time is also important. If networks are close to chloride sources, deterioration may be a major issue. Information from field inspections may also have large impacts on quantitative models. To address such issues, hazard risk analysis methodologies for deteriorating networks subjected to seismicity, i.e. earthquakes, have been developed analytically. A bridge component model has been constructed for these methodologies. The bridge fragilities, which were constructed from data, required a deeper level of analysis because they are specific to particular structures. Furthermore, the network-level effects of chloride-induced deterioration were investigated. Depending on how mathematical models incorporate new information, many approaches are available, such as Bayesian model updating. To make such procedures more flexible, an adaptive importance sampling scheme was created for structural reliability problems. Such a method also handles many kinds of system and component problems with single or multiple important regions of the limit-state function. These and previously developed analysis methodologies were found to be strongly sensitive to network size. 
Special network topologies may be more or less computationally difficult, and the resolution of the network also has large effects. To take advantage of some types of topologies, network hierarchical structures with super-link representation have been used in the literature to increase computational efficiency by analyzing smaller, densely connected networks; however, such structures were based on user input and were at times subjective. To address this, the algorithms must be automated and reliable. These hierarchical structures may also reveal the structure of the network itself. This risk analysis methodology has been expanded to larger networks using such automated hierarchical structures. Component importance is the most important output of such network analysis; however, it may only indicate which bridges to inspect or repair earliest and little else. High correlations influence such component importance measures in a negative manner. Additionally, a regional approach is not appropriately modeled. To take a more regional view, group importance measures based on hierarchical structures have been created. Such structures may also be used to create regional inspection and repair strategies. Using these analytical, quantitative risk approaches, the next generation of decision makers may make optimal component-level and regional decisions using information on both network function and the further effects of infrastructure deterioration.
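    Crude Monte Carlo simulation (crude MCS), one of the network analyses named above, estimates the probability that a source and sink become disconnected when components fail independently. A minimal sketch under assumed per-component failure probabilities (the network and the numbers below are invented for illustration):

```python
import random

def connected(n_nodes, edges, s, t):
    """Depth-first search: is t reachable from s over the given edges?"""
    adj = {i: [] for i in range(n_nodes)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def disconnection_probability(n_nodes, edges, p_fail, s, t,
                              samples=20000, seed=1):
    """Crude Monte Carlo: sample component failures and count
    configurations in which s and t are disconnected.
    p_fail[i] is the failure probability of edges[i]."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(samples):
        surviving = [e for e, p in zip(edges, p_fail) if rng.random() >= p]
        if not connected(n_nodes, surviving, s, t):
            fails += 1
    return fails / samples

# Two links in series (0-1, 1-2), each failing with probability 0.1:
# the exact disconnection probability is 1 - 0.9**2 = 0.19.
est = disconnection_probability(3, [(0, 1), (1, 2)], [0.1, 0.1], 0, 2)
```

    The dissertation's point about network size shows up directly here: each sample costs a full connectivity check, which is why bounding methods such as LP-bounds and hierarchical decompositions become attractive for large networks.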

  20. Network Approaches to Substance Use and HIV/Hepatitis C Risk among Homeless Youth and Adult Women in the United States: A Review

    PubMed Central

    Dombrowski, Kirk; Sittner, Kelley; Crawford, Devan; Welch-Lazoritz, Melissa; Habecker, Patrick; Khan, Bilal

    2016-01-01

    During the United States economic recession of 2008–2011, the number of homeless and unstably housed people in the United States increased considerably. Homeless adult women and unaccompanied homeless youth make up the most marginal segments of this population. Because homeless individuals are a hard-to-reach population, research into these marginal groups has traditionally been a challenge for researchers interested in substance abuse and mental health. Network analysis techniques and research strategies offer means for dealing with traditional challenges such as missing sampling frames, variation in definitions of homelessness and study inclusion criteria, and enumeration/population estimation procedures. This review focuses on the need for, and recent steps toward, solutions to these problems that involve network science strategies for data collection and analysis. Research from a range of fields is reviewed and organized according to a new stress process framework aimed at understanding how homeless status interacts with issues related to substance abuse and mental health. Three types of network innovation are discussed: network scale-up methods, a network ecology approach to social resources, and the integration of network variables into the proposed stress process model of homeless substance abuse and mental health. By employing network methods and integrating these methods into existing models, research on homeless and unstably housed women and unaccompanied young people can address existing research challenges and promote more effective intervention and care programs. PMID:28042394

  1. Network Approaches to Substance Use and HIV/Hepatitis C Risk among Homeless Youth and Adult Women in the United States: A Review.

    PubMed

    Dombrowski, Kirk; Sittner, Kelley; Crawford, Devan; Welch-Lazoritz, Melissa; Habecker, Patrick; Khan, Bilal

    2016-09-01

    During the United States economic recession of 2008-2011, the number of homeless and unstably housed people in the United States increased considerably. Homeless adult women and unaccompanied homeless youth make up the most marginal segments of this population. Because homeless individuals are a hard-to-reach population, research into these marginal groups has traditionally been a challenge for researchers interested in substance abuse and mental health. Network analysis techniques and research strategies offer means for dealing with traditional challenges such as missing sampling frames, variation in definitions of homelessness and study inclusion criteria, and enumeration/population estimation procedures. This review focuses on the need for, and recent steps toward, solutions to these problems that involve network science strategies for data collection and analysis. Research from a range of fields is reviewed and organized according to a new stress process framework aimed at understanding how homeless status interacts with issues related to substance abuse and mental health. Three types of network innovation are discussed: network scale-up methods, a network ecology approach to social resources, and the integration of network variables into the proposed stress process model of homeless substance abuse and mental health. By employing network methods and integrating these methods into existing models, research on homeless and unstably housed women and unaccompanied young people can address existing research challenges and promote more effective intervention and care programs.
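    One of the three innovations discussed, the network scale-up method, estimates the size of a hidden population from how many of its members respondents report knowing, relative to the respondents' total personal network sizes. A toy sketch of the basic estimator (all numbers below are invented; real applications adjust for transmission and barrier effects):

```python
def scale_up_estimate(hidden_alters, degrees, population):
    """Basic network scale-up estimator:
    N_hidden ~= N_total * (sum of hidden alters reported)
                        / (sum of respondents' network sizes)."""
    return population * sum(hidden_alters) / sum(degrees)

# Three respondents report knowing 2, 1, and 3 members of the hidden
# population out of personal networks of 100, 150, and 250 people,
# in a total population of 10,000.
estimate = scale_up_estimate([2, 1, 3], [100, 150, 250], 10000)
```

    The personal network sizes (degrees) are themselves typically estimated, e.g. by asking respondents how many people they know in populations of known size, which is where the hard-to-reach sampling problems discussed in the review re-enter.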

  2. Data harmonization of environmental variables: from simple to general solutions

    NASA Astrophysics Data System (ADS)

    Baume, O.

    2009-04-01

    European data platforms often contain measurements from different regional or national networks. As standards and protocols (e.g. the type of measurement devices, sensors, or measurement site classification, laboratory analysis, and post-processing methods) vary between networks, discontinuities will appear when mapping the target variable at an international scale. Standardisation is generally a costly solution and does not allow classical statistical analysis of previously reported values. As an alternative, harmonization should be envisaged as an integrated step in mapping procedures across borders. In this paper, several harmonization solutions developed under the INTAMAP FP6 project are presented. The INTAMAP FP6 project is currently developing an interoperable framework for real-time automatic mapping of critical environmental variables by extending spatial statistical methods to web-based implementations. Harmonization is often considered a pre-processing step in the statistical data analysis workflow. If biases are assessed with little knowledge about the target variable (in particular when no explanatory covariate is integrated), a harmonization procedure along borders or between regionally overlapping networks may be adopted (Skøien et al., 2007). In this case, bias is estimated as the systematic difference between line or local predictions. On the other hand, when covariates can be included in spatial prediction, the harmonization step is integrated into the whole model estimation procedure and is therefore no longer an independent pre-processing step of the automatic mapping process (Baume et al., 2007). In this case, bias factors become integrated parameters of the geostatistical model and are estimated alongside the other model parameters. 
The harmonization methods developed within the INTAMAP project were first applied within the field of radiation, where the European Radiological Data Exchange Platform (EURDEP) - http://eurdep.jrc.ec.europa.eu/ - has been active for all member states for more than a decade (de Cort and de Vries, 1997). This database contains biases because of the different network processes used in data reporting (Bossew et al., 2007). In a comparison study, monthly averaged gamma dose measurements from eight European countries were harmonized using the methods described above. Baume et al. (2008) showed that both methods yield similar results and can detect and remove bias from the EURDEP database. To broaden the potential of the methods developed within the INTAMAP project, another application example, taken from soil science, is presented in this paper. The carbon/nitrogen (C/N) ratio of forest soils is one of the best predictors for evaluating soil functions, such as those relevant to climate change studies. Although soil samples were analyzed according to a common European laboratory method, Carré et al. (2008) concluded that systematic errors are introduced in the measurements due to calibration issues and instability of the samples. The application of the harmonization procedures showed that bias could be adequately removed, although the procedures have difficulty distinguishing real differences from bias.
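    The first harmonization approach described, estimating bias as the systematic difference between networks where they overlap, can be illustrated with a toy additive-bias correction on paired co-located observations. This is a drastic simplification of the geostatistical procedures cited above, and the values are invented:

```python
def estimate_additive_bias(reference, other):
    """Estimate a network's additive bias as the mean difference
    against co-located reference observations; return the bias
    and the bias-corrected series."""
    diffs = [o - r for r, o in zip(reference, other)]
    bias = sum(diffs) / len(diffs)
    return bias, [o - bias for o in other]

# One network reads systematically 0.5 units higher at shared sites.
bias, corrected = estimate_additive_bias([1.0, 2.0, 3.0], [1.5, 2.5, 3.5])
```

    The INTAMAP approaches go further by estimating such bias terms alongside the spatial model parameters, so that real spatial gradients across a border are not mistaken for network bias, exactly the failure mode noted for the C/N application.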

  3. Thyroid Disease and Surgery in CHEER: The Nation’s Otolaryngology-Head and Neck Surgery Practice Based Network

    PubMed Central

    Parham, Kourosh; Chapurin, Nikita; Schulz, Kris; Shin, Jennifer J.; Pynnonen, Melissa A.; Witsell, David L.; Langman, Alan; Nguyen-Huynh, Anh; Ryan, Sheila E.; Vambutas, Andrea; Wolfley, Anne; Roberts, Rhonda; Lee, Walter T.

    2017-01-01

    Objectives 1) Describe thyroid-related diagnoses and procedures in CHEER across academic and community sites; 2) compare management of malignant thyroid disease across these sites; and 3) provide practice-based data related to flexible laryngoscopy vocal fold assessment before and after thyroid surgery based on AAO-HNSF Clinical Practice Guidelines. Study Design Review of the retrospective data collection (RDC) database of the CHEER network using ICD-9 and CPT codes related to thyroid conditions. Setting Multisite practice-based network. Subjects and Methods There were 3,807 thyroid patients (1,392 malignant; 2,415 benign) with 10,160 unique visits identified from 1 year of patient data in the RDC. Analysis was performed for the identified cohort of patients using demographics, site characteristics, and diagnostic and procedural distributions. Results The mean number of patients with thyroid disease per site was 238 (range 23–715). In community practices, 19% of patients with thyroid disease had cancer versus 45% in the academic setting (p<0.001). While academic sites manage more cancer patients, community sites are also surgically treating thyroid cancer, and performed more procedures per cancer patient (4.2 vs. 3.5, p<0.001). Vocal fold function was assessed by flexible laryngoscopy in 34.0% of pre-operative patients and in 3.7% post-operatively. Conclusion This is the first overview of malignant and benign thyroid disease through CHEER. It shows how the RDC can be used, alone and with national guidelines, to characterize clinical practice patterns at academic and community sites. This demonstrates the potential for future thyroid-related studies utilizing the Otolaryngology-Head and Neck Surgery practice-based research network. PMID:27371622

  4. Practice-based research networks, part II: a descriptive analysis of the athletic training practice-based research network in the secondary school setting.

    PubMed

    Valovich McLeod, Tamara C; Lam, Kenneth C; Bay, R Curtis; Sauers, Eric L; Snyder Valier, Alison R

    2012-01-01

    Analysis of health care service models requires the collection and evaluation of basic practice characterization data. Practice-based research networks (PBRNs) provide a framework for gathering data useful in characterizing clinical practice. To describe preliminary secondary school setting practice data from the Athletic Training Practice-Based Research Network (AT-PBRN). Descriptive study. Secondary school athletic training facilities within the AT-PBRN. Clinicians (n = 22) and their patients (n = 2523) from the AT-PBRN. A Web-based survey was used to obtain data on clinical practice site (CPS) and clinician characteristics. Patient and practice characteristics were obtained via deidentified electronic medical record data collected between September 1, 2009, and April 1, 2011. Descriptive data regarding the clinician and CPS practice characteristics are reported as percentages and frequencies. Descriptive analysis of patient-encounter and practice-characteristic data was performed, reporting percentages and frequencies for the types of injuries recorded at initial evaluation, the types of treatment received at initial evaluation, daily treatments, and daily sign-in procedures. The AT-PBRN had secondary school sites in 7 states, and most athletic trainers at those sites (78.2%) had fewer than 5 years of experience. The secondary school sites within the AT-PBRN documented 2523 patients treated across 3140 encounters. Patients most frequently sought care for a current injury (61.3%), followed by preventive services (24.0%), and new injuries (14.7%). The most common diagnoses were ankle sprain/strain (17.9%), hip sprain/strain (12.5%), concussion (12.0%), and knee pain (2.5%). The most frequent procedures were athletic trainer evaluation (53.9%), hot- or cold-pack application (26.0%), strapping (10.3%), and therapeutic exercise (5.7%). The median number of treatments per injury was 3 (interquartile range = 2, 4; range = 2-19). 
These preliminary data describe services provided by clinicians within the AT-PBRN and demonstrate the usefulness of the PBRN model for obtaining such data.

  5. Self-organizing linear output map (SOLO): An artificial neural network suitable for hydrologic modeling and analysis

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Lin; Gupta, Hoshin V.; Gao, Xiaogang; Sorooshian, Soroosh; Imam, Bisher

    2002-12-01

    Artificial neural networks (ANNs) can be useful in the prediction of hydrologic variables, such as streamflow, particularly when the underlying processes have complex nonlinear interrelationships. However, conventional ANN structures suffer from network training issues that significantly limit their widespread application. This paper presents a multivariate ANN procedure entitled self-organizing linear output map (SOLO), whose structure has been designed for rapid, precise, and inexpensive estimation of network structure/parameters and system outputs. More important, SOLO provides features that facilitate insight into the underlying processes, thereby extending its usefulness beyond forecast applications as a tool for scientific investigations. These characteristics are demonstrated using a classic rainfall-runoff forecasting problem. Various aspects of model performance are evaluated in comparison with other commonly used modeling approaches, including multilayer feedforward ANNs, linear time series modeling, and conceptual rainfall-runoff modeling.

  6. Methods for parameter identification in oscillatory networks and application to cortical and thalamic 600 Hz activity.

    PubMed

    Leistritz, L; Suesse, T; Haueisen, J; Hilgenfeld, B; Witte, H

    2006-01-01

    Directed information transfer in the human brain presumably occurs via oscillations. To date, most approaches for the analysis of these oscillations are based on time-frequency or coherence analysis. The present work concerns the modeling of cortical 600 Hz oscillations, localized within Brodmann Areas 3b and 1 after stimulation of the median nerve (nervus medianus), by means of coupled differential equations. This approach leads to the so-called parameter identification problem, in which, based on a given data set, a set of unknown parameters of a system of ordinary differential equations is determined by special optimization procedures. Some suitable algorithms for this task are presented in this paper. Finally, an oscillatory network model is optimally fitted to data taken from ten volunteers.
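    The parameter identification problem described, choosing ODE parameters so that the simulated trajectory matches measured data, can be sketched for a single-parameter oscillator. The model, the fixed-step integrator, and the grid search below are illustrative stand-ins for the paper's optimization procedures:

```python
def simulate(omega, x0, v0, dt, steps):
    """Midpoint (RK2) integration of the oscillator x'' = -omega^2 * x."""
    xs, x, v = [x0], x0, v0
    for _ in range(steps):
        xm = x + 0.5 * dt * v
        vm = v - 0.5 * dt * omega ** 2 * x
        x += dt * vm
        v -= dt * omega ** 2 * xm
        xs.append(x)
    return xs

def identify_omega(data, x0, v0, dt, candidates):
    """Pick the candidate frequency whose simulated trajectory
    minimizes the sum of squared errors against the data."""
    def sse(om):
        sim = simulate(om, x0, v0, dt, len(data) - 1)
        return sum((a - b) ** 2 for a, b in zip(sim, data))
    return min(candidates, key=sse)

# Synthetic data generated with a known frequency; the grid search
# over candidate frequencies recovers it.
data = simulate(3.0, 1.0, 0.0, 0.01, 200)
grid = [round(2.0 + 0.1 * i, 1) for i in range(21)]
omega_hat = identify_omega(data, 1.0, 0.0, 0.01, grid)
```

    Real identification problems of the kind in the paper involve coupled equations with many parameters, where grid search is replaced by gradient-based or other specialized optimizers, but the objective, trajectory misfit as a function of the parameters, is the same.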

  7. Signalling maps in cancer research: construction and data analysis

    PubMed Central

    Kondratova, Maria; Sompairac, Nicolas; Barillot, Emmanuel; Zinovyev, Andrei

    2018-01-01

    Abstract Generation and usage of high-quality molecular signalling network maps can be augmented by standardizing notations, establishing curation workflows and application of computational biology methods to exploit the knowledge contained in the maps. In this manuscript, we summarize the major aims and challenges of assembling information in the form of comprehensive maps of molecular interactions. Mainly, we share our experience gained while creating the Atlas of Cancer Signalling Network. In the step-by-step procedure, we describe the map construction process and suggest solutions for map complexity management by introducing a hierarchical modular map structure. In addition, we describe the NaviCell platform, a computational technology using Google Maps API to explore comprehensive molecular maps similar to geographical maps and explain the advantages of semantic zooming principles for map navigation. We also provide the outline to prepare signalling network maps for navigation using the NaviCell platform. Finally, several examples of cancer high-throughput data analysis and visualization in the context of comprehensive signalling maps are presented. PMID:29688383

  8. The application of data mining techniques to oral cancer prognosis.

    PubMed

    Tseng, Wan-Ting; Chiang, Wei-Fan; Liu, Shyun-Yeu; Roan, Jinsheng; Lin, Chun-Nan

    2015-05-01

    This study adopted an integrated procedure that combines the clustering and classification features of data mining technology to determine the differences between the symptoms shown in past cases where patients died from or survived oral cancer. Two data mining tools, namely decision tree and artificial neural network, were used to analyze the historical cases of oral cancer, and their performance was compared with that of logistic regression, the popular statistical analysis tool. Both the decision tree and artificial neural network models showed superiority over the traditional statistical model. However, for clinicians, the trees created by the decision tree models are easier to interpret than the artificial neural network models. Cluster analysis also revealed that stage 4 patients who possess the following four characteristics have an extremely low survival rate: pN is N2b, level of RLNM is level I-III, AJCC-T is T4, and cell mutation status (G) is moderate.

  9. Combined LC-MS/MS and Molecular Networking Approach Reveals New Cyanotoxins from the 2014 Cyanobacterial Bloom in Green Lake, Seattle.

    PubMed

    Teta, Roberta; Della Sala, Gerardo; Glukhov, Evgenia; Gerwick, Lena; Gerwick, William H; Mangoni, Alfonso; Costantino, Valeria

    2015-12-15

    Cyanotoxins obtained from a freshwater cyanobacterial collection at Green Lake, Seattle during a cyanobacterial harmful algal bloom in the summer of 2014 were studied using a new approach based on molecular networking analysis of liquid chromatography tandem mass spectrometry (LC-MS/MS) data. This MS networking approach is particularly well-suited for the detection of new cyanotoxin variants and resulted in the discovery of three new cyclic peptides, namely microcystin-MhtyR (6), which comprised about half of the total microcystin content in the bloom, and ferintoic acids C (12) and D (13). Structure elucidation of 6 was aided by a new microscale methylation procedure. Metagenomic analysis of the bloom using the 16S-ITS rRNA region identified Microcystis aeruginosa as the predominant cyanobacterium in the sample. Fragments of the putative biosynthetic genes for the new cyanotoxins were also identified, and their sequences correlated to the structure of the isolated cyanotoxins.
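    Molecular networking of the kind used here links MS/MS spectra whose fragmentation patterns are similar, typically via a cosine score over matched peaks, so that new toxin variants cluster with known ones. A much-simplified sketch using unit-m/z binning (real GNPS-style scoring aligns peaks with m/z tolerances and precursor-mass shifts; the spectra below are invented):

```python
import math
from collections import defaultdict

def binned(spectrum):
    """Collapse (mz, intensity) peaks into integer m/z bins."""
    v = defaultdict(float)
    for mz, intensity in spectrum:
        v[round(mz)] += intensity
    return v

def cosine(spec_a, spec_b):
    """Cosine similarity between two binned spectra."""
    a, b = binned(spec_a), binned(spec_b)
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def molecular_network(spectra, threshold=0.7):
    """Edges (i, j, score) between spectra scoring above threshold."""
    edges = []
    for i in range(len(spectra)):
        for j in range(i + 1, len(spectra)):
            score = cosine(spectra[i], spectra[j])
            if score >= threshold:
                edges.append((i, j, score))
    return edges

spectra = [
    [(100.0, 1.0), (150.0, 0.5)],
    [(100.1, 1.0), (150.05, 0.5)],  # same peaks within binning tolerance
    [(200.0, 1.0)],                 # unrelated spectrum
]
edges = molecular_network(spectra)
```

    In the study's workflow, a known microcystin and an unknown spectrum joined by such an edge flag the unknown as a likely structural variant, which is how microcystin-MhtyR and the new ferintoic acids were spotted.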

  10. Predicting ventriculoperitoneal shunt infection in children with hydrocephalus using artificial neural network.

    PubMed

    Habibi, Zohreh; Ertiaei, Abolhasan; Nikdad, Mohammad Sadegh; Mirmohseni, Atefeh Sadat; Afarideh, Mohsen; Heidari, Vahid; Saberi, Hooshang; Rezaei, Abdolreza Sheikh; Nejat, Farideh

    2016-11-01

    The relationships between shunt infection and predictive factors have not been previously investigated using an Artificial Neural Network (ANN) model. The aim of this study was to develop an ANN model to predict shunt infection in a group of children with shunted hydrocephalus. Among more than 800 ventriculoperitoneal shunt procedures that had been performed between April 2000 and April 2011, 68 patients with shunt infection and 80 controls that fulfilled a set of meticulous inclusion/exclusion criteria were consecutively enrolled. Univariate analysis was performed for a long list of risk factors, and those with p value < 0.2 were used to create ANN and logistic regression (LR) models. Five variables, including birth weight, age at the first shunting, shunt revision, prematurity, and myelomeningocele, were significantly associated with shunt infection via univariate analysis, and two other variables (intraventricular hemorrhage and coincident infections) had a p value of less than 0.2. Using these seven input variables, the ANN and LR models predicted shunt infection with an accuracy of 83.1 % (AUC; 91.98 %, 95 % CI) and 55.7 % (AUC; 76.5, 95 % CI), respectively. The contribution of the factors to the predictive performance of the ANN, in descending order, was history of shunt revision, low birth weight (under 2000 g), history of prematurity, age at the first shunt procedure, history of intraventricular hemorrhage, history of myelomeningocele, and coinfection. The findings show that artificial neural networks can predict shunt infection with a high level of accuracy in children with shunted hydrocephalus. Also, the contribution of different risk factors to the prediction of shunt infection can be determined using the trained network.

  11. Variable-free exploration of stochastic models: a gene regulatory network example.

    PubMed

    Erban, Radek; Frewen, Thomas A; Wang, Xiao; Elston, Timothy C; Coifman, Ronald; Nadler, Boaz; Kevrekidis, Ioannis G

    2007-04-21

    Finding coarse-grained, low-dimensional descriptions is an important task in the analysis of complex, stochastic models of gene regulatory networks. This task involves (a) identifying observables that best describe the state of these complex systems and (b) characterizing the dynamics of the observables. In a previous paper [R. Erban et al., J. Chem. Phys. 124, 084106 (2006)] the authors assumed that good observables were known a priori, and presented an equation-free approach to approximate coarse-grained quantities (i.e., effective drift and diffusion coefficients) that characterize the long-time behavior of the observables. Here we use diffusion maps [R. Coifman et al., Proc. Natl. Acad. Sci. U.S.A. 102, 7426 (2005)] to extract appropriate observables ("reduction coordinates") in an automated fashion; these involve the leading eigenvectors of a weighted Laplacian on a graph constructed from network simulation data. We present lifting and restriction procedures for translating between physical variables and these data-based observables. These procedures allow us to perform equation-free, coarse-grained computations characterizing the long-term dynamics through the design and processing of short bursts of stochastic simulation initialized at appropriate values of the data-based observables.
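    The diffusion-map construction referenced, leading eigenvectors of a weighted Laplacian on a graph built from simulation data, can be sketched in a few lines. Below, a Gaussian affinity matrix is row-normalized into a Markov matrix, and power iteration (with the trivial constant eigenvector repeatedly projected out) approximates the first nontrivial eigenvector; on two well-separated clusters this "reduction coordinate" separates them by sign. This is an illustrative simplification, not the authors' implementation:

```python
import math

def diffusion_coordinate(points, eps=1.0, iters=300):
    """Approximate the first nontrivial eigenvector of the
    row-normalized Gaussian-affinity (Markov) matrix by power
    iteration with the constant direction projected out."""
    n = len(points)
    W = [[math.exp(-sum((a - b) ** 2 for a, b in zip(points[i], points[j])) / eps)
          for j in range(n)] for i in range(n)]
    rowsum = [sum(row) for row in W]
    P = [[W[i][j] / rowsum[i] for j in range(n)] for i in range(n)]
    v = [(-1.0) ** i for i in range(n)]  # arbitrary non-constant start
    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]  # remove the constant component
        v = [sum(P[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        v = [x / norm for x in v]
    return v

# Two clusters of 1-D "simulation states"; the diffusion coordinate
# splits them by sign.
points = [(0.0,), (0.1,), (0.2,), (5.0,), (5.1,), (5.2,)]
coord = diffusion_coordinate(points)
```

    In the paper's setting, the points are snapshots of stochastic gene-network simulations, and several such eigenvectors serve as the data-based observables between which the lifting and restriction procedures translate.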

  12. Computation of Steady-State Probability Distributions in Stochastic Models of Cellular Networks

    PubMed Central

    Hallen, Mark; Li, Bochong; Tanouchi, Yu; Tan, Cheemeng; West, Mike; You, Lingchong

    2011-01-01

    Cellular processes are “noisy”. In each cell, concentrations of molecules are subject to random fluctuations due to the small numbers of these molecules and to environmental perturbations. While noise varies with time, it is often measured at steady state, for example by flow cytometry. When interrogating aspects of a cellular network by such steady-state measurements of network components, a key need is to develop efficient methods to simulate and compute these distributions. We describe innovations in stochastic modeling coupled with approaches to this computational challenge: first, an approach to modeling intrinsic noise via solution of the chemical master equation, and second, a convolution technique to account for contributions of extrinsic noise. We show how these techniques can be combined in a streamlined procedure for evaluation of different sources of variability in a biochemical network. Evaluation and illustrations are given in analysis of two well-characterized synthetic gene circuits, as well as a signaling network underlying the mammalian cell cycle entry. PMID:22022252
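    For the simplest case, a single species produced at a constant rate and degraded by a first-order reaction, the steady state of the chemical master equation can be computed exactly from the balance between adjacent states, and it is Poisson with mean k/γ. A minimal sketch of that special case (the paper's method handles far larger networks and adds a convolution step for extrinsic noise, neither of which is shown):

```python
import math

def birth_death_steady_state(k, gamma, nmax):
    """Steady-state CME distribution for production at rate k and
    degradation at rate gamma*n, from the detailed-balance recursion
    k * p[n] = gamma * (n + 1) * p[n + 1], truncated at nmax."""
    p = [1.0]
    for n in range(nmax):
        p.append(p[n] * k / (gamma * (n + 1)))
    z = sum(p)  # normalize the truncated distribution
    return [x / z for x in p]

# Production rate 2, unit degradation rate: Poisson with mean 2.
p = birth_death_steady_state(2.0, 1.0, 30)
mean = sum(n * pn for n, pn in enumerate(p))
```

    Such a distribution is exactly what a steady-state flow-cytometry histogram of a single network component would be compared against when evaluating intrinsic-noise models.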

  13. An open source high-performance solution to extract surface water drainage networks from diverse terrain conditions

    USGS Publications Warehouse

    Stanislawski, Larry V.; Survila, Kornelijus; Wendel, Jeffrey; Liu, Yan; Buttenfield, Barbara P.

    2018-01-01

    This paper describes a workflow for automating the extraction of elevation-derived stream lines using open source tools with parallel computing support and testing the effectiveness of procedures in various terrain conditions within the conterminous United States. Drainage networks are extracted from the US Geological Survey 1/3 arc-second 3D Elevation Program elevation data having a nominal cell size of 10 m. This research demonstrates the utility of open source tools with parallel computing support for extracting connected drainage network patterns and handling depressions in 30 subbasins distributed across humid, dry, and transitional climate regions and in terrain conditions exhibiting a range of slopes. Special attention is given to low-slope terrain, where network connectivity is preserved by generating synthetic stream channels through lake and waterbody polygons. Conflation analysis compares the extracted streams with a 1:24,000-scale National Hydrography Dataset flowline network and shows that similarities are greatest for second- and higher-order tributaries.

  14. ODIN. Online Database Information Network: ODIN Policy & Procedure Manual.

    ERIC Educational Resources Information Center

    Townley, Charles T.; And Others

    Policies and procedures are outlined for the Online Database Information Network (ODIN), a cooperative of libraries in south-central Pennsylvania, which was organized to improve library services through technology. The first section covers organization and goals, members, and responsibilities of the administrative council and libraries. Patrons…

  15. Resting-State Functional Magnetic Resonance Imaging for Language Preoperative Planning

    PubMed Central

    Branco, Paulo; Seixas, Daniela; Deprez, Sabine; Kovacs, Silvia; Peeters, Ronald; Castro, São L.; Sunaert, Stefan

    2016-01-01

    Functional magnetic resonance imaging (fMRI) is a well-known non-invasive technique for the study of brain function. One of its most common clinical applications is preoperative language mapping, essential for the preservation of function in neurosurgical patients. Typically, fMRI is used to track task-related activity, but poor task performance and movement artifacts can be critical limitations in clinical settings. Recent advances in resting-state protocols open new possibilities for pre-surgical mapping of language, potentially overcoming these limitations. To test the feasibility of using resting-state fMRI instead of conventional active task-based protocols, we compared results from fifteen patients with brain lesions while performing a verb-to-noun generation task and while at rest. Task activity was measured using a general linear model analysis and independent component analysis (ICA). Resting-state networks were extracted using ICA and further classified in two ways: manually by an expert and by using an automated template matching procedure. The results revealed that the automated classification procedure correctly identified language networks as compared to the expert manual classification. We found a good overlap between task-related activity and resting-state language maps, particularly within the language regions of interest. Furthermore, resting-state language maps were as sensitive as task-related maps, and had higher specificity. Our findings suggest that resting-state protocols may be suitable to map language networks in a quick and clinically efficient way. PMID:26869899
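
    The automated template-matching idea can be sketched simply: among candidate ICA component maps, select the one whose spatial correlation with a language-network template is highest. The arrays and names below are illustrative stand-ins, not the authors' pipeline:

```python
import numpy as np

# Pick the component map best matching a template by Pearson correlation.
def match_component(components, template):
    """Return (index of best-matching component, all correlation scores)."""
    flat_t = template.ravel()
    scores = [np.corrcoef(c.ravel(), flat_t)[0, 1] for c in components]
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(0)
template = rng.random((8, 8))                     # toy "language template"
noise_a = rng.random((8, 8))                      # unrelated component
noise_b = rng.random((8, 8))                      # unrelated component
language_like = template + 0.1 * rng.random((8, 8))  # resembles template
best, scores = match_component([noise_a, noise_b, language_like], template)
```

Real pipelines correlate thresholded statistical maps in a common anatomical space, but the selection rule is the same.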

  16. Analysis of Carbamate Pesticides: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS666

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, J; Koester, C

    The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method for analysis of aldicarb, bromadiolone, carbofuran, oxamyl, and methomyl in water by high performance liquid chromatography tandem mass spectrometry (HPLC-MS/MS), titled Method EPA MS666. This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to validate and verify the analytical procedures described in MS666 for analysis of carbamate pesticides in aqueous samples. The gathered data from this validation study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of Method EPA MS666 can be determined.

  17. Analysis of Ethanolamines: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS888

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, J; Vu, A; Koester, C

    The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method titled 'Analysis of Diethanolamine, Triethanolamine, n-Methyldiethanolamine, and n-Ethyldiethanolamine in Water by Single Reaction Monitoring Liquid Chromatography/Tandem Mass Spectrometry (LC/MS/MS): EPA Method MS888'. This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to validate and verify the analytical procedures described in 'EPA Method MS888' for analysis of the listed ethanolamines in aqueous samples. The gathered data from this validation study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of 'EPA Method MS888' can be determined.

  18. Analysis of Thiodiglycol: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS777

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, J; Koester, C

    The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method for the analysis of thiodiglycol, the breakdown product of the sulfur mustard HD, in water by high performance liquid chromatography tandem mass spectrometry (HPLC-MS/MS), titled Method EPA MS777 (hereafter referred to as EPA CRL SOP MS777). This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to verify the analytical procedures described in MS777 for analysis of thiodiglycol in aqueous samples. The gathered data from this study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of Method EPA MS777 can be determined.

  19. The Accounting Network: How Financial Institutions React to Systemic Crisis

    PubMed Central

    Puliga, Michelangelo; Flori, Andrea; Pappalardo, Giuseppe; Chessa, Alessandro; Pammolli, Fabio

    2016-01-01

    The role of Network Theory in the study of the financial crisis has received wide attention in recent years. It has been shown how the network topology and the dynamics running on top of it can trigger the outbreak of large systemic crises. Following this methodological perspective we introduce here the Accounting Network, i.e. the network we can extract through vector similarity techniques from companies’ financial statements. We build the Accounting Network on a large database of worldwide banks in the period 2001–2013, covering the onset of the global financial crisis of mid-2007. After a careful data cleaning, we apply a quality check in the construction of the network, introducing a parameter (the Quality Ratio) capable of trading off the size of the sample (coverage) and the representativeness of the financial statements (accuracy). We compute several basic network statistics and check, with the Louvain community detection algorithm, for emerging communities of banks. Remarkably enough, sensible regional aggregations show up with the Japanese and the US clusters dominating the community structure, although the presence of a geographically mixed community points to a gradual convergence of banks into similar supranational practices. Finally, a Principal Component Analysis procedure reveals the main economic components that influence communities’ heterogeneity. Even using the most basic vector similarity hypotheses on the composition of the financial statements, the signature of the financial crisis clearly arises across the years around 2008. We finally discuss how the Accounting Networks can be improved to reflect the best practices in financial statement analysis. PMID:27736865
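
    The basic construction, linking institutions whose statement vectors point in similar directions, can be sketched with cosine similarity and a threshold. The three toy "banks" and the 0.9 cutoff below are assumptions for illustration only:

```python
import numpy as np

# Build an adjacency matrix from cosine similarities between normalized
# financial-statement vectors; edges connect sufficiently similar banks.
def accounting_network(statements, threshold=0.9):
    X = np.asarray(statements, dtype=float)
    U = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = U @ U.T                          # cosine similarity matrix
    A = (S >= threshold).astype(int)
    np.fill_diagonal(A, 0)               # no self-loops
    return A

# Three banks: the first two have near-proportional statements.
banks = [[10, 5, 2], [20, 10, 4.2], [1, 40, 0.5]]
A = accounting_network(banks)
```

Community detection (e.g. Louvain) would then run on this adjacency matrix.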

  20. The Accounting Network: How Financial Institutions React to Systemic Crisis.

    PubMed

    Puliga, Michelangelo; Flori, Andrea; Pappalardo, Giuseppe; Chessa, Alessandro; Pammolli, Fabio

    2016-01-01

    The role of Network Theory in the study of the financial crisis has received wide attention in recent years. It has been shown how the network topology and the dynamics running on top of it can trigger the outbreak of large systemic crises. Following this methodological perspective we introduce here the Accounting Network, i.e. the network we can extract through vector similarity techniques from companies' financial statements. We build the Accounting Network on a large database of worldwide banks in the period 2001-2013, covering the onset of the global financial crisis of mid-2007. After a careful data cleaning, we apply a quality check in the construction of the network, introducing a parameter (the Quality Ratio) capable of trading off the size of the sample (coverage) and the representativeness of the financial statements (accuracy). We compute several basic network statistics and check, with the Louvain community detection algorithm, for emerging communities of banks. Remarkably enough, sensible regional aggregations show up with the Japanese and the US clusters dominating the community structure, although the presence of a geographically mixed community points to a gradual convergence of banks into similar supranational practices. Finally, a Principal Component Analysis procedure reveals the main economic components that influence communities' heterogeneity. Even using the most basic vector similarity hypotheses on the composition of the financial statements, the signature of the financial crisis clearly arises across the years around 2008. We finally discuss how the Accounting Networks can be improved to reflect the best practices in financial statement analysis.

  1. Systematic Site Characterization at Seismic Stations combined with Empirical Spectral Modeling: critical data for local hazard analysis

    NASA Astrophysics Data System (ADS)

    Michel, Clotaire; Hobiger, Manuel; Edwards, Benjamin; Poggi, Valerio; Burjanek, Jan; Cauzzi, Carlo; Kästli, Philipp; Fäh, Donat

    2016-04-01

    The Swiss Seismological Service operates one of the densest national seismic networks in the world, still rapidly expanding (see http://www.seismo.ethz.ch/monitor/index_EN). Since 2009, every newly instrumented site is characterized following an established procedure to derive realistic 1D VS velocity profiles. In addition, empirical Fourier spectral modeling is performed on the whole network for each recorded event with sufficient signal-to-noise ratio. Besides the source characteristics of the earthquakes, statistical real-time analyses of the residuals of the spectral modeling provide a seamlessly updated amplification function with respect to Swiss rock conditions at every station. Our site characterization procedure is mainly based on the analysis of surface waves from passive experiments and includes cross-checks of the derived amplification functions with those obtained through spectral modeling. The systematic use of three-component surface-wave analysis, allowing the derivation of both Rayleigh and Love wave dispersion curves, also contributes to the improved quality of the retrieved profiles. The results of site characterization activities at recently installed strong-motion stations depict the large variety of possible effects of surface geology on ground motion in the Alpine context. Such effects range from de-amplification at hard-rock sites to amplification up to a factor of 15 in lacustrine sediments with respect to the Swiss reference rock velocity model. The derived velocity profiles are shown to reproduce observed amplification functions from empirical spectral modeling. Although many sites are found to exhibit 1D behavior, our procedure allows the detection and qualification of 2D and 3D effects. All data collected during the site characterization procedures in the last 20 years are gathered in a database, implementing a data model proposed for community use at the European scale through NERA and EPOS (www.epos-eu.org). A web station book derived from it can be accessed through the interface www.stations.seismo.ethz.ch.

  2. EDRN Standard Operating Procedures (SOP) — EDRN Public Portal

    Cancer.gov

    The NCI’s Early Detection Research Network is developing a number of standard operating procedures for assays, methods, and protocols for the collection and processing of biological samples, and other reference materials to assist investigators in conducting experiments in a consistent, reliable manner. These SOPs are established by the investigators of the Early Detection Research Network to maintain consistency throughout the Network. These SOPs neither represent a consensus nor constitute recommendations of the NCI.

  3. Detection of septicemia in chicken livers by spectroscopy.

    PubMed

    Dey, B P; Chen, Y R; Hsieh, C; Chan, D E

    2003-02-01

    To establish a procedure for differentiating normal chickens from chickens with septicemia/toxemia (septox) by machine inspection under the Hazard Analysis and Critical Control Point-Based Inspection Models Project, spectral measurements of 300 chicken livers, of which half were normal and half were condemned due to septox conditions, were collected and analyzed. Neural network classification of the spectral data after principal component analysis (PCA) indicated that normal and septox livers were correctly differentiated by spectroscopy at a rate of 96%. Analysis of the data established 100% correlation between the spectroscopic identification and the subset of samples, both normal and septox, that were histopathologically diagnosed. In an attempt to establish the microbiological etiology of the diseased livers, isolates from 30 livers indicated that the poultry carcasses were contaminated mostly with coliforms present in the environment, hindering the isolation of pathogenic microorganisms. Therefore, to establish the cause of diseased livers, a strictly aseptic environment and procedure for sample collection is required.
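
    The analysis chain above, dimensionality reduction by PCA followed by classification, can be illustrated with toy "spectra". As a stand-in for the paper's neural network, the sketch below uses a nearest-centroid rule on the leading principal-component scores; all data and class structure here are invented:

```python
import numpy as np

# Toy spectra: two classes differing by opposite linear baselines.
rng = np.random.default_rng(1)
normal = rng.normal(0.0, 0.2, (20, 50)) + np.linspace(0, 1, 50)   # class 0
septox = rng.normal(0.0, 0.2, (20, 50)) + np.linspace(1, 0, 50)   # class 1
X = np.vstack([normal, septox])
y = np.array([0] * 20 + [1] * 20)

# PCA via SVD of the mean-centered data; keep the first 3 components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                       # scores on the first 3 PCs

# Nearest-centroid classification in PC space.
centroids = np.array([Z[y == k].mean(axis=0) for k in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

Swapping the centroid rule for a trained neural network on the same PC scores recovers the structure of the paper's pipeline.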

  4. Video movie making using remote procedure calls and 4BSD Unix sockets on Unix, UNICOS, and MS-DOS systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, D.W.; Johnston, W.E.; Hall, D.E.

    1990-03-01

    We describe the use of the Sun Remote Procedure Call and Unix socket interprocess communication mechanisms to provide the network transport for a distributed, client-server based, image handling system. Clients run under Unix or UNICOS and servers run under Unix or MS-DOS. The use of remote procedure calls across local or wide-area networks to make video movies is addressed.

  5. A neural network approach to cloud classification

    NASA Technical Reports Server (NTRS)

    Lee, Jonathan; Weger, Ronald C.; Sengupta, Sailes K.; Welch, Ronald M.

    1990-01-01

    It is shown that, using high-spatial-resolution data, very high cloud classification accuracies can be obtained with a neural network approach. A texture-based neural network classifier using only single-channel visible Landsat MSS imagery achieves an overall cloud identification accuracy of 93 percent. Cirrus can be distinguished from boundary layer cloudiness with an accuracy of 96 percent, without the use of an infrared channel. Stratocumulus is retrieved with an accuracy of 92 percent, cumulus at 90 percent. The use of the neural network does not improve cirrus classification accuracy. Rather, its main effect is in the improved separation between stratocumulus and cumulus cloudiness. While most cloud classification algorithms rely on linear parametric schemes, the present study is based on a nonlinear, nonparametric four-layer neural network approach. A three-layer neural network architecture, the nonparametric K-nearest neighbor approach, and the linear stepwise discriminant analysis procedure are compared. A notable finding is that significantly higher accuracies are attained with the nonparametric approaches using only 20 percent of the database as training data, compared to 67 percent of the database in the linear approach.
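
    The nonparametric K-nearest-neighbor baseline compared above is easy to state concretely: classify each sample by majority vote among its k closest training points. A minimal sketch on invented 2-D "texture features" (not the Landsat feature set):

```python
import numpy as np

# Majority vote among the k nearest training samples (Euclidean distance).
def knn_predict(train_X, train_y, query, k=3):
    d = np.linalg.norm(train_X - query, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())

train_X = np.array([[0., 0.], [0., 1.], [1., 0.],     # class 0 cluster
                    [5., 5.], [5., 6.], [6., 5.]])    # class 1 cluster
train_y = np.array([0, 0, 0, 1, 1, 1])
label = knn_predict(train_X, train_y, np.array([5.5, 5.5]))
```

No training phase is needed, which is one reason such nonparametric rules can do well with small training fractions.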

  6. Flood analysis in mixed-urban areas reflecting interactions with the complete water cycle through coupled hydrologic-hydraulic modelling.

    PubMed

    Sto Domingo, N D; Refsgaard, A; Mark, O; Paludan, B

    2010-01-01

    The potential devastating effects of urban flooding have given high importance to thorough understanding and management of water movement within catchments, and computer modelling tools have found widespread use for this purpose. The state-of-the-art in urban flood modelling is the use of a coupled 1D pipe and 2D overland flow model to simultaneously represent pipe and surface flows. This method has been found to be accurate for highly paved areas, but inappropriate when land hydrology is important. The objectives of this study are to introduce a new urban flood modelling procedure that is able to reflect system interactions with hydrology, verify that the new procedure operates well, and underline the importance of considering the complete water cycle in urban flood analysis. A physically-based and distributed hydrological model was linked to a drainage network model for urban flood analysis, and the essential components and concepts used were described in this study. The procedure was then applied to a catchment previously modelled with the traditional 1D-2D procedure to determine if the new method performs similarly well. Then, results from applying the new method in a mixed-urban area were analyzed to determine how important hydrologic contributions are to flooding in the area.

  7. An Observing System Simulation Experiment Approach to Meteorological Network Assessment

    NASA Astrophysics Data System (ADS)

    Abbasnezhadi, K.; Rasmussen, P. F.; Stadnyk, T.; Boluwade, A.

    2016-12-01

    Proper knowledge of the spatiotemporal distribution of rainfall is important for a careful investigation of water movement and storage throughout a catchment. Currently, the most accurate precipitation information available for the remote boreal ecozones of northern Manitoba comes from the Canadian Precipitation Analysis (CaPA) data assimilation system. Throughout the Churchill River Basin (CRB), CaPA still lacks adequate skill owing to the limited number of weather stations. A new approach to experimental network design was investigated based on the concept of an Observing System Simulation Experiment (OSSE). The OSSE-based network assessment procedure, which emulates the CaPA system, provides a scientific and hydrologically meaningful tool to assess the sensitivity of the CaPA precipitation analysis to observation network density throughout the CRB. To emulate the CaPA system, synthetic background and station data were generated by adding spatially uncorrelated and spatially correlated Gaussian noise, respectively, to an assumed-true daily weather field synthesized by a gridded precipitation generator. Given the true reference field on one hand and a set of pseudo-CaPA analyses associated with different network realizations on the other, a WATFLOOD hydrological model was employed to compare the modeled runoff. The simulations showed that as network density increases, the accuracy of CaPA precipitation products improves up to a certain limit, beyond which adding more stations to the network yields no further gain in accuracy.
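
    The OSSE setup described above can be sketched in a few lines: a synthetic "true" field, a pseudo-background perturbed by spatially uncorrelated noise, and pseudo-station data perturbed by spatially correlated noise (here produced by smoothing white noise). Field sizes, distributions, and noise scales are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
truth = rng.gamma(2.0, 3.0, (50, 50))        # assumed-true daily field

# Pseudo-background: add spatially uncorrelated Gaussian errors.
background = truth + rng.normal(0.0, 1.0, truth.shape)

# Pseudo-stations: correlate white noise via a 3x3 moving average,
# which couples neighboring grid cells (periodic boundaries for brevity).
white = rng.normal(0.0, 1.0, truth.shape)
kernel = np.ones((3, 3)) / 9.0
correlated = sum(
    np.roll(np.roll(white, dr, 0), dc, 1) * kernel[dr + 1, dc + 1]
    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
)
stations = truth + correlated
```

Comparing analyses built from such pseudo-observations against the known truth is what lets an OSSE score network densities objectively.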

  8. Representing operations procedures using temporal dependency networks

    NASA Technical Reports Server (NTRS)

    Fayyad, Kristina E.; Cooper, Lynne P.

    1993-01-01

    DSN Link Monitor & Control (LMC) operations consist primarily of executing procedures to configure, calibrate, test, and operate a communications link between an interplanetary spacecraft and its mission control center. Currently the LMC operators are responsible for integrating procedures into an end-to-end series of steps. The research presented in this paper is investigating new ways of specifying operations procedures that incorporate the insight of operations, engineering, and science personnel to improve mission operations. The paper describes the rationale for using Temporal Dependency Networks (TDN's) to represent the procedures, a description of how the data is acquired, and the knowledge engineering effort required to represent operations procedures. Results of operational tests of this concept, as implemented in the LMC Operator Assistant Prototype (LMCOA), are also presented.

  9. Distributed semantic networks and CLIPS

    NASA Technical Reports Server (NTRS)

    Snyder, James; Rodriguez, Tony

    1991-01-01

    Semantic networks of frames are commonly used as a method of reasoning in many problems. In most of these applications the semantic network exists as a single entity in a single process environment. Advances in workstation hardware provide support for more sophisticated applications involving multiple processes, interacting in a distributed environment. In these applications the semantic network may well be distributed over several concurrently executing tasks. This paper describes the design and implementation of a frame based, distributed semantic network in which frames are accessed both through C Language Integrated Production System (CLIPS) expert systems and procedural C++ language programs. The application area is a knowledge based, cooperative decision making model utilizing both rule based and procedural experts.

  10. Self-Organizing Maps and Parton Distribution Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    K. Holcomb, Simonetta Liuti, D. Z. Perry

    2011-05-01

    We present a new method to extract parton distribution functions from high energy experimental data based on a specific type of neural networks, the Self-Organizing Maps. We illustrate the features of our new procedure that are particularly useful for an analysis directed at extracting generalized parton distributions from data. We show quantitative results of our initial analysis of the parton distribution functions from inclusive deep inelastic scattering.

  11. Incorporating social impact on new product adoption in choice modeling: A case study in green vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Lin; Wang, Mingxian; Chen, Wei

    While discrete choice analysis is prevalent in capturing consumer preferences and describing their choice behaviors in product design, the traditional choice modeling approach assumes that each individual makes independent decisions, without considering the social impact. However, empirical studies show that choice is social - influenced by many factors beyond engineering performance of a product and consumer attributes. To alleviate this limitation, we propose a new choice modeling framework to capture the dynamic influence from social networks on consumer adoption of new products. By introducing social influence attributes into a choice utility function, social network simulation is integrated with the traditional discrete choice analysis in a three-stage process. Our study shows the need for considering social impact in forecasting new product adoption. Using hybrid electric vehicles as an example, our work illustrates the procedure of social network construction, social influence evaluation, and choice model estimation based on data from the National Household Travel Survey. Our study also demonstrates several interesting findings on the dynamic nature of new technology adoption and how social networks may influence hybrid electric vehicle adoption.
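
    The key modeling move, adding a social-influence attribute to the choice utility, can be sketched with a binary logit in which the new product's utility gains a term proportional to its adoption share among a consumer's network neighbors. The coefficients below are hypothetical, not estimated values from the paper:

```python
import numpy as np

# Binary logit with a hypothetical social-influence term: utility of the
# new product rises with the adoption share among network neighbors.
def choice_prob(price_diff, neighbor_share, b_price=-0.8, b_social=1.5):
    """Probability of choosing the new product."""
    u = b_price * price_diff + b_social * neighbor_share
    return 1.0 / (1.0 + np.exp(-u))

p_isolated = choice_prob(price_diff=1.0, neighbor_share=0.0)   # no adopters nearby
p_connected = choice_prob(price_diff=1.0, neighbor_share=0.6)  # many adopters nearby
```

Iterating this rule over a simulated social network, updating neighbor shares each round, gives the kind of dynamic adoption forecast the framework describes.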

  12. Organic cattle products: Authenticating production origin by analysis of serum mineral content.

    PubMed

    Rodríguez-Bermúdez, Ruth; Herrero-Latorre, Carlos; López-Alonso, Marta; Losada, David E; Iglesias, Roberto; Miranda, Marta

    2018-10-30

    An authentication procedure for differentiating between organic and non-organic cattle production on the basis of analysis of serum samples has been developed. For this purpose, the concentrations of fourteen mineral elements (As, Cd, Co, Cr, Cu, Fe, Hg, I, Mn, Mo, Ni, Pb, Se and Zn) in 522 serum samples from cows (341 from organic farms and 181 from non-organic farms), determined by inductively coupled plasma spectrometry, were used. The chemical information provided by serum analysis was employed to construct different pattern recognition classification models that predict the origin of each sample: organic or non-organic class. Among all classification procedures considered, the best results were obtained with the decision tree C5.0, Random Forest and AdaBoost neural networks, with hit levels close to 90% for both production types. The proposed method, involving analysis of serum samples, provided rapid, accurate in vivo classification of cattle according to organic and non-organic production type.

  13. Sensitivity of surface meteorological analyses to observation networks

    NASA Astrophysics Data System (ADS)

    Tyndall, Daniel Paul

    A computationally efficient variational analysis system for two-dimensional meteorological fields is developed and described. This analysis approach is most efficient when the number of analysis grid points is much larger than the number of available observations, such as for large domain mesoscale analyses. The analysis system is developed using MATLAB software and can take advantage of multiple processors or processor cores. A version of the analysis system has been exported as a platform independent application (i.e., can be run on Windows, Linux, or Macintosh OS X desktop computers without a MATLAB license) with input/output operations handled by commonly available internet software combined with data archives at the University of Utah. The impact of observation networks on the meteorological analyses is assessed by utilizing a percentile ranking of individual observation sensitivity and impact, which is computed by using the adjoint of the variational surface assimilation system. This methodology is demonstrated using a case study of the analysis from 1400 UTC 27 October 2010 over the entire contiguous United States domain. The sensitivity of this approach to the dependence of the background error covariance on observation density is examined. Observation sensitivity and impact provide insight on the influence of observations from heterogeneous observing networks as well as serve as objective metrics for quality control procedures that may help to identify stations with significant siting, reporting, or representativeness issues.

  14. The Effects of Procedural Knowledge Transparency on Adoption in Corporate Social Networks

    ERIC Educational Resources Information Center

    Jensen, Bjoern J. M.

    2017-01-01

    This dissertation investigated how a certain type of organizational knowledge sharing, procedural knowledge transparency, affected innovation adoption rates of members of a corporate social network within a large Scandinavian organization, in its two years of activity. It also explored the mediation of these effects by different types of…

  15. User Procedures Standardization for Network Access. NBS Technical Note 799.

    ERIC Educational Resources Information Center

    Neumann, A. J.

    User access procedures to information systems have become of crucial importance with the advent of computer networks, which have opened new types of resources to a broad spectrum of users. This report surveys user access protocols of six representative systems: BASIC, GE MK II, INFONET, MEDLINE, NIC/ARPANET and SPIRES. Functional access…

  16. 47 CFR 25.261 - Procedures for avoidance of in-line interference events for Non Geostationary Satellite Orbit...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 2 2014-10-01 2014-10-01 false Procedures for avoidance of in-line interference events for Non Geostationary Satellite Orbit (NGSO) Satellite Network Operations in the Fixed... avoidance of in-line interference events for Non Geostationary Satellite Orbit (NGSO) Satellite Network...

  17. 47 CFR 25.261 - Procedures for avoidance of in-line interference events for Non Geostationary Satellite Orbit...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 2 2012-10-01 2012-10-01 false Procedures for avoidance of in-line interference events for Non Geostationary Satellite Orbit (NGSO) Satellite Network Operations in the Fixed... avoidance of in-line interference events for Non Geostationary Satellite Orbit (NGSO) Satellite Network...

  18. 47 CFR 25.261 - Procedures for avoidance of in-line interference events for Non Geostationary Satellite Orbit...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 2 2013-10-01 2013-10-01 false Procedures for avoidance of in-line interference events for Non Geostationary Satellite Orbit (NGSO) Satellite Network Operations in the Fixed... avoidance of in-line interference events for Non Geostationary Satellite Orbit (NGSO) Satellite Network...

  19. Experimental demonstration of software defined data center optical networks with Tbps end-to-end tunability

    NASA Astrophysics Data System (ADS)

    Zhao, Yongli; Zhang, Jie; Ji, Yuefeng; Li, Hui; Wang, Huitao; Ge, Chao

    2015-10-01

    End-to-end tunability is important for provisioning elastic channels for the burst traffic of data center optical networks. How, then, can end-to-end tunability be achieved on elastic optical networks? A software defined networking (SDN) based end-to-end tunability solution is proposed for software defined data center optical networks, and the protocol extension and implementation procedure are designed accordingly. For the first time, flexible grid all-optical networks with a Tbps end-to-end tunable transport and switch system have been demonstrated online for data center interconnection, controlled by an OpenDayLight (ODL) based controller. The performance of the end-to-end tunable transport and switch system has been evaluated with wavelength number tuning, bit rate tuning, and transmit power tuning procedures.

  20. Heart Rate Variability Dynamics for the Prognosis of Cardiovascular Risk

    PubMed Central

    Ramirez-Villegas, Juan F.; Lam-Espinosa, Eric; Ramirez-Moreno, David F.; Calvo-Echeverry, Paulo C.; Agredo-Rodriguez, Wilfredo

    2011-01-01

    Statistical, spectral, multi-resolution and non-linear methods were applied to heart rate variability (HRV) series linked with classification schemes for the prognosis of cardiovascular risk. A total of 90 HRV records were analyzed: 45 from healthy subjects and 45 from cardiovascular risk patients. A total of 52 features from all the analysis methods were evaluated using standard two-sample Kolmogorov-Smirnov test (KS-test). The results of the statistical procedure provided input to multi-layer perceptron (MLP) neural networks, radial basis function (RBF) neural networks and support vector machines (SVM) for data classification. These schemes showed high performances with both training and test sets and many combinations of features (with a maximum accuracy of 96.67%). Additionally, there was a strong consideration for breathing frequency as a relevant feature in the HRV analysis. PMID:21386966
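
    The feature-screening step above uses the two-sample Kolmogorov-Smirnov statistic: the maximum gap between the two empirical CDFs. A NumPy-only sketch (toy inputs; a full test would also compute the p-value):

```python
import numpy as np

# Two-sample KS statistic: max |F_a(x) - F_b(x)| over the pooled samples.
def ks_statistic(a, b):
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return np.abs(cdf_a - cdf_b).max()

# Identical samples give 0; fully separated samples give the maximum, 1.
same = ks_statistic(np.arange(10.0), np.arange(10.0))
disjoint = ks_statistic(np.arange(10.0), np.arange(10.0) + 100.0)
```

Features whose healthy-vs-risk KS statistics are large (and significant) are the ones passed on to the MLP, RBF, and SVM classifiers.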

  1. Modeling the resilience of critical infrastructure: the role of network dependencies.

    PubMed

    Guidotti, Roberto; Chmielewski, Hana; Unnikrishnan, Vipin; Gardoni, Paolo; McAllister, Therese; van de Lindt, John

    2016-01-01

    Water and wastewater network, electric power network, transportation network, communication network, and information technology network are among the critical infrastructure in our communities; their disruption during and after hazard events greatly affects communities' well-being, economic security, social welfare, and public health. In addition, a disruption in one network may cause disruption to other networks and lead to their reduced functionality. This paper presents a unified theoretical methodology for the modeling of dependent/interdependent infrastructure networks and incorporates it in a six-step probabilistic procedure to assess their resilience. Both the methodology and the procedure are general, can be applied to any infrastructure network and hazard, and can model different types of dependencies between networks. As an illustration, the paper models the direct effects of seismic events on the functionality of a potable water distribution network and the cascading effects of the damage of the electric power network (EPN) on the potable water distribution network (WN). The results quantify the loss of functionality and delay in the recovery process due to dependency of the WN on the EPN. The results show the importance of capturing the dependency between networks in modeling the resilience of critical infrastructure.

  2. Modeling the resilience of critical infrastructure: the role of network dependencies

    PubMed Central

    Guidotti, Roberto; Chmielewski, Hana; Unnikrishnan, Vipin; Gardoni, Paolo; McAllister, Therese; van de Lindt, John

    2017-01-01

    Water and wastewater network, electric power network, transportation network, communication network, and information technology network are among the critical infrastructure in our communities; their disruption during and after hazard events greatly affects communities’ well-being, economic security, social welfare, and public health. In addition, a disruption in one network may cause disruption to other networks and lead to their reduced functionality. This paper presents a unified theoretical methodology for the modeling of dependent/interdependent infrastructure networks and incorporates it in a six-step probabilistic procedure to assess their resilience. Both the methodology and the procedure are general, can be applied to any infrastructure network and hazard, and can model different types of dependencies between networks. As an illustration, the paper models the direct effects of seismic events on the functionality of a potable water distribution network and the cascading effects of the damage of the electric power network (EPN) on the potable water distribution network (WN). The results quantify the loss of functionality and delay in the recovery process due to dependency of the WN on the EPN. The results show the importance of capturing the dependency between networks in modeling the resilience of critical infrastructure. PMID:28825037

  3. Correcting Evaluation Bias of Relational Classifiers with Network Cross Validation

    DTIC Science & Technology

    2010-01-01

    classification algorithms: simple random resampling (RRS), equal-instance random resampling (ERS), and network cross-validation (NCV). The first two... NCV procedure that eliminates overlap between test sets altogether. The procedure samples for k disjoint test sets that will be used for evaluation... propLabeled ∗ S) nodes from trainPool; inferenceSet = network − trainSet; F = F ∪ <trainSet, testSet, inferenceSet>; end for; output: F. NCV addresses

  4. Computerized Liquid Crystal Phase Identification by Neural Networks Analysis of Polarizing Microscopy Textures

    NASA Astrophysics Data System (ADS)

    Karaszi, Zoltan; Konya, Andrew; Dragan, Feodor; Jakli, Antal; CPIP/LCI; CS Dept. of Kent State University Collaboration

    Polarizing optical microscopy (POM) is traditionally the best-established method of studying liquid crystals; its use dates back to Otto Lehmann in 1890. An expert who is familiar with the optics of anisotropic materials and the typical textures of liquid crystals can identify phases with relatively high confidence. However, unambiguous identification usually requires other expensive and time-consuming experiments. Replacement of the subjective, qualitative human eye-based liquid crystal texture analysis with quantitative computerized image analysis techniques began only recently; such techniques have been used to enhance the detection of smooth phase transitions and to determine the order parameter and birefringence of specific liquid crystal phases. We investigate whether the computer can recognize and name the phase in which a texture was taken. To judge the potential of reliable image recognition based on this procedure, we used 871 images of liquid crystal textures belonging to five main categories: Nematic, Smectic A, Smectic C, Cholesteric and Crystal, and used a Neural Network Clustering Technique included in the Java data mining software package ``WEKA''. A neural network trained on a set of 827 LC textures classified the remaining 44 textures with 80% accuracy.

  5. Quality-assurance plan for groundwater activities, U.S. Geological Survey, Washington Water Science Center

    USGS Publications Warehouse

    Kozar, Mark D.; Kahle, Sue C.

    2013-01-01

    This report documents the standard procedures, policies, and field methods used by the U.S. Geological Survey’s (USGS) Washington Water Science Center staff for activities related to the collection, processing, analysis, storage, and publication of groundwater data. This groundwater quality-assurance plan changes through time to accommodate new methods and requirements developed by the Washington Water Science Center and the USGS Office of Groundwater. The plan is based largely on requirements and guidelines provided by the USGS Office of Groundwater, or the USGS Water Mission Area. Regular updates to this plan represent an integral part of the quality-assurance process. Because numerous policy memoranda have been issued by the Office of Groundwater since the previous groundwater quality assurance plan was written, this report is a substantial revision of the previous report, supplants it, and contains significant additional policies not covered in the previous report. This updated plan includes information related to the organization and responsibilities of USGS Washington Water Science Center staff, training, safety, project proposal development, project review procedures, data collection activities, data processing activities, report review procedures, and archiving of field data and interpretative information pertaining to groundwater flow models, borehole aquifer tests, and aquifer tests. 
Important updates from the previous groundwater quality assurance plan include: (1) procedures for documenting and archiving of groundwater flow models; (2) revisions to procedures and policies for the creation of sites in the Groundwater Site Inventory database; (3) adoption of new water-level forms to be used within the USGS Washington Water Science Center; (4) procedures for future creation of borehole geophysics, surface geophysics, and aquifer-test archives; and (5) use of the USGS Multi Optional Network Key Entry System software for entry of routine water-level data collected as part of long-term water-level monitoring networks.

  6. The comparative risk of developing postoperative complications in patients with distal radius fractures following different treatment modalities

    PubMed Central

    Qiu, Wen-Jun; Li, Yi-Fan; Ji, Yun-Han; Xu, Wei; Zhu, Xiao-Dong; Tang, Xian-Zhong; Zhao, Huan-Li; Wang, Gui-Bin; Jia, Yue-Qing; Zhu, Shi-Cai; Zhang, Feng-Fang; Liu, Hong-Mei

    2015-01-01

    In this study, we performed a network meta-analysis to compare the outcomes of the seven most common surgical procedures used to fix distal radius fractures (DRF): bridging external fixation, non-bridging external fixation, K-wire fixation, plaster fixation, dorsal plating, volar plating, and combined dorsal and volar plating. Published studies were retrieved through the PubMed, Embase and Cochrane Library databases. The database search terms were the following keywords and MeSH terms: DRF, bridging external fixation, non-bridging external fixation, K-wire fixation, plaster fixation, dorsal plating, volar plating, and dorsal and volar plating. The network meta-analysis was performed to rank the probabilities of postoperative complication risks for the seven surgical modalities in DRF patients. This network meta-analysis included data obtained from a total of 19 RCTs. Our results revealed that, compared to DRF patients treated with bridging external fixation, marked differences in pin-track infection (PTI) rate were found in patients treated with plaster fixation, volar plating, and dorsal and volar plating. Cluster analysis showed that plaster fixation is associated with the lowest probability of postoperative complications in DRF patients. Plaster fixation is associated with the lowest risk for postoperative complications in DRF patients when compared to the six other common DRF surgical methods examined. PMID:26549312

  7. Use of artificial neural network for spatial rainfall analysis

    NASA Astrophysics Data System (ADS)

    Paraskevas, Tsangaratos; Dimitrios, Rozos; Andreas, Benardos

    2014-04-01

    In the present study, precipitation data measured at 23 rain gauge stations over Achaia County, Greece, were used to estimate the spatial distribution of mean annual precipitation values over a specific catchment area. The objective of this work was achieved by programming an Artificial Neural Network (ANN) that uses the feed-forward back-propagation algorithm as an alternative interpolation technique. A Geographic Information System (GIS) was utilized to process the data derived by the ANN and to create a continuous surface representing the spatial distribution of mean annual precipitation. The ANN introduced an optimization procedure, implemented during training, that adjusts the number of hidden neurons and the convergence of the ANN in order to select the best network architecture. The performance of the ANN was evaluated using three standard statistical evaluation criteria applied to the study area, and showed good performance. The outcomes were also compared with the results obtained from a previous study in the research area that used linear regression analysis to estimate the mean annual precipitation values; the ANN gave more accurate results. The information and knowledge gained from the present study could improve the accuracy of analyses concerning hydrology and hydrogeological models, groundwater studies, flood related applications and climate analysis studies.
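
    A feed-forward network trained by back-propagation to map station inputs (here, coordinates and elevation) to mean annual precipitation, as described above, can be sketched in a few lines of numpy. The architecture, learning rate and synthetic "gauge" data below are illustrative assumptions, not the study's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic rain-gauge data: inputs are (x, y, elevation) in [0, 1],
# target is mean annual precipitation from a hypothetical smooth surface.
X = rng.uniform(0, 1, (23, 3))
y = (800 + 600 * X[:, 2] + 200 * np.sin(3 * X[:, 0])).reshape(-1, 1)
y_scaled = (y - y.mean()) / y.std()       # standardize the target

# One hidden layer of tanh units, trained by plain back-propagation.
n_hidden = 8
W1 = rng.normal(0, 0.5, (3, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
lr = 0.05

losses = []
for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    pred = h @ W2 + b2                    # network output
    err = pred - y_scaled
    losses.append(float((err ** 2).mean()))
    # Back-propagate the mean-squared-error gradient to each parameter.
    g_pred = 2 * err / len(X)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ g_pred;  b2 -= lr * g_pred.sum(0)
    W1 -= lr * X.T @ g_h;     b1 -= lr * g_h.sum(0)

# The trained network can then be evaluated on a regular (x, y, elevation)
# grid to build the continuous GIS surface mentioned in the abstract.
```
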

  8. Prevention of contrast-induced acute kidney injury in patients undergoing cardiovascular procedures-a systematic review and network meta-analysis.

    PubMed

    Navarese, Eliano P; Gurbel, Paul A; Andreotti, Felicita; Kołodziejczak, Michalina Marta; Palmer, Suetonia C; Dias, Sofia; Buffon, Antonino; Kubica, Jacek; Kowalewski, Mariusz; Jadczyk, Tomasz; Laskiewicz, Michał; Jędrzejek, Marek; Brockmeyer, Maximillian; Airoldi, Flavio; Ruospo, Marinella; De Servi, Stefano; Wojakowski, Wojciech; O' Connor, Christopher; Strippoli, Giovanni F M

    2017-01-01

    Interventional diagnostic and therapeutic procedures requiring intravascular iodinated contrast steadily increase patient exposure to the risks of contrast-induced acute kidney injury (CIAKI), which is associated with death, nonfatal cardiovascular events, and prolonged hospitalization. The aim of this study was to investigate the efficacy of pharmacological and non-pharmacological treatments for CIAKI prevention in patients undergoing cardiovascular invasive procedures with iodinated contrast. MEDLINE, Google Scholar, EMBASE and Cochrane databases as well as abstracts and presentations from major cardiovascular and nephrology meetings were searched, up to 22 April 2016. Eligible studies were randomized trials comparing strategies to prevent CIAKI (alone or in combination) when added to saline versus each other, saline, placebo, or no treatment in patients undergoing cardiovascular invasive procedures with administration of iodinated contrast. Two reviewers independently extracted trial-level data including number of patients, duration of follow-up, and outcomes. Eighteen strategies aimed at CIAKI prevention were identified. The primary outcome was the occurrence of CIAKI. Secondary outcomes were mortality, myocardial infarction, dialysis and heart failure. The data were pooled using network meta-analysis. Treatment estimates were calculated as odds ratios (ORs) with 95% credible intervals (CrI). 147 RCTs involving 33,463 patients were eligible. Saline plus N-acetylcysteine (OR 0.72, 95%CrI 0.57-0.88), ascorbic acid (0.59, 0.34-0.95), sodium bicarbonate plus N-acetylcysteine (0.59, 0.36-0.89), probucol (0.42, 0.15-0.91), methylxanthines (0.39, 0.20-0.66), statin (0.36, 0.21-0.59), device-guided matched hydration (0.35, 0.12-0.79), prostaglandins (0.26, 0.08-0.62) and trimetazidine (0.26, 0.09-0.59) were associated with lower odds of CIAKI compared to saline. 
Methylxanthines (0.12, 0.01-0.94) or left ventricular end-diastolic pressure-guided hydration (0.09, 0.01-0.59) were associated with lower mortality compared to saline. Currently recommended treatment with saline as the only measure to prevent CIAKI during cardiovascular procedures may not represent the optimal strategy. Vasodilators, when added to saline, may significantly reduce the odds of CIAKI following cardiovascular procedures.

  9. A GIS Procedure to Monitor PWV During Severe Meteorological Events

    NASA Astrophysics Data System (ADS)

    Ferrando, I.; Federici, B.; Sguerso, D.

    2016-12-01

    As is widely known, observation of the GNSS signal delay can improve knowledge of meteorological phenomena. The local Precipitable Water Vapour (PWV), which can easily be derived from Zenith Total Delay (ZTD), Pressure (P) and Temperature (T) (Bevis et al., 1994), is not by itself a satisfactory parameter to evaluate the occurrence of severe meteorological events. Hence, a GIS procedure, called G4M (GNSS for Meteorology), has been conceived to produce 2D PWV maps with high spatial and temporal resolution (1 km and 6 minutes, respectively). The input data are GNSS, P and T observations, not necessarily co-located, coming from existing infrastructures, combined with a simplified physical model owned by the research group. In spite of the low density and the different configurations of the GNSS, P and T networks, the procedure is capable of detecting severe meteorological events with reliable results. The procedure has already been applied in a wide and orographically complex area covering approximately the north-west of Italy and the French-Italian border region, to study two severe meteorological events that occurred in Genoa (Italy) and other meteorological alert cases. The P, T and PWV 2D maps obtained by the procedure have been compared with those coming from meteorological re-analysis models, used as a reference to obtain statistics on how well the procedure represents these fields. Additionally, the spatial variability of PWV was taken into account as an indicator of potentially critical situations; this index seems promising in highlighting remarkable features that precede intense precipitation. The strength and originality of the procedure lie in the employment of existing infrastructures, the independence from meteorological models, the high adaptability to different network configurations, and the ability to produce high-resolution 2D PWV maps even from sparse input data. 
    In the near future, the procedure could also be set up for near real-time applications.
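
    The ZTD-P-T to PWV conversion that the procedure relies on (Bevis et al., 1994, with the Saastamoinen hydrostatic delay) can be sketched as follows. The constants are the commonly cited literature values and the input numbers are illustrative, not values from the paper.

```python
import math

def pwv_from_ztd(ztd_m, pressure_hpa, temp_k, lat_deg=44.4, height_m=0.0):
    """Approximate PWV (mm) from GNSS Zenith Total Delay, following Bevis et al. (1994)."""
    # Saastamoinen zenith hydrostatic delay (m) from surface pressure.
    f = 1 - 0.00266 * math.cos(2 * math.radians(lat_deg)) - 0.28e-6 * height_m
    zhd = 0.0022768 * pressure_hpa / f
    zwd = ztd_m - zhd                       # zenith wet delay (m)
    # Weighted mean temperature of the atmosphere (Bevis regression on surface T).
    tm = 70.2 + 0.72 * temp_k
    # Dimensionless conversion factor Pi; constants k2' = 22.1 K/hPa,
    # k3 = 3.739e5 K^2/hPa (converted to per-Pa), Rv = 461.5 J kg^-1 K^-1,
    # liquid water density 1000 kg m^-3.
    k2p, k3, rv, rho = 22.1 / 100.0, 3.739e5 / 100.0, 461.5, 1000.0
    pi_factor = 1e6 / (rho * rv * (k3 / tm + k2p))   # typically ~0.15
    return pi_factor * zwd * 1000.0          # wet delay in m -> PWV in mm

# Illustrative mid-latitude values: ZTD = 2.45 m, P = 1005 hPa, T = 290 K.
pwv = pwv_from_ztd(2.45, 1005.0, 290.0)
```

    Mapping this function over gridded ZTD, P and T rasters is essentially what the G4M GIS step does to obtain the 2D PWV maps.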

  10. Enhanced reconstruction of weighted networks from strengths and degrees

    NASA Astrophysics Data System (ADS)

    Mastrandrea, Rossana; Squartini, Tiziano; Fagiolo, Giorgio; Garlaschelli, Diego

    2014-04-01

    Network topology plays a key role in many phenomena, from the spreading of diseases to that of financial crises. Whenever the whole structure of a network is unknown, one must resort to reconstruction methods that identify the least biased ensemble of networks consistent with the partial information available. A challenging case, frequently encountered due to privacy issues in the analysis of interbank flows and Big Data, is when there is only local (node-specific) aggregate information available. For binary networks, the relevant ensemble is one where the degree (number of links) of each node is constrained to its observed value. However, for weighted networks the problem is much more complicated. While the naïve approach prescribes to constrain the strengths (total link weights) of all nodes, recent counter-intuitive results suggest that in weighted networks the degrees are often more informative than the strengths. This implies that the reconstruction of weighted networks would be significantly enhanced by the specification of both strengths and degrees, a computationally hard and bias-prone procedure. Here we solve this problem by introducing an analytical and unbiased maximum-entropy method that works in the shortest possible time and does not require the explicit generation of reconstructed samples. We consider several real-world examples and show that, while the strengths alone give poor results, the additional knowledge of the degrees yields accurately reconstructed networks. Information-theoretic criteria rigorously confirm that the degree sequence, as soon as it is non-trivial, is irreducible to the strength sequence. Our results have strong implications for the analysis of motifs and communities and whenever the reconstructed ensemble is required as a null model to detect higher-order patterns.
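
    For the binary case mentioned above (the least-biased ensemble constraining each node's degree), the maximum-entropy connection probabilities can be found by solving a small fixed-point system for hidden variables. This numpy sketch illustrates only that binary degree-constrained ensemble, not the authors' enhanced method combining strengths and degrees; the degree sequence is made up.

```python
import numpy as np

# Observed degree sequence of a small undirected network.
k = np.array([1.0, 2.0, 2.0, 2.0, 3.0])
n = len(k)

# Maximum-entropy ensemble: p_ij = x_i x_j / (1 + x_i x_j), with hidden
# variables x_i fixed so that each node's expected degree matches k_i.
x = k / np.sqrt(k.sum())                 # standard starting guess
for _ in range(10000):
    xx = np.outer(x, x)
    mat = x[None, :] / (1 + xx)          # mat[i, j] = x_j / (1 + x_i x_j)
    denom = mat.sum(axis=1) - np.diag(mat)   # exclude self-loops (j != i)
    x = k / denom                        # fixed-point update

# Resulting ensemble probabilities and their expected degrees.
xx = np.outer(x, x)
p = xx / (1 + xx)
np.fill_diagonal(p, 0)
expected_degrees = p.sum(axis=1)         # should reproduce k
```
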

  11. Shuttle Ku-band and S-band communications implementations study

    NASA Technical Reports Server (NTRS)

    Huth, G. K.; Nessibou, T.; Nilsen, P. W.; Simon, M. K.; Weber, C. L.

    1979-01-01

    The interfaces between the Ku-band system and the TDRSS, between the S-band system and the TDRSS, GSTDN and SGLS networks, and between the S-band payload communication equipment and the other Orbiter avionic equipment were investigated. The principal activities reported are: (1) performance analysis of the payload narrowband bent-pipe through the Ku-band communication system; (2) performance evaluation of the TDRSS user constraints placed on the S-band and Ku-band communication systems; (3) assessment of the shuttle-unique S-band TDRSS ground station false lock susceptibility; (4) development of procedure to make S-band antenna measurements during orbital flight; (5) development of procedure to make RFI measurements during orbital flight to assess the performance degradation to the TDRSS S-band communication link; and (6) analysis of the payload interface integration problem areas.

  12. BIOREL: the benchmark resource to estimate the relevance of the gene networks.

    PubMed

    Antonov, Alexey V; Mewes, Hans W

    2006-02-06

    The progress of high-throughput methodologies in functional genomics has led to the development of statistical procedures to infer gene networks from various types of high-throughput data. However, due to the lack of common standards, the biological significance of the results of different studies is hard to compare. To overcome this problem we propose a benchmark procedure and have developed a web resource (BIOREL), which is useful for estimating the biological relevance of any genetic network by integrating different sources of biological information. The associations of each gene from the network are classified as biologically relevant or not. The proportion of genes in the network classified as "relevant" is used as the overall network relevance score. Employing synthetic data, we demonstrated that such a score ranks networks fairly with respect to their relevance level. Using BIOREL as the benchmark resource, we compared the quality of experimental and theoretically predicted protein interaction data.

  13. Artificial neural network classification using a minimal training set - Comparison to conventional supervised classification

    NASA Technical Reports Server (NTRS)

    Hepner, George F.; Logan, Thomas; Ritter, Niles; Bryant, Nevin

    1990-01-01

    Recent research has shown artificial neural networks (ANNs) to be capable of pattern recognition and the classification of image data. This paper examines the potential for the application of neural network computing to satellite image processing. A second objective is to provide a preliminary comparison of ANN classification with conventional supervised classification. An artificial neural network can be trained to do land-cover classification of satellite imagery using selected sites representative of each class, in a manner similar to conventional supervised classification. One of the major problems associated with recognition and classification of patterns from remotely sensed data is the time and cost of developing a set of training sites. This research compares the use of an ANN back-propagation classification procedure with a conventional supervised maximum likelihood classification procedure using a minimal training set. When using a minimal training set, the neural network is able to provide a land-cover classification superior to the classification derived from the conventional classification procedure. This research is the foundation for developing application parameters for further prototyping of software and hardware implementations for artificial neural networks in satellite image and geographic information processing.
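
    The conventional baseline in the comparison above, supervised maximum likelihood classification, assigns each pixel to the class whose multivariate Gaussian (fitted to the training sites) gives it the highest likelihood. A minimal numpy sketch on synthetic two-band data with a deliberately small training set:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two spectral "bands", two land-cover classes, ten training pixels each
# -- the minimal-training-set situation studied above.
train_a = rng.normal([0.2, 0.3], 0.05, (10, 2))
train_b = rng.normal([0.6, 0.7], 0.05, (10, 2))

def fit_gaussian(samples):
    """Fit a per-class multivariate Gaussian from training pixels."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    return mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1]

def log_likelihood(x, params):
    mu, cov_inv, logdet = params
    d = x - mu
    return -0.5 * (logdet + d @ cov_inv @ d)   # constant term omitted

classes = [fit_gaussian(train_a), fit_gaussian(train_b)]

def classify(x):
    """Maximum likelihood decision: most likely class for pixel x."""
    return int(np.argmax([log_likelihood(x, c) for c in classes]))

# Pixels drawn from class B should be labelled 1.
test_pixels = rng.normal([0.6, 0.7], 0.05, (50, 2))
labels = [classify(px) for px in test_pixels]
```

    The paper's point is that a back-propagation ANN trained on the same ten pixels per class can outperform this Gaussian decision rule when the training set is too small to estimate the class statistics well.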

  14. 47 CFR 68.201 - Connection to the public switched telephone network.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... network. 68.201 Section 68.201 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) CONNECTION OF TERMINAL EQUIPMENT TO THE TELEPHONE NETWORK Terminal Equipment Approval Procedures § 68.201 Connection to the public switched telephone network. Terminal equipment may...

  15. 47 CFR 68.201 - Connection to the public switched telephone network.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... network. 68.201 Section 68.201 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) CONNECTION OF TERMINAL EQUIPMENT TO THE TELEPHONE NETWORK Terminal Equipment Approval Procedures § 68.201 Connection to the public switched telephone network. Terminal equipment may...

  16. Synchronization of the DOE/NASA 100-kilowatt wind turbine generator with a large utility network

    NASA Technical Reports Server (NTRS)

    Gilbert, L. J.

    1977-01-01

    The DOE/NASA 100 kilowatt wind turbine generator system was synchronized with a large utility network. The system equipment and procedures associated with the synchronization process are described. Time-history traces of typical synchronizations are presented, indicating that power and current transients resulting from the synchronizing procedure are limited to acceptable magnitudes.

  17. Comparing energy sources for surgical ablation of atrial fibrillation: a Bayesian network meta-analysis of randomized, controlled trials.

    PubMed

    Phan, Kevin; Xie, Ashleigh; Kumar, Narendra; Wong, Sophia; Medi, Caroline; La Meir, Mark; Yan, Tristan D

    2015-08-01

    Simplified maze procedures involving radiofrequency, cryoenergy and microwave energy sources have been increasingly utilized for surgical treatment of atrial fibrillation (AF) as an alternative to the traditional cut-and-sew approach. In the absence of direct comparisons, a Bayesian network meta-analysis is an alternative way to assess the relative effects of different treatments using indirect evidence. A Bayesian meta-analysis of indirect evidence was performed using 16 published randomized trials identified from 6 databases. Rank probability analysis was used to rank each intervention in terms of its probability of having the best outcome. Sinus rhythm prevalence beyond the 12-month follow-up was similar between the cut-and-sew, microwave and radiofrequency approaches, which were all ranked better than cryoablation (39, 36, and 25 vs 1%, respectively). The cut-and-sew maze was ranked worst in terms of mortality outcomes compared with microwave, radiofrequency and cryoenergy (2 vs 19, 34, and 24%, respectively). The cut-and-sew maze procedure was associated with significantly lower stroke rates compared with microwave ablation [odds ratio <0.01; 95% confidence interval 0.00, 0.82], and ranked the best in terms of pacemaker requirements compared with microwave, radiofrequency and cryoenergy (81 vs 14, 1, and <0.01%, respectively). Bayesian rank probability analysis shows that the cut-and-sew approach is associated with the best outcomes in terms of sinus rhythm prevalence and stroke outcomes, and remains the gold standard approach for AF treatment. Given the limitations of indirect comparison analysis, these results should be viewed with caution and not over-interpreted. © The Author 2014. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
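
    Rank probability analysis of the kind used above can be illustrated with plain numpy: given posterior draws of an outcome for each treatment, rank the treatments within each draw and average over draws. The draws below are simulated from arbitrary means, not the trial data.

```python
import numpy as np

rng = np.random.default_rng(3)
treatments = ["cut-and-sew", "microwave", "radiofrequency", "cryoenergy"]

# Simulated posterior draws (rows) of an efficacy measure per treatment
# (columns); higher is better. The means are illustrative assumptions.
draws = rng.normal([0.75, 0.74, 0.70, 0.55], 0.05, (10000, 4))

# Within each draw, order treatments from best downwards.
order = np.argsort(-draws, axis=1)        # column indices, best first
rank1 = order[:, 0]                       # which treatment ranked first

# P(best) for each treatment = share of draws in which it ranks first.
p_best = np.bincount(rank1, minlength=4) / len(draws)
```

    Repeating the tally for every rank position gives the full rank-probability table reported in such meta-analyses.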

  18. Existing Resources, Standards, and Procedures for Precise Monitoring and Analysis of Structural Deformations. Volume 1

    DTIC Science & Technology

    1992-09-01

    surveyors’ at the technician level or even without any formal education. In this case, even the most technologically advanced instrumentation will not... technologically advanced instrumentation system will not supply the expected information. UNB Report on Deformation Monitoring, 1992. The worldwide review... Technology (CANMET) Report 77-15. Lazzarini, T. (1975). "The identification of reference points in trigonometrical and linear networks established for

  19. Models for Threat Assessment in Networks

    DTIC Science & Technology

    2006-09-01

    Software International and Command AntiVirus. [Online]. Available: http://www.commandsoftware.com/virus/newlove.html [38] C. Ng and P. Ferrie. (2000)... 2.3 False positive trends across all population sizes for r=0.7 and m=0.1... 2.4 False negative trends across all population... benefits analysis is often performed to determine the list of mitigation procedures. Traditionally, risk assessment has been done in part with software

  20. On Applicability of Network Coding Technique for 6LoWPAN-based Sensor Networks.

    PubMed

    Amanowicz, Marek; Krygier, Jaroslaw

    2018-05-26

    In this paper, the applicability of the network coding technique in 6LoWPAN-based multihop sensor networks is examined. 6LoWPAN is one of the standards proposed for the Internet of Things architecture; we can therefore expect significant growth of traffic in such networks, which can lead to overload and a decrease in sensor network lifetime. The authors propose an inter-session network coding mechanism that can be implemented in resource-limited sensor motes. The solution reduces the overall traffic in the network and, in consequence, decreases energy consumption. The procedures take into account the deep header compression of native 6LoWPAN packets and the hop-by-hop changes of the header structure. The applied simplifications reduce the signaling traffic that typically occurs in network coding deployments, keeping the solution useful for wireless sensor networks with limited resources. The authors validate the proposed procedures in terms of end-to-end packet delay, packet loss ratio, traffic in the air, total energy consumption, and network lifetime. The solution has been tested in a real wireless sensor network. The results confirm the efficiency of the proposed technique, mostly in delay-tolerant sensor networks.
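
    The core idea of inter-session network coding — a relay XOR-combining packets from two flows so that one transmission serves both, each endpoint decoding with the packet it already holds — can be sketched independently of the 6LoWPAN header details (the payloads here are invented):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length payloads byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two sensor motes exchange packets through a common relay.
pkt_a = b"temp=21.5C"          # A -> B, via the relay
pkt_b = b"hum=55.0pc"          # B -> A, via the relay (same length here)

# Instead of forwarding each packet separately, the relay broadcasts
# a single coded packet, halving its transmissions for this exchange.
coded = xor_bytes(pkt_a, pkt_b)

# Each endpoint recovers the other's packet using its own as the key.
recovered_at_b = xor_bytes(coded, pkt_b)   # B knows pkt_b -> obtains pkt_a
recovered_at_a = xor_bytes(coded, pkt_a)   # A knows pkt_a -> obtains pkt_b
```

    The energy saving comes from the relay sending one packet instead of two; the paper's contribution is making this work with compressed, hop-by-hop-mutating 6LoWPAN headers and minimal signaling.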

  1. Assessing Temporal Stability for Coarse Scale Satellite Moisture Validation in the Maqu Area, Tibet

    PubMed Central

    Bhatti, Haris Akram; Rientjes, Tom; Verhoef, Wouter; Yaseen, Muhammad

    2013-01-01

    This study evaluates whether the temporal stability concept is applicable to a time series of satellite soil moisture images, so as to extend the common procedure of satellite image validation. The area of study is the Maqu area, located in the northeastern part of the Tibetan plateau. The network serves validation purposes of coarse scale (25–50 km) satellite soil moisture products and comprises 20 stations with probes installed at depths of 5, 10, 20, 40 and 80 cm. The study period is 2009. The temporal stability concept is applied to all five depths of the soil moisture measuring network and to a time series of satellite-based moisture products from the Advanced Microwave Scanning Radiometer (AMSR-E). The in-situ network is also assessed by Pearson's correlation analysis. Assessments by the temporal stability concept proved to be useful, and results suggest that probe measurements at 10 cm depth best match the satellite observations. The Mean Relative Difference plot for satellite pixels shows that a Representative Mean Soil Moisture (RMSM) pixel can be identified, but in our case this pixel does not overlay any in-situ station. Also, the RMSM pixel does not overlay any of the RMSM stations of the five probe depths. Pearson's correlation analysis on in-situ measurements suggests that moisture patterns over time are more persistent than over space. Since this study presents first results on the application of the temporal stability concept to a series of satellite images, we recommend further tests to become more conclusive on its effectiveness in broadening the procedure of satellite validation. PMID:23959237
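
    The temporal stability analysis referred to above follows the classic mean relative difference approach: for each station (or pixel), average its relative deviation from the spatial mean over time, and take the one with the smallest absolute value as representative. A small numpy sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic soil moisture: 52 weekly observations at 20 stations, each
# station offset from a common seasonal signal, plus measurement noise.
t = np.linspace(0, 2 * np.pi, 52)
seasonal = 0.25 + 0.08 * np.sin(t)[:, None]
offsets = rng.normal(0, 0.03, 20)
theta = seasonal + offsets + rng.normal(0, 0.01, (52, 20))

# Relative difference of each station from the spatial mean at each time.
spatial_mean = theta.mean(axis=1, keepdims=True)
rel_diff = (theta - spatial_mean) / spatial_mean

mrd = rel_diff.mean(axis=0)     # mean relative difference per station
sdrd = rel_diff.std(axis=0)     # its temporal spread: the stability measure

# The representative (RMSM) station: smallest |MRD| (low bias),
# ideally also a small standard deviation (a stable ranking over time).
representative = int(np.argmin(np.abs(mrd)))
```

    Applying the same computation with satellite pixels in place of stations is the extension the study tests.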

  2. The Spanish national health care-associated infection surveillance network (INCLIMECC): data summary January 1997 through December 2006 adapted to the new National Healthcare Safety Network Procedure-associated module codes.

    PubMed

    Pérez, Cristina Díaz-Agero; Rodela, Ana Robustillo; Monge Jodrá, Vincente

    2009-12-01

    In 1997, a national standardized surveillance system (designated INCLIMECC [Indicadores Clínicos de Mejora Continua de la Calidad]) was established in Spain for health care-associated infection (HAI) in surgery patients, based on the National Nosocomial Infection Surveillance (NNIS) system. In 2005, in its procedure-associated module, the National Healthcare Safety Network (NHSN) inherited the NNIS program for surveillance of HAI in surgery patients and reorganized all surgical procedures. INCLIMECC actively monitors all patients referred to the surgical ward of each participating hospital. We present a summary of the data collected from January 1997 to December 2006 adapted to the new NHSN procedures. Surgical site infection (SSI) rates are provided by operative procedure and NNIS risk index category. Further quality indicators reported are surgical complications, length of stay, antimicrobial prophylaxis, mortality, readmission because of infection or other complication, and revision surgery. Because the ICD-9-CM surgery procedure code is included in each patient's record, we were able to reorganize our database avoiding the loss of extensive information, as has occurred with other systems.

  3. Systems Level Analysis of Systemic Sclerosis Shows a Network of Immune and Profibrotic Pathways Connected with Genetic Polymorphisms

    PubMed Central

    Mahoney, J. Matthew; Taroni, Jaclyn; Martyanov, Viktor; Wood, Tammara A.; Greene, Casey S.; Pioli, Patricia A.; Hinchcliff, Monique E.; Whitfield, Michael L.

    2015-01-01

    Systemic sclerosis (SSc) is a rare systemic autoimmune disease characterized by skin and organ fibrosis. The pathogenesis of SSc and its progression are poorly understood. The SSc intrinsic gene expression subsets (inflammatory, fibroproliferative, normal-like, and limited) are observed in multiple clinical cohorts of patients with SSc. Analysis of longitudinal skin biopsies suggests that a patient's subset assignment is stable over 6–12 months. Genetically, SSc is multi-factorial with many genetic risk loci for SSc generally and for specific clinical manifestations. Here we identify the genes consistently associated with the intrinsic subsets across three independent cohorts, show the relationship between these genes using a gene-gene interaction network, and place the genetic risk loci in the context of the intrinsic subsets. To identify gene expression modules common to three independent datasets from three different clinical centers, we developed a consensus clustering procedure based on mutual information of partitions, an information theory concept, and performed a meta-analysis of these genome-wide gene expression datasets. We created a gene-gene interaction network of the conserved molecular features across the intrinsic subsets and analyzed their connections with SSc-associated genetic polymorphisms. The network is composed of distinct, but interconnected, components related to interferon activation, M2 macrophages, adaptive immunity, extracellular matrix remodeling, and cell proliferation. The network shows extensive connections between the inflammatory- and fibroproliferative-specific genes. The network also shows connections between these subset-specific genes and 30 SSc-associated polymorphic genes including STAT4, BLK, IRF7, NOTCH4, PLAUR, CSK, IRAK1, and several human leukocyte antigen (HLA) genes. 
Our analyses suggest that the gene expression changes underlying the SSc subsets may be long-lived, but mechanistically interconnected and related to a patient's underlying genetic risk. PMID:25569146

  4. Long-term observations of tropospheric particle number size distributions and equivalent black carbon mass concentrations in the German Ultrafine Aerosol Network (GUAN)

    NASA Astrophysics Data System (ADS)

    Birmili, W.; Weinhold, K.; Merkel, M.; Rasch, F.; Sonntag, A.; Wiedensohler, A.; Bastian, S.; Schladitz, A.; Löschau, G.; Cyrys, J.; Pitz, M.; Gu, J.; Kusch, T.; Flentje, H.; Quass, U.; Kaminski, H.; Kuhlbusch, T. A. J.; Meinhardt, F.; Schwerin, A.; Bath, O.; Ries, L.; Wirtz, K.; Fiebig, M.

    2015-11-01

    The German Ultrafine Aerosol Network (GUAN) is a cooperative atmospheric observation network, which aims at improving the scientific understanding of aerosol-related effects in the troposphere. The network addresses research questions dedicated to both climate- and health-related effects. GUAN's core activity has been the continuous collection of tropospheric particle number size distributions and black carbon mass concentrations at seventeen observation sites in Germany. These sites cover various environmental settings including urban traffic, urban background, rural background, and Alpine mountains. In association with partner projects, GUAN has implemented a high degree of harmonisation of instrumentation, operating procedures, and data evaluation procedures. The quality of the measurement data is assured by laboratory intercomparisons as well as on-site comparisons with reference instruments. This paper describes the measurement sites, instrumentation, quality assurance, and data evaluation procedures in the network as well as the EBAS repository, where the data sets can be obtained (doi:10.5072/guan).

  5. Long-term observations of tropospheric particle number size distributions and equivalent black carbon mass concentrations in the German Ultrafine Aerosol Network (GUAN)

    NASA Astrophysics Data System (ADS)

    Birmili, Wolfram; Weinhold, Kay; Rasch, Fabian; Sonntag, André; Sun, Jia; Merkel, Maik; Wiedensohler, Alfred; Bastian, Susanne; Schladitz, Alexander; Löschau, Gunter; Cyrys, Josef; Pitz, Mike; Gu, Jianwei; Kusch, Thomas; Flentje, Harald; Quass, Ulrich; Kaminski, Heinz; Kuhlbusch, Thomas A. J.; Meinhardt, Frank; Schwerin, Andreas; Bath, Olaf; Ries, Ludwig; Gerwig, Holger; Wirtz, Klaus; Fiebig, Markus

    2016-08-01

    The German Ultrafine Aerosol Network (GUAN) is a cooperative atmospheric observation network, which aims at improving the scientific understanding of aerosol-related effects in the troposphere. The network addresses research questions dedicated to both climate- and health-related effects. GUAN's core activity has been the continuous collection of tropospheric particle number size distributions and black carbon mass concentrations at 17 observation sites in Germany. These sites cover various environmental settings including urban traffic, urban background, rural background, and Alpine mountains. In association with partner projects, GUAN has implemented a high degree of harmonisation of instrumentation, operating procedures, and data evaluation procedures. The quality of the measurement data is assured by laboratory intercomparisons as well as on-site comparisons with reference instruments. This paper describes the measurement sites, instrumentation, quality assurance, and data evaluation procedures in the network as well as the EBAS repository, where the data sets can be obtained (doi:10.5072/guan).

  6. Analysis of Phosphonic Acids: Validation of Semi-Volatile Analysis by HPLC-MS/MS by EPA Method MS999

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, J; Vu, A; Koester, C

    The Environmental Protection Agency's (EPA) Region 5 Chicago Regional Laboratory (CRL) developed a method titled Analysis of Diisopropyl Methylphosphonate, Ethyl Hydrogen Dimethylamidophosphate, Isopropyl Methylphosphonic Acid, Methylphosphonic Acid, and Pinacolyl Methylphosphonic Acid in Water by Multiple Reaction Monitoring Liquid Chromatography/Tandem Mass Spectrometry: EPA Version MS999. This draft standard operating procedure (SOP) was distributed to multiple EPA laboratories and to Lawrence Livermore National Laboratory, which was tasked to serve as a reference laboratory for EPA's Environmental Reference Laboratory Network (ERLN) and to develop and validate analytical procedures. The primary objective of this study was to validate and verify the analytical procedures described in EPA Method MS999 for analysis of the listed phosphonic acids and surrogates in aqueous samples. The gathered data from this validation study will be used to: (1) demonstrate analytical method performance; (2) generate quality control acceptance criteria; and (3) revise the SOP to provide a validated method that would be available for use during a homeland security event. The data contained in this report will be compiled, by EPA CRL, with data generated by other EPA Regional laboratories so that performance metrics of EPA Method MS999 can be determined.

  7. Representing situation awareness in collaborative systems: a case study in the energy distribution domain.

    PubMed

    Salmon, P M; Stanton, N A; Walker, G H; Jenkins, D; Baber, C; McMaster, R

    2008-03-01

    The concept of distributed situation awareness (DSA) is currently receiving increasing attention from the human factors community. This article investigates DSA in a collaborative real-world industrial setting by discussing the results derived from a recent naturalistic study undertaken within the UK energy distribution domain. The results describe the DSA-related information used by the networks of agents involved in the scenarios analysed, the sharing of this information between the agents, and the salience of different information elements used. Thus, the structure, quality and content of each network's DSA is discussed, along with the implications for DSA theory. The findings reinforce the notion that when viewing situation awareness (SA) in collaborative systems, it is useful to focus on the coordinated behaviour of the system itself, rather than on the individual as the unit of analysis, and suggest that the findings from such assessments can potentially be used to inform the design of systems, procedures and training. SA is a critical commodity for teams working in industrial systems, and systems, procedures and training programmes should be designed to facilitate efficient acquisition and maintenance of system SA. This article presents approaches for describing and understanding SA during real-world collaborative tasks, the outputs from which can potentially be used to inform system, training-programme and procedure design.

  8. Anti AIDS drug design with the help of neural networks

    NASA Astrophysics Data System (ADS)

    Tetko, I. V.; Tanchuk, V. Yu.; Luik, A. I.

    1995-04-01

    Artificial neural networks were used to analyze and predict human immunodeficiency virus type 1 reverse transcriptase inhibitors. The training and control sets included 44 molecules (most of them well-known substances such as AZT, TIBO, dde, etc.). The biological activities of the molecules were taken from the literature and rated in two classes, active and inactive compounds, according to their values. We used topological indices as molecular parameters. The four most informative parameters (out of 46) were chosen using cluster analysis and an original input-parameter estimation procedure, and were used to predict the activities of both control and new (synthesized in our institute) molecules. We applied a pruning network algorithm and network ensembles to obtain the final classifier and avoid chance correlation. An increase in the network's generalization of the data from the control set was observed when using the aforementioned methods. The prognosis of the new molecules revealed one molecule as possibly active. This was confirmed by further biological tests. The compound was as active as AZT and an order of magnitude less toxic. The active compound is currently being evaluated in preclinical trials as a possible drug for anti-AIDS therapy.

  9. The Canarian Seismic Monitoring Network: design, development and first result

    NASA Astrophysics Data System (ADS)

    D'Auria, Luca; Barrancos, José; Padilla, Germán D.; García-Hernández, Rubén; Pérez, Aaron; Pérez, Nemesio M.

    2017-04-01

    Tenerife is an active volcanic island that experienced several eruptions of moderate intensity in historical times and a few explosive eruptions in the Holocene. The increasing population density and the substantial number of tourists are constantly raising the volcanic risk. In June 2016 the Instituto Volcanologico de Canarias started the deployment of a seismological volcano monitoring network consisting of 15 broadband seismic stations. The network began full operation in November 2016. The aims of the network are both volcano monitoring and scientific research. Currently, data are continuously recorded and processed in real time. Seismograms, hypocentral parameters, statistical information about the seismicity and other data are published on a web page. We show the technical characteristics of the network and an estimate of its detection threshold and earthquake location performance. Furthermore, we present other near-real-time procedures applied to the data: analysis of the ambient noise to determine the shallow velocity model and temporal velocity variations, detection of earthquake multiplets through massive data mining of the seismograms, and automatic relocation of events through double-difference location.

  10. Weighted networks as randomly reinforced urn processes

    NASA Astrophysics Data System (ADS)

    Caldarelli, Guido; Chessa, Alessandro; Crimaldi, Irene; Pammolli, Fabio

    2013-02-01

    We analyze weighted networks as randomly reinforced urn processes, in which the edge-total weights are determined by a reinforcement mechanism. We develop a statistical test and a procedure based on it to study the evolution of networks over time, detecting the “dominance” of some edges with respect to the others and then assessing whether a given instance of the network has reached its steady state. Distance from the steady state can be considered a measure of the relevance of the observed properties of the network. Our results are quite general, in the sense that they are not based on a particular probability distribution or functional form of the random weights. Moreover, the proposed tool can also be applied to dense networks, which have received little attention from the network community so far, since they are often problematic. We apply our procedure in the context of the International Trade Network, determining a core of “dominant edges.”
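    The urn mechanism behind such models can be illustrated with a toy simulation (a sketch of the general reinforcement idea only, not the authors' exact model or statistical test): each step draws an edge with probability proportional to its current weight and reinforces it, so edges that get ahead early tend to stay "dominant".

    ```python
    import random

    def reinforced_urn(edges, steps, seed=0):
        """Toy randomly reinforced urn over network edges: every step draws
        an edge with probability proportional to its current weight and
        adds one unit of weight to the drawn edge."""
        rng = random.Random(seed)
        w = {e: 1.0 for e in edges}          # every edge starts with weight 1
        for _ in range(steps):
            drawn = rng.choices(edges, weights=[w[e] for e in edges])[0]
            w[drawn] += 1.0                  # reinforcement: success breeds success
        return w

    # Hypothetical three-edge network
    edges = [("A", "B"), ("B", "C"), ("A", "C")]
    weights = reinforced_urn(edges, steps=500)
    dominant = max(weights, key=weights.get)  # candidate "dominant" edge
    ```

    Comparing the empirical weight shares against what the steady-state distribution would allow is, loosely, what the paper's test formalizes.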

  11. Research and emulation of ranging in BPON system

    NASA Astrophysics Data System (ADS)

    Yang, Guangxiang; Tao, Dexin; He, Yan

    2005-12-01

    Ranging is one of the key technologies in the ATM-based Broadband Passive Optical Network (BPON) system. It is complex for software designers and difficult to test. In order to simplify the ranging procedure, enhance its efficiency, and find an appropriate method to verify it, this paper proposes a new ranging procedure that completely satisfies the requirements specified in ITU-T G.983.1, together with a verification method. A ranging procedure without the serial number (SN) searching function, called one-by-one ranging, is developed under the condition of a cold PON and cold Optical Network Units (ONUs). Flow charts of the procedure are given for the OLT and ONU sides respectively. Using the network emulation software OPNET, the BPON system is modeled and the ranging procedure is simulated. The emulation results show that the presented ranging procedure can effectively eliminate collisions of burst-mode signals between ONUs, which are ranged one by one under the control of the OLT, while also enhancing ranging efficiency. As all of the message formats used in this research conform to ITU-T G.983.1, the ranging procedure meets the protocol specifications with good interoperability and is compatible with products of other manufacturers. The present study of ranging procedures provides guidelines and principles, and eliminates some difficulties in the software design.

  12. GeneBee-net: Internet-based server for analyzing biopolymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodsky, L.I.; Ivanov, V.V.; Nikolaev, V.K.

    This work describes a network server for searching databanks of biopolymer structures and performing other biocomputing procedures; it is available via direct Internet connection. Basic server procedures are dedicated to homology (similarity) search of sequence and 3D structure of proteins. The homologies found can be used to build multiple alignments, predict protein and RNA secondary structure, and construct phylogenetic trees. In addition to traditional methods of sequence similarity search, the authors propose "non-matrix" (correlational) search. An analogous approach is used to identify regions of similar tertiary structure of proteins. Algorithm concepts and usage examples are presented for the new methods. Service logic is based upon interaction of a client program and server procedures. The client program allows the compilation of queries and the processing of results of an analysis.

  13. Reconfiguring practice: the interdependence of experimental procedure and computing infrastructure in distributed earthquake engineering.

    PubMed

    De La Flor, Grace; Ojaghi, Mobin; Martínez, Ignacio Lamata; Jirotka, Marina; Williams, Martin S; Blakeborough, Anthony

    2010-09-13

    When transitioning local laboratory practices into distributed environments, the interdependent relationship between experimental procedure and the technologies used to execute experiments becomes highly visible and a focal point for system requirements. We present an analysis of ways in which this reciprocal relationship is reconfiguring laboratory practices in earthquake engineering as a new computing infrastructure is embedded within three laboratories in order to facilitate the execution of shared experiments across geographically distributed sites. The system has been developed as part of the UK Network for Earthquake Engineering Simulation e-Research project, which links together three earthquake engineering laboratories at the universities of Bristol, Cambridge and Oxford. We consider the ways in which researchers have successfully adapted their local laboratory practices through the modification of experimental procedure so that they may meet the challenges of coordinating distributed earthquake experiments.

  14. Engineering Online and In-person Social Networks for Physical Activity: A Randomized Trial

    PubMed Central

    Rovniak, Liza S.; Kong, Lan; Hovell, Melbourne F.; Ding, Ding; Sallis, James F.; Ray, Chester A.; Kraschnewski, Jennifer L.; Matthews, Stephen A.; Kiser, Elizabeth; Chinchilli, Vernon M.; George, Daniel R.; Sciamanna, Christopher N.

    2016-01-01

    Background Social networks can influence physical activity, but little is known about how best to engineer online and in-person social networks to increase activity. Purpose To conduct a randomized trial based on the Social Networks for Activity Promotion model to assess the incremental contributions of different procedures for building social networks on objectively-measured outcomes. Methods Physically inactive adults (n = 308, age, 50.3 (SD = 8.3) years, 38.3% male, 83.4% overweight/obese) were randomized to 1 of 3 groups. The Promotion group evaluated the effects of weekly emailed tips emphasizing social network interactions for walking (e.g., encouragement, informational support); the Activity group evaluated the incremental effect of adding an evidence-based online fitness walking intervention to the weekly tips; and the Social Networks group evaluated the additional incremental effect of providing access to an online networking site for walking, and prompting walking/activity across diverse settings. The primary outcome was mean change in accelerometer-measured moderate-to-vigorous physical activity (MVPA), assessed at 3 and 9 months from baseline. Results Participants increased their MVPA by 21.0 min/week, 95% CI [5.9, 36.1], p = .005, at 3 months, and this change was sustained at 9 months, with no between-group differences. Conclusions Although the structure of procedures for targeting social networks varied across intervention groups, the functional effect of these procedures on physical activity was similar. Future research should evaluate whether more powerful reinforcers improve the effects of social network interventions. Trial Registration Number NCT01142804 PMID:27405724

  15. Bias in groundwater samples caused by wellbore flow

    USGS Publications Warehouse

    Reilly, Thomas E.; Franke, O. Lehn; Bennett, Gordon D.

    1989-01-01

    Proper design of physical installations and sampling procedures for groundwater monitoring networks is critical for the detection and analysis of possible contaminants. Monitoring networks associated with known contaminant sources sometimes include an array of monitoring wells with long well screens. The purpose of this paper is: (a) to report the results of a numerical experiment indicating that significant borehole flow can occur within long well screens installed in homogeneous aquifers with very small head differences in the aquifer (less than 0.01 feet between the top and bottom of the screen); (b) to demonstrate that contaminant monitoring wells with long screens may completely fail to fulfill their purpose in many groundwater environments.

  16. KAMEDIN: a telemedicine system for computer supported cooperative work and remote image analysis in radiology.

    PubMed

    Handels, H; Busch, C; Encarnação, J; Hahn, C; Kühn, V; Miehe, J; Pöppl, S I; Rinast, E; Rossmanith, C; Seibert, F; Will, A

    1997-03-01

    The software system KAMEDIN (Kooperatives Arbeiten und MEdizinische Diagnostik auf Innovativen Netzen) is a multimedia telemedicine system for exchange, cooperative diagnostics, and remote analysis of digital medical image data. It provides components for visualisation, processing, and synchronised audio-visual discussion of medical images. Techniques of computer supported cooperative work (CSCW) synchronise user interactions during a teleconference. Visibility of both the local and remote cursor on the conference workstations facilitates telepointing and reinforces the conference partner's telepresence. Audio communication during teleconferences is supported by an integrated audio component. Furthermore, brain tissue segmentation with artificial neural networks can be performed on an external supercomputer as a remote image analysis procedure. KAMEDIN is designed as a low-cost CSCW tool for ISDN-based telecommunication; however, it can be used on any network supporting TCP/IP. In a field test, KAMEDIN was installed in 15 clinics and medical departments to validate the system's usability. The telemedicine system KAMEDIN has been developed, tested, and evaluated within a research project sponsored by German Telekom.

  17. Multi-Objective Community Detection Based on Memetic Algorithm

    PubMed Central

    2015-01-01

    Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single-objective optimization methods have intrinsic drawbacks in identifying multiple significant community structures, some methods formulate community detection as a multi-objective problem and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining a multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in the dominant population are set as initial individuals for the local search procedure. Then, a new direction vector, named the pseudonormal vector, is proposed to integrate the two objective functions into a single fitness function. Finally, a network-specific local search strategy based on the label propagation rule is expanded to search the local optimal solutions efficiently. The extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on the influence of the local search procedure demonstrate that it can speed up convergence to better partitions and make the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate that the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks, which are beneficial for analyzing networks at multiple resolution levels. PMID:25932646

  18. Multi-objective community detection based on memetic algorithm.

    PubMed

    Wu, Peng; Pan, Li

    2015-01-01

    Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single-objective optimization methods have intrinsic drawbacks in identifying multiple significant community structures, some methods formulate community detection as a multi-objective problem and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining a multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in the dominant population are set as initial individuals for the local search procedure. Then, a new direction vector, named the pseudonormal vector, is proposed to integrate the two objective functions into a single fitness function. Finally, a network-specific local search strategy based on the label propagation rule is expanded to search the local optimal solutions efficiently. The extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on the influence of the local search procedure demonstrate that it can speed up convergence to better partitions and make the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate that the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks, which are beneficial for analyzing networks at multiple resolution levels.
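    The label propagation rule that the local search builds on can be sketched on its own (a generic version of the rule, not the authors' extended network-specific strategy): each node repeatedly adopts the most frequent label among its neighbours, keeping its current label on ties, until no label changes.

    ```python
    def label_propagation(adj, max_iter=100):
        """Minimal label-propagation community detection.
        adj : dict mapping each node to a list of its neighbours."""
        labels = {v: v for v in adj}                 # each node starts in its own community
        for _ in range(max_iter):
            changed = False
            for v in adj:
                counts = {}
                for u in adj[v]:                     # count neighbour labels
                    counts[labels[u]] = counts.get(labels[u], 0) + 1
                if not counts:
                    continue                         # isolated node keeps its label
                best = max(counts.values())
                if counts.get(labels[v], 0) < best:  # keep current label on ties
                    # deterministic tie-break: smallest among the most frequent
                    labels[v] = min(l for l in counts if counts[l] == best)
                    changed = True
            if not changed:
                break
        return labels

    # Two disjoint triangles: labels converge within each component
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1],
           3: [4, 5], 4: [3, 5], 5: [3, 4]}
    labels = label_propagation(adj)
    ```

    In the memetic algorithm this rule is applied as a refinement step to candidate partitions produced by the evolutionary operators, rather than from the all-singletons start shown here.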

  19. Social Network Type and Subjective Well-being in a National Sample of Older Americans

    PubMed Central

    Litwin, Howard; Shiovitz-Ezra, Sharon

    2011-01-01

    Purpose: The study considers the social networks of older Americans, a population for whom there have been few studies of social network type. It also examines associations between network types and well-being indicators: loneliness, anxiety, and happiness. Design and Methods: A subsample of persons aged 65 years and older from the first wave of the National Social Life, Health, and Aging Project was employed (N = 1,462). We applied K-means cluster analysis to derive social network types using 7 criterion variables. In the multivariate stage, the well-being outcomes were regressed on the network type construct and on background and health characteristics by means of logistic regression. Results: Five social network types were derived: “diverse,” “friend,” “congregant,” “family,” and “restricted.” Social network type was found to be associated with each of the well-being indicators after adjusting for demographic and health confounders. Respondents embedded in network types characterized by greater social capital tended to exhibit better well-being in terms of less loneliness, less anxiety, and greater happiness. Implications: Knowledge about differing network types should make gerontological practitioners more aware of the varying interpersonal milieus in which older people function. Adopting network type assessment as an integral part of intake procedures and tracing network shifts over time can serve as a basis for risk assessment as well as a means for determining the efficacy of interventions. PMID:21097553
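    The K-means step used to derive the network types is the standard algorithm: respondents are points in the space of the criterion variables, and clusters emerge by alternating nearest-centre assignment with centre updates. A self-contained sketch (the data, features and k are illustrative, not the study's seven NSHAP criterion variables):

    ```python
    import numpy as np

    def kmeans(X, k, iters=50):
        """Plain K-means with deterministic farthest-point initialisation.
        X : (n_samples, n_features) array of criterion variables."""
        centers = [X[0]]
        for _ in range(1, k):
            # start each new centre at the point farthest from existing centres
            d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
            centers.append(X[int(d2.argmax())])
        centers = np.array(centers, dtype=float)
        for _ in range(iters):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            assign = d2.argmin(axis=1)               # nearest-centre assignment
            for j in range(k):
                if np.any(assign == j):              # recompute each centre
                    centers[j] = X[assign == j].mean(axis=0)
        return assign, centers

    # Two well-separated groups of "respondents" in a 2-variable space
    X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                  [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
    assign, centers = kmeans(X, k=2)
    ```

    The cluster index of each respondent is then the "network type" carried into the regression stage as a categorical predictor.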

  20. Enhancing to method for extracting Social network by the relation existence

    NASA Astrophysics Data System (ADS)

    Elfida, Maria; Matyuso Nasution, M. K.; Sitompul, O. S.

    2018-01-01

    Obtaining trustworthy information about a social network extracted from the Web requires a reliable method, but optimal results require a method that can cope with the complexity of the information resources. This paper aims to show how to overcome the constraints of social network extraction that lead to high complexity, by identifying relationships among social actors. By changing the treatment in the procedure used, we obtain a complexity smaller than that of the previous procedure. This is also demonstrated in an experiment using the denial sample.

  1. Detection of Anomalies in Hydrometric Data Using Artificial Intelligence Techniques

    NASA Astrophysics Data System (ADS)

    Lauzon, N.; Lence, B. J.

    2002-12-01

    This work focuses on the detection of anomalies in hydrometric data sequences, such as 1) outliers, which are individual data having statistical properties that differ from those of the overall population; 2) shifts, which are sudden changes over time in the statistical properties of the historical records of data; and 3) trends, which are systematic changes over time in the statistical properties. For the purpose of the design and management of water resources systems, it is important to be aware of these anomalies in hydrometric data, for they can induce a bias in the estimation of water quantity and quality parameters. These anomalies may be viewed as specific patterns affecting the data, and therefore pattern recognition techniques can be used for identifying them. However, the number of possible patterns is very large for each type of anomaly and consequently large computing capacities are required to account for all possibilities using the standard statistical techniques, such as cluster analysis. Artificial intelligence techniques, such as the Kohonen neural network and fuzzy c-means, are clustering techniques commonly used for pattern recognition in several areas of engineering and have recently begun to be used for the analysis of natural systems. They require much less computing capacity than the standard statistical techniques, and therefore are well suited for the identification of outliers, shifts and trends in hydrometric data. This work constitutes a preliminary study, using synthetic data representing hydrometric data that can be found in Canada. The analysis of the results obtained shows that the Kohonen neural network and fuzzy c-means are reasonably successful in identifying anomalies. This work also addresses the problem of uncertainties inherent to the calibration procedures that fit the clusters to the possible patterns for both the Kohonen neural network and fuzzy c-means. 
Indeed, for the same database, different sets of clusters can be established with these calibration procedures. A simple method for analyzing uncertainties associated with the Kohonen neural network and fuzzy c-means is developed here. The method combines the results from several sets of clusters, either from the Kohonen neural network or fuzzy c-means, so as to provide an overall diagnosis as to the identification of outliers, shifts and trends. The results indicate an improvement in the performance for identifying anomalies when the method of combining cluster sets is used, compared with when only one cluster set is used.
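    Of the two clustering techniques named above, fuzzy c-means is easy to sketch, and its membership matrix gives a natural anomaly flag: a data point whose largest membership is low fits no cluster well. A minimal version (illustrative 1-D data, not the study's synthetic hydrometric series; the 0.6 threshold is an arbitrary choice for the example):

    ```python
    import numpy as np

    def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
        """Minimal fuzzy c-means. Returns the membership matrix U
        (n_samples x c) and the cluster centres."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
        for _ in range(iters):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            d = np.maximum(d, 1e-12)               # guard against division by zero
            inv = d ** (-2.0 / (m - 1.0))          # standard FCM membership update
            U = inv / inv.sum(axis=1, keepdims=True)
        return U, centers

    # Hypothetical 1-D series: two tight regimes plus one point between them
    X = np.array([[0.0], [0.1], [0.2], [10.0], [10.1], [10.2], [5.0]])
    U, centers = fuzzy_cmeans(X, c=2)
    outliers = np.where(U.max(axis=1) < 0.6)[0]    # weak membership everywhere
    ```

    Rerunning the fit from different initialisations yields different cluster sets; combining the flags across runs, as the study's uncertainty method does, damps the sensitivity of the diagnosis to any single calibration.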

  2. Robust Analysis of Network-Based Real-Time Kinematic for GNSS-Derived Heights.

    PubMed

    Bae, Tae-Suk; Grejner-Brzezinska, Dorota; Mader, Gerald; Dennis, Michael

    2015-10-26

    New guidelines and procedures for real-time (RT) network-based solutions are required in order to support Global Navigation Satellite System (GNSS) derived heights. Two kinds of experiments were carried out to analyze the performance of network-based real-time kinematic (RTK) solutions. New test marks were installed in different surrounding environments, and existing GPS benchmarks were used to analyze the effect of different factors, such as baseline length and antenna type, on the final accuracy and reliability of the height estimation. The RT solutions are categorized into three groups: single-base RTK, multiple-epoch network RTK (mRTN), and single-epoch network RTK (sRTN). The RTK solution can be biased by up to 9 mm depending on the surrounding environment, but there was no notable bias for a longer baseline to the reference station (about 30 km). In addition, the occupation time for network RTK was investigated in various cases. There is no explicit bias in the solution for different durations, but smoother results were obtained for longer durations. Further investigation is needed into the effect of changing the occupation time between solutions and into the possibility of using single-epoch solutions in the precise determination of heights by GNSS.

  3. The Emergence of Selective Attention through Probabilistic Associations between Stimuli and Actions.

    PubMed

    Simione, Luca; Nolfi, Stefano

    2016-01-01

In this paper we show how a multilayer neural network, trained to master a context-dependent task in which the action co-varies with one stimulus in a first context and with a second stimulus in an alternative context, exhibits selective attention, i.e., the filtering out of irrelevant information. This effect is rather robust and is observed in several variations of the experiment in which the characteristics of the network as well as of the training procedure are varied. Our result demonstrates how the filtering out of irrelevant information can originate spontaneously as a consequence of the regularities present in the context-dependent training set, and therefore does not necessarily depend on specific architectural constraints. The post-evaluation of the network in an instructed-delay experimental scenario shows that the behaviour of the network is consistent with data collected in neuropsychological studies. The analysis of the network at the end of the training process indicates that selective attention originates from the effects of relevant and irrelevant stimuli, mediated by context-dependent and context-independent bidirectional associations between stimuli and actions that the network extracts during learning.

  4. An empirical Bayes approach to network recovery using external knowledge.

    PubMed

    Kpogbezan, Gino B; van der Vaart, Aad W; van Wieringen, Wessel N; Leday, Gwenaël G R; van de Wiel, Mark A

    2017-09-01

Reconstruction of a high-dimensional network may benefit substantially from the inclusion of prior knowledge on the network topology. In the case of gene interaction networks, such knowledge may come for instance from pathway repositories like KEGG, or be inferred from data of a pilot study. The Bayesian framework provides a natural means of including such prior knowledge. Based on a Bayesian Simultaneous Equation Model, we develop an appealing Empirical Bayes (EB) procedure that automatically assesses the agreement of the prior knowledge with the data at hand. We use a variational Bayes method to approximate posterior densities and compare its accuracy with that of a Gibbs sampling strategy. Our method is computationally fast and can outperform known competitors. In a simulation study, we show that prior data can greatly improve the reconstruction of the network when accurate, and need not harm it when wrong. We demonstrate the benefits of the method in an analysis of gene expression data from GEO. In particular, the edges of the recovered network have superior reproducibility (compared to that of competitors) over resampled versions of the data. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
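    A greatly simplified stand-in for the prior-informed recovery idea (not the paper's EB/variational procedure): node-wise ridge regressions in which edges supported by the prior receive a lighter penalty. All names, penalty values and the threshold are hypothetical.

```python
import numpy as np

def prior_weighted_network(X, prior, lam_in=0.1, lam_out=1.0, thresh=0.2):
    """Regress each gene on the others; penalize edges absent from the prior more.

    X     : (n_samples, p) expression matrix
    prior : (p, p) 0/1 matrix of prior edges (e.g. from a pathway repository)
    """
    n, p = X.shape
    B = np.zeros((p, p))
    for j in range(p):
        idx = [k for k in range(p) if k != j]
        # lighter ridge penalty on coefficients the prior supports
        lam = np.where(prior[j, idx] > 0, lam_in, lam_out)
        Xo = X[:, idx]
        b = np.linalg.solve(Xo.T @ Xo + np.diag(lam * n), Xo.T @ X[:, j])
        B[j, idx] = b
    # symmetrize and threshold to obtain an undirected edge set
    return (np.abs(B) + np.abs(B.T)) / 2 > thresh
```

    When the prior is accurate, true edges are shrunk less and survive the threshold; when it is wrong, the heavier default penalty still leaves truly strong edges recoverable.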

  5. Health and Environment Linked for Information Exchange in Atlanta (HELIX-Atlanta): A Pilot Tracking System

    NASA Technical Reports Server (NTRS)

    Rickman, Doug; Shire, J.; Qualters, J.; Mitchell, K.; Pollard, S.; Rao, R.; Kajumba, N.; Quattrochi, D.; Estes, M., Jr.; Meyer, P.; hide

    2009-01-01

Objectives. To provide an overview of four environmental public health surveillance projects developed by CDC and its partners for the Health and Environment Linked for Information Exchange, Atlanta (HELIX-Atlanta) and to illustrate common issues and challenges encountered in developing an environmental public health tracking system. Methods. HELIX-Atlanta, initiated in October 2003 to develop data linkage and analysis methods that can be used by the National Environmental Public Health Tracking Network (Tracking Network), conducted four projects. We highlight the projects' work, assess attainment of the HELIX-Atlanta goals and discuss three surveillance attributes. Results. Among the major challenges was the complexity of analytic issues, which required multidisciplinary teams with technical expertise. This expertise and the data resided across multiple organizations. Conclusions. Establishing formal procedures for sharing data, defining data analysis standards, automating analyses, and committing staff with appropriate expertise are needed to support wide implementation of environmental public health tracking.

  6. When the Sky Falls NASA's Response to Bright Bolide Events Over Continental USA

    NASA Technical Reports Server (NTRS)

    Blaauw, R. C.; Cooke, W. J.; Kingery, A. M.; Moser, D. E.

    2015-01-01

    Being the only U.S. Government entity charged with monitoring the meteor environment, the Meteoroid Environment Office (MEO) has deployed a network of allsky and wide field meteor cameras, along with the appropriate software tools to quickly analyze data from these systems. However, the coverage of this network is still quite limited, forcing the incorporation of data from other cameras posted to the internet in analyzing many of the fireballs reported by the public and media. Information on these bright events often needs to be reported to NASA Headquarters by noon the following day; thus a procedure has been developed that determines the analysis process for a given fireball event based on the types and amount of data available. The differences between these analysis processes are shown by looking at four meteor events that the MEO responded to, all of which were large enough to produce meteorites.

  7. Cluster analysis of word frequency dynamics

    NASA Astrophysics Data System (ADS)

    Maslennikova, Yu S.; Bochkarev, V. V.; Belashova, I. A.

    2015-01-01

This paper describes the analysis and modelling of word usage frequency time series. In a previous study, an assumption was put forward that all word usage frequencies have uniform dynamics approaching the shape of a Gaussian function. This assumption can be checked using the frequency dictionaries of the Google Books Ngram database. This database includes 5.2 million books published between 1500 and 2008. The corpus contains over 500 billion words in American English, British English, French, German, Spanish, Russian, Hebrew, and Chinese. We clustered time series of word usage frequencies using a Kohonen neural network. The similarity between input vectors was estimated using several algorithms. As a result of the neural network training procedure, more than ten different forms of time series were found. They describe the dynamics of word usage frequencies from the birth to the death of individual words. Different groups of word forms were found to have different dynamics of word usage frequency variations.
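    The clustering step can be sketched with a minimal 1-D Kohonen map trained on rows of a frequency matrix. This is an illustrative toy only: unit counts, learning rates and decay schedules are arbitrary choices, not those of the study.

```python
import numpy as np

def train_som(series, n_units=4, epochs=200, lr0=0.5, sigma0=1.0, seed=0):
    """Train a 1-D Kohonen map on rows of `series` (each row one time series)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_units, series.shape[1]))
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)    # shrinking neighborhood
        for x in series[rng.permutation(len(series))]:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))  # best-matching unit
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)      # pull BMU and neighbors toward x
    return w

def assign(series, w):
    """Map each time series to its nearest prototype (cluster label)."""
    return np.argmin(((series[:, None] - w[None]) ** 2).sum(axis=2), axis=1)
```

    After training, the prototype rows of `w` play the role of the characteristic time-series shapes the abstract mentions.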

  8. Application of artificial neural networks for conformity analysis of fuel performed with an optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Possetti, Gustavo Rafael Collere; Coradin, Francelli Klemba; Côcco, Lílian Cristina; Yamamoto, Carlos Itsuo; de Arruda, Lucia Valéria Ramos; Falate, Rosane; Muller, Marcia; Fabris, José Luís

    2008-04-01

Liquid fuel quality control is an important issue that brings benefits for the state, for consumers and for the environment. Conformity analysis, especially for gasoline, demands a rigorous sampling technique among gas stations and other economic agents, followed by a series of standard physicochemical tests. Such procedures are commonly expensive and time-consuming and, moreover, a specialist is often required to carry out the tasks. These drawbacks make the development of alternative analysis tools an important research field. The fuel refractive index is an additional parameter that can support conformity analysis, as are prospective optical fiber sensors, which operate as transducers with singular properties. When this parameter is correlated with the sample density, it becomes possible to determine conformity zones that cannot be defined analytically. This work presents an application of artificial neural networks based on Radial Basis Functions to determine these zones. A set of 45 gasoline samples, collected in several gas stations and previously analyzed according to the rules of Agência Nacional do Petróleo, Gás Natural e Biocombustíveis, a Brazilian regulatory agency, constituted the database used to build two neural networks. The input variables of the first network are the samples' refractive indices, measured with an Abbe refractometer, and the samples' densities, measured with a digital densimeter. For the second network the input variables included, besides the samples' densities, the wavelength response of a long-period grating to the samples' refractive indices. The grating was written in an optical fiber using the point-to-point technique by submitting the fiber to consecutive electrical arcs from a splice machine. The output variables of both Radial Basis Function networks are the conformity status of each sample, according to reports of tests carried out following the American Society for Testing and Materials and/or Brazilian Association of Technical Rules standards. A subset of 35 samples, randomly chosen from the database, was used to design and calibrate (train) both networks. The two network topologies (number of Radial Basis Function neurons in the hidden layer and function radius) were chosen so as to minimize the root mean square error. The subset composed of the other 10 samples was used to validate the final network architectures. The obtained results demonstrate that both networks achieve good predictive capability.
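    The core of such a classifier can be sketched as a Gaussian Radial Basis Function network whose output weights are fit by regularized least squares. The centers, radius and toy two-feature data below are hypothetical, not the paper's calibration.

```python
import numpy as np

def rbf_features(X, centers, radius):
    """Gaussian RBF activations of each sample with respect to each center."""
    d2 = ((X[:, None, :] - centers[None]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * radius ** 2))

def train_rbf(X, y, centers, radius):
    """Solve ridge-regularized least squares for the output-layer weights."""
    Phi = rbf_features(X, centers, radius)
    return np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(len(centers)), Phi.T @ y)

def predict(X, centers, radius, w):
    """Network output; thresholding at 0.5 gives a conform/nonconform decision."""
    return rbf_features(X, centers, radius) @ w
```

    In the paper's setting the two input features would be (refractive index, density) or (grating wavelength response, density); here generic coordinates are used.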

  9. Power analysis and trend detection for water quality monitoring data. An application for the Greater Yellowstone Inventory and Monitoring Network

    USGS Publications Warehouse

    Irvine, Kathryn M.; Manlove, Kezia; Hollimon, Cynthia

    2012-01-01

    An important consideration for long term monitoring programs is determining the required sampling effort to detect trends in specific ecological indicators of interest. To enhance the Greater Yellowstone Inventory and Monitoring Network’s water resources protocol(s) (O’Ney 2006 and O’Ney et al. 2009 [under review]), we developed a set of tools to: (1) determine the statistical power for detecting trends of varying magnitude in a specified water quality parameter over different lengths of sampling (years) and different within-year collection frequencies (monthly or seasonal sampling) at particular locations using historical data, and (2) perform periodic trend analyses for water quality parameters while addressing seasonality and flow weighting. A power analysis for trend detection is a statistical procedure used to estimate the probability of rejecting the hypothesis of no trend when in fact there is a trend, within a specific modeling framework. In this report, we base our power estimates on using the seasonal Kendall test (Helsel and Hirsch 2002) for detecting trend in water quality parameters measured at fixed locations over multiple years. We also present procedures (R-scripts) for conducting a periodic trend analysis using the seasonal Kendall test with and without flow adjustment. This report provides the R-scripts developed for power and trend analysis, tutorials, and the associated tables and graphs. The purpose of this report is to provide practical information for monitoring network staff on how to use these statistical tools for water quality monitoring data sets.
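    A power analysis of this kind can be sketched by Monte Carlo simulation: generate series with a known trend and count how often a Kendall-type trend test rejects. The sketch below uses the plain Mann-Kendall form via `scipy.stats.kendalltau` (the seasonal Kendall test combines such statistics across seasons); all parameters are illustrative.

```python
import numpy as np
from scipy.stats import kendalltau

def mk_power(slope, sigma, n_years, n_sims=500, alpha=0.05, seed=0):
    """Estimated probability that a Mann-Kendall test detects a linear trend.

    slope  : true trend per year, sigma : noise standard deviation.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(n_years)
    hits = 0
    for _ in range(n_sims):
        y = slope * t + rng.normal(0, sigma, n_years)
        _, p = kendalltau(t, y)     # Mann-Kendall = Kendall's tau against time
        hits += p < alpha
    return hits / n_sims
```

    Tabulating `mk_power` over record lengths and slopes yields the kind of sampling-effort guidance the report describes.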

  10. A PC-based computer package for automatic detection and location of earthquakes: Application to a seismic network in eastern Sicily (Italy)

    NASA Astrophysics Data System (ADS)

    Patanè, Domenico; Ferrari, Ferruccio; Giampiccolo, Elisabetta; Gresta, Stefano

Few automated data acquisition and processing systems operate on mainframes, some run on UNIX-based workstations and others on personal computers, equipped with either DOS/WINDOWS or UNIX-derived operating systems. Several large and complex software packages for automatic and interactive analysis of seismic data have been developed in recent years (mainly for UNIX-based systems). Some of these programs use a variety of artificial intelligence techniques. The first operational version of a new software package, named PC-Seism, for analyzing seismic data from a local network is presented in Patanè et al. (1999). This package, composed of three separate modules, provides an example of a new generation of visual object-oriented programs for interactive and automatic seismic data processing running on a personal computer. In this work, we mainly discuss the automatic procedures implemented in the ASDP (Automatic Seismic Data-Processing) module and a real-time application to data acquired by a seismic network running in eastern Sicily. This software uses a multi-algorithm approach and a new procedure MSA (multi-station analysis) for signal detection, phase grouping and event identification and location. It is designed for efficient and accurate processing of local earthquake records provided by single-site and array stations. Results from ASDP processing of two different data sets recorded at Mt. Etna volcano by a regional network are analyzed to evaluate its performance. By comparing the ASDP pickings with those revised manually, the detection and subsequently the location capabilities of this software are assessed. The first data set is composed of 330 local earthquakes recorded in the Mt. Etna area during 1997 by the telemetry analog seismic network. The second data set comprises about 970 automatic locations of more than 2600 local events recorded at Mt. Etna during the last eruption (July 2001) by the present network. 
For the former data set, a comparison of the automatic results with the manual picks indicates that the ASDP module can accurately pick 80% of the P-waves and 65% of S-waves. The on-line application on the latter data set shows that automatic locations are affected by larger errors, due to the preliminary setting of the configuration parameters in the program. However, both automatic ASDP and manual hypocenter locations are comparable within the estimated error bounds. New improvements of the PC-Seism software for on-line analysis are also discussed.

  11. 2D PWV monitoring of a wide and orographically complex area with a low-density GNSS network

    NASA Astrophysics Data System (ADS)

    Ferrando, Ilaria; Federici, Bianca; Sguerso, Domenico

    2018-04-01

This study presents an innovative procedure to monitor the precipitable water vapor (PWV) content of a wide and orographically complex area with low-density networks. The procedure, termed G4M (GNSS for Meteorology), has been developed in a geographic information system (GIS) environment using the free and open source GRASS GIS software (https://grass.osgeo.org). The G4M input data are zenith total delay estimates obtained from GNSS permanent station network adjustment, together with pressure (P) and temperature (T) observations from existing infrastructure networks with different geographic distributions in the study area. In spite of the sparse sensor distribution, the procedure produces 2D maps with high spatiotemporal resolution (up to 250 m and 6 min) based on a simplified mathematical model, including data interpolation, conceived by the authors to describe the atmosphere's physics. In addition to PWV maps, the procedure provides ΔPWV and heterogeneity index maps: the former represent PWV variations with respect to a "calm" moment, which are useful for monitoring the PWV evolution; the latter are promising indicators for localizing severe meteorological events in time and space. This innovative procedure is compared with meteorological simulations, and an application to a severe event that occurred in Genoa (Italy) is presented.
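    One ingredient of such 2D mapping, interpolating scattered station observations onto a regular grid, can be sketched with inverse-distance weighting. This is a generic illustration, not the G4M model itself; all names are hypothetical.

```python
import numpy as np

def idw_grid(xy, values, grid_x, grid_y, power=2.0):
    """Interpolate scattered station values onto a regular grid (inverse-distance weighting).

    xy     : (n_stations, 2) station coordinates
    values : (n_stations,) observed values (e.g. PWV per station)
    """
    gx, gy = np.meshgrid(grid_x, grid_y)
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    d = np.linalg.norm(pts[:, None, :] - xy[None], axis=2)
    d = np.maximum(d, 1e-9)                 # avoid division by zero at stations
    w = 1.0 / d ** power
    z = (w * values).sum(axis=1) / w.sum(axis=1)
    return z.reshape(gx.shape)
```

    Differencing two such maps (current minus a "calm" reference epoch) gives the ΔPWV-style product the abstract describes.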

  12. Network-Based Method for Identifying Co-Regeneration Genes in Bone, Dentin, Nerve and Vessel Tissues

    PubMed Central

    Pan, Hongying; Zhang, Yu-Hang; Feng, Kaiyan; Kong, XiangYin; Cai, Yu-Dong

    2017-01-01

    Bone and dental diseases are serious public health problems. Most current clinical treatments for these diseases can produce side effects. Regeneration is a promising therapy for bone and dental diseases, yielding natural tissue recovery with few side effects. Because soft tissues inside the bone and dentin are densely populated with nerves and vessels, the study of bone and dentin regeneration should also consider the co-regeneration of nerves and vessels. In this study, a network-based method to identify co-regeneration genes for bone, dentin, nerve and vessel was constructed based on an extensive network of protein–protein interactions. Three procedures were applied in the network-based method. The first procedure, searching, sought the shortest paths connecting regeneration genes of one tissue type with regeneration genes of other tissues, thereby extracting possible co-regeneration genes. The second procedure, testing, employed a permutation test to evaluate whether possible genes were false discoveries; these genes were excluded by the testing procedure. The last procedure, screening, employed two rules, the betweenness ratio rule and interaction score rule, to select the most essential genes. A total of seventeen genes were inferred by the method, which were deemed to contribute to co-regeneration of at least two tissues. All these seventeen genes were extensively discussed to validate the utility of the method. PMID:28974058
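    The searching and testing procedures can be sketched on a toy interaction graph: collect interior nodes of shortest paths between two gene sets, then estimate how often a candidate still appears when the labels are shuffled. This simplified sketch follows one shortest path per pair (the paper's method enumerates paths on a weighted network) and uses hypothetical names throughout.

```python
from collections import deque
import random

def shortest_path_nodes(adj, sources, targets):
    """Interior nodes on a BFS shortest path from each source to each target."""
    found = set()
    for s in sources:
        parent = {s: None}
        q = deque([s])
        while q:                      # breadth-first search from s
            u = q.popleft()
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    q.append(v)
        for t in targets:
            if t in parent:           # walk back from t, collecting interior nodes
                node = parent[t]
                while node is not None and node != s:
                    found.add(node)
                    node = parent[node]
    return found

def permutation_test(adj, sources, targets, candidate, n_perm=200, seed=0):
    """Fraction of label-shuffled runs in which `candidate` still appears
    (a rough null for flagging false discoveries)."""
    rng = random.Random(seed)
    nodes = list(adj)
    hits = 0
    for _ in range(n_perm):
        perm = nodes[:]
        rng.shuffle(perm)
        relabel = dict(zip(nodes, perm))
        s2 = [relabel[s] for s in sources]
        t2 = [relabel[t] for t in targets]
        hits += candidate in shortest_path_nodes(adj, s2, t2)
    return hits / n_perm
```

    Candidates with a high permutation frequency are the likely false discoveries the testing procedure excludes.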

  13. Network-Based Method for Identifying Co-Regeneration Genes in Bone, Dentin, Nerve and Vessel Tissues.

    PubMed

    Chen, Lei; Pan, Hongying; Zhang, Yu-Hang; Feng, Kaiyan; Kong, XiangYin; Huang, Tao; Cai, Yu-Dong

    2017-10-02

    Bone and dental diseases are serious public health problems. Most current clinical treatments for these diseases can produce side effects. Regeneration is a promising therapy for bone and dental diseases, yielding natural tissue recovery with few side effects. Because soft tissues inside the bone and dentin are densely populated with nerves and vessels, the study of bone and dentin regeneration should also consider the co-regeneration of nerves and vessels. In this study, a network-based method to identify co-regeneration genes for bone, dentin, nerve and vessel was constructed based on an extensive network of protein-protein interactions. Three procedures were applied in the network-based method. The first procedure, searching, sought the shortest paths connecting regeneration genes of one tissue type with regeneration genes of other tissues, thereby extracting possible co-regeneration genes. The second procedure, testing, employed a permutation test to evaluate whether possible genes were false discoveries; these genes were excluded by the testing procedure. The last procedure, screening, employed two rules, the betweenness ratio rule and interaction score rule, to select the most essential genes. A total of seventeen genes were inferred by the method, which were deemed to contribute to co-regeneration of at least two tissues. All these seventeen genes were extensively discussed to validate the utility of the method.

  14. Analysis of the U.S. geological survey streamgaging network

    USGS Publications Warehouse

    Scott, A.G.

    1987-01-01

This paper summarizes the results from the first 3 years of a 5-year cost-effectiveness study of the U.S. Geological Survey streamgaging network. The objective of the study is to define and document the most cost-effective means of furnishing streamflow information. In the first step of this study, data uses were identified for 3,493 continuous-record stations currently being operated in 32 States. In the second step, an evaluation of alternative methods of providing streamflow information was performed: flow-routing models and regression models were developed for estimating daily flows at 251 of the 3,493 stations analyzed. In the third step of the analysis, relationships were developed between the accuracy of the streamflow records and the operating budget. The weighted standard error for all stations, with current operating procedures, was 19.9 percent. By altering field activities, as determined by the analyses, this could be reduced to 17.8 percent. The existing streamgaging networks in four Districts were further analyzed to determine the impact that satellite telemetry would have on cost effectiveness. Satellite telemetry was not found to be cost effective on the basis of hydrologic data collection alone, given present costs of equipment and operation. Additional study results are discussed.

  15. Practice-Based Research Networks, Part II: A Descriptive Analysis of the Athletic Training Practice-Based Research Network in the Secondary School Setting

    PubMed Central

    McLeod, Tamara C. Valovich; Lam, Kenneth C.; Bay, R. Curtis; Sauers, Eric L.; Valier, Alison R. Snyder

    2012-01-01

Context Analysis of health care service models requires the collection and evaluation of basic practice characterization data. Practice-based research networks (PBRNs) provide a framework for gathering data useful in characterizing clinical practice. Objective To describe preliminary secondary school setting practice data from the Athletic Training Practice-Based Research Network (AT-PBRN). Design Descriptive study. Setting Secondary school athletic training facilities within the AT-PBRN. Patients or Other Participants Clinicians (n = 22) and their patients (n = 2523) from the AT-PBRN. Main Outcome Measure(s) A Web-based survey was used to obtain data on clinical practice site (CPS) and clinician characteristics. Patient and practice characteristics were obtained via deidentified electronic medical record data collected between September 1, 2009, and April 1, 2011. Descriptive data regarding the clinician and CPS practice characteristics are reported as percentages and frequencies. Descriptive analysis of patient encounters and practice characteristic data was performed, with the percentages and frequencies of the type of injuries recorded at initial evaluation, type of treatment received at initial evaluation, daily treatment, and daily sign-in procedures. Results The AT-PBRN had secondary school sites in 7 states, and most athletic trainers at those sites (78.2%) had less than 5 years of experience. The secondary school sites within the AT-PBRN documented 2523 patients treated across 3140 encounters. Patients most frequently sought care for a current injury (61.3%), followed by preventive services (24.0%) and new injuries (14.7%). The most common diagnoses were ankle sprain/strain (17.9%), hip sprain/strain (12.5%), concussion (12.0%), and knee pain (2.5%). The most frequent procedures were athletic trainer evaluation (53.9%), hot- or cold-pack application (26.0%), strapping (10.3%), and therapeutic exercise (5.7%). 
The median number of treatments per injury was 3 (interquartile range = 2, 4; range = 2–19). Conclusions These preliminary data describe services provided by clinicians within the AT-PBRN and demonstrate the usefulness of the PBRN model for obtaining such data. PMID:23068594

  16. A Measurement Plane for Optical Networks to Manage Emergency Events

    NASA Astrophysics Data System (ADS)

    Tego, E.; Carciofi, C.; Grazioso, P.; Petrini, V.; Pompei, S.; Matera, F.; Attanasio, V.; Nastri, E.; Restuccia, E.

    2017-11-01

In this work, we present a wide geographical area optical network test bed that adopts the mPlane measurement plane for monitoring its performance and managing software-defined network approaches, with specific tests and procedures dedicated to responding to disaster events and supporting emergency networks. The test bed includes FTTX accesses and is currently implemented to support future 5G wireless services with slicing procedures based on Carrier Ethernet. The characteristics of this platform have been experimentally tested in the case of a damage-causing link failure and traffic congestion, showing fast reaction to these disastrous events and allowing the user to restore the initial QoS parameters.

  17. Freight Transportation Energy Use : Volume 3. Freight Network and Operations Database.

    DOT National Transportation Integrated Search

    1979-07-01

    The data sources, procedures, and assumptions used to generate the TSC national freight network and operations database are documented. National rail, highway, waterway, and pipeline networks are presented, and estimates of facility capacity, travel ...

  18. Site selection for the future stations of the french permanent broadband network

    NASA Astrophysics Data System (ADS)

    Vergne, Jérôme; Charade, Olivier

    2013-04-01

RESIF (REseau SIsmologique et géodésique Français) is a new French research infrastructure dedicated to the observation of earth deformation, based on seismic and geodetic instruments mainly located in France. One of its major components, called RESIF-CLB (Construction Large Bande), is devoted to the evolution of the permanent broadband seismic network in metropolitan France, with the objective of complementing the 45 existing stations with ~155 new stations within the next eight years. This network will be used for various scientific objectives, including deep structure imaging and national seismicity monitoring. The chosen network topology consists of a backbone of homogeneously distributed stations (long wavelength array) completed by additional stations in seismically active regions. Management of the RESIF-CLB project is carried out by the technical division of INSU (Institut National des Sciences de l'Univers), which will rely on eight regional observatories and the CEA-LDG for the construction and operation of the stations. To optimize the performance of the network, we put a strong emphasis on the standardization of the stations in terms of vault types, scientific and technical instrumentation, and operation procedures. We also set up a site selection procedure requiring that every potential site be tested for at least 3 weeks with a minimalist installation. An analysis of the continuous ambient noise records is then included in a standardized report submitted to all committed partners for acceptance. During the last two years, about 60 potential new sites have been tested, spanning various places and environments. We present a review of the seismic noise measurements at these sites and discuss the influence of different types of noise sources depending on the frequency band of interest. For example, we show that regional population distribution can be used as a proxy to infer the noise level at frequencies higher than 1 Hz. Based on similar noise analyses at existing permanent sites, we also discuss the benefit of our site testing procedure for estimating the long-period noise level once a station is installed.

  19. Systematic flood modelling to support flood-proof urban design

    NASA Astrophysics Data System (ADS)

    Bruwier, Martin; Mustafa, Ahmed; Aliaga, Daniel; Archambeau, Pierre; Erpicum, Sébastien; Nishida, Gen; Zhang, Xiaowei; Pirotton, Michel; Teller, Jacques; Dewals, Benjamin

    2017-04-01

Urban flood risk is influenced by many factors, such as hydro-meteorological drivers, existing drainage systems, and the vulnerability of population and assets. The urban fabric itself also has a complex influence on inundation flows. In this research, we performed a systematic analysis of how various characteristics of urban patterns control inundation flow within the urban area and upstream of it. An urban generator tool was used to generate over 2,250 synthetic urban networks of 1 km². This tool is based on the procedural modelling presented by Parish and Müller (2001), which was adapted to generate a broader variety of urban networks. Nine input parameters were used to control the urban geometry. Three of them define the average length, orientation and curvature of the streets. Two orthogonal major roads, whose width constitutes the fourth input parameter, act as constraints on the generated urban network. The width of secondary streets is given by the fifth input parameter. Each parcel generated by the street network, whose size is governed by a parcel mean area parameter, can be either a park or a building parcel depending on the park ratio parameter. Three setback parameters constrain the exact location of the building within a building parcel. For each synthetic urban network, detailed two-dimensional inundation maps were computed with a hydraulic model. The computational efficiency was enhanced by means of a porosity model, which enables the use of a coarser computational grid while preserving information on the detailed geometry of the urban network (Sanders et al. 2008). The porosity parameters reflect not only the void fraction, which influences the storage capacity of the urban area, but also the influence of buildings on flow conveyance (dynamic effects). A sensitivity analysis was performed on the inundation maps to highlight the respective impact of each input parameter characterizing the urban networks. The findings of the study pinpoint which properties of urban networks have a major influence on urban inundation flow, enabling better informed flood-proof urban design. References: Parish, Y. I. H., Müller, P. 2001. Procedural modeling of cities. SIGGRAPH, pp. 301-308. Sanders, B.F., Schubert, J.E., Gallegos, H.A., 2008. Integral formulation of shallow-water equations with anisotropic porosity for urban flood modeling. Journal of Hydrology 362, 19-38. Acknowledgements: The research was funded through the ARC grant for Concerted Research Actions, financed by the Wallonia-Brussels Federation.

  20. Candidate gene prioritization by network analysis of differential expression using machine learning approaches

    PubMed Central

    2010-01-01

    Background Discovering novel disease genes is still challenging for diseases for which no prior knowledge - such as known disease genes or disease-related pathways - is available. Performing genetic studies frequently results in large lists of candidate genes of which only few can be followed up for further investigation. We have recently developed a computational method for constitutional genetic disorders that identifies the most promising candidate genes by replacing prior knowledge by experimental data of differential gene expression between affected and healthy individuals. To improve the performance of our prioritization strategy, we have extended our previous work by applying different machine learning approaches that identify promising candidate genes by determining whether a gene is surrounded by highly differentially expressed genes in a functional association or protein-protein interaction network. Results We have proposed three strategies scoring disease candidate genes relying on network-based machine learning approaches, such as kernel ridge regression, heat kernel, and Arnoldi kernel approximation. For comparison purposes, a local measure based on the expression of the direct neighbors is also computed. We have benchmarked these strategies on 40 publicly available knockout experiments in mice, and performance was assessed against results obtained using a standard procedure in genetics that ranks candidate genes based solely on their differential expression levels (Simple Expression Ranking). Our results showed that our four strategies could outperform this standard procedure and that the best results were obtained using the Heat Kernel Diffusion Ranking leading to an average ranking position of 8 out of 100 genes, an AUC value of 92.3% and an error reduction of 52.8% relative to the standard procedure approach which ranked the knockout gene on average at position 17 with an AUC value of 83.7%. 
Conclusion In this study we could identify promising candidate genes using network based machine learning approaches even if no knowledge is available about the disease or phenotype. PMID:20840752
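
    The heat-kernel idea behind the best-performing strategy can be illustrated with a small sketch: diffuse the differential-expression signal over the network and score each candidate gene by the signal that accumulates at its node. The graph, expression values, and parameters below are invented for illustration; this is a toy Euler approximation of exp(-βL)s applied to a hand-made network, not the authors' implementation.

```python
def heat_diffuse(adj, signal, beta=0.5, steps=200):
    """Euler integration of ds/dt = -L s, i.e. s(beta) ~ exp(-beta*L) @ s0,
    where L is the graph Laplacian of the undirected network `adj`."""
    s = dict(signal)
    dt = beta / steps
    for _ in range(steps):
        s = {v: s[v] - dt * (len(adj[v]) * s[v] - sum(s[u] for u in adj[v]))
             for v in adj}
    return s

# Toy network: candidate g1 sits among highly differentially expressed
# genes (a, b, c); candidate g2 among weakly expressed ones (d, e).
adj = {'g1': ['a', 'b', 'c'], 'a': ['g1'], 'b': ['g1'], 'c': ['g1'],
       'g2': ['d', 'e'], 'd': ['g2'], 'e': ['g2']}
expr = {'a': 3.0, 'b': 2.5, 'c': 2.8, 'd': 0.1, 'e': 0.2, 'g1': 0.0, 'g2': 0.0}

diffused = heat_diffuse(adj, expr)
ranking = sorted(['g1', 'g2'], key=lambda g: -diffused[g])
```

    With this toy signal, g1 outranks g2 because heat flows into it from its strongly expressed neighbors, which is exactly the intuition behind scoring a candidate by the differential expression surrounding it.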

  1. The topology of metabolic isotope labeling networks.

    PubMed

    Weitzel, Michael; Wiechert, Wolfgang; Nöh, Katharina

    2007-08-29

    Metabolic Flux Analysis (MFA) based on isotope labeling experiments (ILEs) is a widely established tool for determining fluxes in metabolic pathways. Isotope labeling networks (ILNs) contain all essential information required to describe the flow of labeled material in an ILE. Whereas recent experimental progress paves the way for high-throughput MFA, large network investigations and exact statistical methods, these developments are still limited by the poor performance of computational routines used for the evaluation and design of ILEs. In this context, the global analysis of ILN topology turns out to be the key to realizing large speedup factors in all required computational procedures. With a strong focus on algorithmic speedup, the topology of ILNs is investigated using graph-theoretic concepts and algorithms. A rigorous determination of all cyclic and isomorphic subnetworks, accompanied by a global analysis of ILN connectivity, is performed. In particular, it is proven that ILNs always break up into a large number of small strongly connected components (SCCs) and, moreover, that there are natural isomorphisms between many of these SCCs. All presented techniques are universal, i.e. they do not require special assumptions about the network structure, bidirectionality of fluxes, measurement configuration, or label input. The general results are exemplified with a practically relevant metabolic network which describes the central metabolism of E. coli, comprising 10390 isotopomer pools. Exploiting the topological features of ILNs leads to a significant speedup of all universal algorithms for ILE evaluation. It is proven in theory and exemplified with the E. coli example that a speedup factor of about 1000 compared to standard algorithms is achieved. This opens the door for new high-performance algorithms suitable for high-throughput applications and large ILNs. 
Moreover, for the first time the global topological analysis of ILNs allows to comprehensively describe and understand the general patterns of label flow in complex networks. This is an invaluable tool for the structural design of new experiments and the interpretation of measured data.
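
    The decomposition into strongly connected components that this analysis relies on can be computed with any standard SCC algorithm; the sketch below uses Kosaraju's two-pass depth-first search. The node names and edges are an invented toy network, not an actual isotopomer network.

```python
def strongly_connected_components(graph):
    """Kosaraju's algorithm: forward DFS finish order, then DFS on the reversed graph."""
    order, seen = [], set()
    def dfs(v):
        seen.add(v)
        for u in graph.get(v, []):
            if u not in seen:
                dfs(u)
        order.append(v)  # post-order finish time
    for v in graph:
        if v not in seen:
            dfs(v)
    rev = {v: [] for v in graph}
    for v, outs in graph.items():
        for u in outs:
            rev[u].append(v)
    comps, assigned = [], set()
    for v in reversed(order):
        if v not in assigned:
            comp, stack = set(), [v]
            while stack:
                w = stack.pop()
                if w not in assigned:
                    assigned.add(w)
                    comp.add(w)
                    stack.extend(rev[w])
            comps.append(comp)
    return comps

# Toy directed label-flow network: a 3-cycle feeding a 2-cycle.
ilnet = {'A': ['B'], 'B': ['C'], 'C': ['A', 'D'], 'D': ['E'], 'E': ['D']}
comps = strongly_connected_components(ilnet)
```

    On this toy network the algorithm returns the two SCCs {A, B, C} and {D, E}, mirroring the paper's observation that ILNs decompose into many small strongly connected pieces.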

  2. Time course based artifact identification for independent components of resting-state FMRI.

    PubMed

    Rummel, Christian; Verma, Rajeev Kumar; Schöpf, Veronika; Abela, Eugenio; Hauf, Martinus; Berruecos, José Fernando Zapata; Wiest, Roland

    2013-01-01

    In functional magnetic resonance imaging (fMRI) coherent oscillations of the blood oxygen level-dependent (BOLD) signal can be detected. These arise when brain regions respond to external stimuli or are activated by tasks. The same networks have been characterized during wakeful rest when functional connectivity of the human brain is organized in generic resting-state networks (RSN). Alterations of RSN emerge as neurobiological markers of pathological conditions such as altered mental state. In single-subject fMRI data the coherent components can be identified by blind source separation of the pre-processed BOLD data using spatial independent component analysis (ICA) and related approaches. The resulting maps may represent physiological RSNs or may be due to various artifacts. In this methodological study, we propose a conceptually simple and fully automatic time course based filtering procedure to detect obvious artifacts in the ICA output for resting-state fMRI. The filter is trained on six and tested on 29 healthy subjects, yielding mean filter accuracy, sensitivity and specificity of 0.80, 0.82, and 0.75 in out-of-sample tests. To estimate the impact of clearly artifactual single-subject components on group resting-state studies we analyze unfiltered and filtered output with a second level ICA procedure. Although the automated filter does not reach performance values of visual analysis by human raters, we propose that resting-state compatible analysis of ICA time courses could be very useful to complement the existing map or task/event oriented artifact classification algorithms.
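
    One family of time-course features such a filter can use is smoothness: BOLD-driven components fluctuate slowly, while many artifact components are spiky. A minimal, hypothetical version of one such feature (lag-1 autocorrelation with a hand-picked threshold) might look like the following; it is not the authors' actual filter, and the signals are synthetic.

```python
import math, random

def lag1_autocorr(x):
    """Lag-1 autocorrelation: near 1 for smooth time courses, near 0 for spiky ones."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    return cov / var

def looks_like_artifact(tc, threshold=0.5):
    # hypothetical decision rule: spiky (low-autocorrelation) components are flagged
    return lag1_autocorr(tc) < threshold

random.seed(0)
smooth = [math.sin(2 * math.pi * 0.01 * t) for t in range(300)]  # slow BOLD-like oscillation
spiky = [random.gauss(0, 1) for _ in range(300)]                 # white-noise-like artifact
```

    A real filter would combine several such time-course features and calibrate the threshold on labeled training subjects, as the study does with its six training datasets.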

  3. 31 CFR Appendix N to Subpart C of... - Financial Crimes Enforcement Network

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance: Treasury 1 2011-07-01 2011-07-01 false Financial Crimes Enforcement Network...—Financial Crimes Enforcement Network 1. In general. This appendix applies to the Financial Crimes Enforcement Network (FinCEN). It sets forth specific notification and access procedures with respect to...

  4. Correlation filtering in financial time series (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Aste, T.; Di Matteo, Tiziana; Tumminello, M.; Mantegna, R. N.

    2005-05-01

    We apply a method to filter relevant information from the correlation coefficient matrix by extracting a network of relevant interactions. This method succeeds in generating networks with the same hierarchical structure as the Minimum Spanning Tree but containing a larger number of links, resulting in a richer network topology that allows loops and cliques. In Tumminello et al.,1 we have shown that this method, applied to a financial portfolio of 100 stocks in the USA equity markets, is quite effective in filtering relevant information about the clustering of the system and its hierarchical structure, both for the whole system and within each cluster. In particular, we have found that triangular loops and 4-element cliques have important and significant relations with the market structure and properties. Here we apply this filtering procedure to the analysis of correlation in two different kinds of interest rate time series (16 Eurodollars and 34 US interest rates).
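
    The Minimum Spanning Tree baseline mentioned above is easy to sketch: map each correlation ρ to the distance d = sqrt(2(1 − ρ)) and run Kruskal's algorithm on the resulting distances. The four-series correlation matrix below is fabricated for illustration; the filtered graphs in the paper additionally keep loops and cliques, which a tree cannot.

```python
import math

def correlation_mst(names, corr):
    """Kruskal's MST on distances d_ij = sqrt(2 * (1 - rho_ij))."""
    edges = sorted(
        (math.sqrt(2 * (1 - corr[i][j])), names[i], names[j])
        for i in range(len(names)) for j in range(i + 1, len(names)))
    parent = {n: n for n in names}
    def find(n):  # union-find with path compression
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n
    mst = []
    for d, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            mst.append((a, b))
    return mst

# Hypothetical correlations between three US rates and one Eurodollar series.
names = ['US2Y', 'US5Y', 'US10Y', 'EUR3M']
corr = [[1.00, 0.90, 0.80, 0.20],
        [0.90, 1.00, 0.85, 0.25],
        [0.80, 0.85, 1.00, 0.15],
        [0.20, 0.25, 0.15, 1.00]]
tree = correlation_mst(names, corr)
```

    The tree keeps the strongest links (here the US rates cluster together, with the Eurodollar series attached by its single strongest correlation), which is the hierarchical backbone the richer filtered networks extend.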

  5. Security analysis and enhanced user authentication in proxy mobile IPv6 networks.

    PubMed

    Kang, Dongwoo; Jung, Jaewook; Lee, Donghoon; Kim, Hyoungshick; Won, Dongho

    2017-01-01

    The Proxy Mobile IPv6 (PMIPv6) is a network-based mobility management protocol that allows a Mobile Node (MN) connected to the PMIPv6 domain to move from one network to another without changing the assigned IPv6 address. The user authentication procedure in this protocol is not standardized, but many smartcard-based authentication schemes have been proposed. Recently, Alizadeh et al. proposed an authentication scheme for the PMIPv6. However, it could allow an attacker to derive an encryption key that must be securely shared between the MN and the Mobile Access Gateway (MAG). As a result, an outside adversary can derive the MN's identity, password and session key. In this paper, we analyze the security of Alizadeh et al.'s scheme and propose an enhanced authentication scheme that uses a dynamic identity to satisfy anonymity. Furthermore, we use BAN logic to show that our scheme can successfully generate and communicate the inter-entity session key.

  6. [The impact on costs and care of two approaches to reduce employees' dental plan expenses in a private company].

    PubMed

    Costa Filho, Luiz Cesar da; Duncan, Bruce Bartholow; Polanczyk, Carisi Anne; Sória, Marina Lara; Habekost, Ana Paula; Costa, Carolina Covolo da

    2008-05-01

    The present study evaluated the dental care plan offered to 4,000 employees of a private hospital and their respective families. The analysis covered three stages: (1) baseline (control), when dental care was provided by an outsourced company with a network of dentists paid for services, (2) a renegotiation of costs with the original dental care provider, and (3) provision of dental care by the hospital itself, through directly hired dentists on regular salaries. Monthly economic and clinical data were collected for this research. The dental plan renegotiation reduced costs by 37% in relation to baseline, and the hospital's own dental service reduced costs by 50%. Renegotiation led to a 31% reduction in clinical procedures, without altering the dental care profile; the hospital's own dental service did not reduce the total number of clinical procedures, but modified the profile of dental care, since procedures related to the causes of diseases increased and surgical/restorative procedures decreased.

  7. Integrating P3 Data Into P2 Analyses: What is the Added Value

    Treesearch

    James R. Steinman

    2001-01-01

    The Forest Inventory and Analysis and Forest Health Monitoring Programs of the USDA Forest Service are integrating field procedures for measuring their networks of plots throughout the United States. These plots are now referred to as Phase 2 (P2) and Phase 3 (P3) plots, respectively, and 1 out of every 16 P2 plots will also be a P3 plot. Mensurational methods will be...

  8. Database Entity Persistence with Hibernate for the Network Connectivity Analysis Model

    DTIC Science & Technology

    2014-04-01

    time savings in the Java coding development process. Appendices A and B describe address setup procedures for installing the MySQL database...development environment is required: • The open source MySQL Database Management System (DBMS) from Oracle, which is a Java Database Connectivity (JDBC...compliant DBMS • MySQL JDBC Driver library that comes as a plug-in with the Netbeans distribution • The latest Java Development Kit with the latest

  9. Automated Clean Chemistry for Bulk Analysis of Environmental Swipe Samples - FY17 Year End Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ticknor, Brian W.; Metzger, Shalina C.; McBay, Eddy H.

    Sample preparation methods for mass spectrometry are being automated using commercial-off-the-shelf (COTS) equipment to shorten lengthy and costly manual chemical purification procedures. This development addresses a serious need in the International Atomic Energy Agency’s Network of Analytical Laboratories (IAEA NWAL) to increase efficiency in the Bulk Analysis of Environmental Samples for Safeguards program with a method that allows unattended, overnight operation. In collaboration with Elemental Scientific Inc., the prepFAST-MC2 was designed based on COTS equipment. It was modified for uranium/plutonium separations using renewable columns packed with Eichrom TEVA and UTEVA resins, with a chemical separation method based on the Oak Ridge National Laboratory (ORNL) NWAL chemical procedure. The newly designed prepFAST-SR has had several upgrades compared with the original prepFAST-MC2. Both systems are currently installed in the Ultra-Trace Forensics Science Center at ORNL.

  10. Proving Stabilization of Biological Systems

    NASA Astrophysics Data System (ADS)

    Cook, Byron; Fisher, Jasmin; Krepska, Elzbieta; Piterman, Nir

    We describe an efficient procedure for proving stabilization of biological systems modeled as qualitative networks or genetic regulatory networks. For scalability, our procedure uses modular proof techniques, where state-space exploration is applied only locally to small pieces of the system rather than the entire system as a whole. Our procedure exploits the observation that, in practice, the form of modular proofs can be restricted to a very limited set. For completeness, our technique falls back on a non-compositional counterexample search. Using our new procedure, we have solved a number of challenging published examples, including: a 3-D model of the mammalian epidermis; a model of metabolic networks operating in type-2 diabetes; a model of fate determination of vulval precursor cells in the C. elegans worm; and a model of pair-rule regulation during segmentation in the Drosophila embryo. Our results show many orders of magnitude speedup in cases where previous stabilization proving techniques were known to succeed, and new results in cases where tools had previously failed.
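
    The flavor of such modular proofs can be sketched with a toy range-reduction loop: each variable's set of possible long-run values is narrowed using only its inputs' current sets, and if every set collapses to a singleton the network provably stabilizes. The three-variable network and domain below are invented, and real qualitative networks and the authors' proof rules are considerably richer; this is only meant to convey the local, non-global character of the reasoning.

```python
from itertools import product

DOMAIN = {0, 1, 2}
# Each variable: (list of input variables, update function over DOMAIN).
network = {
    'v0': ([], lambda: 1),                        # constant input
    'v1': (['v0'], lambda a: a),                  # follows v0
    'v2': (['v0', 'v1'], lambda a, b: min(a, b))  # conjunction-like node
}

def prove_stabilization(network):
    """Iteratively shrink each variable's possible-value set until a fixpoint."""
    ranges = {v: set(DOMAIN) for v in network}
    changed = True
    while changed:
        changed = False
        for v, (ins, f) in network.items():
            new = {f(*vals) for vals in product(*(ranges[i] for i in ins))}
            if new != ranges[v]:
                ranges[v], changed = new, True
    return ranges

ranges = prove_stabilization(network)
stabilizes = all(len(r) == 1 for r in ranges.values())
```

    Each narrowing step only inspects a variable and its direct inputs, which is what makes this style of argument scale: the state space of the whole system is never enumerated.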

  11. Automated extraction of natural drainage density patterns for the conterminous United States through high performance computing

    USGS Publications Warehouse

    Stanislawski, Larry V.; Falgout, Jeff T.; Buttenfield, Barbara P.

    2015-01-01

    Hydrographic networks form an important data foundation for cartographic base mapping and for hydrologic analysis. Drainage density patterns for these networks can be derived to characterize local landscape, bedrock and climate conditions, and further inform hydrologic and geomorphological analysis by indicating areas where too few headwater channels have been extracted. But natural drainage density patterns are not consistently available in existing hydrographic data for the United States because compilation and capture criteria historically varied, along with climate, during the period of data collection over the various terrain types throughout the country. This paper demonstrates an automated workflow that is being tested in a high-performance computing environment by the U.S. Geological Survey (USGS) to map natural drainage density patterns at the 1:24,000-scale (24K) for the conterminous United States. Hydrographic network drainage patterns may be extracted from elevation data to guide corrections for existing hydrographic network data. The paper describes three stages in this workflow including data pre-processing, natural channel extraction, and generation of drainage density patterns from extracted channels. The workflow is concurrently implemented by executing procedures on multiple subbasin watersheds within the U.S. National Hydrography Dataset (NHD). Pre-processing defines parameters that are needed for the extraction process. Extraction proceeds in standard fashion: filling sinks, developing flow direction and weighted flow accumulation rasters. Drainage channels with assigned Strahler stream order are extracted within a subbasin and simplified. Drainage density patterns are then estimated with 100-meter resolution and subsequently smoothed with a low-pass filter. The extraction process is found to be of better quality in higher slope terrains. 
Concurrent processing through the high performance computing environment is shown to facilitate and refine the choice of drainage density extraction parameters and more readily improve extraction procedures than conventional processing.
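
    The extraction core of the workflow (flow direction plus weighted flow accumulation) can be sketched on a tiny DEM. This is a bare-bones D8 pass on an invented 3×3 grid; sink filling, Strahler ordering, and the smoothing steps described above are omitted, and the channel threshold is hypothetical.

```python
# Toy 3x3 DEM sloping toward the lower-right corner.
dem = [[5, 4, 3],
       [4, 3, 2],
       [3, 2, 1]]
rows, cols = len(dem), len(dem[0])
cells = [(r, c) for r in range(rows) for c in range(cols)]

def downslope(r, c):
    """D8 flow direction: lowest strictly lower of the 8 neighbors, else None (outlet)."""
    best, best_z = None, dem[r][c]
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols and dem[rr][cc] < best_z:
                best, best_z = (rr, cc), dem[rr][cc]
    return best

# Weighted flow accumulation: visit cells from high to low elevation,
# passing each cell's contributing area to its downslope neighbor.
acc = {cell: 1 for cell in cells}  # each cell contributes one unit of area
for r, c in sorted(cells, key=lambda rc: -dem[rc[0]][rc[1]]):
    target = downslope(r, c)
    if target:
        acc[target] += acc[(r, c)]

channels = [cell for cell in cells if acc[cell] >= 3]  # hypothetical channel threshold
```

    Drainage density would then follow by dividing total channel length by drainage area per neighborhood, with the low-pass smoothing applied afterwards as the workflow describes.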

  12. Detection of gene communities in multi-networks reveals cancer drivers

    NASA Astrophysics Data System (ADS)

    Cantini, Laura; Medico, Enzo; Fortunato, Santo; Caselle, Michele

    2015-12-01

    We propose a new multi-network-based strategy to integrate different layers of genomic information and use them in a coordinated way to identify driving cancer genes. The multi-networks that we consider combine transcription factor co-targeting, microRNA co-targeting, protein-protein interaction and gene co-expression networks. The rationale behind this choice is that gene co-expression and protein-protein interactions require a tight coregulation of the partners, and that such a fine-tuned regulation can be obtained only by combining both the transcriptional and post-transcriptional layers of regulation. To extract the relevant biological information from the multi-network we studied its partition into communities. To this end we applied a consensus clustering algorithm based on state-of-the-art community detection methods. Although our procedure is valid in principle for any pathology, in this work we concentrate on gastric, lung, pancreas and colorectal cancer, and from the enrichment analysis of the multi-network communities we identified a set of candidate driver cancer genes. Some of them are already known oncogenes, while a few are new. The combination of the different layers of information allowed us to extract from the multi-network indications on the regulatory pattern and functional role of both the already known and the new candidate driver genes.
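
    The consensus step can be sketched independently of any particular community-detection method: build a co-assignment matrix over several base partitions and keep together the node pairs that cluster together in a majority of them. The three hand-made partitions below stand in for runs of different algorithms or network layers.

```python
from itertools import combinations
from collections import Counter

def consensus_communities(partitions, threshold=0.5):
    """Group nodes whose co-assignment frequency across partitions exceeds `threshold`."""
    co = Counter()
    for part in partitions:
        for community in part:
            for a, b in combinations(sorted(community), 2):
                co[(a, b)] += 1
    nodes = {n for part in partitions for community in part for n in community}
    # Keep majority pairs, then take connected components of that graph.
    adj = {n: set() for n in nodes}
    for (a, b), k in co.items():
        if k / len(partitions) > threshold:
            adj[a].add(b)
            adj[b].add(a)
    seen, comms = set(), []
    for n in sorted(nodes):
        if n not in seen:
            stack, comp = [n], set()
            while stack:
                v = stack.pop()
                if v not in comp:
                    comp.add(v)
                    stack.extend(adj[v] - comp)
            seen |= comp
            comms.append(comp)
    return comms

partitions = [
    [{'a', 'b', 'c'}, {'d', 'e'}],
    [{'a', 'b', 'c'}, {'d', 'e'}],
    [{'a', 'b'}, {'c', 'd', 'e'}],  # one noisy base partition
]
comms = consensus_communities(partitions)
```

    The one noisy partition is outvoted, so the consensus recovers the stable two-community structure; the same majority logic applies when the base partitions come from runs over different network layers.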

  13. Automated detection of videotaped neonatal seizures of epileptic origin.

    PubMed

    Karayiannis, Nicolaos B; Xiong, Yaohua; Tao, Guozhi; Frost, James D; Wise, Merrill S; Hrachovy, Richard A; Mizrahi, Eli M

    2006-06-01

    This study aimed at the development of a seizure-detection system by training neural networks with quantitative motion information extracted from short video segments of neonatal seizures of the myoclonic and focal clonic types and random infant movements. The motion of the infants' body parts was quantified by temporal motion-strength signals extracted from video segments by motion-segmentation methods based on optical flow computation. The area of each frame occupied by the infants' moving body parts was segmented by clustering the motion parameters obtained by fitting an affine model to the pixel velocities. The motion of the infants' body parts also was quantified by temporal motion-trajectory signals extracted from video recordings by robust motion trackers based on block-motion models. These motion trackers were developed to adjust autonomously to illumination and contrast changes that may occur during the video-frame sequence. Video segments were represented by quantitative features obtained by analyzing motion-strength and motion-trajectory signals in both the time and frequency domains. Seizure recognition was performed by conventional feed-forward neural networks, quantum neural networks, and cosine radial basis function neural networks, which were trained to detect neonatal seizures of the myoclonic and focal clonic types and to distinguish them from random infant movements. The computational tools and procedures developed for automated seizure detection were evaluated on a set of 240 video segments of 54 patients exhibiting myoclonic seizures (80 segments), focal clonic seizures (80 segments), and random infant movements (80 segments). Regardless of the decision scheme used for interpreting the responses of the trained neural networks, all the neural network models exhibited sensitivity and specificity>90%. 
For one of the decision schemes proposed for interpreting the responses of the trained neural networks, the majority of the trained neural-network models exhibited sensitivity>90% and specificity>95%. In particular, cosine radial basis function neural networks achieved the performance targets of this phase of the project (i.e., sensitivity>95% and specificity>95%). The best among the motion segmentation and tracking methods developed in this study produced quantitative features that constitute a reliable basis for detecting neonatal seizures. The performance targets of this phase of the project were achieved by combining the quantitative features obtained by analyzing motion-strength signals with those produced by analyzing motion-trajectory signals. The computational procedures and tools developed in this study to perform off-line analysis of short video segments will be used in the next phase of this project, which involves the integration of these procedures and tools into a system that can process and analyze long video recordings of infants monitored for seizures in real time.
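
    The distinction the networks learn, rhythmic clonic movement versus irregular random movement, can be illustrated with one hand-crafted frequency-domain feature: the peak of the normalized autocorrelation at lags corresponding to 1-3 Hz. The signals, sampling rate, and threshold below are invented and far simpler than the motion-strength and motion-trajectory signals described in the study.

```python
import math, random

FS = 30  # assumed video frame rate (frames per second)

def rhythmicity(x, lo_hz=1.0, hi_hz=3.0):
    """Peak normalized autocorrelation at lags corresponding to lo_hz..hi_hz."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    best = 0.0
    for lag in range(int(FS / hi_hz), int(FS / lo_hz) + 1):
        r = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag)) / var
        best = max(best, r)
    return best

random.seed(1)
clonic = [math.sin(2 * math.pi * 2.0 * t / FS) for t in range(300)]  # 2 Hz rhythmic motion
random_move = [random.gauss(0, 1) for _ in range(300)]               # irregular motion

def is_seizure_like(x):
    # hypothetical decision rule standing in for the trained neural networks
    return rhythmicity(x) > 0.5
```

    A single threshold obviously cannot match the reported >90% sensitivity and specificity; the point is only that rhythmic clonic motion produces a strong periodic signature that time- and frequency-domain features can capture.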

  14. 42 CFR 422.202 - Participation procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... procedures. (a) Notice and appeal rights. An MA organization that operates a coordinated care plan or network... of groups of physicians, through reasonable procedures that include the following: (1) Written notice... 42 Public Health 3 2011-10-01 2011-10-01 false Participation procedures. 422.202 Section 422.202...

  15. Fast Fragmentation of Networks Using Module-Based Attacks

    PubMed Central

    Requião da Cunha, Bruno; González-Avella, Juan Carlos; Gonçalves, Sebastián

    2015-01-01

    In the multidisciplinary field of Network Science, the optimization of procedures for efficiently breaking complex networks is attracting much attention from a practical point of view. In this contribution, we present a module-based method to efficiently fragment complex networks. The procedure first identifies the topological communities through which the network can be represented, using a well-established heuristic community-detection algorithm. Then only the nodes that participate in inter-community links are removed, in descending order of their betweenness centrality. We illustrate the method by applying it to a variety of examples in the social, infrastructure, and biological fields. It is shown that the module-based approach always outperforms targeted attacks on vertices based on node degree or betweenness centrality rankings, with gains in efficiency strongly related to the modularity of the network. Remarkably, in the US power grid case, by deleting 3% of the nodes, the proposed method breaks the original network into fragments which are twenty times smaller in size than the fragments left by a betweenness-based attack. PMID:26569610
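
    A stripped-down version of the module-based attack is easy to write once communities are known: delete only nodes incident to inter-community links, highest-centrality first, and track the largest connected component. For brevity this sketch ranks those nodes by degree rather than betweenness centrality, and the two-clique graph and module labels are invented.

```python
def largest_component(adj, removed):
    """Size of the largest connected component after deleting `removed` nodes."""
    best, seen = 0, set(removed)
    for start in adj:
        if start not in seen:
            stack, size = [start], 0
            while stack:
                v = stack.pop()
                if v not in seen:
                    seen.add(v)
                    size += 1
                    stack.extend(u for u in adj[v] if u not in seen)
            best = max(best, size)
    return best

def module_based_attack(adj, module):
    """Remove only inter-community nodes, in descending degree order."""
    inter = [v for v in adj if any(module[u] != module[v] for u in adj[v])]
    inter.sort(key=lambda v: -len(adj[v]))
    removed, history = set(), []
    for v in inter:
        removed.add(v)
        history.append((v, largest_component(adj, removed)))
    return history

# Two 4-cliques bridged by the single edge a1-b1.
cliqueA, cliqueB = ['a1', 'a2', 'a3', 'a4'], ['b1', 'b2', 'b3', 'b4']
adj = {v: [u for u in cliqueA if u != v] for v in cliqueA}
adj.update({v: [u for u in cliqueB if u != v] for v in cliqueB})
adj['a1'] = adj['a1'] + ['b1']
adj['b1'] = adj['b1'] + ['a1']
module = {v: 'A' for v in cliqueA} | {v: 'B' for v in cliqueB}

history = module_based_attack(adj, module)
```

    Removing the first bridge node already halves the largest component, which is the mechanism behind the large gains the paper reports on highly modular networks such as the US power grid.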

  16. Network placement optimization for large-scale distributed system

    NASA Astrophysics Data System (ADS)

    Ren, Yu; Liu, Fangfang; Fu, Yunxia; Zhou, Zheng

    2018-01-01

    The network geometry strongly influences the performance of a distributed system, i.e., its coverage capability, measurement accuracy and overall cost. The optimization of network placement is therefore a pressing issue in distributed measurement, particularly in large-scale metrology. This paper presents an effective computer-assisted network placement optimization procedure for large-scale distributed systems and illustrates it with the example of a multi-tracker system. To obtain an optimal placement, the coverage capability and the coordinate uncertainty of the network are quantified. A placement optimization objective function is then developed in terms of coverage capability, measurement accuracy and overall cost, and a novel grid-based encoding approach for the genetic algorithm is proposed, so that the network placement is optimized by a rough global search followed by a detailed local search. An obvious advantage is that no specific initial placement is needed. Finally, a specific application illustrates that this placement optimization procedure can simulate the measurement results of a given network and design the optimal placement efficiently.
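
    The grid-based encoding makes each genome simply a tuple of grid-cell indices. The sketch below is a deliberately tiny elitist genetic algorithm for placing two trackers on a 5×5 grid so that fabricated target points fall within a hypothetical sensing range; the fitness weights, rates, and geometry are all invented and the cost term of the objective is omitted.

```python
import random

GRID, N_TRACKERS, RANGE2 = 5, 2, 2.0 ** 2
targets = [(0.5, 0.5), (0.5, 4.5), (4.5, 0.5), (4.5, 4.5)]  # fabricated targets

def cell_center(idx):
    """Grid-based encoding: a gene is a cell index, decoded to the cell center."""
    return (idx // GRID + 0.5, idx % GRID + 0.5)

def fitness(genome):
    """Fraction of targets within range of any tracker."""
    covered = 0
    for tx, ty in targets:
        for idx in genome:
            cx, cy = cell_center(idx)
            if (cx - tx) ** 2 + (cy - ty) ** 2 <= RANGE2:
                covered += 1
                break
    return covered / len(targets)

random.seed(7)
pop = [tuple(random.randrange(GRID * GRID) for _ in range(N_TRACKERS)) for _ in range(20)]
initial_best = max(fitness(g) for g in pop)
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:4]  # elitism: the best genomes survive unchanged
    children = []
    while len(children) < len(pop) - len(elite):
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, N_TRACKERS + 1)       # one-point crossover
        child = list(a[:cut] + b[cut:])
        if random.random() < 0.3:                        # mutation: relocate one tracker
            child[random.randrange(N_TRACKERS)] = random.randrange(GRID * GRID)
        children.append(tuple(child))
    pop = elite + children
final_best = max(fitness(g) for g in pop)
```

    Elitism makes the best fitness non-decreasing across generations, which corresponds to the rough global search; the paper's detailed local search would then refine the winning placement within its grid cells.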

  17. Network information security in a phase III Integrated Academic Information Management System (IAIMS).

    PubMed

    Shea, S; Sengupta, S; Crosswell, A; Clayton, P D

    1992-01-01

    The developing Integrated Academic Information Management System (IAIMS) at Columbia-Presbyterian Medical Center provides data sharing links between two separate corporate entities, namely Columbia University Medical School and The Presbyterian Hospital, using a network-based architecture. Multiple database servers with heterogeneous user authentication protocols are linked to this network. "One-stop information shopping" implies one log-on procedure per session, not separate log-on and log-off procedures for each server or application used during a session. These circumstances pose challenges, at both the policy and technical levels, for data security at the network level and for ensuring smooth information access for end users of these network-based services. Five activities being conducted as part of our security project are described: (1) policy development; (2) an authentication server for the network; (3) Kerberos as a tool for providing mutual authentication, encryption, and time stamping of authentication messages; (4) a prototype interface using Kerberos services to authenticate users accessing a network database server; and (5) a Kerberized electronic signature.
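
    Two of the Kerberos ingredients mentioned, shared-secret authentication and time stamping, can be illustrated with a toy ticket built from Python's standard hmac module. This is an illustration of the idea only, not the Kerberos protocol or the IAIMS implementation; the key, lifetime, and message layout are invented.

```python
import hmac, hashlib

KEY = b'shared-secret-between-auth-server-and-service'  # hypothetical shared key
LIFETIME = 300  # seconds a ticket stays fresh (hypothetical)

def issue_ticket(user, timestamp):
    """Authentication server: bind user and time stamp with a keyed MAC."""
    msg = f'{user}|{timestamp}'.encode()
    return msg, hmac.new(KEY, msg, hashlib.sha256).hexdigest()

def verify_ticket(msg, tag, now):
    """Service: check authenticity (MAC) and freshness (time stamp) of the ticket."""
    _user, ts = msg.decode().split('|')
    fresh = now - int(ts) <= LIFETIME
    authentic = hmac.compare_digest(tag, hmac.new(KEY, msg, hashlib.sha256).hexdigest())
    return authentic and fresh

msg, tag = issue_ticket('clinician01', timestamp=1000)
ok = verify_ticket(msg, tag, now=1100)           # within lifetime
stale = verify_ticket(msg, tag, now=10000)       # replayed much later
forged = verify_ticket(msg, '0' * 64, now=1100)  # wrong authenticator
```

    The time stamp is what defeats replay of captured log-on messages, which is why it appears alongside mutual authentication and encryption in the project's list of Kerberos services.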

  18. Engineering Online and In-Person Social Networks for Physical Activity: A Randomized Trial.

    PubMed

    Rovniak, Liza S; Kong, Lan; Hovell, Melbourne F; Ding, Ding; Sallis, James F; Ray, Chester A; Kraschnewski, Jennifer L; Matthews, Stephen A; Kiser, Elizabeth; Chinchilli, Vernon M; George, Daniel R; Sciamanna, Christopher N

    2016-12-01

    Social networks can influence physical activity, but little is known about how best to engineer online and in-person social networks to increase activity. The purpose of this study was to conduct a randomized trial based on the Social Networks for Activity Promotion model to assess the incremental contributions of different procedures for building social networks on objectively measured outcomes. Physically inactive adults (n = 308, age, 50.3 (SD = 8.3) years, 38.3 % male, 83.4 % overweight/obese) were randomized to one of three groups. The Promotion group evaluated the effects of weekly emailed tips emphasizing social network interactions for walking (e.g., encouragement, informational support); the Activity group evaluated the incremental effect of adding an evidence-based online fitness walking intervention to the weekly tips; and the Social Networks group evaluated the additional incremental effect of providing access to an online networking site for walking as well as prompting walking/activity across diverse settings. The primary outcome was mean change in accelerometer-measured moderate-to-vigorous physical activity (MVPA), assessed at 3 and 9 months from baseline. Participants increased their MVPA by 21.0 min/week, 95 % CI [5.9, 36.1], p = .005, at 3 months, and this change was sustained at 9 months, with no between-group differences. Although the structure of procedures for targeting social networks varied across intervention groups, the functional effect of these procedures on physical activity was similar. Future research should evaluate if more powerful reinforcers improve the effects of social network interventions. The trial was registered with the ClinicalTrials.gov (NCT01142804).

  19. A gene network bioinformatics analysis for pemphigoid autoimmune blistering diseases.

    PubMed

    Barone, Antonio; Toti, Paolo; Giuca, Maria Rita; Derchi, Giacomo; Covani, Ugo

    2015-07-01

    In this theoretical study, a text-mining search and clustering analysis of data related to genes potentially involved in human pemphigoid autoimmune blistering diseases (PAIBD) was performed using web tools to create a gene/protein interaction network. The Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) database was employed to identify a final set of PAIBD-involved genes and to calculate the overall significant interactions among genes: for each gene, the weighted number of links (WNL) was registered, and a clustering procedure was performed using the WNL analysis. Genes were ranked into classes (leader, B, C, D and so on, down to orphans). An ontological analysis was performed for the set of 'leader' genes. Using the above-mentioned data network, 115 genes represented the final set; leader genes numbered 7 (intercellular adhesion molecule 1 (ICAM-1), interferon gamma (IFNG), interleukin (IL)-2, IL-4, IL-6, IL-8 and tumour necrosis factor (TNF)), class B genes numbered 13, whereas the orphans numbered 24. The ontological analysis showed that the molecular action was focused on the extracellular space and cell surface, whereas the activation and regulation of the immune system were widely involved. Despite the limited knowledge of the pathologic phenomenon, attested by the presence of 24 genes revealing no direct or indirect protein-protein interactions, the network showed significant pathways gathered in several subgroups: cellular components, molecular functions, biological processes and the pathologic phenomenon obtained from the Kyoto Encyclopaedia of Genes and Genomes (KEGG) database. The molecular basis for PAIBD was summarised and expanded, which will perhaps give researchers promising directions for the identification of new therapeutic targets.
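
    The weighted-number-of-links (WNL) ranking described here reduces to summing interaction weights per gene and binning the result. The toy STRING-style edge list and the class cut-offs below are invented for illustration; the study's actual class boundaries come from its clustering procedure.

```python
from collections import defaultdict

# (gene, gene, combined interaction score) - fabricated STRING-like edges.
edges = [('TNF', 'IL6', 0.95), ('TNF', 'IL2', 0.90), ('TNF', 'ICAM1', 0.85),
         ('IL6', 'IL2', 0.80), ('IL6', 'ICAM1', 0.75), ('IL4', 'IL2', 0.60)]
genes = ['TNF', 'IL6', 'IL2', 'ICAM1', 'IL4', 'ORPHAN1']

wnl = defaultdict(float)
for a, b, w in edges:
    wnl[a] += w  # weighted number of links: sum of interaction scores
    wnl[b] += w

def gene_class(g):
    """Hypothetical binning: leaders have the most weighted links, orphans none."""
    if wnl[g] == 0:
        return 'orphan'
    return 'leader' if wnl[g] >= 2.0 else 'B'

ranking = sorted(genes, key=lambda g: -wnl[g])
classes = {g: gene_class(g) for g in genes}
```

    Genes with no recorded interactions fall out as orphans, mirroring the 24 interaction-less genes the study reports, while heavily connected cytokines surface as leaders.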

  20. A new energy integration method for the retrofit of industrial processes and the transformation of pulp and paper mills [Nouvelle methode d'integration energetique pour la retro-installation des procedes industriels et la transformation des usines papetieres]

    NASA Astrophysics Data System (ADS)

    Bonhivers, Jean-Christophe

    The increase in production of goods over the last decades has led to the need for improving the management of natural resources and the efficiency of processes. As a consequence, heat integration methods for industry have been developed. These have been successful for the design of new plants: the integration principles are largely employed, and energy intensity has dramatically decreased in many processes. Although progress has also been achieved in integration methods for retrofit, these methods still need further conceptual development. Furthermore, methodological difficulties increase when trying to retrofit heat exchange networks that are closely interrelated with water networks, as is the case in pulp and paper mills. The pulp and paper industry seeks to increase its profitability by reducing production costs and optimizing supply chains. Recent process developments in forestry biorefining give this industry the opportunity for diversification into bio-products, increasing potential profit margins, and at the same time modernizing its energy systems. Identification of energy strategies for a mill in a changing environment, including the possibility of adding a biorefinery process on the industrial site, requires better integration methods for retrofit situations. The objective of this thesis is to develop an energy integration method for the retrofit of industrial systems and the transformation of pulp and paper mills, and to demonstrate the method in case studies. Energy is conserved and degraded in a process. Heat can be converted into electricity, stored as chemical energy, or rejected to the environment. A systematic analysis of the successive degradations of energy from the hot utilities to the environment, through process operations and existing heat exchangers, is essential in order to reduce the heat consumption. In this thesis, the "Bridge Method" for energy integration by heat exchanger network retrofit has been developed. 
This method is the first to consider the analysis of these degradations. The fundamental mechanism for reducing the heat consumption in an existing network has been made explicit; it is the basis of the developed method. The Bridge Method includes the definition of "a bridge", which is a set of modifications leading to heat reduction in a heat exchanger network. It is proven that, for a given set of streams, only bridges can lead to heat savings. The Bridge Method also includes (1) a global procedure for heat exchanger network retrofit, (2) a procedure to systematically enumerate the bridges, (3) a "network table" to easily evaluate them, and (4) an "energy transfer diagram" showing the effect of the first two principles of thermodynamics (energy conservation and degradation) in industrial processes in order to identify energy-saving opportunities. The Bridge Method can be used for the analysis of networks including several types of heat transfer, and for site-wide analysis. It has been applied in case studies for retrofitting networks composed of indirect-contact heat exchangers, including the network of a kraft pulp mill, and also networks of direct-contact heat exchangers, including the hot water production system of a pulp mill. The method has finally been applied to the evaluation of a biorefinery process, alone or hosted in a kraft pulp mill. Results show that the use of the method significantly reduces the search space and leads to the identification of the relevant solutions. The necessity of a bridge to reduce the inputs and outputs of a process is a consequence of the first two principles of thermodynamics: energy conservation and entropy increase. The concept of a bridge alone can also be used as a tool for process analysis, and in numerical optimization-based approaches to energy integration.
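
    At a very coarse level, the bridge enumeration can be pictured as a path search: a candidate bridge links a cooler to a heater through exchangers that share process streams. The toy network and the share-a-stream adjacency rule below are heavy simplifications of that idea, not the network table or the enumeration procedure defined in the thesis.

```python
# Toy heat-exchanger network: each unit touches one or two process streams.
units = {
    'C1':  {'H1'},         # cooler rejecting heat from hot stream H1
    'E1':  {'H1', 'CA'},   # exchanger: hot stream H1 heats cold stream CA
    'E2':  {'H2', 'CA'},   # exchanger: hot stream H2 heats cold stream CA
    'HU1': {'CA'},         # heater supplying heat to cold stream CA
}
coolers, heaters = {'C1'}, {'HU1'}

def enumerate_bridges(units, coolers, heaters):
    """List cooler-to-heater paths through units that share a process stream."""
    adj = {u: sorted(v for v in units if v != u and units[u] & units[v])
           for u in units}
    found = []
    def dfs(path):
        last = path[-1]
        if last in heaters:
            found.append(list(path))
            return
        for nxt in adj[last]:
            if nxt not in path and nxt not in coolers:
                dfs(path + [nxt])
    for c in sorted(coolers):
        dfs([c])
    return found

bridges = enumerate_bridges(units, coolers, heaters)
```

    Each path is a candidate set of modifications shifting heat away from a cooler and toward a heater; in the thesis, the network table is what makes evaluating such candidates cheap.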

  1. Recurrent-neural-network-based Boolean factor analysis and its application to word clustering.

    PubMed

    Frolov, Alexander A; Husek, Dusan; Polyakov, Pavel Yu

    2009-07-01

    The objective of this paper is to introduce a neural-network-based algorithm for word clustering as an extension of the neural-network-based Boolean factor analysis algorithm (Frolov, 2007). It is shown that this extended algorithm supports an even more complex model of signals that are supposed to be related to textual documents. It is hypothesized that every topic in textual data is characterized by a set of words which coherently appear in documents dedicated to a given topic. The appearance of each word in a document is coded by the activity of a particular neuron. In accordance with the Hebbian learning rule implemented in the network, sets of coherently appearing words (treated as factors) create tightly connected groups of neurons, hence revealing them as attractors of the network dynamics. The found factors are eliminated from the network memory by the Hebbian unlearning rule, facilitating the search for other factors. Topics related to the found sets of words can be identified based on the words' semantics. To make the method complete, a special technique based on a Bayesian procedure has been developed for the following purposes: first, to provide a complete description of factors in terms of component probability, and second, to enhance the accuracy of classification of signals, that is, to determine whether a signal contains the factor. Since it is assumed that every word may possibly contribute to several topics, the proposed method might be related to the method of fuzzy clustering. In this paper, we show that the results of Boolean factor analysis and fuzzy clustering are not contradictory, but complementary. To demonstrate the capabilities of this approach, the method is applied to two types of textual data on neural networks in two different languages. The obtained topics and corresponding words are at a good level of agreement despite the fact that identical topics in Russian and English conferences contain different sets of keywords.
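    The learn/recall/unlearn loop described above can be sketched in a Hopfield-style toy model: Hebbian co-occurrence weights, attractor recall by thresholded iteration, and a crude "unlearning" that simply zeroes the recovered factor's internal connections. The data, the 0.5-of-peak threshold, and the zeroing rule are illustrative assumptions; the paper's Bayesian machinery is omitted.

```python
def hebbian_weights(docs, n_words):
    """Symmetric Hebbian weights: W[i][j] counts documents where words i, j co-occur."""
    W = [[0] * n_words for _ in range(n_words)]
    for doc in docs:
        for i in doc:
            for j in doc:
                if i != j:
                    W[i][j] += 1
    return W

def recall(W, seed, max_steps=50):
    """Iterate thresholded dynamics from a seed word set until a fixed point
    (the attractor, interpreted as a factor / topic word set)."""
    state = set(seed)
    for _ in range(max_steps):
        inputs = [sum(W[w][v] for v in state) for w in range(len(W))]
        peak = max(inputs)
        if peak == 0:
            return frozenset()          # no attractor left for this seed
        new = {w for w, x in enumerate(inputs) if x >= 0.5 * peak}
        if new == state:
            return frozenset(state)
        state = new
    return frozenset(state)

def unlearn(W, factor):
    """Crude stand-in for Hebbian unlearning: delete the factor's internal links."""
    for i in factor:
        for j in factor:
            if i != j:
                W[i][j] = 0
```

    With six documents built from two disjoint three-word topics, seeding recall with any single word retrieves its whole topic; after unlearning, that attractor is gone and the second topic can be found.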

  2. Exploiting social influence to magnify population-level behaviour change in maternal and child health: study protocol for a randomised controlled trial of network targeting algorithms in rural Honduras

    PubMed Central

    Shakya, Holly B; Stafford, Derek; Hughes, D Alex; Keegan, Thomas; Negron, Rennie; Broome, Jai; McKnight, Mark; Nicoll, Liza; Nelson, Jennifer; Iriarte, Emma; Ordonez, Maria; Airoldi, Edo; Fowler, James H; Christakis, Nicholas A

    2017-01-01

    Introduction Despite global progress on many measures of child health, rates of neonatal mortality remain high in the developing world. Evidence suggests that substantial improvements can be achieved with simple, low-cost interventions within family and community settings, particularly those designed to change knowledge and behaviour at the community level. Using social network analysis to identify structurally influential community members and then targeting them for intervention shows promise for the implementation of sustainable community-wide behaviour change. Methods and analysis We will use a detailed understanding of social network structure and function to identify novel ways of targeting influential individuals to foster cascades of behavioural change at a population level. Our work will involve experimental and observational analyses. We will map face-to-face social networks of 30 000 people in 176 villages in Western Honduras, and then conduct a randomised controlled trial of a friendship-based network-targeting algorithm with a set of well-established care interventions. We will also test whether the proportion of the population targeted affects the degree to which the intervention spreads throughout the network. We will test scalable methods of network targeting that would not, in the future, require the actual mapping of social networks but would still offer the prospect of rapidly identifying influential targets for public health interventions. Ethics and dissemination The Yale IRB and the Honduran Ministry of Health approved all data collection procedures (Protocol number 1506016012) and all participants will provide informed consent before enrolment. We will publish our findings in peer-reviewed journals as well as engage non-governmental organisations and other actors through venues for exchanging practical methods for behavioural health interventions, such as global health conferences. 
We will also develop a ‘toolkit’ for practitioners to use in network-based intervention efforts, including public release of our network mapping software. Trial registration number NCT02694679; Pre-results. PMID:28289044
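    Friendship-based network targeting exploits the friendship paradox: a random friend of a random person tends to be better connected than the person who named them, so influential nodes can be reached without mapping the whole network. A minimal sketch on an invented graph (the selection rule and data are illustrative, not the trial's algorithm):

```python
import random

def nominate_friends(adj, k, rng):
    """Pick k random individuals, then nominate one random friend of each.
    By the friendship paradox, nominees are better connected on average."""
    seeds = rng.sample(sorted(adj), k)
    return [rng.choice(sorted(adj[s])) for s in seeds]
```

    On a hub-and-spoke village graph, nominated friends have a markedly higher mean degree than the population, which is the property the targeting strategy relies on.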

  3. Canada's neglected tropical disease research network: who's in the core-who's on the periphery?

    PubMed

    Phillips, Kaye; Kohler, Jillian Clare; Pennefather, Peter; Thorsteinsdottir, Halla; Wong, Joseph

    2013-01-01

    This study designed and applied accessible yet systematic methods to generate baseline information about the patterns and structure of Canada's neglected tropical disease (NTD) research network; a network that, until recently, was formed and functioned on the periphery of strategic Canadian research funding. Multiple methods were used to conduct this study, including: (1) a systematic bibliometric procedure to capture archival NTD publications and co-authorship data; (2) a country-level "core-periphery" network analysis to measure and map the structure of Canada's NTD co-authorship network including its size, density, cliques, and centralization; and (3) a statistical analysis to test the correlation between the position of countries in Canada's NTD network ("k-core measure") and the quantity and quality of research produced. Over the past sixty years (1950-2010), Canadian researchers have contributed to 1,079 NTD publications, specializing in Leishmania, African sleeping sickness, and leprosy. Of this work, 70% of all first authors and co-authors (n = 4,145) have been Canadian. Since the 1990s, however, a network of international co-authorship activity has been emerging, with representation of researchers from 62 different countries; largely researchers from OECD countries (e.g. United States and United Kingdom) and some non-OECD countries (e.g. Brazil and Iran). Canada has a core-periphery NTD international research structure, with a densely connected group of OECD countries and some African nations, such as Uganda and Kenya. Sitting predominantly on the periphery of this research network is a cluster of 16 non-OECD nations that fall within the lowest GDP percentile of the network.
The publication specialties, composition, and position of NTD researchers within Canada's NTD country network provide evidence that, while Canadian researchers currently remain the overall gatekeepers of the NTD research they generate, there is opportunity to leverage existing research collaborations and help advance regions and NTD areas that are currently under-developed.
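    The "k-core measure" used here to position countries can be computed by iterative peeling: repeatedly remove nodes with fewer than k neighbours until every remaining node has at least k neighbours inside the remaining set. A self-contained sketch on a toy graph (not the study's data):

```python
def k_core(adj, k):
    """Return the maximal node set in which every node has at least k
    neighbours within the set. adj maps node -> set of neighbours."""
    nodes = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(nodes):
            if len(adj[v] & nodes) < k:   # too few surviving neighbours
                nodes.discard(v)
                changed = True
    return nodes
```

    In a triangle with one pendant node, the 2-core is the triangle: the pendant is peeled off, and countries in higher cores are correspondingly more densely embedded in the co-authorship network.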

  4. Canada's Neglected Tropical Disease Research Network: Who's in the Core—Who's on the Periphery?

    PubMed Central

    Phillips, Kaye; Kohler, Jillian Clare; Pennefather, Peter; Thorsteinsdottir, Halla; Wong, Joseph

    2013-01-01

    Background This study designed and applied accessible yet systematic methods to generate baseline information about the patterns and structure of Canada's neglected tropical disease (NTD) research network; a network that, until recently, was formed and functioned on the periphery of strategic Canadian research funding. Methodology Multiple methods were used to conduct this study, including: (1) a systematic bibliometric procedure to capture archival NTD publications and co-authorship data; (2) a country-level “core-periphery” network analysis to measure and map the structure of Canada's NTD co-authorship network including its size, density, cliques, and centralization; and (3) a statistical analysis to test the correlation between the position of countries in Canada's NTD network (“k-core measure”) and the quantity and quality of research produced. Principal Findings Over the past sixty years (1950–2010), Canadian researchers have contributed to 1,079 NTD publications, specializing in Leishmania, African sleeping sickness, and leprosy. Of this work, 70% of all first authors and co-authors (n = 4,145) have been Canadian. Since the 1990s, however, a network of international co-authorship activity has been emerging, with representation of researchers from 62 different countries; largely researchers from OECD countries (e.g. United States and United Kingdom) and some non-OECD countries (e.g. Brazil and Iran). Canada has a core-periphery NTD international research structure, with a densely connected group of OECD countries and some African nations, such as Uganda and Kenya. Sitting predominantly on the periphery of this research network is a cluster of 16 non-OECD nations that fall within the lowest GDP percentile of the network. 
Conclusion/Significance The publication specialties, composition, and position of NTD researchers within Canada's NTD country network provide evidence that, while Canadian researchers currently remain the overall gatekeepers of the NTD research they generate, there is opportunity to leverage existing research collaborations and help advance regions and NTD areas that are currently under-developed. PMID:24340113

  5. Process Synchronization and Data Communication between Processes in Real Time Local Area Networks.

    DTIC Science & Technology

    1985-12-01

    52 APPENDIX A: PROCEDURE MAKETABLE .............. 54 APPENDIX B: PROCEDURE MAKEMESSAGE ............. 56 APPENDIX C: PROCEDURE...item. The relation table is built by the driver during system initialization by the procedure maketable , see Appendix A. This procedure reads the file... MAKETABLE Procedure maketable is the first procedure called by the driver. It sets up the relation table in local RAM of SBC 1 by reading the information

  6. Fast automated analysis of strong gravitational lenses with convolutional neural networks.

    PubMed

    Hezaveh, Yashar D; Levasseur, Laurence Perreault; Marshall, Philip J

    2017-08-30

    Quantifying image distortions caused by strong gravitational lensing-the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures-and estimating the corresponding matter distribution of these structures (the 'gravitational lens') has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the 'singular isothermal ellipsoid' density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.

  7. Sieve-based relation extraction of gene regulatory networks from biological literature

    PubMed Central

    2015-01-01

    Background Relation extraction is an essential procedure in literature mining. It focuses on extracting semantic relations between parts of text, called mentions. Biomedical literature includes an enormous amount of textual descriptions of biological entities, their interactions and results of related experiments. To extract them in an explicit, computer readable format, these relations were at first extracted manually from databases. Manual curation was later replaced with automatic or semi-automatic tools with natural language processing capabilities. The current challenge is the development of information extraction procedures that can directly infer more complex relational structures, such as gene regulatory networks. Results We develop a computational approach for extraction of gene regulatory networks from textual data. Our method is designed as a sieve-based system and uses linear-chain conditional random fields and rules for relation extraction. With this method we successfully extracted the sporulation gene regulation network in the bacterium Bacillus subtilis for the information extraction challenge at the BioNLP 2013 conference. To enable extraction of distant relations using first-order models, we transform the data into skip-mention sequences. We infer multiple models, each of which is able to extract different relationship types. Following the shared task, we conducted additional analysis using different system settings that resulted in reducing the reconstruction error of bacterial sporulation network from 0.73 to 0.68, measured as the slot error rate between the predicted and the reference network. We observe that all relation extraction sieves contribute to the predictive performance of the proposed approach. Also, features constructed by considering mention words and their prefixes and suffixes are the most important features for higher accuracy of extraction. 
Analysis of distances between different mention types in the text shows that our choice of transforming data into skip-mention sequences is appropriate for detecting relations between distant mentions. Conclusions Linear-chain conditional random fields, along with appropriate data transformations, can be efficiently used to extract relations. The sieve-based architecture simplifies the system, as new sieves can be easily added or removed and each sieve can utilize the results of previous ones. Furthermore, sieves with conditional random fields can be trained on arbitrary text data and hence are applicable to a broad range of relation extraction tasks and data domains. PMID:26551454
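    The skip-mention transformation described above can be sketched as follows: for skip distance k, the ordered mention list is split into k+1 subsequences so that consecutive elements of each subsequence are k mentions apart in the original text, letting a first-order model relate distant mentions. The function below is an illustrative reading of that idea, not the authors' exact preprocessing.

```python
def skip_mention_sequences(mentions, k):
    """Split an ordered mention list into k+1 interleaved subsequences;
    consecutive elements of each subsequence skip k intervening mentions."""
    return [mentions[offset::k + 1] for offset in range(k + 1)]
```

    For k = 0 the original sequence is returned unchanged, so the usual linear-chain model is the special case of adjacent mentions.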

  8. Sieve-based relation extraction of gene regulatory networks from biological literature.

    PubMed

    Žitnik, Slavko; Žitnik, Marinka; Zupan, Blaž; Bajec, Marko

    2015-01-01

    Relation extraction is an essential procedure in literature mining. It focuses on extracting semantic relations between parts of text, called mentions. Biomedical literature includes an enormous amount of textual descriptions of biological entities, their interactions and results of related experiments. To extract them in an explicit, computer readable format, these relations were at first extracted manually from databases. Manual curation was later replaced with automatic or semi-automatic tools with natural language processing capabilities. The current challenge is the development of information extraction procedures that can directly infer more complex relational structures, such as gene regulatory networks. We develop a computational approach for extraction of gene regulatory networks from textual data. Our method is designed as a sieve-based system and uses linear-chain conditional random fields and rules for relation extraction. With this method we successfully extracted the sporulation gene regulation network in the bacterium Bacillus subtilis for the information extraction challenge at the BioNLP 2013 conference. To enable extraction of distant relations using first-order models, we transform the data into skip-mention sequences. We infer multiple models, each of which is able to extract different relationship types. Following the shared task, we conducted additional analysis using different system settings that resulted in reducing the reconstruction error of bacterial sporulation network from 0.73 to 0.68, measured as the slot error rate between the predicted and the reference network. We observe that all relation extraction sieves contribute to the predictive performance of the proposed approach. Also, features constructed by considering mention words and their prefixes and suffixes are the most important features for higher accuracy of extraction. 
Analysis of distances between different mention types in the text shows that our choice of transforming data into skip-mention sequences is appropriate for detecting relations between distant mentions. Linear-chain conditional random fields, along with appropriate data transformations, can be efficiently used to extract relations. The sieve-based architecture simplifies the system, as new sieves can be easily added or removed and each sieve can utilize the results of previous ones. Furthermore, sieves with conditional random fields can be trained on arbitrary text data and hence are applicable to a broad range of relation extraction tasks and data domains.

  9. A Quality-Control-Oriented Database for a Mesoscale Meteorological Observation Network

    NASA Astrophysics Data System (ADS)

    Lussana, C.; Ranci, M.; Uboldi, F.

    2012-04-01

    In the operational context of a local weather service, data accessibility and quality-related issues must be managed by taking into account a wide set of user needs. This work describes the structure and the operational choices made for the operational implementation of a database system storing data from highly automated observing stations, metadata and information on data quality. Lombardy's environmental protection agency, ARPA Lombardia, manages a highly automated mesoscale meteorological network. A Quality Assurance System (QAS) ensures that reliable observational information is collected and disseminated to the users. The weather unit in ARPA Lombardia, at the same time an important QAS component and an intensive data user, has developed a database specifically aimed at: 1) providing quick access to data for operational activities, and 2) ensuring data quality for real-time applications, by means of an Automatic Data Quality Control (ADQC) procedure. Quantities stored in the archive include hourly aggregated observations of: precipitation amount, temperature, wind, relative humidity, pressure, global and net solar radiation. The ADQC performs several independent tests on raw data and compares their results in a decision-making procedure. An important ADQC component is the Spatial Consistency Test based on Optimal Interpolation. Interpolated and Cross-Validation analysis values are also stored in the database, providing further information to human operators and useful estimates in case of missing data. The technical solution adopted is based on a LAMP (Linux, Apache, MySQL and Php) system, constituting an open source environment suitable for both development and operational practice. The ADQC procedure itself is performed by R scripts directly interacting with the MySQL database. Users and network managers can access the database by using a set of web-based Php applications.
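    The idea behind a spatial consistency test is to compare each observation with an estimate interpolated from its neighbours, leave-one-out style, and flag large deviations. The sketch below deliberately substitutes simple inverse-distance weighting for Optimal Interpolation; the station layout, values, and tolerance are invented for illustration.

```python
import math

def flag_inconsistent(stations, tol):
    """stations: name -> ((x, y), value). Flag a station when its value
    deviates from the inverse-distance-weighted estimate computed from
    all other stations by more than tol (leave-one-out check)."""
    flagged = set()
    for name, ((x, y), value) in stations.items():
        num = den = 0.0
        for other, ((ox, oy), ov) in stations.items():
            if other == name:
                continue
            w = 1.0 / math.hypot(x - ox, y - oy)   # inverse-distance weight
            num += w * ov
            den += w
        if abs(value - num / den) > tol:
            flagged.add(name)
    return flagged
```

    With three stations reporting around 20 degrees and one reporting 35, only the outlier exceeds the tolerance and is flagged for the human operator, while the interpolated estimate itself could stand in for the missing or rejected value.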

  10. Markovian Analysis of the Sequential Behavior of the Spontaneous Spinal Cord Dorsum Potentials Induced by Acute Nociceptive Stimulation in the Anesthetized Cat.

    PubMed

    Martin, Mario; Béjar, Javier; Esposito, Gennaro; Chávez, Diógenes; Contreras-Hernández, Enrique; Glusman, Silvio; Cortés, Ulises; Rudomín, Pablo

    2017-01-01

    In a previous study we developed a Machine Learning procedure for the automatic identification and classification of spontaneous cord dorsum potentials (CDPs). This study further supported the proposal that in the anesthetized cat, the spontaneous CDPs recorded from different lumbar spinal segments are generated by a distributed network of dorsal horn neurons with structured (non-random) patterns of functional connectivity, and that these configurations can be changed to other non-random and stable configurations after the nociceptive stimulation produced by the intradermic injection of capsaicin in the anesthetized cat. Here we present a study showing that the sequence of identified forms of the spontaneous CDPs follows a Markov chain of at least order one. That is, the system has memory in the sense that the spontaneous activation of dorsal horn neuronal ensembles producing the CDPs is not independent of the most recent activity. We used this Markovian property to build a procedure to identify portions of signals as belonging to a specific functional state of connectivity among the neuronal networks involved in the generation of the CDPs. We have tested this procedure during acute nociceptive stimulation produced by the intradermic injection of capsaicin in intact as well as spinalized preparations. Altogether, our results indicate that CDP sequences cannot be generated by a renewal stochastic process. Moreover, it is possible to describe some functional features of activity in the cord dorsum by modeling the CDP sequences as generated by a Markov stochastic process of order one. Finally, these Markov models make it possible to determine the functional state which produced a CDP sequence. The proposed identification procedures appear to be useful for the analysis of the sequential behavior of the ongoing CDPs recorded from different spinal segments in response to a variety of experimental procedures including the changes produced by acute nociceptive stimulation.
They are envisaged as a useful tool to examine alterations of the patterns of functional connectivity between dorsal horn neurons under normal and different pathological conditions, an issue of potential clinical concern.
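    Fitting a first-order Markov model to a labelled CDP sequence and deciding which functional state produced a new segment amounts to estimating a transition matrix per state and comparing log-likelihoods. A hedged sketch with Laplace smoothing; the two-symbol toy sequences stand in for the classified CDP forms.

```python
import math

def fit_transitions(seq, states, alpha=1.0):
    """Estimate a first-order transition matrix from a label sequence,
    with add-alpha (Laplace) smoothing."""
    counts = {s: {t: alpha for t in states} for s in states}
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return {s: {t: counts[s][t] / sum(counts[s].values()) for t in states}
            for s in states}

def log_likelihood(seq, P):
    """Log-likelihood of a sequence under a fitted transition matrix."""
    return sum(math.log(P[a][b]) for a, b in zip(seq, seq[1:]))
```

    A segment is assigned to whichever model (for example, pre- versus post-capsaicin) gives it the higher log-likelihood, which is the classification step the identification procedure relies on.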

  11. A multiobjective optimization framework for multicontaminant industrial water network design.

    PubMed

    Boix, Marianne; Montastruc, Ludovic; Pibouleau, Luc; Azzaro-Pantel, Catherine; Domenech, Serge

    2011-07-01

    The optimal design of multicontaminant industrial water networks according to several objectives is carried out in this paper. The general formulation of the water allocation problem (WAP) is given as a set of nonlinear equations with binary variables representing the presence of interconnections in the network. For optimization purposes, three antagonistic objectives are considered: F(1), the freshwater flow-rate at the network entrance, F(2), the water flow-rate at the inlet of regeneration units, and F(3), the number of interconnections in the network. The multiobjective problem is solved via a lexicographic strategy, where a mixed-integer nonlinear programming (MINLP) procedure is used at each step. The approach is illustrated by a numerical example taken from the literature involving five processes, one regeneration unit and three contaminants. The set of potential network solutions is provided in the form of a Pareto front. Finally, the strategy for choosing the best network solution among those given by Pareto fronts is presented. This Multiple Criteria Decision Making (MCDM) problem is tackled by means of two approaches: a classical TOPSIS analysis is first implemented, and then an innovative strategy based on the global equivalent cost (GEC) in freshwater that turns out to be more efficient for choosing a good network from a practical point of view. Copyright © 2011 Elsevier Ltd. All rights reserved.
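    The lexicographic strategy orders the objectives by priority and, at each step, keeps only the candidates that are optimal (within a tolerance) for the current objective before moving to the next. The paper solves an MINLP at each step; the sketch below applies the same filtering logic to a precomputed candidate list, with invented (F1, F2, F3) values.

```python
def lexicographic_min(candidates, objectives, tol=0.0):
    """Filter candidates objective by objective, in priority order.
    objectives: list of callables, highest priority first."""
    pool = list(candidates)
    for obj in objectives:
        best = min(obj(c) for c in pool)
        pool = [c for c in pool if obj(c) <= best + tol]
    return pool
```

    With candidates as (freshwater flow, regeneration flow, interconnection count) tuples, minimizing F1 first, then F2, then F3 leaves a single network; a nonzero tolerance would keep near-optimal ties alive for the later objectives.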

  12. Netlang: A software for the linguistic analysis of corpora by means of complex networks

    PubMed Central

    Serna Salazar, Diego; Isaza, Gustavo; Castillo Ossa, Luis F.; Bedia, Manuel G.

    2017-01-01

    To date there is no software that directly connects the linguistic analysis of a conversation to a network program. Network programs are able to extract statistical information from databases describing systems of interacting elements. Language has also been conceived and studied as a complex system. However, most proposals do not analyze language according to linguistic theory, but instead use computational systems that should save time at the price of leaving aside many aspects crucial for linguistic theory. Some approaches to network studies on language do apply precise linguistic analyses, made by a linguist. The problem until now has been the lack of an interface between the analysis of a sentence and its integration into the network that could be managed by a linguist and that could handle the analysis of any language. Previous works have used old software that was not created for these purposes and that often produced problems with some idiosyncrasies of the target language. The desired interface should be able to deal with the syntactic peculiarities of a particular language, the options of linguistic theory preferred by the user, and the preservation of morpho-syntactic information (lexical categories and syntactic relations between items). Netlang is the first program able to do that. Recently, a new kind of linguistic analysis has been developed, which is able to extract a complexity pattern from the speaker's linguistic production, depicted as a network where words are inside nodes and these nodes connect to each other by means of edges or links (the information inside the edge can be syntactic, semantic, etc.). The Netlang software has become the bridge between rough linguistic data and the network program. Netlang has integrated and improved the functions of programs used in the past, namely the DGA annotator and two scripts (ToXML.pl and Xml2Pairs.py) used for transforming and pruning data.
Netlang allows the researcher to make accurate linguistic analyses by means of syntactic dependency relations between words, while keeping a record of the nature of such syntactic relationships (subject, object, etc.). The Netlang software is presented as a new tool that solves many problems detected in the past. The most important improvement is that Netlang integrates three past applications into one program, and is able to produce a series of file formats that can be read by a network program. Through the Netlang software, linguistic network analysis based on syntactic analyses, characterized by its low cost and a completely non-invasive procedure, aims to evolve into a sufficiently fine-grained tool for clinical diagnosis in potential cases of language disorders. PMID:28832598
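    The kind of word network described above (words as nodes, syntactic dependencies as labelled edges) can be sketched from (head, dependent, relation) triples. The triples and the function below are illustrative assumptions, not Netlang's actual file formats or API.

```python
def build_word_network(triples):
    """Build an undirected word network from dependency triples,
    keeping the syntactic relation as an edge label."""
    adj, labels = {}, {}
    for head, dep, rel in triples:
        adj.setdefault(head, set()).add(dep)
        adj.setdefault(dep, set()).add(head)
        labels[(head, dep)] = rel
    return adj, labels
```

    Node degree and other network statistics can then be read straight off `adj`, which is the hand-off point between the linguistic analysis and a network program.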

  13. Netlang: A software for the linguistic analysis of corpora by means of complex networks.

    PubMed

    Barceló-Coblijn, Lluís; Serna Salazar, Diego; Isaza, Gustavo; Castillo Ossa, Luis F; Bedia, Manuel G

    2017-01-01

    To date there is no software that directly connects the linguistic analysis of a conversation to a network program. Network programs are able to extract statistical information from databases describing systems of interacting elements. Language has also been conceived and studied as a complex system. However, most proposals do not analyze language according to linguistic theory, but instead use computational systems that should save time at the price of leaving aside many aspects crucial for linguistic theory. Some approaches to network studies on language do apply precise linguistic analyses, made by a linguist. The problem until now has been the lack of an interface between the analysis of a sentence and its integration into the network that could be managed by a linguist and that could handle the analysis of any language. Previous works have used old software that was not created for these purposes and that often produced problems with some idiosyncrasies of the target language. The desired interface should be able to deal with the syntactic peculiarities of a particular language, the options of linguistic theory preferred by the user, and the preservation of morpho-syntactic information (lexical categories and syntactic relations between items). Netlang is the first program able to do that. Recently, a new kind of linguistic analysis has been developed, which is able to extract a complexity pattern from the speaker's linguistic production, depicted as a network where words are inside nodes and these nodes connect to each other by means of edges or links (the information inside the edge can be syntactic, semantic, etc.). The Netlang software has become the bridge between rough linguistic data and the network program. Netlang has integrated and improved the functions of programs used in the past, namely the DGA annotator and two scripts (ToXML.pl and Xml2Pairs.py) used for transforming and pruning data.
Netlang allows the researcher to make accurate linguistic analyses by means of syntactic dependency relations between words, while keeping a record of the nature of such syntactic relationships (subject, object, etc.). The Netlang software is presented as a new tool that solves many problems detected in the past. The most important improvement is that Netlang integrates three past applications into one program, and is able to produce a series of file formats that can be read by a network program. Through the Netlang software, linguistic network analysis based on syntactic analyses, characterized by its low cost and a completely non-invasive procedure, aims to evolve into a sufficiently fine-grained tool for clinical diagnosis in potential cases of language disorders.

  14. Elementary signaling modes predict the essentiality of signal transduction network components

    PubMed Central

    2011-01-01

    Background Understanding how signals propagate through signaling pathways and networks is a central goal in systems biology. Quantitative dynamic models help to achieve this understanding, but are difficult to construct and validate because of the scarcity of known mechanistic details and kinetic parameters. Structural and qualitative analysis is emerging as a feasible and useful alternative for interpreting signal transduction. Results In this work, we present an integrative computational method for evaluating the essentiality of components in signaling networks. This approach expands an existing signaling network to a richer representation that incorporates the positive or negative nature of interactions and the synergistic behaviors among multiple components. Our method simulates both knockout and constitutive activation of components as node disruptions, and takes into account the possible cascading effects of a node's disruption. We introduce the concept of elementary signaling mode (ESM), as the minimal set of nodes that can perform signal transduction independently. Our method ranks the importance of signaling components by the effects of their perturbation on the ESMs of the network. Validation on several signaling networks describing the immune response of mammals to bacteria, guard cell abscisic acid signaling in plants, and T cell receptor signaling shows that this method can effectively uncover the essentiality of components mediating a signal transduction process and results in strong agreement with the results of Boolean (logical) dynamic models and experimental observations. Conclusions This integrative method is an efficient procedure for exploratory analysis of large signaling and regulatory networks where dynamic modeling or experimental tests are impractical. Its results serve as testable predictions, provide insights into signal transduction and regulatory mechanisms and can guide targeted computational or experimental follow-up studies. 
The source codes for the algorithms developed in this study can be found at http://www.phys.psu.edu/~ralbert/ESM. PMID:21426566
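    A drastically simplified version of the essentiality idea: enumerate all simple input-to-output paths in a directed network and call a node essential when it lies on every path, so that knocking it out disconnects the signal. The full ESM method also represents negative edges and synergistic (AND) interactions, which this sketch ignores; the toy network is invented.

```python
def simple_paths(adj, src, dst, path=None):
    """Enumerate all simple directed paths from src to dst."""
    path = (path or []) + [src]
    if src == dst:
        return [path]
    found = []
    for nxt in adj.get(src, ()):
        if nxt not in path:                  # keep paths simple
            found += simple_paths(adj, nxt, dst, path)
    return found

def essential_nodes(adj, src, dst):
    """Nodes present in every src-to-dst path: their knockout
    eliminates all routes for signal transduction."""
    paths = simple_paths(adj, src, dst)
    if not paths:
        return set()
    essential = set(paths[0])
    for p in paths[1:]:
        essential &= set(p)
    return essential
```

    In a diamond-shaped network with two parallel branches, only the source and sink are essential; the branch nodes are redundant, mirroring how component rankings fall out of perturbing minimal signalling routes.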

  15. Investigation of the Impact of Extracting and Exchanging Health Information by Using Internet and Social Networks.

    PubMed

    Pistolis, John; Zimeras, Stelios; Chardalias, Kostas; Roupa, Zoe; Fildisis, George; Diomidous, Marianna

    2016-06-01

    Social networks (1) have been embedded in our daily life for a long time. They constitute a powerful tool used nowadays both for searching and for exchanging information on different issues, via Internet search engines (Google, Bing, etc.) and social networks (Facebook, Twitter, etc.). This paper presents the results of research on the frequency and type of use of the Internet and social networks by the general public and by health professionals. The objectives of the research were focused on investigating how frequently both individuals and health practitioners seek and meticulously search for health information in social media. Exchanging information is a procedure that raises the issues of reliability and quality of information. In this research, advanced statistical techniques are used to investigate the participants' profiles in using social networks to search for and exchange information on health issues. Based on the answers, 93% of the respondents use the Internet to find information on health subjects. In the principal component analysis, the most important health subjects were nutrition (loading 0.719), respiratory issues (0.79), cardiological issues (0.777) and psychological issues (0.667), with 73.8% of the total variance explained. The results, based on different statistical techniques, revealed that 61.2% of the males and 56.4% of the females intended to use social networks to search for medical information. Based on the principal component analysis, the most important sources the participants mentioned were the Internet and social networks for exchanging information on health issues. These sources proved to be of paramount importance to the participants of the study. The same holds for nursing, medical and administrative staff in hospitals.

  16. Fuzzy knowledge base construction through belief networks based on Lukasiewicz logic

    NASA Technical Reports Server (NTRS)

    Lara-Rosano, Felipe

    1992-01-01

    In this paper, a procedure is proposed to build a fuzzy knowledge base founded on fuzzy belief networks and Lukasiewicz logic. Fuzzy procedures are developed to do the following: to assess the belief values of a consequent, in terms of the belief values of its logical antecedents and the belief value of the corresponding logical function; and to update belief values when new evidence is available.

  17. Generating a comprehensive set of standard operating procedures for a biorepository network-The CTRNet experience.

    PubMed

    Barnes, Rebecca; Albert, Monique; Damaraju, Sambasivarao; de Sousa-Hitzler, Jean; Kodeeswaran, Sugy; Mes-Masson, Anne-Marie; Watson, Peter; Schacter, Brent

    2013-12-01

    Despite the integral role of biorepositories in fueling translational research and the advancement of medicine, there are significant gaps in harmonization of biobanking practices, resulting in variable biospecimen collection, storage, and processing. This significantly impacts accurate downstream analysis and, in particular, creates a problem for biorepository networks or consortia. The Canadian Tumour Repository Network (CTRNet; www.ctrnet.ca ) is a consortium of Canadian tumor biorepositories that aims to enhance biobanking capacity and quality through standardization. To minimize the issue of variable biobanking practices throughout its network, CTRNet has developed and maintained a comprehensive set of 45 standard operating procedures (SOPs). There were four key elements to the CTRNet SOP development process: 1) an SOP development team was formed from members across CTRNet to co-produce each SOP; 2) a principal author was appointed with responsibility for overall coordination of the SOP development process; 3) the CTRNet Management Committee (composed of principal investigators for each member biorepository) reviewed/revised each SOP completed by the development team; and 4) external expert reviewers provided feedback and recommendations on each SOP. Once final Management Committee approval was obtained, the ratified SOP was published on the CTRNet website for public access. Since the SOPs were first published on the CTRNet website (June 2008), there have been approximately 15,000 downloads of one or more CTRNet SOPs/Policies by users from over 60 countries. In accordance with biobanking best practices, CTRNet performs an exhaustive review of its SOPs at set intervals, to coincide with each granting cycle. The last revision was completed in May 2012.

  18. Systematic review and network meta-analysis comparing clinical outcomes and effectiveness of surgical treatments for haemorrhoids.

    PubMed

    Simillis, C; Thoukididou, S N; Slesser, A A P; Rasheed, S; Tan, E; Tekkis, P P

    2015-12-01

    The aim was to compare the clinical outcomes and effectiveness of surgical treatments for haemorrhoids. Randomized clinical trials were identified by means of a systematic review. A Bayesian network meta-analysis was performed using the Markov chain Monte Carlo method in WinBUGS. Ninety-eight trials were included with 7827 participants and 11 surgical treatments for grade III and IV haemorrhoids. Open, closed and radiofrequency haemorrhoidectomies resulted in significantly more postoperative complications than transanal haemorrhoidal dearterialization (THD), LigaSure™ and Harmonic® haemorrhoidectomies. THD had significantly less postoperative bleeding than open and stapled procedures, and resulted in significantly fewer emergency reoperations than open, closed, stapled and LigaSure™ haemorrhoidectomies. Open and closed haemorrhoidectomies resulted in more pain on postoperative day 1 than stapled, THD, LigaSure™ and Harmonic® procedures. After stapled, LigaSure™ and Harmonic® haemorrhoidectomies patients resumed normal daily activities earlier than after open and closed procedures. THD provided the earliest time to first bowel movement. The stapled and THD groups had significantly higher haemorrhoid recurrence rates than the open, closed and LigaSure™ groups. Recurrence of haemorrhoidal symptoms was more common after stapled haemorrhoidectomy than after open and LigaSure™ operations. No significant difference was identified between treatments for anal stenosis, incontinence and perianal skin tags. Open and closed haemorrhoidectomies resulted in more postoperative complications and slower recovery, but fewer haemorrhoid recurrences. THD and stapled haemorrhoidectomies were associated with decreased postoperative pain and faster recovery, but higher recurrence rates. The advantages and disadvantages of each surgical treatment should be discussed with the patient before surgery to allow an informed decision to be made. 
© 2015 BJS Society Ltd Published by John Wiley & Sons Ltd.

  19. Third-Order Spectral Techniques for the Diagnosis of Motor Bearing Condition Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Yang, D.-M.; Stronach, A. F.; MacConnell, P.; Penman, J.

    2002-03-01

    This paper addresses the development of a novel condition monitoring procedure for rolling element bearings which involves a combination of signal processing, signal analysis and artificial intelligence methods. Seven approaches based on power spectrum, bispectral and bicoherence vibration analyses are investigated as signal pre-processing techniques for application in the diagnosis of a number of induction motor rolling element bearing conditions. The bearing conditions considered are a normal bearing and bearings with cage and inner and outer race faults. The vibration analysis methods investigated are based on the power spectrum, the bispectrum, the bicoherence, the bispectrum diagonal slice, the bicoherence diagonal slice, the summed bispectrum and the summed bicoherence. Selected features are extracted from the vibration signatures so obtained and these are used as inputs to an artificial neural network trained to identify the bearing conditions. Quadratic phase coupling (QPC), examined using the magnitudes of the bispectrum and bicoherence and the biphase, is shown to be absent from the bearing system and it is therefore concluded that the structure of the bearing vibration signatures results from inter-modulation effects. In order to test the proposed procedure, experimental data from a bearing test rig are used to develop an example diagnostic system. Results show that the bearing conditions examined can be diagnosed with a high success rate, particularly when using the summed bispectrum signatures.
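    The bispectrum diagonal slice mentioned above can be estimated by averaging windowed FFT segments; the sketch below uses invented signal parameters (components at bins 10 and 20 with phases 0.3 and 0.6) to show how quadratic phase coupling produces a peak on the diagonal at the lower frequency.

    ```python
    import numpy as np

    # Illustrative estimate of the bispectrum diagonal slice B(f, f) by averaging
    # windowed FFT segments; the test-signal parameters are invented to exhibit
    # quadratic phase coupling (QPC).

    def bispectrum_diagonal(x, nfft=128):
        """|B(f, f)| = |mean over segments of X(f) X(f) X*(2f)|."""
        nseg = len(x) // nfft
        f = np.arange(nfft // 4)              # keep 2f within the FFT range
        win = np.hanning(nfft)
        acc = np.zeros(len(f), dtype=complex)
        for k in range(nseg):
            X = np.fft.fft(win * x[k * nfft:(k + 1) * nfft])
            acc += X[f] * X[f] * np.conj(X[2 * f])
        return np.abs(acc) / nseg

    # Components at bins 10 and 20 whose phases satisfy 0.3 + 0.3 = 0.6: a
    # quadratically phase-coupled pair, so the diagonal slice peaks at bin 10.
    n = np.arange(128 * 16)
    sig = (np.cos(2 * np.pi * 10 * n / 128 + 0.3)
           + np.cos(2 * np.pi * 20 * n / 128 + 0.6))
    print(int(np.argmax(bispectrum_diagonal(sig))))   # → 10
    ```
    
    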

  20. From protein-protein interactions to protein co-expression networks: a new perspective to evaluate large-scale proteomic data.

    PubMed

    Vella, Danila; Zoppis, Italo; Mauri, Giancarlo; Mauri, Pierluigi; Di Silvestre, Dario

    2017-12-01

    The reductionist approach of dissecting biological systems into their constituents was successful in the first stage of molecular biology, elucidating the chemical basis of several biological processes. This knowledge helped biologists understand the complexity of biological systems, showing that most biological functions do not arise from individual molecules; the emergent properties of biological systems cannot be explained or predicted by investigating individual molecules without taking their relations into consideration. Thanks to improvements in current -omics technologies and the increasing understanding of molecular relationships, ever more studies evaluate biological systems through approaches based on graph theory. Genomic and proteomic data are often combined with protein-protein interaction (PPI) networks, whose structure is routinely analyzed by algorithms and tools to characterize hubs/bottlenecks and topological, functional, and disease modules. On the other hand, co-expression networks represent a complementary procedure that enables system-level evaluation, including for organisms that lack information on PPIs. On these premises, we introduce the reader to PPI and co-expression networks, including aspects of their reconstruction and analysis. In particular, the new idea of evaluating large-scale proteomic data by means of co-expression networks is discussed, and some examples of its application are presented. Their use to infer biological knowledge is shown, and special attention is devoted to topological and module analysis.
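    A minimal sketch of the co-expression idea, assuming a toy abundance matrix (rows: proteins, columns: samples) and an arbitrary correlation cutoff of 0.9: two proteins are linked when their expression profiles correlate strongly.

    ```python
    import numpy as np

    # Illustrative sketch (not the authors' pipeline): link two proteins when the
    # absolute Pearson correlation of their abundance profiles exceeds a
    # threshold. The abundance matrix and the 0.9 cutoff are toy assumptions.

    def coexpression_edges(abundance, names, threshold=0.9):
        """abundance: proteins x samples matrix; returns (name_i, name_j, r) edges."""
        corr = np.corrcoef(abundance)
        return [(names[i], names[j], round(float(corr[i, j]), 3))
                for i in range(len(names)) for j in range(i + 1, len(names))
                if abs(corr[i, j]) >= threshold]

    rng = np.random.default_rng(0)
    base = rng.normal(size=20)                              # shared profile
    data = np.vstack([base,
                      2 * base + 0.01 * rng.normal(size=20),  # co-expressed with P1
                      rng.normal(size=20)])                   # unrelated protein
    print(coexpression_edges(data, ["P1", "P2", "P3"]))  # single P1-P2 edge
    ```
    
    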

  1. Genome Scale Modeling in Systems Biology: Algorithms and Resources

    PubMed Central

    Najafi, Ali; Bidkhori, Gholamreza; Bozorgmehr, Joseph H.; Koch, Ina; Masoudi-Nejad, Ali

    2014-01-01

    In recent years, in silico studies and trial simulations have complemented experimental procedures. A model is a description of a system, and a system is any collection of interrelated objects; an object, moreover, is some elemental unit upon which observations can be made but whose internal structure either does not exist or is ignored. Therefore, any network analysis approach is critical for successful quantitative modeling of biological systems. This review highlights, in five sections, some of the most popular and important modeling algorithms, tools, and emerging standards for representing, simulating and analyzing cellular networks. We also illustrate these concepts by means of simple examples and appropriate images and graphs. Overall, systems biology aims for a holistic description and understanding of biological processes through an integration of analytical experimental approaches with synthetic computational models. In fact, biological networks have been developed as a platform for integrating information from high- to low-throughput experiments for the analysis of biological systems. We provide an overview of all processes used in modeling and simulating biological networks in such a way that they become easily understandable for researchers with both biological and mathematical backgrounds. Consequently, given the complexity of the generated experimental data and cellular networks, it is no surprise that researchers have turned to computer simulation and the development of more theory-based approaches to augment and assist in the development of a fully quantitative understanding of cellular dynamics. PMID:24822031

  2. 47 CFR 36.354 - Access expenses-Account 6540.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... JURISDICTIONAL SEPARATIONS PROCEDURES; STANDARD PROCEDURES FOR SEPARATING TELECOMMUNICATIONS PROPERTY COSTS... Network Operations Expenses § 36.354 Access expenses—Account 6540. (a) This account includes access...

  3. Framework for adaptive multiscale analysis of nonhomogeneous point processes.

    PubMed

    Helgason, Hannes; Bartroff, Jay; Abry, Patrice

    2011-01-01

    We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process's non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.

  4. A multi-period distribution network design model under demand uncertainty

    NASA Astrophysics Data System (ADS)

    Tabrizi, Babak H.; Razmi, Jafar

    2013-05-01

    Supply chain management is regarded as an inseparable component of satisfying customers' requirements. This paper deals with the distribution network design (DND) problem, a critical issue in achieving supply chain accomplishments: a capable DND can guarantee the success of the entire network performance. However, on the one hand there are many factors that can cause fluctuations in the input data that determine market behaviour, with respect to short-term planning; on the other hand, network performance may be threatened by changes that take place across operating periods, with respect to long-term planning. Thus, in order to bring both kinds of changes under control, we consider a new multi-period, multi-commodity, multi-source DND problem in circumstances where the network encounters uncertain demands. Fuzzy logic is applied here as an efficient tool for controlling the risk associated with potential customers' demand. The defuzzifying framework lets practitioners and decision-makers interact with the solution procedure continuously. The fuzzy model is then validated by a sensitivity analysis test, and a typical problem is solved in order to illustrate the implementation steps. Finally, the formulation is tested on problems of different sizes to show its overall performance.

  5. Computational Characterization of Type I collagen-based Extra-cellular Matrix

    NASA Astrophysics Data System (ADS)

    Liang, Long; Jones, Christopher Allen Rucksack; Lin, Daniel; Jiao, Yang; Sun, Bo

    2015-03-01

    A model of extracellular matrix (ECM) of collagen fibers has been built, in which cells could communicate with distant partners via fiber-mediated long-range-transmitted stress states. The ECM is modeled as a spring-like fiber network derived from skeletonized confocal microscopy data. Different local and global perturbations have been performed on the network, each followed by an optimized global Monte-Carlo (MC) energy minimization leading to the deformed network in response to the perturbations. In the optimization, a highly efficient local energy update procedure is employed and force-directed MC moves are used, which results in a convergence to the energy minimum state 20 times faster than the commonly used random displacement trial moves in MC. Further analysis and visualization of the distribution and correlation of the resulting force network reveal that local perturbations can give rise to global impacts: the force chains formed with a linear extent much further than the characteristic length scale associated with the perturbation sites and average fiber length. This behavior provides a strong evidence for our hypothesis of fiber-mediated long-range force transmission in ECM networks and the resulting long-range cell-cell mechanical signaling. ASU Seed Grant.

  6. Systemic Risk Analysis on Reconstructed Economic and Financial Networks

    PubMed Central

    Cimini, Giulio; Squartini, Tiziano; Garlaschelli, Diego; Gabrielli, Andrea

    2015-01-01

    We address a fundamental problem that is systematically encountered when modeling real-world complex systems of societal relevance: the limitedness of the information available. In the case of economic and financial networks, privacy issues severely limit the information that can be accessed and, as a consequence, the possibility of correctly estimating the resilience of these systems to events such as financial shocks, crises and cascade failures. Here we present an innovative method to reconstruct the structure of such partially-accessible systems, based on the knowledge of intrinsic node-specific properties and of the number of connections of only a limited subset of nodes. This information is used to calibrate an inference procedure based on fundamental concepts derived from statistical physics, which makes it possible to generate ensembles of directed weighted networks intended to represent the real system, so that the real network properties can be estimated as their average values within the ensemble. We test the method both on synthetic and empirical networks, focusing on the properties that are commonly used to measure systemic risk. Indeed, the method shows a remarkable robustness with respect to the limitedness of the information available, thus representing a valuable tool for gaining insight into privacy-protected economic and financial systems. PMID:26507849

  7. Systemic Risk Analysis on Reconstructed Economic and Financial Networks

    NASA Astrophysics Data System (ADS)

    Cimini, Giulio; Squartini, Tiziano; Garlaschelli, Diego; Gabrielli, Andrea

    2015-10-01

    We address a fundamental problem that is systematically encountered when modeling real-world complex systems of societal relevance: the limitedness of the information available. In the case of economic and financial networks, privacy issues severely limit the information that can be accessed and, as a consequence, the possibility of correctly estimating the resilience of these systems to events such as financial shocks, crises and cascade failures. Here we present an innovative method to reconstruct the structure of such partially-accessible systems, based on the knowledge of intrinsic node-specific properties and of the number of connections of only a limited subset of nodes. This information is used to calibrate an inference procedure based on fundamental concepts derived from statistical physics, which makes it possible to generate ensembles of directed weighted networks intended to represent the real system, so that the real network properties can be estimated as their average values within the ensemble. We test the method both on synthetic and empirical networks, focusing on the properties that are commonly used to measure systemic risk. Indeed, the method shows a remarkable robustness with respect to the limitedness of the information available, thus representing a valuable tool for gaining insight into privacy-protected economic and financial systems.
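    The flavor of such a reconstruction can be sketched with a simple fitness model: assume each node carries a known "fitness" x_i (standing in for an intrinsic property such as total assets), take the link probability p_ij = z x_i x_j / (1 + z x_i x_j), and calibrate z so the expected number of links matches what is observed. The fitness values and the target link count below are invented for illustration, not the paper's data or exact procedure.

    ```python
    import random

    # Toy sketch of fitness-based network reconstruction. Links appear with
    # probability p_ij = z * x_i * x_j / (1 + z * x_i * x_j); z is calibrated
    # so the expected link count matches an observed target.

    def p_link(z, xi, xj):
        return z * xi * xj / (1 + z * xi * xj)

    def expected_links(fitness, z):
        n = len(fitness)
        return sum(p_link(z, fitness[i], fitness[j])
                   for i in range(n) for j in range(n) if i != j)

    def calibrate_z(fitness, target_links, lo=1e-9, hi=1e9, iters=200):
        """Bisect geometrically (z spans decades) until the expectation fits."""
        for _ in range(iters):
            mid = (lo * hi) ** 0.5
            if expected_links(fitness, mid) < target_links:
                lo = mid
            else:
                hi = mid
        return (lo * hi) ** 0.5

    def sample_network(fitness, z, rng):
        """Draw one directed network from the calibrated ensemble."""
        n = len(fitness)
        return [(i, j) for i in range(n) for j in range(n)
                if i != j and rng.random() < p_link(z, fitness[i], fitness[j])]

    fitness = [1.0, 2.0, 5.0, 0.5]
    z = calibrate_z(fitness, target_links=6)
    print(round(expected_links(fitness, z), 2))     # → 6.0
    edges = sample_network(fitness, z, random.Random(1))
    ```

    Averaging any network property over many such samples estimates the real system's value, which is the essence of the ensemble approach described above.
    
    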

  8. Calibration and correction procedures for cosmic-ray neutron soil moisture probes located across Australia

    NASA Astrophysics Data System (ADS)

    Hawdon, Aaron; McJannet, David; Wallace, Jim

    2014-06-01

    The cosmic-ray probe (CRP) provides continuous estimates of soil moisture over an area of ˜30 ha by counting fast neutrons produced from cosmic rays which are predominantly moderated by water molecules in the soil. This paper describes the setup, measurement correction procedures, and field calibration of CRPs at nine locations across Australia with contrasting soil type, climate, and land cover. These probes form the inaugural Australian CRP network, which is known as CosmOz. CRP measurements require neutron count rates to be corrected for effects of atmospheric pressure, water vapor pressure changes, and variations in incoming neutron intensity. We assess the magnitude and importance of these corrections and present standardized approaches for network-wide analysis. In particular, we present a new approach to correct for incoming neutron intensity variations and test its performance against existing procedures used in other studies. Our field calibration results indicate that a generalized calibration function for relating neutron counts to soil moisture is suitable for all soil types, with the possible exception of very sandy soils with low water content. Using multiple calibration data sets, we demonstrate that the generalized calibration function only applies after accounting for persistent sources of hydrogen in the soil profile. Finally, we demonstrate that by following standardized correction procedures and scaling neutron counting rates of all CRPs to a single reference location, differences in calibrations between sites are related to site biomass. This observation provides a means for estimating biomass at a given location or for deriving coefficients for the calibration function in the absence of field calibration data.
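    The corrections described above are typically multiplicative. The sketch below uses common literature coefficients (a barometric attenuation length of 136 hPa and a 0.0054 per g m^-3 vapour factor); these are illustrative values, not the CosmOz-specific ones.

    ```python
    import math

    # Sketch of the multiplicative CRP corrections, using typical literature
    # coefficients; the CosmOz network's reference conditions may differ.

    def correct_counts(n_raw, pressure, p_ref=1013.25, beta=136.0,
                       abs_humidity=5.0, h_ref=0.0,
                       incoming=150.0, incoming_ref=150.0):
        """Return raw neutron counts corrected to reference conditions."""
        f_pressure = math.exp((pressure - p_ref) / beta)   # barometric (hPa)
        f_vapour = 1 + 0.0054 * (abs_humidity - h_ref)     # vapour (g/m^3)
        f_incoming = incoming_ref / incoming               # incoming intensity
        return n_raw * f_pressure * f_vapour * f_incoming

    # Lower-than-reference pressure lets more neutrons through, so counts are
    # scaled down; the humid-air term scales them back up slightly.
    print(round(correct_counts(1200, pressure=1000.0), 1))   # → 1118.0
    ```
    
    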

  9. Development of the Global Measles Laboratory Network.

    PubMed

    Featherstone, David; Brown, David; Sanders, Ray

    2003-05-15

    The routine reporting of suspected measles cases and laboratory testing of samples from these cases is the backbone of measles surveillance. The Global Measles Laboratory Network (GMLN) has developed standards for laboratory confirmation of measles and provides training resources for staff of network laboratories, reference materials and expertise for the development and quality control of testing procedures, and accurate information for the Measles Mortality Reduction and Regional Elimination Initiative. The GMLN was developed along the lines of the successful Global Polio Laboratory Network, and much of the polio laboratory infrastructure was utilized for measles. The GMLN has developed as countries focus on measles control activities following successful eradication of polio. Currently more than 100 laboratories are part of the global network and follow standardized testing and reporting procedures. A comprehensive laboratory accreditation process will be introduced in 2002 with six quality assurance and performance indicators.

  10. Scalable software-defined optical networking with high-performance routing and wavelength assignment algorithms.

    PubMed

    Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin

    2015-10-19

    The feasibility of software-defined optical networking (SDON) for practical application depends critically on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed that achieve high network capacity at reduced computation cost, a significant attribute for a scalable centrally controlled SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with shortest-path-based routing, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and computation complexity in the routing table update procedure through a simulation study.
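    A minimal RWA sketch, assuming a toy three-node topology: route each request over the shortest path, then assign the first wavelength free on every hop (first-fit). This illustrates the constraint structure of RWA, not the authors' hottest-request-first policy.

    ```python
    from heapq import heappush, heappop

    # Illustrative first-fit RWA on a toy topology (not the paper's algorithm).

    def shortest_path(adj, src, dst):
        """Dijkstra over an adjacency dict {u: {v: cost, ...}, ...}."""
        dist, prev, heap = {src: 0}, {}, [(0, src)]
        while heap:
            d, u = heappop(heap)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj[u].items():
                if d + w < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + w, u
                    heappush(heap, (d + w, v))
        path = [dst]
        while path[-1] != src:
            path.append(prev[path[-1]])
        return path[::-1]

    def first_fit_rwa(adj, used, request, n_wavelengths=4):
        """Shortest-path route plus lowest-index wavelength free on every hop."""
        path = shortest_path(adj, *request)
        links = [tuple(sorted(l)) for l in zip(path, path[1:])]
        for wl in range(n_wavelengths):
            if all(wl not in used.get(l, set()) for l in links):
                for l in links:
                    used.setdefault(l, set()).add(wl)
                return path, wl
        return path, None        # no wavelength free: the request is blocked

    adj = {"A": {"B": 1}, "B": {"A": 1, "C": 1}, "C": {"B": 1}}
    used = {}                    # per-link sets of occupied wavelengths
    print(first_fit_rwa(adj, used, ("A", "C")))   # → (['A', 'B', 'C'], 0)
    print(first_fit_rwa(adj, used, ("A", "C")))   # → (['A', 'B', 'C'], 1)
    ```
    
    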

  11. Uncovering the community structure in signed social networks based on greedy optimization

    NASA Astrophysics Data System (ADS)

    Chen, Yan; Yan, Jiaqi; Yang, Yu; Chen, Junhua

    2017-05-01

    Signed relationships have recently been adopted to describe many complex systems. The relations among the entities in such systems are complicated and multifarious and cannot be captured by positive links alone, so signed networks have become increasingly common in the study of social networks, where community structure is significant. In this paper, we develop a new greedy algorithm to identify communities in signed networks, taking both the signs and the density of links into account. The core of the algorithm is the initialization procedure for the signed modularity and the corresponding update rules. In particular, we employ the “Asymmetric and Constrained Belief Evolution” procedure to estimate the optimal number of communities. The experimental results show that the algorithm performs well; in particular, it is efficient on medium-sized networks, both dense and sparse.
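    The signed-modularity objective that such an algorithm optimizes can be sketched by combining the modularities of the positive and negative layers (in the style of Gomez et al.); the greedy moves and the belief-evolution step of the paper are not reproduced, and the example graph is invented.

    ```python
    # Sketch of a signed-modularity objective: positive links are rewarded
    # inside communities, negative links penalised.

    def modularity(weights, partition):
        """Newman modularity for an undirected weight dict {(u, v): w}, w > 0."""
        m2 = 2 * sum(weights.values())
        if m2 == 0:
            return 0.0
        deg = {n: 0.0 for n in partition}
        for (u, v), w in weights.items():
            deg[u] += w
            deg[v] += w
        q = 0.0
        for u in partition:
            for v in partition:
                if partition[u] != partition[v]:
                    continue
                a = weights.get((u, v), 0.0) + weights.get((v, u), 0.0)
                q += a - deg[u] * deg[v] / m2
        return q / m2

    def signed_modularity(edges, partition):
        """Weighted combination of the positive- and negative-layer modularities."""
        pos = {e: w for e, w in edges.items() if w > 0}
        neg = {e: -w for e, w in edges.items() if w < 0}
        mp, mn = 2 * sum(pos.values()), 2 * sum(neg.values())
        return (mp * modularity(pos, partition)
                - mn * modularity(neg, partition)) / (mp + mn)

    # Two positive triangles joined by one negative edge: splitting along the
    # negative edge scores higher than merging everything.
    edges = {(1, 2): 1.0, (1, 3): 1.0, (2, 3): 1.0,
             (4, 5): 1.0, (4, 6): 1.0, (5, 6): 1.0, (3, 4): -1.0}
    split = {1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1}
    merged = {n: 0 for n in range(1, 7)}
    print(round(signed_modularity(edges, split), 2),
          round(signed_modularity(edges, merged), 2))   # → 0.5 0.0
    ```
    
    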

  12. Upper Washita River experimental watersheds: Data screening procedure for data quality assurance

    USDA-ARS?s Scientific Manuscript database

    The presence of non-stationary conditions in long-term hydrologic observation networks is associated with natural and anthropogenic stressors or with network operation problems. Detection and identification of network operation drivers is fundamental in hydrologic investigation due to changes in systemat...

  13. Risk analysis with a fuzzy-logic approach of a complex installation

    NASA Astrophysics Data System (ADS)

    Peikert, Tim; Garbe, Heyno; Potthast, Stefan

    2016-09-01

    This paper introduces a procedural method based on fuzzy logic to systematically analyze the risk of an electronic system in an intentional electromagnetic environment (IEME). The method analyzes the susceptibility of a complex electronic installation to intentional electromagnetic interference (IEMI). It combines the advantages of well-known techniques such as fault tree analysis (FTA), electromagnetic topology (EMT) and Bayesian networks (BN), and extends them with an approach to handling uncertainty. This approach uses fuzzy sets, membership functions and fuzzy logic to handle uncertainty with probability functions and linguistic terms. The linguistic terms add to the risk analysis the knowledge of experts on the investigated system or environment.

  14. The COMPTEL Processing and Analysis Software system (COMPASS)

    NASA Astrophysics Data System (ADS)

    de Vries, C. P.; COMPTEL Collaboration

    The data analysis system of the gamma-ray Compton Telescope (COMPTEL) onboard the Compton-GRO spacecraft is described. The instrument generates a continuous stream of data of the order of 1 kbyte per second. The data processing and analysis software is built around a relational database management system (RDBMS) so that the heritage and processing status of all data in the processing pipeline can be traced. Four institutes cooperate in this effort, requiring procedures to keep the local RDBMS contents identical between the sites and to exchange data swiftly over network facilities. Lately, there has been a gradual move of the system from central processing facilities towards clusters of workstations.

  15. EPA Library Network Communication Strategies

    EPA Pesticide Factsheets

    To establish Agency-wide procedures for the EPA National Library Network libraries to communicate, using a range of established mechanisms, with other EPA libraries, EPA staff, organizations and the public.

  16. Exploiting Outage and Error Probability of Cooperative Incremental Relaying in Underwater Wireless Sensor Networks

    PubMed Central

    Nasir, Hina; Javaid, Nadeem; Sher, Muhammad; Qasim, Umar; Khan, Zahoor Ali; Alrajeh, Nabil; Niaz, Iftikhar Azim

    2016-01-01

    This paper makes a two-fold contribution to Underwater Wireless Sensor Networks (UWSNs): a performance analysis of incremental relaying in terms of outage and error probability, and, based on that analysis, two new cooperative routing protocols. For the first contribution, a three-step procedure is carried out: a system model is presented, the number of available relays is determined, and, following the cooperative incremental retransmission methodology, closed-form expressions for outage and error probability are derived. For the second contribution, Adaptive Cooperation in Energy (ACE) efficient depth-based routing and Enhanced-ACE (E-ACE) are presented. In the proposed model, a feedback mechanism indicates the success or failure of data transmission. If direct transmission succeeds, there is no need for relaying by cooperative relay nodes; in case of failure, the available relays retransmit the data one by one until the desired signal quality is achieved at the destination. Simulation results show that ACE and E-ACE significantly improve network performance, i.e., throughput, when compared with other incremental relaying protocols such as Cooperative Automatic Repeat reQuest (CARQ). E-ACE and ACE achieve 69% and 63% more throughput, respectively, than CARQ in a harsh underwater environment. PMID:27420061

  17. Combining a dispersal model with network theory to assess habitat connectivity.

    PubMed

    Lookingbill, Todd R; Gardner, Robert H; Ferrari, Joseph R; Keller, Cherry E

    2010-03-01

    Assessing the potential for threatened species to persist and spread within fragmented landscapes requires the identification of core areas that can sustain resident populations and dispersal corridors that can link these core areas with isolated patches of remnant habitat. We developed a set of GIS tools, simulation methods, and network analysis procedures to assess potential landscape connectivity for the Delmarva fox squirrel (DFS; Sciurus niger cinereus), an endangered species inhabiting forested areas on the Delmarva Peninsula, USA. Information on the DFS's life history and dispersal characteristics, together with data on the composition and configuration of land cover on the peninsula, were used as input data for an individual-based model to simulate dispersal patterns of millions of squirrels. Simulation results were then assessed using methods from graph theory, which quantifies habitat attributes associated with local and global connectivity. Several bottlenecks to dispersal were identified that were not apparent from simple distance-based metrics, highlighting specific locations for landscape conservation, restoration, and/or squirrel translocations. Our approach links simulation models, network analysis, and available field data in an efficient and general manner, making these methods useful and appropriate for assessing the movement dynamics of threatened species within landscapes being altered by human and natural disturbances.
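    The graph-theoretic step can be sketched with a toy patch map: patches become nodes, an edge joins patches within dispersal range, and articulation points flag the bottleneck patches whose loss fragments the network. The coordinates and the 15-unit dispersal range below are illustrative assumptions, not DFS data.

    ```python
    import math

    # Illustrative habitat-connectivity graph: patches within dispersal range
    # are linked, and articulation points mark dispersal bottlenecks.

    def habitat_graph(patches, max_dispersal):
        """patches: {name: (x, y)}; link patches within dispersal distance."""
        names = list(patches)
        adj = {n: set() for n in names}
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                (x1, y1), (x2, y2) = patches[a], patches[b]
                if math.hypot(x2 - x1, y2 - y1) <= max_dispersal:
                    adj[a].add(b)
                    adj[b].add(a)
        return adj

    def articulation_points(adj):
        """Patches whose removal disconnects otherwise-linked habitat (Tarjan)."""
        disc, low, cut, timer = {}, {}, set(), [0]

        def dfs(u, parent):
            disc[u] = low[u] = timer[0]
            timer[0] += 1
            children = 0
            for v in adj[u]:
                if v == parent:
                    continue
                if v in disc:
                    low[u] = min(low[u], disc[v])
                else:
                    children += 1
                    dfs(v, u)
                    low[u] = min(low[u], low[v])
                    if parent is not None and low[v] >= disc[u]:
                        cut.add(u)
            if parent is None and children > 1:
                cut.add(u)

        for n in adj:
            if n not in disc:
                dfs(n, None)
        return cut

    # Four patches on a line, 10 units apart; range 15 links only neighbours.
    patches = {"A": (0, 0), "B": (10, 0), "C": (20, 0), "D": (30, 0)}
    print(sorted(articulation_points(habitat_graph(patches, 15))))  # → ['B', 'C']
    ```
    
    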

  18. Spatial Rule-Based Modeling: A Method and Its Application to the Human Mitotic Kinetochore

    PubMed Central

    Ibrahim, Bashar; Henze, Richard; Gruenert, Gerd; Egbert, Matthew; Huwald, Jan; Dittrich, Peter

    2013-01-01

    A common problem in the analysis of biological systems is the combinatorial explosion that emerges from the complexity of multi-protein assemblies. Conventional formalisms, like differential equations, Boolean networks and Bayesian networks, are unsuitable for dealing with the combinatorial explosion, because they are designed for a restricted state space with fixed dimensionality. To overcome this problem, the rule-based modeling language, BioNetGen, and the spatial extension, SRSim, have been developed. Here, we describe how to apply rule-based modeling to integrate experimental data from different sources into a single spatial simulation model and how to analyze the output of that model. The starting point for this approach can be a combination of molecular interaction data, reaction network data, proximities, binding and diffusion kinetics and molecular geometries at different levels of detail. We describe the technique and then use it to construct a model of the human mitotic inner and outer kinetochore, including the spindle assembly checkpoint signaling pathway. This allows us to demonstrate the utility of the procedure, show how a novel perspective for understanding such complex systems becomes accessible and elaborate on challenges that arise in the formulation, simulation and analysis of spatial rule-based models. PMID:24709796

  19. 42 CFR 422.202 - Participation procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... procedures. (a) Notice and appeal rights. An MA organization that operates a coordinated care plan or network... of groups of physicians, through reasonable procedures that include the following: (1) Written notice... guidelines. (c) Subcontracted groups. An MA organization that operates an MA plan through subcontracted...

  20. Secure and lightweight network admission and transmission protocol for body sensor networks.

    PubMed

    He, Daojing; Chen, Chun; Chan, Sammy; Bu, Jiajun; Zhang, Pingxin

    2013-05-01

    A body sensor network (BSN) is a wireless network of biosensors and a local processing unit, which is commonly referred to as the personal wireless hub (PWH). Personal health information (PHI) is collected by biosensors and delivered to the PWH before it is forwarded to the remote healthcare center for further processing. In a BSN, it is critical to only admit eligible biosensors and PWH into the network. Also, securing the transmission from each biosensor to PWH is essential not only for ensuring safety of PHI delivery, but also for preserving the privacy of PHI. In this paper, we present the design, implementation, and evaluation of a secure network admission and transmission subsystem based on a polynomial-based authentication scheme. The procedures in this subsystem to establish keys for each biosensor are communication efficient and energy efficient. Moreover, based on the observation that an adversary eavesdropping in a BSN faces inevitable channel errors, we propose to exploit the adversary's uncertainty regarding the PHI transmission to update the individual key dynamically and improve key secrecy. In addition to the theoretical analysis that demonstrates the security properties of our system, this paper also reports the experimental results of the proposed protocol on resource-limited sensor platforms, which show the efficiency of our system in practice.
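    The abstract does not give the details of its polynomial-based scheme; the sketch below illustrates the classic symmetric bivariate polynomial idea (in the style of Blundo et al.) on which such key-establishment protocols are commonly built, with toy parameters: each biosensor stores a univariate share of a secret symmetric polynomial, and any two nodes derive the same pairwise key without exchanging secret material.

```python
# Sketch of symmetric-polynomial key establishment (toy parameters).
# A trusted setup picks a symmetric bivariate polynomial f(x, y) mod P;
# node i receives the share g_i(y) = f(i, y).  Nodes i and j then both
# compute f(i, j) = f(j, i) as a pairwise key with no further messages.

P = 2_147_483_647  # toy prime modulus; real deployments use larger fields

# f(x, y) = sum over a, b of C[a][b] * x^a * y^b, with C symmetric
C = [[17, 23, 5],
     [23, 42, 7],
     [ 5,  7, 11]]

def share(node_id):
    """Coefficients of g_i(y) = f(node_id, y) mod P, stored by node i."""
    return [sum(C[a][b] * pow(node_id, a, P) for a in range(3)) % P
            for b in range(3)]

def pairwise_key(my_share, peer_id):
    """Evaluate the stored share at the peer's public identifier."""
    return sum(c * pow(peer_id, b, P) for b, c in enumerate(my_share)) % P

sensor, hub = 1001, 2002
k1 = pairwise_key(share(sensor), hub)
k2 = pairwise_key(share(hub), sensor)
print(k1 == k2)  # → True: both sides derive the same key
```

Because C is symmetric, f(i, j) = f(j, i), so the biosensor and the PWH agree on a key non-interactively; identifiers and coefficients here are invented for illustration.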

  1. Finding Statistically Significant Communities in Networks

    PubMed Central

    Lancichinetti, Andrea; Radicchi, Filippo; Ramasco, José J.; Fortunato, Santo

    2011-01-01

    Community structure is one of the main structural features of networks, revealing both their internal organization and the similarity of their elementary units. Despite the large variety of methods proposed to detect communities in graphs, there is still a great need for multi-purpose techniques able to handle different types of datasets and the subtleties of community structure. In this paper we present OSLOM (Order Statistics Local Optimization Method), the first method capable of detecting clusters in networks while accounting for edge directions, edge weights, overlapping communities, hierarchies and community dynamics. It is based on the local optimization of a fitness function expressing the statistical significance of clusters with respect to random fluctuations, which is estimated with tools of Extreme and Order Statistics. OSLOM can be used alone or as a refinement procedure for partitions/covers delivered by other techniques. We have also implemented sequential algorithms combining OSLOM with other fast techniques, so that the community structure of very large networks can be uncovered. Our method performs comparably to the best existing algorithms on artificial benchmark graphs. Several applications on real networks are shown as well. OSLOM is implemented in freely available software (http://www.oslom.org), and we believe it will be a valuable tool in the analysis of networks. PMID:21559480
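    OSLOM's actual estimator relies on Extreme and Order Statistics; as a much simpler, hedged illustration of the underlying idea, the sketch below compares a candidate group's internal edge count with its expectation under a degree-preserving random null model.

```python
# Sketch of the idea behind cluster significance (not the OSLOM estimator):
# compare a group's internal edge count with its expectation under a
# degree-preserving null model, p_ij ≈ k_i * k_j / (2m).

from itertools import combinations

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1
m = len(edges)

def internal_excess(group):
    """Observed minus expected internal edges for a candidate community."""
    observed = sum(1 for u, v in edges if u in group and v in group)
    expected = sum(deg[u] * deg[v] / (2 * m)
                   for u, v in combinations(sorted(group), 2))
    return observed - expected

print(internal_excess({0, 1, 2}))   # positive → denser than chance
print(internal_excess({1, 3}))      # negative → not community-like
```

A real significance test would convert such an excess into a p-value against the null distribution, which is where the order-statistics machinery of OSLOM comes in.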

  2. BrainMap VBM: An environment for structural meta-analysis.

    PubMed

    Vanasse, Thomas J; Fox, P Mickle; Barron, Daniel S; Robertson, Michaela; Eickhoff, Simon B; Lancaster, Jack L; Fox, Peter T

    2018-05-02

    The BrainMap database is a community resource that curates peer-reviewed, coordinate-based human neuroimaging literature. By pairing the results of neuroimaging studies with their relevant meta-data, BrainMap facilitates coordinate-based meta-analysis (CBMA) of the neuroimaging literature en masse or at the level of experimental paradigm, clinical disease, or anatomic location. Initially dedicated to the functional, task-activation literature, BrainMap is now expanding to include voxel-based morphometry (VBM) studies in a separate sector, titled BrainMap VBM. VBM is a whole-brain, voxel-wise method that measures significant structural differences between or within groups, which are reported as standardized peak x-y-z coordinates. Here we describe BrainMap VBM, including the meta-data structure, current data volume, and automated reverse inference functions (region-to-disease profile) of this new community resource. CBMA offers a robust methodology for retaining true-positive and excluding false-positive findings across studies in the VBM literature. As with BrainMap's functional database, BrainMap VBM may be synthesized en masse or at the level of clinical disease or anatomic location. As a use-case scenario for BrainMap VBM, we illustrate a trans-diagnostic data-mining procedure wherein we explore the underlying network structure of 2,002 experiments representing over 53,000 subjects through independent components analysis (ICA). To reduce data-redundancy effects inherent to any database, we demonstrate two data-filtering approaches that proved helpful to ICA. Finally, we apply hierarchical clustering analysis (HCA) to measure network- and disease-specificity. This procedure distinguished psychiatric from neurological diseases. We invite the neuroscientific community to further exploit BrainMap VBM with other modeling approaches. © 2018 Wiley Periodicals, Inc.

  3. Avoiding the Enumeration of Infeasible Elementary Flux Modes by Including Transcriptional Regulatory Rules in the Enumeration Process Saves Computational Costs

    PubMed Central

    Jungreuthmayer, Christian; Ruckerbauer, David E.; Gerstl, Matthias P.; Hanscho, Michael; Zanghellini, Jürgen

    2015-01-01

    Despite the significant progress made in recent years, the computation of the complete set of elementary flux modes of large or even genome-scale metabolic networks is still impossible. We introduce a novel approach to speed up the calculation of elementary flux modes by including transcriptional regulatory information in the analysis of metabolic networks. Taking gene regulation into account dramatically reduces the solution space and allows the presented algorithm to eliminate biologically infeasible modes at an early stage of the computation procedure. Thereby, computational costs, such as runtime, memory usage, and disk space, are greatly reduced. Moreover, we show that the application of transcriptional rules identifies non-trivial system-wide effects on metabolism. Using the presented algorithm pushes the size of metabolic networks that can be studied by elementary flux modes to new and much higher limits without loss of predictive quality. This makes unbiased, system-wide predictions in large-scale metabolic networks possible without resorting to any optimization principle. PMID:26091045
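    The early elimination of infeasible modes can be illustrated with a toy sketch (invented reactions and rules, not the authors' algorithm): transcriptional rules are encoded as Boolean constraints and applied to candidate reaction sets before any expensive computation is spent on them.

```python
# Sketch of early rule-based pruning (toy model, not the authors' algorithm):
# a transcriptional rule such as "if R1 carries flux then R3 cannot" lets the
# enumeration discard candidate modes before they are fully expanded.

rules = [
    lambda active: not ("R1" in active and "R3" in active),  # R1 represses R3
    lambda active: "R2" in active or "R4" not in active,     # R4 requires R2
]

def feasible(candidate):
    """A candidate set of active reactions survives only if every rule holds."""
    return all(rule(candidate) for rule in rules)

candidates = [
    {"R1", "R2"},
    {"R1", "R3"},        # violates the repression rule
    {"R2", "R4"},
    {"R4"},              # violates the requirement rule
]
kept = [c for c in candidates if feasible(c)]
print(kept)  # only the two biologically feasible candidates remain
```

Applying such checks to partial candidates during enumeration, rather than filtering the full result afterwards, is what cuts runtime, memory, and disk usage.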

  4. A Wireless Sensor Network-Based Portable Vehicle Detector Evaluation System

    PubMed Central

    Yoo, Seong-eun

    2013-01-01

    In an upcoming smart transportation environment, performance evaluations of existing Vehicle Detection Systems are crucial to maintaining their accuracy. The existing evaluation method for Vehicle Detection Systems is based on a wired Vehicle Detection System reference and a video recorder, which must be operated and analyzed by capable traffic experts. However, this conventional evaluation system has many disadvantages: it is inconvenient to deploy, the evaluation takes a long time, and it lacks scalability and objectivity. To improve the evaluation procedure, this paper proposes a Portable Vehicle Detector Evaluation System based on wireless sensor networks. We describe both the architecture and design of a Vehicle Detector Evaluation System and the implementation results, focusing on the wireless sensor networks and methods for traffic information measurement. With the help of wireless sensor networks and automated analysis, our Vehicle Detector Evaluation System can evaluate a Vehicle Detection System conveniently and objectively. The extensive evaluations of our Vehicle Detector Evaluation System show that it can measure traffic information such as volume counts and speed with over 98% accuracy. PMID:23344388

  5. Male-to-Female Transgender Individuals Building Social Support and Capital From Within a Gender-Focused Network

    PubMed Central

    Pinto, Rogério M.; Melendez, Rita M.; Spector, Anya Y.

    2009-01-01

    The literature on male-to-female transgender (MTF) individuals lists myriad problems such individuals face in their day-to-day lives, including high rates of HIV/AIDS, addiction to drugs, violence, and lack of health care. These problems are exacerbated for ethnic and racial minority MTFs. Support available from their social networks can help MTFs alleviate these problems. This article explores how minority MTFs, specifically in an urban environment, develop supportive social networks defined by their gender and sexual identities. Using principles of community-based participatory research (CBPR), 20 African American and Latina MTFs were recruited at a community-based health care clinic. Their ages ranged from 18 to 53. Data were coded and analyzed following standard procedure for content analysis. The qualitative interviews revealed that participants formed their gender and sexual identities over time, developed gender-focused social networks based in the clinic from which they receive services, and engaged in social capital building and political action. Implications for using CBPR in research with MTFs are discussed. PMID:20418965

  6. Male-to-Female Transgender Individuals Building Social Support and Capital From Within a Gender-Focused Network.

    PubMed

    Pinto, Rogério M; Melendez, Rita M; Spector, Anya Y

    2008-09-01

    The literature on male-to-female transgender (MTF) individuals lists myriad problems such individuals face in their day-to-day lives, including high rates of HIV/AIDS, addiction to drugs, violence, and lack of health care. These problems are exacerbated for ethnic and racial minority MTFs. Support available from their social networks can help MTFs alleviate these problems. This article explores how minority MTFs, specifically in an urban environment, develop supportive social networks defined by their gender and sexual identities. Using principles of community-based participatory research (CBPR), 20 African American and Latina MTFs were recruited at a community-based health care clinic. Their ages ranged from 18 to 53. Data were coded and analyzed following standard procedure for content analysis. The qualitative interviews revealed that participants formed their gender and sexual identities over time, developed gender-focused social networks based in the clinic from which they receive services, and engaged in social capital building and political action. Implications for using CBPR in research with MTFs are discussed.

  7. A wireless sensor network-based portable vehicle detector evaluation system.

    PubMed

    Yoo, Seong-eun

    2013-01-17

    In an upcoming smart transportation environment, performance evaluations of existing Vehicle Detection Systems are crucial to maintaining their accuracy. The existing evaluation method for Vehicle Detection Systems is based on a wired Vehicle Detection System reference and a video recorder, which must be operated and analyzed by capable traffic experts. However, this conventional evaluation system has many disadvantages: it is inconvenient to deploy, the evaluation takes a long time, and it lacks scalability and objectivity. To improve the evaluation procedure, this paper proposes a Portable Vehicle Detector Evaluation System based on wireless sensor networks. We describe both the architecture and design of a Vehicle Detector Evaluation System and the implementation results, focusing on the wireless sensor networks and methods for traffic information measurement. With the help of wireless sensor networks and automated analysis, our Vehicle Detector Evaluation System can evaluate a Vehicle Detection System conveniently and objectively. The extensive evaluations of our Vehicle Detector Evaluation System show that it can measure traffic information such as volume counts and speed with over 98% accuracy.

  8. Evaluation plan for space station network interface units

    NASA Technical Reports Server (NTRS)

    Weaver, Alfred C.

    1990-01-01

    Outlined here is a procedure for evaluating network interface units (NIUs) produced for the Space Station program. The procedures should be equally applicable to the data management system (DMS) testbed NIUs produced by Honeywell and IBM. The evaluation procedures are divided into four areas. Performance measurement tools are hardware and software that must be developed in order to evaluate NIU performance. Performance tests are a series of tests, each of which documents some specific characteristic of NIU and/or network performance. In general, these performance tests quantify the speed, capacity, latency, and reliability of message transmission under a wide variety of conditions. Functionality tests are a series of tests and code inspections that demonstrate the functionality of the particular subset of ISO protocols which have been implemented in a given NIU. Conformance tests are a series of tests which would expose whether or not selected features within the ISO protocols are present and interoperable.

  9. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule-based modeling systems, and a method for converting a procedural model to a rule-based model are described. Simulation models are used to represent real-time engineering systems. A real-time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language; therefore, they must be enhanced with a reaction capability. Rule-based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule-based system can be generated by a knowledge acquisition tool or a source-level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule-based system. Neural models can provide the high-capacity data manipulation required by the most complex real-time models.

  10. An approach to the rationalization of streamflow data collection networks

    NASA Astrophysics Data System (ADS)

    Burn, Donald H.; Goulter, Ian C.

    1991-01-01

    A new procedure for rationalizing a streamflow data collection network is developed. The procedure is a two-phase approach in which in the first phase, a hierarchical clustering technique is used to identify groups of similar gauging stations. In the second phase, a single station from each identified group of gauging stations is selected to be retained in the rationalized network. The station selection phase is an inherently heuristic process that incorporates information about the characteristics of the individual stations in the network. The methodology allows the direct inclusion of user judgement into the station selection process in that it is possible to select more than one station from a group, if conditions warrant. The technique is demonstrated using streamflow gauging stations in and near the Pembina River basin, southern Manitoba, Canada.
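    A minimal sketch of the two-phase idea, with invented station attributes rather than the Pembina data: phase one groups similar stations by single-linkage hierarchical clustering on hydrologic attributes, and phase two retains one representative per group, here simply the station with the longest record (a real application would weigh several station characteristics and user judgement).

```python
# Two-phase sketch (invented station attributes, not the Pembina data):
# phase 1 groups similar stations by single-linkage clustering; phase 2
# keeps one representative per group, here the longest-record station.

# (mean_flow, basin_area, record_years) per gauging station
stations = {
    "S1": (10.0, 120.0, 30), "S2": (11.0, 125.0, 12),
    "S3": (55.0, 600.0, 25), "S4": (53.0, 590.0, 40),
    "S5": (54.0, 610.0, 8),
}

def dist(a, b):
    # similarity uses hydrologic attributes only, not record length
    return sum((x - y) ** 2 for x, y in zip(a[:2], b[:2])) ** 0.5

def single_linkage(names, n_groups):
    """Repeatedly merge the two closest clusters until n_groups remain."""
    clusters = [{n} for n in names]
    while len(clusters) > n_groups:
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: min(dist(stations[a], stations[b])
                               for a in clusters[ij[0]]
                               for b in clusters[ij[1]]))
        clusters[i] |= clusters.pop(j)
    return clusters

groups = single_linkage(list(stations), 2)
retained = sorted(max(g, key=lambda n: stations[n][2]) for g in groups)
print(retained)  # → ['S1', 'S4']: one well-established station kept per group
```

As in the paper's heuristic phase, nothing prevents keeping more than one station from a group when conditions warrant; the selection rule here is deliberately simple.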

  11. Performance verification of network function virtualization in software defined optical transport networks

    NASA Astrophysics Data System (ADS)

    Zhao, Yongli; Hu, Liyazhou; Wang, Wei; Li, Yajie; Zhang, Jie

    2017-01-01

    With the continuous opening of resource acquisition and application, a large variety of network hardware appliances are deployed as communication infrastructure. Launching a new network application often implies replacing obsolete devices and providing the space and power to accommodate new ones, which increases energy and capital investment. Network function virtualization (NFV) aims to address these problems by consolidating many types of network equipment onto industry-standard elements such as servers, switches, and storage. Many types of IT resources have been deployed to run Virtual Network Functions (vNFs), such as virtual switches and routers. How to deploy NFV in optical transport networks is therefore a problem of great importance. This paper focuses on this problem and gives an implementation architecture for NFV-enabled optical transport networks based on Software Defined Optical Networking (SDON), with the procedure of vNF call and return. In particular, an implementation solution for an NFV-enabled optical transport node is designed, and a parallel processing method for NFV-enabled OTN nodes is proposed. To verify the performance of NFV-enabled SDON, the protocol interaction procedures of control function virtualization and node function virtualization are demonstrated on an SDON testbed. Finally, the benefits and challenges of the parallel processing method for NFV-enabled OTN nodes are simulated and analyzed.

  12. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule.

    PubMed

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin

    2015-11-01

    In this paper, the generation of the multi-clustered structure of a self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking neural network or bursting neural network can be self-organized into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure is much shorter for the burst-based self-organized neural network (BSON) than for the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., a higher clustering coefficient and a smaller shortest path length than the SSON network. Also, the larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which benefits information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure, and the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network self-organized from bursting dynamics has high efficiency in information processing.
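    The two small-world measures cited above, clustering coefficient and average shortest path length, can be computed directly; the sketch below does so on a small toy graph (not the paper's networks).

```python
# Sketch: the two small-world measures cited above, on a toy undirected graph.
from collections import deque

adj = {0: {1, 2, 4}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {0, 3}}

def clustering_coefficient(adj):
    """Average fraction of each node's neighbour pairs that are linked."""
    total = 0.0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def average_path_length(adj):
    """Mean BFS distance over all node pairs (connected graph assumed)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

print(clustering_coefficient(adj), average_path_length(adj))  # ≈ 0.33 and 1.4
```

A "more small-world" network, like the BSON described above, would show a higher clustering coefficient together with a shorter average path length than a comparison network of the same size.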

  13. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng

    In this paper, the generation of the multi-clustered structure of a self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking neural network or bursting neural network can be self-organized into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure is much shorter for the burst-based self-organized neural network (BSON) than for the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., a higher clustering coefficient and a smaller shortest path length than the SSON network. Also, the larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which benefits information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure, and the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network self-organized from bursting dynamics has high efficiency in information processing.

  14. RNA transcriptional biosignature analysis for identifying febrile infants with serious bacterial infections in the emergency department: a feasibility study.

    PubMed

    Mahajan, Prashant; Kuppermann, Nathan; Suarez, Nicolas; Mejias, Asuncion; Casper, Charlie; Dean, J Michael; Ramilo, Octavio

    2015-01-01

    To develop the infrastructure and demonstrate the feasibility of conducting microarray-based RNA transcriptional profile analyses for the diagnosis of serious bacterial infections in febrile infants 60 days and younger in a multicenter pediatric emergency research network. We designed a prospective multicenter cohort study with the aim of enrolling more than 4000 febrile infants 60 days and younger. To ensure the success of conducting complex genomic studies in emergency department (ED) settings, we established an infrastructure within the Pediatric Emergency Care Applied Research Network, including 21 sites, to evaluate RNA transcriptional profiles in young febrile infants. We developed a comprehensive manual of operations and trained site investigators to obtain and process blood samples for RNA extraction and genomic analyses. We created standard operating procedures for blood sample collection, processing, storage, shipping, and analyses. We planned to prospectively identify, enroll, and collect 1 mL blood samples for genomic analyses from eligible patients to identify logistical issues with study procedures. Finally, we planned to batch blood samples, determine RNA quantity and quality at the central microarray laboratory, and organize data analysis with the Pediatric Emergency Care Applied Research Network data coordinating center. Below we report on the establishment of the infrastructure and on first-year feasibility, based on the enrollment of a limited number of patients. We successfully established the infrastructure at 21 EDs. Over the first 5 months we enrolled 79% (74 of 94) of eligible febrile infants. We were able to obtain and ship 1 mL of blood from 74% (55 of 74) of enrolled participants, with at least 1 sample per participating ED. The 55 samples were shipped to and evaluated at the microarray laboratory, and 95% (52 of 55) of blood samples were of adequate quality and contained sufficient RNA for expression analysis.
It is possible to create a robust infrastructure to conduct genomic studies in young febrile infants in the context of a multicenter pediatric ED research setting. The sufficient quantity and high quality of RNA obtained suggests that whole blood transcriptional profile analysis for the diagnostic evaluation of young febrile infants can be successfully performed in this setting.

  15. 47 CFR 68.7 - Technical criteria for terminal equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (CONTINUED) CONNECTION OF TERMINAL EQUIPMENT TO THE TELEPHONE NETWORK General § 68.7 Technical criteria for... switched telephone network. (b) Technical criteria published by the Administrative Council for Terminal... network from harms caused by the connection of terminal equipment, subject to the appeal procedures in...

  16. 47 CFR 68.7 - Technical criteria for terminal equipment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... (CONTINUED) CONNECTION OF TERMINAL EQUIPMENT TO THE TELEPHONE NETWORK General § 68.7 Technical criteria for... switched telephone network. (b) Technical criteria published by the Administrative Council for Terminal... network from harms caused by the connection of terminal equipment, subject to the appeal procedures in...

  17. ReNE: A Cytoscape Plugin for Regulatory Network Enhancement

    PubMed Central

    Politano, Gianfranco; Benso, Alfredo; Savino, Alessandro; Di Carlo, Stefano

    2014-01-01

    One of the biggest challenges in the study of biological regulatory mechanisms is the integration, modeling, and analysis of the complex interactions which take place in biological networks. Although post-transcriptional regulatory elements (e.g., miRNAs) are widely investigated in current research, their use and visualization in biological networks are very limited. Regulatory networks are commonly limited to gene entities. To integrate networks with post-transcriptional regulatory data, researchers are therefore forced to resort manually to specific third-party databases. In this context, we introduce ReNE, a Cytoscape 3.x plugin designed to automatically enrich a standard gene-based regulatory network with more detailed transcriptional, post-transcriptional, and translational data, resulting in an enhanced network that more precisely models the actual biological regulatory mechanisms. ReNE can automatically import a network layout from the Reactome or KEGG repositories, or work with custom pathways described using a standard OWL/XML data format that the Cytoscape import procedure accepts. Moreover, ReNE allows researchers to merge multiple pathways coming from different sources. The merged network structure is normalized to guarantee a consistent and uniform description of the network nodes and edges and to enrich all integrated data with additional annotations retrieved from genome-wide databases like NCBI, thus producing a pathway fully manageable through the Cytoscape environment. The normalized network is then analyzed to include missing transcription factors, miRNAs, and proteins. The resulting enhanced network is still a fully functional Cytoscape network where each regulatory element (transcription factor, miRNA, gene, protein) and regulatory mechanism (up-regulation/down-regulation) is clearly visually identifiable, thus enabling a better visual understanding of its role and effect in the network behavior.
    The enhanced network produced by ReNE is exportable in multiple formats for further analysis via third-party applications. ReNE can be freely installed from the Cytoscape App Store (http://apps.cytoscape.org/apps/rene) and the full source code is freely available for download through an SVN repository accessible at http://www.sysbio.polito.it/tools_svn/BioInformatics/Rene/releases/. ReNE enhances a network by only integrating data from public repositories, without any inference or prediction. The reliability of the introduced interactions therefore depends only on the reliability of the source data, which is beyond the control of the ReNE developers. PMID:25541727

  18. Automatic recognition of holistic functional brain networks using iteratively optimized convolutional neural networks (IO-CNN) with weak label initialization.

    PubMed

    Zhao, Yu; Ge, Fangfei; Liu, Tianming

    2018-07-01

    fMRI data decomposition techniques have advanced significantly from shallow models such as Independent Component Analysis (ICA) and Sparse Coding and Dictionary Learning (SCDL) to deep learning models such as Deep Belief Networks (DBN) and Deep Convolutional Autoencoders (DCAE). However, interpretation of the decomposed networks remains an open question due to the lack of functional brain atlases, the lack of correspondence among decomposed or reconstructed networks across different subjects, and significant individual variability. Recent studies showed that deep learning, especially deep convolutional neural networks (CNN), has an extraordinary ability to accommodate spatial object patterns; e.g., our recent work using 3D CNNs for fMRI-derived network classification achieved high accuracy with a remarkable tolerance for mislabelled training brain networks. However, training data preparation is one of the biggest obstacles for these supervised deep learning models for functional brain network map recognition, since manual labelling requires tedious and time-consuming labour and sometimes even introduces label mistakes. Especially for mapping functional networks in large-scale datasets, such as the hundreds of thousands of brain networks used in this paper, manual labelling becomes almost infeasible. In response, in this work we tackled both the network recognition and training data labelling tasks by proposing a new iteratively optimized deep learning CNN (IO-CNN) framework with automatic weak label initialization, which turns functional brain network recognition into a fully automatic large-scale classification procedure. Our extensive experiments based on ABIDE-II 1099 brains' fMRI data showed the great promise of our IO-CNN framework. Copyright © 2018 Elsevier B.V. All rights reserved.
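    The iterative weak-label idea can be illustrated with a minimal analogue in which a nearest-centroid classifier stands in for the CNN (invented 2-D data, not the fMRI networks): noisy initial labels are repeatedly replaced by the current model's own predictions until they stabilize.

```python
# Minimal analogue of iterative pseudo-label refinement (a nearest-centroid
# classifier standing in for the CNN; data and weak labels are invented).

points = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),      # cluster A
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.2)]      # cluster B
labels = [0, 0, 1, 1, 1, 1]   # weak initial labels: one point is mislabeled

def centroid(pts):
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def iterate(points, labels, rounds=5):
    """Alternate fitting class centroids and relabeling every point."""
    for _ in range(rounds):
        cents = {k: centroid([p for p, l in zip(points, labels) if l == k])
                 for k in set(labels)}
        labels = [min(cents,
                      key=lambda k: sum((a - b) ** 2
                                        for a, b in zip(p, cents[k])))
                  for p in points]
    return labels

print(iterate(points, labels))  # → [0, 0, 0, 1, 1, 1]: the weak label is corrected
```

The IO-CNN framework applies the same alternation at scale, retraining a 3D CNN on its own refined labels instead of recomputing centroids.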

  19. 47 CFR 68.418 - Procedure; designation of agents for service.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Procedure; designation of agents for service. 68.418 Section 68.418 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) CONNECTION OF TERMINAL EQUIPMENT TO THE TELEPHONE NETWORK Complaint Procedures § 68...

  20. The GGOS Bureau of Networks and Observations: an update on the Space Geodesy Network and the New Implementation Plan for 2017 -18

    NASA Astrophysics Data System (ADS)

    Pearlman, Michael R.; Ma, Chopo; Neilan, Ruth; Noll, Carey; Pavlis, Erricos; Saunier, Jérôme; Schoene, Tilo; Barzaghi, Riccardo; Thaller, Daniela; Bergstrand, Sten; Mueller, Juergen

    2017-04-01

    Working with the IAG geometric services (VLBI, SLR, GNSS, and DORIS), the Bureau continues to advocate for the expansion and upgrade of the space geodesy networks for the maintenance and improvement of the reference frame and other applications, and for their extension and integration with other techniques. New sites are being established following the GGOS concept of "core" and co-location sites; new technologies are being implemented to enhance performance in data yield as well as accuracy. In particular, several groups are undertaking initiatives and seeking partnerships to update existing sites and expand the networks in geographic areas devoid of coverage. The Bureau continues to meet with organizations to discuss possibilities for new and expanded participation and to promote the concept of partnerships. The Bureau provides the opportunity for representatives from the services to meet, share progress and plans, and discuss issues of common interest. The Bureau monitors the status and projects the evolution of the network based on information from current and expected future participants. Of particular interest at the moment is the integration of gravity and tide gauge networks. The Committees and Joint Working Groups play an essential role in the Bureau's activity. The Standing Committee on Performance Simulations and Architectural Trade-off (PLATO) uses simulation and analysis techniques to project future network capability and to examine trade-off options. The Committee on Data and Information is working on a strategy for a GGOS metadata system, with a near-term plan for data products and a more comprehensive longer-term plan for an all-inclusive system. The Committee on Satellite Missions is working to enhance communication with the space missions, to advocate for missions that support GGOS goals, and to enhance ground system support.
The IERS Working Group on Site Survey and Co-location (which also participates in the Bureau) is working to standardize procedures, to conduct outreach and encourage new survey groups to participate, and to improve procedures for determining system reference points. The 2017-2018 Implementation Plan for the GGOS Bureau of Networks and Observations has been posted on the GGOS website. We will outline progress over the past two years and discuss the status of the network and the updated plan.

  1. Abnormal metabolic brain networks in Parkinson's disease: from blackboard to bedside.

    PubMed

    Tang, Chris C; Eidelberg, David

    2010-01-01

    Metabolic imaging in the rest state has provided valuable information concerning the abnormalities of regional brain function that underlie idiopathic Parkinson's disease (PD). Moreover, network modeling procedures, such as spatial covariance analysis, have further allowed for the quantification of these changes at the systems level. In recent years, we have utilized this strategy to identify and validate three discrete metabolic networks in PD associated with the motor and cognitive manifestations of the disease. In this chapter, we will review and compare the specific functional topographies underlying parkinsonian akinesia/rigidity, tremor, and cognitive disturbance. While network activity progressed over time, the rate of change for each pattern was distinctive and paralleled the development of the corresponding clinical symptoms in early-stage patients. This approach is already showing great promise in identifying individuals with prodromal manifestations of PD and in assessing the rate of progression before clinical onset. Network modulation was found to correlate with the clinical effects of dopaminergic treatment and surgical interventions, such as subthalamic nucleus (STN) deep brain stimulation (DBS) and gene therapy. Abnormal metabolic networks have also been identified for atypical parkinsonian syndromes, such as multiple system atrophy (MSA) and progressive supranuclear palsy (PSP). Using multiple disease-related networks for PD, MSA, and PSP, we have developed a novel, fully automated algorithm for accurate classification at the single-patient level, even at early disease stages. Copyright © 2010 Elsevier B.V. All rights reserved.

  2. Affinity purification–mass spectrometry and network analysis to understand protein-protein interactions

    PubMed Central

    Morris, John H; Knudsen, Giselle M; Verschueren, Erik; Johnson, Jeffrey R; Cimermancic, Peter; Greninger, Alexander L; Pico, Alexander R

    2015-01-01

    By determining protein-protein interactions in normal, diseased and infected cells, we can improve our understanding of cellular systems and their reaction to various perturbations. In this protocol, we discuss how to use data obtained in affinity purification–mass spectrometry (AP-MS) experiments to generate meaningful interaction networks and effective figures. We begin with an overview of common epitope tagging, expression and AP practices, followed by liquid chromatography–MS (LC-MS) data collection. We then provide a detailed procedure covering a pipeline approach to (i) pre-processing the data by filtering against contaminant lists such as the Contaminant Repository for Affinity Purification (CRAPome) and normalization using the spectral index (SIN) or normalized spectral abundance factor (NSAF); (ii) scoring via methods such as MiST, SAINT and CompPASS; and (iii) testing the resulting scores. Data formats familiar to MS practitioners are then transformed to those most useful for network-based analyses. The protocol also explores methods available in Cytoscape to visualize and analyze these types of interaction data. The scoring pipeline can take anywhere from 1 d to 1 week, depending on one’s familiarity with the tools and data peculiarities. Similarly, the network analysis and visualization protocol in Cytoscape takes 2–4 h to complete with the provided sample data, but we recommend taking days or even weeks to explore one’s data and find the right questions. PMID:25275790
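    As a concrete illustration of one normalization option named above, the normalized spectral abundance factor (NSAF) divides each protein's spectral count by its length and rescales so that the values sum to one within a run. A minimal sketch with made-up counts and lengths (not the protocol's own code):

```python
def nsaf(spectral_counts, lengths):
    """Normalized Spectral Abundance Factor.

    SAF_i = SpC_i / L_i (length-normalized spectral count);
    NSAF_i = SAF_i / sum_j SAF_j, so the values sum to 1 within a run.
    """
    saf = [c / l for c, l in zip(spectral_counts, lengths)]
    total = sum(saf)
    return [s / total for s in saf]

# Hypothetical spectral counts and protein lengths for three proteins.
values = nsaf([10, 40, 50], [100, 200, 500])
# SAF = [0.1, 0.2, 0.1], so NSAF ≈ [0.25, 0.5, 0.25]
```

Longer proteins generate more peptides per copy, which is exactly the bias the length division removes before the per-run rescaling.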

  3. On Learning Cluster Coefficient of Private Networks

    PubMed Central

    Wang, Yue; Wu, Xintao; Zhu, Jun; Xiang, Yang

    2013-01-01

    Enabling accurate analysis of social network data while preserving differential privacy has been challenging, since graph features such as the clustering coefficient or modularity often have high sensitivity, unlike traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we treat a graph statistic as a function f and develop a divide-and-conquer approach to enforce differential privacy. The basic procedure of this approach is to first decompose the target computation f into several less complex unit computations f1, …, fm connected by basic mathematical operations (e.g., addition, subtraction, multiplication, division), then perturb the output of each fi with Laplace noise derived from its own sensitivity value and the distributed privacy threshold εi, and finally combine those perturbed fi as the perturbed output of computation f. We examine how various operations affect the accuracy of complex computations. When unit computations have large global sensitivity values, we enforce differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We illustrate our approach using the clustering coefficient, a popular statistic in social network analysis. Empirical evaluations on five real social networks and various synthetic graphs generated from three random graph models show that the developed divide-and-conquer approach outperforms the direct approach. PMID:24429843
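    The divide-and-conquer recipe can be sketched in a few lines: each unit computation is perturbed with Laplace noise calibrated to its own sensitivity and share of the privacy budget, and the noisy outputs are then combined by the connecting operation (here division, as in a clustering-coefficient-style ratio). The counts, sensitivities, and budgets below are illustrative, not taken from the paper:

```python
import numpy as np

def perturb(value, sensitivity, epsilon, rng):
    """Laplace mechanism: epsilon-DP release of one unit computation."""
    return value + rng.laplace(0.0, sensitivity / epsilon)

def private_ratio(numerator, denominator, s_num, s_den, eps_num, eps_den, rng):
    """Perturb each unit computation separately, then combine by division.
    By sequential composition the result is (eps_num + eps_den)-DP."""
    noisy_num = perturb(numerator, s_num, eps_num, rng)
    noisy_den = perturb(denominator, s_den, eps_den, rng)
    return noisy_num / max(noisy_den, 1e-9)  # guard against a tiny denominator

rng = np.random.default_rng(0)
# Illustrative values for a ratio such as 3 * (triangle count) / (wedge count).
estimate = private_ratio(numerator=300.0, denominator=1200.0,
                         s_num=3.0, s_den=2.0, eps_num=0.5, eps_den=0.5, rng=rng)
```

Perturbing the numerator and denominator separately, rather than the ratio itself, is what lets each unit's noise be calibrated to its own (possibly much smaller) sensitivity.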

  4. Representing Operations Procedures Using Temporal Dependency Algorithms

    NASA Technical Reports Server (NTRS)

    Fayyad, K.; Cooper, L.

    1992-01-01

    The research presented in this paper investigates new ways of specifying operations procedures that incorporate the insight of operations, engineering, and science personnel to improve mission operations. The paper describes the rationale for using Temporal Dependency Networks to represent the procedures, how the data are acquired, and the knowledge engineering effort required to represent operations procedures.

  5. A Gap-Filling Procedure for Hydrologic Data Based on Kalman Filtering and Expectation Maximization: Application to Data from the Wireless Sensor Networks of the Sierra Nevada

    NASA Astrophysics Data System (ADS)

    Coogan, A.; Avanzi, F.; Akella, R.; Conklin, M. H.; Bales, R. C.; Glaser, S. D.

    2017-12-01

    Automatic meteorological and snow stations provide large amounts of information at dense temporal resolution, but data quality is often compromised by noise and missing values. We present a new gap-filling and cleaning procedure for networks of these stations based on Kalman filtering and expectation maximization. Our method utilizes a multi-sensor, regime-switching Kalman filter to learn a latent process that captures dependencies between nearby stations and handles sharp changes in snowfall rate. Since the latent process is inferred using observations across working stations in the network, it can be used to fill in large data gaps for a malfunctioning station. The procedure was tested on meteorological and snow data from Wireless Sensor Networks (WSN) in the American River basin of the Sierra Nevada. Data include air temperature, relative humidity, and snow depth from dense networks of 10 to 12 stations within 1 km² swaths. Both wet and dry water years have similar data issues. Data with artificially created gaps were used to quantify the method's performance. Our multi-sensor approach performs better than a single-sensor one, especially with large data gaps, as it learns and exploits the dominant underlying processes in snowpack at each site.
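    The core idea — predict through a gap when an observation is missing, update when one is present — can be sketched with a one-sensor random-walk Kalman filter (a deliberate simplification of the paper's multi-sensor, regime-switching model; the noise variances q and r are illustrative):

```python
import numpy as np

def kalman_fill(y, q=0.1, r=0.5):
    """Random-walk Kalman filter that skips the measurement update where
    y is NaN, so the prediction carries the estimate across the gap."""
    x, p = 0.0, 1e6                  # diffuse initial state and variance
    filled = np.empty(len(y))
    for t in range(len(y)):
        p = p + q                    # predict: x_t = x_{t-1} + w_t, w_t ~ N(0, q)
        if not np.isnan(y[t]):
            k = p / (p + r)          # Kalman gain
            x = x + k * (y[t] - x)   # measurement update
            p = (1.0 - k) * p
        filled[t] = x
    return filled

# Snow-depth-like series with a two-sample gap (illustrative numbers).
y = np.array([1.0, 1.1, np.nan, np.nan, 1.4, 1.5])
est = kalman_fill(y)
```

In the paper's multi-sensor setting the latent state is shared across nearby stations, so the "prediction" across a gap is informed by the stations that are still reporting rather than by the last value alone.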

  6. Determining the trophic guilds of fishes and macroinvertebrates in a seagrass food web

    USGS Publications Warehouse

    Luczkovich, J.J.; Ward, G.P.; Johnson, J.C.; Christian, R.R.; Baird, D.; Neckles, H.; Rizzo, W.M.

    2002-01-01

    We established trophic guilds of macroinvertebrate and fish taxa using correspondence analysis and a hierarchical clustering strategy for a seagrass food web in winter in the northeastern Gulf of Mexico. To create the diet matrix, we characterized the trophic linkages of macroinvertebrate and fish taxa present in Halodule wrightii seagrass habitat areas within the St. Marks National Wildlife Refuge (Florida) using binary data, combining dietary links obtained from relevant literature for macroinvertebrates with stomach analysis of common fishes collected during January and February of 1994. Hierarchical average-linkage cluster analysis of the 73 taxa of fishes and macroinvertebrates in the diet matrix yielded 14 clusters with diet similarity ≥ 0.60. We then used correspondence analysis with three factors to jointly plot the coordinates of the consumers (identified by cluster membership) and of the 33 food sources. Correspondence analysis served as a visualization tool for assigning each taxon to one of eight trophic guilds: herbivores, detritivores, suspension feeders, omnivores, molluscivores, meiobenthos consumers, macrobenthos consumers, and piscivores. These trophic groups, cross-classified with major taxonomic groups, were further used to develop consumer compartments in a network analysis model of carbon flow in this seagrass ecosystem. The method presented here should greatly improve the development of future network models of food webs by providing an objective procedure for aggregating trophic groups.
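    The aggregation step can be sketched as average-linkage clustering of a binary consumer-by-food diet matrix, here with Jaccard similarity and the 0.60 cut-off used above (a naive O(n³) illustration on a toy three-taxon matrix, not the authors' software):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity of two binary diet vectors."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def average_linkage(diet, threshold=0.60):
    """Agglomerative average-linkage clustering: repeatedly merge the two
    clusters with the highest mean pairwise similarity until that mean
    drops below the threshold."""
    clusters = [[i] for i in range(len(diet))]
    while len(clusters) > 1:
        best, pair = -1.0, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                sims = [jaccard(diet[a], diet[b])
                        for a in clusters[i] for b in clusters[j]]
                mean_sim = sum(sims) / len(sims)
                if mean_sim > best:
                    best, pair = mean_sim, (i, j)
        if best < threshold:
            break
        i, j = pair
        clusters[i] += clusters.pop(j)
    return clusters

diet = np.array([[1, 1, 0, 0],   # taxon 0 eats foods 0 and 1
                 [1, 1, 1, 0],   # taxon 1 overlaps strongly with taxon 0
                 [0, 0, 0, 1]])  # taxon 2 eats a disjoint food
clusters = average_linkage(diet)  # taxa 0 and 1 merge; taxon 2 stays alone
```

With binary diet data, any set-overlap similarity would serve; Jaccard is chosen here only as a plausible stand-in for the similarity the authors used.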

  7. Systems Design and Pilot Operation of a Regional Center for Technical Processing for the Libraries of the New England State Universities. NELINET, New England Library Information Network. Progress Report, July 1, 1967 - March 30, 1968, Volume II, Appendices.

    ERIC Educational Resources Information Center

    Agenbroad, James E.; And Others

    Included in this volume of appendices to LI 000 979 are acquisitions flow charts; a current operations questionnaire; an algorithm for splitting the Library of Congress call number; analysis of the Machine-Readable Cataloging (MARC II) format; production problems and decisions; operating procedures for information transmittal in the New England…

  8. Brain Consequences of Spinal Cord Injury with and without Neuropathic Pain: Translating Animal Models of Neuroinflammation onto Human Neural Networks and Back

    DTIC Science & Technology

    2016-10-01

    During year one, we have: obtained IRB and HRPO approval for the human studies; obtained IACUC and ACURO approval for the animal studies; refined the... human study protocol and collected PET-MR data on healthy individuals and spinal cord injured subjects; developed the rodent imaging procedures... qualitative synthesis of the current state of the field, and 6 studies can be included in a quantitative meta-analysis. The studies eligible for inclusion in

  9. Designing of network planning system for small-scale manufacturing

    NASA Astrophysics Data System (ADS)

    Kapulin, D. V.; Russkikh, P. A.; Vinnichenko, M. V.

    2018-05-01

    The paper presents features of network planning in small-scale discrete production. A procedure for exploding the production order that accounts for its multilevel representation is developed, and the software architecture is presented. The network planning system has been tested and allows dynamic updating of the production plan.

  10. Frontal and Parietal Cortices Show Different Spatiotemporal Dynamics across Problem-solving Stages.

    PubMed

    Tschentscher, Nadja; Hauk, Olaf

    2016-08-01

    Arithmetic problem-solving can be conceptualized as a multistage process ranging from task encoding over rule and strategy selection to step-wise task execution. Previous fMRI research suggested a frontal-parietal network involved in the execution of complex numerical and nonnumerical tasks, but evidence is lacking on the particular contributions of frontal and parietal cortices across time. In an arithmetic task paradigm, we evaluated individual participants' "retrieval" and "multistep procedural" strategies on a trial-by-trial basis and contrasted those in time-resolved analyses using combined EEG and MEG. Retrieval strategies relied on direct retrieval of arithmetic facts (e.g., 2 + 3 = 5). Procedural strategies required multiple solution steps (e.g., 12 + 23 = 12 + 20 + 3 or 23 + 10 + 2). Evoked source analyses revealed independent activation dynamics within the first second of problem-solving in brain areas previously described as one network, such as the frontal-parietal cognitive control network: The right frontal cortex showed earliest effects of strategy selection for multistep procedural strategies around 300 msec, before parietal cortex activated around 700 msec. In time-frequency source power analyses, memory retrieval and multistep procedural strategies were differentially reflected in theta, alpha, and beta frequencies: Stronger beta and alpha desynchronizations emerged for procedural strategies in right frontal, parietal, and temporal regions as function of executive demands. Arithmetic fact retrieval was reflected in right prefrontal increases in theta power. Our results demonstrate differential brain dynamics within frontal-parietal networks across the time course of a problem-solving process, and analyses of different frequency bands allowed us to disentangle cortical regions supporting the underlying memory and executive functions.

  11. Automated detection and localization of bowhead whale sounds in the presence of seismic airgun surveys.

    PubMed

    Thode, Aaron M; Kim, Katherine H; Blackwell, Susanna B; Greene, Charles R; Nations, Christopher S; McDonald, Trent L; Macrander, A Michael

    2012-05-01

    An automated procedure has been developed for detecting and localizing frequency-modulated bowhead whale sounds in the presence of seismic airgun surveys. The procedure was applied to four years of data, collected from over 30 directional autonomous recording packages deployed over a 280 km span of continental shelf in the Alaskan Beaufort Sea. The procedure has six sequential stages that begin by extracting 25-element feature vectors from spectrograms of potential call candidates. Two cascaded neural networks then classify some feature vectors as bowhead calls, and the procedure then matches calls between recorders to triangulate locations. To train the networks, manual analysts flagged 219,471 bowhead call examples from 2008 and 2009. Manual analyses were also used to identify 1.17 million transient signals that were not whale calls. The network output thresholds were adjusted to reject 20% of whale calls in the training data. Validation runs using 2007 and 2010 data found that the procedure missed 30%-40% of manually detected calls. Furthermore, 20%-40% of the sounds flagged as calls are not present in the manual analyses; however, these extra detections incorporate legitimate whale calls overlooked by human analysts. Both manual and automated methods produce similar spatial and temporal call distributions.
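    The threshold-tuning step — deliberately rejecting 20% of known calls in the training data to suppress false alarms — amounts to a quantile cut on the classifier's output scores. A sketch with made-up scores (the actual cascaded networks and their outputs are more involved):

```python
import numpy as np

def threshold_for_reject_rate(call_scores, reject=0.20):
    """Choose the detector threshold as the `reject` quantile of scores on
    known calls, so roughly that fraction of true calls falls below it."""
    return float(np.quantile(np.asarray(call_scores), reject))

scores = np.array([0.1, 0.3, 0.5, 0.7, 0.9])  # hypothetical network outputs on known calls
th = threshold_for_reject_rate(scores)
kept = (scores >= th).mean()  # about 80% of training calls survive the cut
```

Raising the threshold trades recall for precision: the 30%-40% miss rate reported on the validation years is the downstream cost of the stricter operating point chosen here.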

  12. Mining the modular structure of protein interaction networks.

    PubMed

    Berenstein, Ariel José; Piñero, Janet; Furlong, Laura Inés; Chernomoretz, Ariel

    2015-01-01

    Cluster-based descriptions of biological networks have received much attention in recent years fostered by accumulated evidence of the existence of meaningful correlations between topological network clusters and biological functional modules. Several well-performing clustering algorithms exist to infer topological network partitions. However, due to respective technical idiosyncrasies they might produce dissimilar modular decompositions of a given network. In this contribution, we aimed to analyze how alternative modular descriptions could condition the outcome of follow-up network biology analysis. We considered a human protein interaction network and two paradigmatic cluster recognition algorithms, namely: the Clauset-Newman-Moore and the infomap procedures. We analyzed to what extent both methodologies yielded different results in terms of granularity and biological congruency. In addition, taking into account Guimera's cartographic role characterization of network nodes, we explored how the adoption of a given clustering methodology impinged on the ability to highlight relevant network meso-scale connectivity patterns. As a case study we considered a set of aging related proteins and showed that only the high-resolution modular description provided by infomap, could unveil statistically significant associations between them and inter/intra modular cartographic features. Besides reporting novel biological insights that could be gained from the discovered associations, our contribution warns against possible technical concerns that might affect the tools used to mine for interaction patterns in network biology studies. In particular our results suggested that sub-optimal partitions from the strict point of view of their modularity levels might still be worth being analyzed when meso-scale features were to be explored in connection with external source of biological knowledge.

  13. A Calculation Method for Convective Heat and Mass Transfer in Multiply-Slotted Film-Cooling Applications.

    DTIC Science & Technology

    1980-01-01

    (Only table-of-contents and figure-list fragments of this report survive extraction. Recoverable headings: "Transport of Heat"; "The Solution Procedure", comprising "The Finite-Difference Grid Network" and "The Iterative Solution Procedure used at each Streamwise Station"; a figure on velocity profiles. The nomenclature defines l as the mixing length and L as the distance in the x-direction from the injection slot entrance.)

  14. Fast automated analysis of strong gravitational lenses with convolutional neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.

    Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. We report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.

  15. Fast automated analysis of strong gravitational lenses with convolutional neural networks

    DOE PAGES

    Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.

    2017-08-30

    Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. We report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.

  16. Fast automated analysis of strong gravitational lenses with convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.

    2017-08-01

    Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.

  17. Automotive System for Remote Surface Classification.

    PubMed

    Bystrov, Aleksandr; Hoare, Edward; Tran, Thuy-Yung; Clarke, Nigel; Gashinova, Marina; Cherniakov, Mikhail

    2017-04-01

    In this paper we shall discuss a novel approach to road surface recognition based on the analysis of backscattered microwave and ultrasonic signals. The novelty of our method lies in the fusion of sonar and polarimetric radar data, the extraction of features for separate swathes of the illuminated surface (segmentation), and the use of a multi-stage artificial neural network for surface classification. The developed system consists of a 24 GHz radar and a 40 kHz ultrasonic sensor. Features are extracted from the backscattered signals, and principal component analysis and supervised classification are then applied to the feature data. Special attention is paid to the multi-stage artificial neural network, which allows an overall increase in classification accuracy. The proposed technique was tested on a large number of real surfaces in different weather conditions, with an average correct-classification accuracy of 95%. The results demonstrate that the proposed system architecture and statistical methods allow reliable discrimination of various road surfaces in real conditions.

  18. Automotive System for Remote Surface Classification

    PubMed Central

    Bystrov, Aleksandr; Hoare, Edward; Tran, Thuy-Yung; Clarke, Nigel; Gashinova, Marina; Cherniakov, Mikhail

    2017-01-01

    In this paper we shall discuss a novel approach to road surface recognition based on the analysis of backscattered microwave and ultrasonic signals. The novelty of our method lies in the fusion of sonar and polarimetric radar data, the extraction of features for separate swathes of the illuminated surface (segmentation), and the use of a multi-stage artificial neural network for surface classification. The developed system consists of a 24 GHz radar and a 40 kHz ultrasonic sensor. Features are extracted from the backscattered signals, and principal component analysis and supervised classification are then applied to the feature data. Special attention is paid to the multi-stage artificial neural network, which allows an overall increase in classification accuracy. The proposed technique was tested on a large number of real surfaces in different weather conditions, with an average correct-classification accuracy of 95%. The results demonstrate that the proposed system architecture and statistical methods allow reliable discrimination of various road surfaces in real conditions. PMID:28368297

  19. Age estimation by assessment of pulp chamber volume: a Bayesian network for the evaluation of dental evidence.

    PubMed

    Sironi, Emanuele; Taroni, Franco; Baldinotti, Claudio; Nardi, Cosimo; Norelli, Gian-Aristide; Gallidabino, Matteo; Pinchi, Vilma

    2017-11-14

    The present study aimed to investigate the performance of a Bayesian method in the evaluation of dental age-related evidence collected by means of a geometrical approximation procedure of the pulp chamber volume. Measurement of this volume was based on three-dimensional cone beam computed tomography images. The Bayesian method was applied by means of a probabilistic graphical model, namely a Bayesian network. Performance of the method was investigated in terms of accuracy and bias of the decisional outcomes. The influence of an informed elicitation of the prior belief about chronological age was also studied by means of a sensitivity analysis. Outcomes in terms of accuracy were consistent with standard requirements for forensic adult age estimation. Findings also indicated that the Bayesian method does not show a particular tendency towards under- or overestimation of the age variable. The sensitivity analysis showed that estimation results are improved by a rational elicitation of the prior probabilities of age.

  20. COMPADRE: an R and web resource for pathway activity analysis by component decompositions.

    PubMed

    Ramos-Rodriguez, Roberto-Rafael; Cuevas-Diaz-Duran, Raquel; Falciani, Francesco; Tamez-Peña, Jose-Gerardo; Trevino, Victor

    2012-10-15

    The analysis of biological networks has become essential to study functional genomic data. Compadre is a tool to estimate pathway/gene set activity indexes using sub-matrix decompositions for biological network analyses. The Compadre pipeline also includes one of the direct uses of activity indexes: detecting altered gene sets. For this, the gene expression sub-matrix of a gene set is decomposed into components, which are used to test differences between groups of samples. This procedure is performed with and without differentially expressed genes to decrease false calls. During this process, Compadre also performs an over-representation test. Compadre already implements four decomposition methods [principal component analysis (PCA), Isomaps, independent component analysis (ICA) and non-negative matrix factorization (NMF)], six statistical tests (t- and f-test, SAM, Kruskal-Wallis, Welch and Brown-Forsythe), and several gene-set collections (KEGG, BioCarta, Reactome, GO and MSigDB), and can be easily expanded. Our simulation results shown in Supplementary Information suggest that Compadre detects more pathways than over-representation tools like DAVID, Babelomics and WebGestalt, and fewer false positives than PLAGE. The output is composed of results from decomposition and over-representation analyses, providing a more complete biological picture. Examples provided in Supplementary Information show the utility, versatility and simplicity of Compadre for analyses of biological networks. Compadre is freely available at http://bioinformatica.mty.itesm.mx:8080/compadre. The R package is also available at https://sourceforge.net/p/compadre.
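    The decomposition idea can be sketched for the PCA case: center the genes-by-samples sub-matrix of a gene set and take the first right singular vector as a per-sample activity index (a PLAGE-style sketch under that assumption, not Compadre's actual code; the toy matrix below encodes a gene set uniformly elevated in the last three samples):

```python
import numpy as np

def pathway_activity(submatrix):
    """PLAGE-style activity index: center each gene (row) of the
    genes-by-samples sub-matrix, then return the first right singular
    vector as one activity value per sample (sign is arbitrary)."""
    x = submatrix - submatrix.mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return vt[0]

# Five genes, six samples; the gene set is elevated by 2 units in samples 3-5.
submatrix = np.arange(5)[:, None] + np.tile([0.0, 0.0, 0.0, 2.0, 2.0, 2.0], (5, 1))
activity = pathway_activity(submatrix)
```

The resulting per-sample scores can then be fed to any of the two-group tests listed above (t-test, Kruskal-Wallis, etc.) in place of single-gene expression values.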

  1. Fast inference of interactions in assemblies of stochastic integrate-and-fire neurons from spike recordings.

    PubMed

    Monasson, Remi; Cocco, Simona

    2011-10-01

    We present two Bayesian procedures to infer the interactions and external currents in an assembly of stochastic integrate-and-fire neurons from the recording of their spiking activity. The first procedure is based on the exact calculation of the most likely time courses of the neuron membrane potentials conditioned by the recorded spikes, and is exact for a vanishing noise variance and for an instantaneous synaptic integration. The second procedure takes into account the presence of fluctuations around the most likely time courses of the potentials, and can deal with moderate noise levels. The running time of both procedures is proportional to the number S of spikes multiplied by the square of the number N of neurons. The algorithms are validated on synthetic data generated by networks with known couplings and currents. We also reanalyze previously published recordings of the activity of the salamander retina (32 to 40 neurons and 65,000 to 170,000 spikes). We study the dependence of the inferred interactions on the membrane leaking time; the differences and similarities with the classical cross-correlation analysis are discussed.

  2. [Aggressive B‑cell lymphomas : Recommendations from the German Panel of Reference Pathologists in the Competence Network on Malignant Lymphomas on diagnostic procedures according to the current WHO classification, update 2017].

    PubMed

    Klapper, W; Fend, F; Feller, A; Hansmann, M L; Möller, P; Stein, H; Rosenwald, A; Ott, G

    2018-04-17

    The update of the 4th edition of the WHO classification for hematopoietic neoplasms introduces changes in the field of mature aggressive B‑cell lymphomas that are relevant to diagnostic pathologists. In daily practice, the question arises of which analysis should be performed when diagnosing the most common lymphoma entity, diffuse large B‑cell lymphoma. We discuss the importance of the cell of origin, the analysis of MYC translocations, and the delineation of the new WHO entities of high-grade B‑cell lymphomas.

  3. Document co-citation analysis to enhance transdisciplinary research

    PubMed Central

    Trujillo, Caleb M.; Long, Tammy M.

    2018-01-01

    Specialized and emerging fields of research infrequently cross disciplinary boundaries and would benefit from frameworks, methods, and materials informed by other fields. Document co-citation analysis, a method developed in bibliometric research, is demonstrated as a way to help identify key literature for cross-disciplinary ideas. To illustrate the method in a useful context, we mapped peer-recognized scholarship related to systems thinking. In addition, three procedures for validating co-citation networks are proposed and implemented. This method may be useful for strategically selecting information that can build consilience about ideas and constructs that are relevant across a range of disciplines. PMID:29308433
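    The core counting step of document co-citation analysis is straightforward; a minimal sketch (the toy reference lists and the edge threshold are illustrative):

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(reference_lists):
    """Count how often each pair of references is cited together.
    Each element of reference_lists is the set of works cited by one document."""
    pairs = Counter()
    for refs in reference_lists:
        for a, b in combinations(sorted(set(refs)), 2):
            pairs[(a, b)] += 1
    return pairs

def cocitation_edges(pairs, min_count=2):
    """Pairs co-cited at least min_count times become network edges."""
    return [(a, b, c) for (a, b), c in pairs.items() if c >= min_count]
```

    The resulting weighted edges are what gets visualized or clustered in a co-citation map.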

  4. Fault detection on a sewer network by a combination of a Kalman filter and a binary sequential probability ratio test

    NASA Astrophysics Data System (ADS)

    Piatyszek, E.; Voignier, P.; Graillot, D.

    2000-05-01

    One of the aims of sewer networks is to protect the population against floods and to reduce the pollution discharged into the receiving water during rainy events. To meet these goals, managers have to equip sewer networks with sensors and set up real-time control systems. Unfortunately, a component fault (leading to intolerable behaviour of the system) or a sensor fault (deteriorating the process view and disturbing the local automatism) makes sewer network supervision delicate. In order to ensure adequate flow management during rainy events, it is essential to set up procedures capable of detecting and diagnosing these anomalies. This article introduces a real-time fault detection method, applicable to sewer networks, for the follow-up of rainy events. The method consists of comparing the sensor response with a forecast of that response. The forecast is provided by a model, more precisely by a state estimator: a Kalman filter. The Kalman filter provides not only a flow estimate but also an entity called the 'innovation'. In order to detect abnormal operations within the network, this innovation is analysed with Wald's binary sequential probability ratio test. Moreover, by crossing the information available at several nodes of the network, a diagnosis of the detected anomalies is carried out. This method provided encouraging results during the analysis of several rain events on the sewer network of Seine-Saint-Denis County, France.

  5. Incorporation of spatial interactions in location networks to identify critical geo-referenced routes for assessing disease control measures on a large-scale campus.

    PubMed

    Wen, Tzai-Hung; Chin, Wei Chien Benny

    2015-04-14

    Respiratory diseases mainly spread through interpersonal contact. Class suspension is the most direct strategy to prevent the spread of disease through elementary or secondary schools by blocking the contact network. However, as university students usually attend courses in different buildings, the daily contact patterns on a university campus are complicated, and once disease clusters have occurred, suspending classes is far from an efficient strategy to control disease spread. The purpose of this study is to propose a methodological framework for generating campus location networks from a routine administration database, analyzing the community structure of the network, and identifying the critical links and nodes for blocking respiratory disease transmission. The data comes from the student enrollment records of a major comprehensive university in Taiwan. We combined the social network analysis and spatial interaction model to establish a geo-referenced community structure among the classroom buildings. We also identified the critical links among the communities that were acting as contact bridges and explored the changes in the location network after the sequential removal of the high-risk buildings. Instead of conducting a questionnaire survey, the study established a standard procedure for constructing a location network on a large-scale campus from a routine curriculum database. We also present how a location network structure at a campus could function to target the high-risk buildings as the bridges connecting communities for blocking disease transmission.

  6. Formal analysis and evaluation of the back-off procedure in IEEE802.11P VANET

    NASA Astrophysics Data System (ADS)

    Jin, Li; Zhang, Guoan; Zhu, Xiaojun

    2017-07-01

    The back-off procedure is one of the media access control technologies in the 802.11p communication protocol. It plays an important role in avoiding message collisions and allocating channel resources. Formal methods are effective approaches for studying the performance of communication systems. In this paper, we establish a discrete-time model for the back-off procedure. We use Markov Decision Processes (MDPs) to model the non-deterministic and probabilistic behaviors of the procedure, and use the probabilistic computation tree logic (PCTL) language to express different properties, which ensure that the discrete-time model performs its basic functionality. Based on the model and the PCTL specifications, we study the effect of the contention window length on the number of senders in the neighborhood of given receivers, and its effect on the station's expected cost of successfully sending packets via the back-off procedure. Varying the window length may increase or decrease the maximum probability of correct transmissions within a time contention unit. We use the PRISM model checker to describe the proposed back-off procedure for the IEEE 802.11p protocol in vehicular networks, and define different probabilistic property formulas to automatically verify the model and derive numerical results. The obtained results are helpful for justifying the values of the time contention unit.

  7. A KST framework for correlation network construction from time series signals

    NASA Astrophysics Data System (ADS)

    Qi, Jin-Peng; Gu, Quan; Zhu, Ying; Zhang, Ping

    2018-04-01

    A KST (Kolmogorov-Smirnov test and T statistic) method is used to construct a correlation network based on the fluctuation of each time series within multivariate time signals. In this method, each time series is divided equally into multiple segments, and the maximal data fluctuation in each segment is calculated by a KST change detection procedure. Connections between the time series are derived from the data fluctuation matrix, and are used to construct the fluctuation correlation network (FCN). The method was tested on synthetic simulations, and the results were compared with those obtained using the KS test or T statistic alone to detect data fluctuations. The novelty of this study is that the correlation analysis is based on the data fluctuation in each segment of each time series rather than on the original signals, which is more meaningful for many real-world applications and for the analysis of large-scale time signals where prior knowledge is uncertain.
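    A loose sketch of the segment-fluctuation idea follows, using a simple max-min range per segment in place of the paper's KS/T change statistic (segment count and correlation threshold are illustrative):

```python
import numpy as np

def fluctuation_profile(series, n_segments):
    """Split a series into equal segments; return per-segment fluctuation
    (here simply max - min, standing in for the KST change statistic)."""
    segs = np.array_split(np.asarray(series, float), n_segments)
    return np.array([s.max() - s.min() for s in segs])

def fluctuation_network(signals, n_segments=8, threshold=0.9):
    """Adjacency matrix connecting series whose fluctuation profiles correlate."""
    profiles = np.array([fluctuation_profile(s, n_segments) for s in signals])
    corr = np.corrcoef(profiles)
    adj = (np.abs(corr) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return adj
```

    Because only the per-segment fluctuation enters the correlation, two series with the same amplitude envelope connect even when their raw values are offset from each other.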

  8. Human performance under two different command and control paradigms.

    PubMed

    Walker, Guy H; Stanton, Neville A; Salmon, Paul M; Jenkins, Daniel P

    2014-05-01

    The paradoxical behaviour of a new command and control concept called Network Enabled Capability (NEC) provides the motivation for this paper. In it, a traditional hierarchical command and control organisation was pitted against a network centric alternative on a common task, played thirty times, by two teams. Multiple regression was used to undertake a simple form of time series analysis. It revealed that whilst the NEC condition ended up being slightly slower than its hierarchical counterpart, it was able to balance and optimise all three of the performance variables measured (task time, enemies neutralised and attrition). From this it is argued that a useful conceptual response is not to consider NEC as an end product comprised of networked computers and standard operating procedures, nor to regard the human system interaction as inherently stable, but rather to view it as a set of initial conditions from which the most adaptable component of all can be harnessed: the human. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  9. Security analysis and enhanced user authentication in proxy mobile IPv6 networks

    PubMed Central

    Kang, Dongwoo; Jung, Jaewook; Lee, Donghoon; Kim, Hyoungshick

    2017-01-01

    The Proxy Mobile IPv6 (PMIPv6) is a network-based mobility management protocol that allows a Mobile Node (MN) connected to the PMIPv6 domain to move from one network to another without changing its assigned IPv6 address. The user authentication procedure in this protocol is not standardized, but many smartcard-based authentication schemes have been proposed. Recently, Alizadeh et al. proposed an authentication scheme for PMIPv6. However, it could allow an attacker to derive an encryption key that must be securely shared between the MN and the Mobile Access Gateway (MAG). As a result, an outside adversary can derive the MN's identity, password and session key. In this paper, we analyze the security of Alizadeh et al.'s scheme and propose an enhanced authentication scheme that uses a dynamic identity to satisfy anonymity. Furthermore, we use BAN logic to show that our scheme can successfully generate and share the inter-entity session key. PMID:28719621

  10. Cluster Analysis of Weighted Bipartite Networks: A New Copula-Based Approach

    PubMed Central

    Chessa, Alessandro; Crimaldi, Irene; Riccaboni, Massimo; Trapin, Luca

    2014-01-01

    In this work we are interested in identifying clusters of “positionally equivalent” actors, i.e. actors who play a similar role in a system. In particular, we analyze weighted bipartite networks that describe the relationships between actors on one side and features or traits on the other, together with the intensity with which actors exhibit their features. We develop a methodological approach that takes into account the underlying multivariate dependence among groups of actors. The idea is that positions in a network can be defined on the basis of the similar intensity levels with which actors express certain features, instead of just considering the relationships that actors hold with each other. Moreover, we propose a new clustering procedure that exploits the potential of copula functions, a mathematical instrument for modelling stochastic dependence structures. Our clustering algorithm can be applied to both binary and real-valued matrices. We validate it with simulations and applications to real-world data. PMID:25303095
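    The copula machinery starts from rank-transformed margins; a minimal sketch of computing pseudo-observations (this is standard copula preprocessing, not the authors' full clustering algorithm):

```python
import numpy as np

def pseudo_observations(X):
    """Rank-transform each column of X to (0, 1): u_ij = rank_ij / (n + 1).
    The pseudo-observations carry only the dependence structure,
    which is what a copula models (assumes no ties within a column)."""
    X = np.asarray(X, float)
    n = X.shape[0]
    ranks = X.argsort(axis=0).argsort(axis=0) + 1   # ranks 1..n per column
    return ranks / (n + 1.0)
```

    Because only ranks are used, the transform is invariant under any strictly increasing rescaling of a column, which is exactly the property that lets the dependence structure be modeled separately from the margins.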

  11. Toward a More Robust Pruning Procedure for MLP Networks

    NASA Technical Reports Server (NTRS)

    Stepniewski, Slawomir W.; Jorgensen, Charles C.

    1998-01-01

    Choosing a proper neural network architecture is a problem of great practical importance. Smaller models mean not only simpler designs but also lower variance in parameter estimation and network prediction. The widespread use of neural networks in modeling also raises a human-factors issue: the procedure for building neural models should find an appropriate level of model complexity in a more or less automatic fashion, making it less prone to human subjectivity. In this paper we present a Singular Value Decomposition-based node elimination technique and an enhanced implementation of the Optimal Brain Surgeon algorithm. Combining both methods creates a powerful pruning engine that can be used for tuning feedforward connectionist models. The performance of the proposed method is demonstrated by adjusting the structure of a multi-input multi-output model used to calibrate a six-component wind tunnel strain gage.
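    The SVD side of such pruning can be sketched as plain low-rank truncation of a layer's weight matrix (a simplified stand-in for the paper's node elimination technique; the energy threshold is illustrative):

```python
import numpy as np

def svd_prune(W, energy=0.99):
    """Replace weight matrix W by the smallest-rank approximation that
    keeps the given fraction of squared singular-value 'energy'."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Smallest k such that the first k singular values hold the energy budget.
    keep = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy) + 1
    return (U[:, :keep] * s[:keep]) @ Vt[:keep], keep
```

    Dropping near-zero singular directions shrinks the effective parameter count while leaving the layer's mapping essentially unchanged.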

  12. NetGen: a novel network-based probabilistic generative model for gene set functional enrichment analysis.

    PubMed

    Sun, Duanchen; Liu, Yinliang; Zhang, Xiang-Sun; Wu, Ling-Yun

    2017-09-21

    High-throughput experimental techniques have been dramatically improved and widely applied in the past decades. However, biological interpretation of the high-throughput experimental results, such as differential expression gene sets derived from microarray or RNA-seq experiments, is still a challenging task. Gene Ontology (GO) is commonly used in the functional enrichment studies. The GO terms identified via current functional enrichment analysis tools often contain direct parent or descendant terms in the GO hierarchical structure. Highly redundant terms make users difficult to analyze the underlying biological processes. In this paper, a novel network-based probabilistic generative model, NetGen, was proposed to perform the functional enrichment analysis. An additional protein-protein interaction (PPI) network was explicitly used to assist the identification of significantly enriched GO terms. NetGen achieved a superior performance than the existing methods in the simulation studies. The effectiveness of NetGen was explored further on four real datasets. Notably, several GO terms which were not directly linked with the active gene list for each disease were identified. These terms were closely related to the corresponding diseases when accessed to the curated literatures. NetGen has been implemented in the R package CopTea publicly available at GitHub ( http://github.com/wulingyun/CopTea/ ). Our procedure leads to a more reasonable and interpretable result of the functional enrichment analysis. As a novel term combination-based functional enrichment analysis method, NetGen is complementary to current individual term-based methods, and can help to explore the underlying pathogenesis of complex diseases.

  13. Currency arbitrage detection using a binary integer programming model

    NASA Astrophysics Data System (ADS)

    Soon, Wanmei; Ye, Heng-Qing

    2011-04-01

    In this article, we examine the use of a new binary integer programming (BIP) model to detect arbitrage opportunities in currency exchanges. This model showcases an excellent application of mathematics to the real world. The concepts involved are easily accessible to undergraduate students with basic knowledge in Operations Research. Through this work, students can learn to link several types of basic optimization models, namely linear programming, integer programming and network models, and apply the well-known sensitivity analysis procedure to accommodate realistic changes in the exchange rates. Beginning with a BIP model, we discuss how it can be reduced to an equivalent but considerably simpler model, where an efficient algorithm can be applied to find the arbitrages and incorporate the sensitivity analysis procedure. A simple comparison is then made with a different arbitrage detection model. This exercise helps students learn to apply basic Operations Research concepts to a practical real-life example, and provides insights into the processes involved in Operations Research model formulations.
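    The arbitrage condition the BIP model encodes is a cycle of exchanges whose rate product exceeds 1. For teaching-scale currency sets it can be brute-forced directly (a sketch of the condition itself, not the article's BIP formulation):

```python
from itertools import permutations

def find_arbitrage(rates):
    """rates[(a, b)] = units of b received per unit of a. Return the most
    profitable exchange cycle and its rate product, or (None, 1.0)."""
    currencies = sorted({c for pair in rates for c in pair})
    best_cycle, best_product = None, 1.0
    for r in range(2, len(currencies) + 1):
        for cycle in permutations(currencies, r):
            legs = list(zip(cycle, cycle[1:] + (cycle[0],)))
            if not all(leg in rates for leg in legs):
                continue                      # some exchange is unavailable
            product = 1.0
            for a, b in legs:
                product *= rates[(a, b)]
            if product > best_product:        # profit only if product > 1
                best_cycle, best_product = cycle, product
    return best_cycle, best_product
```

    Real detection methods avoid this factorial enumeration, e.g. by searching for negative cycles over log-transformed rates, which is one route from this condition to an optimization model.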

  14. Decision-making in irrigation networks: Selecting appropriate canal structures using multi-attribute decision analysis.

    PubMed

    Hosseinzade, Zeinab; Pagsuyoin, Sheree A; Ponnambalam, Kumaraswamy; Monem, Mohammad J

    2017-12-01

    The stiff competition for water between agriculture and non-agricultural production sectors makes it necessary to have effective management of irrigation networks in farms. However, the process of selecting flow control structures in irrigation networks is highly complex and involves different levels of decision makers. In this paper, we apply multi-attribute decision making (MADM) methodology to develop a decision analysis (DA) framework for evaluating, ranking and selecting check and intake structures for irrigation canals. The DA framework consists of identifying relevant attributes for canal structures, developing a robust scoring system for alternatives, identifying a procedure for data quality control, and identifying a MADM model for the decision analysis. An application is illustrated through an analysis, for automation purposes, of the Qazvin irrigation network, one of the oldest and most complex irrigation networks in Iran. A survey questionnaire designed around the decision framework was distributed to experts, managers, and operators of the Qazvin network and to experts from the Ministry of Power in Iran. Five check structures and four intake structures were evaluated. A decision matrix was generated from the average scores collected from the survey, and was subsequently solved using the TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) method. To identify the most critical structure attributes for the selection process, optimal attribute weights were calculated using the entropy method. For check structures, results show that the duckbill weir is the preferred structure while the pivot weir is the least preferred. Use of the duckbill weir can potentially address the problem with existing Amil gates, where manual intervention is required to regulate water levels during periods of flow extremes. For intake structures, the Neyrpic® gate and the constant head orifice are the most and least preferred alternatives, respectively. Some advantages of the Neyrpic® gate are ease of operation and the capacity to measure discharge flows. Overall, the application to the Qazvin irrigation network demonstrates the utility of the proposed DA framework in selecting appropriate structures for regulating water flows in irrigation canals. The framework systematically aids the decision process by capturing decisions made at various levels (from individual farmers to high-level management). It can be applied to other cases where a new irrigation network is being designed, or where changes in irrigation structures need to be identified to improve flow control in existing networks. Copyright © 2017 Elsevier B.V. All rights reserved.
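    The TOPSIS ranking step used in the study can be sketched in a few lines (the decision matrix below is a toy, not the survey data):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns).
    benefit[j] is True if higher is better for criterion j.
    Returns closeness coefficients in [0, 1]; higher means better."""
    X = np.asarray(matrix, float)
    # Vector-normalize each criterion column, then apply the weights.
    V = X / np.linalg.norm(X, axis=0) * np.asarray(weights, float)
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)
```

    An alternative that is best on every criterion coincides with the ideal point and gets closeness 1; the entropy-weighting step of the study would simply supply the `weights` vector.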

  15. Geometrical features assessment of liver's tumor with application of artificial neural network evolved by imperialist competitive algorithm.

    PubMed

    Keshavarz, M; Mojra, A

    2015-05-01

    Geometrical features of a cancerous tumor embedded in biological soft tissue, including tumor size and depth, are a necessity in the follow-up procedure and in making suitable therapeutic decisions. In this paper, a socio-politically motivated global search strategy called the imperialist competitive algorithm (ICA) is implemented to train a feed-forward neural network (FFNN) to estimate the tumor's geometrical characteristics (FFNNICA). First, a viscoelastic model of liver tissue is constructed by using a series of in vitro uniaxial and relaxation test data. Then, 163 samples of the tissue, including a tumor with different depths and diameters, are generated by using PYTHON programming to link ABAQUS and MATLAB together. Next, the samples are divided into 123 training samples and 40 testing samples. The training inputs of the network are mechanical parameters extracted from palpation of the tissue through a developing noninvasive technology called artificial tactile sensing (ATS). Last, to evaluate the FFNNICA performance, outputs of the network, including the tumor's depth and diameter, are compared with the desired values for both the training and testing datasets. Deviations of the outputs from the desired values are calculated by a regression analysis. Statistical analysis is also performed by measuring the Root Mean Square Error (RMSE) and Efficiency (E). The RMSEs of the diameter and depth estimates for the testing dataset are 0.50 mm and 1.49, respectively. The results affirm that the proposed optimization algorithm for training the neural network can be useful for accurately characterizing soft tissue tumors via an artificial palpation approach. Copyright © 2015 John Wiley & Sons, Ltd.

  16. Practice and Learning: Spatiotemporal Differences in Thalamo-Cortical-Cerebellar Networks Engagement across Learning Phases in Schizophrenia.

    PubMed

    Korostil, Michele; Remington, Gary; McIntosh, Anthony Randal

    2016-01-01

    Understanding how practice mediates the transition of brain-behavior networks between early and later stages of learning is constrained by the common approach to analysis of fMRI data. Prior imaging studies have mostly relied on a single scan and parametric, task-related analyses. Our experiment incorporates a multisession fMRI lexicon-learning experiment with multivariate, whole-brain analysis to further knowledge of the distributed networks supporting practice-related learning in schizophrenia (SZ). Participants with SZ were compared with healthy control (HC) participants as they learned a novel lexicon during two fMRI scans over a several-day period. All participants were trained to equal task proficiency prior to scanning. Behavioral Partial Least Squares, a multivariate analytic approach, was used to analyze the imaging data. Permutation testing was used to determine statistical significance, and bootstrap resampling to determine the reliability of the findings. With practice, HC participants transitioned to a brain-accuracy network incorporating dorsostriatal regions in late-learning stages. The SZ participants did not transition to this pattern despite comparable behavioral results. Instead, successful learners with SZ were differentiated primarily on the basis of greater engagement of perceptual and perceptual-integration brain regions. There is thus a different spatiotemporal unfolding of brain-learning relationships in SZ: given the same amount of practice, the movement from networks suggestive of effortful learning toward a subcortically driven procedural one differs from that of HC participants. Learning performance in SZ is driven by varying levels of engagement in perceptual regions, which suggests that perception itself is impaired and may impact downstream, "higher level" cognition.

  17. A Network Optimization Solution using SAS/OR Tools for the Department of the Army Branching Problem

    DTIC Science & Technology

    2010-02-18

    Keywords: OPTMODEL; NETFLOW; nodes; arcs; ROTC; assignments; basic branches; cadet satisfaction. Classification: Unclassified. This paper demonstrates how to implement a solution using the NETFLOW procedure and then repeat that network solution using the OPTMODEL procedure. [Figure 1, Supply: cadet data (5 of 2545) ordered by OMS.]

  18. Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach

    NASA Technical Reports Server (NTRS)

    Mak, Victor W. K.

    1986-01-01

    Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
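    A classic concrete instance of this kind of computation is exact Mean Value Analysis for a closed product-form queueing network, which rests on the same operational laws (a textbook sketch, not the paper's series-parallel procedure):

```python
def mva(service_demands, n_jobs):
    """Exact Mean Value Analysis for a closed queueing network.
    service_demands[k] = D_k (visit ratio times service time) at station k.
    Returns (throughput, per-station mean queue lengths) at population n_jobs."""
    q = [0.0] * len(service_demands)       # queue lengths at population 0
    x = 0.0
    for n in range(1, n_jobs + 1):
        # Arrival theorem: an arriving job sees the (n-1)-job queue lengths.
        r = [d * (1.0 + qk) for d, qk in zip(service_demands, q)]
        x = n / sum(r)                     # Little's law over the whole network
        q = [x * rk for rk in r]           # Little's law per station
    return x, q
```

    For a single station the throughput saturates at 1/D as jobs pile up, matching the operational bottleneck law.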

  19. Network Performance and Coordination in the Health, Education, Telecommunications System. Satellite Technology Demonstration, Technical Report No. 0422.

    ERIC Educational Resources Information Center

    Braunstein, Jean; Janky, James M.

    This paper describes the network coordination for the Health, Education, Telecommunications (HET) system. Specifically, it discusses HET network performance as a function of a specially-developed coordination system which was designed to link terrestrial equipment to satellite operations centers. Because all procedures and equipment developed for…

  20. Integrating Genetic and Functional Genomic Data to Elucidate Common Disease Traits

    NASA Astrophysics Data System (ADS)

    Schadt, Eric

    2005-03-01

    The reconstruction of genetic networks in mammalian systems is one of the primary goals in biological research, especially as such reconstructions relate to elucidating not only common, polygenic human diseases, but also living systems more generally. Here I present a statistical procedure for inferring causal relationships between gene expression traits and more classic clinical traits, including complex disease traits. This procedure has been generalized to the gene network reconstruction problem, where naturally occurring genetic variations in segregating mouse populations are used as a source of perturbations to elucidate tissue-specific gene networks. Differences in the extent of genetic control between genders and among four different tissues are highlighted. I also demonstrate that the networks derived from expression data in segregating mouse populations using the novel network reconstruction algorithm are able to capture causal associations between genes that result in increased predictive power, compared to more classically reconstructed networks derived from the same data. This approach to causal inference in large segregating mouse populations over multiple tissues not only elucidates fundamental aspects of transcriptional control, it also allows for the objective identification of key drivers of common human diseases.

  1. Deep Spatial-Temporal Joint Feature Representation for Video Object Detection.

    PubMed

    Zhao, Baojun; Zhao, Boya; Tang, Linbo; Han, Yuqi; Wang, Wenzheng

    2018-03-04

    With the development of deep neural networks, many object detection frameworks have shown great success in the fields of smart surveillance, self-driving cars, and facial recognition. However, the data sources are usually videos, and the object detection frameworks are mostly established on still images and only use the spatial information, which means that the feature consistency cannot be ensured because the training procedure loses temporal information. To address these problems, we propose a single, fully-convolutional neural network-based object detection framework that involves temporal information by using Siamese networks. In the training procedure, first, the prediction network combines the multiscale feature map to handle objects of various sizes. Second, we introduce a correlation loss by using the Siamese network, which provides neighboring frame features. This correlation loss represents object co-occurrences across time to aid the consistent feature generation. Since the correlation loss should use the information of the track ID and detection label, our video object detection network has been evaluated on the large-scale ImageNet VID dataset where it achieves a 69.5% mean average precision (mAP).

  2. Parameter Estimation for a Model of Space-Time Rainfall

    NASA Astrophysics Data System (ADS)

    Smith, James A.; Karr, Alan F.

    1985-08-01

    In this paper, parameter estimation procedures, based on data from a network of rainfall gages, are developed for a class of space-time rainfall models. The models, which are designed to represent the spatial distribution of daily rainfall, have three components, one that governs the temporal occurrence of storms, a second that distributes rain cells spatially for a given storm, and a third that determines the rainfall pattern within a rain cell. Maximum likelihood and method of moments procedures are developed. We illustrate that limitations on model structure are imposed by restricting data sources to rain gage networks. The estimation procedures are applied to a 240-mi2 (621 km2) catchment in the Potomac River basin.

  3. Consulting report on the NASA technology utilization network system

    NASA Technical Reports Server (NTRS)

    Hlava, Marjorie M. K.

    1992-01-01

    The purposes of this consulting effort are: (1) to evaluate the existing management and production procedures and workflow as they each relate to the successful development, utilization, and implementation of the NASA Technology Utilization Network System (TUNS) database; (2) to identify, as requested by the NASA Project Monitor, the strengths, weaknesses, areas of bottlenecking, and previously unaddressed problem areas affecting TUNS; (3) to recommend changes or modifications of existing procedures as necessary in order to effect corrections for the overall benefit of NASA TUNS database production, implementation, and utilization; and (4) to recommend the addition of alternative procedures, routines, and activities that will consolidate and facilitate the production, implementation, and utilization of the NASA TUNS database.

  4. Bayesian networks of age estimation and classification based on dental evidence: A study on the third molar mineralization.

    PubMed

    Sironi, Emanuele; Pinchi, Vilma; Pradella, Francesco; Focardi, Martina; Bozza, Silvia; Taroni, Franco

    2018-04-01

    Not only does the Bayesian approach offer a rational and logical environment for evidence evaluation in a forensic framework, but its flexible nature also allows scientists to deal coherently with the uncertainty attached to a collection of multiple items of evidence. Such flexibility might come at the expense of elevated computational complexity, which can be handled by using specific probabilistic graphical tools, namely Bayesian networks. In the current work, such probabilistic tools are used for evaluating dental evidence related to the development of third molars. A set of relevant properties characterizing the graphical models is discussed, and Bayesian networks are implemented to deal with the inferential process lying behind the estimation procedure, as well as to provide age estimates. These properties include operationality, flexibility, coherence, transparency and sensitivity. A data sample composed of Italian subjects was employed for the analysis; results were in agreement with previous studies in terms of point estimates and age classification. The influence of the prior probability elicitation on the Bayesian estimates and classifications was also analyzed. The findings also support taking multiple teeth into consideration in the evaluative procedure, since this results in increased robustness towards the prior probability elicitation process, as well as in more favorable outcomes from a forensic perspective. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
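    The inferential core of such a network is Bayes' rule; a toy two-class sketch of the age-threshold question, with made-up likelihoods and a naive independence assumption across teeth (the actual model is a full Bayesian network, not this simplification):

```python
def posterior_adult(prior_adult, p_stage_given_adult, p_stage_given_minor):
    """Bayes' rule for the two-class age question (adult vs. minor)
    given one observed tooth-development stage."""
    joint_adult = prior_adult * p_stage_given_adult
    joint_minor = (1.0 - prior_adult) * p_stage_given_minor
    return joint_adult / (joint_adult + joint_minor)

def posterior_adult_multi(prior_adult, likelihood_pairs):
    """Combine several teeth, naively assumed conditionally independent:
    multiply the prior odds by one Bayes factor per tooth."""
    odds = prior_adult / (1.0 - prior_adult)
    for la, lm in likelihood_pairs:         # (P(stage|adult), P(stage|minor))
        odds *= la / lm
    return odds / (1.0 + odds)
```

    Comparing the single-tooth and multi-tooth posteriors shows why adding teeth increases robustness to the prior: each extra Bayes factor moves the posterior further from the elicited prior probability.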

  5. A Framework for Dynamic Constraint Reasoning Using Procedural Constraints

    NASA Technical Reports Server (NTRS)

    Jonsson, Ari K.; Frank, Jeremy D.

    1999-01-01

    Many complex real-world decision and control problems contain an underlying constraint reasoning problem. This is particularly evident in a recently developed approach to planning, where almost all planning decisions are represented by constrained variables. This translates a significant part of the planning problem into a constraint network whose consistency determines the validity of the plan candidate. Since higher-level choices about control actions can add or remove variables and constraints, the underlying constraint network is invariably highly dynamic. Arbitrary domain-dependent constraints may be added to the constraint network and the constraint reasoning mechanism must be able to handle such constraints effectively. Additionally, real problems often require handling constraints over continuous variables. These requirements present a number of significant challenges for a constraint reasoning mechanism. In this paper, we introduce a general framework for handling dynamic constraint networks with real-valued variables, by using procedures to represent and effectively reason about general constraints. The framework is based on a sound theoretical foundation, and can be proven to be sound and complete under well-defined conditions. Furthermore, the framework provides hybrid reasoning capabilities, as alternative solution methods like mathematical programming can be incorporated into the framework, in the form of procedures.
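    The idea of a procedural constraint can be sketched as a function that narrows real-valued interval domains, iterated to a fixpoint (an illustrative toy for the constraint x + y = z, not the framework described in the paper):

```python
def narrow_sum(x, y, z):
    """Procedural constraint x + y = z over interval domains (lo, hi):
    each variable's domain is narrowed using the other two."""
    nx = (max(z[0] - y[1], x[0]), min(z[1] - y[0], x[1]))
    ny = (max(z[0] - x[1], y[0]), min(z[1] - x[0], y[1]))
    nz = (max(x[0] + y[0], z[0]), min(x[1] + y[1], z[1]))
    return nx, ny, nz

def propagate(x, y, z, max_iter=100):
    """Iterate the narrowing procedure to a fixpoint; None if inconsistent."""
    for _ in range(max_iter):
        nx, ny, nz = narrow_sum(x, y, z)
        if (nx, ny, nz) == (x, y, z):
            break                           # fixpoint reached
        x, y, z = nx, ny, nz
        if any(lo > hi for lo, hi in (x, y, z)):
            return None                     # an empty domain: inconsistent
    return x, y, z
```

    Because the constraint is just a procedure over domains, arbitrary domain-dependent relations can be plugged into the same propagation loop, which is the appeal of the procedural representation.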

  6. Pure F-actin networks are distorted and branched by steps in the critical-point drying method.

    PubMed

    Resch, Guenter P; Goldie, Kenneth N; Hoenger, Andreas; Small, J Victor

    2002-03-01

    Elucidation of the ultrastructural organization of actin networks is crucial for understanding the molecular mechanisms underlying actin-based motility. Results obtained from cytoskeletons and actin comets prepared by the critical-point procedure, followed by rotary shadowing, support recent models incorporating actin filament branching as a main feature of lamellipodia and pathogen propulsion. Since actin branches were not evident in earlier images obtained by negative staining, we explored how these differences arise. Accordingly, we have followed the structural fate of dense networks of pure actin filaments subjected to steps of the critical-point drying protocol. The filament networks have been visualized in parallel by both cryo-electron microscopy and negative staining. Our results demonstrate the selective creation of branches and other artificial structures in pure F-actin networks by the critical-point procedure and challenge the reliability of this method for preserving the detailed organization of actin assemblies that drive motility. (c) 2002 Elsevier Science (USA).

  7. Effect of genetic algorithm as a variable selection method on different chemometric models applied for the analysis of binary mixture of amoxicillin and flucloxacillin: A comparative study

    NASA Astrophysics Data System (ADS)

    Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed

    2016-03-01

    Different chemometric models were applied for the quantitative analysis of amoxicillin (AMX) and flucloxacillin (FLX) in their binary mixtures, namely, partial least squares (PLS), spectral residual augmented classical least squares (SRACLS), concentration residual augmented classical least squares (CRACLS) and artificial neural networks (ANNs). All methods were applied with and without a variable selection procedure (genetic algorithm, GA). The methods were used for the quantitative analysis of the drugs in laboratory-prepared mixtures and a real market sample by processing the UV spectral data. More robust and simpler models were obtained by applying GA. The proposed methods were found to be rapid and simple, and required no preliminary separation steps.
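    The GA-based variable selection mentioned above can be illustrated with a toy sketch: binary chromosomes mark which variables (wavelengths) enter the model, and selection, crossover and mutation evolve the subset. In a real run each subset would be scored by cross-validated PLS or ANN prediction error; the fitness function and the "informative" set below are hypothetical stand-ins for illustration only.

    ```python
    import random

    random.seed(0)
    N_VARS, POP, GENS = 12, 20, 30
    INFORMATIVE = {1, 4, 7}            # hypothetical "useful" wavelengths

    def fitness(chrom):
        """Reward selecting informative variables, penalize extras."""
        selected = {i for i, bit in enumerate(chrom) if bit}
        return len(selected & INFORMATIVE) - 0.1 * len(selected - INFORMATIVE)

    def evolve():
        pop = [[random.randint(0, 1) for _ in range(N_VARS)] for _ in range(POP)]
        for _ in range(GENS):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:POP // 2]           # elitist selection
            children = []
            for _ in range(POP - len(parents)):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, N_VARS)
                child = a[:cut] + b[cut:]      # one-point crossover
                i = random.randrange(N_VARS)   # point mutation
                child[i] ^= 1
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve()
    print([i for i, bit in enumerate(best) if bit])
    ```

    With elitism the best chromosome never degrades, so the run converges toward the informative subset while the penalty prunes superfluous variables.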

  8. Who Do Hospital Physicians and Nurses Go to for Advice About Medications? A Social Network Analysis and Examination of Prescribing Error Rates.

    PubMed

    Creswick, Nerida; Westbrook, Johanna Irene

    2015-09-01

    To measure the weekly medication advice-seeking networks of hospital staff, to compare patterns across professional groups, and to examine these in the context of prescribing error rates. A social network analysis was conducted. All 101 staff in 2 wards in a large, academic teaching hospital in Sydney, Australia, were surveyed (response rate, 90%) using a detailed social network questionnaire. The extent of weekly medication advice seeking was measured by density of connections; the proportion of reciprocal relationships, by reciprocity; the number of colleagues to whom each person provided advice, by in-degree; and perceptions of the amount and impact of advice seeking between physicians and nurses. Data on prescribing error rates from the 2 wards were compared. Weekly medication advice-seeking networks were sparse (density: 7% ward A and 12% ward B). Information sharing across professional groups was modest, and rates of reciprocation of advice were low (9% ward A, 14% ward B). Pharmacists provided advice to most people, and junior physicians also played central roles. Senior physicians provided medication advice to few people. Many staff perceived that physicians rarely sought advice from nurses when prescribing, but almost all believed that an increase in communication between physicians and nurses about medications would improve patient safety. The medication networks in ward B had higher density and reciprocation and fewer senior physicians who were isolates. Ward B had a significantly lower rate of both procedural and clinical prescribing errors than ward A (0.63 clinical prescribing errors per admission [95% CI, 0.47-0.79] versus 1.81 per admission [95% CI, 1.49-2.13]). Medication advice-seeking networks among staff on hospital wards are limited. Hubs of advice provision include pharmacists, junior physicians, and senior nurses. Senior physicians are poorly integrated into medication advice networks.
Strategies to improve the advice-giving networks between senior and junior physicians may be a fruitful area for intervention to improve medication safety. We found that one ward with stronger networks also had a significantly lower prescribing error rate, suggesting a promising area for further investigation.
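    The network measures used in this study (density, reciprocity, in-degree) are standard directed-graph statistics and can be computed in a few lines. The staff roles and advice ties below are hypothetical illustrations, not the study's data.

    ```python
    # Directed edge (a, b) means "a seeks medication advice from b".

    def density(nodes, edges):
        """Fraction of possible directed ties that are present."""
        n = len(nodes)
        return len(edges) / (n * (n - 1)) if n > 1 else 0.0

    def reciprocity(edges):
        """Fraction of ties that are returned (A->B and B->A both present)."""
        edge_set = set(edges)
        if not edge_set:
            return 0.0
        mutual = sum(1 for (a, b) in edge_set if (b, a) in edge_set)
        return mutual / len(edge_set)

    def in_degree(nodes, edges):
        """Number of colleagues who seek advice from each person."""
        deg = {v: 0 for v in nodes}
        for _, b in edges:
            deg[b] += 1
        return deg

    nodes = ["pharmacist", "junior_md", "senior_md", "nurse"]
    edges = [("junior_md", "pharmacist"), ("nurse", "pharmacist"),
             ("pharmacist", "junior_md"), ("nurse", "junior_md")]

    print(density(nodes, edges))     # 4 of 12 possible ties -> 0.333...
    print(reciprocity(edges))        # 2 of 4 ties reciprocated -> 0.5
    print(in_degree(nodes, edges))   # pharmacist is the main advice hub
    ```

    A high in-degree here corresponds to the paper's "hubs of advice provision", while a low density matches the sparse networks it reports.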

  9. Verifying the buildingEXODUS through an emergency response procedure (ERP) exercise at an underground intervention shaft

    NASA Astrophysics Data System (ADS)

    Tajedi, Noor Aqilah A.; Sukor, Nur Sabahiah A.; Ismail, Mohd Ashraf M.; Shamsudin, Shahrul A.

    2017-10-01

    An Emergency Response Plan (ERP) is an essential safety procedure that needs to be taken into account for railway operations, especially for underground railway networks. Several parameters need to be taken into consideration in planning an ERP, such as the design of tunnels and intervention shafts and the operating procedures for underground transportation systems. Therefore, the purpose of this paper is to observe and analyse an Emergency Response Procedure (ERP) exercise for the underground train network at the LRT Kelana Jaya Line. The exercise was conducted at one of the underground intervention shaft exits, where the height of the staircase from the bottom floor to the upper floor was 24.59 metres. Four cameras were located at selected levels of the shaft, and 71 participants were assigned to the evacuation exercise. The participants were tagged with a number at the front and back of their safety vests. Ten respondents were randomly selected to give details of their height and weight and, at the same time, to self-record the time taken to evacuate from the bottom to the top of the shaft. The video footage taken during the ERP was analysed, and the data were used for the verification process in the buildingEXODUS simulation software. It was found that the results of the ERP experiment were significantly similar to the simulation results, thereby successfully verifying the simulation. This verification process was important to ensure that the results of the simulation were in accordance with the real situation. A further evacuation analysis therefore made use of the results from this verification.

  10. Neural network-based recognition of whistlers on spectrograms detected by satellite

    NASA Astrophysics Data System (ADS)

    Conti, Livio

    2016-04-01

    We present a system to automatically recognize and classify the occurrence of whistler waves on spectrograms of electric field measurements performed by satellite. Whistlers - VLF waves generated by lightning, with a specific spectral dispersion relation - can induce precipitation of trapped Van Allen particles and play a role in the chemistry of some atmospheric components (mainly NOx). Moreover, it has also been suggested that an increase in the number of anomalous whistlers (i.e. whistlers with a high value of the dispersion constant) could be induced by disturbances in the Earth-ionosphere waveguide generated by seismo-electromagnetic emissions. On satellite, the recognition of whistlers requires analyzing high-resolution spectrograms that cannot be downloaded to Earth owing to the limits of data transmission. For this reason, identification and classification must be performed in real time on board, avoiding the download of all the unprocessed data. The procedure that we have developed is based on a Time Delay Neural Network (TDNN). The TDNN, proposed some years ago for speech recognition, can also be fruitfully applied to the real-time analysis of electromagnetic spectrograms in order to detect phenomena characterized by a specific shape/signature, such as whistler waves. Some studies have been performed with the RNF experiment on board the DEMETER satellite, and our algorithm could be adopted on board the CSES satellite (China Seismo-Electromagnetic Satellite), whose launch is scheduled by the end of 2016. Moreover, the procedure can also be adapted to the automatic analysis of whistlers detected on the ground.
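    The time-delay idea at the core of a TDNN can be sketched simply: the same small weight template is applied to every time-shifted window of the spectrogram, so a dispersed trace can be scored wherever it occurs in time. The template and spectrogram below are random toy arrays, not the RNF/CSES pipeline; a trained TDNN would stack several such shared-weight layers.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_freq, n_time, delay = 8, 50, 5

    spectrogram = rng.random((n_freq, n_time))   # toy frequency-time image
    template = rng.random((n_freq, delay))       # weights shared over delays

    def tdnn_scores(spec, w):
        """Score each window of `delay` consecutive time steps with shared weights."""
        n_windows = spec.shape[1] - w.shape[1] + 1
        return np.array([np.sum(spec[:, t:t + w.shape[1]] * w)
                         for t in range(n_windows)])

    scores = tdnn_scores(spectrogram, template)
    print(scores.shape)   # one score per time window: (46,)
    ```

    Thresholding or arg-maxing these per-window scores is what lets the detector flag candidate whistler onsets anywhere along the time axis with one fixed set of weights.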

  11. Far-field tsunami of 2017 Mw 8.1 Tehuantepec, Mexico earthquake recorded by Chilean tide gauge network: Implications for tsunami warning systems

    NASA Astrophysics Data System (ADS)

    González-Carrasco, J. F.; Benavente, R. F.; Zelaya, C.; Núñez, C.; Gonzalez, G.

    2017-12-01

    The 2017 Mw 8.1 Tehuantepec earthquake generated a moderate tsunami, which was registered by the near-field tide gauge network, and the PTWC issued a tsunami threat state for Mexico. In the case of Chile, the forecast of tsunami waves indicated amplitudes of less than 0.3 meters above tide level, advising an informative state of threat without activation of evacuation procedures. Nevertheless, during sea level monitoring of the network we detected wave amplitudes (> 0.3 m) indicating a possible change of threat state. Ultimately, the NTWS maintained the informative level of threat based on mathematical filtering analysis of the sea level records. After the 2010 Mw 8.8 Maule earthquake, the Chilean National Tsunami Warning System (NTWS) increased its observational capabilities to improve early response. The most important operational efforts have focused on strengthening the tide gauge network for the national area of responsibility. Furthermore, technological initiatives such as the Integrated Tsunami Prediction and Warning System (SIPAT) have segmented the area of responsibility into blocks in order to focus early warning and evacuation procedures on the most affected coastal areas, while maintaining an informative state for areas distant from a near-field earthquake. In the case of far-field events, the NTWS follows the recommendations proposed by the Pacific Tsunami Warning Center (PTWC), including comprehensive monitoring of sea level records, such as tide gauges and DART (Deep-ocean Assessment and Reporting of Tsunamis) buoys, to evaluate the state of tsunami threat in the area of responsibility. The main objective of this work is to analyze the first-order physical processes involved in the far-field propagation and coastal impact of the tsunami, including implications for decision-making by the NTWS. To explore our main question, we construct a finite-fault model of the 2017 Mw 8.1 Tehuantepec earthquake. We employ the rupture model to simulate a transoceanic tsunami modeled with Neowave2D. We generate synthetic time series at tide gauge stations and compare them with recorded sea level data to rule out meteorological processes, such as storms and surges. Resonance analysis is performed by wavelet techniques.

  12. Aerosol Optical Depths over Oceans: a View from MISR Retrievals and Collocated MAN and AERONET in Situ Observations

    NASA Technical Reports Server (NTRS)

    Witek, Marcin L.; Garay, Michael J.; Diner, David J.; Smirnov, Alexander

    2013-01-01

    In this study, aerosol optical depths over oceans are analyzed from satellite and surface perspectives. Multiangle Imaging SpectroRadiometer (MISR) aerosol retrievals are investigated and validated primarily against Maritime Aerosol Network (MAN) observations. Furthermore, AErosol RObotic NETwork (AERONET) data from 19 island and coastal sites are incorporated in this study. A total of 270 MISR-MAN comparison points scattered across all oceans were identified. MISR on average overestimates aerosol optical depths (AODs) by 0.04 as compared to MAN; the correlation coefficient and root-mean-square error are 0.95 and 0.06, respectively. A new screening procedure based on retrieval region characterization is proposed, which is capable of substantially reducing MISR retrieval biases. Over 1000 additional MISR-AERONET comparison points are added to the analysis to confirm the validity of the method. The bias reduction is effective within all AOD ranges. Setting a clear-flag fraction threshold of 0.6 reduces the bias to below 0.02, which is close to a typical ground-based measurement uncertainty. Twelve years of MISR data are analyzed with the new screening procedure. The average over-ocean AOD is reduced by 0.03, from 0.15 to 0.12. The largest AOD decrease is observed at high latitudes of both hemispheres, regions with climatologically high cloud cover. It is postulated that the screening procedure eliminates spurious retrieval errors associated with cloud contamination and cloud adjacency effects. The proposed filtering method can be used for validating aerosol and chemical transport models.
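    The screening step described above amounts to a simple filter: discard retrievals whose fraction of cloud-free ("clear-flagged") sub-regions falls below a threshold (0.6 in the study) before averaging. The records below are made-up illustrations of the idea, not MISR data.

    ```python
    # Hedged sketch of clear-flag-fraction screening for aerosol retrievals.
    THRESHOLD = 0.6

    retrievals = [
        {"aod": 0.15, "clear_fraction": 0.9},
        {"aod": 0.45, "clear_fraction": 0.3},   # likely cloud-contaminated
        {"aod": 0.12, "clear_fraction": 0.7},
    ]

    screened = [r for r in retrievals if r["clear_fraction"] >= THRESHOLD]
    mean_aod = sum(r["aod"] for r in screened) / len(screened)
    print(round(mean_aod, 3))   # 0.135 - the high, low-clear-fraction AOD is removed
    ```

    Dropping the cloud-contaminated record lowers the mean AOD, mirroring the paper's reduction of the over-ocean average once cloud-adjacent retrievals are screened out.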

  13. A machine learning methodology for the selection and classification of spontaneous spinal cord dorsum potentials allows disclosure of structured (non-random) changes in neuronal connectivity induced by nociceptive stimulation

    PubMed Central

    Martin, Mario; Contreras-Hernández, Enrique; Béjar, Javier; Esposito, Gennaro; Chávez, Diógenes; Glusman, Silvio; Cortés, Ulises; Rudomin, Pablo

    2015-01-01

    Previous studies aimed at disclosing the functional organization of the neuronal networks involved in the generation of the spontaneous cord dorsum potentials (CDPs) generated in the lumbosacral spinal segments used predetermined templates to select specific classes of spontaneous CDPs. Since this procedure was time consuming and required continuous supervision, it was limited to the analysis of two specific types of CDPs (negative CDPs and negative-positive CDPs), thus excluding potentials that may reflect the activation of other neuronal networks of presumed functional relevance. We now present a novel procedure based on machine learning that allows the efficient and unbiased selection of a variety of spontaneous CDPs with different shapes and amplitudes. The reliability and performance of the present method are evaluated by analyzing the effects on the probabilities of generation of different classes of spontaneous CDPs induced by the intradermic injection of small amounts of capsaicin in the anesthetized cat, a procedure known to induce a state of central sensitization leading to allodynia and hyperalgesia. The selection method presently described allowed detection of spontaneous CDPs with specific shapes and amplitudes that are assumed to represent the activation of functionally coupled sets of dorsal horn neurones that acquire different, structured configurations in response to nociceptive stimuli. These changes are considered responses tending to adjust the transmission of sensory information to specific functional requirements as part of homeostatic adjustments. PMID:26379540

  14. Mean Field Analysis of Large-Scale Interacting Populations of Stochastic Conductance-Based Spiking Neurons Using the Klimontovich Method

    NASA Astrophysics Data System (ADS)

    Gandolfo, Daniel; Rodriguez, Roger; Tuckwell, Henry C.

    2017-03-01

    We investigate the dynamics of large-scale interacting neural populations, composed of conductance-based, spiking model neurons with modifiable synaptic connection strengths, which are possibly also subjected to external noisy currents. The network dynamics is controlled by a set of neural population probability distributions (PPDs), which are constructed along the same lines as in the Klimontovich approach to the kinetic theory of plasmas. An exact, non-closed, nonlinear system of integro-partial differential equations is derived for the PPDs. As is customary, a closing procedure leads to a mean field limit. The equations we have obtained are of the same type as those recently derived using rigorous techniques of probability theory. The numerical solutions of these so-called McKean-Vlasov-Fokker-Planck equations, which are only valid in the limit of infinite-size networks, show that the statistical measures obtained from the PPDs are in good agreement with those obtained through direct integration of the stochastic dynamical system for large but finite-size networks. Although numerical solutions have been obtained for networks of FitzHugh-Nagumo model neurons, which are often used to approximate Hodgkin-Huxley model neurons, the theory can be readily applied to networks of general conductance-based model neurons of arbitrary dimension.

  15. Statistical methods and neural network approaches for classification of data from multiple sources

    NASA Technical Reports Server (NTRS)

    Benediktsson, Jon Atli; Swain, Philip H.

    1990-01-01

    Statistical methods for classification of data from multiple data sources are investigated and compared to neural network models. A general problem with using conventional multivariate statistical approaches to classify data of multiple types is that a multivariate distribution cannot be assumed for the classes in the data sources. Another common problem with statistical classification methods is that the data sources are not equally reliable. This means that the data sources need to be weighted according to their reliability, but most statistical classification methods have no mechanism for this. This research focuses first on statistical methods which can overcome these problems: a method of statistical multisource analysis and consensus theory. Reliability measures for weighting the data sources in these methods are suggested and investigated. Secondly, this research focuses on neural network models. The neural networks are distribution-free, since no prior knowledge of the statistical distribution of the data is needed. This is an obvious advantage over most statistical classification methods. The neural networks also automatically handle the question of how much weight each data source should have. On the other hand, their training process is iterative and can take a very long time. Methods to speed up the training procedure are introduced and investigated. Experimental results of classification using both neural network models and statistical methods are given, and the approaches are compared based on these results.

  16. Reinforcement learning design-based adaptive tracking control with less learning parameters for nonlinear discrete-time MIMO systems.

    PubMed

    Liu, Yan-Jun; Tang, Li; Tong, Shaocheng; Chen, C L Philip; Li, Dong-Juan

    2015-01-01

    Based on a neural network (NN) approximator, an online reinforcement learning algorithm is proposed for a class of affine multiple-input multiple-output (MIMO) nonlinear discrete-time systems with unknown functions and disturbances. In the design procedure, two networks are provided: one is an action network that generates an optimal control signal, and the other is a critic network that approximates the cost function. An optimal control signal and adaptation laws can be generated based on the two NNs. In previous approaches, the weights of the critic and action networks are updated based on the gradient descent rule, and the estimates of the optimal weight vectors are directly adjusted in the design. Consequently, compared with existing results, the main contributions of this paper are: 1) only two parameters need to be adjusted, so the number of adaptation laws is smaller than in previous results; and 2) the updated parameters do not depend on the number of subsystems for MIMO systems, and the tuning rules are replaced by adjusting the norms of the optimal weight vectors in both the action and critic networks. It is proven using the Lyapunov analysis method that the tracking errors, the adaptation laws, and the control inputs are uniformly bounded. Simulation examples are employed to illustrate the effectiveness of the proposed algorithm.

  17. State criminal justice telecommunications (STACOM). Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Fielding, J. E.; Frewing, H. K.; Lee, J. J.; Leflang, W. G.; Reilly, N. B.

    1977-01-01

    Techniques for identifying user requirements and network designs for criminal justice networks on a statewide basis are discussed. Topics covered include: methods for determining the data required; data collection and survey; data organization procedures; and methods for forecasting network traffic volumes. The developed network design techniques center on a computerized topology program that enables the user to generate least-cost network topologies satisfying network traffic requirements, response time requirements, and other specified functional requirements. The developed techniques were applied in Texas and Ohio, and the results of these studies are presented.

  18. Skin antiseptics in venous puncture site disinfection for preventing blood culture contamination: A Bayesian network meta-analysis of randomized controlled trials.

    PubMed

    Liu, Wenjie; Duan, Yuchen; Cui, Wenyao; Li, Li; Wang, Xia; Dai, Heling; You, Chao; Chen, Maojun

    2016-07-01

    To compare the efficacy of several antiseptics in decreasing the blood culture contamination rate. Network meta-analysis. Electronic searches of PubMed and Embase were conducted up to November 2015. Only randomized controlled trials or quasi-randomized controlled trials were eligible. We applied no language restriction. A comprehensive review of articles in the reference lists was also accomplished for possible relevant studies. Relevant studies evaluating the efficacy of different antiseptics at the venous puncture site for decreasing the blood culture contamination rate were included. The data were extracted from the included randomized controlled trials by two authors independently. The risk of bias was evaluated using the Detsky scale by two authors independently. We used WinBUGS 1.43 software and the statistical model described by Chaimani to perform this network meta-analysis. Graphs of the statistical results were then generated using the 'networkplot', 'ifplot', 'netfunnel' and 'sucra' procedures in STATA 13.0. Odds ratios and 95% confidence intervals were assessed for dichotomous data. A probability of p less than 0.05 was considered statistically significant. Compared with ordinary meta-analyses, this network meta-analysis offered hierarchies for the efficacy of different antiseptics in decreasing the blood culture contamination rate. Seven randomized controlled trials involving 34,408 blood samples were eligible for the meta-analysis. No significant difference was found in blood culture contamination rate among the different antiseptics. No significant difference was found between non-alcoholic antiseptics and alcoholic antiseptics, alcoholic chlorhexidine and povidone iodine, chlorhexidine and iodine compounds, or povidone iodine and iodine tincture in this respect. Different antiseptics may not affect the blood culture contamination rate.
Different intervals between the skin disinfection and the venous puncture, the different settings (emergency room, medical wards, and intensive care units) and the performance of the phlebotomy may affect the blood culture contamination rate. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Implicit Procedural Learning in Fragile X and Down Syndrome

    ERIC Educational Resources Information Center

    Bussy, G.; Charrin, E.; Brun, A.; Curie, A.; des Portes, V.

    2011-01-01

    Background: Procedural learning refers to rule-based motor skill learning and storage. It involves the cerebellum, striatum and motor areas of the frontal lobe network. Fragile X syndrome, which has been linked with anatomical abnormalities within the striatum, may result in implicit procedural learning deficit. Methods: To address this issue, a…

  20. A near-optimum procedure for selecting stations in a streamgaging network

    USGS Publications Warehouse

    Lanfear, Kenneth J.

    2005-01-01

    Two questions are fundamental to Federal government goals for the network of streamgages operated by the U.S. Geological Survey: (1) how well does the present network of streamgaging stations meet defined Federal goals, and (2) what is the optimum set of stations to add or reactivate to support the remaining goals? The solution involves an incremental-stepping procedure based on Basic Feasible Incremental Solutions (BFISs), where each BFIS satisfies at least one Federal streamgaging goal. A set of minimum Federal goals for streamgaging is defined to include water measurements for legal compacts and decrees, flooding, water budgets, regionalization of streamflow characteristics, and water quality. Fully satisfying all these goals under the assumptions outlined in this paper would require adding 887 new streamgaging stations to the U.S. Geological Survey network and reactivating an additional 857 stations that are currently inactive.
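    An incremental-stepping selection of this kind can be illustrated as a greedy set cover: at each step, add the candidate station that satisfies the most still-unmet goals. The stations and goals below are hypothetical, not the USGS inputs; the real procedure steps through BFISs rather than raw stations.

    ```python
    def select_stations(candidates, goals):
        """Greedy selection. candidates: {station: set of goals it satisfies}."""
        unmet, chosen = set(goals), []
        while unmet:
            # Pick the station covering the most remaining goals.
            best = max(candidates, key=lambda s: len(candidates[s] & unmet))
            gain = candidates[best] & unmet
            if not gain:
                break                  # remaining goals cannot be met
            chosen.append(best)
            unmet -= gain
        return chosen, unmet

    candidates = {
        "A": {"compact", "flood"},
        "B": {"flood", "water_budget"},
        "C": {"water_quality"},
    }
    goals = {"compact", "flood", "water_budget", "water_quality"}

    chosen, unmet = select_stations(candidates, goals)
    print(chosen, unmet)   # ['A', 'B', 'C'] set()
    ```

    Greedy covers are not always globally optimal, which is why the paper describes the result as near-optimum rather than optimum.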

  1. Developing a Procedure for Segmenting Meshed Heat Networks of Heat Supply Systems without Outflows

    NASA Astrophysics Data System (ADS)

    Tokarev, V. V.

    2018-06-01

    The heat supply systems of cities have, as a rule, a ring structure with the possibility of redistributing the flows. Although a ring structure is more reliable than a radial one, the operators of heat networks prefer to use them in normal modes according to a scheme without overflows of the heat carrier between the heat mains. With such a scheme, it is easier to adjust the networks and to detect and locate faults in them. The article proposes a formulation of the heat network segmenting problem. The problem is set in terms of optimization, with the heat supply system's excess hydraulic power used as the optimization criterion. The heat supply system computer model has a hierarchically interconnected multilevel structure. Since iterative calculations are only carried out for the level of trunk heat networks, decomposing the entire system into levels allows the dimensionality of the solved subproblems to be reduced by an order of magnitude. An attempt to solve the problem by fully enumerating possible segmentation versions is not feasible for systems of realistic size. The article suggests a procedure for finding a rational segmentation of heat supply networks that limits the search to versions dividing the system into segments near the flow convergence nodes, with subsequent refinement of the solution. The refinement is performed in two stages according to the total excess hydraulic power criterion. At the first stage, the loads are redistributed among the sources. After that, the heat networks are divided into independent fragments, and the possibility of increasing the excess hydraulic power in the obtained fragments is checked by shifting the division places inside a fragment. The proposed procedure was tested on a municipal heat supply system involving six heat mains fed from a common source, 24 loops within the feeding mains plane, and more than 5000 consumers.
    Application of the proposed segmentation procedure made it possible to find a version requiring 3% less hydraulic power in the heat supply system than the version found using the simultaneous segmentation method.

  2. Update on the activities of the GGOS Bureau of Networks and Observations

    NASA Technical Reports Server (NTRS)

    Pearlman, Michael R.; Pavlis, Erricos C.; Ma, Chopo; Noll, Carey; Thaller, Daniela; Richter, Bernd; Gross, Richard; Neilan, Ruth; Mueller, Juergen; Barzaghi, Ricardo; hide

    2016-01-01

    The recently reorganized GGOS Bureau of Networks and Observations has many elements associated with building and sustaining the infrastructure that supports the Global Geodetic Observing System (GGOS) through the development and maintenance of the International Terrestrial and Celestial Reference Frames, improved gravity field models and their incorporation into the reference frame, the production of precision orbits for missions of interest to GGOS, and many other applications. The affiliated Service Networks (IVS, ILRS, IGS, IDS, and now the IGFS and the PSMSL) continue to grow geographically and to improve core and co-location site performance with newer technologies. Efforts are underway to expand GGOS participation and outreach. Several groups are undertaking initiatives and seeking partnerships to update existing sites and expand the networks in geographic areas devoid of coverage. New satellites are being launched by the space agencies in disciplines relevant to GGOS. Working groups now constitute an integral part of the Bureau, providing key services to GGOS. Their activities include: projecting future network capability and examining trade-off options for station deployment and technology upgrades; developing metadata collection and online availability strategies; improving coordination and information exchange with the missions for better ground-based network response and space-segment adequacy for the realization of GGOS goals; and standardizing site-tie measurement, archiving, and analysis procedures. This poster presents the progress in the Bureau's activities and its efforts to expand the networks and make them more effective in supporting GGOS.

  3. The concurrent multiplicative-additive approach for gauge-radar/satellite multisensor precipitation estimates

    NASA Astrophysics Data System (ADS)

    Garcia-Pintado, J.; Barberá, G. G.; Erena Arrabal, M.; Castillo, V. M.

    2010-12-01

    Objective analysis schemes (OAS), also called "successive correction methods" or "observation nudging", have been proposed for multisensor precipitation estimation combining remote sensing data (meteorological radar or satellite) with data from ground-based rain gauge networks. However, in contrast to the more complex geostatistical approaches, OAS techniques for this use are not optimized. On the other hand, geostatistical techniques ideally require, at the least, modelling the covariance from the rain gauge data at every time step evaluated, which commonly cannot be done soundly. Here, we propose a new procedure (the concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) for operational rainfall estimation using rain gauges and meteorological radar, which does not require explicit modelling of spatial covariances. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, the within-storm variability of rainfall and the fractional coverage of rainfall are taken into account. Thus both the spatially nonuniform radar bias, given that rainfall is detected, and the bias in radar detection of rainfall are handled. The interpolation procedure of CMA-OAS is built on the OAS, whose purpose is to estimate a filtered spatial field of the variable of interest through successive correction of residuals resulting from a Gaussian kernel smoother applied to spatial samples. The CMA-OAS first poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at ground level.
The approach considers radar estimates as background a priori information (first guess), so that nudging to observations (gauges) may be relaxed smoothly to the first guess, and the relaxation shape is obtained from the sequential optimization. The procedure is suited to relatively sparse rain gauge networks. To show the procedure, six storms are analyzed at hourly steps over 10,663 km2. Results generally indicated an improved quality with respect to other methods evaluated: a standard mean-field bias adjustment, an OAS spatially variable adjustment with multiplicative factors, ordinary cokriging, and kriging with external drift. In theory, it could be equally applicable to gauge-satellite estimates and other hydrometeorological variables.
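    The successive-correction core of an OAS can be sketched in one dimension: residuals between gauge observations and the current field are spread onto the grid with a Gaussian kernel, and the pass is repeated with a shrinking kernel radius so the field relaxes from the radar first guess toward the gauges. The grid, gauge values and radii below are toy data, not the CMA-OAS itself.

    ```python
    import numpy as np

    def oas(grid_x, first_guess, gauge_x, gauge_obs, radii):
        """Successive correction with a Gaussian kernel smoother (1-D sketch)."""
        field = first_guess.copy()
        for r in radii:                   # successive passes, narrowing radius
            w = np.exp(-((grid_x[:, None] - gauge_x[None, :]) / r) ** 2)
            est_at_gauges = np.interp(gauge_x, grid_x, field)
            residual = gauge_obs - est_at_gauges
            field += (w * residual).sum(axis=1) / (w.sum(axis=1) + 1e-12)
        return field

    grid_x = np.linspace(0.0, 10.0, 11)
    first_guess = np.full(11, 2.0)        # e.g. radar background (first guess)
    gauge_x = np.array([2.0, 7.0])
    gauge_obs = np.array([3.0, 1.0])      # gauges disagree with the background

    field = oas(grid_x, first_guess, gauge_x, gauge_obs, radii=[4.0, 2.0, 1.0])
    print(np.round(field[2], 2), np.round(field[7], 2))   # nudged to ~3.0 and ~1.0
    ```

    Far from the gauges the field stays close to the first guess, which is the "nudging relaxed smoothly to the background" behaviour the abstract describes.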

  4. Classify epithelium-stroma in histopathological images based on deep transferable network.

    PubMed

    Yu, X; Zheng, H; Liu, C; Huang, Y; Ding, X

    2018-04-20

    Recently, deep learning methods have received increasing attention in histopathological image analysis. However, traditional deep learning methods assume that training data and test data follow the same distribution, which imposes certain limitations in real-world histopathological applications. Moreover, it is costly to recollect a large amount of labeled histology data to train a new neural network for each specified image acquisition procedure, even for similar tasks. In this paper, unsupervised domain adaptation is introduced into a typical deep convolutional neural network (CNN) model to avoid repeated labelling. The unsupervised domain adaptation is implemented by adding two regularisation terms, namely feature-based adaptation and entropy minimisation, to the objective function of a widely used CNN model, the AlexNet. Three independent public epithelium-stroma datasets were used to verify the proposed method. The experimental results demonstrate that in epithelium-stroma classification, the proposed method achieves better performance than commonly used deep learning methods and some existing deep domain adaptation methods. Therefore, the proposed method can be considered a better option for real-world applications of histopathological image analysis, because there is no requirement to recollect large-scale labeled data for every specified domain. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.
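The shape of such a two-regulariser objective can be illustrated schematically. This is a hedged sketch, not the paper's implementation: the feature-based adaptation term is rendered here as a linear-kernel MMD (squared distance between mean source and target features), and the weights lam_feat and lam_ent are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def adapted_objective(src_logits, src_labels, src_feat, tgt_feat, tgt_logits,
                      lam_feat=1.0, lam_ent=0.1):
    """Illustrative domain-adaptation loss: source cross-entropy plus a
    feature-alignment term (squared distance between mean features, i.e.
    linear-kernel MMD) and the entropy of the target predictions."""
    p_src = softmax(src_logits)
    n = len(src_labels)
    ce = -np.log(p_src[np.arange(n), src_labels] + 1e-12).mean()
    feat_gap = ((src_feat.mean(0) - tgt_feat.mean(0)) ** 2).sum()
    p_tgt = softmax(tgt_logits)
    ent = -(p_tgt * np.log(p_tgt + 1e-12)).sum(axis=1).mean()
    return ce + lam_feat * feat_gap + lam_ent * ent
```

Minimising the extra terms pulls source and target feature statistics together and makes target predictions confident, without requiring target labels.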

  5. Natural learning in NLDA networks.

    PubMed

    González, Ana; Dorronsoro, José R

    2007-07-01

    Non Linear Discriminant Analysis (NLDA) networks combine a standard Multilayer Perceptron (MLP) transfer function with the minimization of a Fisher analysis criterion. In this work we define natural-like gradients for NLDA network training. Instead of a more principled approach, which would require the definition of an appropriate Riemannian structure on the NLDA weight space, we follow a simpler procedure, based on the observation that the gradient of the NLDA criterion function J can be written as the expectation ∇J(W) = E[Z(X, W)] of a certain random vector Z; we then define I = E[Z(X, W) Z(X, W)^T] as the Fisher information matrix in this case. This definition of I formally coincides with that of the information matrix for the MLP or other square error functions; the NLDA J criterion, however, does not have this structure. Although very simple, the proposed approach shows much faster convergence than standard gradient descent, even when its higher cost per iteration is taken into account. While the faster convergence of natural MLP batch training can also be explained in terms of its relationship with the Gauss-Newton minimization method, this is not the case for NLDA training, as we show analytically and numerically that the Hessian and information matrices are different.
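The natural-like update implied by this definition of I can be sketched as follows. This is a toy illustration of the update rule only; the damping term added for numerical invertibility is our assumption, not part of the abstract.

```python
import numpy as np

def natural_gradient_step(Z_samples, W, lr=0.1, damping=1e-4):
    """One natural-like gradient step as described above:
    grad J = mean of Z, I = mean of Z Z^T, then W <- W - lr * I^{-1} grad J.
    Z_samples is an (n_samples, n_weights) array of per-sample vectors Z(X, W)."""
    grad = Z_samples.mean(axis=0)                       # estimate of grad J(W)
    I = (Z_samples[:, :, None] * Z_samples[:, None, :]).mean(axis=0)
    I += damping * np.eye(len(grad))                    # regularize for invertibility
    return W - lr * np.linalg.solve(I, grad)
```

Preconditioning by I rescales the step along directions of high gradient variance, which is the mechanism behind the faster convergence claimed above.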

  6. Wavelet evolutionary network for complex-constrained portfolio rebalancing

    NASA Astrophysics Data System (ADS)

    Suganya, N. C.; Vijayalakshmi Pai, G. A.

    2012-07-01

    The portfolio rebalancing problem deals with resetting the proportions of different assets in a portfolio in response to changing market conditions. The constraints included in the portfolio rebalancing problem are basic, cardinality, bounding, class, and proportional transaction cost constraints. In this study, a new heuristic algorithm named wavelet evolutionary network (WEN) is proposed for the solution of the complex-constrained portfolio rebalancing problem. Initially, the empirical covariance matrix, one of the key inputs to the problem, is estimated using the wavelet shrinkage denoising technique to obtain better optimal portfolios. Secondly, the complex cardinality constraint is eliminated using k-means cluster analysis. Finally, the WEN strategy with logical procedures is employed to find the initial proportions of investment in the portfolio of assets and to rebalance them after a certain period. Experimental studies of WEN are undertaken on Bombay Stock Exchange, India (BSE200 index, period: July 2001-July 2006) and Tokyo Stock Exchange, Japan (Nikkei225 index, period: March 2002-March 2007) data sets. The results obtained using WEN are compared with those of its only existing counterpart, the Hopfield evolutionary network (HEN) strategy, and verify that WEN performs better than HEN. In addition, different performance metrics and data envelopment analysis are employed to demonstrate the robustness and efficiency of WEN over the HEN strategy.
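The k-means step that removes the cardinality constraint can be sketched as follows; this is a minimal illustration, in which the choice of asset feature vectors and the keep-the-nearest-to-centroid rule are our assumptions, not necessarily the paper's.

```python
import numpy as np

def kmeans_representatives(features, k, iters=50, seed=0):
    """Reduce a cardinality-K asset selection to cluster analysis:
    run k-means on per-asset feature vectors, then keep the asset
    closest to each centroid as the cluster's representative."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        d = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):                 # skip empty clusters
                centers[j] = features[labels == j].mean(0)
    d = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    reps = [int(np.argmin(d[:, j])) for j in range(k)]
    return sorted(set(reps))
```

Optimizing weights only over the K representatives then satisfies the cardinality constraint by construction.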

  7. Neural network modeling of nonlinear systems based on Volterra series extension of a linear model

    NASA Technical Reports Server (NTRS)

    Soloway, Donald I.; Bialasiewicz, Jan T.

    1992-01-01

    A Volterra series approach was applied to the identification of nonlinear systems which are described by a neural network model. A procedure is outlined by which a mathematical model can be developed from experimental data obtained from the network structure. Applications of the results to the control of robotic systems are discussed.

  8. Generalized Cartographic and Simultaneous Representation of Utility Networks for Decision-Support Systems and Crisis Management in Urban Environments

    NASA Astrophysics Data System (ADS)

    Becker, T.; König, G.

    2015-10-01

    Cartographic visualizations of crises are used to create a Common Operational Picture (COP) and enforce Situational Awareness by presenting relevant information to the involved actors. As nearly all crises affect geospatial entities, geo-data representations have to support location-specific analysis throughout the decision-making process. Meaningful cartographic presentation is needed for coordinating the activities of crisis managers in a highly dynamic situation, since operators' attention span and their spatial memories are limiting factors during the perception and interpretation process. Situational Awareness of operators, in conjunction with a COP, is a key aspect of the decision-making process and essential for making well thought-out and appropriate decisions. Although utility networks are among the most complex and most frequently required systems in urban environments, a meaningful cartographic presentation of multiple utility networks with respect to disaster management does not yet exist. Therefore, an optimized visualization of utility infrastructure for emergency response procedures is proposed. The article describes a conceptual approach to simplifying, aggregating, and visualizing multiple utility networks and their components to meet the requirements of the decision-making process and to support Situational Awareness.

  9. Analysis of fMRI data using noise-diffusion network models: a new covariance-coding perspective.

    PubMed

    Gilson, Matthieu

    2018-04-01

    Since the middle of the 1990s, studies of resting-state fMRI/BOLD data have explored the correlation patterns of activity across the whole brain, which is referred to as functional connectivity (FC). Among the many methods that have been developed to interpret FC, a recently proposed model-based approach describes the propagation of fluctuating BOLD activity within the recurrently connected brain network by inferring the effective connectivity (EC). In this model, EC quantifies the strengths of directional interactions between brain regions, viewed from the proxy of BOLD activity. In addition, the tuning procedure for the model provides estimates of the local variability (input variances) to explain how the observed FC is generated. Generalizing, the network dynamics can be studied in the context of an input-output mapping, determined by EC, for the second-order statistics of fluctuating nodal activities. The present paper focuses on the following detection paradigm: observing output covariances, how discriminative is the (estimated) network model with respect to various input covariance patterns? An application with the model fitted to experimental fMRI data (movie viewing versus resting state) illustrates that changes in local variability and changes in brain coordination go hand in hand.
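In noise-diffusion models of this kind, the input-output mapping for second-order statistics is typically a Lyapunov relation. As a hedged sketch, assuming a linear (multivariate Ornstein-Uhlenbeck) dynamics whose Jacobian combines EC with a leakage time constant tau_x (these modelling choices are our assumptions, not quoted from the abstract):

```python
import numpy as np

def model_covariance(ec, tau_x, sigma2):
    """Zero-lag activity covariance Q implied by EC in a linear
    noise-diffusion network model: solve the Lyapunov equation
    J Q + Q J^T + Sigma = 0, with Jacobian J = -I/tau_x + EC and
    diagonal input variances Sigma (via a Kronecker-product solve)."""
    n = ec.shape[0]
    J = -np.eye(n) / tau_x + ec
    Sigma = np.diag(sigma2)
    # vec(J Q + Q J^T) = (I (x) J + J (x) I) vec(Q)
    A = np.kron(np.eye(n), J) + np.kron(J, np.eye(n))
    Q = np.linalg.solve(A, -Sigma.reshape(-1))
    return Q.reshape(n, n)
```

This is the sense in which the model maps input (co)variances to output covariances: changing Sigma or EC changes the predicted FC.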

  10. Analysis of QoS Requirements for e-Health Services and Mapping to Evolved Packet System QoS Classes

    PubMed Central

    Skorin-Kapov, Lea; Matijasevic, Maja

    2010-01-01

    E-Health services comprise a broad range of healthcare services delivered by using information and communication technology. In order to support existing as well as emerging e-Health services over converged next generation network (NGN) architectures, there is a need for network QoS control mechanisms that meet the often stringent requirements of such services. In this paper, we evaluate the QoS support for e-Health services in the context of the Evolved Packet System (EPS), specified by the Third Generation Partnership Project (3GPP) as a multi-access all-IP NGN. We classify heterogeneous e-Health services based on context and network QoS requirements and propose a mapping to existing 3GPP QoS Class Identifiers (QCIs) that serve as a basis for the class-based QoS concept of the EPS. The proposed mapping aims to provide network operators with guidelines for meeting heterogeneous e-Health service requirements. As an example, we present the QoS requirements for a prototype e-Health service supporting tele-consultation between a patient and a doctor and illustrate the use of the proposed mapping to QCIs in standardized QoS control procedures. PMID:20976301
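An illustrative mapping of e-Health media components to 3GPP QCIs might look like the table below. The service names and class choices are assumptions for demonstration only, not the paper's exact mapping; the QCI numbers themselves follow the standardized 3GPP classes (e.g. QCI 1 for conversational voice, QCI 5 for signalling).

```python
# Illustrative only: the paper's actual mapping may differ.
EHEALTH_QCI_MAP = {
    "teleconsultation-audio":  1,  # GBR, conversational voice
    "teleconsultation-video":  2,  # GBR, conversational (live) video
    "realtime-biosignals":     3,  # GBR, low-latency real-time data
    "session-signalling":      5,  # non-GBR, high-priority signalling
    "medical-image-transfer":  9,  # non-GBR, TCP-based bulk transfer
}

def qci_for(service: str) -> int:
    """Return the 3GPP QCI for an e-Health media component,
    defaulting to the best-effort class 9."""
    return EHEALTH_QCI_MAP.get(service, 9)
```

A network operator would use such a table to tag each media component of a session (e.g. the audio and video streams of a tele-consultation) with the QCI driving its bearer-level QoS treatment.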

  11. Effects of training strategies implemented in a complex videogame on functional connectivity of attentional networks.

    PubMed

    Voss, Michelle W; Prakash, Ruchika Shaurya; Erickson, Kirk I; Boot, Walter R; Basak, Chandramallika; Neider, Mark B; Simons, Daniel J; Fabiani, Monica; Gratton, Gabriele; Kramer, Arthur F

    2012-01-02

    We used the Space Fortress videogame, originally developed by cognitive psychologists to study skill acquisition, as a platform to examine learning-induced plasticity of interacting brain networks. Novice videogame players learned Space Fortress using one of two training strategies: (a) focus on all aspects of the game during learning (fixed priority), or (b) focus on improving separate game components in the context of the whole game (variable priority). Participants were scanned during game play using functional magnetic resonance imaging (fMRI), both before and after 20 h of training. As expected, variable priority training enhanced learning, particularly for individuals who initially performed poorly. Functional connectivity analysis revealed changes in brain network interaction reflective of more flexible skill learning and retrieval with variable priority training, compared to procedural learning and skill implementation with fixed priority training. These results provide the first evidence for differences in the interaction of large-scale brain networks when learning with different training strategies. Our approach and findings also provide a foundation for exploring the brain plasticity involved in transfer of trained abilities to novel real-world tasks such as driving, sport, or neurorehabilitation. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Spatial mismatch analysis among hotspots of alien plant species, road and railway networks in Germany and Austria

    PubMed Central

    Morelli, Federico

    2017-01-01

    Road and railway networks are pervasive elements of all environments, and they have expanded intensively over the last century in all European countries. These transportation infrastructures have major impacts on the surrounding landscape, representing a threat to biodiversity. Roadsides and railways may function as corridors for the dispersal of alien species in fragmented landscapes. However, only a few studies have explored the spread of invasive species in relation to transport networks at large spatial scales. We performed a spatial mismatch analysis, based on a spatially explicit correlation test, to investigate whether alien plant species hotspots in Germany and Austria correspond to areas of high density of roads and railways. We tested this independently of the effects of dominant environments in each spatial unit, in order to focus solely on the correlation between the occurrence of alien species and the density of linear transportation infrastructures. We found a significant spatial association between the distribution of alien plant species hotspots and road and railway density in both countries. As expected, anthropogenic landscapes, such as urban areas, harbored more alien plant species, followed by water bodies. However, our findings suggest that the distribution of neobiota is more strongly correlated with road/railway density than with land use composition. This study provides new evidence, at a transnational scale, that alien plants can use roadsides and rail networks as colonization corridors. Furthermore, our approach contributes to the understanding of alien plant species distribution at large spatial scales through its combination with spatial modeling procedures. PMID:28829818

  13. Epithelium-Stroma Classification via Convolutional Neural Networks and Unsupervised Domain Adaptation in Histopathological Images.

    PubMed

    Huang, Yue; Zheng, Han; Liu, Chi; Ding, Xinghao; Rohde, Gustavo K

    2017-11-01

    Epithelium-stroma classification is a necessary preprocessing step in histopathological image analysis. Current deep learning based recognition methods for histology data require the collection of large volumes of labeled data in order to train a new neural network whenever the image acquisition procedure changes. However, it is extremely expensive for pathologists to manually label sufficient volumes of data for each pathology study in a professional manner, which results in limitations for real-world applications. A very simple but effective deep learning method, which introduces the concept of unsupervised domain adaptation to a simple convolutional neural network (CNN), is proposed in this paper. Inspired by transfer learning, our approach assumes that the training data and testing data follow different distributions, and applies an adaptation operation to more accurately estimate the kernels of the CNN's feature extraction, in order to enhance performance by transferring knowledge from labeled data in the source domain to unlabeled data in the target domain. The model has been evaluated using three independent public epithelium-stroma datasets by cross-dataset validation. The experimental results demonstrate that for epithelium-stroma classification, the proposed framework outperforms the state-of-the-art deep neural network model, and it also achieves better performance than other existing deep domain adaptation methods. The proposed model can be considered a better option for real-world applications in histopathological image analysis, since there is no longer a requirement for large-scale labeled data in each specified domain.

  14. EPA Library Disaster Response and Continuity of Operations (COOP) Procedures

    EPA Pesticide Factsheets

    To establish Agency-wide procedures for the EPA National Library Network libraries to mitigate, prepare for, respond to, and recover from disasters in EPA libraries and provide continuing operations during and after a disaster.

  15. Problems in the design of multifunction meteor-radar networks

    NASA Astrophysics Data System (ADS)

    Nechitailenko, V. A.; Voloshchuk, Iu. I.

    The design of meteor-radar networks is examined in connection with the need to conduct mass-scale experiments in meteor geophysics and astronomy. Attention is given to network architecture features and to procedures for communication-path selection in organizing information transfer, with allowance for the characteristics of the meteor communication link. The meteor link is considered the main means of carrying traffic in the meteor-radar network.

  16. Architectures and protocols for an integrated satellite-terrestrial mobile system

    NASA Technical Reports Server (NTRS)

    Delre, E.; Dellipriscoli, F.; Iannucci, P.; Menolascino, R.; Settimo, F.

    1993-01-01

    This paper aims to depict some basic concepts related to the definition of an integrated system for mobile communications, consisting of a satellite network and a terrestrial cellular network. In particular three aspects are discussed: (1) architecture definition for the satellite network; (2) assignment strategy of the satellite channels; and (3) definition of 'internetworking procedures' between cellular and satellite network, according to the selected architecture and the satellite channel assignment strategy.

  17. Determining the trophic guilds of fishes and macroinvertebrates in a seagrass food web

    USGS Publications Warehouse

    Luczkovich, J.J.; Ward, G.P.; Johnson, J.C.; Christian, R.R.; Baird, D.; Neckles, H.; Rizzo, W.M.

    2002-01-01

    We established trophic guilds of macroinvertebrate and fish taxa using correspondence analysis and a hierarchical clustering strategy for a seagrass food web in winter in the northeastern Gulf of Mexico. To create the diet matrix, we characterized the trophic linkages of macroinvertebrate and fish taxa present in Halodule wrightii seagrass habitat areas within the St. Marks National Wildlife Refuge (Florida) using binary data, combining dietary links obtained from relevant literature for macroinvertebrates with stomach analysis of common fishes collected during January and February of 1994. Hierarchical average-linkage cluster analysis of the 73 taxa of fishes and macroinvertebrates in the diet matrix yielded 14 clusters with diet similarity greater than or equal to 0.60. We then used correspondence analysis with three factors to jointly plot the coordinates of the consumers (identified by cluster membership) and of the 33 food sources. Correspondence analysis served as a visualization tool for assigning each taxon to one of eight trophic guilds: herbivores, detritivores, suspension feeders, omnivores, molluscivores, meiobenthos consumers, macrobenthos consumers, and piscivores. These trophic groups, cross-classified with major taxonomic groups, were further used to develop consumer compartments in a network analysis model of carbon flow in this seagrass ecosystem. The method presented here should greatly improve the development of future network models of food webs by providing an objective procedure for aggregating trophic groups.
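The clustering step above can be sketched with standard tools, assuming Jaccard similarity between the binary diet rows (a common choice for presence/absence data, and our assumption here) and cutting the average-linkage dendrogram where within-cluster similarity drops below 0.60:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def trophic_guilds(diet_matrix, min_similarity=0.60):
    """Average-linkage clustering of consumers from a binary diet matrix
    (rows = consumer taxa, columns = food sources).  Similarity is
    1 - Jaccard distance; the dendrogram is cut so that clusters keep a
    diet similarity of at least `min_similarity`."""
    d = pdist(diet_matrix.astype(bool), metric="jaccard")
    Z = linkage(d, method="average")
    return fcluster(Z, t=1.0 - min_similarity, criterion="distance")
```

The resulting cluster labels play the role of the 14 consumer clusters that correspondence analysis then assigns to guilds.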

  18. [Application of near-infrared spectroscopy to agriculture and food analysis].

    PubMed

    Wang, Duo-jia; Zhou, Xiang-yang; Jin, Tong-ming; Hu, Xiang-na; Zhong, Jiao-e; Wu, Qi-tang

    2004-04-01

    Near-Infrared Spectroscopy (NIRS) was the most rapidly developing and most noticeable spectrographic technique of the 1990s. Its principle and characteristics are explained in this paper, and the development of NIRS instrumentation, the methodology of spectrum pre-processing, and the associated chemometrics are also introduced. The authors mainly summarize applications to agriculture and food, especially in-line analysis methods, which have been used in production procedures via fiber optics. The authors analyze the status of NIRS applications in China, and propose for the first time the establishment of an information-sharing mode between a central database and end-users by using network technology and concentrating valuable resources.

  19. Storage, retrieval, and analysis of ST data

    NASA Technical Reports Server (NTRS)

    Albrecht, R.

    1984-01-01

    The Space Telescope can generate multidimensional image data, very similar in nature to data produced with microdensitometers. An overview is presented of the ST science ground system, from carrying out the observations to the interactive analysis of preprocessed data. The ground system elements used in data archival and retrieval are described, and operational procedures are discussed. Emphasis is given to aspects of the ground system that are relevant to the science user and to general principles of system software development in a production environment. While the system being developed uses relatively conservative concepts for the launch baseline, concepts were developed to enhance the ground system, including networking, remote access, and the utilization of alternate data storage technologies.

  20. Feature extraction in MFL signals of machined defects in steel tubes

    NASA Astrophysics Data System (ADS)

    Perazzo, R.; Pignotti, A.; Reich, S.; Stickar, P.

    2001-04-01

    Thirty defects of various shapes were machined on the external and internal wall surfaces of a 177 mm diameter ferromagnetic steel pipe. MFL signals were digitized and recorded at a frequency of 4 kHz. Various magnetizing currents and relative tube-probe velocities on the order of 2 m/s were used. The identification of the location of a defect by a principal component/neural network analysis of the signal is shown to be more effective than the standard classification procedure based on the average signal frequency.
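The principal-component feature extraction stage of such an analysis can be sketched as follows; this is a generic illustration (the subsequent neural-network classifier is omitted, and the signal layout is an assumption):

```python
import numpy as np

def pca_features(signals, n_components=3):
    """Project MFL signals (rows = signals, columns = time samples) onto
    their leading principal components; the low-dimensional scores then
    serve as inputs to a classifier."""
    X = signals - signals.mean(axis=0)          # center each time sample
    # SVD of the centered data: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T
```

Compressing each digitized signal to a few principal-component scores discards noise while keeping the shape information that distinguishes internal from external wall defects.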

  1. An authoring system for creating a practice environment in the network service field

    NASA Technical Reports Server (NTRS)

    Kiyama, Minoru; Fukuhara, Yoshimi

    1993-01-01

    This paper describes an authoring system whose main purpose is to reduce the cost of developing and maintaining courseware that contains procedural knowledge used in the network service field. This aim can be achieved by considering the characteristics of this field. Material knowledge is divided into two parts, behavioral knowledge and procedural knowledge. We show that both parts can be constructed with simple authoring methods and efficient modification algorithms. This authoring system has been used to build several types of courseware, and development costs have been reduced.

  2. Partial Least Squares and Neural Networks for Quantitative Calibration of Laser-induced Breakdown Spectroscopy (LIBs) of Geologic Samples

    NASA Technical Reports Server (NTRS)

    Anderson, R. B.; Morris, Richard V.; Clegg, S. M.; Humphries, S. D.; Wiens, R. C.; Bell, J. F., III; Mertzman, S. A.

    2010-01-01

    The ChemCam instrument [1] on the Mars Science Laboratory (MSL) rover will be used to obtain the chemical composition of surface targets within 7 m of the rover using Laser Induced Breakdown Spectroscopy (LIBS). ChemCam analyzes atomic emission spectra (240-800 nm) from a plasma created by a pulsed Nd:KGW 1067 nm laser. The LIBS spectra can be used in a semiquantitative way to rapidly classify targets (e.g., basalt, andesite, carbonate, sulfate, etc.) and in a quantitative way to estimate their major and minor element chemical compositions. Quantitative chemical analysis from LIBS spectra is complicated by a number of factors, including chemical matrix effects [2]. Recent work has shown promising results using multivariate techniques such as partial least squares (PLS) regression and artificial neural networks (ANN) to predict elemental abundances in samples [e.g. 2-6]. To develop, refine, and evaluate analysis schemes for LIBS spectra of geologic materials, we collected spectra of a diverse set of well-characterized natural geologic samples and are comparing the predictive abilities of PLS, cascade correlation ANN (CC-ANN) and multilayer perceptron ANN (MLP-ANN) analysis procedures.

  3. Multivariate Analysis and Modeling of Sediment Pollution Using Neural Network Models and Geostatistics

    NASA Astrophysics Data System (ADS)

    Golay, Jean; Kanevski, Mikhaïl

    2013-04-01

    The present research deals with the exploration and modeling of a complex dataset of 200 measurement points of sediment pollution by heavy metals in Lake Geneva. The fundamental idea was to use multivariate Artificial Neural Networks (ANN) along with geostatistical models and tools in order to improve the accuracy and the interpretability of data modeling. The results obtained with ANN were compared to those of traditional geostatistical algorithms like ordinary (co)kriging and (co)kriging with an external drift. Exploratory data analysis highlighted a great variety of relationships (i.e. linear, non-linear, independence) between the 11 variables of the dataset (i.e. Cadmium, Mercury, Zinc, Copper, Titanium, Chromium, Vanadium and Nickel as well as the spatial coordinates of the measurement points and their depth). Then, exploratory spatial data analysis (i.e. anisotropic variography, local spatial correlations and moving window statistics) was carried out. It was shown that the different phenomena to be modeled were characterized by high spatial anisotropies, complex spatial correlation structures and heteroscedasticity. A feature selection procedure based on General Regression Neural Networks (GRNN) was also applied to create subsets of variables making it possible to improve the predictions during the modeling phase. The basic modeling was conducted using a Multilayer Perceptron (MLP), which is a workhorse of ANN. MLP models are robust and highly flexible tools which can incorporate, in a nonlinear manner, different kinds of high-dimensional information. In the present research, the input layer was made of either two neurons (spatial coordinates) or three neurons (when depth as auxiliary information could possibly capture an underlying trend), and the output layer was composed of one (univariate MLP) to eight neurons corresponding to the heavy metals of the dataset (multivariate MLP). 
MLP models with three input neurons can be referred to as Artificial Neural Networks with EXternal drift (ANNEX). Moreover, the exact number of output neurons and the selection of the corresponding variables were based on the subsets created during the exploratory phase. Concerning hidden layers, no restrictions were made and multiple architectures were tested. For each MLP model, the quality of the modeling procedure was assessed by variograms: if the variogram of the residuals shows a pure nugget effect and if the level of the nugget corresponds exactly to the nugget value of the theoretical variogram of the corresponding variable, all the structured information has been correctly extracted without overfitting. It is also worth mentioning that simple MLP models are not always able to remove all the spatial correlation structure from the data. In that case, Neural Network Residual Kriging (NNRK) can be carried out, and risk assessment can be conducted with Neural Network Residual Simulations (NNRS). Finally, the results of the ANNEX models were compared to those of ordinary (co)kriging and (co)kriging with an external drift. It was shown that the ANNEX models performed better than traditional geostatistical algorithms when the relationship between the variable of interest and the auxiliary predictor was not linear. References Kanevski, M. and Maignan, M. (2004). Analysis and Modelling of Spatial Environmental Data. Lausanne: EPFL Press.
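The variogram-based residual check described above can be sketched as follows; the bin edges and the flatness criterion used in the test are illustrative choices, not the study's settings:

```python
import numpy as np

def empirical_variogram(coords, residuals, bin_edges):
    """Empirical semivariogram of model residuals: if the result is flat
    (pure nugget) at the level of the fitted variogram's nugget, the MLP
    has extracted all the spatially structured information."""
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    sq = 0.5 * (residuals[:, None] - residuals[None, :]) ** 2
    iu = np.triu_indices(len(coords), k=1)     # each pair counted once
    d, sq = d[iu], sq[iu]
    gamma = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (d >= lo) & (d < hi)
        gamma.append(sq[mask].mean() if np.any(mask) else np.nan)
    return np.array(gamma)
```

For spatially uncorrelated residuals the semivariogram sits at the residual variance for all lags, which is the pure-nugget signature the check looks for.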

  4. Is There Room for Prevention? Examining the Effect of Outpatient Facility Type on the Risk of Surgical Site Infection.

    PubMed

    Parikh, Rishi; Pollock, Daniel; Sharma, Jyotirmay; Edwards, Jonathan

    2016-10-01

    OBJECTIVE We compared risk for surgical site infection (SSI) following surgical breast procedures among 2 patient groups: those whose procedures were performed in ambulatory surgery centers (ASCs) and those whose procedures were performed in hospital-based outpatient facilities. DESIGN Cohort study using National Healthcare Safety Network (NHSN) SSI data for breast procedures performed from 2010 to 2014. METHODS Unconditional multivariate logistic regression was used to examine the association between facility type and breast SSI, adjusting for American Society of Anesthesiologists (ASA) Physical Status Classification, patient age, and duration of procedure. Other potential adjustment factors examined were wound classification, anesthesia use, and gender. RESULTS Among 124,021 total outpatient breast procedures performed between 2010 and 2014, 110,987 procedure reports submitted to the NHSN provided complete covariate data and were included in the analysis. Breast procedures performed in ASCs carried a lower risk of SSI compared with those performed in hospital-based outpatient settings. For patients aged ≤51 years, the adjusted risk ratio was 0.36 (95% CI, 0.25-0.50) and for patients >51 years old, the adjusted risk ratio was 0.32 (95% CI, 0.21-0.49). CONCLUSIONS SSI risk following breast procedures was significantly lower among ASC patients than among hospital-based outpatients. These findings should be placed in the context of study limitations, including the possibility of incomplete ascertainment of SSIs and shortcomings in the data available to control for differences in patient case mix. Additional studies are needed to better understand the role of procedural settings in SSI risk following breast procedures and to identify prevention opportunities. Infect Control Hosp Epidemiol 2016;1-7.
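For illustration, a crude (unadjusted) risk ratio with a 95% CI can be computed from event counts with the usual log-normal approximation. The counts below are hypothetical, and this sketch does not reproduce the study's adjusted estimates, which come from multivariate logistic regression:

```python
import math

def risk_ratio_ci(a, n1, b, n0, z=1.96):
    """Crude risk ratio and 95% CI (log-normal approximation):
    a events among n1 exposed (e.g. ASC procedures), b events among
    n0 in the reference group (e.g. hospital-based outpatient)."""
    rr = (a / n1) / (b / n0)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n0)   # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi
```

A ratio below 1 with an upper confidence limit below 1 indicates significantly lower SSI risk in the exposed group, which is the form of the comparison reported above.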

  5. Maintenance and operations cost model for DSN subsystems

    NASA Technical Reports Server (NTRS)

    Burt, R. W.; Lesh, J. R.

    1977-01-01

    A procedure is described which partitions the recurring costs of the Deep Space Network (DSN) over the individual DSN subsystems. The procedure results in a table showing the maintenance, operations, sustaining engineering, and supportive costs for each subsystem.

  6. Staphylococcus aureus infections following knee and hip prosthesis insertion procedures.

    PubMed

    Arduino, Jean Marie; Kaye, Keith S; Reed, Shelby D; Peter, Senaka A; Sexton, Daniel J; Chen, Luke F; Hardy, N Chantelle; Tong, Steven Yc; Smugar, Steven S; Fowler, Vance G; Anderson, Deverick J

    2015-01-01

    Staphylococcus aureus is the most common and most important pathogen following knee and hip arthroplasty procedures. Understanding the epidemiology of invasive S. aureus infections is important for quantifying this serious complication. This nested retrospective cohort analysis included adult patients who had undergone insertion of knee or hip prostheses with clean or clean-contaminated wound class at 11 hospitals between 2003 and 2006. Invasive S. aureus infections, comprising non-superficial incisional surgical site infections (SSIs) and bloodstream infections (BSIs), were prospectively identified following each procedure. Prevalence rates, per 100 procedures, were estimated. In total, 13,719 prosthetic knee (62%) and hip (38%) insertion procedures were performed. Of 92 invasive S. aureus infections identified, SSIs were more common (80%) than SSI plus BSI (10%) or BSI alone (10%). The rate of invasive S. aureus infection per 100 procedures was 0.57 [95% CI: 0.43-0.73] for knee insertion and 0.83 [95% CI: 0.61-1.08] for hip insertion. More than half (53%) were methicillin-resistant. Median time-to-onset of infection was 34 and 26 days for knee and hip insertion, respectively. Infection was associated with a higher National Healthcare Safety Network risk index (p ≤ 0.0001). Post-operative invasive S. aureus infections were rare, but difficult-to-treat methicillin-resistant infections were relatively common. Optimizing preventive efforts may greatly reduce the healthcare burden associated with S. aureus infections.

  7. LV software support for supersonic flow analysis

    NASA Technical Reports Server (NTRS)

    Bell, W. A.; Lepicovsky, J.

    1992-01-01

    The software for configuring an LV counter processor system has been developed using structured design. The LV system includes up to three counter processors and a rotary encoder. The software for configuring and testing the LV system has been developed, tested, and included in an overall software package for data acquisition, analysis, and reduction. Error handling routines respond to both operator and instrument errors which often arise in the course of measuring complex, high-speed flows. The use of networking capabilities greatly facilitates the software development process by allowing software development and testing from a remote site. In addition, high-speed transfers allow graphics files or commands to provide viewing of the data from a remote site. Further advances in data analysis require corresponding advances in procedures for statistical and time series analysis of nonuniformly sampled data.

  9. Leveraging the NPS Femto Satellite for Alternative Satellite Communication Networks

    DTIC Science & Technology

    2017-09-01

    the next-generation NPSFS. 14. SUBJECT TERMS space, Femto satellite, NPSFS, network, communication, Arduino, RockBlock, Iridium Modem 15. NUMBER...provides a proof of concept for using Naval Postgraduate School Femto Satellites (NPSFS) as an alternative space-based communication network. The...We need several physical and procedural elements to conduct communication through space and using the electromagnetic spectrum. 1. Power Any

  10. A Handbook for Automatic Data Processing Equipment Acquisition.

    DTIC Science & Technology

    1981-12-01

    Navy ADPE Procurement Policies (Automatic Data Processing Equipment (ADPE) procurement by federal agencies is governed by an interlocking network of...ADPE) procurement by federal agencies is governed by an interlocking network of policies and directives issued by federal agencies, the Department...SECNAVINST) and local procedures governing the acquisition of ADPE. Obtaining and understanding this interlocking network of policies is often difficult

  11. Multiple fMRI system-level baseline connectivity is disrupted in patients with consciousness alterations.

    PubMed

    Demertzi, Athena; Gómez, Francisco; Crone, Julia Sophia; Vanhaudenhuyse, Audrey; Tshibanda, Luaba; Noirhomme, Quentin; Thonnard, Marie; Charland-Verville, Vanessa; Kirsch, Murielle; Laureys, Steven; Soddu, Andrea

    2014-03-01

    In healthy conditions, group-level fMRI resting state analyses identify ten resting state networks (RSNs) of cognitive relevance. Here, we aim to assess the ten-network model in severely brain-injured patients suffering from disorders of consciousness and to identify those networks which will be most relevant to discriminate between patients and healthy subjects. 300 fMRI volumes were obtained in 27 healthy controls and 53 patients in minimally conscious state (MCS), vegetative state/unresponsive wakefulness syndrome (VS/UWS) and coma. Independent component analysis (ICA) reduced data dimensionality. The ten networks were identified by means of a multiple template-matching procedure and were tested on neuronality properties (neuronal vs non-neuronal) in a data-driven way. Univariate analyses detected between-group differences in networks' neuronal properties and estimated voxel-wise functional connectivity in the networks, which were significantly less identifiable in patients. A nearest-neighbor "clinical" classifier was used to determine the networks with high between-group discriminative accuracy. Healthy controls were characterized by more neuronal components compared to patients in VS/UWS and in coma. Compared to healthy controls, fewer patients in MCS and VS/UWS showed components of neuronal origin for the left executive control network, default mode network (DMN), auditory, and right executive control network. The "clinical" classifier indicated the DMN and auditory network with the highest accuracy (85.3%) in discriminating patients from healthy subjects. FMRI multiple-network resting state connectivity is disrupted in severely brain-injured patients suffering from disorders of consciousness. When performing ICA, multiple-network testing and control for neuronal properties of the identified RSNs can advance fMRI system-level characterization. 
Automatic data-driven patient classification is the first step towards future single-subject objective diagnostics based on fMRI resting state acquisitions. Copyright © 2013 Elsevier Ltd. All rights reserved.
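    The "clinical" nearest-neighbour classification step can be sketched with synthetic data: each subject is summarized by which of the ten RSNs yielded a component of neuronal origin, and a leave-one-out 1-NN rule separates patients from controls. The feature probabilities below are hypothetical, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-subject features: for each of the 10 RSNs, 1 if a component
# of neuronal origin was identified, 0 otherwise (hypothetical values).
controls = (rng.random((27, 10)) < 0.9).astype(float)   # most networks found
patients = (rng.random((53, 10)) < 0.4).astype(float)   # fewer networks found

X = np.vstack([controls, patients])
y = np.array([0] * 27 + [1] * 53)                       # 0 = control, 1 = patient

def knn1_loo(X, y):
    """Leave-one-out accuracy of a 1-nearest-neighbour classifier."""
    hits = 0
    for i in range(len(y)):
        d = np.abs(X - X[i]).sum(axis=1)                # Manhattan distance
        d[i] = np.inf                                   # exclude the subject itself
        hits += y[d.argmin()] == y[i]
    return hits / len(y)

acc = knn1_loo(X, y)
```

    With well-separated feature profiles the leave-one-out accuracy is high, mirroring the discriminative use of network identifiability reported in the study.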

  12. Comparing species interaction networks along environmental gradients.

    PubMed

    Pellissier, Loïc; Albouy, Camille; Bascompte, Jordi; Farwig, Nina; Graham, Catherine; Loreau, Michel; Maglianesi, Maria Alejandra; Melián, Carlos J; Pitteloud, Camille; Roslin, Tomas; Rohr, Rudolf; Saavedra, Serguei; Thuiller, Wilfried; Woodward, Guy; Zimmermann, Niklaus E; Gravel, Dominique

    2018-05-01

    Knowledge of species composition and their interactions, in the form of interaction networks, is required to understand processes shaping their distribution over time and space. As such, comparing ecological networks along environmental gradients represents a promising new research avenue to understand the organization of life. Variation in the position and intensity of links within networks along environmental gradients may be driven by turnover in species composition, by variation in species abundances and by abiotic influences on species interactions. While investigating changes in species composition has a long tradition, so far only a limited number of studies have examined changes in species interactions between networks, often with differing approaches. Here, we review studies investigating variation in network structures along environmental gradients, highlighting how methodological decisions about standardization can influence their conclusions. Due to their complexity, variation among ecological networks is frequently studied using properties that summarize the distribution or topology of interactions such as number of links, connectance, or modularity. These properties can either be compared directly or using a procedure of standardization. While measures of network structure can be directly related to changes along environmental gradients, standardization is frequently used to facilitate interpretation of variation in network properties by controlling for some co-variables, or via null models. Null models allow comparing the deviation of empirical networks from random expectations and are expected to provide a more mechanistic understanding of the factors shaping ecological networks when they are coupled with functional traits. As an illustration, we compare approaches to quantify the role of trait matching in driving the structure of plant-hummingbird mutualistic networks, i.e. 
    a direct comparison, standardization by null models, and a hypothesis-based metaweb. Overall, our analysis warns against comparing studies that rely on distinct forms of standardization, as they are likely to highlight different signals. Fostering a better understanding of the analytical tools available and the signals they detect will help produce deeper insights into how and why ecological networks vary along environmental gradients. © 2017 Cambridge Philosophical Society.
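    The null-model standardization described above can be sketched as a z-score: compute a topological summary of the observed network, then compare it against the same summary for randomized networks that preserve size and number of links. The incidence matrix and the summary metric below are hypothetical illustrations, not data from the review.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical plant-hummingbird incidence matrix (1 = interaction observed).
obs = np.array([[1, 1, 1, 1],
                [1, 1, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]])

def degree_heterogeneity(m):
    """Variance of species degrees -- one simple summary of network topology."""
    return np.var(np.concatenate([m.sum(axis=0), m.sum(axis=1)]))

n_links = obs.sum()
stat_obs = degree_heterogeneity(obs)

# Null model: same dimensions and same number of links, placed at random.
null_stats = []
for _ in range(2000):
    flat = np.zeros(obs.size)
    flat[rng.choice(obs.size, n_links, replace=False)] = 1
    null_stats.append(degree_heterogeneity(flat.reshape(obs.shape)))
null_stats = np.array(null_stats)

# Standardized deviation of the empirical network from the null expectation.
z = (stat_obs - null_stats.mean()) / null_stats.std()
```

    A positive z indicates the observed network is more heterogeneous than expected at random; different null models (e.g. degree-preserving swaps) would yield different z-scores, which is exactly the comparability caveat the review raises.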

  13. World-wide satellite night-light data as a proxy of society-hydrology interaction and vulnerability to flood risk

    NASA Astrophysics Data System (ADS)

    Ceola, S.; Laio, F.; Montanari, A.

    2013-12-01

    The study and the analysis of the interactions and feedbacks between hydrology and society constitute the main issue of socio-hydrology. Recent flood events, which occurred across the globe, highlighted once again that mitigation strategies are needed to reduce flood risk. In particular, quick procedures for the identification of vulnerable human settlements and flood prone areas are a necessary tool to identify priorities for flood risk management. To this aim, a 19-year long period of world-wide night light data, as a proxy of human population, and the global river network have been examined. The spatio-temporal evolution of artificial luminosity depending on the distance from the river network has been assessed in order to quantitatively identify the likelihood for a populated pixel to be reached by water. The analysis focuses both on a global and on a local scale. Hotspots, such as highly illuminated areas and developing regions, have been also examined. The analysis shows an increment of yearly-averaged artificial luminosity from 1992 to 2010 (i.e. the time period of satellite data availability), whereas light intensity tends to decrease with increasing distance from the river network. The results thus reveal an increased vulnerability of human settlements to flooding events. A nearly 70-year long period of peace and the economic development after the Second World War could reasonably explain the observed enhancement of human population proximity to water bodies.
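    The core spatial analysis, average night-light intensity as a function of distance from the river network, can be sketched by binning pixels into distance bands. The pixel data below are synthetic stand-ins for the satellite night-light product.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical pixels: distance to the nearest river [km] and night-light intensity.
dist = rng.uniform(0.0, 50.0, 10000)
light = 60.0 * np.exp(-dist / 10.0) + rng.normal(0.0, 2.0, dist.size)

# Mean luminosity per 5 km distance band from the river network.
edges = np.arange(0.0, 55.0, 5.0)
idx = np.digitize(dist, edges) - 1
mean_light = np.array([light[idx == b].mean() for b in range(len(edges) - 1)])
```

    A monotonically decreasing profile of `mean_light` with distance reproduces the qualitative finding that human settlement (proxied by luminosity) concentrates near water bodies.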

  14. Trend analysis of weekly acid rain data, 1978-83

    USGS Publications Warehouse

    Schertz, Terry L.; Hirsch, Robert M.

    1985-01-01

    There are 19 stations in the National Atmospheric Deposition Program which operated over the period 1978-83 and were subsequently incorporated into the National Trends Network in 1983. The precipitation chemistry data for these stations for this period were analyzed for trend, spatial correlation, seasonality, and relationship to precipitation volume. The intent of the analysis was to provide insights on the sources of variation in precipitation chemistry and to attempt to ascertain what statistical procedures may be most useful for ongoing analysis of the National Trends Network data. The Seasonal Kendall test was used for detection of trends in raw concentrations of dissolved constituents, pH and specific conductance, and residuals of these parameters from regression analysis. Forty-one percent of the trends detected in the raw concentrations were downtrends, 4 percent were uptrends, and 55 percent showed no trends at α = 0.2. At a more restrictive significance level of α = 0.05, 24 percent of the trends detected were downtrends, 2 percent were uptrends, and 74 percent showed no trends. The two constituents of greatest interest in terms of human generated emissions and environmental effects, sulfate and nitrate, showed only downtrends, and sulfate showed the largest decreases in concentration per year of all the ions tested.
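    The Seasonal Kendall test used here sums the Mann-Kendall S statistic and its variance over seasons. A minimal sketch (no correction for ties or serial correlation, synthetic concentrations) is:

```python
import numpy as np

def seasonal_kendall(x):
    """Seasonal Kendall trend test on x with shape (n_years, n_seasons).

    Sums the Mann-Kendall S and its no-ties variance over seasons and
    returns (S, Z); negative Z indicates a downtrend.
    """
    S, var = 0.0, 0.0
    n_years, n_seasons = x.shape
    for g in range(n_seasons):
        xs = x[:, g]
        for i in range(n_years - 1):
            S += np.sign(xs[i + 1:] - xs[i]).sum()      # pairwise comparisons
        var += n_years * (n_years - 1) * (2 * n_years + 5) / 18.0
    if S > 0:
        z = (S - 1) / np.sqrt(var)
    elif S < 0:
        z = (S + 1) / np.sqrt(var)
    else:
        z = 0.0
    return S, z

# Hypothetical sulfate concentrations: 6 years x 4 seasons with a downtrend,
# qualitatively like the sulfate result reported for 1978-83.
rng = np.random.default_rng(4)
years = np.arange(6)
conc = 2.0 - 0.15 * years[:, None] + rng.normal(0.0, 0.05, (6, 4))
S, z = seasonal_kendall(conc)
```

    For this synthetic downtrend, Z falls well below -1.96, i.e. a significant downtrend at α = 0.05.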

  15. Network speech systems technology program

    NASA Astrophysics Data System (ADS)

    Weinstein, C. J.

    1981-09-01

    This report documents work performed during FY 1981 on the DCA-sponsored Network Speech Systems Technology Program. The two areas of work reported are: (1) communication system studies in support of the evolving Defense Switched Network (DSN) and (2) design and implementation of satellite/terrestrial interfaces for the Experimental Integrated Switched Network (EISN). The system studies focus on the development and evaluation of economical and endurable network routing procedures. Satellite/terrestrial interface development includes circuit-switched and packet-switched connections to the experimental wideband satellite network. Efforts in planning and coordination of EISN experiments are reported in detail in a separate EISN Experiment Plan.

  16. Short-term estimation of GNSS TEC using a neural network model in Brazil

    NASA Astrophysics Data System (ADS)

    Ferreira, Arthur Amaral; Borges, Renato Alves; Paparini, Claudia; Ciraolo, Luigi; Radicella, Sandro M.

    2017-10-01

    This work presents a novel Neural Network (NN) model to estimate Total Electron Content (TEC) from Global Navigation Satellite Systems (GNSS) measurements in three distinct sectors in Brazil. The purpose of this work is to start the investigations on the development of a regional model that can be used to determine the vertical TEC over Brazil, aiming at future applications in near real-time estimation and short-term forecasting. The NN is used to estimate the GNSS TEC values at void locations, where no dual-frequency GNSS receiver is available as a source of data for GNSS TEC estimation. This approach is particularly useful for GNSS single-frequency users that rely on corrections of ionospheric range errors by TEC models. GNSS data from the first GLONASS network for research and development (GLONASS R&D network) installed in Latin America, and from the Brazilian Network for Continuous Monitoring of the GNSS (RMBC), were used for TEC calibration. The input parameters of the NN model are based on features known to influence TEC values, such as geographic location of the GNSS receiver, magnetic activity, seasonal and diurnal variations, and solar activity. Data from two ten-day periods (from DoY 154 to 163 and from 282 to 291) are used to train the network. Three distinct analyses have been carried out in order to assess the time-varying and spatial performance of the model. In the spatial performance analysis, for each region a set of stations is chosen to provide training data to the NN; after the training procedure, the NN is used to estimate the vTEC behavior for a test station whose data were not presented to the NN during training. The analysis is done by comparing, for each test station, the vTEC estimated by the NN with the reference calibrated vTEC.
In the second analysis, the network's ability to forecast one day beyond the training interval (DoY 292), based on information from the second period of investigation, is assessed in order to verify the feasibility of using a small amount of data for short-term forecasting. In the third analysis, the spatial performance of the NN model is assessed and compared against CODE Global Ionospheric Maps during the geomagnetic storm registered on 13 and 14 October 2016. The results obtained from the three described analyses indicate that, even using a ten-day period of data to train the network, the proposed NN model provides good spatial performance and appears to be a promising tool for short-term forecasting. The root mean squared error was less than 7.9 TECU in all scenarios under investigation.
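    The idea of feeding cyclic diurnal features to a neural regressor can be sketched with a tiny random-feature network (extreme-learning-machine style: random hidden layer, least-squares readout). The diurnal vTEC curve and all parameters are synthetic stand-ins, not the authors' model, which also ingests receiver location, magnetic and solar activity.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic diurnal vTEC curve (TECU) as a stand-in for calibrated GNSS TEC.
hours = np.linspace(0.0, 24.0, 200, endpoint=False)
# Cyclic encoding of local time, so 23:59 and 00:00 map to nearby inputs.
X = np.column_stack([np.sin(2 * np.pi * hours / 24),
                     np.cos(2 * np.pi * hours / 24)])
vtec = 10.0 + 25.0 * np.exp(-((hours - 14.0) / 4.0) ** 2)   # afternoon peak
y = (vtec - vtec.mean()) / vtec.std()                       # normalized target

# Random hidden layer + least-squares readout.
H = 64
W1 = rng.normal(0.0, 1.0, (2, H))
b1 = rng.normal(0.0, 1.0, H)
hidden = np.tanh(X @ W1 + b1)
A = np.column_stack([hidden, np.ones(len(hidden))])         # add bias column
w_out = np.linalg.lstsq(A, y, rcond=None)[0]

pred = A @ w_out
mse = float(np.mean((pred - y) ** 2))
```

    The fit error on the normalized target is small, showing that even a shallow network captures the smooth diurnal variation once time is encoded cyclically.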

  17. Hydrologic controls on basin-scale distribution of benthic macroinvertebrates

    NASA Astrophysics Data System (ADS)

    Bertuzzo, E.; Ceola, S.; Singer, G. A.; Battin, T. J.; Montanari, A.; Rinaldo, A.

    2013-12-01

    The presentation deals with the role of streamflow variability on basin-scale distributions of benthic macroinvertebrates. Specifically, we present a probabilistic analysis of the impacts of the variability along the river network of relevant hydraulic variables on the density of benthic macroinvertebrate species. The relevance of this work lies in the implications of the predictability of macroinvertebrate patterns within a catchment for fluvial ecosystem health, as macroinvertebrates are commonly used as sensitive indicators, and for assessing the effects of anthropogenic activity. The analytical tools presented here outline a novel procedure of general nature aiming at a spatially-explicit quantitative assessment of how near-bed flow variability affects benthic macroinvertebrate abundance. Starting from the analytical characterization of the at-a-site probability distribution functions (pdfs) of streamflow and bottom shear stress, a spatial extension to a whole river network is performed, aiming at the definition of spatial maps of streamflow and bottom shear stress. Then, the bottom shear stress pdf, coupled with habitat suitability curves (e.g., empirical relations between species density and bottom shear stress) derived from field studies, is used to produce maps of macroinvertebrate suitability to shear stress conditions. Thus, starting from measured hydrologic conditions, possible effects of river streamflow alterations on macroinvertebrate densities may be assessed. We apply this framework to an Austrian river network, used as a benchmark for the analysis, for which rainfall and streamflow time-series, river network hydraulic properties, and macroinvertebrate density data are available. A comparison between observed and modeled species densities at three locations along the examined river network is also presented.
Although the proposed approach focuses on a single controlling factor, it has important implications for water resources management and fluvial ecosystem protection.
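    The coupling of a shear-stress pdf with a habitat suitability curve amounts to an expectation: expected relative density = ∫ suitability(τ) p(τ) dτ. A numerical sketch with hypothetical lognormal shear-stress parameters and a hypothetical Gaussian suitability curve:

```python
import numpy as np

# At-a-site pdf of bottom shear stress tau [Pa]: lognormal, hypothetical parameters.
tau = np.linspace(0.01, 40.0, 4000)
dtau = tau[1] - tau[0]
mu, sigma = np.log(5.0), 0.6
pdf = np.exp(-(np.log(tau) - mu) ** 2 / (2 * sigma ** 2)) \
      / (tau * sigma * np.sqrt(2 * np.pi))

# Hypothetical habitat suitability curve: relative density peaking near 4 Pa.
suitability = np.exp(-((tau - 4.0) / 3.0) ** 2)

# Expected relative density at the site: integral of suitability times pdf.
expected_density = float(np.sum(suitability * pdf) * dtau)
total_prob = float(np.sum(pdf) * dtau)          # sanity check: pdf mass ~ 1
```

    Shifting the pdf (e.g. to mimic a streamflow alteration) and recomputing `expected_density` is exactly the kind of scenario assessment the framework enables.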

  18. NATbox: a network analysis toolbox in R.

    PubMed

    Chavan, Shweta S; Bauer, Michael A; Scutari, Marco; Nagarajan, Radhakrishnan

    2009-10-08

    There has been recent interest in capturing the functional relationships (FRs) from high-throughput assays using suitable computational techniques. FRs elucidate the working of genes in concert as a system, as opposed to independent entities, and hence may provide preliminary insights into biological pathways and signalling mechanisms. Bayesian structure learning (BSL) techniques and their extensions have been used successfully for modelling FRs from expression profiles. Such techniques are especially useful in discovering undocumented FRs, investigating non-canonical signalling mechanisms and cross-talk between pathways. The objective of the present study is to develop a graphical user interface (GUI), NATbox: Network Analysis Toolbox, in the language R, that houses a battery of BSL algorithms in conjunction with suitable statistical tools for modelling FRs in the form of acyclic networks from gene expression profiles and their subsequent analysis. NATbox is a menu-driven open-source GUI implemented in the R statistical language for modelling and analysis of FRs from gene expression profiles. It provides options to (i) impute missing observations in the given data, (ii) model FRs and network structure from gene expression profiles using a battery of BSL algorithms and identify robust dependencies using a bootstrap procedure, (iii) present the FRs in the form of acyclic graphs for visualization and investigate their topological properties using network analysis metrics, (iv) retrieve FRs of interest from published literature and subsequently use these FRs as structural priors in BSL, and (v) enhance scalability of BSL across high-dimensional data by parallelizing the bootstrap routines. NATbox provides a menu-driven GUI for modelling and analysis of FRs from gene expression profiles. 
By incorporating readily available functions from existing R packages, it minimizes redundancy and improves reproducibility, transparency and sustainability, characteristics of open-source environments. NATbox is especially suited for interdisciplinary researchers and biologists who have minimal programming experience and would like to use systems biology approaches without delving into the algorithmic aspects. The GUI provides appropriate parameter recommendations for the various menu options, including default parameter choices for the user. NATbox can also prove to be a useful demonstration and teaching tool in graduate and undergraduate courses in systems biology. It has been tested successfully under Windows and Linux operating systems. The source code along with installation instructions and an accompanying tutorial can be found at http://bioinformatics.ualr.edu/natboxWiki/index.php/Main_Page.
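    The bootstrap step for "robust dependencies" can be sketched generically: resample the expression profiles with replacement, learn a network on each resample, and keep edges that recur frequently. For brevity the sketch uses a correlation-threshold network as a stand-in for Bayesian structure learning; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic expression profiles: g1 drives g2; g3 is independent (hypothetical).
n = 60
g1 = rng.normal(size=n)
g2 = 0.9 * g1 + 0.3 * rng.normal(size=n)
g3 = rng.normal(size=n)
data = np.column_stack([g1, g2, g3])

def edges(d, thr=0.5):
    """Adjacency by absolute Pearson correlation -- a stand-in for BSL."""
    c = np.corrcoef(d, rowvar=False)
    a = (np.abs(c) > thr).astype(int)
    np.fill_diagonal(a, 0)
    return a

# Bootstrap: edge confidence = fraction of resamples in which the edge appears.
B = 500
freq = np.zeros((3, 3))
for _ in range(B):
    sample = data[rng.integers(0, n, n)]     # resample rows with replacement
    freq += edges(sample)
freq /= B
```

    The true dependency (g1, g2) recurs in nearly every resample, while the spurious edge (g1, g3) rarely appears; thresholding `freq` yields the robust network.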

  19. Department of the Navy Naval Networking Environment (NNE)-2016. Strategic Definition, Scope and Strategy Paper, Version 1.1

    DTIC Science & Technology

    2008-05-13

    IA capabilities applied to protect, defend, and respond to them. This will provide decision makers and network operators, at all command levels...procedures to recognize, react, and respond to potential system and network compromises must be in place and provide control sufficient to protect the...to respond to and track users’ needs. • Information Service Visibility. Interview responses described a need for the reporting of network status and

  20. Protocol for communications in potentially noisy environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyd, Gerald M.; Farrow, Jeffrey

    2016-02-09

    A communications protocol that is designed for transmission of data in networks that are subjected to harsh conditions is described herein. A network includes a plurality of devices, where the devices comprise respective nodes. The nodes are in communication with one another by way of a central network hub. The protocol causes the nodes to transmit data over a network bus at different data rates depending upon whether the nodes are operating normally or an arbitration procedure has been invoked.
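    The rate-switching behaviour described above can be sketched as a tiny node model: transmit at full speed in normal operation and drop to a slower, more robust rate once arbitration is invoked. The two rate constants are hypothetical, not values from the patent.

```python
from dataclasses import dataclass

NORMAL_RATE_BPS = 1_000_000      # hypothetical full data rate
ARBITRATION_RATE_BPS = 125_000   # hypothetical reduced, more robust rate

@dataclass
class Node:
    node_id: int
    arbitrating: bool = False

    def data_rate(self) -> int:
        """Nodes slow down once the arbitration procedure has been invoked."""
        return ARBITRATION_RATE_BPS if self.arbitrating else NORMAL_RATE_BPS

    def invoke_arbitration(self) -> None:
        self.arbitrating = True

node = Node(node_id=7)
rate_before = node.data_rate()
node.invoke_arbitration()
rate_after = node.data_rate()
```

    Keeping the rate decision local to each node mirrors the protocol's idea that robustness under harsh conditions is bought by slowing the bus only when contention must be resolved.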

  1. A Second Law Based Unstructured Finite Volume Procedure for Generalized Flow Simulation

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok

    1998-01-01

    An unstructured finite volume procedure has been developed for steady and transient thermo-fluid dynamic analysis of fluid systems and components. The procedure is applicable for a flow network consisting of pipes and various fittings where flow is assumed to be one dimensional. It can also be used to simulate flow in a component by modeling a multi-dimensional flow using the same numerical scheme. The flow domain is discretized into a number of interconnected control volumes located arbitrarily in space. The conservation equations for each control volume account for the transport of mass, momentum and entropy from the neighboring control volumes. In addition, they also include the sources of each conserved variable and time dependent terms. The source term of entropy equation contains entropy generation due to heat transfer and fluid friction. Thermodynamic properties are computed from the equation of state of a real fluid. The system of equations is solved by a hybrid numerical method which is a combination of simultaneous Newton-Raphson and successive substitution schemes. The paper also describes the application and verification of the procedure by comparing its predictions with the analytical and numerical solution of several benchmark problems.
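    The Newton-Raphson part of the hybrid solution scheme can be illustrated on the smallest possible flow network: one junction between two fixed-pressure boundaries, with the quadratic pipe law dP = R·Q². All values are hypothetical and this is not the paper's solver, just the root-finding idea.

```python
import numpy as np

PA, PB = 200.0, 100.0      # boundary pressures [kPa], hypothetical
R1, R2 = 2.0, 1.0          # pipe resistances, hypothetical

def residual(p):
    """Mass balance at the junction: inflow from A minus outflow to B,
    with Q = sqrt(dP / R) in the direction of falling pressure."""
    return np.sqrt((PA - p) / R1) - np.sqrt((p - PB) / R2)

p = 150.0                  # initial guess between the boundary pressures
for _ in range(20):
    dp = 1e-6
    dfdp = (residual(p + dp) - residual(p)) / dp   # numerical derivative
    p -= residual(p) / dfdp                        # Newton-Raphson update

q = np.sqrt((PA - p) / R1)                          # converged through-flow
```

    The iteration converges to the analytic junction pressure (R2·PA + R1·PB)/(R1 + R2); in the real procedure the same update is applied simultaneously to mass, momentum and entropy residuals over all control volumes.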

  2. Cross-Domain Analogies as Relating Derived Relations among Two Separate Relational Networks

    PubMed Central

    Ruiz, Francisco J; Luciano, Carmen

    2011-01-01

    Contemporary behavior analytic research is making headway in analyzing analogy as the establishment of a relation of coordination among common types of trained or derived relations. Previous studies have been focused on within-domain analogy. The current study expands previous research by analyzing cross-domain analogy as relating relations among separate relational networks and by correlating participants' performance with a standard measure of analogical reasoning. In two experiments, adult participants first completed general intelligence and analogical reasoning tests. Subsequently, they were exposed to a computerized conditional discrimination training procedure designed to create two relational networks, each consisting of two 3-member equivalence classes. The critical test was a two-part analogical test in which participants had to relate combinatorial relations of coordination and distinction between the two relational networks. In Experiment 1, combinatorial relations for each network were individually tested prior to analogical testing, but in Experiment 2 they were not. Across both experiments, 65% of participants passed the analogical test on the first attempt. Moreover, results from the training procedure were strongly correlated with the standard measure of analogical reasoning. PMID:21547072

  3. Recurrent Neural Network Applications for Astronomical Time Series

    NASA Astrophysics Data System (ADS)

    Protopapas, Pavlos

    2017-06-01

    The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods applicable to regular time series have not evolved to generalize for irregular time series. In this talk, I will describe two Recurrent Neural Network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering along with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations using the error estimates from astronomical light curves. In addition, we propose a new neural network architecture to remove correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to correctly set hyperparameters for a stable and performant solution: we circumvent the difficulty of manual tuning by optimizing ESN hyperparameters using Bayesian optimization with Gaussian Process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without needing an in-depth understanding of the tuning procedure.
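    An Echo State Network is simple enough to sketch directly: a fixed random reservoir scaled to a chosen spectral radius, driven by the input, with only the linear readout trained. This is a generic minimal ESN (here without the Bayesian hyperparameter optimization or ridge regularization the talk discusses), on a synthetic one-step-ahead prediction task.

```python
import numpy as np

rng = np.random.default_rng(7)

# Fixed random reservoir, rescaled to spectral radius 0.9 (a key hyperparameter).
N = 200
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, N)

# Task: one-step-ahead prediction of a sinusoidal "light curve".
t = np.arange(600)
u = np.sin(0.2 * t)

x = np.zeros(N)
states = np.zeros((len(u) - 1, N))
for k in range(len(u) - 1):
    x = np.tanh(W @ x + W_in * u[k])      # reservoir update
    states[k] = x

washout = 100                              # discard transient states
X = states[washout:]
y = u[washout + 1:]
w_out = np.linalg.lstsq(X, y, rcond=None)[0]   # train only the readout
pred = X @ w_out
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

    Because only `w_out` is trained, fitting is a single least-squares solve; the spectral radius, input scaling and washout length are exactly the hyperparameters the Bayesian optimization would tune.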

  4. A New Local Bipolar Autoassociative Memory Based on External Inputs of Discrete Recurrent Neural Networks With Time Delay.

    PubMed

    Zhou, Caigen; Zeng, Xiaoqin; Luo, Chaomin; Zhang, Huaguang

    In this paper, local bipolar auto-associative memories are presented based on discrete recurrent neural networks with a class of gain type activation function. The weight parameters of neural networks are acquired by a set of inequalities without the learning procedure. The global exponential stability criteria are established to ensure the accuracy of the restored patterns by considering time delays and external inputs. The proposed methodology is capable of effectively overcoming spurious memory patterns and achieving memory capacity. The effectiveness, robustness, and fault-tolerant capability are validated by simulated experiments.
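    The behaviour of a bipolar autoassociative memory, restoring a stored ±1 pattern from a corrupted probe, can be sketched with a classical Hebbian (Hopfield-style) network. Note this uses the standard outer-product rule, not the paper's inequality-based weight design, and omits time delays and external inputs.

```python
import numpy as np

rng = np.random.default_rng(8)

# Two bipolar (+1/-1) patterns stored by the Hebbian outer-product rule.
n = 64
patterns = rng.choice([-1, 1], size=(2, n))
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)                     # no self-connections

def recall(probe, steps=20):
    """Synchronous recall: repeated sign() updates drive the state
    toward a stored pattern (a fixed point of the dynamics)."""
    s = probe.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Corrupt 6 of 64 bits of the first pattern, then recall it.
probe = patterns[0].copy()
flipped = rng.choice(n, 6, replace=False)
probe[flipped] *= -1
restored = recall(probe)
overlap = float((restored == patterns[0]).mean())
```

    With only two stored patterns in 64 neurons, the corrupted probe lies well inside the basin of attraction and the original pattern is restored, the fault-tolerance property the paper's stability criteria are designed to guarantee.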

  5. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz-Rodriguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.

    In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, Neutron Spectrometry and Dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), is based on traditional iterative procedures, uses the SPUNIT iterative algorithm, and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and the responses of 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, Neutron Spectrometry and Dosimetry with Artificial Neural Networks (NSDann), is designed using neural network technology. The artificial intelligence approach of a neural network does not solve mathematical equations: by using the knowledge stored in the synaptic weights of a properly trained neural network, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, needing as input data only the count rates measured with a Bonner sphere system. The NSDUAZ and NSDann codes are similar in that they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities, and generate a full report in HTML format. They differ in that NSDUAZ was designed using classical iterative approaches and needs an initial guess spectrum to initiate the iterative procedure; in NSDUAZ, a programming routine was designed to calculate the responses of 7 IAEA survey meters using fluence-to-dose conversion coefficients. NSDann uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. 
Contrary to iterative procedures, the neural network approach makes it possible to reduce the count rates needed to unfold the neutron spectrum. To evaluate these codes, a computer tool called the Neutron Spectrometry and Dosimetry computer tool was designed. The results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.

  6. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-07-01

    In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, Neutron Spectrometry and Dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), is based on traditional iterative procedures, uses the SPUNIT iterative algorithm, and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and the responses of 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, Neutron Spectrometry and Dosimetry with Artificial Neural Networks (NSDann), is designed using neural network technology. The artificial intelligence approach of a neural network does not solve mathematical equations: by using the knowledge stored in the synaptic weights of a properly trained neural network, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, needing as input data only the count rates measured with a Bonner sphere system. The NSDUAZ and NSDann codes are similar in that they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities, and generate a full report in HTML format. They differ in that NSDUAZ was designed using classical iterative approaches and needs an initial guess spectrum to initiate the iterative procedure; in NSDUAZ, a programming routine was designed to calculate the responses of 7 IAEA survey meters using fluence-to-dose conversion coefficients. NSDann uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. 
Contrary to iterative procedures, the neural network approach makes it possible to reduce the count rates needed to unfold the neutron spectrum. To evaluate these codes, a computer tool called the Neutron Spectrometry and Dosimetry computer tool was designed. The results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.
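    The iterative unfolding idea, adjusting a guess spectrum until the detector counts it implies match the measured ones, can be sketched with an MLEM-style multiplicative update. This is an illustrative scheme on a toy 7-detector, 12-bin problem, not necessarily the SPUNIT algorithm, and the Gaussian response shapes are hypothetical.

```python
import numpy as np

# Toy setup: 7 Bonner-sphere-like detector responses (hypothetical Gaussians)
# over 12 energy bins, with noiseless counts from a known two-peak spectrum.
n_det, n_bins = 7, 12
j = np.arange(n_bins)
peaks = np.linspace(0, n_bins - 1, n_det)
R = np.exp(-0.5 * ((j - peaks[:, None]) / 2.5) ** 2)        # response matrix

phi_true = np.exp(-0.5 * ((j - 4) / 2.0) ** 2) \
         + 0.5 * np.exp(-0.5 * ((j - 9) / 1.5) ** 2)
counts = R @ phi_true                                        # measured counts

# MLEM-style multiplicative update from a flat initial guess: the spectrum
# stays non-negative and the predicted counts converge to the measurements.
phi = np.ones(n_bins)
for _ in range(2000):
    pred = R @ phi
    phi *= (R.T @ (counts / pred)) / R.sum(axis=0)

rel_err = float(np.max(np.abs(R @ phi - counts) / counts))
```

    With 7 measurements and 60 (here 12) bins the problem is ill-conditioned: many spectra reproduce the counts, which is why a good initial guess (NSDUAZ) or learned prior knowledge in synaptic weights (NSDann) matters.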

  7. Synergy and sustainability in rural procedural medicine: views from the coalface.

    PubMed

    Swayne, Andrew; Eley, Diann S

    2010-02-01

    The practice of rural and remote medicine in Australia entails many challenges, including a broad casemix and the remoteness of specialist support. Many rural practitioners employ advanced procedural skills in anaesthetics, surgery, obstetrics and emergency medicine, but the use of these skills has been declining over the last 20 years. This study explored the perceptions of rural general practitioners (GPs) on the current and future situation of procedural medicine. The qualitative results of data from a mixed-method design are reported. Free-response survey comments and semistructured interview transcripts were analysed by a framework analysis for major themes. General practices in rural and remote Queensland. Rural GPs in Rural and Remote Metropolitan Classification 4-7 areas of Queensland. The perceptions of rural GPs on the current and future situation of rural procedural medicine. Major concerns from the survey focused on closure of facilities and downgrading of services, cost and time to keep up skills, increasing litigation issues and changing attitudes of the public. Interviews designed to draw out solutions to help rectify the perceived circumstances highlighted two major themes: 'synergy' between the support from medical teams and community in ensuring 'sustainability' of services. This article presents a model of rural procedural practice where synergy between staff, resources and support networks represents the optimal way to deliver a non-metropolitan procedural service. The findings serve to remind educators and policy-makers that future planning for sustainability of rural procedural services must be broad-based and comprehensive.

  8. Resting-State Functional Connectivity in Individuals with Down Syndrome and Williams Syndrome Compared with Typically Developing Controls.

    PubMed

    Vega, Jennifer N; Hohman, Timothy J; Pryweller, Jennifer R; Dykens, Elisabeth M; Thornton-Wells, Tricia A

    2015-10-01

    The emergence of resting-state functional connectivity (rsFC) analysis, which examines temporal correlations of low-frequency (<0.1 Hz) blood oxygen level-dependent signal fluctuations between brain regions, has dramatically improved our understanding of the functional architecture of the typically developing (TD) human brain. This study examined rsFC in Down syndrome (DS) compared with another neurodevelopmental disorder, Williams syndrome (WS), and TD. Ten subjects with DS, 18 subjects with WS, and 40 subjects with TD each participated in a 3-Tesla MRI scan. We tested for group differences (DS vs. TD, DS vs. WS, and WS vs. TD) in between- and within-network rsFC connectivity for seven functional networks. For the DS group, we also examined associations between rsFC and other cognitive and genetic risk factors. In DS compared with TD, we observed higher levels of between-network connectivity in 6 out of 21 network pairs but no differences in within-network connectivity. Participants with WS showed lower levels of within-network connectivity and no significant differences in between-network connectivity relative to DS. Finally, our comparison between WS and TD controls revealed lower within-network connectivity in multiple networks and higher between-network connectivity in one network pair relative to TD controls. While preliminary due to modest sample sizes, our findings suggest a global difference in between-network connectivity in individuals with neurodevelopmental disorders compared with controls and that such a difference is exacerbated across many brain regions in DS. However, this alteration in DS does not appear to extend to within-network connections, and therefore, the altered between-network connectivity must be interpreted within the framework of an intact intra-network pattern of activity. In contrast, WS shows markedly lower levels of within-network connectivity in the default mode network and somatomotor network relative to controls. 
These findings warrant further investigation using a task-based procedure that may help disentangle the relationship between brain function and cognitive performance across the spectrum of neurodevelopmental disorders.
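The within- versus between-network quantities compared in studies like this reduce to block averages of a region-by-region correlation matrix. A minimal sketch, with invented network labels and random data standing in for real BOLD time series:

```python
import numpy as np

def network_connectivity(corr, labels):
    """Mean within- and between-network connectivity from a correlation matrix.

    corr   : (n_regions, n_regions) correlation matrix
    labels : network assignment for each region
    """
    labels = np.asarray(labels)
    nets = np.unique(labels)
    within, between = {}, {}
    for i, a in enumerate(nets):
        mask_a = labels == a
        block = corr[np.ix_(mask_a, mask_a)]
        n = mask_a.sum()
        # mean of off-diagonal entries within network a
        within[a] = (block.sum() - np.trace(block)) / (n * (n - 1))
        for b in nets[i + 1:]:
            mask_b = labels == b
            between[(a, b)] = corr[np.ix_(mask_a, mask_b)].mean()
    return within, between

rng = np.random.default_rng(0)
ts = rng.standard_normal((120, 6))       # 120 time points, 6 regions (toy data)
corr = np.corrcoef(ts.T)
w, b = network_connectivity(corr, ["DMN", "DMN", "DMN", "SMN", "SMN", "SMN"])
```

Group comparisons such as DS vs. TD then test these per-subject block averages across the 21 pairs of seven networks.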

  9. Concepts & Procedures. [SITE 2002 Section].

    ERIC Educational Resources Information Center

    Sarner, Ronald, Ed.; Mullick, Rosemary J., Ed.; Bauder, Deborah Y., Ed.

    This document contains the following full and short papers on concepts and procedures from the SITE (Society for Information Technology & Teacher Education) 2002 conference: "Exploring Minds Network" (Marino C. Alvarez and others); "Learning Communities: A Kaleidoscope of Ecological Designs" (Alain Breuleux and others);…

  10. A neural network approach for the blind deconvolution of turbulent flows

    NASA Astrophysics Data System (ADS)

    Maulik, R.; San, O.

    2017-11-01

    We present a single-layer feedforward artificial neural network architecture trained through a supervised learning approach for the deconvolution of flow variables from their coarse-grained computations, such as those encountered in large eddy simulations. We stress that the deconvolution procedure proposed in this investigation is blind, i.e., the deconvolved field is computed without any pre-existing information about the filtering procedure or kernel. This may be conceptually contrasted with the celebrated approximate deconvolution approaches, where a filter shape is predefined for an iterative deconvolution process. We demonstrate that the proposed blind deconvolution network performs exceptionally well in a priori testing of both two-dimensional Kraichnan and three-dimensional Kolmogorov turbulence and shows promise in forming the backbone of a physics-augmented data-driven closure for the Navier-Stokes equations.
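A conceptual sketch of this kind of training, not the authors' architecture or data: a single hidden layer maps a stencil of filtered values to the unfiltered value at the stencil centre, trained by plain gradient descent on a synthetic 1D signal. The filter used to build the training pairs is never supplied to the network at inference time, which is what makes the deconvolution "blind":

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1D field and its coarse-grained (box-filtered) version
x = np.cumsum(rng.standard_normal(2000))
x = (x - x.mean()) / x.std()
xf = np.convolve(x, np.ones(5) / 5, mode="same")   # filter is "unknown"

half = 3                                           # 7-point input stencil
X = np.array([xf[i - half:i + half + 1] for i in range(half, len(x) - half)])
y = x[half:len(x) - half]                          # target: unfiltered centre

W1 = rng.standard_normal((7, 20)) * 0.1; b1 = np.zeros(20)
W2 = rng.standard_normal(20) * 0.1; b2 = 0.0
lr = 1e-2

def forward(X):
    h = np.tanh(X @ W1 + b1)                       # single hidden layer
    return h, h @ W2 + b2

_, pred0 = forward(X)
mse0 = float(np.mean((pred0 - y) ** 2))            # error before training
for _ in range(500):
    h, pred = forward(X)
    err = (pred - y) / len(y)                      # scaled loss gradient
    dh = np.outer(err, W2) * (1 - h ** 2)          # backprop through tanh
    W2 -= lr * (h.T @ err); b2 -= lr * err.sum()
    W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(axis=0)
mse = float(np.mean((forward(X)[1] - y) ** 2))     # error after training
```

The trained mapping recovers fine-scale content from the filtered field without ever being told the filter width, mirroring the a priori test described in the abstract at toy scale.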

  11. Specialization and Universals in the Development of Reading Skill: How Chinese Research Informs a Universal Science of Reading

    PubMed Central

    Perfetti, Charles; Cao, Fan; Booth, James

    2014-01-01

    Understanding Chinese reading is important for identifying the universal aspects of reading, separated from those aspects that are specific to alphabetic writing or to English in particular. Chinese and alphabetic writing make different demands on reading and learning to read, despite reading procedures and their supporting brain networks that are partly universal. Learning to read accommodates the demands of a writing system through the specialization of brain networks that support word identification. This specialization increases with reading development, leading to differences in the brain networks for alphabetic and Chinese reading. We suggest that beyond reading procedures that are partly universal and partly writing-system specific, functional reading universals arise across writing systems in their adaptation to human cognitive abilities. PMID:24744605

  12. Introduction to Blueweb: A Decentralized Scatternet Formation Algorithm for Bluetooth Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Yu, Chih-Min; Huang, Chia-Chi

    In this letter, a decentralized scatternet formation algorithm called Blueweb is proposed. First, Blueweb uses a designated root to construct a tree-shaped subnet and propagates an integer variable k1, called the counter limit, as well as a constant k in its downstream direction to determine new roots. Each new root then asks its upstream master to start a return-connection procedure that converts the tree-shaped subnet into a web-shaped subnet for its immediate upstream root. At the same time, each new root repeats the same procedure as the root to build its own subnet until the whole scatternet is formed. Simulation results show that Blueweb achieves good network scalability and generates an efficient scatternet configuration for various sizes of Bluetooth ad hoc networks.

  13. An integrated pavement data management and feedback system (PAMS) : final report.

    DOT National Transportation Integrated Search

    1987-04-01

    This report discusses the implementation of a pavement condition rating (PCR) procedure to sample sections of the road network system. The resources needed are identified for such implementation. The uses of PCR data at the network and project level ...

  14. Standards for the Analysis and Processing of Surface-Water Data and Information Using Electronic Methods

    USGS Publications Warehouse

    Sauer, Vernon B.

    2002-01-01

    Surface-water computation methods and procedures are described in this report to provide standards from which a completely automated electronic processing system can be developed. To the greatest extent possible, the traditional U.S. Geological Survey (USGS) methodology and standards for streamflow data collection and analysis have been incorporated into these standards. Although USGS methodology and standards are the basis for this report, the report is applicable to other organizations doing similar work. The proposed electronic processing system allows field measurement data, including data stored on automatic field recording devices and data recorded by the field hydrographer (a person who collects streamflow and other surface-water data) in electronic field notebooks, to be input easily and automatically. A user of the electronic processing system easily can monitor the incoming data and verify and edit the data, if necessary. Input of the computational procedures, rating curves, shift requirements, and other special methods are interactive processes between the user and the electronic processing system, with much of this processing being automatic. Special computation procedures are provided for complex stations such as velocity-index, slope, control structures, and unsteady-flow models, such as the Branch-Network Dynamic Flow Model (BRANCH). Navigation paths are designed to lead the user through the computational steps for each type of gaging station (stage-only, stage-discharge, velocity-index, slope, rate-of-change in stage, reservoir, tide, structure, and hydraulic model stations). The proposed electronic processing system emphasizes the use of interactive graphics to provide good visual tools for unit values editing, rating curve and shift analysis, hydrograph comparisons, data-estimation procedures, data review, and other needs. 
Documentation, review, finalization, and publication of records are provided for with the electronic processing system, as well as archiving, quality assurance, and quality control.

  15. High-Performance Parallel Analysis of Coupled Problems for Aircraft Propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Park, K. C.; Gumaste, U.; Chen, P.-S.; Lesoinne, M.; Stern, P.

    1997-01-01

    Applications are described of high-performance computing methods to the numerical simulation of complete jet engines. The methodology focuses on the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion driven by structural displacements. The latter is treated by an ALE technique that models the fluid mesh motion as that of a fictitious mechanical network laid along the edges of near-field elements. New partitioned analysis procedures to treat this coupled three-component problem were developed. These procedures involved delayed corrections and subcycling, and have been successfully tested on several massively parallel computers, including the iPSC-860, Paragon XP/S and the IBM SP2. The NASA-sponsored ENG10 program was used for the global steady state analysis of the whole engine. This program uses a regular FV-multiblock-grid discretization in conjunction with circumferential averaging to include effects of blade forces, loss, combustor heat addition, blockage, bleeds and convective mixing. A load-balancing preprocessor for parallel versions of ENG10 was developed as well as the capability for the first full 3D aeroelastic simulation of a multirow engine stage. This capability was tested on the IBM SP2 parallel supercomputer at NASA Ames.

  16. A closer look at cross-validation for assessing the accuracy of gene regulatory networks and models.

    PubMed

    Tabe-Bordbar, Shayan; Emad, Amin; Zhao, Sihai Dave; Sinha, Saurabh

    2018-04-26

    Cross-validation (CV) is a technique to assess the generalizability of a model to unseen data. This technique relies on assumptions that may not be satisfied when studying genomics datasets. For example, random CV (RCV) assumes that a randomly selected set of samples, the test set, well represents unseen data. This assumption does not hold when samples are obtained from different experimental conditions and the goal is to learn regulatory relationships among the genes that generalize beyond the observed conditions. In this study, we investigated how the CV procedure affects the assessment of supervised learning methods used to learn gene regulatory networks (and in other applications). We compared the performance of a regression-based method for gene expression prediction estimated using RCV with that estimated using a clustering-based CV (CCV) procedure. Our analysis illustrates that RCV can produce over-optimistic estimates of the model's generalizability compared to CCV. Next, we defined the 'distinctness' of the test set from the training set and showed that this measure is predictive of the performance of the regression method. Finally, we introduced a simulated annealing method to construct partitions with gradually increasing distinctness and showed that the performance of different gene expression prediction methods can be better evaluated using this method.
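The contrast between RCV and CCV can be sketched as follows. This is an illustration of the general idea, not the paper's pipeline: a tiny k-means (with deterministic farthest-point initialisation) stands in for the clustering step, and each CCV fold holds out one whole cluster so the test set represents conditions unseen during training:

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Plain Lloyd k-means with farthest-point initialisation."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])     # farthest point so far
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def clustered_cv_splits(X, k):
    """Leave-one-cluster-out folds: test data comes from unseen conditions."""
    labels = kmeans(X, k)
    for j in np.unique(labels):
        yield np.where(labels != j)[0], np.where(labels == j)[0]

rng = np.random.default_rng(2)
# three well-separated synthetic "conditions", 20 samples x 5 genes each
X = np.vstack([rng.normal(loc, 0.3, size=(20, 5)) for loc in (0.0, 2.0, 4.0)])
splits = list(clustered_cv_splits(X, k=3))
```

Under RCV the three conditions would be mixed into every training fold, inflating performance estimates; under these splits a model must extrapolate to a condition it never saw.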

  17. Spatial interpolation of solar global radiation

    NASA Astrophysics Data System (ADS)

    Lussana, C.; Uboldi, F.; Antoniazzi, C.

    2010-09-01

    Solar global radiation is defined as the radiant flux incident onto an area element of the terrestrial surface. Its direct knowledge plays a crucial role in many applications, from agrometeorology to environmental meteorology. ARPA Lombardia's meteorological network includes about one hundred pyranometers, mostly distributed in the southern part of the Alps and in the centre of the Po Plain. A statistical interpolation method based on an implementation of Optimal Interpolation is applied to the hourly averages of the solar global radiation observations measured by ARPA Lombardia's network. The background field is obtained using SMARTS (The Simple Model of the Atmospheric Radiative Transfer of Sunshine, Gueymard, 2001). The model is initialised by assuming clear-sky conditions and takes into account the solar position and orography-related effects (shade and reflection). The interpolation of pyranometric observations introduces into the analysis fields information about the presence and influence of clouds. A particular effort is devoted to preventing observations affected by large errors of different kinds (representativity errors, systematic errors, gross errors) from entering the analysis procedure. The inclusion of direct cloud information from satellite observations is also planned.
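The Optimal Interpolation analysis step combines the clear-sky background with the pyranometer observations through the standard update x_a = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b). A minimal sketch with assumed covariances and invented numbers, not ARPA Lombardia's actual configuration:

```python
import numpy as np

def optimal_interpolation(xb, y, H, B, R):
    """One OI analysis step: blend background xb with observations y."""
    innovation = y - H @ xb                 # observation minus background
    S = H @ B @ H.T + R                     # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)          # gain matrix
    return xb + K @ innovation

# 5 grid points along a transect; 2 pyranometers at points 1 and 3
n = 5
xb = np.full(n, 500.0)                      # clear-sky background, W/m^2
H = np.zeros((2, n)); H[0, 1] = H[1, 3] = 1.0
i = np.arange(n)
# Gaussian background-error correlations, length scale of one grid step
B = 100.0 * np.exp(-0.5 * (i[:, None] - i[None, :]) ** 2)
R = 25.0 * np.eye(2)                        # observation-error covariance
y = np.array([350.0, 420.0])                # cloud-reduced observations
xa = optimal_interpolation(xb, y, H, B, R)
```

At the observed points the analysis moves from the clear-sky value toward the measurement (by an amount set by the B/R ratio), and the spatial correlations in B spread the cloud signal to neighbouring unobserved grid points.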

  18. A Unique Four-Hub Protein Cluster Associates to Glioblastoma Progression

    PubMed Central

    Simeone, Pasquale; Trerotola, Marco; Urbanella, Andrea; Lattanzio, Rossano; Ciavardelli, Domenico; Di Giuseppe, Fabrizio; Eleuterio, Enrica; Sulpizio, Marilisa; Eusebi, Vincenzo; Pession, Annalisa; Piantelli, Mauro; Alberti, Saverio

    2014-01-01

    Gliomas are the most frequent brain tumors. Among them, glioblastomas are malignant and largely resistant to available treatments. Histopathology is the gold standard for classification and grading of brain tumors. However, brain tumor heterogeneity is remarkable, and histopathology procedures for glioma classification remain unsatisfactory for predicting disease course as well as response to treatment. Proteins that tightly associate with cancer differentiation and progression can bear important prognostic information. Here, we describe the identification of protein clusters differentially expressed in high-grade versus low-grade gliomas. Tissue samples from 25 high-grade tumors, 10 low-grade tumors and 5 normal brain cortices were analyzed by 2D-PAGE and proteomic profiling by mass spectrometry. This led to the identification of 48 protein markers differentially expressed between tumors and normal samples. Protein clustering by multivariate analyses (PCA and PLS-DA) provided discrimination between pathological samples to an unprecedented extent, and revealed a unique network of deranged proteins. We discovered a novel glioblastoma control module centered on four major network hubs: Huntingtin, HNF4α, c-Myc and 14-3-3ζ. Immunohistochemistry, western blotting and unbiased proteome-wide meta-analysis revealed altered expression of this glioblastoma control module in human glioma samples as compared with normal controls. Moreover, the four-hub network was found to cross-talk with both p53 and EGFR pathways. In summary, the findings of this study indicate the existence of a unifying signaling module controlling glioblastoma pathogenesis and malignant progression, and suggest novel targets for development of diagnostic and therapeutic procedures. PMID:25050814

  19. Standardized patient walkthroughs in the National Drug Abuse Treatment Clinical Trials Network: common challenges to protocol implementation.

    PubMed

    Fussell, Holly E; Kunkel, Lynn E; McCarty, Dennis; Lewy, Colleen S

    2011-09-01

    Training research staff to implement clinical trials occurring in community-based addiction treatment programs presents unique challenges. Standardized patient walkthroughs of study procedures may enhance training and protocol implementation. Examine and discuss cross-site and cross-study challenges of participant screening and data collection procedures identified during standardized patient walkthroughs of multi-site clinical trials. Actors portrayed clients and "walked through" study procedures with protocol research staff. The study completed 57 walkthroughs during implementation of 4 clinical trials. Observers and walkthrough participants identified three areas of concern (consent procedures, screening and assessment processes, and protocol implementation) and made suggestions for resolving the concerns. Standardized patient walkthroughs capture issues with study procedures previously unidentified with didactic training or unscripted rehearsals. Clinical trials within the National Drug Abuse Treatment Clinical Trials Network are conducted in addiction treatment centers that vary on multiple dimensions. Based on walkthrough observations, the national protocol team and local site leadership modify standardized operating procedures and resolve cross-site problems prior to recruiting study participants. The standardized patient walkthrough improves consistency across study sites and reduces potential site variation in study outcomes.

  20. 40-Gbps optical backbone network deep packet inspection based on FPGA

    NASA Astrophysics Data System (ADS)

    Zuo, Yuan; Huang, Zhiping; Su, Shaojing

    2014-11-01

    In the era of information, big data brings problems such as high-speed transmission, storage, and real-time analysis and processing. As the principal medium for data transmission, the Internet is a significant part of big data processing research. With the large-scale use of the Internet, network data streams are increasing rapidly. Speeds on the main fiber-optic communication links have reached 40 Gbps, and even 100 Gbps, so data on the optical backbone network shows the features of massive data. Generally, data services on the optical backbone network are provided via IP packets carried over SDH (Synchronous Digital Hierarchy); the method of mapping IP packets directly into the SDH payload is named POS (Packet over SDH) technology. Aiming at the problem of real-time processing of high-speed massive data, this paper designs a processing platform based on ATCA for 40-Gbps POS signal data-stream recognition and packet content capture, with an FPGA as the central processing element. The platform offers pre-processing for clustering algorithms, service traffic identification, and data mining for subsequent big data storage and analysis with high efficiency. The operational procedure is also proposed in this paper. Four channels of 10-Gbps POS signals, decomposed by the FPGA-based analysis module, are input to the flow classification module and a TCAM-based pattern matching component. Based on the payload length and net-flow properties, buffer management is added to the platform to retain key flow information. According to the data stream analysis, DPI (deep packet inspection), and flow load balancing, the traffic is transmitted to the back-end machine through Gigabit Ethernet ports on the back board. Practice shows that the proposed platform is superior to traditional applications based on ASICs and NPs.
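The flow classification and pattern matching stages have a simple software analogue. This sketch uses hypothetical signatures and flow keys; in the platform itself these lookups run in FPGA logic and TCAM, not in software:

```python
from collections import defaultdict

# Hypothetical application signatures: payload prefix -> protocol label
SIGNATURES = {b"GET ": "http", b"\x16\x03": "tls"}

# Flow table: packets grouped by their 5-tuple
flows = defaultdict(list)

def classify(src, dst, sport, dport, proto, payload):
    """Record a packet under its 5-tuple flow and label it by signature."""
    label = next((name for sig, name in SIGNATURES.items()
                  if payload.startswith(sig)), "unknown")
    flows[(src, dst, sport, dport, proto)].append((label, len(payload)))
    return label

classify("10.0.0.1", "10.0.0.2", 40000, 80, "tcp", b"GET /index.html HTTP/1.1")
```

The TCAM performs the equivalent of the signature scan in a single parallel lookup per packet, which is what makes line-rate DPI feasible at 40 Gbps.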
