Sample records for network model version

  1. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.
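
    The abstract above does not include code; as a rough illustration of the portable master/worker pattern that a Linda-style tuple space expresses (a pool of task tuples consumed by workers, with result tuples returned to the master), here is a minimal Python sketch using multiprocessing queues. The scoring function, task granularity, and data are placeholders, not the authors' implementation.

```python
# Minimal master/worker task farm, sketching the portable parallelization
# pattern (task pool in, results out) that a Linda-style tuple space provides.
# The "score" function is a stand-in for a real sequence-comparison kernel.
from multiprocessing import Process, Queue

def score(query, subject):
    # Placeholder similarity: count of matching positions (not a real aligner).
    return sum(a == b for a, b in zip(query, subject))

def worker(tasks, results):
    for qid, query, sid, subject in iter(tasks.get, None):  # None is the stop signal
        results.put((qid, sid, score(query, subject)))

if __name__ == "__main__":
    database = {"s1": "ACGTACGT", "s2": "ACGTTTTT", "s3": "GGGGACGT"}
    queries = {"q1": "ACGTACGA"}
    tasks, results = Queue(), Queue()
    procs = [Process(target=worker, args=(tasks, results)) for _ in range(4)]
    for p in procs:
        p.start()
    for qid, q in queries.items():          # master: emit one task per query/subject pair
        for sid, s in database.items():
            tasks.put((qid, q, sid, s))
    for _ in procs:                         # one stop signal per worker
        tasks.put(None)
    hits = [results.get() for _ in range(len(queries) * len(database))]
    for p in procs:
        p.join()
    print(sorted(hits, key=lambda t: -t[2]))
```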

  2. Modeling Evolution on Nearly Neutral Network Fitness Landscapes

    NASA Astrophysics Data System (ADS)

    Yakushkina, Tatiana; Saakian, David B.

    2017-08-01

    To describe virus evolution, it is necessary to define a fitness landscape. In this article, we consider microscopic models with an advanced version of the neutral network fitness landscape. In this problem setting, we suppose the fitness difference between one-point-mutation neighbors to be small. We construct a modification of the Wright-Fisher model that is related to ordinary infinite-population models with a nearly neutral network fitness landscape in the large-population limit. From the microscopic models in the realistic sequence space, we derive two versions of nearly neutral network models: with sinks and without sinks. We claim that the suggested model describes the evolutionary dynamics of RNA viruses better than the traditional Wright-Fisher model with few sequences.
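
    A toy version of the dynamics described (not the authors' model) can be sketched as discrete Wright-Fisher generations on a small binary sequence space whose one-point-mutation neighbors differ only slightly in fitness; the sequence length, mutation rate, and size of the fitness perturbation below are arbitrary illustrative choices.

```python
# Toy Wright-Fisher dynamics on binary sequences of length L, with a
# "nearly neutral" landscape: a neutral network (fitness ~1) perturbed by
# small random differences between one-point-mutation neighbors.
import numpy as np

rng = np.random.default_rng(0)
L, N, mu, T = 6, 2000, 0.01, 300          # sequence length, population size, per-site mutation rate, generations
genotypes = np.arange(2 ** L)
fitness = 1.0 + 1e-3 * rng.standard_normal(2 ** L)   # small fitness differences between neighbors

def mutate(g):
    for site in range(L):
        if rng.random() < mu:
            g ^= 1 << site                 # flip each mutated site
    return g

pop = np.zeros(2 ** L, dtype=int)
pop[0] = N                                 # start monomorphic
for _ in range(T):
    w = fitness * pop                      # selection: genotype sampling weights
    parents = rng.choice(genotypes, size=N, p=w / w.sum())
    pop = np.bincount([mutate(int(g)) for g in parents], minlength=2 ** L)

mean_w = float(np.dot(fitness, pop) / N)
print("mean fitness after", T, "generations:", round(mean_w, 5))
```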

  3. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed Central

    Nadkarni, P. M.; Miller, P. L.

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations. PMID:1807632

  4. Issues in Semantic Memory: A Response to Glass and Holyoak. Technical Report No. 101.

    ERIC Educational Resources Information Center

    Shoben, Edward J.; And Others

    Glass and Holyoak (1975) have raised two issues related to the distinction between set-theoretic and network theories of semantic memory, contending that: (a) their version of a network theory, the Marker Search model, is conceptually and empirically superior to the Feature Comparison model version of a set-theoretic theory; and (b) the contrast…

  5. LANL* V2.0: global modeling and validation

    NASA Astrophysics Data System (ADS)

    Koller, J.; Zaharia, S.

    2011-03-01

    We describe in this paper the new version of LANL*. Just like the previous version, this new version V2.0 of LANL* is an artificial neural network (ANN) for calculating the magnetic drift invariant, L*, that is used for modeling radiation belt dynamics and for other space weather applications. We have implemented the following enhancements in the new version: (1) we have removed the limitation to geosynchronous orbit and the model can now be used for any type of orbit. (2) The new version is based on the improved magnetic field model by Tsyganenko and Sitnov (2005) (TS05) instead of the older model by Tsyganenko et al. (2003). We have validated the model and compared our results to L* calculations with the TS05 model based on ephemerides for CRRES, Polar, GPS, a LANL geosynchronous satellite, and a virtual RBSP type orbit. We find that the neural network performs very well for all these orbits with an error of typically ΔL* < 0.2, which corresponds to an error of 3% at geosynchronous orbit. This new LANL* V2.0 artificial neural network is orders of magnitude faster than traditional numerical field line integration techniques with the TS05 model. It has applications to real-time radiation belt forecasting, analysis of data sets involving decades of satellite observations, and other problems in space weather.
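
    As an illustration of the general approach (not the LANL* code or its real inputs), a neural-network surrogate can be fitted to precomputed drift-invariant values; the sketch below trains a small scikit-learn MLP on hypothetical features, with synthetic targets standing in for field-line-integration results.

```python
# Sketch of training a neural-network surrogate for an expensive magnetic
# drift invariant calculation. Features and targets are synthetic stand-ins;
# in the real application the targets come from field-line integration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
X = rng.uniform(-1, 1, size=(n, 5))          # hypothetical inputs: position plus a few field-model parameters
y = 6.6 * (1 + 0.1 * X[:, 0]) - 0.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(n)  # fake L* values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
err = np.abs(ann.predict(X_te) - y_te)
print("median |ΔL*| on held-out samples:", round(float(np.median(err)), 3))
```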

  6. APINetworks Java. A Java approach to the efficient treatment of large-scale complex networks

    NASA Astrophysics Data System (ADS)

    Muñoz-Caro, Camelia; Niño, Alfonso; Reyes, Sebastián; Castillo, Miriam

    2016-10-01

    We present a new version of the core structural package of our Application Programming Interface, APINetworks, for the treatment of complex networks in arbitrary computational environments. The new version is written in Java and presents several advantages over the previous C++ version: the portability of the Java code, the ease of implementing object-oriented designs, and the simplicity of memory management. In addition, new data structures are introduced for storing the sets of nodes and edges. Also, by resorting to the different garbage collectors currently available in the JVM, the Java version is much more efficient than the C++ one with respect to memory management. In particular, the G1 collector is the most efficient one because of the parallel execution of G1 and the Java application. Using G1, APINetworks Java outperforms the C++ version and the well-known NetworkX and JGraphT packages in the building and BFS traversal of linear and complete networks. The better memory management of the present version allows for the modeling of much larger networks.
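
    For reference, the benchmark task described (building linear and complete networks and traversing them breadth-first) looks like the following in NetworkX, one of the packages the authors compare against; the graph sizes are illustrative only, and the timings say nothing about the Java or C++ implementations.

```python
# The benchmark task from the abstract, expressed with NetworkX:
# build a linear (path) and a complete network, then do a BFS traversal.
import time
import networkx as nx

def bfs_benchmark(g, source=0):
    t0 = time.perf_counter()
    reached = sum(1 for _ in nx.bfs_edges(g, source))   # consume the BFS edge iterator
    return reached + 1, time.perf_counter() - t0        # nodes reached, elapsed seconds

for name, g in [("path", nx.path_graph(100_000)), ("complete", nx.complete_graph(2_000))]:
    nodes, secs = bfs_benchmark(g)
    print(f"{name}: {g.number_of_nodes()} nodes, BFS reached {nodes} in {secs:.3f} s")
```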

  7. Dynamical influence processes on networks: general theory and applications to social contagion.

    PubMed

    Harris, Kameron Decker; Danforth, Christopher M; Dodds, Peter Sheridan

    2013-08-01

    We study binary state dynamics on a network where each node acts in response to the average state of its neighborhood. By allowing varying amounts of stochasticity in both the network and node responses, we find different outcomes in random and deterministic versions of the model. In the limit of a large, dense network, however, we show that these dynamics coincide. We construct a general mean-field theory for random networks and show this predicts that the dynamics on the network is a smoothed version of the average response function dynamics. Thus, the behavior of the system can range from steady state to chaotic depending on the response functions, network connectivity, and update synchronicity. As a specific example, we model the competing tendencies of imitation and nonconformity by incorporating an off-threshold into standard threshold models of social contagion. In this way, we attempt to capture important aspects of fashions and societal trends. We compare our theory to extensive simulations of this "limited imitation contagion" model on Poisson random graphs, finding agreement between the mean-field theory and stochastic simulations.
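
    A minimal sketch of the kind of dynamics described: a threshold response with an added off-threshold, so that nodes imitate moderately active neighborhoods but reject near-unanimous ones, simulated on a Poisson (Erdős-Rényi) random graph with synchronous updates. The threshold values and update details are illustrative assumptions, not the paper's exact specification.

```python
# Binary-state dynamics where each node responds to the average state of its
# neighborhood: active when neighborhood activity exceeds an on-threshold but
# inactive again above an off-threshold (nonconformity).
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
g = nx.erdos_renyi_graph(n=2000, p=5 / 2000, seed=2)    # Poisson random graph, mean degree ~5
on_thr, off_thr = 0.2, 0.8                              # illustrative response-function thresholds
state = (rng.random(g.number_of_nodes()) < 0.05).astype(int)

def response(frac_on):
    # Limited imitation: respond only to moderately active neighborhoods.
    return 1 if on_thr <= frac_on < off_thr else 0

for _ in range(50):                                     # synchronous updates
    new_state = state.copy()
    for v in g:
        nbrs = list(g.neighbors(v))
        if nbrs:
            new_state[v] = response(np.mean(state[nbrs]))
    state = new_state

print("final fraction active:", state.mean())
```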

  8. Reinterpretation of the friendship paradox

    NASA Astrophysics Data System (ADS)

    Fu, Jingcheng; Wu, Jianliang

    The friendship paradox (FP) is a sociological phenomenon stating that most people have fewer friends than their friends do. That is to say, in a social network, the number of friends that most individuals have is smaller than the average number of friends of their friends. This has been verified by Feld. We call this interpretation the mean-value version. But is it the best choice for portraying the paradox? In this paper, we propose a probability method to reinterpret this paradox, and we illustrate that the explanation using our method is more persuasive. An individual satisfies the FP if his (her) randomly chosen friend has more friends than him (her) with probability not less than 1/2. Comparing the ratio of nodes satisfying the FP in networks, rp, we can see that the probability version is stronger than the mean-value version in real networks, both online and offline. We also show some results about the effects of several parameters on rp in random network models. Most importantly, rp is a quadratic polynomial of the power-law exponent γ in the Price model, and rp is higher when the average clustering coefficient is between 0.4 and 0.5 in the Petter-Beom (PB) model. The introduction of the probability method to the FP can shed light on the network structure of complex networks, especially social networks.
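
    The probability version can be computed directly: a node satisfies the FP if at least half of its friends have strictly more friends than it does. The sketch below, using a Barabási-Albert graph as a stand-in for a real social network, contrasts rp with the fraction of nodes satisfying the mean-value version.

```python
# Friendship paradox, probability version: node i satisfies the FP if a
# uniformly chosen friend has more friends than i with probability >= 1/2.
import networkx as nx

g = nx.barabasi_albert_graph(n=10_000, m=3, seed=0)   # stand-in for a real social network
deg = dict(g.degree())

def satisfies_fp_prob(i):
    nbrs = list(g.neighbors(i))
    return sum(deg[j] > deg[i] for j in nbrs) >= 0.5 * len(nbrs)

def satisfies_fp_mean(i):
    nbrs = list(g.neighbors(i))
    return sum(deg[j] for j in nbrs) / len(nbrs) > deg[i]

nodes = [i for i in g if deg[i] > 0]
rp = sum(satisfies_fp_prob(i) for i in nodes) / len(nodes)
rm = sum(satisfies_fp_mean(i) for i in nodes) / len(nodes)
print(f"probability version r_p = {rp:.3f}, mean-value version = {rm:.3f}")
```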

  9. Learning Orthographic Structure With Sequential Generative Neural Networks.

    PubMed

    Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco

    2016-04-01

    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain. Copyright © 2015 Cognitive Science Society, Inc.
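
    The non-connectionist n-gram baseline mentioned in the abstract is straightforward to sketch; the toy corpus below is a placeholder for the English monosyllable training set, and the model simply estimates letter-bigram transition probabilities and samples pseudowords from them.

```python
# Bigram (n-gram) model of letter sequences: estimate P(next letter | current
# letter) from a word list, then generate pseudowords by sampling. The corpus
# here is a tiny placeholder for the English monosyllable training set.
import random
from collections import defaultdict, Counter

corpus = ["cat", "can", "cap", "hat", "ham", "rat", "ran", "map", "man", "tan"]
BOUND = "#"                                   # word-boundary symbol

counts = defaultdict(Counter)
for w in corpus:
    seq = BOUND + w + BOUND
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1

def sample_pseudoword(rng, max_len=8):
    out, ch = [], BOUND
    while len(out) < max_len:
        nxt = rng.choices(list(counts[ch]), weights=list(counts[ch].values()))[0]
        if nxt == BOUND:                      # boundary symbol ends the word
            break
        out.append(nxt)
        ch = nxt
    return "".join(out)

print([sample_pseudoword(random.Random(seed)) for seed in range(5)])
```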

  10. A Model of Yeast Cell-Cycle Regulation Based on a Standard Component Modeling Strategy for Protein Regulatory Networks.

    PubMed

    Laomettachit, Teeraphan; Chen, Katherine C; Baumann, William T; Tyson, John J

    2016-01-01

    To understand the molecular mechanisms that regulate cell cycle progression in eukaryotes, a variety of mathematical modeling approaches have been employed, ranging from Boolean networks and differential equations to stochastic simulations. Each approach has its own characteristic strengths and weaknesses. In this paper, we propose a "standard component" modeling strategy that combines advantageous features of Boolean networks, differential equations and stochastic simulations in a framework that acknowledges the typical sorts of reactions found in protein regulatory networks. Applying this strategy to a comprehensive mechanism of the budding yeast cell cycle, we illustrate the potential value of standard component modeling. The deterministic version of our model reproduces the phenotypic properties of wild-type cells and of 125 mutant strains. The stochastic version of our model reproduces the cell-to-cell variability of wild-type cells and the partial viability of the CLB2-dbΔ clb5Δ mutant strain. Our simulations show that mathematical modeling with "standard components" can capture in quantitative detail many essential properties of cell cycle control in budding yeast.

  11. A Model of Yeast Cell-Cycle Regulation Based on a Standard Component Modeling Strategy for Protein Regulatory Networks

    PubMed Central

    Laomettachit, Teeraphan; Chen, Katherine C.; Baumann, William T.

    2016-01-01

    To understand the molecular mechanisms that regulate cell cycle progression in eukaryotes, a variety of mathematical modeling approaches have been employed, ranging from Boolean networks and differential equations to stochastic simulations. Each approach has its own characteristic strengths and weaknesses. In this paper, we propose a “standard component” modeling strategy that combines advantageous features of Boolean networks, differential equations and stochastic simulations in a framework that acknowledges the typical sorts of reactions found in protein regulatory networks. Applying this strategy to a comprehensive mechanism of the budding yeast cell cycle, we illustrate the potential value of standard component modeling. The deterministic version of our model reproduces the phenotypic properties of wild-type cells and of 125 mutant strains. The stochastic version of our model reproduces the cell-to-cell variability of wild-type cells and the partial viability of the CLB2-dbΔ clb5Δ mutant strain. Our simulations show that mathematical modeling with “standard components” can capture in quantitative detail many essential properties of cell cycle control in budding yeast. PMID:27187804

  12. LANL*V2.0: global modeling and validation

    NASA Astrophysics Data System (ADS)

    Koller, J.; Zaharia, S.

    2011-08-01

    We describe in this paper the new version of LANL*, an artificial neural network (ANN) for calculating the magnetic drift invariant L*. This quantity is used for modeling radiation belt dynamics and for space weather applications. We have implemented the following enhancements in the new version: (1) we have removed the limitation to geosynchronous orbit and the model can now be used for a much larger region. (2) The new version is based on the improved magnetic field model by Tsyganenko and Sitnov (2005) (TS05) instead of the older model by Tsyganenko et al. (2003). We have validated the model and compared our results to L* calculations with the TS05 model based on ephemerides for CRRES, Polar, GPS, a LANL geosynchronous satellite, and a virtual RBSP type orbit. We find that the neural network performs very well for all these orbits with an error of typically ΔL* < 0.2, which corresponds to an error of 3 % at geosynchronous orbit. This new LANL* V2.0 artificial neural network is orders of magnitude faster than traditional numerical field line integration techniques with the TS05 model. It has applications to real-time radiation belt forecasting, analysis of data sets involving decades of satellite observations, and other problems in space weather.

  13. A simple model clarifies the complicated relationships of complex networks

    PubMed Central

    Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi

    2014-01-01

    Real-world networks such as the Internet and WWW have many common traits. Until now, hundreds of models have been proposed to characterize these traits in order to understand the networks. Because different models used very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra-small-world, Delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks are generated. With this model and the revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method to model complex networks from the viewpoint of optimisation. PMID:25160506

  14. Distributed Energy Resources Customer Adoption Model Plus (DER-CAM+), Version 1.0.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stadler, Michael; Cardoso, Goncalo; Mashayekh, Salman

    DER-CAM+ v1.0.0 is internally referred to as DER-CAM v5.0.0. Due to fundamental changes from previous versions, a new name (DER-CAM+) will be used for DER-CAM version 5.0.0 and above. DER-CAM+ is a Decision Support Tool for Decentralized Energy Systems that has been tailored for microgrid applications, and now explicitly considers electrical and thermal networks within a microgrid, ancillary services, and operating reserve. DER-CAM was initially created as an exclusively economic energy model, able to find the cost-minimizing combination and operation profile of a set of DER technologies that meet the energy loads of a building or microgrid for a typical test year. The previous versions of DER-CAM were formulated without modeling the electrical/thermal networks within the microgrid, and hence used aggregate single-node approaches. Furthermore, they were not able to consider operating reserve constraints and microgrid revenue streams from participating in ancillary services markets. This new version, DER-CAM+, considers these issues by including electrical power flow and thermal flow equations and constraints in the microgrid, revenues from various ancillary services markets, and operating reserve constraints.

  15. LANES 1 Users' Guide

    NASA Technical Reports Server (NTRS)

    Jordan, J.

    1985-01-01

    This document is intended for users of the Local Area Network Extensible Simulator, version I. This simulator models the performance of a Fiber Optic network under a variety of loading conditions and network characteristics. The options available to the user for defining the network conditions are described in this document. Computer hardware and software requirements are also defined.

  16. Knowledge diffusion in complex networks by considering time-varying information channels

    NASA Astrophysics Data System (ADS)

    Zhu, He; Ma, Jing

    2018-03-01

    In this article, based on a model of epidemic spreading, we explore the knowledge diffusion process with an innovation mechanism on complex networks by considering time-varying information channels. To cover the knowledge diffusion process in homogeneous and heterogeneous networks, two types of networks (the BA network and the ER network) are investigated. Mean-field theory is used to derive the knowledge diffusion threshold theoretically. Numerical simulation demonstrates that the knowledge diffusion threshold is almost linearly correlated with the mean of the activity rate. In addition, under the influence of the activity rate and distinct from the classic Susceptible-Infected-Susceptible (SIS) model, the density of knowers grows almost linearly with the spreading rate. Finally, in consideration of the ubiquitous mechanism of innovation, we further study the evolution of knowledge in our proposed model. The results suggest that, compared with the effect of the spreading rate, the average knowledge version of the population is affected more by the innovation parameter and the mean of the activity rate. Furthermore, in the BA network, the average knowledge version of individuals with higher degree is always newer than that of those with lower degree.
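
    An illustrative simulation of this kind of process (not the paper's exact model): knowers spread knowledge only through channels that are open when both endpoints are active, with per-node activity rates, and occasionally innovate, incrementing their knowledge "version". All parameter values below are arbitrary.

```python
# Knowledge diffusion with time-varying information channels: a channel on an
# edge is open only when both endpoints are active (per-node activity rates),
# and knowers occasionally innovate, bumping their knowledge version.
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
g = nx.barabasi_albert_graph(1000, 3, seed=3)           # heterogeneous (BA) network
n = g.number_of_nodes()
activity = rng.uniform(0.1, 0.5, n)                     # per-node activity rates
beta, innov = 0.2, 0.01                                 # spreading rate, innovation probability
version = np.full(n, -1)                                # -1 = not yet a knower
version[rng.choice(n, 5, replace=False)] = 0            # seed knowers with version 0

for _ in range(200):
    active = rng.random(n) < activity                   # channels available this step
    for u, v in g.edges():
        if active[u] and active[v]:
            if version[u] > version[v] and rng.random() < beta:
                version[v] = version[u]
            elif version[v] > version[u] and rng.random() < beta:
                version[u] = version[v]
    knowers = version >= 0
    version[knowers & (rng.random(n) < innov)] += 1     # innovation: a newer version appears

print("density of knowers:", (version >= 0).mean(),
      "average version among knowers:", version[version >= 0].mean().round(2))
```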

  17. Neural network model for growth of Salmonella serotypes in ground chicken subjected to temperature abuse during cold storage for application in HACCP and risk assessment

    USDA-ARS?s Scientific Manuscript database

    With the advent of commercial software applications, it is now easy to develop neural network models for predictive microbiology applications. However, different versions of the model may be required to meet the divergent needs of model users. In the current study, the commercial software applicat...

  18. Probabilistic estimation of dune retreat on the Gold Coast, Australia

    USGS Publications Warehouse

    Palmsten, Margaret L.; Splinter, Kristen D.; Plant, Nathaniel G.; Stockdon, Hilary F.

    2014-01-01

    Sand dunes are an important natural buffer between storm impacts and development backing the beach on the Gold Coast of Queensland, Australia. The ability to forecast dune erosion at a prediction horizon of days to a week would allow efficient and timely response to dune erosion in this highly populated area. Towards this goal, we modified an existing probabilistic dune erosion model for use on the Gold Coast. The original model was trained using observations of dune response from Hurricane Ivan on Santa Rosa Island, Florida, USA (Plant and Stockdon, 2012, Probabilistic prediction of barrier-island response to hurricanes, Journal of Geophysical Research, 117(F3), F03015). The model relates dune position change to pre-storm dune elevations, dune widths, and beach widths, along with storm surge and run-up using a Bayesian network. The Bayesian approach captures the uncertainty of inputs and predictions through the conditional probabilities between variables. Three versions of the barrier island response Bayesian network were tested for use on the Gold Coast. One network has the same structure as the original and was trained with the Santa Rosa Island data. The second network has a modified design and was trained using only pre- and post-storm data from 1988-2009 for the Gold Coast. The third version of the network has the same design as the second version of the network and was trained with the combined data from the Gold Coast and Santa Rosa Island. The two networks modified for use on the Gold Coast hindcast dune retreat with equal accuracy. Both networks explained 60% of the observed dune retreat variance, which is comparable to the skill observed by Plant and Stockdon (2012) in the initial Bayesian network application at Santa Rosa Island. The new networks improved predictions relative to application of the original network on the Gold Coast. Dune width was the most important morphologic variable in hindcasting dune retreat, while hydrodynamic variables, surge and run-up elevation, were also important.

  19. Sophia Daemon Version 12

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-08-09

    Sophia Daemon Version 12 contains the code that is exclusively used by the ‘sophiad’ application. It runs as a service on a Linux host and analyzes network traffic obtained from libpcap and produces a network fingerprint based on hosts and channels. Sophia Daemon Version 12 can, if desired by the user, produce alerts when its fingerprint changes. Sophia Daemon Version 12 can receive data from another Sophia Daemon or raw packet data. It can output data to another Sophia Daemon Version 12, OglNet Version 12 or MySQL. Sophia Daemon Version 12 runs in a passive real-time manner that allows it to be used on a SCADA network. Its network fingerprint is designed to be applicable to SCADA networks rather than general IT networks.

  20. Design Science Research toward Designing/Prototyping a Repeatable Model for Testing Location Management (LM) Algorithms for Wireless Networking

    ERIC Educational Resources Information Center

    Peacock, Christopher

    2012-01-01

    The purpose of this research effort was to develop a model that provides repeatable Location Management (LM) testing using a network simulation tool, QualNet version 5.1 (2011). The model will provide current and future protocol developers a framework to simulate stable protocol environments for development. This study used the Design Science…

  1. Assortative model for social networks

    NASA Astrophysics Data System (ADS)

    Catanzaro, Michele; Caldarelli, Guido; Pietronero, Luciano

    2004-09-01

    In this Brief Report we present a version of a network growth model, generalized in order to describe the behavior of social networks. The case study considered is the preprint archive at cul.arxiv.org. Each node corresponds to a scientist, and a link is present whenever two authors wrote a paper together. This graph is a nice example of a degree-assortative network, that is to say, a network where sites with similar degree are connected to each other. The model presented is one of the few able to reproduce such behavior, giving some insight into the microscopic dynamics at the basis of the graph structure.

  2. Social networking addiction, attachment style, and validation of the Italian version of the Bergen Social Media Addiction Scale

    PubMed Central

    Monacis, Lucia; de Palo, Valeria; Griffiths, Mark D.; Sinatra, Maria

    2017-01-01

    Aim: Research into social networking addiction has greatly increased over the last decade. However, the number of validated instruments assessing addiction to social networking sites (SNSs) remains few, and none have been validated in the Italian language. Consequently, this study tested the psychometric properties of the Italian version of the Bergen Social Media Addiction Scale (BSMAS), as well as providing empirical data concerning the relationship between attachment styles and SNS addiction. Methods: A total of 769 participants were recruited to this study. Confirmatory factor analysis (CFA) and multigroup analyses were applied to assess construct validity of the Italian version of the BSMAS. Reliability analyses comprised the average variance extracted, the standard error of measurement, and the factor determinacy coefficient. Results: Indices obtained from the CFA showed the Italian version of the BSMAS to have an excellent fit of the model to the data, thus confirming the single-factor structure of the instrument. Measurement invariance was established at configural, metric, and strict invariances across age groups, and at configural and metric levels across gender groups. Internal consistency was supported by several indicators. In addition, the theoretical associations between SNS addiction and attachment styles were generally supported. Conclusion: This study provides evidence that the Italian version of the BSMAS is a psychometrically robust tool that can be used in future Italian research into social networking addiction. PMID:28494648

  3. Social networking addiction, attachment style, and validation of the Italian version of the Bergen Social Media Addiction Scale.

    PubMed

    Monacis, Lucia; de Palo, Valeria; Griffiths, Mark D; Sinatra, Maria

    2017-06-01

    Aim: Research into social networking addiction has greatly increased over the last decade. However, the number of validated instruments assessing addiction to social networking sites (SNSs) remains few, and none have been validated in the Italian language. Consequently, this study tested the psychometric properties of the Italian version of the Bergen Social Media Addiction Scale (BSMAS), as well as providing empirical data concerning the relationship between attachment styles and SNS addiction. Methods: A total of 769 participants were recruited to this study. Confirmatory factor analysis (CFA) and multigroup analyses were applied to assess construct validity of the Italian version of the BSMAS. Reliability analyses comprised the average variance extracted, the standard error of measurement, and the factor determinacy coefficient. Results: Indices obtained from the CFA showed the Italian version of the BSMAS to have an excellent fit of the model to the data, thus confirming the single-factor structure of the instrument. Measurement invariance was established at configural, metric, and strict invariances across age groups, and at configural and metric levels across gender groups. Internal consistency was supported by several indicators. In addition, the theoretical associations between SNS addiction and attachment styles were generally supported. Conclusion: This study provides evidence that the Italian version of the BSMAS is a psychometrically robust tool that can be used in future Italian research into social networking addiction.

  4. NETS - A NEURAL NETWORK DEVELOPMENT TOOL, VERSION 3.0 (MACINTOSH VERSION)

    NASA Technical Reports Server (NTRS)

    Phillips, T. A.

    1994-01-01

    NETS, A Tool for the Development and Evaluation of Neural Networks, provides a simulation of Neural Network algorithms plus an environment for developing such algorithms. Neural Networks are a class of systems modeled after the human brain. Artificial Neural Networks are formed from hundreds or thousands of simulated neurons, connected to each other in a manner similar to brain neurons. Problems which involve pattern matching readily fit the class of problems which NETS is designed to solve. NETS uses the back propagation learning method for all of the networks which it creates. The nodes of a network are usually grouped together into clumps called layers. Generally, a network will have an input layer through which the various environment stimuli are presented to the network, and an output layer for determining the network's response. The number of nodes in these two layers is usually tied to some features of the problem being solved. Other layers, which form intermediate stops between the input and output layers, are called hidden layers. NETS allows the user to customize the patterns of connections between layers of a network. NETS also provides features for saving the weight values of a network during the learning process, which allows for more precise control over the learning process. NETS is an interpreter. Its method of execution is the familiar "read-evaluate-print" loop found in interpreted languages such as BASIC and LISP. The user is presented with a prompt which is the simulator's way of asking for input. After a command is issued, NETS will attempt to evaluate the command, which may produce more prompts requesting specific information or an error if the command is not understood. The typical process involved when using NETS consists of translating the problem into a format which uses input/output pairs, designing a network configuration for the problem, and finally training the network with input/output pairs until an acceptable error is reached. NETS allows the user to generate C code to implement the network loaded into the system. This permits the placement of networks as components, or subroutines, in other systems. In short, once a network performs satisfactorily, the Generate C Code option provides the means for creating a program separate from NETS to run the network. Other features: files may be stored in binary or ASCII format; multiple input propagation is permitted; bias values may be included; capability to scale data without writing scaling code; quick interactive testing of network from the main menu; and several options that allow the user to manipulate learning efficiency. NETS is written in ANSI standard C language to be machine independent. The Macintosh version (MSC-22108) includes code for both a graphical user interface version and a command line interface version. The machine independent version (MSC-21588) only includes code for the command line interface version of NETS 3.0. The Macintosh version requires a Macintosh II series computer and has been successfully implemented under System 7. Four executables are included on these diskettes, two for floating point operations and two for integer arithmetic. It requires Think C 5.0 to compile. A minimum of 1Mb of RAM is required for execution. Sample input files and executables for both the command line version and the Macintosh user interface version are provided on the distribution medium. The Macintosh version is available on a set of three 3.5 inch 800K Macintosh format diskettes. 
The machine independent version has been successfully implemented on an IBM PC series compatible running MS-DOS, a DEC VAX running VMS, a SunIPC running SunOS, and a CRAY Y-MP running UNICOS. Two executables for the IBM PC version are included on the MS-DOS distribution media, one compiled for floating point operations and one for integer arithmetic. The machine independent version is available on a set of three 5.25 inch 360K MS-DOS format diskettes (standard distribution medium) or a .25 inch streaming magnetic tape cartridge in UNIX tar format. NETS was developed in 1989 and updated in 1992. IBM PC is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation. SunIPC and SunOS are trademarks of Sun Microsystems, Inc. CRAY Y-MP and UNICOS are trademarks of Cray Research, Inc.
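
    The training loop NETS automates (present input/output pairs, back-propagate the error, repeat until the error is acceptable) reduces to something like the following minimal NumPy sketch of a one-hidden-layer back propagation network; it is a generic illustration, not NETS code.

```python
# Minimal one-hidden-layer back propagation network trained on input/output
# pairs (XOR) until the error drops below a threshold -- the same loop that
# NETS automates. Generic illustration only, not NETS code.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # input-layer patterns
Y = np.array([[0], [1], [1], [0]], dtype=float)               # desired outputs
W1, W2 = rng.normal(0, 1, (2, 4)), rng.normal(0, 1, (4, 1))   # input->hidden, hidden->output weights
b1, b2 = np.zeros(4), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 1.0

for epoch in range(20000):
    h = sigmoid(X @ W1 + b1)                # forward pass: hidden layer
    out = sigmoid(h @ W2 + b2)              # forward pass: output layer
    err = out - Y
    if np.mean(err ** 2) < 1e-3:            # "acceptable error" stopping rule
        break
    d_out = err * out * (1 - out)           # backprop: output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)      # backprop: hidden-layer delta
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print("epochs:", epoch, "outputs:", out.ravel().round(2))
```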

  5. NETS - A NEURAL NETWORK DEVELOPMENT TOOL, VERSION 3.0 (MACHINE INDEPENDENT VERSION)

    NASA Technical Reports Server (NTRS)

    Baffes, P. T.

    1994-01-01

    NETS, A Tool for the Development and Evaluation of Neural Networks, provides a simulation of Neural Network algorithms plus an environment for developing such algorithms. Neural Networks are a class of systems modeled after the human brain. Artificial Neural Networks are formed from hundreds or thousands of simulated neurons, connected to each other in a manner similar to brain neurons. Problems which involve pattern matching readily fit the class of problems which NETS is designed to solve. NETS uses the back propagation learning method for all of the networks which it creates. The nodes of a network are usually grouped together into clumps called layers. Generally, a network will have an input layer through which the various environment stimuli are presented to the network, and an output layer for determining the network's response. The number of nodes in these two layers is usually tied to some features of the problem being solved. Other layers, which form intermediate stops between the input and output layers, are called hidden layers. NETS allows the user to customize the patterns of connections between layers of a network. NETS also provides features for saving the weight values of a network during the learning process, which allows for more precise control over the learning process. NETS is an interpreter. Its method of execution is the familiar "read-evaluate-print" loop found in interpreted languages such as BASIC and LISP. The user is presented with a prompt which is the simulator's way of asking for input. After a command is issued, NETS will attempt to evaluate the command, which may produce more prompts requesting specific information or an error if the command is not understood. The typical process involved when using NETS consists of translating the problem into a format which uses input/output pairs, designing a network configuration for the problem, and finally training the network with input/output pairs until an acceptable error is reached. NETS allows the user to generate C code to implement the network loaded into the system. This permits the placement of networks as components, or subroutines, in other systems. In short, once a network performs satisfactorily, the Generate C Code option provides the means for creating a program separate from NETS to run the network. Other features: files may be stored in binary or ASCII format; multiple input propagation is permitted; bias values may be included; capability to scale data without writing scaling code; quick interactive testing of network from the main menu; and several options that allow the user to manipulate learning efficiency. NETS is written in ANSI standard C language to be machine independent. The Macintosh version (MSC-22108) includes code for both a graphical user interface version and a command line interface version. The machine independent version (MSC-21588) only includes code for the command line interface version of NETS 3.0. The Macintosh version requires a Macintosh II series computer and has been successfully implemented under System 7. Four executables are included on these diskettes, two for floating point operations and two for integer arithmetic. It requires Think C 5.0 to compile. A minimum of 1Mb of RAM is required for execution. Sample input files and executables for both the command line version and the Macintosh user interface version are provided on the distribution medium. The Macintosh version is available on a set of three 3.5 inch 800K Macintosh format diskettes. 
The machine independent version has been successfully implemented on an IBM PC series compatible running MS-DOS, a DEC VAX running VMS, a SunIPC running SunOS, and a CRAY Y-MP running UNICOS. Two executables for the IBM PC version are included on the MS-DOS distribution media, one compiled for floating point operations and one for integer arithmetic. The machine independent version is available on a set of three 5.25 inch 360K MS-DOS format diskettes (standard distribution medium) or a .25 inch streaming magnetic tape cartridge in UNIX tar format. NETS was developed in 1989 and updated in 1992. IBM PC is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation. SunIPC and SunOS are trademarks of Sun Microsystems, Inc. CRAY Y-MP and UNICOS are trademarks of Cray Research, Inc.

  6. Design and Development of Basic Physical Layer WiMAX Network Simulation Models

    DTIC Science & Technology

    2009-01-01

    Wide Web. The third software version was developed during the period of 22 August to 4 November 2008. The software version developed during the ... researched on the Web. The mathematics of some fundamental concepts such as Fourier transforms and convolutional coding techniques were also reviewed ... Mathworks Matlab users' website. A simulation model was found, entitled "Estudio y Simulación de la capa física de la norma 802.16 (Sistema WiMAX)" (Study and Simulation of the Physical Layer of the 802.16 Standard (WiMAX System)), developed

  7. Insertion algorithms for network model database management systems

    NASA Astrophysics Data System (ADS)

    Mamadolimov, Abdurashid; Khikmat, Saburov

    2017-12-01

    The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency requirement for managing algorithms is to minimize the number of query comparisons. We consider the updating (insertion) operation for network model database management systems and develop a new sequential algorithm for this operation. We also suggest a distributed version of the algorithm.

  8. The importance of including dynamic social networks when modeling epidemics of airborne infections: does increasing complexity increase accuracy?

    PubMed

    Blower, Sally; Go, Myong-Hyun

    2011-07-19

    Mathematical models are useful tools for understanding and predicting epidemics. A recent innovative modeling study by Stehle and colleagues addressed the issue of how complex models need to be to ensure accuracy. The authors collected data on face-to-face contacts during a two-day conference. They then constructed a series of dynamic social contact networks, each of which was used to model an epidemic generated by a fast-spreading airborne pathogen. Intriguingly, Stehle and colleagues found that increasing model complexity did not always increase accuracy. Specifically, the most detailed contact network and a simplified version of this network generated very similar results. These results are extremely interesting and require further exploration to determine their generalizability.

  9. Forecasting SPEI and SPI Drought Indices Using the Integrated Artificial Neural Networks

    PubMed Central

    Maca, Petr; Pech, Pavel

    2016-01-01

    The presented paper compares forecasts of drought indices based on two different artificial neural network models. The first model is a feedforward multilayer perceptron, sANN, and the second is the integrated neural network model, hANN. The analyzed drought indices are the standardized precipitation index (SPI) and the standardized precipitation evaporation index (SPEI), and were derived for the period 1948–2002 on two US catchments. The meteorological and hydrological data were obtained from the MOPEX experiment. The training of both neural network models was performed with JADE, an adaptive version of differential evolution. The comparison of models was based on six model performance measures. The results of the drought index forecasts, explained by the values of four model performance indices, show that the integrated neural network model was superior to the feedforward multilayer perceptron with one hidden layer of neurons. PMID:26880875

  10. Forecasting SPEI and SPI Drought Indices Using the Integrated Artificial Neural Networks.

    PubMed

    Maca, Petr; Pech, Pavel

    2016-01-01

    The presented paper compares forecasts of drought indices based on two different artificial neural network models. The first model is a feedforward multilayer perceptron, sANN, and the second is the integrated neural network model, hANN. The analyzed drought indices are the standardized precipitation index (SPI) and the standardized precipitation evaporation index (SPEI), and were derived for the period 1948-2002 on two US catchments. The meteorological and hydrological data were obtained from the MOPEX experiment. The training of both neural network models was performed with JADE, an adaptive version of differential evolution. The comparison of models was based on six model performance measures. The results of the drought index forecasts, explained by the values of four model performance indices, show that the integrated neural network model was superior to the feedforward multilayer perceptron with one hidden layer of neurons.
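
    For context, a standardized precipitation index is computed by fitting a gamma distribution to accumulated precipitation and mapping the fitted CDF through the inverse standard normal CDF; a minimal SciPy sketch on synthetic data, with an illustrative 3-month accumulation window, follows.

```python
# Sketch of computing a standardized precipitation index (SPI): accumulate
# precipitation over a window, fit a gamma distribution, then transform the
# fitted CDF values through the inverse standard normal CDF.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
monthly_precip = rng.gamma(shape=2.0, scale=40.0, size=600)   # synthetic monthly series (mm)
window = 3                                                     # 3-month accumulation (SPI-3)
accum = np.convolve(monthly_precip, np.ones(window), mode="valid")

shape, loc, scale = stats.gamma.fit(accum, floc=0)             # fit gamma with location fixed at 0
cdf = stats.gamma.cdf(accum, shape, loc=loc, scale=scale)
spi = stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))             # equiprobability transform

print("SPI-3 mean ~ 0, std ~ 1:", round(spi.mean(), 2), round(spi.std(), 2))
```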

  11. DSN Array Simulator

    NASA Technical Reports Server (NTRS)

    Tikidjian, Raffi; Mackey, Ryan

    2008-01-01

    The DSN Array Simulator (wherein 'DSN' signifies NASA's Deep Space Network) is an updated version of software previously denoted the DSN Receive Array Technology Assessment Simulation. This software is used for computational modeling of a proposed DSN facility comprising user-defined arrays of antennas and transmitting and receiving equipment for microwave communication with spacecraft on interplanetary missions. The simulation includes variations in the spacecraft tracked and changes in communication demand for up to several decades of future operation. Such modeling is performed to estimate facility performance, evaluate requirements that govern facility design, and evaluate proposed improvements in hardware and/or software. The updated version of this software affords enhanced capability for characterizing facility performance against user-defined mission sets. The software includes a Monte Carlo simulation component that enables rapid generation of key mission-set metrics (e.g., numbers of links, data rates, and data volumes), and statistical distributions thereof as functions of time. The updated version also offers expanded capability for mixed-asset network modeling, for example running scenarios that involve user-definable mixtures of antennas having different diameters (in contradistinction to a fixed number of antennas having the same fixed diameter). The improved version also affords greater simulation fidelity, sufficient for validation by comparison with actual DSN operations and analytically predictable performance metrics.

  12. mizuRoute version 1: A river network routing tool for a continental domain water resources applications

    USGS Publications Warehouse

    Mizukami, Naoki; Clark, Martyn P.; Sampson, Kevin; Nijssen, Bart; Mao, Yixin; McMillan, Hilary; Viger, Roland; Markstrom, Steven; Hay, Lauren E.; Woods, Ross; Arnold, Jeffrey R.; Brekke, Levi D.

    2016-01-01

    This paper describes the first version of a stand-alone runoff routing tool, mizuRoute. The mizuRoute tool post-processes runoff outputs from any distributed hydrologic model or land surface model to produce spatially distributed streamflow at various spatial scales from headwater basins to continental-wide river systems. The tool can utilize both traditional grid-based river network and vector-based river network data. Both types of river network include river segment lines and the associated drainage basin polygons, but the vector-based river network can represent finer-scale river lines than the grid-based network. Streamflow estimates at any desired location in the river network can be easily extracted from the output of mizuRoute. The routing process is simulated as two separate steps. First, hillslope routing is performed with a gamma-distribution-based unit-hydrograph to transport runoff from a hillslope to a catchment outlet. The second step is river channel routing, which is performed with one of two routing scheme options: (1) a kinematic wave tracking (KWT) routing procedure; and (2) an impulse response function – unit-hydrograph (IRF-UH) routing procedure. The mizuRoute tool also includes scripts (python, NetCDF operators) to pre-process spatial river network data. This paper demonstrates mizuRoute's capabilities to produce spatially distributed streamflow simulations based on river networks from the United States Geological Survey (USGS) Geospatial Fabric (GF) data set in which over 54 000 river segments and their contributing areas are mapped across the contiguous United States (CONUS). A brief analysis of model parameter sensitivity is also provided. The mizuRoute tool can assist model-based water resources assessments including studies of the impacts of climate change on streamflow.
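
    The hillslope routing step described above (a gamma-distribution-based unit hydrograph convolved with model runoff) can be sketched as follows; the shape and timescale parameters are placeholders, not mizuRoute defaults.

```python
# Sketch of gamma-unit-hydrograph hillslope routing: convolve a runoff series
# with a unit hydrograph given by a gamma distribution (discretized and
# normalized). Shape/timescale values are illustrative, not mizuRoute defaults.
import numpy as np
from scipy import stats

shape, timescale = 2.5, 6.0                     # hypothetical gamma parameters (timescale in hours)
t = np.arange(0, 73)                            # 72-hour unit hydrograph, hourly steps
uh = stats.gamma.pdf(t, a=shape, scale=timescale)
uh /= uh.sum()                                  # ensure unit volume after discretization

runoff = np.zeros(240)                          # hourly runoff from a hydrologic model (mm/h)
runoff[10:16] = [2, 5, 8, 6, 3, 1]              # a synthetic storm pulse

streamflow = np.convolve(runoff, uh)[: runoff.size]   # routed flow at the catchment outlet
print("peak runoff:", runoff.max(), "routed peak:", round(streamflow.max(), 2),
      "lag (h):", int(streamflow.argmax() - runoff.argmax()))
```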

  13. DFN Modeling for the Safety Case of the Final Disposal of Spent Nuclear Fuel in Olkiluoto, Finland

    NASA Astrophysics Data System (ADS)

    Vanhanarkaus, O.

    2017-12-01

    Olkiluoto Island is a site in SW Finland chosen to host a deep geological repository for high-level nuclear waste generated by nuclear power plants of power companies TVO and Fortum. Posiva, a nuclear waste management organization, submitted a construction license application for the Olkiluoto repository to the Finnish government in 2012. A key component of the license application was an integrated geological, hydrological and biological description of the Olkiluoto site. After the safety case was reviewed in 2015 by the Radiation and Nuclear Safety Authority in Finland, Posiva was granted a construction license. Posiva is now preparing an updated safety case for the operating license application to be submitted in 2022, and an update of the discrete fracture network (DFN) model used for site characterization is part of that. The first step in describing and modelling the network of fractures in the Olkiluoto bedrock was DFN model version 1 (2009), which presented an initial understanding of the relationships between rock fracturing and geology at the site and identified the important primary controls on fracturing. DFN model version 2 (2012) utilized new subsurface data from additional drillholes, tunnels and excavated underground facilities in ONKALO to better understand the spatial variability of the geological controls on geological and hydrogeological fracture properties. DFN version 2 connected fracture geometric and hydraulic properties to distinct tectonic domains and to larger-scale hydraulically conductive fault zones. In the version 2 DFN model, geological and hydrogeological models were developed along separate parallel tracks. The version 3 (2017) DFN model for the Olkiluoto site integrates geological and hydrogeological elements into a single consistent model used for geological, rock mechanical, hydrogeological and hydrogeochemical studies. New elements in the version 3 DFN model include a stochastic description of fractures within Brittle Fault Zones (BFZ), integration of geological and hydrostructural interpretations of BFZ, greater use of 3D geological models to better constrain the spatial variability of fracturing, and fractures using hydromechanical principles to account for material behavior and in-situ stresses.

  14. Bayesian Network Webserver: a comprehensive tool for biological network modeling.

    PubMed

    Ziebarth, Jesse D; Bhattacharya, Anindya; Cui, Yan

    2013-11-01

    The Bayesian Network Webserver (BNW) is a platform for comprehensive network modeling of systems genetics and other biological datasets. It allows users to quickly and seamlessly upload a dataset, learn the structure of the network model that best explains the data and use the model to understand relationships between network variables. Many datasets, including those used to create genetic network models, contain both discrete (e.g. genotype) and continuous (e.g. gene expression traits) variables, and BNW allows for modeling hybrid datasets. Users of BNW can incorporate prior knowledge during structure learning through an easy-to-use structural constraint interface. After structure learning, users are immediately presented with an interactive network model, which can be used to make testable hypotheses about network relationships. BNW, including a downloadable structure learning package, is available at http://compbio.uthsc.edu/BNW. (The BNW interface for adding structural constraints uses HTML5 features that are not supported by the current version of Internet Explorer. We suggest using other browsers (e.g. Google Chrome or Mozilla Firefox) when accessing BNW). ycui2@uthsc.edu. Supplementary data are available at Bioinformatics online.

  15. Connectionist Models: Proceedings of the Summer School Held in San Diego, California on 1990

    DTIC Science & Technology

    1990-01-01

    modes: the control network continues activation spreading based on the actual inputs instead of ... There is the sequential version and the parallel version ... 2. Execute all motoric actions based on activations of units in A. Update the ... The parallel version of the algorithm is local in time ... movements that help to recognize an entering person ... actions like 'move focus left', 'rotate focus' are based on the activations of the C's output

  16. Mathematical analysis techniques for modeling the space network activities

    NASA Technical Reports Server (NTRS)

    Foster, Lisa M.

    1992-01-01

    The objective of the present work was to explore and identify mathematical analysis techniques, and in particular, the use of linear programming. This topic was then applied to the Tracking and Data Relay Satellite System (TDRSS) in order to understand the space network better. Finally, a small scale version of the system was modeled, variables were identified, data was gathered, and comparisons were made between actual and theoretical data.

  17. Evaluation of the Surface PM2.5 in Version 1 of the NASA MERRA Aerosol Reanalysis over the United States

    NASA Technical Reports Server (NTRS)

    Buchard, V.; da Silva, A. M.; Randles, C. A.; Colarco, P.; Ferrare, R.; Hair, J.; Hostetler, C.; Tackett, J.; Winker, D.

    2015-01-01

    We use surface fine particulate matter (PM2.5) measurements collected by the United States Environmental Protection Agency (US EPA) and the Interagency Monitoring of Protected Visual Environments (IMPROVE) networks as independent validation for Version 1 of the Modern Era Retrospective analysis for Research and Applications Aerosol Reanalysis (MERRAero) developed by the Global Modeling Assimilation Office (GMAO). MERRAero is based on a version of the GEOS-5 model that is radiatively coupled to the Goddard Chemistry, Aerosol, Radiation, and Transport (GOCART) aerosol module and includes assimilation of bias-corrected Aerosol Optical Depth (AOD) from Moderate Resolution Imaging Spectroradiometer (MODIS) sensors on both the Terra and Aqua satellites. By combining the spatial and temporal coverage of GEOS-5 with observational constraints on AOD, MERRAero has the potential to provide improved estimates of PM2.5 compared to the model alone and with greater coverage than available observations. Importantly, assimilation of AOD data constrains the total column aerosol mass in MERRAero subject to assumptions about optical properties for each of the species represented in GOCART. However, single visible wavelength AOD data does not contain sufficient information content to correct errors in either aerosol vertical placement or composition, critical elements for a proper characterization of surface PM2.5. Despite this, we find that the data-assimilation-equipped version of GEOS-5 better represents observed PM2.5 between 2003 and 2012 compared to the same version of the model without AOD assimilation. Compared to measurements from the EPA-AQS network, MERRAero shows better PM2.5 agreement with the IMPROVE network measurements, which are composed essentially of rural stations. Regardless of the data network, MERRAero PM2.5 values are closer to observations during the summer, while larger discrepancies are observed during the winter. Comparing MERRAero to PM2.5 data collected by the Chemical Speciation Network (CSN) offers greater insight into the species MERRAero predicts well and those for which there are biases relative to the EPA observations. Analysis of this speciated data indicates that the lack of nitrate emissions in MERRAero and an underestimation of carbonaceous emissions in the Western US explain much of the reanalysis bias during the winter. To further understand discrepancies between the reanalysis and observations, we use complementary data to assess two important aspects of MERRAero that are of relevance to the diagnosis of PM2.5, in particular AOD and vertical structure.

  18. Evaluation of the surface PM2.5 in Version 1 of the NASA MERRA Aerosol Reanalysis over the United States

    NASA Astrophysics Data System (ADS)

    Buchard, V.; da Silva, A. M.; Randles, C. A.; Colarco, P.; Ferrare, R.; Hair, J.; Hostetler, C.; Tackett, J.; Winker, D.

    2016-01-01

    We use surface fine particulate matter (PM2.5) measurements collected by the United States Environmental Protection Agency (US EPA) and the Interagency Monitoring of Protected Visual Environments (IMPROVE) networks as independent validation for Version 1 of the Modern Era Retrospective analysis for Research and Applications Aerosol Reanalysis (MERRAero) developed by the Global Modeling Assimilation Office (GMAO). MERRAero is based on a version of the GEOS-5 model that is radiatively coupled to the Goddard Chemistry, Aerosol, Radiation, and Transport (GOCART) aerosol module and includes assimilation of bias-corrected Aerosol Optical Depth (AOD) from Moderate Resolution Imaging Spectroradiometer (MODIS) sensors on both the Terra and Aqua satellites. By combining the spatial and temporal coverage of GEOS-5 with observational constraints on AOD, MERRAero has the potential to provide improved estimates of PM2.5 compared to the model alone and with greater coverage than available observations. Importantly, assimilation of AOD data constrains the total column aerosol mass in MERRAero subject to assumptions about optical properties for each of the species represented in GOCART. However, single visible wavelength AOD data does not contain sufficient information content to correct errors in either aerosol vertical placement or composition, critical elements for a proper characterization of surface PM2.5. Despite this, we find that the data-assimilation-equipped version of GEOS-5 better represents observed PM2.5 between 2003 and 2012 compared to the same version of the model without AOD assimilation. Compared to measurements from the EPA-AQS network, MERRAero shows better PM2.5 agreement with the IMPROVE network measurements, which are composed essentially of rural stations. Regardless of the data network, MERRAero PM2.5 values are closer to observations during the summer, while larger discrepancies are observed during the winter. Comparing MERRAero to PM2.5 data collected by the Chemical Speciation Network (CSN) offers greater insight into the species MERRAero predicts well and those for which there are biases relative to the EPA observations. Analysis of this speciated data indicates that the lack of nitrate emissions in MERRAero and an underestimation of carbonaceous emissions in the Western US explain much of the reanalysis bias during the winter. To further understand discrepancies between the reanalysis and observations, we use complementary data to assess two important aspects of MERRAero that are of relevance to the diagnosis of PM2.5, in particular AOD and vertical structure.

  19. Robust criticality of an Ising model on rewired directed networks

    NASA Astrophysics Data System (ADS)

    Lipowski, Adam; Gontarek, Krzysztof; Lipowska, Dorota

    2015-06-01

    We show that preferential rewiring, which is supposed to mimic the behavior of financial agents, changes a directed-network Ising ferromagnet with a single critical point into a model with robust critical behavior. For the nonrewired random graph version, due to a constant number of out-links for each site, we write a simple mean-field-like equation describing the behavior of magnetization; we argue that it is exact and support the claim with extensive Monte Carlo simulations. For the rewired version, this equation is obeyed only at low temperatures. At higher temperatures, rewiring leads to strong heterogeneities, which apparently invalidates mean-field arguments and induces large fluctuations and divergent susceptibility. Such behavior is traced back to the formation of a relatively small core of agents that influence the entire system.
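
    The abstract does not reproduce the mean-field-like equation itself; as a hedged illustration only, a standard self-consistency relation for an Ising system in which every spin is influenced by a fixed number c of randomly chosen neighbors with coupling J at temperature T takes the form

      m \;=\; \tanh\!\left(\frac{J\,c\,m}{k_{B}T}\right),

    with the rewired version expected to deviate from any such single-equation description at higher temperatures, as the abstract notes. The symbols c, J, and k_B T here are generic placeholders, not quantities taken from the paper.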

  20. Research in computer science

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1984-01-01

    The research efforts of University of Virginia students under a NASA sponsored program are summarized and the status of the program is reported. The research includes: testing method evaluations for N version programming; a representation scheme for modeling three dimensional objects; fault tolerant protocols for real time local area networks; performance investigation of Cyber network; XFEM implementation; and vectorizing incomplete Cholesky conjugate gradients.

  1. Status of the NASA Micro Pulse Lidar Network (MPLNET): overview of the network and future plans, new version 3 data products, and the polarized MPL

    NASA Astrophysics Data System (ADS)

    Welton, Ellsworth J.; Stewart, Sebastian A.; Lewis, Jasper R.; Belcher, Larry R.; Campbell, James R.; Lolli, Simone

    2018-04-01

    The NASA Micro Pulse Lidar Network (MPLNET) is a global federated network of Micro-Pulse Lidars (MPL) co-located with the NASA Aerosol Robotic Network (AERONET). MPLNET began in 2000, and there are currently 17 long-term sites and numerous field campaigns, with more sites planned. We have developed a new Version 3 processing system and are deploying polarized MPLs across the network. Here we provide an overview of Version 3, the polarized MPL, and current and future plans.

  2. Application of artificial neural networks to gaming

    NASA Astrophysics Data System (ADS)

    Baba, Norio; Kita, Tomio; Oda, Kazuhiro

    1995-04-01

    Recently, neural network technology has been applied to various actual problems. It has succeeded in producing a large number of intelligent systems. In this article, we suggest that it could be applied to the field of gaming. In particular, we suggest that the neural network model could be used to mimic players' characters. Several computer simulation results using a computer gaming system which is a modified version of the COMMONS GAME confirm our idea.

  3. Version 6 of the consensus yeast metabolic network refines biochemical coverage and improves model performance

    PubMed Central

    Heavner, Benjamin D.; Smallbone, Kieran; Price, Nathan D.; Walker, Larry P.

    2013-01-01

    Updates to maintain a state-of-the-art reconstruction of the yeast metabolic network are essential to reflect our understanding of yeast metabolism and functional organization, to eliminate any inaccuracies identified in earlier iterations, to improve predictive accuracy and to continue to expand into novel subsystems to extend the comprehensiveness of the model. Here, we present version 6 of the consensus yeast metabolic network (Yeast 6) as an update to the community effort to computationally reconstruct the genome-scale metabolic network of Saccharomyces cerevisiae S288c. Yeast 6 comprises 1458 metabolites participating in 1888 reactions, which are annotated with 900 yeast genes encoding the catalyzing enzymes. Compared with Yeast 5, Yeast 6 demonstrates improved sensitivity, specificity and positive and negative predictive values for predicting gene essentiality in glucose-limited aerobic conditions when analyzed with flux balance analysis. Additionally, Yeast 6 improves the accuracy of predicting the likelihood that a mutation will cause auxotrophy. The network reconstruction is available as a Systems Biology Markup Language (SBML) file enriched with Minimum Information Requested in the Annotation of Biochemical Models (MIRIAM)-compliant annotations. Small molecules and macromolecules in the network are referenced to authoritative databases such as UniProt or ChEBI. Molecules and reactions are also annotated with appropriate publications that contain supporting evidence. Yeast 6 is freely available at http://yeast.sf.net/ as three separate SBML files: a model using the SBML level 3 Flux Balance Constraint package, a model compatible with the MATLAB® COBRA Toolbox for backward compatibility and a reconstruction containing only reactions for which there is experimental evidence (without the non-biological reactions necessary for simulating growth). Database URL: http://yeast.sf.net/ PMID:23935056
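
    Since the reconstruction is distributed as a COBRA-compatible SBML file, a minimal sketch of how one might load it and screen gene essentiality with flux balance analysis is given below; it assumes the COBRApy package, and the file name is hypothetical rather than taken from the release.

      import cobra
      from cobra.flux_analysis import single_gene_deletion

      # Load the consensus reconstruction (file name is hypothetical).
      model = cobra.io.read_sbml_model("yeast_6_cobra.xml")

      # Wild-type growth under the exchange bounds encoded in the model.
      wild_type = model.optimize()
      print("wild-type growth rate:", wild_type.objective_value)

      # Single-gene deletion screen: genes whose knockout drives growth to ~0
      # are predicted essential (cf. the essentiality comparison with Yeast 5).
      deletions = single_gene_deletion(model)
      essential = deletions[deletions["growth"].fillna(0.0) < 1e-6]
      print(len(essential), "genes predicted essential")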

  4. The North American Drought Atlas: Tree-Ring Reconstructions of Drought Variability for Climate Modeling and Assessment

    NASA Astrophysics Data System (ADS)

    Cook, E. R.

    2007-05-01

    The North American Drought Atlas describes a detailed reconstruction of drought variability from tree rings over most of North America for the past 500-1000 years. The first version of it, produced over three years ago, was based on a network of 835 tree-ring chronologies and a 286-point grid of instrumental Palmer Drought Severity Indices (PDSI). These gridded PDSI reconstructions have now been used in numerous published studies, ranging from modeling fire in the American West, to the impact of drought on palaeo-Indian societies, to the determination of the primary causes of drought over North America through climate modeling experiments. Some examples of these applications will be described to illustrate the scientific value of these large-scale reconstructions of drought. Since the development and free public release of Version 1 of the North American Drought Atlas (see http://iridl.ldeo.columbia.edu/SOURCES/.LDEO/.TRL/.NADA2004/.pdsi-atlas.html), great improvements have been made in the critical tree-ring network used to reconstruct PDSI at each grid point. This network has now been enlarged to 1743 annual tree-ring chronologies, which greatly improves the density of tree-ring records in certain parts of the grid, especially in Canada and Mexico. In addition, the number of tree-ring records that extend back before AD 1400 has been substantially increased. These developments justify the creation of Version 2 of the North American Drought Atlas. In this talk I will describe this new version of the drought atlas and some of its properties that make it a significant improvement over the previous version. The new product provides enhanced resolution of the spatial and temporal variability of prolonged drought, such as the late 16th century event that impacted regions of both Mexico and the United States. I will also argue for the North American Drought Atlas being used as a template for the development of large-scale drought reconstructions in other land areas of the Northern Hemisphere where sufficient tree-ring data exist. By doing so, the importance of this product to the modeling community will be significantly enhanced.

  5. Representing distributed cognition in complex systems: how a submarine returns to periscope depth.

    PubMed

    Stanton, Neville A

    2014-01-01

    This paper presents the Event Analysis of Systemic Teamwork (EAST) method as a means of modelling distributed cognition in systems. The method comprises three network models (i.e. task, social and information) and their combination. This method was applied to the interactions between the sound room and control room in a submarine, following the activities of returning the submarine to periscope depth. This paper demonstrates three main developments in EAST. First, building the network models directly, without reference to the intervening methods. Second, the application of analysis metrics to all three networks. Third, the combination of the aforementioned networks in different ways to gain a broader understanding of the distributed cognition. Analyses have shown that EAST can be used to gain both qualitative and quantitative insights into distributed cognition. Future research should focus on the analyses of network resilience and modelling alternative versions of a system.

  6. Chaotic itinerancy in the oscillator neural network without Lyapunov functions.

    PubMed

    Uchiyama, Satoki; Fujisaka, Hirokazu

    2004-09-01

    Chaotic itinerancy (CI), which is defined as an incessant spontaneous switching phenomenon among attractor ruins in deterministic dynamical systems without Lyapunov functions, is numerically studied in the case of an oscillator neural network model. The model is the pseudoinverse-matrix version of the previous model [S. Uchiyama and H. Fujisaka, Phys. Rev. E 65, 061912 (2002)] that was studied theoretically with the aid of statistical neurodynamics. It is found that CI in neural nets can be understood as the intermittent dynamics of weakly destabilized chaotic retrieval solutions. Copyright 2004 American Institute of Physics

  7. Oscillatory network with self-organized dynamical connections for synchronization-based image segmentation.

    PubMed

    Kuzmina, Margarita; Manykin, Eduard; Surina, Irina

    2004-01-01

    An oscillatory network of columnar architecture located in a 3D spatial lattice was recently designed by the authors as an oscillatory model of the brain's visual cortex. Each network oscillator is a relaxational neural oscillator with internal dynamics tunable by visual image characteristics - local brightness and elementary bar orientation. It can demonstrate either an activity state (stable undamped oscillations) or "silence" (quickly damped oscillations). Self-organized nonlocal dynamical connections between oscillators depend on oscillator activity levels and the orientations of cortical receptive fields. Network performance consists of a transfer into a state of clusterized synchronization. At the current stage, grey-level image segmentation tasks are carried out by a 2D oscillatory network obtained as a limit version of the source model. With the addition of network coupling-strength control, the reduced 2D network provides synchronization-based image segmentation. New results on the segmentation of brightness and texture images presented in the paper demonstrate accurate network performance and the informative visualization of segmentation results inherent in the model.

  8. Evaluation of snow modeling with Noah and Noah-MP land surface models in NCEP GFS/CFS system

    NASA Astrophysics Data System (ADS)

    Dong, J.; Ek, M. B.; Wei, H.; Meng, J.

    2017-12-01

    The land surface serves as the lower boundary forcing in the global forecast system (GFS) and the climate forecast system (CFS), simulating interactions between the land and the atmosphere. Understanding the underlying land model physics is key to improving weather and seasonal prediction skills. Upgrades in land model physics (e.g., the release of newer versions of a land model), different land initializations, changes in the parameterization schemes used in the land model (e.g., land physical parameterization options), and changes in how the land impact is handled (e.g., a physics ensemble approach) all require that climate prediction experiments be re-conducted to examine their impact. The current NASA LIS (version 7) integrates NOAA operational land surface and hydrological models (NCEP's Noah, versions 2.7.1 to 3.6, and the future Noah-MP), high-resolution satellite and observational data, and land DA tools. The newer versions of the Noah LSM used in operational models have a variety of enhancements compared to older versions, while Noah-MP allows for different physics parameterization options whose choice can have a large impact on the physical processes underlying seasonal predictions. These impacts need to be reexamined before implementation into NCEP operational systems. A set of offline numerical experiments driven by GFS forecast forcing has been conducted to evaluate snow modeling against daily Global Historical Climatology Network (GHCN) observations.

  9. A Very Large Area Network (VLAN) knowledge-base applied to space communication problems

    NASA Technical Reports Server (NTRS)

    Zander, Carol S.

    1988-01-01

    This paper first describes a hierarchical model for very large area networks (VLAN). Space communication problems whose solution could profit from the model are discussed, and then an enhanced version of this model incorporating the knowledge needed for the missile detection-destruction problem is presented. A satellite network or VLAN is a network which includes at least one satellite. Due to the complexity, a compromise between fully centralized and fully distributed network management has been adopted. Network nodes are assigned to a physically localized group, called a partition. Partitions consist of groups of cell nodes with one cell node acting as the organizer or master, called the Group Master (GM). Coordinating the group masters is a Partition Master (PM). Knowledge is also distributed hierarchically, existing in at least two nodes. Each satellite node has a back-up earth node. Knowledge must be distributed in such a way as to minimize information loss when a node fails. Thus the model is hierarchical both physically and informationally.

  10. Comparison of functional network connectivity for passive-listening and active-response narrative comprehension in adolescents.

    PubMed

    Wang, Yingying; Holland, Scott K

    2014-05-01

    Comprehension of narrative stories plays an important role in the development of language skills. In this study, we compared brain activity elicited by a passive-listening version and an active-response (AR) version of a narrative comprehension task by using independent component (IC) analysis on functional magnetic resonance imaging data from 21 adolescents (ages 14-18 years). Furthermore, we explored differences in functional network connectivity engaged by two versions of the task and investigated the relationship between the online response time and the strength of connectivity between each pair of ICs. Despite similar brain region involvements in auditory, temporoparietal, and frontoparietal language networks for both versions, the AR version engages some additional network elements including the left dorsolateral prefrontal, anterior cingulate, and sensorimotor networks. These additional involvements are likely associated with working memory and maintenance of attention, which can be attributed to the differences in cognitive strategic aspects of the two versions. We found a significant positive correlation between the online response time and the strength of connectivity between an IC in the left inferior frontal region and an IC in the sensorimotor region. An explanation for this finding is that a longer reaction time indicates a stronger connection between the frontal and sensorimotor networks, caused by increased activation in adolescents who require more effort to complete the task.

  11. Mars Digital Image Model 2.1 Control Network

    NASA Technical Reports Server (NTRS)

    Archinal, B. A.; Kirk, R. L.; Duxbury, T. C.; Lee, E. M.; Sucharski, R.; Cook, D.

    2003-01-01

    USGS is currently preparing a new version of its global Mars digital image mosaic, which will be known as MDIM 2.1. As part of this process we are completing a new photogrammetric solution of the global Mars control network. This is an improved version of the network established earlier by RAND and USGS personnel, as partially described previously. MDIM 2.1 will have many improvements over earlier Viking Orbiter (VO) global mosaics. Geometrically, it will be an orthoimage product, draped on Mars Orbiter Laser Altimeter (MOLA) derived topography, thus accounting properly for the commonly oblique VO imagery. Through the network being described here it will be tied to the newly defined IAU/IAG 2000 Mars coordinate system via ties to MOLA data. Thus, MDIM 2.1 will provide complete global orthorectified imagery coverage of Mars at the 1/256-degree resolution of MDIM 2.0, and be compatible with MOLA and other products produced in the current coordinate system.

  12. Application-driven strategies for efficient transfer of medical images over very high speed networks

    NASA Astrophysics Data System (ADS)

    Alsafadi, Yasser H.; McNeill, Kevin M.; Martinez, Ralph

    1993-09-01

    The American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) formed the ACR-NEMA committee in 1982 to develop a standard enabling equipment from different vendors to communicate and participate in a picture archiving and communications system (PACS). The standard focused mostly on the interconnectivity issues and communication needs of PACS. It was patterned after the International Standards Organization Open Systems Interconnection (ISO/OSI) reference model. Three versions of the standard have appeared, evolving from a simple point-to-point specification of the connection between two medical devices to a complex standard for a networked environment. However, fast changes in network software and hardware technologies make it difficult for the standard to keep pace. This paper compares two versions of the ACR-NEMA standard and then describes a system that is used at the University of Arizona Intensive Care Unit. In this system, the application specifies the interface to network services and the grade of service required. These provisions are suggested in order to make the application independent of evolving network technology and to support true open systems.

  13. Influence of Network Model Detail on Estimated Health Effects of Drinking Water Contamination Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Michael J.; Janke, Robert

    Network model detail can influence the accuracy of results from analyses of water distribution systems. Some previous work has shown the limitations of skeletonized network models when considering water quality and hydraulic effects. Loss of model detail is potentially less important for aggregated effects such as the systemwide health effects associated with a contamination event, but has received limited attention. The influence of model detail on such effects is examined here by comparing results obtained for contamination events using three large network models and several skeletonized versions of the models. Loss of model detail decreases the accuracy of estimated aggregated adverse effects related to contamination events. It has the potential to have a large negative influence on the results of consequence assessments and the design of contamination warning systems. But, the adverse influence on analysis results can be minimized by restricting attention to high percentile effects (i.e., 95th percentile or higher).

  14. Influence of Network Model Detail on Estimated Health Effects of Drinking Water Contamination Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Michael J.; Janke, Robert

    Network model detail can influence the accuracy of results from analyses of water distribution systems. Previous work has shown the limitations of skeletonized network models when considering water quality and hydraulic effects. Loss of model detail is potentially less important for aggregated effects such as the systemwide health effects associated with a contamination event, but has received limited attention. The influence of model detail on such effects is examined here by comparing results obtained for contamination events using three large network models and several skeletonized versions of the models. Loss of model detail decreases the accuracy of estimated aggregated adverse effects related to contamination events. It has the potential to have a large negative influence on the results of consequence assessments and the design of contamination warning systems. However, the adverse influence on analysis results can be minimized by restricting attention to high percentile effects (i.e., 95th percentile or higher).

  15. Influence of Network Model Detail on Estimated Health Effects of Drinking Water Contamination Events

    DOE PAGES

    Davis, Michael J.; Janke, Robert

    2015-01-01

    Network model detail can influence the accuracy of results from analyses of water distribution systems. Some previous work has shown the limitations of skeletonized network models when considering water quality and hydraulic effects. Loss of model detail is potentially less important for aggregated effects such as the systemwide health effects associated with a contamination event, but has received limited attention. The influence of model detail on such effects is examined here by comparing results obtained for contamination events using three large network models and several skeletonized versions of the models. Loss of model detail decreases the accuracy of estimated aggregated adverse effects related to contamination events. It has the potential to have a large negative influence on the results of consequence assessments and the design of contamination warning systems. But, the adverse influence on analysis results can be minimized by restricting attention to high percentile effects (i.e., 95th percentile or higher).
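
    As a hedged toy illustration of the aggregation step referred to above (the numbers are synthetic and carry no relation to the papers' results), mean and high-percentile summaries of per-event effects can be computed as follows, with the 95th percentile being the statistic the authors recommend focusing on:

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic per-event effect estimates (e.g., people exposed) for the same
      # scenario ensemble run through a full and a skeletonized network model.
      full_model = rng.lognormal(mean=7.0, sigma=1.0, size=1000)
      skeletonized = np.clip(full_model * rng.normal(loc=0.8, scale=0.3, size=1000),
                             0.0, None)

      for label, effects in (("full", full_model), ("skeletonized", skeletonized)):
          print(label,
                "mean:", round(float(np.mean(effects))),
                "95th percentile:", round(float(np.percentile(effects, 95))))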

  16. Rochester Connectionist Papers. 1979-1985

    DTIC Science & Technology

    1985-12-01

    updated and improved version of the thesis account of recent neurolinguistic data. Fanty, M., "Context-free parsing in connectionist networks," TR 174 ... April 1982. Our first large program in the connectionist paradigm. It simulates a multi-layer network for recognizing line drawings of Origami figures... The program successfully deals with noise and simple occlusion, and the thesis incorporates many key ideas on designing and running large models. Small

  17. IEEE 802.15.4 MAC with GTS transmission for heterogeneous devices with application to wheelchair body-area sensor networks.

    PubMed

    Shrestha, Bharat; Hossain, Ekram; Camorlinga, Sergio

    2011-09-01

    In wireless personal area networks, such as wireless body-area sensor networks, stations or devices have different bandwidth requirements and thus create heterogeneous traffic. For such networks, the IEEE 802.15.4 medium access control (MAC) can be used in the beacon-enabled mode, which supports guaranteed time slot (GTS) allocation for time-critical data transmissions. This paper presents a general discrete-time Markov chain model for IEEE 802.15.4-based networks that jointly accounts for slotted carrier sense multiple access with collision avoidance and GTS transmission in a heterogeneous traffic scenario under non-saturated conditions. For this purpose, the standard GTS allocation scheme is modified. For each non-identical device, the Markov model is solved and the average service time and the service utilization factor are analyzed in the non-saturated mode. The analysis is validated by simulations using network simulator version 2.33. Also, the model is enhanced with a wireless propagation model and the performance of the MAC is evaluated in a wheelchair body-area sensor network scenario.
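
    The per-device chain described above is ultimately solved for its stationary behavior; the sketch below shows the generic numerical step (solving pi = pi P) with an invented three-state chain standing in for the MAC states, and is not the paper's actual model.

      import numpy as np

      # Hypothetical per-device transition matrix (rows sum to 1); the three states
      # loosely stand for idle, CSMA-CA backoff, and GTS transmission.
      P = np.array([
          [0.90, 0.08, 0.02],
          [0.50, 0.40, 0.10],
          [0.60, 0.00, 0.40],
      ])

      # Stationary distribution: the left eigenvector of P for eigenvalue 1,
      # normalized so that its entries sum to one.
      eigvals, eigvecs = np.linalg.eig(P.T)
      pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
      pi = pi / pi.sum()
      print("stationary state occupancy:", np.round(pi, 3))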

  18. Balancing building and maintenance costs in growing transport networks

    NASA Astrophysics Data System (ADS)

    Bottinelli, Arianna; Louf, Rémi; Gherardi, Marco

    2017-09-01

    The costs associated with the length of links impose unavoidable constraints on the growth of natural and artificial transport networks. When future network developments cannot be predicted, the costs of building and maintaining connections cannot be minimized simultaneously, requiring competing optimization mechanisms. Here, we study a one-parameter nonequilibrium model driven by an optimization functional, defined as the convex combination of building cost and maintenance cost. By varying the coefficient of the combination, the model interpolates between global and local length minimization, i.e., between minimum spanning trees and a local version known as dynamical minimum spanning trees. We show that cost balance within this ensemble of dynamical networks is a sufficient ingredient for the emergence of tradeoffs between the network's total length and transport efficiency, and of optimal strategies of construction. At the transition between two qualitatively different regimes, the dynamics builds up power-law distributed waiting times between global rearrangements, indicating a point of nonoptimality. Finally, we use our model as a framework to analyze empirical ant trail networks, showing its relevance as a null model for cost-constrained network formation.
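
    The two endpoints of the interpolation named in the abstract - a globally optimal minimum spanning tree versus purely local, nearest-neighbor attachment of each arriving node - can be contrasted with a short sketch; this illustrates only the endpoints, not the authors' one-parameter functional, and the point set is random.

      import math
      import random
      import networkx as nx

      random.seed(1)
      points = [(random.random(), random.random()) for _ in range(60)]

      # Global endpoint: minimum spanning tree over the full point set.
      G = nx.complete_graph(len(points))
      for i, j in G.edges:
          G[i][j]["weight"] = math.dist(points[i], points[j])
      mst_length = nx.minimum_spanning_tree(G).size(weight="weight")

      # Local endpoint ("dynamical" growth): each arriving node attaches to the
      # nearest node already present in the network.
      greedy_length = sum(
          min(math.dist(points[k], points[j]) for j in range(k))
          for k in range(1, len(points)))

      print(f"MST total length: {mst_length:.3f}")
      print(f"greedy growth total length: {greedy_length:.3f}")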

  19. Equation-based model for the stock market

    NASA Astrophysics Data System (ADS)

    Xavier, Paloma O. C.; Atman, A. P. F.; de Magalhães, A. R. Bosco

    2017-09-01

    We propose a stock market model, investigated in the forms of difference and differential equations, whose variables correspond to the demand or supply of each agent and to the price. In the model, agents are driven by the behavior of their trust contact network as well as by fundamental analysis. By means of the deterministic version of the model, the connection between such driving mechanisms and the price is analyzed: imitation behavior promotes market instability, finitude of resources is associated with stock index stability, and high sensitivity to the fair price provokes price oscillations. Long-range correlations in the price time series and heavy-tailed distributions of returns are observed for the version of the model which considers different proposals for stochasticity of microeconomic and macroeconomic origins.
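
    As a purely illustrative sketch of the structure just described - demand updated by imitation over a trust network plus a fundamentalist pull toward a fair price, with the price responding to aggregate excess demand - the toy difference-equation loop below uses invented functional forms and parameters, not the authors' equations.

      import numpy as np

      rng = np.random.default_rng(2)
      n_agents, steps = 100, 500
      fair_price, price = 100.0, 90.0

      # Random "trust" network: entry (i, j) = 1 if agent i follows agent j.
      A = (rng.random((n_agents, n_agents)) < 0.05).astype(float)
      np.fill_diagonal(A, 0.0)

      demand = rng.normal(0.0, 1.0, n_agents)
      for _ in range(steps):
          imitation = A @ demand / np.maximum(A.sum(axis=1), 1.0)  # trusted contacts
          fundamental = 0.05 * (fair_price - price)                # pull to fair price
          demand = (0.9 * demand + 0.5 * imitation + fundamental
                    + rng.normal(0.0, 0.2, n_agents))
          price += 0.01 * demand.mean()                            # excess demand moves price

      print(f"price after {steps} steps: {price:.2f}")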

  20. Modeling of cortical signals using echo state networks

    NASA Astrophysics Data System (ADS)

    Zhou, Hanying; Wang, Yongji; Huang, Jiangshuai

    2009-10-01

    Diverse modeling frameworks have been utilized with the ultimate goal of translating brain cortical signals into predictions of visible behavior. The inputs to these models are usually multidimensional neural recordings collected from relevant regions of a monkey's brain, while the outputs are the associated behavior, typically the 2-D or 3-D hand position of a primate. Here, our task is to set up a proper model to recover the movement trajectories from the neural signals that are simultaneously collected in the experiment. In this paper, we propose to use Echo State Networks (ESN) to map the neural firing activities into hand positions. The ESN is a recently developed recurrent neural network (RNN) model. Besides the dynamic properties and short-term memory that other recurrent neural networks also have, it has a special echo state property that endows it with the ability to model nonlinear dynamic systems powerfully. What distinguishes it most significantly from traditional recurrent neural networks is its special learning method. In this paper, we train the network with a refined version of its typical training method and obtain a better model.
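
    A minimal echo state network of the kind described - a fixed random reservoir with the echo state property, and a linear readout trained by ridge regression as the only learned part - can be sketched as follows; the input dimensions and the synthetic stand-ins for firing rates and hand positions are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      n_in, n_res, T = 32, 300, 2000

      # Synthetic stand-ins for binned firing rates (U) and 2-D hand positions (Y).
      U = rng.random((T, n_in))
      Y = np.cumsum(rng.normal(size=(T, 2)), axis=0) * 0.01

      # Fixed random input weights and reservoir, rescaled to spectral radius < 1
      # so that the echo state property holds.
      W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))
      W = rng.normal(size=(n_res, n_res))
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

      # Drive the reservoir and collect its states.
      X = np.zeros((T, n_res))
      x = np.zeros(n_res)
      for t in range(T):
          x = np.tanh(W_in @ U[t] + W @ x)
          X[t] = x

      # Linear readout fitted by ridge regression.
      lam = 1e-3
      W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ Y)
      rmse = float(np.sqrt(np.mean((X @ W_out - Y) ** 2)))
      print("training RMSE:", rmse)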

  1. Wireless Authentication Protocol Implementation: Descriptions of a Zero-Knowledge Proof (ZKP) Protocol Implementation for Testing on Ground and Airborne Mobile Networks

    DTIC Science & Technology

    2015-01-01

    on AFRL’s small unmanned aerial vehicle (UAV) test bed. SUBJECT TERMS: Zero-Knowledge Proof Protocol Testing. ***VERIFIER*** edition. Version Information: Version 1.1.3. Version Details: Successful ZK authentication between two networked machines. Fixed a bug ... that causes intermittent bignum errors. Fixed a network hang bug; the Verifier now allows continual authentication. Also now removing

  2. Comparison of Functional Network Connectivity for Passive-Listening and Active-Response Narrative Comprehension in Adolescents

    PubMed Central

    Holland, Scott K.

    2014-01-01

    Comprehension of narrative stories plays an important role in the development of language skills. In this study, we compared brain activity elicited by a passive-listening version and an active-response (AR) version of a narrative comprehension task by using independent component (IC) analysis on functional magnetic resonance imaging data from 21 adolescents (ages 14–18 years). Furthermore, we explored differences in functional network connectivity engaged by two versions of the task and investigated the relationship between the online response time and the strength of connectivity between each pair of ICs. Despite similar brain region involvements in auditory, temporoparietal, and frontoparietal language networks for both versions, the AR version engages some additional network elements including the left dorsolateral prefrontal, anterior cingulate, and sensorimotor networks. These additional involvements are likely associated with working memory and maintenance of attention, which can be attributed to the differences in cognitive strategic aspects of the two versions. We found a significant positive correlation between the online response time and the strength of connectivity between an IC in the left inferior frontal region and an IC in the sensorimotor region. An explanation for this finding is that a longer reaction time indicates a stronger connection between the frontal and sensorimotor networks, caused by increased activation in adolescents who require more effort to complete the task. PMID:24689887

  3. A modified NSGA-II solution for a new multi-objective hub maximal covering problem under uncertain shipments

    NASA Astrophysics Data System (ADS)

    Ebrahimi Zade, Amir; Sadegheih, Ahmad; Lotfi, Mohammad Mehdi

    2014-07-01

    Hubs are centers for the collection, rearrangement, and redistribution of commodities in transportation networks. In this paper, non-linear multi-objective formulations for single and multiple allocation hub maximal covering problems, as well as their linearized versions, are proposed. The formulations substantially mitigate the complexity of the existing models owing to their smaller number of constraints and variables. Also, uncertain shipments are studied in the context of hub maximal covering problems. In many real-world applications, any link on the path from origin to destination may fail to work due to disruption. Therefore, in the proposed bi-objective model, maximizing the safety of the weakest path in the network is considered as the second objective together with the traditional maximum coverage goal. Furthermore, to solve the bi-objective model, a modified version of NSGA-II with a new dynamic immigration operator is developed, in which the number of immigrants depends on the results of the other two common NSGA-II operators, i.e. mutation and crossover. Besides validating the proposed models, computational results confirm a better performance of the modified NSGA-II versus the traditional one.

  4. Scientific and educational recommender systems

    NASA Astrophysics Data System (ADS)

    Guseva, A. I.; Kireev, V. S.; Bochkarev, P. V.; Kuznetsov, I. A.; Philippov, S. A.

    2017-01-01

    This article discusses questions associated with the use of recommender systems in the training of graduates in physics. The objective of this research is the creation of a model of the recommender-system user in the sphere of science and education. A detailed review of current scientific and social networks for scientists and of the problem of constructing recommender systems in this area is given. The result of this study is a user information model for such systems. The model is presented in two versions: a full one, in the form of a semantic network, and a short one, in relational form. The relational model is a projection of the semantic network, taking into account restrictions on the number of links that characterize the number of information items (research results) which interact with the system user.

  5. S-curve networks and an approximate method for estimating degree distributions of complex networks

    NASA Astrophysics Data System (ADS)

    Guo, Jin-Li

    2010-12-01

    In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S curve (logistic curve). The growing trend of IPv4 addresses in China is forecast, providing reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, the paper proposes a finite network model with bulk growth, which is said to be an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. The paper develops an approximate method to predict the growth dynamics of the individual nodes, and uses this to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with simulations, obeying an approximately power-law form. This method can overcome a shortcoming of the Barabási-Albert method commonly used in current network research.
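
    The forecasting side of the paper rests on fitting a logistic (S) curve to cumulative counts; a minimal sketch with scipy is shown below, using invented yearly values rather than the Chinese IPv4 address statistics used in the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(t, K, r, t0):
          """S curve with carrying capacity K, growth rate r, and midpoint t0."""
          return K / (1.0 + np.exp(-r * (t - t0)))

      # Invented yearly cumulative counts standing in for address allocations.
      years = np.arange(2000, 2011, dtype=float)
      counts = np.array([12, 18, 27, 41, 60, 85, 115, 146, 172, 190, 201], float)

      (K, r, t0), _ = curve_fit(logistic, years, counts, p0=(250.0, 0.5, 2006.0))
      print(f"fitted growth limit K = {K:.0f}, midpoint year = {t0:.1f}")
      print("extrapolated value for 2015:", round(float(logistic(2015.0, K, r, t0))))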

  6. Empirical Reference Distributions for Networks of Different Size

    PubMed Central

    Smith, Anna; Calder, Catherine A.; Browning, Christopher R.

    2016-01-01

    Network analysis has become an increasingly prevalent research tool across a vast range of scientific fields. Here, we focus on the particular issue of comparing network statistics, i.e. graph-level measures of network structural features, across multiple networks that differ in size. Although “normalized” versions of some network statistics exist, we demonstrate via simulation why direct comparison is often inappropriate. We consider normalizing network statistics relative to a simple fully parameterized reference distribution and demonstrate via simulation how this is an improvement over direct comparison, but still sometimes problematic. We propose a new adjustment method based on a reference distribution constructed as a mixture model of random graphs which reflect the dependence structure exhibited in the observed networks. We show that using simple Bernoulli models as mixture components in this reference distribution can provide adjusted network statistics that are relatively comparable across different network sizes but still describe interesting features of networks, and that this can be accomplished at relatively low computational expense. Finally, we apply this methodology to a collection of ecological networks derived from the Los Angeles Family and Neighborhood Survey activity location data. PMID:27721556
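
    A stripped-down version of the adjustment idea - comparing an observed graph statistic against its distribution under a density-matched Bernoulli (Erdős-Rényi) reference - is sketched below with networkx; the paper's mixture-model reference is richer than this single-component illustration.

      import networkx as nx
      import numpy as np

      def adjusted_statistic(G, stat=nx.transitivity, n_ref=200, seed=0):
          """Z-score of a graph statistic against a density-matched Bernoulli reference."""
          n, p = G.number_of_nodes(), nx.density(G)
          rng = np.random.default_rng(seed)
          ref = [stat(nx.gnp_random_graph(n, p, seed=int(rng.integers(1 << 31))))
                 for _ in range(n_ref)]
          return (stat(G) - np.mean(ref)) / np.std(ref)

      # Two networks of different size but a similar generating process.
      small = nx.watts_strogatz_graph(50, 6, 0.1, seed=1)
      large = nx.watts_strogatz_graph(500, 6, 0.1, seed=1)
      print("adjusted transitivity, n = 50 :", round(adjusted_statistic(small), 2))
      print("adjusted transitivity, n = 500:", round(adjusted_statistic(large), 2))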

  7. The development of a green supply chain dual-objective facility by considering different levels of uncertainty

    NASA Astrophysics Data System (ADS)

    Khorasani, Sasan Torabzadeh; Almasifard, Maryam

    2017-11-01

    This paper presents a dual-objective facility programming model for a green supply chain network. The main objectives of the presented model are minimizing overall expenditure and the negative environmental impacts of the supply chain. This study contributes to the existing literature by incorporating uncertainty in customer demand, suppliers, production, and casting capacity. An industrial case study is also analyzed to demonstrate the feasibility of the proposed model and its application. A fuzzy approach known as TH is used to solve the suggested dual-objective model; the TH approach is an integration of a max-min method (LH) and a modified version of Werners' approach (MW). The outcome of this study reveals that the presented model can support a green supply chain network under different levels of uncertainty. In the presented model, the cost and negative environmental impacts derived from the supply chain network increase at higher levels of uncertainty.

  8. Corporations' Resistance to Innovation: The Adoption of the Internet Protocol Version 6

    ERIC Educational Resources Information Center

    Pazdrowski, Tomasz

    2013-01-01

    Computer networks that brought unprecedented growth in global communication have been using Internet Protocol version 4 (IPv4) as a standard for routing. The exponential increase in the use of the networks caused an acute shortage of available identification numbers (IP addresses). The shortage and other network communication issues are…

  9. Tracking state deployments of commercial vehicle information systems and networks : 1998 Washington State report

    DOT National Transportation Integrated Search

    1999-12-01

    Volume III of the Logical Architecture contract deliverable documents the Data Dictionary. This formatted version of the Teamwork model data dictionary is mechanically produced from the Teamwork CDIF (Case Data Interchange Format) output file. It is ...

  10. A PERFORMANCE EVALUATION OF THE 2004 RELEASE OF MODELS-3 CMAQ

    EPA Science Inventory

    This performance evaluation compares a full annual simulation (2001) of CMAQ (Version 4.4) covering the contiguous United States against monitoring data from four nationwide networks. This effort, which represents one of the most spatially and temporally comprehensive performance...

  11. Neural Network Cloud Classification Research

    DTIC Science & Technology

    1993-03-01

    analysis of the database made this study possible. We would also like to thank Don Chisolm and Rosemary Dyer for their enlightening discussions and... elements of the model correspond closely to neurophysiological data about the visual cortex. Efficient versions of the BCS and FCS have been

  12. Biomimicry of symbiotic multi-species coevolution for discrete and continuous optimization in RFID networks.

    PubMed

    Lin, Na; Chen, Hanning; Jing, Shikai; Liu, Fang; Liang, Xiaodan

    2017-03-01

    In recent years, symbiosis, as a rich source of potential engineering applications and computational models, has attracted more and more attention in the adaptive complex systems and evolutionary computing domains. Inspired by different forms of symbiotic coevolution in nature, this paper proposes a series of multi-swarm particle swarm optimizers called PS2Os, which extend the single-population particle swarm optimization (PSO) algorithm to an interacting multi-swarm model by constructing hierarchical interaction topologies and enhanced dynamical update equations. According to different symbiotic interrelationships, four versions of PS2O are devised to mimic mutualism, commensalism, predation, and competition mechanisms, respectively. In experiments on five benchmark problems, the proposed algorithms are shown to have considerable potential for solving complex optimization problems. The coevolutionary dynamics of symbiotic species in each PS2O version are also studied to demonstrate how the heterogeneity of different symbiotic interrelationships affects the algorithm's performance. PS2O is then used for solving the radio frequency identification (RFID) network planning (RNP) problem with a mixture of discrete and continuous variables. Simulation results show that the proposed algorithm outperforms the reference algorithms for planning RFID networks in terms of optimization accuracy and computational robustness.

  13. FTP Extensions for Variable Protocol Specification

    NASA Technical Reports Server (NTRS)

    Allman, Mark; Ostermann, Shawn

    2000-01-01

    The specification for the File Transfer Protocol (FTP) assumes that the underlying network protocols use a 32-bit network address and a 16-bit transport address (specifically IP version 4 and TCP). With the deployment of version 6 of the Internet Protocol, network addresses will no longer be 32 bits. This paper specifies extensions to FTP that will allow the protocol to work over a variety of network and transport protocols.

  14. Joint physical and numerical modeling of water distribution networks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, Adam; O'Hern, Timothy John; Orear, Leslie Jr.

    2009-01-01

    This report summarizes the experimental and modeling effort undertaken to understand solute mixing in a water distribution network, conducted during the last year of a 3-year project. The experimental effort involves measurement of the extent of mixing within different configurations of pipe networks, measurement of dynamic mixing in a single mixing tank, and measurement of dynamic solute mixing in a combined network-tank configuration. High-resolution analysis of turbulent mixing is carried out via high-speed photography as well as 3D finite-volume-based Large Eddy Simulation turbulence models. Macroscopic mixing rules based on flow momentum balance are also explored and, in some cases, implemented in EPANET. A new version of the EPANET code was developed to yield better mixing predictions. The impact of a storage tank on pipe mixing in a combined pipe-tank network during diurnal fill-and-drain cycles is assessed. A preliminary comparison between dynamic pilot data and EPANET-BAM is also reported.

  15. Neural-Network-Development Program

    NASA Technical Reports Server (NTRS)

    Phillips, Todd A.

    1993-01-01

    NETS, software tool for development and evaluation of neural networks, provides simulation of neural-network algorithms plus computing environment for development of such algorithms. Uses back-propagation learning method for all of networks it creates. Enables user to customize patterns of connections between layers of network. Also provides features for saving, during learning process, values of weights, providing more-precise control over learning process. Written in ANSI standard C language. Machine-independent version (MSC-21588) includes only code for command-line-interface version of NETS 3.0.

  16. Comparison of weighted and unweighted network analysis in the case of a pig trade network in Northern Germany.

    PubMed

    Büttner, Kathrin; Krieter, Joachim

    2018-08-01

    The analysis of trade networks, as well as of the spread of diseases within these systems, focuses mainly on pure animal movements between farms. Additional data included as edge weights can complement the informational content of the network analysis; however, the inclusion of edge weights can also alter its outcome. Thus, the aim of the study was to compare unweighted and weighted network analyses of a pork supply chain in Northern Germany and to evaluate the impact on the centrality parameters. Five different weighted network versions were constructed by adding the following edge weights: number of trade contacts, number of delivered livestock, average number of delivered livestock per trade contact, geographical distance and reciprocal geographical distance. Additionally, two different edge weight standardizations were used. The network observed from 2013 to 2014 contained 678 farms which were connected by 1,018 edges. General network characteristics including the shortest path structure (e.g. identical shortest paths, shortest path lengths) as well as centrality parameters for each network version were calculated. Furthermore, targeted and random removal of farms was performed in order to evaluate the structural changes in the networks. All network versions and edge weight standardizations revealed the same number of shortest paths (1,935). Between 94.4 and 98.9% of the shortest paths were identical between the unweighted network and the weighted network versions. Furthermore, depending on the calculated centrality parameters and the edge weight standardization used, the weighted network versions either differed from the unweighted network (e.g. for the centrality parameters based on ingoing trade contacts) or did not differ (e.g. for the centrality parameters based on outgoing trade contacts) with regard to the Spearman Rank Correlation and the targeted removal of farms. The choice of standardization method as well as the inclusion or exclusion of specific farm types (e.g. abattoirs) can alter the results significantly. These facts have to be considered when centrality parameters are to be used for the implementation of prevention and control strategies in the case of an epidemic. Copyright © 2018 Elsevier B.V. All rights reserved.
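
    A minimal networkx sketch of the unweighted-versus-weighted contrast is given below; the toy trade network, farm labels and delivery counts are invented, and weighted betweenness here treats the weight as a distance, so in practice a transformation such as the reciprocal of the number of contacts would typically be applied first.

      import networkx as nx

      # Toy directed trade network: (source farm, destination farm, deliveries).
      trades = [("A", "B", 12), ("A", "C", 1), ("B", "D", 7),
                ("C", "D", 2), ("D", "E", 9), ("B", "E", 1)]
      G = nx.DiGraph()
      G.add_weighted_edges_from(trades)

      # Unweighted in-degree centrality versus weighted in-strength.
      in_degree = nx.in_degree_centrality(G)
      in_strength = dict(G.in_degree(weight="weight"))

      # Betweenness centrality without and with edge weights.
      bc_unweighted = nx.betweenness_centrality(G)
      bc_weighted = nx.betweenness_centrality(G, weight="weight")

      for farm in sorted(G.nodes):
          print(farm, in_degree[farm], in_strength[farm],
                round(bc_unweighted[farm], 2), round(bc_weighted[farm], 2))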

  17. Continuous time limits of the utterance selection model

    NASA Astrophysics Data System (ADS)

    Michaud, Jérôme

    2017-02-01

    In this paper we derive alternative continuous time limits of the utterance selection model (USM) for language change [G. J. Baxter et al., Phys. Rev. E 73, 046118 (2006), 10.1103/PhysRevE.73.046118]. This is motivated by the fact that the Fokker-Planck continuous time limit derived in the original version of the USM is only valid for a small range of parameters. We investigate the consequences of relaxing these constraints on parameters. Using the normal approximation of the multinomial distribution, we derive a continuous time limit of the USM in the form of a weak-noise stochastic differential equation. We argue that this weak noise, not captured by the Kramers-Moyal expansion, cannot be neglected. We then propose a coarse-graining procedure, which takes the form of a stochastic version of the heterogeneous mean field approximation. This approximation groups the behavior of nodes of the same degree, reducing the complexity of the problem. With the help of this approximation, we study in detail two simple families of networks: regular networks and star-shaped networks. The analysis reveals and quantifies a finite-size effect of the dynamics. If we increase the size of the network while keeping all the other parameters constant, we transition from a state where conventions emerge to a state where no convention emerges. Furthermore, we show that the degree of a node acts as a time scale. For heterogeneous networks such as star-shaped networks, the time scale difference can become very large, leading to a noisier behavior of highly connected nodes.

  18. Violence-related content in video game may lead to functional connectivity changes in brain networks as revealed by fMRI-ICA in young men.

    PubMed

    Zvyagintsev, M; Klasen, M; Weber, R; Sarkheil, P; Esposito, F; Mathiak, K A; Schwenzer, M; Mathiak, K

    2016-04-21

    In violent video games, players engage in virtual aggressive behaviors. Exposure to virtual aggressive behavior induces short-term changes in players' behavior. In a previous study, a violence-related version of the racing game "Carmageddon TDR2000" increased aggressive affects, cognitions, and behaviors compared to its non-violence-related version. This study investigates the differences in neural network activity while playing the two versions of the video game. Functional magnetic resonance imaging (fMRI) recorded the ongoing brain activity of 18 young men playing the violence-related and the non-violence-related version of the video game Carmageddon. Image time series were decomposed into functional connectivity (FC) patterns using independent component analysis (ICA), and template matching yielded a mapping to established functional brain networks. The FC patterns revealed a decrease in connectivity within six brain networks during the violence-related compared to the non-violence-related condition: three sensory-motor networks, the reward network, the default mode network (DMN), and the right-lateralized frontoparietal network. Playing violent racing games may change functional brain connectivity, in particular in the reward network and the DMN, even after controlling for event frequency. These changes may underlie the short-term increase of aggressive affects, cognitions, and behaviors observed after playing violent video games. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  19. ARACHNE: A neural-neuroglial network builder with remotely controlled parallel computing

    PubMed Central

    Rusakov, Dmitri A.; Savtchenko, Leonid P.

    2017-01-01

    Creating and running realistic models of neural networks has hitherto been a task for computing professionals rather than experimental neuroscientists. This is mainly because such networks usually engage substantial computational resources, the handling of which requires specific programming skills. Here we put forward a newly developed simulation environment, ARACHNE: it enables an investigator to build and explore cellular networks of arbitrary biophysical and architectural complexity using the logic of NEURON and a simple interface on a local computer or a mobile device. The interface can control, through the internet, an optimized computational kernel installed on a remote computer cluster. ARACHNE can combine neuronal (wired) and astroglial (extracellular volume-transmission driven) network types and adopt realistic cell models from the NEURON library. The program and documentation (current version) are available at the GitHub repository https://github.com/LeonidSavtchenko/Arachne under the MIT License (MIT). PMID:28362877

  20. Thin Watts-Strogatz networks.

    PubMed

    de Moura, Alessandro P S

    2006-01-01

    A modified version of the Watts-Strogatz (WS) network model is proposed, in which the number of shortcuts scales with the network size N as N^alpha, with alpha < 1. In these networks, the ratio of the number of shortcuts to the network size approaches zero as N --> infinity, whereas in the original WS model, this ratio is constant. We call such networks "thin Watts-Strogatz networks." We show that even though the fraction of shortcuts becomes vanishingly small for large networks, they still cause a kind of small-world effect, in the sense that the length L of the network increases sublinearly with the size. We develop a mean-field theory for these networks, which predicts that the length scales as N^(1-alpha) ln N for large N. We also study how a search using only local information works in thin WS networks. We find that the search performance is enhanced compared to the regular network, and we predict that the search time tau scales as N^(1-alpha/2). These theoretical results are tested using numerical simulations. We comment on the possible relevance of thin WS networks for the design of high-performance low-cost communication networks.
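
    A short construction matching this description - a ring lattice on N nodes with floor(N^alpha) random shortcuts added - and a measurement of the mean shortest-path length is sketched below as an illustration of the setup, not a reproduction of the paper's simulations.

      import math
      import random
      import networkx as nx

      def thin_ws(N, alpha, k=2, seed=0):
          """Ring lattice (each node linked to its k nearest neighbors on each side)
          with floor(N**alpha) random shortcuts added."""
          random.seed(seed)
          G = nx.circulant_graph(N, list(range(1, k + 1)))
          for _ in range(int(math.floor(N ** alpha))):
              u, v = random.sample(range(N), 2)
              G.add_edge(u, v)
          return G

      # Mean path length grows sublinearly with N (theory: ~ N**(1 - alpha) * ln N).
      for N in (200, 400, 800):
          G = thin_ws(N, alpha=0.5)
          print(N, round(nx.average_shortest_path_length(G), 1))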

  1. Structure and weights optimisation of a modified Elman network emotion classifier using hybrid computational intelligence algorithms: a comparative study

    NASA Astrophysics Data System (ADS)

    Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood

    2015-10-01

    Artificial neural networks are efficient models in pattern recognition applications, but their performance depends on employing a suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier, based on the gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering features of the speech signal related to prosody, voice quality, and spectrum, a rich feature set was constructed, and a fast feature selection method was employed to select the more efficient features. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid methods based on the particle swarm optimisation (PSO) algorithm and its binary version, PSO and the discrete firefly algorithm, and a hybrid of error back-propagation and a genetic algorithm used for optimisation. Experimental tests on the Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.

  2. Memory-induced mechanism for self-sustaining activity in networks

    NASA Astrophysics Data System (ADS)

    Allahverdyan, A. E.; Steeg, G. Ver; Galstyan, A.

    2015-12-01

    We study a mechanism for sustaining activity on networks, inspired by a well-known model of neuronal dynamics. Our primary focus is the emergence of self-sustaining collective activity patterns, where no single node can stay active by itself, but the activity provided initially is sustained within the collective of interacting agents. In contrast to existing models of self-sustaining activity that is caused by (long) loops present in the network, here we focus on treelike structures and examine activation mechanisms that are due to temporal memory of the nodes. This approach is motivated by applications in social media, where long network loops are rare or absent. Our results suggest that under weak behavioral noise, the nodes robustly split into several clusters, with partial synchronization of nodes within each cluster. We also study a randomly weighted version of the models, in which the nodes are allowed to change their connection strength (this can model attention redistribution), and show that it does facilitate the self-sustained activity.

  3. Causal biological network database: a comprehensive platform of causal biological network models focused on the pulmonary and vascular systems

    PubMed Central

    Boué, Stéphanie; Talikka, Marja; Westra, Jurjen Willem; Hayes, William; Di Fabio, Anselmo; Park, Jennifer; Schlage, Walter K.; Sewer, Alain; Fields, Brett; Ansari, Sam; Martin, Florian; Veljkovic, Emilija; Kenney, Renee; Peitsch, Manuel C.; Hoeng, Julia

    2015-01-01

    With the wealth of publications and data available, powerful and transparent computational approaches are required to represent measured data and scientific knowledge in a computable and searchable format. We developed a set of biological network models, scripted in the Biological Expression Language, that reflect causal signaling pathways across a wide range of biological processes, including cell fate, cell stress, cell proliferation, inflammation, tissue repair and angiogenesis in the pulmonary and cardiovascular context. This comprehensive collection of networks is now freely available to the scientific community in a centralized web-based repository, the Causal Biological Network database, which is composed of over 120 manually curated and well annotated biological network models and can be accessed at http://causalbionet.com. The website accesses a MongoDB, which stores all versions of the networks as JSON objects and allows users to search for genes, proteins, biological processes, small molecules and keywords in the network descriptions to retrieve biological networks of interest. The content of the networks can be visualized and browsed. Nodes and edges can be filtered and all supporting evidence for the edges can be browsed and is linked to the original articles in PubMed. Moreover, networks may be downloaded for further visualization and evaluation. Database URL: http://causalbionet.com PMID:25887162

  4. Using simple agent-based modeling to inform and enhance neighborhood walkability.

    PubMed

    Badland, Hannah; White, Marcus; Macaulay, Gus; Eagleson, Serryn; Mavoa, Suzanne; Pettit, Christopher; Giles-Corti, Billie

    2013-12-11

    Pedestrian-friendly neighborhoods with proximal destinations and services encourage walking and decrease car dependence, thereby contributing to more active and healthier communities. Proximity to key destinations and services is an important aspect of the urban design decision making process, particularly in areas adopting a transit-oriented development (TOD) approach to urban planning, whereby densification occurs within walking distance of transit nodes. Modeling destination access within neighborhoods has been limited to circular catchment buffers or more sophisticated network-buffers generated using geoprocessing routines within geographical information systems (GIS). Both circular and network-buffer catchment methods are problematic. Circular catchment models do not account for street networks, thus do not allow exploratory 'what-if' scenario modeling; and network-buffering functionality typically exists within proprietary GIS software, which can be costly and requires a high level of expertise to operate. This study sought to overcome these limitations by developing an open-source simple agent-based walkable catchment tool that can be used by researchers, urban designers, planners, and policy makers to test scenarios for improving neighborhood walkable catchments. A simplified version of an agent-based model was ported to a vector-based open source GIS web tool using data derived from the Australian Urban Research Infrastructure Network (AURIN). The tool was developed and tested with end-user stakeholder working group input. The resulting model has proven to be effective and flexible, allowing stakeholders to assess and optimize the walkability of neighborhood catchments around actual or potential nodes of interest (e.g., schools, public transport stops). Users can derive a range of metrics to compare different scenarios modeled. These include: catchment area versus circular buffer ratios; mean number of streets crossed; and modeling of different walking speeds and wait time at intersections. The tool has the capacity to influence planning and public health advocacy and practice, and by using open-access source software, it is available for use locally and internationally. There is also scope to extend this version of the tool from a simple to a complex model, which includes agents (i.e., simulated pedestrians) 'learning' and incorporating other environmental attributes that enhance walkability (e.g., residential density, mixed land use, traffic volume).

  5. Critical space-time networks and geometric phase transitions from frustrated edge antiferromagnetism

    NASA Astrophysics Data System (ADS)

    Trugenberger, Carlo A.

    2015-12-01

    Recently I proposed a simple dynamical network model for discrete space-time that self-organizes as a graph with Hausdorff dimension dH=4 . The model has a geometric quantum phase transition with disorder parameter (dH-ds) , where ds is the spectral dimension of the dynamical graph. Self-organization in this network model is based on a competition between a ferromagnetic Ising model for vertices and an antiferromagnetic Ising model for edges. In this paper I solve a toy version of this model defined on a bipartite graph in the mean-field approximation. I show that the geometric phase transition corresponds exactly to the antiferromagnetic transition for edges, the dimensional disorder parameter of the former being mapped to the staggered magnetization order parameter of the latter. The model has a critical point with long-range correlations between edges, where a continuum random geometry can be defined, exactly as in Kazakov's famed 2D random lattice Ising model but now in any number of dimensions.

  6. Uncoordinated MAC for Adaptive Multi-Beam Directional Networks: Analysis and Evaluation

    DTIC Science & Technology

    2016-04-10

    transmission times, hence traditional CSMA approaches are not appropriate. We first present our model of these multi-beamforming capabilities and the ... resulting wireless interference. We then derive an upper bound on multi-access performance for an idealized version of this physical layer. We then present ... transmissions and receptions in a mobile ad-hoc network has in practice led to very constrained topologies. As mentioned, one approach for system design is to de

  7. Evolution of tag-based cooperation with emotion on complex networks

    NASA Astrophysics Data System (ADS)

    Lima, F. W. S.

    2018-04-01

    We study the evolution of four strategies: ethnocentric, altruistic, egoistic and cosmopolitan, in one community of individuals through Monte Carlo simulations. Interactions and reproduction among computational agents are simulated on undirected Barabási-Albert (UBA) networks and Erdős-Rényi (ER) random graphs. We study the Hammond-Axelrod model on both UBA networks and ER random graphs for the asexual reproduction case. We use a modified version of the traditional Hammond-Axelrod model and also allow the agents' decisions about one of the strategies to take into account emotion among their peers. Our simulations showed that egoism and altruism win, in contrast to other results in the literature, where the ethnocentric strategy is common.
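
    For orientation, the two interaction topologies named above can be generated with networkx as follows; the sizes and parameters are illustrative, not those of the study.

```python
# Illustrative generation of the two interaction topologies: an undirected
# Barabasi-Albert network and an Erdos-Renyi random graph of matched mean degree.
import networkx as nx

N, m = 1000, 4                       # nodes and BA attachment parameter (examples)
ba = nx.barabasi_albert_graph(N, m)  # scale-free topology
p = 2 * m / (N - 1)                  # ER edge probability matched to BA mean degree
er = nx.erdos_renyi_graph(N, p)

for name, g in [("BA", ba), ("ER", er)]:
    print(name, g.number_of_nodes(), g.number_of_edges())
```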

  8. Super-resolution Time-Lapse Seismic Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Ovcharenko, O.; Kazei, V.; Peter, D. B.; Alkhalifah, T.

    2017-12-01

    Time-lapse seismic waveform inversion is a technique that allows tracking changes in reservoirs over time. Such monitoring is computationally expensive, and it is therefore barely feasible to perform it on-the-fly. Most of the expense is related to the numerous FWI iterations at high temporal frequencies, which are inevitable since the low-frequency components cannot resolve fine-scale features of a velocity model. Inverted velocity changes are also blurred when there is noise in the data, so the problem of low-resolution images is widely known. One of the problems intensively tackled by the computer vision research community is the recovery of high-resolution images from their low-resolution versions. Using artificial neural networks to reach super-resolution from a single downsampled image is one of the leading solutions to this problem. Each pixel of the upscaled image is affected by all the pixels of its low-resolution version, which enables the workflow to recover features that are likely to occur in the corresponding environment. In the present work, we adopt a machine learning image enhancement technique to improve the resolution of time-lapse full-waveform inversion. We first invert the baseline model with conventional FWI. Then we run a few iterations of FWI on a set of the monitoring data to find the desired model changes. These changes are blurred, and we enhance their resolution by using a deep neural network. The network is trained to map low-resolution model updates predicted by FWI into the real perturbations of the baseline model. For supervised training of the network we generate a set of random perturbations in the baseline model and perform FWI on the noisy data from the perturbed models. We test the approach on a realistic perturbation of the Marmousi II model and demonstrate that it outperforms conventional convolution-based deblurring techniques.
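
    The sketch below is a minimal, hedged illustration of the kind of convolutional mapping described (a small SRCNN-like network from a blurred model update to a sharper perturbation estimate); the architecture and layer sizes are assumptions, not the authors' network.

```python
# Minimal convolutional sketch: map a blurred FWI model update to a sharper
# perturbation estimate. Architecture is illustrative only.
import torch
import torch.nn as nn

class UpdateEnhancer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):           # x: blurred FWI model update (B, 1, H, W)
        return self.net(x)          # sharpened perturbation estimate

model = UpdateEnhancer()
blurred_update = torch.randn(8, 1, 64, 64)
target_perturbation = torch.randn(8, 1, 64, 64)   # would come from synthetic perturbed models
loss = nn.functional.mse_loss(model(blurred_update), target_perturbation)
loss.backward()
```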

  9. Towards cortex sized artificial neural systems.

    PubMed

    Johansson, Christopher; Lansner, Anders

    2007-01-01

    We propose, implement, and discuss an abstract model of the mammalian neocortex. This model is instantiated with a sparse, recurrently connected neural network that has spiking leaky integrator units and continuous Hebbian learning. First we study the structure, modularization, and size of neocortex, and then we describe a generic computational model of the cortical circuitry. A characterizing feature of the model is that it is based on the modularization of neocortex into hypercolumns and minicolumns. Both floating- and fixed-point arithmetic implementations of the model are presented along with simulation results. We conclude that an implementation on a cluster computer is bounded by computation rather than communication. Mouse and rat cortex sized versions of our model execute in 44% and 23% of real time, respectively. Further, an instance of the model with 1.6 x 10^6 units and 2 x 10^11 connections performed noise reduction and pattern completion. These implementations represent the current frontier of large-scale abstract neural network simulations in terms of network size and running speed.

  10. The Portals 4.0 network programming interface.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, Brian W.; Brightwell, Ronald Brian; Pedretti, Kevin

    2012-11-01

    This report presents a specification for the Portals 4.0 network programming interface. Portals 4.0 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4.0 is well suited to massively parallel processing and embedded systems. Portals 4.0 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4.0 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.

  11. The connection-set algebra--a novel formalism for the representation of connectivity structure in neuronal network models.

    PubMed

    Djurfeldt, Mikael

    2012-07-01

    The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. The algebra provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. CSA is expressive enough to describe a wide range of connection patterns, including multiple types of random and/or geometrically dependent connectivity, and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for scalable and efficient representation of connectivity in parallel neuronal network simulators and could even allow for avoiding explicit representation of connections in computer memory. The expressiveness of CSA makes prototyping of network structure easy. A C++ version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31-42, 2008b) and an implementation in Python has been publicly released.
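
    The following toy sketch conveys the flavor of composing connection sets from simpler ones with set-style operators; it is not the released C++ or Python CSA implementation, and its function names are invented for illustration.

```python
# Toy illustration of building complex connection sets from simpler ones.
# Not the CSA library API; names and semantics are invented for this sketch.
import random

def all_to_all(pre, post):
    return {(i, j) for i in pre for j in post}

def one_to_one(pre, post):
    return {(i, j) for i, j in zip(pre, post)}

def random_set(pre, post, p, seed=0):
    rng = random.Random(seed)
    return {(i, j) for i in pre for j in post if rng.random() < p}

pre, post = range(5), range(5)
# Compose: random connectivity minus self-connections (set difference as an operator).
conns = random_set(pre, post, p=0.3) - one_to_one(pre, post)
print(sorted(conns))
```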

  12. Delay-induced Turing-like waves for one-species reaction-diffusion model on a network

    NASA Astrophysics Data System (ADS)

    Petit, Julien; Carletti, Timoteo; Asllani, Malbor; Fanelli, Duccio

    2015-09-01

    A one-species time-delay reaction-diffusion system defined on a complex network is studied. Traveling waves are predicted to occur following a symmetry-breaking instability of a homogeneous stationary stable solution, subject to an external nonhomogeneous perturbation. These are generalized Turing-like waves that materialize in a single-species population dynamics model, as the unexpected byproduct of the imposed delay in the diffusion part. Sufficient conditions for the onset of the instability are mathematically provided by performing a linear stability analysis adapted to time-delayed differential equations. The method developed here exploits the properties of the Lambert W-function. The predictions of the theory are confirmed by direct numerical simulation carried out for a modified version of the classical Fisher model, defined on a Watts-Strogatz network and with the inclusion of the delay.
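
    Schematically, such a model can be written (a generic form consistent with the description above, not necessarily the paper's exact notation) as a single-species equation with delayed diffusion on a network with Laplacian $L_{ij}$:

    $$\dot{u}_i(t) = f\big(u_i(t)\big) + D \sum_{j=1}^{N} L_{ij}\, u_j(t-\tau), \qquad f(u) = u(1-u),$$

    so that linearizing around a homogeneous fixed point $u^*$ and projecting onto the Laplacian eigenvalues $\Lambda_\alpha$ gives characteristic equations of the form $\lambda = f'(u^*) + D\,\Lambda_\alpha\, e^{-\lambda\tau}$, whose roots can be written in terms of the Lambert W-function.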

  13. A general model for metabolic scaling in self-similar asymmetric networks

    PubMed Central

    Savage, Van M.; Enquist, Brian J.

    2017-01-01

    How a particular attribute of an organism changes or scales with its body size is known as an allometry. Biological allometries, such as metabolic scaling, have been hypothesized to result from selection to maximize how vascular networks fill space yet minimize internal transport distances and resistances. The West, Brown, Enquist (WBE) model argues that these two principles (space-filling and energy minimization) are (i) general principles underlying the evolution of the diversity of biological networks across plants and animals and (ii) can be used to predict how the resulting geometry of biological networks then governs their allometric scaling. Perhaps the most central biological allometry is how metabolic rate scales with body size. A core assumption of the WBE model is that networks are symmetric with respect to their geometric properties. That is, any two given branches within the same generation in the network are assumed to have identical lengths and radii. However, biological networks are rarely if ever symmetric. An open question is: Does incorporating asymmetric branching change or influence the predictions of the WBE model? We derive a general network model that relaxes the symmetric assumption and define two classes of asymmetrically bifurcating networks. We show that asymmetric branching can be incorporated into the WBE model. This asymmetric version of the WBE model results in several theoretical predictions for the structure, physiology, and metabolism of organisms, specifically in the case for the cardiovascular system. We show how network asymmetry can now be incorporated in the many allometric scaling relationships via total network volume. Most importantly, we show that the 3/4 metabolic scaling exponent from Kleiber’s Law can still be attained within many asymmetric networks. PMID:28319153
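
    For orientation, in the classic symmetric setting of the WBE model with branching ratio $n$, area-preserving branching and space-filling lengths fix the scale factors and hence the exponent (a standard result quoted here only as background; the paper's contribution is the generalization to asymmetric branching):

    $$\beta \equiv \frac{r_{k+1}}{r_k} = n^{-1/2}, \qquad \gamma \equiv \frac{l_{k+1}}{l_k} = n^{-1/3}, \qquad B \propto M^{a}, \quad a = \frac{\ln n}{\ln\!\big(1/(\gamma\beta^{2})\big)} = \frac{3}{4}.$$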

  14. A general model for metabolic scaling in self-similar asymmetric networks.

    PubMed

    Brummer, Alexander Byers; Savage, Van M; Enquist, Brian J

    2017-03-01

    How a particular attribute of an organism changes or scales with its body size is known as an allometry. Biological allometries, such as metabolic scaling, have been hypothesized to result from selection to maximize how vascular networks fill space yet minimize internal transport distances and resistances. The West, Brown, Enquist (WBE) model argues that these two principles (space-filling and energy minimization) are (i) general principles underlying the evolution of the diversity of biological networks across plants and animals and (ii) can be used to predict how the resulting geometry of biological networks then governs their allometric scaling. Perhaps the most central biological allometry is how metabolic rate scales with body size. A core assumption of the WBE model is that networks are symmetric with respect to their geometric properties. That is, any two given branches within the same generation in the network are assumed to have identical lengths and radii. However, biological networks are rarely if ever symmetric. An open question is: Does incorporating asymmetric branching change or influence the predictions of the WBE model? We derive a general network model that relaxes the symmetric assumption and define two classes of asymmetrically bifurcating networks. We show that asymmetric branching can be incorporated into the WBE model. This asymmetric version of the WBE model results in several theoretical predictions for the structure, physiology, and metabolism of organisms, specifically in the case for the cardiovascular system. We show how network asymmetry can now be incorporated in the many allometric scaling relationships via total network volume. Most importantly, we show that the 3/4 metabolic scaling exponent from Kleiber's Law can still be attained within many asymmetric networks.

  15. Software For Graphical Representation Of A Network

    NASA Technical Reports Server (NTRS)

    Mcallister, R. William; Mclellan, James P.

    1993-01-01

    System Visualization Tool (SVT) computer program developed to provide systems engineers with means of graphically representing networks. Generates diagrams illustrating structures and states of networks defined by users. Provides systems engineers with powerful tool simplifying analysis of requirements and testing and maintenance of complex software-controlled systems. Employs visual models supporting analysis of chronological sequences of requirements, simulation data, and related software functions. Applied to pneumatic, hydraulic, and propellant-distribution networks. Used to define and view arbitrary configurations of such major hardware components of system as propellant tanks, valves, propellant lines, and engines. Also graphically displays status of each component. Advantage of SVT: utilizes visual cues to represent configuration of each component within network. Written in Turbo Pascal(R), version 5.0.

  16. Fast Recall for Complex-Valued Hopfield Neural Networks with Projection Rules.

    PubMed

    Kobayashi, Masaki

    2017-01-01

    Many models of neural networks have been extended to complex-valued neural networks. A complex-valued Hopfield neural network (CHNN) is a complex-valued version of a Hopfield neural network. Complex-valued neurons can represent multiple states, and CHNNs can therefore store multilevel data, such as gray-scale images. CHNNs are often trapped in local minima, and their noise tolerance is low. Lee improved the noise tolerance of CHNNs by detecting and exiting the local minima. In the present work, we propose a new recall algorithm that eliminates the local minima. We show through computer simulations that our proposed recall algorithm not only accelerates recall but also improves noise tolerance.
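
    A hedged numpy sketch of the basic multistate complex-valued Hopfield setting (Hebbian storage and phase quantization); it illustrates the background only and is not the projection-rule training or the recall algorithm proposed in the paper.

```python
# Sketch of a multistate complex-valued Hopfield network: neuron states are
# K-th roots of unity, weights are Hebbian, and an update quantizes the
# weighted input back onto the unit circle. Not the paper's recall algorithm.
import numpy as np

K = 8                                             # number of neuron states (assumed)
states = np.exp(2j * np.pi * np.arange(K) / K)

def quantize(z):
    """Map complex values to the nearest K-th root of unity by phase."""
    k = np.round(np.angle(z) * K / (2 * np.pi)).astype(int) % K
    return states[k]

def hebbian_weights(patterns):
    X = np.asarray(patterns)                      # shape (P, N), entries on the unit circle
    W = X.T @ X.conj() / X.shape[1]
    np.fill_diagonal(W, 0)                        # no self-connections
    return W

def recall(W, x, steps=20):
    for _ in range(steps):
        x = quantize(W @ x)
    return x

rng = np.random.default_rng(0)
pattern = states[rng.integers(0, K, size=16)]
W = hebbian_weights([pattern])
noisy = quantize(pattern * np.exp(1j * rng.normal(0.0, 0.3, size=16)))
print(np.allclose(recall(W, noisy), pattern))     # ideally recovers the stored pattern
```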

  17. A generalized approach to complex networks

    NASA Astrophysics Data System (ADS)

    Costa, L. Da F.; da Rocha, L. E. C.

    2006-03-01

    This work describes how the formalization of complex network concepts in terms of discrete mathematics, especially mathematical morphology, allows a series of generalizations and important results ranging from new measurements of the network topology to new network growth models. First, the concepts of node degree and clustering coefficient are extended in order to characterize not only specific nodes, but any generic subnetwork. Second, the consideration of distance transform and rings are used to further extend those concepts in order to obtain a signature, instead of a single scalar measurement, ranging from the single node to whole graph scales. The enhanced discriminative potential of such extended measurements is illustrated with respect to the identification of correspondence between nodes in two complex networks, namely a protein-protein interaction network and a perturbed version of it.
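
    A small illustrative sketch (not the authors' mathematical-morphology formalism) of what it means to extend node-level measurements to a generic subnetwork: a subnetwork "degree" counted as boundary edges, and a ring signature built from shortest-path distances.

```python
# Illustrative extensions of node-level measurements to a subnetwork:
# boundary-edge "degree" and a distance-ring signature. Schematic only.
import networkx as nx
from collections import Counter

def subnetwork_degree(G, nodes):
    nodes = set(nodes)
    return sum(1 for u, v in G.edges if (u in nodes) != (v in nodes))

def ring_signature(G, nodes, max_d=4):
    """Number of nodes at each shortest-path distance d = 1..max_d from the subnetwork."""
    dist = {}
    for s in nodes:
        for t, d in nx.single_source_shortest_path_length(G, s, cutoff=max_d).items():
            dist[t] = min(d, dist.get(t, max_d + 1))
    counts = Counter(d for t, d in dist.items() if t not in set(nodes))
    return [counts.get(d, 0) for d in range(1, max_d + 1)]

G = nx.karate_club_graph()
sub = [0, 1, 2]
print(subnetwork_degree(G, sub), ring_signature(G, sub))
```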

  18. Losing My Religion? The Impact of Spiritual Cues on Noncognitive Skills

    ERIC Educational Resources Information Center

    Bowen, Daniel H.; Cheng, Albert

    2016-01-01

    Studies consistently show that Catholic schools produce positive impacts on educational outcomes. Many charter school networks in the United States now provide what are essentially secularized versions of the Catholic education model. However, charter schools cannot legally replicate the overt religious curriculum and mission of Catholic…

  19. AERONET Version 3 Release: Providing Significant Improvements for Multi-Decadal Global Aerosol Database and Near Real-Time Validation

    NASA Technical Reports Server (NTRS)

    Holben, Brent; Slutsker, Ilya; Giles, David; Eck, Thomas; Smirnov, Alexander; Sinyuk, Aliaksandr; Schafer, Joel; Sorokin, Mikhail; Rodriguez, Jon; Kraft, Jason; hide

    2016-01-01

    Aerosols are highly variable in space, time and properties. Global assessment from satellite platforms and model predictions rely on validation from AERONET, a highly accurate ground-based network. Ver. 3 represents a significant improvement in accuracy and quality.

  20. An open source web interface for linking models to infrastructure system databases

    NASA Astrophysics Data System (ADS)

    Knox, S.; Mohamed, K.; Harou, J. J.; Rheinheimer, D. E.; Medellin-Azuara, J.; Meier, P.; Tilmant, A.; Rosenberg, D. E.

    2016-12-01

    Models of networked engineered resource systems such as water or energy systems are often built collaboratively with developers from different domains working at different locations. These models can be linked to large scale real world databases, and they are constantly being improved and extended. As the development and application of these models becomes more sophisticated, and the computing power required for simulations and/or optimisations increases, so has the need for online services and tools which enable the efficient development and deployment of these models. Hydra Platform is an open source, web-based data management system, which allows modellers of network-based models to remotely store network topology and associated data in a generalised manner, allowing it to serve multiple disciplines. Hydra Platform uses a web API using JSON to allow external programs (referred to as `Apps') to interact with its stored networks and perform actions such as importing data, running models, or exporting the networks to different formats. Hydra Platform supports multiple users accessing the same network and has a suite of functions for managing users and data. We present ongoing development in Hydra Platform, the Hydra Web User Interface, through which users can collaboratively manage network data and models in a web browser. The web interface allows multiple users to graphically access, edit and share their networks, run apps and view results. Through apps, which are located on the server, the web interface can give users access to external data sources and models without the need to install or configure any software. This also ensures model results can be reproduced by removing platform or version dependence. Managing data and deploying models via the web interface provides a way for multiple modellers to collaboratively manage data, deploy and monitor model runs and analyse results.

  1. SMC: SCENIC Model Control

    NASA Technical Reports Server (NTRS)

    Srivastava, Priyaka; Kraus, Jeff; Murawski, Robert; Golden, Bertsel, Jr.

    2015-01-01

    NASA's Space Communications and Navigation (SCaN) program manages three active networks: the Near Earth Network, the Space Network, and the Deep Space Network. These networks simultaneously support NASA missions and provide communications services to customers worldwide. To efficiently manage these resources and their capabilities, a team of student interns at the NASA Glenn Research Center is developing a distributed system to model the SCaN networks. Once complete, the system shall provide a platform that enables users to perform capacity modeling of current and prospective missions with finer-grained control of information between several simulation and modeling tools. This will enable the SCaN program to access a holistic view of its networks and simulate the effects of modifications in order to provide NASA with decisional information. The development of this capacity modeling system is managed by NASA's Strategic Center for Education, Networking, Integration, and Communication (SCENIC). Three primary third-party software tools offer their unique abilities in different stages of the simulation process. MagicDraw provides UML/SysML modeling, AGI's Systems Tool Kit simulates the physical transmission parameters and de-conflicts scheduled communication, and Riverbed Modeler (formerly OPNET) simulates communication protocols and packet-based networking. SCENIC developers are building custom software extensions to integrate these components in an end-to-end space communications modeling platform. A central control module acts as the hub for report-based messaging between client wrappers. Backend databases provide information related to mission parameters and ground station configurations, while the end user defines scenario-specific attributes for the model. The eight SCENIC interns are working under the direction of their mentors to complete an initial version of this capacity modeling system during the summer of 2015. The intern team is composed of four students in Computer Science, two in Computer Engineering, one in Electrical Engineering, and one studying Space Systems Engineering.

  2. Deploying Monitoring Trails for Fault Localization in All- Optical Networks and Radio-over-Fiber Passive Optical Networks

    NASA Astrophysics Data System (ADS)

    Maamoun, Khaled Mohamed

    Fault localization is the process of identifying the true source of a failure from a set of collected failure notifications. Isolating failure recovery within the network optical domain is necessary to resolve alarm storm problems. The introduction of the monitoring trail (m-trail) has been proven to deliver better performance by employing monitoring resources in the form of optical trails - a monitoring framework that generalizes all previously reported counterparts. In this dissertation, the m-trail design is explored, with a focus on the analysis of using m-trails with established lightpaths to achieve fault localization. This approach saves network resources by reducing the number of m-trails required for fault localization and therefore the number of wavelengths used in the network. A novel approach based on the Geographic Midpoint Technique, an adapted version of the Chinese Postman Problem (CPP) solution and an adapted version of the Traveling Salesman Problem (TSP) solution algorithms is introduced. This dissertation also proposes desirable network architectures and enabling technologies for delivering future millimeter-waveband (mm-WB) Radio-over-Fiber (RoF) systems for wireless services integrated in a Dense Wavelength Division Multiplexing (DWDM) network. For conceptual illustration, a DWDM RoF system with channel spacing of 12.5 GHz is considered. The mm-WB Radio Frequency (RF) signal is obtained at each Optical Network Unit (ONU) by using optical heterodyne photodetection between two optical carriers. The generated RF modulated signal has a frequency of 12.5 GHz. This RoF system is simple, cost-effective, resistant to laser phase noise and, in principle, also reduces maintenance needs. A review of related RoF network proposals and experiments is also included. A number of models for Passive Optical Networks (PON)/RoF-PON that combine both innovative and existing ideas are proposed, along with a number of solutions to the m-trail design problem for these models. The comparison between these models uses the expected survivability function, which showed that these models are suitable for implementation in new and existing PON/RoF-PON systems. The dissertation concludes with recommendations for possible directions of future research in this area.

  3. Analysis of Handoff Mechanisms in Mobile IP

    NASA Astrophysics Data System (ADS)

    Jayaraj, Maria Nadine Simonel; Issac, Biju; Haldar, Manas Kumar

    2011-06-01

    One of the most important challenges in mobile Internet Protocol (IP) is to provide service for a mobile node so that it maintains its connectivity to the network when it moves from one domain to another. IP is responsible for routing packets across networks. The first major version of IP is the Internet Protocol version 4 (IPv4). It is one of the dominant protocols relevant to wireless networks. Later, a newer version of IP, IPv6, was proposed. Mobile IPv6 was introduced mainly to support mobility. Mobility management enables the network to locate roaming nodes in order to deliver packets and maintain connections with them when they move into new domains. Handoff occurs when a mobile node moves from one network to another. It is a key factor in mobility because a mobile node can trigger several handoffs during a session. This paper briefly explains mobile IP and its handoff issues, along with the drawbacks of mobile IP.

  4. Ada Compiler Validation Summary Report: Certificate Number: 940305W1. 11335 TLD Systems, Ltd. TLD Comanche VAX/i960 Ada Compiler System, Version 4.1.1 VAX Cluster under VMS 5.5 = Tronix JIAWG Execution Vehicle (i960MX) under TLD Real Time Executive, Version 4.1.1

    DTIC Science & Technology

    1994-03-14

    Comanche VAX/i960 Ada Compiler System, Version 4.1.1. Host Computer System: Digital Local Area Network VAX Cluster executing on (2) MicroVAX 3100 Model 90... [OCR fragment of the macro-parameter table, e.g. $MAX_DIGITS 15, $MAX_INT 2147483647, $MAX_INT_PLUS_1 2147483648, $MIN_INT -2147483648, $NAME NO_SUCH_INTEGER_TYPE, $NAME_LIST ...] Nested generics are supported and generics defined in library units are permitted. It is not possible to perform a macro instantiation for a generic...

  5. NetMOD v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merchant, Bion J

    2015-12-22

    NetMOD is a tool to model the performance of global ground-based explosion monitoring systems. Version 2.0 of the software supports the simulation of seismic, hydroacoustic, and infrasonic detection capability. The tool provides a user interface to execute simulations based upon a hypothetical definition of the monitoring system configuration, geophysical properties of the Earth, and detection analysis criteria. NetMOD will be distributed with a project file defining the basic performance characteristics of the International Monitoring System (IMS), a network of sensors operated by the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). Network modeling is needed to be able to assess and explain the potential effect of changes to the IMS, to prioritize station deployment and repair, and to assess the overall CTBTO monitoring capability currently and in the future. Currently the CTBTO uses version 1.0 of NetMOD, provided to them in early 2014. NetMOD will provide a modern tool that will cover all the simulations currently available and allow for the development of additional simulation capabilities of the IMS in the future. NetMOD simulates the performance of monitoring networks by estimating the relative amplitudes of the signal and noise measured at each of the stations within the network based upon known geophysical principles. From these signal and noise estimates, a probability of detection may be determined for each of the stations. The detection probabilities at each of the stations may then be combined to produce an estimate of the detection probability for the entire monitoring network.
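
    As a hedged illustration of the final combination step described above (not NetMOD's actual code), the probability that at least k of n stations detect an event can be computed from per-station detection probabilities under an independence assumption:

```python
# Probability that at least k of n stations detect, assuming independent
# per-station detections. Illustrative only; not NetMOD internals.
from itertools import combinations
from math import prod

def network_detection_prob(p_stations, k):
    """P(at least k of the stations detect), assuming independent detections."""
    n = len(p_stations)
    total = 0.0
    for r in range(k, n + 1):
        for detected in combinations(range(n), r):
            total += prod(p_stations[i] if i in detected else 1 - p_stations[i]
                          for i in range(n))
    return total

print(network_detection_prob([0.9, 0.7, 0.6, 0.5], k=3))
```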

  6. The portals 4.0.1 network programming interface.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, Brian W.; Brightwell, Ronald Brian; Pedretti, Kevin

    2013-04-01

    This report presents a specification for the Portals 4.0 network programming interface. Portals 4.0 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4.0 is well suited to massively parallel processing and embedded systems. Portals 4.0 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4.0 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.

  7. Jamming Attack in Wireless Sensor Network: From Time to Space

    NASA Astrophysics Data System (ADS)

    Sun, Yanqiang; Wang, Xiaodong; Zhou, Xingming

    Classical jamming attack models in the time domain have been proposed, such as constant jammer, random jammer, and reactive jammer. In this letter, we consider a new problem: given k jammers, how does the attacker minimize the pair-wise connectivity among the nodes in a Wireless Sensor Network (WSN)? We call this problem k-Jammer Deployment Problem (k-JDP). To the best of our knowledge, this is the first attempt at considering the position-critical jamming attack against wireless sensor network. We mainly make three contributions. First, we prove that the decision version of k-JDP is NP-complete even in the ideal situation where the attacker has full knowledge of the topology information of sensor network. Second, we propose a mathematical formulation based on Integer Programming (IP) model which yields an optimal solution. Third, we present a heuristic algorithm HAJDP, and compare it with the IP model. Numerical results show that our heuristic algorithm is computationally efficient.
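
    The sketch below is a hedged toy version of the problem setting, using a greedy placement rather than the paper's IP formulation or HAJDP heuristic: each jammer silences nodes within a hop radius, and positions are chosen to minimize the remaining pairwise connectivity.

```python
# Greedy toy for k-jammer placement (not the paper's IP model or HAJDP):
# jamming a candidate node silences everything within a hop radius, and we pick
# positions that greedily minimize the surviving pairwise connectivity.
import networkx as nx

def pairwise_connectivity(G):
    return sum(len(c) * (len(c) - 1) // 2 for c in nx.connected_components(G))

def greedy_jammers(G, candidates, k, jam_radius=1):
    placed, jammed = [], set()
    for _ in range(k):
        def score(c):
            hit = jammed | set(nx.ego_graph(G, c, radius=jam_radius))
            return pairwise_connectivity(G.subgraph(set(G) - hit))
        best = min((c for c in candidates if c not in placed), key=score)
        placed.append(best)
        jammed |= set(nx.ego_graph(G, best, radius=jam_radius))
    return placed, pairwise_connectivity(G.subgraph(set(G) - jammed))

G = nx.random_geometric_graph(60, 0.2, seed=1)   # stand-in for a WSN topology
print(greedy_jammers(G, candidates=list(G), k=3))
```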

  8. Performance of an Abbreviated Version of the Lubben Social Network Scale among Three European Community-Dwelling Older Adult Populations

    ERIC Educational Resources Information Center

    Lubben, James; Blozik, Eva; Gillmann, Gerhard; Iliffe, Steve; von Renteln-Kruse, Wolfgang; Beck, John C.; Stuck, Andreas E.

    2006-01-01

    Purpose: There is a need for valid and reliable short scales that can be used to assess social networks and social supports and to screen for social isolation in older persons. Design and Methods: The present study is a cross-national and cross-cultural evaluation of the performance of an abbreviated version of the Lubben Social Network Scale…

  9. Causal biological network database: a comprehensive platform of causal biological network models focused on the pulmonary and vascular systems.

    PubMed

    Boué, Stéphanie; Talikka, Marja; Westra, Jurjen Willem; Hayes, William; Di Fabio, Anselmo; Park, Jennifer; Schlage, Walter K; Sewer, Alain; Fields, Brett; Ansari, Sam; Martin, Florian; Veljkovic, Emilija; Kenney, Renee; Peitsch, Manuel C; Hoeng, Julia

    2015-01-01

    With the wealth of publications and data available, powerful and transparent computational approaches are required to represent measured data and scientific knowledge in a computable and searchable format. We developed a set of biological network models, scripted in the Biological Expression Language, that reflect causal signaling pathways across a wide range of biological processes, including cell fate, cell stress, cell proliferation, inflammation, tissue repair and angiogenesis in the pulmonary and cardiovascular context. This comprehensive collection of networks is now freely available to the scientific community in a centralized web-based repository, the Causal Biological Network database, which is composed of over 120 manually curated and well annotated biological network models and can be accessed at http://causalbionet.com. The website accesses a MongoDB, which stores all versions of the networks as JSON objects and allows users to search for genes, proteins, biological processes, small molecules and keywords in the network descriptions to retrieve biological networks of interest. The content of the networks can be visualized and browsed. Nodes and edges can be filtered and all supporting evidence for the edges can be browsed and is linked to the original articles in PubMed. Moreover, networks may be downloaded for further visualization and evaluation. Database URL: http://causalbionet.com © The Author(s) 2015. Published by Oxford University Press.

  10. Hardware Prototyping of Neural Network based Fetal Electrocardiogram Extraction

    NASA Astrophysics Data System (ADS)

    Hasan, M. A.; Reaz, M. B. I.

    2012-01-01

    The aim of this paper is to model the algorithm for Fetal ECG (FECG) extraction from composite abdominal ECG (AECG) using VHDL (Very High Speed Integrated Circuit Hardware Description Language) for FPGA (Field Programmable Gate Array) implementation. Artificial Neural Network that provides efficient and effective ways of separating FECG signal from composite AECG signal has been designed. The proposed method gives an accuracy of 93.7% for R-peak detection in FHR monitoring. The designed VHDL model is synthesized and fitted into Altera's Stratix II EP2S15F484C3 using the Quartus II version 8.0 Web Edition for FPGA implementation.

  11. Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata

    PubMed Central

    Chen, Yangzhou; Guo, Yuqi; Wang, Ying

    2017-01-01

    In this paper, in order to describe complex network systems, we first propose a general modeling framework by combining a dynamic graph with hybrid automata and thus name it Dynamic Graph Hybrid Automata (DGHA). Then we apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. With a modeling procedure, we adopt a dual digraph of road network structure to describe the road topology, use linear hybrid automata to describe multi-modes of dynamic densities in road segments and transform the nonlinear expressions of the transmitted traffic flow between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology structures and sizes. Next we analyze mode types and number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices are computed by using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to traffic flow over Beijing's third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of Beijing's third ring road. Practical application to a large-scale road network will be implemented using a decentralized modeling approach and distributed observer design in future research. PMID:28353664
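
    The cell-based density dynamics that such a framework embeds can be sketched in a standard CTM form (a textbook version, not necessarily the paper's exact notation): for cell $i$ of length $l_i$ and time step $T$,

    $$\rho_i(k+1) = \rho_i(k) + \frac{T}{l_i}\big(q_i(k) - q_{i+1}(k)\big), \qquad q_i(k) = \min\!\big\{ v\,\rho_{i-1}(k),\; q_{\max},\; w\,(\rho_{\max} - \rho_i(k)) \big\},$$

    where $v$ is the free-flow speed, $w$ the congestion wave speed, $q_{\max}$ the capacity and $\rho_{\max}$ the jam density; the $\min$ operator switching between these regimes is what yields the piecewise affine, multi-mode dynamics collected in the PWALS model.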

  12. Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata.

    PubMed

    Chen, Yangzhou; Guo, Yuqi; Wang, Ying

    2017-03-29

    In this paper, in order to describe complex network systems, we first propose a general modeling framework by combining a dynamic graph with hybrid automata and thus name it Dynamic Graph Hybrid Automata (DGHA). Then we apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. With a modeling procedure, we adopt a dual digraph of road network structure to describe the road topology, use linear hybrid automata to describe multi-modes of dynamic densities in road segments and transform the nonlinear expressions of the transmitted traffic flow between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology structures and sizes. Next we analyze mode types and number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices are computed by using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to traffic flow over Beijing's third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of Beijing's third ring road. Practical application to a large-scale road network will be implemented using a decentralized modeling approach and distributed observer design in future research.

  13. Predicting Cost/Performance Trade-Offs for Whitney: A Commodity Computing Cluster

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey C.; Nitzberg, Bill; VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)

    1997-01-01

    Recent advances in low-end processor and network technology have made it possible to build a "supercomputer" out of commodity components. We develop simple models of the NAS Parallel Benchmarks version 2 (NPB 2) to explore the cost/performance trade-offs involved in building a balanced parallel computer supporting a scientific workload. We develop closed form expressions detailing the number and size of messages sent by each benchmark. Coupling these with measured single processor performance, network latency, and network bandwidth, our models predict benchmark performance to within 30%. A comparison based on total system cost reveals that current commodity technology (200 MHz Pentium Pros with 100baseT Ethernet) is well balanced for the NPBs up to a total system cost of around $1,000,000.
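
    A schematic version of such a closed-form model (illustrative notation, not the exact expressions in the paper) predicts the per-benchmark run time from counted messages and measured machine parameters:

    $$T \;\approx\; \frac{W}{P\, r_{\mathrm{proc}}} \;+\; n_{\mathrm{msg}}\,\lambda \;+\; \frac{V_{\mathrm{msg}}}{\beta},$$

    where $W$ is the total computational work, $P$ the processor count, $r_{\mathrm{proc}}$ the measured single-processor rate, $n_{\mathrm{msg}}$ and $V_{\mathrm{msg}}$ the number and total volume of messages, $\lambda$ the network latency and $\beta$ the bandwidth.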

  14. A novel wavelet sequence based on deep bidirectional LSTM network model for ECG signal classification.

    PubMed

    Yildirim, Özal

    2018-05-01

    Long short-term memory networks (LSTMs), which have recently emerged in sequential data analysis, are the most widely used type of recurrent neural networks (RNNs) architecture. Progress on the topic of deep learning includes successful adaptations of deep versions of these architectures. In this study, a new model for deep bidirectional LSTM network-based wavelet sequences called DBLSTM-WS was proposed for classifying electrocardiogram (ECG) signals. For this purpose, a new wavelet-based layer is implemented to generate ECG signal sequences. The ECG signals were decomposed into frequency sub-bands at different scales in this layer. These sub-bands are used as sequences for the input of LSTM networks. New network models that include unidirectional (ULSTM) and bidirectional (BLSTM) structures are designed for performance comparisons. Experimental studies have been performed for five different types of heartbeats obtained from the MIT-BIH arrhythmia database. These five types are Normal Sinus Rhythm (NSR), Ventricular Premature Contraction (VPC), Paced Beat (PB), Left Bundle Branch Block (LBBB), and Right Bundle Branch Block (RBBB). The results show that the DBLSTM-WS model gives a high recognition performance of 99.39%. It has been observed that the wavelet-based layer proposed in the study significantly improves the recognition performance of conventional networks. This proposed network structure is an important approach that can be applied to similar signal processing problems. Copyright © 2018 Elsevier Ltd. All rights reserved.
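
    A hedged sketch of the overall idea (wavelet sub-band sequences feeding a bidirectional LSTM classifier); the wavelet family, padding scheme and layer sizes are assumptions, not the DBLSTM-WS configuration.

```python
# Wavelet sub-band sequences fed to a bidirectional LSTM classifier over five
# heartbeat classes. Illustrative only; not the authors' DBLSTM-WS code.
import numpy as np
import pywt
import torch
import torch.nn as nn

def wavelet_sequence(beat, wavelet="db6", level=4):
    """Return a (level+1, max_len) array of zero-padded wavelet sub-band coefficients."""
    coeffs = pywt.wavedec(beat, wavelet, level=level)
    width = max(len(c) for c in coeffs)
    return np.stack([np.pad(c, (0, width - len(c))) for c in coeffs])

class BeatClassifier(nn.Module):
    def __init__(self, feat_dim, n_classes=5, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                 # x: (batch, seq_len = sub-bands, feat_dim)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])        # logits over heartbeat classes

beat = np.random.randn(360).astype(np.float32)      # one synthetic beat (e.g. 1 s at 360 Hz)
seq = torch.from_numpy(wavelet_sequence(beat)).float().unsqueeze(0)
logits = BeatClassifier(feat_dim=seq.shape[-1])(seq)
print(logits.shape)                                  # torch.Size([1, 5])
```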

  15. Metabolic networks are almost nonfractal: a comprehensive evaluation.

    PubMed

    Takemoto, Kazuhiro

    2014-08-01

    Network self-similarity, or fractality, is widely accepted as an important topological property of metabolic networks; however, recent studies cast doubt on the reality of self-similarity in these networks. Therefore, we perform a comprehensive evaluation of metabolic network fractality using a box-covering method with an earlier version and the latest version of metabolic networks, and demonstrate that the latest metabolic networks are almost self-dissimilar, while the earlier ones are fractal, as reported in a number of previous studies. This result may be because the networks were effectively randomized by an increase in network density due to database updates, suggesting that the previously observed network fractality was due to a lack of available data on metabolic reactions. This finding may not entirely discount the importance of self-similarity of metabolic networks. Rather, it highlights the need for a more suitable definition of network fractality and a more careful examination of the self-similarity of metabolic networks.
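
    For orientation, the sketch below implements a simple radius-based (burning-style) box covering and counts the boxes needed at several box sizes; it is a generic variant for illustration, not the specific algorithm used in the paper.

```python
# Simple radius-based box covering: grow boxes from seed nodes so that every
# member lies within l_box - 1 hops of the seed, and count the boxes needed.
import networkx as nx

def box_count(G, l_box):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    uncovered, boxes = set(G), 0
    while uncovered:
        seed = next(iter(uncovered))
        box = {u for u in uncovered if dist[seed].get(u, float("inf")) < l_box}
        uncovered -= box
        boxes += 1
    return boxes

G = nx.barabasi_albert_graph(200, 2, seed=1)      # stand-in network
for l_box in (2, 3, 4, 5):
    print(l_box, box_count(G, l_box))             # N_B(l_B) values used to probe fractal scaling
```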

  16. The Local Structure of Globalization. The Network Dynamics of Foreign Direct Investments in the International Electricity Industry

    NASA Astrophysics Data System (ADS)

    Koskinen, Johan; Lomi, Alessandro

    2013-05-01

    We study the evolution of the network of foreign direct investment (FDI) in the international electricity industry during the period 1994-2003. We assume that the ties in the network of investment relations between countries are created and deleted in continuous time, according to a conditional Gibbs distribution. This assumption allows us to take simultaneously into account the aggregate predictions of the well-established gravity model of international trade as well as local dependencies between network ties connecting the countries in our sample. According to the modified version of the gravity model that we specify, the probability of observing an investment tie between two countries depends on the mass of the economies involved, their physical distance, and the tendency of the network to self-organize into local configurations of network ties. While the limiting distribution of the data generating process is an exponential random graph model, we do not assume the system to be in equilibrium. We find evidence of the effects of the standard gravity model of international trade on evolution of the global FDI network. However, we also provide evidence of significant dyadic and extra-dyadic dependencies between investment ties that are typically ignored in available research. We show that local dependencies between national electricity industries are sufficient for explaining global properties of the network of foreign direct investments. We also show, however, that network dependencies vary significantly over time giving rise to a time-heterogeneous localized process of network evolution.
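
    Schematically, such a specification combines gravity-model covariates with local network statistics in an exponential-family form (illustrative notation, not the authors' exact parameterization):

    $$\Pr(x_{ij}=1 \mid x_{-ij}) \;\propto\; \exp\!\Big(\theta_1 \log M_i + \theta_2 \log M_j - \theta_3 \log D_{ij} + \sum_k \eta_k\, z_k(x)\Big),$$

    where $M_i$, $M_j$ are the economic masses, $D_{ij}$ the geographic distance, and the $z_k(x)$ count local configurations of ties (reciprocity, transitivity and the like), as in an exponential random graph model.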

  17. A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks

    NASA Astrophysics Data System (ADS)

    De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio

    2016-05-01

    This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
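
    A hedged sketch of a harmony-search loop of the kind described; the hydraulic evaluation through EPANET and the leakage/pressure penalty terms are replaced by a placeholder objective, and all parameter values (HMS, HMCR, PAR, bandwidth) are illustrative.

```python
# Generic harmony-search loop for valve settings. The EPANET-based objective is
# replaced by a placeholder; parameters and bounds are illustrative.
import random

def harmony_search(objective, n_valves, bounds, hms=20, hmcr=0.9, par=0.3,
                   bw=0.5, iters=2000, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(n_valves)] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for j in range(n_valves):
            if rng.random() < hmcr:                      # pick from harmony memory
                val = memory[rng.randrange(hms)][j]
                if rng.random() < par:                   # pitch adjustment
                    val += rng.uniform(-bw, bw)
            else:                                        # random improvisation
                val = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, val)))
        s = objective(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# Placeholder objective: pressure-excess surrogate to minimize (stand-in for an EPANET run).
target = [3.0, 5.0, 2.0, 4.0]
obj = lambda settings: sum((s - t) ** 2 for s, t in zip(settings, target))
print(harmony_search(obj, n_valves=4, bounds=(0.0, 10.0)))
```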

  18. The impact of a social network intervention on retention in Belgian therapeutic communities: a quasi-experimental study.

    PubMed

    Soyez, Veerle; De Leon, George; Broekaert, Eric; Rosseel, Yves

    2006-07-01

    Although numerous studies recognize the importance of social network support in engaging substance abusers into treatment, there is only limited knowledge of the impact of network involvement and support during treatment. The primary objective of this research was to enhance retention in Therapeutic Community treatment utilizing a social network intervention. The specific goals of this study were (1) to determine whether different pre-treatment factors predicted treatment retention in a Therapeutic Community; and (2) to determine whether participation of significant others in a social network intervention predicted treatment retention. Consecutive admissions to four long-term residential Therapeutic Communities were assessed at intake (n = 207); the study comprised a mainly male (84.9%) sample of polydrug (41.1%) and opiate (20.8%) abusers, of whom 64.4% had ever injected drugs. Assessment involved the European version of the Addiction Severity Index (EuropASI), the Circumstances, Motivation, Readiness scales (CMR), the Dutch version of the family environment scale (GKS/FES) and an in-depth interview on social network structure and perceived social support. Network members of different cohorts were assigned to a social network intervention, which consisted of three elements (a video, participation at an induction day and participation in a discussion session). Hierarchical regression analyses showed that client-perceived social support (F1,198 = 10.9, P = 0.001) and treatment motivation and readiness (F1,198 = 8.8; P = 0.003) explained a significant proportion of the variance in treatment retention (model fit: F7,197 = 4.4; P = 0.000). By including the variable 'significant others' participation in network intervention' (network involvement) in the model, the fit clearly improved (F1,197 = 6.2; P = 0.013). At the same time, the impact of perceived social support decreased (F1,197 = 2.9; P = 0.091). Participation in the social network intervention was associated with improved treatment retention controlling for other client characteristics. This suggests that the intervention may be of benefit in the treatment of addicted individuals.

  19. Percolation mechanism drives actin gels to the critically connected state

    NASA Astrophysics Data System (ADS)

    Lee, Chiu Fan; Pruessner, Gunnar

    2016-05-01

    Cell motility and tissue morphogenesis depend crucially on the dynamic remodeling of actomyosin networks. An actomyosin network consists of an actin polymer network connected by cross-linker proteins and motor protein myosins that generate internal stresses on the network. A recent discovery shows that for a range of experimental parameters, actomyosin networks contract to clusters with a power-law size distribution [J. Alvarado, Nat. Phys. 9, 591 (2013), 10.1038/nphys2715]. Here, we argue that actomyosin networks can exhibit a robust critical signature without fine-tuning because the dynamics of the system can be mapped onto a modified version of percolation with trapping (PT), which is known to show critical behavior belonging to the static percolation universality class without the need for fine-tuning of a control parameter. We further employ our PT model to generate experimentally testable predictions.

  20. A hybrid neural network model for noisy data regression.

    PubMed

    Lee, Eric W M; Lim, Chee Peng; Yuen, Richard K K; Lo, S M

    2004-04-01

    A hybrid neural network model, based on the fusion of fuzzy adaptive resonance theory (FA ART) and the general regression neural network (GRNN), is proposed in this paper. Both FA and the GRNN are incremental learning systems and are very fast in network training. The proposed hybrid model, denoted as GRNNFA, is able to retain these advantages and, at the same time, to reduce the computational requirements in calculating and storing information of the kernels. A clustering version of the GRNN is designed with data compression by FA for noise removal. An adaptive gradient-based kernel width optimization algorithm has also been devised. Convergence of the gradient descent algorithm can be accelerated by the geometric incremental growth of the updating factor. A series of experiments with four benchmark datasets have been conducted to assess and compare effectiveness of GRNNFA with other approaches. The GRNNFA model is also employed in a novel application task for predicting the evacuation time of patrons at typical karaoke centers in Hong Kong in the event of fire. The results positively demonstrate the applicability of GRNNFA in noisy data regression problems.
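
    As a hedged illustration of the GRNN half of such a hybrid (the fuzzy-ART compression and the adaptive kernel-width optimization are omitted), a general regression neural network reduces to Gaussian-kernel weighted regression over stored prototypes:

```python
# Nadaraya-Watson style GRNN prediction over stored (or compressed) prototypes.
# Illustrative only; the FA clustering stage of GRNNFA is not shown.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General regression neural network prediction with Gaussian kernels."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, size=200)     # noisy regression data
Xq = np.linspace(-3, 3, 5)[:, None]
print(grnn_predict(X, y, Xq, sigma=0.4))
```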

  1. A network model of successive partitioning-limited solute diffusion through the stratum corneum.

    PubMed

    Schumm, Phillip; Scoglio, Caterina M; van der Merwe, Deon

    2010-02-07

    As the most exposed point of contact with the external environment, the skin is an important barrier to many chemical exposures, including medications, potentially toxic chemicals and cosmetics. Traditional dermal absorption models treat the stratum corneum lipids as a homogenous medium through which solutes diffuse according to Fick's first law of diffusion. This approach does not explain non-linear absorption and irregular distribution patterns within the stratum corneum lipids as observed in experimental data. A network model, based on successive partitioning-limited solute diffusion through the stratum corneum, where the lipid structure is represented by a large, sparse, and regular network where nodes have variable characteristics, offers an alternative, efficient, and flexible approach to dermal absorption modeling that simulates non-linear absorption data patterns. Four model versions are presented: two linear models, which have unlimited node capacities, and two non-linear models, which have limited node capacities. The non-linear model outputs produce absorption to dose relationships that can be best characterized quantitatively by using power equations, similar to the equations used to describe non-linear experimental data.

  2. Attack tolerance of correlated time-varying social networks with well-defined communities

    NASA Astrophysics Data System (ADS)

    Sur, Souvik; Ganguly, Niloy; Mukherjee, Animesh

    2015-02-01

    In this paper, we investigate the efficiency and the robustness of information transmission in real-world social networks, modeled as time-varying instances, under targeted attack over shorter time spans. We observe that these quantities are markedly higher than those of the randomized versions of the considered networks. An important factor that drives this efficiency or robustness is the presence of short-time correlations across the network instances, which we quantify by a novel metric, the edge emergence factor, denoted as ξ. We find that standard targeted attacks are not effective in collapsing this network structure. Remarkably, if the hourly community structures of the temporal network instances are attacked with the largest community attacked first, the second largest next and so on, the network soon collapses. This behavior, we show, is an outcome of the fact that the edge emergence factor bears a strong positive correlation with the size-ordered community structures.

  3. Parameterization of dust emissions in the global atmospheric chemistry-climate model EMAC: impact of nudging and soil properties

    NASA Astrophysics Data System (ADS)

    Astitha, M.; Lelieveld, J.; Abdel Kader, M.; Pozzer, A.; de Meij, A.

    2012-11-01

    Airborne desert dust influences radiative transfer, atmospheric chemistry and dynamics, as well as nutrient transport and deposition. It directly and indirectly affects climate on regional and global scales. Two versions of a parameterization scheme to compute desert dust emissions are incorporated into the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). One uses a globally uniform soil particle size distribution, whereas the other explicitly accounts for different soil textures worldwide. We have tested these two versions and investigated the sensitivity to input parameters, using remote sensing data from the Aerosol Robotic Network (AERONET) and dust concentrations and deposition measurements from the AeroCom dust benchmark database (and others). The two versions are shown to produce similar atmospheric dust loads in the N-African region, while they deviate in the Asian, Middle Eastern and S-American regions. The dust outflow from Africa over the Atlantic Ocean is accurately simulated by both schemes, in magnitude, location and seasonality. Approximately 70% of the modelled annual deposition data and 70-75% of the modelled monthly aerosol optical depth (AOD) in the Atlantic Ocean stations lay in the range 0.5 to 2 times the observations for all simulations. The two versions have similar performance, even though the total annual source differs by ~50%, which underscores the importance of transport and deposition processes (being the same for both versions). Even though the explicit soil particle size distribution is considered more realistic, the simpler scheme appears to perform better in several locations. This paper discusses the differences between the two versions of the dust emission scheme, focusing on their limitations and strengths in describing the global dust cycle and suggests possible future improvements.

  4. Using simple agent-based modeling to inform and enhance neighborhood walkability

    PubMed Central

    2013-01-01

    Background Pedestrian-friendly neighborhoods with proximal destinations and services encourage walking and decrease car dependence, thereby contributing to more active and healthier communities. Proximity to key destinations and services is an important aspect of the urban design decision making process, particularly in areas adopting a transit-oriented development (TOD) approach to urban planning, whereby densification occurs within walking distance of transit nodes. Modeling destination access within neighborhoods has been limited to circular catchment buffers or more sophisticated network-buffers generated using geoprocessing routines within geographical information systems (GIS). Both circular and network-buffer catchment methods are problematic. Circular catchment models do not account for street networks, thus do not allow exploratory ‘what-if’ scenario modeling; and network-buffering functionality typically exists within proprietary GIS software, which can be costly and requires a high level of expertise to operate. Methods This study sought to overcome these limitations by developing an open-source simple agent-based walkable catchment tool that can be used by researchers, urban designers, planners, and policy makers to test scenarios for improving neighborhood walkable catchments. A simplified version of an agent-based model was ported to a vector-based open source GIS web tool using data derived from the Australian Urban Research Infrastructure Network (AURIN). The tool was developed and tested with end-user stakeholder working group input. Results The resulting model has proven to be effective and flexible, allowing stakeholders to assess and optimize the walkability of neighborhood catchments around actual or potential nodes of interest (e.g., schools, public transport stops). Users can derive a range of metrics to compare different scenarios modeled. These include: catchment area versus circular buffer ratios; mean number of streets crossed; and modeling of different walking speeds and wait time at intersections. Conclusions The tool has the capacity to influence planning and public health advocacy and practice, and by using open-access source software, it is available for use locally and internationally. There is also scope to extend this version of the tool from a simple to a complex model, which includes agents (i.e., simulated pedestrians) ‘learning’ and incorporating other environmental attributes that enhance walkability (e.g., residential density, mixed land use, traffic volume). PMID:24330721

  5. Quantitative prediction of cellular metabolism with constraint-based models: the COBRA Toolbox v2.0

    PubMed Central

    Schellenberger, Jan; Que, Richard; Fleming, Ronan M. T.; Thiele, Ines; Orth, Jeffrey D.; Feist, Adam M.; Zielinski, Daniel C.; Bordbar, Aarash; Lewis, Nathan E.; Rahmanian, Sorena; Kang, Joseph; Hyduke, Daniel R.; Palsson, Bernhard Ø.

    2012-01-01

    Over the past decade, a growing community of researchers has emerged around the use of COnstraint-Based Reconstruction and Analysis (COBRA) methods to simulate, analyze and predict a variety of metabolic phenotypes using genome-scale models. The COBRA Toolbox, a MATLAB package for implementing COBRA methods, was presented earlier. Here we present a significant update of this in silico ToolBox. Version 2.0 of the COBRA Toolbox expands the scope of computations by including in silico analysis methods developed since its original release. New functions include: (1) network gap filling, (2) 13C analysis, (3) metabolic engineering, (4) omics-guided analysis, and (5) visualization. As with the first version, the COBRA Toolbox reads and writes Systems Biology Markup Language formatted models. In version 2.0, we improved performance, usability, and the level of documentation. A suite of test scripts can now be used to learn the core functionality of the Toolbox and validate results. This Toolbox lowers the barrier of entry to use powerful COBRA methods. PMID:21886097
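
    The core computation that such constraint-based methods automate is a linear program: maximize an objective flux subject to steady-state mass balance S v = 0 and flux bounds. The sketch below is not the COBRA Toolbox (a MATLAB package); it is a generic flux-balance calculation on a made-up three-reaction toy network using scipy, included only to make the underlying optimization concrete.

      # Generic flux-balance analysis sketch (not the COBRA Toolbox itself).
      # maximize c.v  subject to  S v = 0,  lb <= v <= ub
      import numpy as np
      from scipy.optimize import linprog

      # Hypothetical toy network: uptake -> A, A -> B, B -> biomass (one row per metabolite).
      S = np.array([[ 1, -1,  0],     # metabolite A: produced by r0, consumed by r1
                    [ 0,  1, -1]])    # metabolite B: produced by r1, consumed by r2
      lb = [0.0, 0.0, 0.0]
      ub = [10.0, 8.0, 100.0]         # uptake capped at 10, internal step capped at 8
      c  = np.array([0.0, 0.0, 1.0])  # objective: maximize flux through the "biomass" reaction r2

      res = linprog(-c,               # linprog minimizes, so negate the objective
                    A_eq=S, b_eq=np.zeros(S.shape[0]),
                    bounds=list(zip(lb, ub)), method="highs")
      print("optimal biomass flux:", -res.fun)   # -> 8.0, limited by the internal bound
      print("flux distribution:", res.x)

    The additional functions listed above (gap filling, 13C analysis, metabolic engineering) layer further constraints or objectives on top of this same stoichiometric core.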

  6. A General Map of Iron Metabolism and Tissue-specific Subnetworks

    PubMed Central

    Hower, Valerie; Mendes, Pedro; Torti, Frank M.; Laubenbacher, Reinhard; Akman, Steven; Shulaev, Vladmir; Torti, Suzy V.

    2009-01-01

    Iron is required for survival of mammalian cells. Recently, understanding of iron metabolism and trafficking has increased dramatically, revealing a complex, interacting network largely unknown just a few years ago. This provides an excellent model for systems biology development and analysis. The first step in such an analysis is the construction of a structural network of iron metabolism, which we present here. This network was created using CellDesigner version 3.5.2 and includes reactions occurring in mammalian cells of numerous tissue types. The iron metabolic network contains 151 chemical species and 107 reactions and transport steps. Starting from this general model, we construct iron networks for specific tissues and cells that are fundamental to maintaining body iron homeostasis. We include subnetworks for cells of the intestine and liver, tissues important in iron uptake and storage, respectively; as well as the reticulocyte and macrophage, key cells in iron utilization and recycling. The addition of kinetic information to our structural network will permit the simulation of iron metabolism in different tissues as well as in health and disease. PMID:19381358

  7. Fractal and multifractal analyses of bipartite networks

    NASA Astrophysics Data System (ADS)

    Liu, Jin-Long; Wang, Jian; Yu, Zu-Guo; Xie, Xian-Hua

    2017-03-01

    Bipartite networks have attracted considerable interest in various fields. Fractality and multifractality of unipartite (classical) networks have been studied in recent years, but there has been no work studying these properties of bipartite networks. In this paper, we try to unfold the self-similarity structure of bipartite networks by performing fractal and multifractal analyses for a variety of real-world bipartite network data sets and models. First, we find fractality in some bipartite networks, including the CiteULike, Netflix, MovieLens (ml-20m), Delicious data sets and the (u, v)-flower model. Meanwhile, we observe shifted power-law or exponential behavior in several other networks. We then focus on the multifractal properties of bipartite networks. Our results indicate that multifractality exists in those bipartite networks possessing fractality. To capture the inherent attributes of bipartite networks with two different types of nodes, we assign different weights to the nodes of each class and show the existence of multifractality in these node-weighted bipartite networks. In addition, for the data sets with ratings, we modify two existing algorithms for fractal and multifractal analyses of edge-weighted unipartite networks to study the self-similarity of the corresponding edge-weighted bipartite networks. The results show that our modified algorithms are feasible and can effectively uncover the self-similarity structure of these edge-weighted bipartite networks and their corresponding node-weighted versions.

  8. Fractal and multifractal analyses of bipartite networks.

    PubMed

    Liu, Jin-Long; Wang, Jian; Yu, Zu-Guo; Xie, Xian-Hua

    2017-03-31

    Bipartite networks have attracted considerable interest in various fields. Fractality and multifractality of unipartite (classical) networks have been studied in recent years, but there has been no work studying these properties of bipartite networks. In this paper, we try to unfold the self-similarity structure of bipartite networks by performing fractal and multifractal analyses for a variety of real-world bipartite network data sets and models. First, we find fractality in some bipartite networks, including the CiteULike, Netflix, MovieLens (ml-20m), Delicious data sets and the (u, v)-flower model. Meanwhile, we observe shifted power-law or exponential behavior in several other networks. We then focus on the multifractal properties of bipartite networks. Our results indicate that multifractality exists in those bipartite networks possessing fractality. To capture the inherent attributes of bipartite networks with two different types of nodes, we assign different weights to the nodes of each class and show the existence of multifractality in these node-weighted bipartite networks. In addition, for the data sets with ratings, we modify two existing algorithms for fractal and multifractal analyses of edge-weighted unipartite networks to study the self-similarity of the corresponding edge-weighted bipartite networks. The results show that our modified algorithms are feasible and can effectively uncover the self-similarity structure of these edge-weighted bipartite networks and their corresponding node-weighted versions.

  9. Fractal and multifractal analyses of bipartite networks

    PubMed Central

    Liu, Jin-Long; Wang, Jian; Yu, Zu-Guo; Xie, Xian-Hua

    2017-01-01

    Bipartite networks have attracted considerable interest in various fields. Fractality and multifractality of unipartite (classical) networks have been studied in recent years, but there has been no work studying these properties of bipartite networks. In this paper, we try to unfold the self-similarity structure of bipartite networks by performing fractal and multifractal analyses for a variety of real-world bipartite network data sets and models. First, we find fractality in some bipartite networks, including the CiteULike, Netflix, MovieLens (ml-20m), Delicious data sets and the (u, v)-flower model. Meanwhile, we observe shifted power-law or exponential behavior in several other networks. We then focus on the multifractal properties of bipartite networks. Our results indicate that multifractality exists in those bipartite networks possessing fractality. To capture the inherent attributes of bipartite networks with two different types of nodes, we assign different weights to the nodes of each class and show the existence of multifractality in these node-weighted bipartite networks. In addition, for the data sets with ratings, we modify two existing algorithms for fractal and multifractal analyses of edge-weighted unipartite networks to study the self-similarity of the corresponding edge-weighted bipartite networks. The results show that our modified algorithms are feasible and can effectively uncover the self-similarity structure of these edge-weighted bipartite networks and their corresponding node-weighted versions. PMID:28361962
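
    For readers unfamiliar with the method, fractality of a network is usually assessed by box-covering: count the minimum number of boxes of diameter smaller than l_B needed to cover the graph and check whether N_B(l_B) decays as a power law in l_B. The sketch below implements the standard greedy-colouring approximation for an ordinary unweighted, unipartite graph; it is not the authors' modified algorithm for bipartite, node-weighted or edge-weighted networks, and the test graph is arbitrary.

      # Greedy box-covering sketch for estimating a network's fractal (box) dimension.
      # Generic unipartite algorithm, not the authors' modified bipartite/weighted versions.
      import networkx as nx

      def number_of_boxes(graph, box_size):
          """Approximate minimum number of boxes of diameter < box_size covering the graph."""
          dist = dict(nx.all_pairs_shortest_path_length(graph))
          # Auxiliary graph: connect nodes whose distance is >= box_size;
          # a proper colouring then groups nodes that are mutually closer than box_size.
          aux = nx.Graph()
          aux.add_nodes_from(graph)
          nodes = list(graph)
          for i, u in enumerate(nodes):
              for v in nodes[i + 1:]:
                  if dist[u].get(v, box_size) >= box_size:
                      aux.add_edge(u, v)
          colouring = nx.coloring.greedy_color(aux)
          return len(set(colouring.values()))

      # Toy example: count boxes for increasing box sizes on a small test graph.
      G = nx.barabasi_albert_graph(200, 2, seed=1)
      for l_b in (2, 3, 4, 5):
          print(l_b, number_of_boxes(G, l_b))
      # Fractality would show up as log N_B vs. log l_B falling on a straight line.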

  10. Streamflow simulation for continental-scale river basins

    NASA Astrophysics Data System (ADS)

    Nijssen, Bart; Lettenmaier, Dennis P.; Liang, Xu; Wetzel, Suzanne W.; Wood, Eric F.

    1997-04-01

    A grid network version of the two-layer variable infiltration capacity (VIC-2L) macroscale hydrologic model is described. VIC-2L is a hydrologically based soil-vegetation-atmosphere transfer scheme designed to represent the land surface in numerical weather prediction and climate models. The grid network scheme allows streamflow to be predicted for large continental rivers. Off-line (observed and estimated surface meteorological and radiative forcings) applications of the model to the Columbia River (1° latitude-longitude spatial resolution) and Delaware River (0.5° resolution) are described. The model performed quite well in both applications, reproducing the seasonal hydrograph and annual flow volumes to within a few percent. Difficulties in reproducing observed streamflow in the arid portion of the Snake River basin are attributed to groundwater-surface water interactions, which are not modeled by VIC-2L.

  11. Emergent latent symbol systems in recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Monner, Derek; Reggia, James A.

    2012-12-01

    Fodor and Pylyshyn [(1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1-2), 3-71] famously argued that neural networks cannot behave systematically short of implementing a combinatorial symbol system. A recent response from Frank et al. [(2009). Connectionist semantic systematicity. Cognition, 110(3), 358-379] claimed to have trained a neural network to behave systematically without implementing a symbol system and without any in-built predisposition towards combinatorial representations. We believe systems like theirs may in fact implement a symbol system on a deeper and more interesting level: one where the symbols are latent - not visible at the level of network structure. In order to illustrate this possibility, we demonstrate our own recurrent neural network that learns to understand sentence-level language in terms of a scene. We demonstrate our model's learned understanding by testing it on novel sentences and scenes. By paring down our model into an architecturally minimal version, we demonstrate how it supports combinatorial computation over distributed representations by using the associative memory operations of Vector Symbolic Architectures. Knowledge of the model's memory scheme gives us tools to explain its errors and construct superior future models. We show how the model designs and manipulates a latent symbol system in which the combinatorial symbols are patterns of activation distributed across the layers of a neural network, instantiating a hybrid of classical symbolic and connectionist representations that combines advantages of both.
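
    The Vector Symbolic Architecture operations referred to above can be made concrete with a few lines of numpy: in a holographic reduced representation, a role vector is bound to a filler by circular convolution, bindings are superposed by addition, and a filler is recovered by binding the trace with the approximate inverse of the role. This is a generic VSA sketch, not the authors' recurrent network; the dimensionality and the role/filler vocabulary are invented for illustration.

      # Minimal Vector Symbolic Architecture (holographic reduced representation) sketch.
      import numpy as np

      rng = np.random.default_rng(0)
      D = 1024                                   # dimensionality of the distributed vectors

      def vec():                                 # random symbol vector with expected unit norm
          return rng.normal(0, 1.0 / np.sqrt(D), D)

      def bind(a, b):                            # circular convolution
          return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

      def unbind(trace, a):                      # bind with the approximate inverse of `a`
          a_inv = np.concatenate(([a[0]], a[:0:-1]))
          return bind(trace, a_inv)

      AGENT, PATIENT = vec(), vec()              # role vectors
      DOG, CAT = vec(), vec()                    # filler vectors

      sentence = bind(AGENT, DOG) + bind(PATIENT, CAT)   # superposed bindings = one trace

      # Decoding: the noisy estimate of the agent filler is closest to DOG.
      estimate = unbind(sentence, AGENT)
      for name, filler in [("DOG", DOG), ("CAT", CAT)]:
          print(name, np.dot(estimate, filler))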

  12. Adapting the Facilitating Conditions Questionnaire (FCQ) for Bilingual Filipino Adolescents: Validating English and Filipino Versions.

    PubMed

    Ganotice, Fraide A; Bernardo, Allan B I; King, Ronnel B

    2013-06-01

    This study examined the applicability of the English and Filipino versions of the Facilitating Conditions Questionnaire (FCQ) among Filipino high school students. The FCQ measures the external forces in students' social environments that can influence their motivation for school. It is composed of 11 factors: university intention, school valuing, parent support, teacher support, peer help, leave school, pride from others, negative parent influence, affect to school, negative peer influence, and positive peer influence. It was translated into conversational Filipino. Seven hundred sixty-five high school students answered one of the two language versions. Both within-network and between-network approaches to construct validation were used. Confirmatory factor analyses (CFA) of the two versions showed good fit. Results of the multigroup CFA indicated that there was invariance in terms of factor loadings for the two versions. Results of the between-network test also showed that the factors in the FCQ correlated systematically with theoretically relevant constructs. Taken together, this study supports the applicability of the FCQ for use with Filipino bilingual adolescents.

  13. Parallelization of a Fully-Distributed Hydrologic Model using Sub-basin Partitioning

    NASA Astrophysics Data System (ADS)

    Vivoni, E. R.; Mniszewski, S.; Fasel, P.; Springer, E.; Ivanov, V. Y.; Bras, R. L.

    2005-12-01

    A primary obstacle towards advances in watershed simulations has been the limited computational capacity available to most models. The growing trend of model complexity, data availability and physical representation has not been matched by adequate developments in computational efficiency. This situation has created a serious bottleneck which limits existing distributed hydrologic models to small domains and short simulations. In this study, we present novel developments in the parallelization of a fully-distributed hydrologic model. Our work is based on the TIN-based Real-time Integrated Basin Simulator (tRIBS), which provides continuous hydrologic simulation using a multiple resolution representation of complex terrain based on a triangulated irregular network (TIN). While the use of TINs reduces computational demand, the sequential version of the model is currently limited over large basins (>10,000 km2) and long simulation periods (>1 year). To address this, a parallel MPI-based version of the tRIBS model has been implemented and tested using high performance computing resources at Los Alamos National Laboratory. Our approach utilizes domain decomposition based on sub-basin partitioning of the watershed. A stream reach graph based on the channel network structure is used to guide the sub-basin partitioning. Individual sub-basins or sub-graphs of sub-basins are assigned to separate processors to carry out internal hydrologic computations (e.g. rainfall-runoff transformation). Routed streamflow from each sub-basin forms the major hydrologic data exchange along the stream reach graph. Individual sub-basins also share subsurface hydrologic fluxes across adjacent boundaries. We demonstrate how the sub-basin partitioning provides computational feasibility and efficiency for a set of test watersheds in northeastern Oklahoma. We compare the performance of the sequential and parallelized versions to highlight the efficiency gained as the number of processors increases. We also discuss how the coupled use of TINs and parallel processing can lead to feasible long-term simulations in regional watersheds while preserving basin properties at high-resolution.

  14. Code System to Calculate Tornado-Induced Flow Material Transport.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ANDRAE, R. W.

    1999-11-18

    Version: 00 TORAC models tornado-induced flows, pressures, and material transport within structures. Its use is directed toward nuclear fuel cycle facilities and their primary release pathway, the ventilation system. However, it is applicable to other structures and can model other airflow pathways within a facility. In a nuclear facility, this network system could include process cells, canyons, laboratory offices, corridors, and offgas systems. TORAC predicts flow through a network system that also includes ventilation system components such as filters, dampers, ducts, and blowers. These ventilation system components are connected to the rooms and corridors of the facility to form a complete network for moving air through the structure and, perhaps, maintaining pressure levels in certain areas. The material transport capability in TORAC is very basic and includes convection, depletion, entrainment, and filtration of material.

  15. The Everglades Depth Estimation Network (EDEN) surface-water model, version 2

    USGS Publications Warehouse

    Telis, Pamela A.; Xie, Zhixiao; Liu, Zhongwei; Li, Yingru; Conrads, Paul

    2015-01-01

    Three applications of the EDEN-modeled water surfaces and other EDEN datasets are presented in the report to show how scientists and resource managers are using EDEN datasets to analyze biological and ecological responses to hydrologic changes in the Everglades. The biological responses of two important Everglades species, alligators and wading birds, to changes in hydrology are described. The effects of hydrology on fire dynamics in the Everglades are also discussed.

  16. Comparing Two Versions of Professional Development for Teachers Using Formative Assessment in Networked Mathematics Classrooms

    ERIC Educational Resources Information Center

    Yin, Yue; Olson, Judith; Olson, Melfried; Solvin, Hannah; Brandon, Paul R.

    2015-01-01

    This study compared two versions of professional development (PD) designed for teachers using formative assessment (FA) in mathematics classrooms that were networked with Texas Instruments Navigator (NAV) technology. Thirty-two middle school mathematics teachers were randomly assigned to one of the two groups: FA-then-NAV group and FA-and-NAV…

  17. Validating the Chinese Version of the Inventory of School Motivation

    ERIC Educational Resources Information Center

    King, Ronnel B.; Watkins, David A.

    2013-01-01

    The aim of this study is to assess the cross-cultural applicability of the Chinese version of the Inventory of School Motivation (ISM; McInerney & Sinclair, 1991) in the Hong Kong context using both within-network and between-network approaches to construct validation. The ISM measures four types of achievement goals: mastery, performance,…

  18. PyPanda: a Python package for gene regulatory network reconstruction

    PubMed Central

    van IJzendoorn, David G.P.; Glass, Kimberly; Quackenbush, John; Kuijjer, Marieke L.

    2016-01-01

    Summary: PANDA (Passing Attributes between Networks for Data Assimilation) is a gene regulatory network inference method that uses message-passing to integrate multiple sources of ‘omics data. PANDA was originally coded in C++. In this application note we describe PyPanda, the Python version of PANDA. PyPanda runs considerably faster than the C++ version and includes additional features for network analysis. Availability and implementation: The open source PyPanda Python package is freely available at http://github.com/davidvi/pypanda. Contact: mkuijjer@jimmy.harvard.edu or d.g.p.van_ijzendoorn@lumc.nl PMID:27402905

  19. PyPanda: a Python package for gene regulatory network reconstruction.

    PubMed

    van IJzendoorn, David G P; Glass, Kimberly; Quackenbush, John; Kuijjer, Marieke L

    2016-11-01

    PANDA (Passing Attributes between Networks for Data Assimilation) is a gene regulatory network inference method that uses message-passing to integrate multiple sources of 'omics data. PANDA was originally coded in C++. In this application note we describe PyPanda, the Python version of PANDA. PyPanda runs considerably faster than the C++ version and includes additional features for network analysis. The open source PyPanda Python package is freely available at http://github.com/davidvi/pypanda. Contact: mkuijjer@jimmy.harvard.edu or d.g.p.van_ijzendoorn@lumc.nl. © The Author 2016. Published by Oxford University Press.

  20. Seller's reputation and capacity on the illicit drug markets: 11-month study on the Finnish version of the Silk Road.

    PubMed

    Nurmi, Juha; Kaskela, Teemu; Perälä, Jussi; Oksanen, Atte

    2017-09-01

    This 11-month study analyzed illicit drug sales on the anonymous Tor network, with a focus on investigating whether a seller's reputation and capacity increased daily drug sales. The data were gathered from Silkkitie, the Finnish version of the Silk Road, by web crawling the site on a daily basis from November 2014 to September 2015. The data include information on sellers (n=260) and products (n=3823). The measurements include the sellers' reputation, the sale amounts (in euros), the number of available products and the types of drugs sold. The sellers' capacity was measured using their full sales potential (in euros). Fixed-effects regression models were used to estimate the effects of sellers' reputation and capacity; these models were adjusted for the types of drugs sold. Overall, illicit drug sales totalled over 2 million euros during the study, but many products were not sold at all, and sellers were active for only a short time on average (mean = 62.8 days). Among the products sold, stimulants were most widely purchased, followed by cannabis, MDMA, and psychedelics. A seller's reputation and capacity were both associated with drug sales. The Tor network has enabled a transformation in drug sales. Due to the network's anonymity, the seller's reputation and capacity both have an impact on sales. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Developing a Procedure for Segmenting Meshed Heat Networks of Heat Supply Systems without Outflows

    NASA Astrophysics Data System (ADS)

    Tokarev, V. V.

    2018-06-01

    The heat supply systems of cities have, as a rule, a ring structure with the possibility of redistributing the flows. Although a ring structure is more reliable than a radial one, the operators of heat networks prefer to run them in normal modes according to a scheme without overflows of the heat carrier between the heat mains. With such a scheme, it is easier to adjust the networks and to detect and locate faults in them. The article proposes a formulation of the heat network segmenting problem. The problem is posed as an optimization, with the heat supply system's excess hydraulic power used as the optimization criterion. The heat supply system computer model has a hierarchically interconnected multilevel structure. Since iterative calculations are only carried out for the level of trunk heat networks, decomposing the entire system into levels reduces the dimensionality of the solved subproblems by an order of magnitude. Solving the problem by fully enumerating possible segmentation versions is not feasible for systems of realistic size. The article therefore suggests a procedure for finding a rational segmentation of heat supply networks by limiting the search to versions that divide the system into segments near the flow convergence nodes, with subsequent refinement of the solution. The refinement is performed in two stages according to the total excess hydraulic power criterion. At the first stage, the loads are redistributed among the sources. After that, the heat networks are divided into independent fragments, and the possibility of increasing the excess hydraulic power in the obtained fragments is checked by shifting the division places inside a fragment. The proposed procedure has been tested on a municipal heat supply system involving six heat mains fed from a common source, 24 loops within the plane of the feeding mains, and more than 5000 consumers. Applying the proposed segmentation procedure made it possible to find a version requiring 3% less hydraulic power in the heat supply system than the one found using the simultaneous segmentation method.

  2. Extending the Stabilized Supralinear Network model for binocular image processing.

    PubMed

    Selby, Ben; Tripp, Bryan

    2017-06-01

    The visual cortex is both extensive and intricate. Computational models are needed to clarify the relationships between its local mechanisms and high-level functions. The Stabilized Supralinear Network (SSN) model was recently shown to account for many receptive field phenomena in V1, and also to predict subtle receptive field properties that were subsequently confirmed in vivo. In this study, we performed a preliminary exploration of whether the SSN is suitable for incorporation into large, functional models of the visual cortex, considering both its extensibility and computational tractability. First, whereas the SSN receives abstract orientation signals as input, we extended it to receive images (through a linear-nonlinear stage), and found that the extended version behaved similarly. Secondly, whereas the SSN had previously been studied in a monocular context, we found that it could also reproduce data on interocular transfer of surround suppression. Finally, we reformulated the SSN as a convolutional neural network, and found that it scaled well on parallel hardware. These results provide additional support for the plausibility of the SSN as a model of lateral interactions in V1, and suggest that the SSN is well suited as a component of complex vision models. Future work will use the SSN to explore relationships between local network interactions and sophisticated vision processes in large networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
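
    For orientation, SSN-style models describe firing rates with a supralinear, rectified power-law transfer function embedded in recurrent excitation and inhibition, which is what produces their stabilizing, normalization-like behaviour. The sketch below integrates a minimal two-unit (excitatory/inhibitory) version of dynamics of the general form tau * dr/dt = -r + k * [W r + h]_+^n; the parameter values are illustrative and are not taken from the paper.

      # Minimal sketch of Stabilized Supralinear Network (SSN)-style rate dynamics:
      #   tau * dr/dt = -r + k * relu(W @ r + h) ** n
      # Two units (excitatory, inhibitory); parameter values are illustrative only.
      import numpy as np

      k, n = 0.04, 2.0
      tau = np.array([0.02, 0.01])                 # time constants (s) for E and I units
      W = np.array([[ 1.25, -0.65],                # E<-E, E<-I
                    [ 1.20, -0.50]])               # I<-E, I<-I

      def simulate(h, T=0.5, dt=1e-4):
          r = np.zeros(2)
          for _ in range(int(T / dt)):
              drive = np.maximum(W @ r + h, 0.0)
              r = r + dt / tau * (-r + k * drive ** n)
          return r

      # Supralinear but stabilized: responses grow less than linearly with strong input.
      for c in (2.0, 4.0, 8.0, 16.0):              # "contrast"-like input levels
          print(c, simulate(np.array([c, c])))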

  3. BioTapestry now provides a web application and improved drawing and layout tools

    PubMed Central

    Paquette, Suzanne M.; Leinonen, Kalle; Longabaugh, William J.R.

    2016-01-01

    Gene regulatory networks (GRNs) control embryonic development, and to understand this process in depth, researchers need to have a detailed understanding of both the network architecture and its dynamic evolution over time and space. Interactive visualization tools better enable researchers to conceptualize, understand, and share GRN models. BioTapestry is an established application designed to fill this role, and recent enhancements released in Versions 6 and 7 have targeted two major facets of the program. First, we introduced significant improvements for network drawing and automatic layout that have now made it much easier for the user to create larger, more organized network drawings. Second, we revised the program architecture so it could continue to support the current Java desktop Editor program, while introducing a new BioTapestry GRN Viewer that runs as a JavaScript web application in a browser. We have deployed a number of GRN models using this new web application. These improvements will ensure that BioTapestry remains viable as a research tool in the face of the continuing evolution of web technologies, and as our understanding of GRN models grows. PMID:27134726

  4. BioTapestry now provides a web application and improved drawing and layout tools.

    PubMed

    Paquette, Suzanne M; Leinonen, Kalle; Longabaugh, William J R

    2016-01-01

    Gene regulatory networks (GRNs) control embryonic development, and to understand this process in depth, researchers need to have a detailed understanding of both the network architecture and its dynamic evolution over time and space. Interactive visualization tools better enable researchers to conceptualize, understand, and share GRN models. BioTapestry is an established application designed to fill this role, and recent enhancements released in Versions 6 and 7 have targeted two major facets of the program. First, we introduced significant improvements for network drawing and automatic layout that have now made it much easier for the user to create larger, more organized network drawings. Second, we revised the program architecture so it could continue to support the current Java desktop Editor program, while introducing a new BioTapestry GRN Viewer that runs as a JavaScript web application in a browser. We have deployed a number of GRN models using this new web application. These improvements will ensure that BioTapestry remains viable as a research tool in the face of the continuing evolution of web technologies, and as our understanding of GRN models grows.

  5. Mimoza: web-based semantic zooming and navigation in metabolic networks.

    PubMed

    Zhukova, Anna; Sherman, David J

    2015-02-26

    The complexity of genome-scale metabolic models makes them quite difficult for human users to read, since they contain thousands of reactions that must be included for accurate computer simulation. Interestingly, hidden similarities between groups of reactions can be discovered, and generalized to reveal higher-level patterns. The web-based navigation system Mimoza allows a human expert to explore metabolic network models in a semantically zoomable manner: The most general view represents the compartments of the model; the next view shows the generalized versions of reactions and metabolites in each compartment; and the most detailed view represents the initial network with the generalization-based layout (where similar metabolites and reactions are placed next to each other). It allows a human expert to grasp the general structure of the network and analyze it in a top-down manner. Mimoza can be installed standalone, or used on-line at http://mimoza.bordeaux.inria.fr/, or installed in a Galaxy server for use in workflows. Mimoza views can be embedded in web pages, or downloaded as COMBINE archives.

  6. DCMDN: Deep Convolutional Mixture Density Network

    NASA Astrophysics Data System (ADS)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshifts directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently of the type of source, e.g. galaxies, quasars or stars, and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows any kind of probabilistic regression problem based on imaging data to be solved, such as estimating metallicity or star formation rate in galaxies.

  7. Briefer assessment of social network drinking: A test of the Important People Instrument-5 (IP-5).

    PubMed

    Hallgren, Kevin A; Barnett, Nancy P

    2016-12-01

    The Important People instrument (IP; Longabaugh et al., 2010) is one of the most commonly used measures of social network drinking. Although its reliability and validity are well-supported, the length of the instrument may limit its use in many settings. The present study evaluated whether a briefer, 5-person version of the IP (IP-5) adequately reproduces scores from the full IP. College freshmen (N = 1,053) reported their own past-month drinking, alcohol-related consequences, and information about drinking in their close social networks at baseline and 1 year later. From this we derived network members' drinking frequency, percentage of drinkers, and percentage of heavy drinkers, assessed for up to 10 (full IP) or 5 (IP-5) network members. We first modeled the expected concordance between full-IP scores and scores from simulated shorter IP instruments by sampling smaller subsets of network members from full IP data. Then, using quasi-experimental methods, we administered the full IP and IP-5 and compared the 2 instruments' score distributions and concurrent and year-lagged associations with participants' alcohol consumption and consequences. Most of the full-IP variance was reproduced from simulated shorter versions of the IP (ICCs ≥ 0.80). The full IP and IP-5 yielded similar score distributions, concurrent associations with drinking (r = 0.22 to 0.52), and year-lagged associations with drinking. The IP-5 retains most of the information about social network drinking from the full IP. The shorter instrument may be useful in clinical and research settings that require frequent measure administration, yielding greater temporal resolution for monitoring social network drinking. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. A security analysis of version 2 of the Network Time Protocol (NTP): A report to the privacy and security research group

    NASA Technical Reports Server (NTRS)

    Bishop, Matt

    1991-01-01

    The Network Time Protocol is being used throughout the Internet to provide an accurate time service. The security requirements of such a service are examined, version 2 of the NTP protocol is analyzed to determine how well it meets these requirements, and improvements are suggested where appropriate.

  9. Modelling conflicts with cluster dynamics in networks

    NASA Astrophysics Data System (ADS)

    Tadić, Bosiljka; Rodgers, G. J.

    2010-12-01

    We introduce cluster dynamical models of conflicts in which only the largest cluster can be involved in an action. This mimics the situations in which an attack is planned by a central body, and the largest attack force is used. We study the model in its annealed random graph version, on a fixed network, and on a network evolving through the actions. The sizes of actions are distributed with a power-law tail, however, the exponent is non-universal and depends on the frequency of actions and sparseness of the available connections between units. Allowing the network reconstruction over time in a self-organized manner, e.g., by adding the links based on previous liaisons between units, we find that the power-law exponent depends on the evolution time of the network. Its lower limit is given by the universal value 5/2, derived analytically for the case of random fragmentation processes. In the temporal patterns behind the size of actions we find long-range correlations in the time series of the number of clusters and the non-trivial distribution of time that a unit waits between two actions. In the case of an evolving network the distribution develops a power-law tail, indicating that through repeated actions, the system develops an internal structure with a hierarchy of units.

  10. An Emotional ANN (EANN) approach to modeling rainfall-runoff process

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid

    2017-01-01

    This paper presents the first hydrological implementation of the Emotional Artificial Neural Network (EANN), a new generation of Artificial Intelligence-based models, for daily rainfall-runoff (r-r) modeling of watersheds. Inspired by the neurophysiological structure of the brain, an EANN includes, in addition to conventional weights and biases, simulated emotional parameters aimed at improving the network learning process. An EANN trained by a modified version of the back-propagation (BP) algorithm was applied to single and multi-step-ahead runoff forecasting for two watersheds with distinct climatic conditions. Also, to evaluate the ability of the EANN when trained on smaller training data sets, three data division strategies with different numbers of training samples were considered. The overall comparison of the r-r modeling results indicates that the EANN could outperform the conventional feed-forward neural network (FFNN) model by up to 13% and 34% in terms of training and verification efficiency criteria, respectively. The superiority of the EANN over the classic ANN is due to its ability to recognize and distinguish dry (rainless days) and wet (rainy days) situations using the hormonal parameters of the artificial emotional system.

  11. Development of a Distributed Hydrologic Model Using Triangulated Irregular Networks for Continuous, Real-Time Flood Forecasting

    NASA Astrophysics Data System (ADS)

    Ivanov, V. Y.; Vivoni, E. R.; Bras, R. L.; Entekhabi, D.

    2001-05-01

    The Triangulated Irregular Networks (TINs) are widespread in many finite-element modeling applications stressing high spatial non-uniformity while describing the domain of interest in an optimized fashion that results in superior computational efficiency. TINs, being adaptive to the complexity of any terrain, are capable of maintaining topological relations between critical surface features and therefore afford higher flexibility in data manipulation. The TIN-based Real-time Integrated Basin Simulator (tRIBS) is a distributed hydrologic model that utilizes the mesh architecture and the software environment developed for the CHILD landscape evolution model and employs the hydrologic routines of its raster-oriented version, RIBS. As a totally independent software unit, the tRIBS consolidates the strengths of the distributed approach and efficient computational data platform. The current version couples the unsaturated and the saturated zones and accounts for the interaction of moving infiltration fronts with a variable groundwater surface, allowing the model to handle both storm and interstorm periods in a continuous fashion. Recent model enhancements have included the development of interstorm hydrologic fluxes through an evapotranspiration scheme as well as incorporation of a rainfall interception module. Overall, the tRIBS model has proven to properly mimic successive phases of the distributed catchment response by reproducing various runoff production mechanisms and handling their meteorological constraints. Important improvements in modeling options, robustness to data availability and overall design flexibility have also been accomplished. The current efforts are focused on further model developments as well as the application of the tRIBS to various watersheds.

  12. Acceleration of spiking neural network based pattern recognition on NVIDIA graphics processors.

    PubMed

    Han, Bing; Taha, Tarek M

    2010-04-01

    There is currently a strong push in the research community to develop biological scale implementations of neuron based vision models. Systems at this scale are computationally demanding and generally utilize more accurate neuron models, such as the Izhikevich and the Hodgkin-Huxley models, in place of the more popular integrate-and-fire model. We examine the feasibility of using graphics processing units (GPUs) to accelerate a spiking neural network based character recognition network to enable such large scale systems. Two versions of the network utilizing the Izhikevich and Hodgkin-Huxley models are implemented. Three NVIDIA general-purpose (GP) GPU platforms are examined, including the GeForce 9800 GX2, the Tesla C1060, and the Tesla S1070. Our results show that the GPGPUs can provide significant speedup over conventional processors. In particular, the fastest GPGPU utilized, the Tesla S1070, provided speedups of 5.6 and 84.4 over highly optimized implementations on the fastest central processing unit (CPU) tested, a quadcore 2.67 GHz Xeon processor, for the Izhikevich and the Hodgkin-Huxley models, respectively. The CPU implementation utilized all four cores and the vector data parallelism offered by the processor. The results indicate that GPUs are well suited for this application domain.
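
    The Izhikevich model that both the CPU and GPU implementations evaluate is a two-variable update that is trivially data-parallel, which is why it maps well onto GPUs. The sketch below is a plain NumPy version of the standard update rule (a CUDA kernel would perform the same per-neuron arithmetic); the population size, parameters and input current are illustrative, not those of the character recognition network.

      # Vectorized Izhikevich neuron update (the same per-neuron arithmetic a GPU kernel would run).
      import numpy as np

      rng = np.random.default_rng(0)
      N = 1000                                   # number of neurons (arbitrary)
      a, b, c, d = 0.02, 0.2, -65.0, 8.0         # regular-spiking parameters
      v = np.full(N, -65.0)                      # membrane potential (mV)
      u = b * v                                  # recovery variable
      dt = 0.5                                   # time step in ms

      spike_counts = np.zeros(N, dtype=int)
      for step in range(2000):                   # 1 s of simulated time
          I = 5.0 + 3.0 * rng.standard_normal(N) # noisy input current (arbitrary)
          fired = v >= 30.0                      # spike threshold
          spike_counts += fired
          v[fired] = c                           # reset after a spike
          u[fired] += d
          v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
          u += dt * a * (b * v - u)

      print("mean firing rate (Hz):", spike_counts.mean())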

  13. Electromagnetic Wave Propagation in Body Area Networks Using the Finite-Difference-Time-Domain Method

    PubMed Central

    Bringuier, Jonathan N.; Mittra, Raj

    2012-01-01

    A rigorous full-wave solution, via the Finite-Difference-Time-Domain (FDTD) method, is performed in an attempt to obtain realistic communication channel models for on-body wireless transmission in Body-Area-Networks (BANs), which are local data networks using the human body as a propagation medium. The problem of modeling the coupling between body mounted antennas is often not amenable to attack by hybrid techniques owing to the complex nature of the human body. For instance, the time-domain Green's function approach becomes more involved when the antennas are not conformal. Furthermore, the human body is irregular in shape and has dispersion properties that are unique. One consequence of this is that we must resort to modeling the antenna network mounted on the body in its entirety, and the number of degrees of freedom (DoFs) can be on the order of billions. Even so, this type of problem can still be modeled by employing a parallel version of the FDTD algorithm running on a cluster. Lastly, we note that the results of rigorous simulation of BANs can serve as benchmarks for comparison with the abundance of measurement data. PMID:23012575
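
    The FDTD method referred to above advances electric and magnetic fields on a staggered (Yee) grid in a leapfrog fashion. A full BAN simulation is a three-dimensional, dispersive-tissue version of the same update pattern with billions of cells; the one-dimensional sketch below, with an arbitrary high-permittivity block standing in for tissue, only illustrates the update structure.

      # 1D FDTD (Yee) sketch: leapfrog update of E and H with a soft source.
      import numpy as np

      c0 = 3e8                        # speed of light (m/s)
      dx = 1e-3                       # 1 mm cells
      dt = dx / (2 * c0)              # Courant-stable time step
      nx = 400
      eps_r = np.ones(nx)
      eps_r[200:300] = 50.0           # a block of high-permittivity "tissue" (illustrative value)

      E = np.zeros(nx)
      H = np.zeros(nx - 1)
      coef_e = dt / (8.854e-12 * eps_r * dx)
      coef_h = dt / (4 * np.pi * 1e-7 * dx)

      for t in range(1000):
          H += coef_h * (E[1:] - E[:-1])                 # update H from the curl of E
          E[1:-1] += coef_e[1:-1] * (H[1:] - H[:-1])     # update E from the curl of H
          E[50] += np.exp(-((t - 60) / 20.0) ** 2)       # soft Gaussian pulse source

      print("peak |E| in the tissue block:", np.abs(E[200:300]).max())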

  14. The Application of the SPASE Metadata Standard in the U.S. and Worldwide

    NASA Astrophysics Data System (ADS)

    Thieman, J. R.; King, T. A.; Roberts, D.

    2012-12-01

    The Space Physics Archive Search and Extract (SPASE) Metadata standard for Heliophysics and related data is now an established standard within the NASA-funded space and solar physics community and is spreading to the international groups within that community. Development of SPASE had involved a number of international partners and the current version of the SPASE Metadata Model (version 2.2.2) has not needed any structural modifications since January 2011 . The SPASE standard has been adopted by groups such as NASA's Heliophysics division, the Canadian Space Science Data Portal (CSSDP), Canada's AUTUMN network, Japan's Inter-university Upper atmosphere Global Observation NETwork (IUGONET), Centre de Données de la Physique des Plasmas (CDPP), and the near-Earth space data infrastructure for e-Science (ESPAS). In addition, portions of the SPASE dictionary have been modeled in semantic web ontologies for use with reasoners and semantic searches. While we anticipate additional modifications to the model in the future to accommodate simulation and model data, these changes will not affect the data descriptions already generated for instrument-related datasets. Examples of SPASE descriptions can be viewed at http://www.spase-group.org/registry/explorer and data can be located using SPASE concepts by searching the Virtual Space Physics Observatory (http://vspo.gsfc.nasa.gov/websearch/dispatcher) for data of interest.

  15. Distal gap junctions and active dendrites can tune network dynamics.

    PubMed

    Saraga, Fernanda; Ng, Leo; Skinner, Frances K

    2006-03-01

    Gap junctions allow direct electrical communication between CNS neurons. From theoretical and modeling studies, it is well known that although gap junctions can act to synchronize network output, they can also give rise to many other dynamic patterns including antiphase and other phase-locked states. The particular network pattern that arises depends on cellular, intrinsic properties that affect firing frequencies as well as the strength and location of the gap junctions. Interneurons or GABAergic neurons in hippocampus are diverse in their cellular characteristics and have been shown to have active dendrites. Furthermore, parvalbumin-positive GABAergic neurons, also known as basket cells, can contact one another via gap junctions on their distal dendrites. Using two-cell network models, we explore how distal electrical connections affect network output. We build multi-compartment models of hippocampal basket cells using NEURON and endow them with varying amounts of active dendrites. Two-cell networks of these model cells as well as reduced versions are explored. The relationship between intrinsic frequency and the level of active dendrites allows us to define three regions based on what sort of network dynamics occur with distal gap junction coupling. Weak coupling theory is used to predict the delineation of these regions as well as examination of phase response curves and distal dendritic polarization levels. We find that a nonmonotonic dependence of network dynamic characteristics (phase lags) on gap junction conductance occurs. This suggests that distal electrical coupling and active dendrite levels can control how sensitive network dynamics are to gap junction modulation. With the extended geometry, gap junctions located at more distal locations must have larger conductances for pure synchrony to occur. Furthermore, based on simulations with heterogeneous networks, it may be that one requires active dendrites if phase-locking is to occur in networks formed with distal gap junctions.
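
    In a compartmental model, a gap junction enters simply as an ohmic current proportional to the voltage difference across the junction, I_gap = g_gap * (V_other - V_self), injected into the coupled compartments. The sketch below couples two toy soma-plus-dendrite cells through a distal dendritic gap junction; it is a passive caricature of the multi-compartment NEURON models described above, and all parameter values are made up.

      # Two toy soma+dendrite cells coupled by a distal (dendritic) gap junction.
      # I_gap = g_gap * (V_other_dendrite - V_own_dendrite); all parameters are illustrative.
      import numpy as np

      dt, T = 0.01, 200.0                       # ms
      C, g_leak, E_leak = 1.0, 0.1, -65.0       # passive membrane parameters (arbitrary)
      g_axial, g_gap = 0.05, 0.02               # soma<->dendrite and dendrite<->dendrite coupling

      V = np.full((2, 2), E_leak)               # V[cell, compartment]: 0 = soma, 1 = dendrite
      for step in range(int(T / dt)):
          I_inj = [0.5 if step * dt > 50.0 else 0.0, 0.0]   # drive only cell 0's soma
          newV = V.copy()
          for cell in (0, 1):
              soma, dend = V[cell]
              other_dend = V[1 - cell, 1]
              newV[cell, 0] = soma + dt * (-g_leak * (soma - E_leak)
                                           + g_axial * (dend - soma) + I_inj[cell]) / C
              newV[cell, 1] = dend + dt * (-g_leak * (dend - E_leak)
                                           + g_axial * (soma - dend)
                                           + g_gap * (other_dend - dend)) / C
          V = newV

      # Cell 1 is depolarized only through the gap junction on its dendrite.
      print("steady somatic voltages (mV):", V[:, 0])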

  16. A low complexity visualization tool that helps to perform complex systems analysis

    NASA Astrophysics Data System (ADS)

    Beiró, M. G.; Alvarez-Hamelin, J. I.; Busch, J. R.

    2008-12-01

    In this paper, we present an extension of large network visualization (LaNet-vi), a tool to visualize large scale networks using the k-core decomposition. One of the new features is how vertices compute their angular position. While in the earlier version this was done using shell clusters, in this version we use the angular coordinate of vertices in higher k-shells, and arrange the highest shell according to a clique decomposition. The time complexity goes from O(n√n) to O(n) under bounds on a heavy-tailed degree distribution. The tool also performs a k-core-connectivity analysis, highlighting vertices that are not k-connected; this property is useful, for example, to measure robustness or quality of service (QoS) capabilities in communication networks. Finally, the current version of LaNet-vi can draw labels and all the edges using transparencies, yielding an accurate visualization. Based on the obtained figure, it is possible to distinguish different sources and types of complex networks at a glance, in a sort of 'network iris-print'.
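
    The k-core decomposition that drives the layout assigns each vertex the largest k for which it belongs to a subgraph whose vertices all have degree at least k; LaNet-vi then draws the shells as concentric rings. A minimal sketch of the decomposition itself (not of the visualization or the clique-based arrangement), using networkx on an arbitrary test graph:

      # k-core (shell index) sketch of the decomposition LaNet-vi visualizes.
      import networkx as nx
      from collections import Counter

      G = nx.barabasi_albert_graph(500, 3, seed=2)    # stand-in for a real complex network
      core = nx.core_number(G)                        # shell index of every vertex
      print("shell sizes:", sorted(Counter(core.values()).items()))

      k_max = max(core.values())
      innermost = nx.k_core(G, k=k_max)               # the densest, innermost core
      print(f"innermost {k_max}-core has {innermost.number_of_nodes()} vertices")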

  17. Solving a combinatorial problem via self-organizing process: an application of the Kohonen algorithm to the traveling salesman problem.

    PubMed

    Fort, J C

    1988-01-01

    We present an application of the Kohonen algorithm to the traveling salesman problem: using only this algorithm, without an energy function or any parameter chosen "ad hoc", we found good suboptimal tours. We give a neural model version of this algorithm, closer to classical neural networks. This is illustrated with various numerical examples.
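
    In outline, the approach trains a one-dimensional ring of Kohonen units on the city coordinates: each presented city pulls its winning unit and that unit's ring neighbours towards it, and as the neighbourhood shrinks the ring unfolds into a tour. The sketch below is a generic elastic-ring implementation; the city set, ring size and annealing schedule are arbitrary choices, not those of the paper.

      # Elastic-ring Kohonen (SOM) sketch for the traveling salesman problem.
      import numpy as np

      rng = np.random.default_rng(3)
      cities = rng.random((30, 2))                 # random city coordinates in the unit square
      m = 3 * len(cities)                          # ring of SOM units
      theta = np.linspace(0, 2 * np.pi, m, endpoint=False)
      units = 0.5 + 0.2 * np.c_[np.cos(theta), np.sin(theta)]   # start as a small circle

      sigma, lr = m / 4.0, 0.8
      for it in range(4000):
          city = cities[rng.integers(len(cities))]
          winner = np.argmin(np.linalg.norm(units - city, axis=1))
          ring_dist = np.minimum(np.abs(np.arange(m) - winner),
                                 m - np.abs(np.arange(m) - winner))
          h = np.exp(-(ring_dist ** 2) / (2 * sigma ** 2))      # neighbourhood on the ring
          units += lr * h[:, None] * (city - units)
          sigma *= 0.999                                        # slowly shrink the neighbourhood
          lr *= 0.9995                                          # and the learning rate

      # Read the tour off the ring: visit cities in the order of their nearest unit.
      order = np.argsort([np.argmin(np.linalg.norm(units - c, axis=1)) for c in cities])
      tour = np.append(order, order[0])
      length = np.sum(np.linalg.norm(np.diff(cities[tour], axis=0), axis=1))
      print("tour length:", round(length, 3))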

  18. NetMOD Version 2.0 Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merchant, Bion J.

    2015-08-01

    NetMOD (Network Monitoring for Optimal Detection) is a Java-based software package for conducting simulation of seismic, hydroacoustic and infrasonic networks. Network simulations have long been used to study network resilience to station outages and to determine where additional stations are needed to reduce monitoring thresholds. NetMOD makes use of geophysical models to determine the source characteristics, signal attenuation along the path between the source and station, and the performance and noise properties of the station. These geophysical models are combined to simulate the relative amplitudes of signal and noise that are observed at each of the stations. From these signal-to-noise ratios (SNR), the probability of detection can be computed given a detection threshold. This document describes the parameters that are used to configure the NetMOD tool and the input and output parameters that make up the simulation definitions.

  19. The GEOS-5 Neural Network Retrieval for AOD

    NASA Astrophysics Data System (ADS)

    Castellanos, P.; da Silva, A. M., Jr.

    2017-12-01

    One of the difficulties in data assimilation is the need for multi-sensor data merging that can account for temporal and spatial biases between satellite sensors. In the Goddard Earth Observing System Model Version 5 (GEOS-5) aerosol data assimilation system, a neural network retrieval (NNR) is used as a mapping between satellite observed top of the atmosphere (TOA) reflectance and AOD, which is the target variable that is assimilated in the model. By training observations of TOA reflectance from multiple sensors to map to a common AOD dataset (in this case AOD observed by the ground based Aerosol Robotic Network, AERONET), we are able to create a global, homogenous, satellite data record of AOD from MODIS observations on board the Terra and Aqua satellites. In this talk, I will present the implementation of and recent updates to the GEOS-5 NNR for MODIS collection 6 data.

  20. The GEOS-5 Neural Network Retrieval (NNR) for AOD

    NASA Technical Reports Server (NTRS)

    Castellanos, Patricia; Da Silva, Arlindo

    2017-01-01

    One of the difficulties in data assimilation is the need for multi-sensor data merging that can account for temporal and spatial biases between satellite sensors. In the Goddard Earth Observing System Model Version 5 (GEOS-5) aerosol data assimilation system, a neural network retrieval (NNR) is used as a mapping between satellite observed top of the atmosphere (TOA) reflectance and AOD, which is the target variable that is assimilated in the model. By training observations of TOA reflectance from multiple sensors to map to a common AOD dataset (in this case AOD observed by the ground based Aerosol Robotic Network, AERONET), we are able to create a global, homogenous, satellite data record of AOD from MODIS observations on board the Terra and Aqua satellites. In this talk, I will present the implementation of and recent updates to the GEOS-5 NNR for MODIS collection 6 data.

  1. minet: A R/Bioconductor package for inferring large transcriptional networks using mutual information.

    PubMed

    Meyer, Patrick E; Lafitte, Frédéric; Bontempi, Gianluca

    2008-10-29

    This paper presents the R/Bioconductor package minet (version 1.1.6) which provides a set of functions to infer mutual information networks from a dataset. Once fed with a microarray dataset, the package returns a network where nodes denote genes, edges model statistical dependencies between genes and the weight of an edge quantifies the statistical evidence of a specific (e.g transcriptional) gene-to-gene interaction. Four different entropy estimators are made available in the package minet (empirical, Miller-Madow, Schurmann-Grassberger and shrink) as well as four different inference methods, namely relevance networks, ARACNE, CLR and MRNET. Also, the package integrates accuracy assessment tools, like F-scores, PR-curves and ROC-curves in order to compare the inferred network with a reference one. The package minet provides a series of tools for inferring transcriptional networks from microarray data. It is freely available from the Comprehensive R Archive Network (CRAN) as well as from the Bioconductor website.
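
    minet itself is an R/Bioconductor package, but the relevance-network step it implements can be sketched generically: discretize the expression of each gene, estimate pairwise mutual information, and keep the strongest dependencies as edges (ARACNE, CLR and MRNET then prune this matrix in different ways). The Python sketch below uses simulated data and an empirical (equal-width binning) estimator purely for illustration.

      # Relevance-network sketch in the spirit of minet (which itself is an R/Bioconductor package).
      import numpy as np
      from sklearn.metrics import mutual_info_score

      rng = np.random.default_rng(4)
      n_samples, n_genes = 200, 6
      expr = rng.normal(size=(n_samples, n_genes))
      expr[:, 1] = expr[:, 0] + 0.3 * rng.normal(size=n_samples)    # make genes 0 and 1 dependent

      def discretize(x, bins=10):                 # equal-width binning ("empirical" estimator style)
          return np.digitize(x, np.histogram_bin_edges(x, bins=bins)[1:-1])

      disc = np.column_stack([discretize(expr[:, j]) for j in range(n_genes)])
      mim = np.zeros((n_genes, n_genes))          # mutual information matrix
      for i in range(n_genes):
          for j in range(i + 1, n_genes):
              mim[i, j] = mim[j, i] = mutual_info_score(disc[:, i], disc[:, j])

      # Relevance network: keep only the strongest pairwise dependencies.
      threshold = np.percentile(mim[np.triu_indices(n_genes, 1)], 90)
      edges = [(i, j, round(mim[i, j], 3)) for i in range(n_genes)
               for j in range(i + 1, n_genes) if mim[i, j] >= threshold]
      print("inferred edges (gene_i, gene_j, MI):", edges)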

  2. Approaches in highly parameterized inversion-PESTCommander, a graphical user interface for file and run management across networks

    USGS Publications Warehouse

    Karanovic, Marinko; Muffels, Christopher T.; Tonkin, Matthew J.; Hunt, Randall J.

    2012-01-01

    Models of environmental systems have become increasingly complex, incorporating increasingly large numbers of parameters in an effort to represent physical processes on a scale approaching that at which they occur in nature. Consequently, the inverse problem of parameter estimation (specifically, model calibration) and subsequent uncertainty analysis have become increasingly computation-intensive endeavors. Fortunately, advances in computing have made computational power equivalent to that of dozens to hundreds of desktop computers accessible through a variety of alternate means: modelers have various possibilities, ranging from traditional Local Area Networks (LANs) to cloud computing. Commonly used parameter estimation software is well suited to take advantage of the availability of such increased computing power. Unfortunately, logistical issues become increasingly important as an increasing number and variety of computers are brought to bear on the inverse problem. To facilitate efficient access to disparate computer resources, the PESTCommander program documented herein has been developed to provide a Graphical User Interface (GUI) that facilitates the management of model files ("file management") and remote launching and termination of "slave" computers across a distributed network of computers ("run management"). In version 1.0 described here, PESTCommander can access and ascertain resources across traditional Windows LANs: however, the architecture of PESTCommander has been developed with the intent that future releases will be able to access computing resources (1) via trusted domains established in Wide Area Networks (WANs) in multiple remote locations and (2) via heterogeneous networks of Windows- and Unix-based operating systems. The design of PESTCommander also makes it suitable for extension to other computational resources, such as those that are available via cloud computing. Version 1.0 of PESTCommander was developed primarily to work with the parameter estimation software PEST; the discussion presented in this report focuses on the use of the PESTCommander together with Parallel PEST. However, PESTCommander can be used with a wide variety of programs and models that require management, distribution, and cleanup of files before or after model execution. In addition to its use with the Parallel PEST program suite, discussion is also included in this report regarding the use of PESTCommander with the Global Run Manager GENIE, which was developed simultaneously with PESTCommander.

  3. Conceptual Architecture for Obtaining Cyber Situational Awareness

    DTIC Science & Technology

    2014-06-01


  4. Evaluation of a parallel implementation of the learning portion of the backward error propagation neural network: experiments in artifact identification.

    PubMed Central

    Sittig, D. F.; Orr, J. A.

    1991-01-01

    Various methods have been proposed in an attempt to solve problems in artifact and/or alarm identification including expert systems, statistical signal processing techniques, and artificial neural networks (ANN). ANNs consist of a large number of simple processing units connected by weighted links. To develop truly robust ANNs, investigators are required to train their networks on huge training data sets, requiring enormous computing power. We implemented a parallel version of the backward error propagation neural network training algorithm in the widely portable parallel programming language C-Linda. A maximum speedup of 4.06 was obtained with six processors. This speedup represents a reduction in total run-time from approximately 6.4 hours to 1.5 hours. We conclude that use of the master-worker model of parallel computation is an excellent method for obtaining speedups in the backward error propagation neural network training algorithm. PMID:1807607

  5. Multi-subject hierarchical inverse covariance modelling improves estimation of functional brain networks.

    PubMed

    Colclough, Giles L; Woolrich, Mark W; Harrison, Samuel J; Rojas López, Pedro A; Valdes-Sosa, Pedro A; Smith, Stephen M

    2018-05-07

    A Bayesian model for sparse, hierarchical, inverse-covariance estimation is presented, and applied to multi-subject functional connectivity estimation in the human brain. It enables simultaneous inference of the strength of connectivity between brain regions at both subject and population level, and is applicable to fMRI, MEG and EEG data. Two versions of the model can encourage sparse connectivity, either using continuous priors to suppress irrelevant connections, or using an explicit description of the network structure to estimate the connection probability between each pair of regions. A large evaluation of this model, and thirteen methods that represent the state of the art of inverse covariance modelling, is conducted using both simulated and resting-state functional imaging datasets. Our novel Bayesian approach has similar performance to the best extant alternative, Ng et al.'s Sparse Group Gaussian Graphical Model algorithm, which is also based on a hierarchical structure. Using data from the Human Connectome Project, we show that these hierarchical models are able to reduce the measurement error in MEG beta-band functional networks by 10%, producing concomitant increases in estimates of the genetic influence on functional connectivity. Copyright © 2018. Published by Elsevier Inc.
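
    The hierarchical Bayesian model itself is not reproduced here, but the underlying task, estimating a sparse inverse covariance (precision) matrix whose zero entries correspond to absent direct connections, can be illustrated with the graphical lasso, one of the standard single-subject approaches such evaluations compare against. The data below are simulated and the setup is only a sketch of the generic technique.

      # Sparse inverse-covariance (precision) estimation with the graphical lasso.
      # A standard single-subject baseline, not the hierarchical Bayesian model described above.
      import numpy as np
      from sklearn.covariance import GraphicalLassoCV

      rng = np.random.default_rng(5)
      n_regions, n_timepoints = 10, 300

      # Simulate region time series with one true dependency (regions 0 and 1 coupled).
      ts = rng.normal(size=(n_timepoints, n_regions))
      ts[:, 1] += 0.8 * ts[:, 0]

      model = GraphicalLassoCV().fit(ts)
      precision = model.precision_

      # Non-zero off-diagonal entries of the precision matrix are the estimated direct connections.
      conn = (np.abs(precision) > 1e-6) & ~np.eye(n_regions, dtype=bool)
      print("estimated direct connections:", np.argwhere(np.triu(conn)))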

  6. Distributed Observer Network (DON), Version 3.0, User's Guide

    NASA Technical Reports Server (NTRS)

    Mazzone, Rebecca A.; Conroy, Michael P.

    2015-01-01

    The Distributed Observer Network (DON) is a data presentation tool developed by the National Aeronautics and Space Administration (NASA) to distribute and publish simulation results. Leveraging the display capabilities inherent in modern gaming technology, DON places users in a fully navigable 3-D environment containing graphical models and allows the users to observe how those models evolve and interact over time in a given scenario. Each scenario is driven with data that has been generated by authoritative NASA simulation tools and exported in accordance with a published data interface specification. This decoupling of the data from the source tool enables DON to faithfully display a simulator's results and ensure that every simulation stakeholder will view the exact same information every time.

  7. NetMOD Version 2.0 Mathematical Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merchant, Bion J.; Young, Christopher J.; Chael, Eric P.

    2015-08-01

    NetMOD (Network Monitoring for Optimal Detection) is a Java-based software package for conducting simulation of seismic, hydroacoustic and infrasonic networks. Network simulations have long been used to study network resilience to station outages and to determine where additional stations are needed to reduce monitoring thresholds. NetMOD makes use of geophysical models to determine the source characteristics, signal attenuation along the path between the source and station, and the performance and noise properties of the station. These geophysical models are combined to simulate the relative amplitudes of signal and noise that are observed at each of the stations. From these signal-to-noise ratios (SNR), the probabilities of signal detection at each station and event detection across the network of stations can be computed given a detection threshold. The purpose of this document is to clearly and comprehensively present the mathematical framework used by NetMOD, the software package developed by Sandia National Laboratories to assess the monitoring capability of ground-based sensor networks. Many of the NetMOD equations used for simulations are inherited from the NetSim network capability assessment package developed in the late 1980s by SAIC (Sereno et al., 1990).
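
    NetMOD's actual equations are laid out in the framework document itself; purely as a generic illustration of how per-station signal-to-noise ratios can be turned into station and network detection probabilities, the sketch below assumes Gaussian scatter of the measured log-SNR about its predicted value and a rule requiring detections at a minimum number of stations. The numbers, the noise model, and the station-count rule are assumptions for the example, not NetMOD's formulation.

```python
# Generic illustration: combining per-station detection probabilities into a
# network-level detection probability. The Gaussian log-SNR scatter and the
# 3-station detection rule are illustrative assumptions, not NetMOD's equations.
from itertools import combinations
from math import erf, sqrt, prod

def station_detection_prob(log_snr, threshold=0.0, sigma=0.3):
    """P(measured log10 SNR exceeds the threshold) under Gaussian scatter."""
    z = (log_snr - threshold) / (sigma * sqrt(2.0))
    return 0.5 * (1.0 + erf(z))

def network_detection_prob(p_list, min_stations=3):
    """P(at least `min_stations` of the independent stations detect)."""
    n = len(p_list)
    total = 0.0
    for k in range(min_stations, n + 1):
        for idx in combinations(range(n), k):
            p_yes = prod(p_list[i] for i in idx)
            p_no = prod(1.0 - p_list[i] for i in range(n) if i not in idx)
            total += p_yes * p_no
    return total

log_snrs = [0.6, 0.2, -0.1, 0.4, 0.05]        # hypothetical per-station log10 SNRs
probs = [station_detection_prob(s) for s in log_snrs]
print("per-station P(detect):", [round(p, 3) for p in probs])
print("network P(detect):", round(network_detection_prob(probs), 3))
```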

  8. NetMOD version 1.0 user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merchant, Bion John

    2014-01-01

    NetMOD (Network Monitoring for Optimal Detection) is a Java-based software package for conducting simulation of seismic networks. Specifically, NetMOD simulates the detection capabilities of seismic monitoring networks. Network simulations have long been used to study network resilience to station outages and to determine where additional stations are needed to reduce monitoring thresholds. NetMOD makes use of geophysical models to determine the source characteristics, signal attenuation along the path between the source and station, and the performance and noise properties of the station. These geophysical models are combined to simulate the relative amplitudes of signal and noise that are observed at each of the stations. From these signal-to-noise ratios (SNR), the probability of detection can be computed given a detection threshold. This manual describes how to configure and operate NetMOD to perform seismic detection simulations. In addition, NetMOD is distributed with a simulation dataset for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) International Monitoring System (IMS) seismic network for the purpose of demonstrating NetMOD's capabilities and providing user training. The tutorial sections of this manual use this dataset when describing how to perform the steps involved when running a simulation.

  9. Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review

    PubMed Central

    McClelland, James L.

    2013-01-01

    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered. PMID:23970868

  10. Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review.

    PubMed

    McClelland, James L

    2013-01-01

    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered.
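
    The claim that a logistic or softmax unit computes a Bayesian posterior when its biases and weights are logarithms of the appropriate probabilities can be checked directly on a toy generative model. The Python sketch below (two hypotheses with made-up priors and conditionally independent binary features; it illustrates the identity, not the MIA model itself) compares the exact posterior with the softmax of the log prior plus the summed log likelihoods.

```python
# Check: softmax over (log prior + sum of log likelihoods) equals the Bayesian
# posterior for a toy generative model with conditionally independent features.
import numpy as np

priors = np.array([0.7, 0.3])                    # P(H) for two hypotheses (made up)
lik = np.array([[0.9, 0.2, 0.6],                 # P(feature_j = 1 | H_0)
                [0.1, 0.8, 0.5]])                # P(feature_j = 1 | H_1)
x = np.array([1, 0, 1])                          # an observed binary feature vector

# Exact posterior via Bayes' rule.
joint = priors * np.prod(np.where(x == 1, lik, 1.0 - lik), axis=1)
posterior = joint / joint.sum()

# Softmax unit: bias = log prior, inputs contribute the log likelihoods.
net_input = np.log(priors) + np.sum(np.where(x == 1, np.log(lik), np.log(1.0 - lik)), axis=1)
softmax = np.exp(net_input) / np.exp(net_input).sum()

print(posterior, softmax)                        # the two vectors agree
```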

  11. Complex architecture of primes and natural numbers.

    PubMed

    García-Pérez, Guillermo; Serrano, M Ángeles; Boguñá, Marián

    2014-08-01

    Natural numbers can be divided into two nonoverlapping infinite sets, primes and composites, with composites factorizing into primes. Despite their apparent simplicity, the elucidation of the architecture of natural numbers with primes as building blocks remains elusive. Here, we propose a new approach to decoding the architecture of natural numbers based on complex networks and stochastic processes theory. We introduce a parameter-free non-Markovian dynamical model that naturally generates random primes and their relation with composite numbers with remarkable accuracy. Our model satisfies the prime number theorem as an emerging property and a refined version of Cramér's conjecture about the statistics of gaps between consecutive primes that seems closer to reality than Cramér's original version. Regarding composites, the model helps us to derive the prime factors counting function, giving the probability of distinct prime factors for any integer. Probabilistic models like ours can help to get deeper insights about primes and the complex architecture of natural numbers.
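
    As a quick empirical look at the gap statistics that Cramér's conjecture concerns (independent of the network model proposed in the paper), the sketch below sieves the primes up to a bound and compares the largest observed gap with the (log x)^2 scale suggested by the conjecture; the bound is arbitrary.

```python
# Empirical look at prime gaps versus Cramér's (log x)^2 scale.
# This only illustrates the statistic discussed above, not the authors' model.
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(sieve[p*p::p]))
    return [i for i, is_p in enumerate(sieve) if is_p]

N = 1_000_000
ps = primes_up_to(N)
gaps = [q - p for p, q in zip(ps, ps[1:])]
largest_gap, after_prime = max(zip(gaps, ps))
print(f"largest gap below {N}: {largest_gap} (after prime {after_prime})")
print(f"Cramér scale (log {N})^2 ≈ {math.log(N)**2:.1f}")
print(f"mean gap ≈ {sum(gaps)/len(gaps):.2f}, log({N}) ≈ {math.log(N):.2f}")
```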

  12. Sampling properties of directed networks

    NASA Astrophysics Data System (ADS)

    Son, S.-W.; Christensen, C.; Bizhani, G.; Foster, D. V.; Grassberger, P.; Paczuski, M.

    2012-10-01

    For many real-world networks only a small “sampled” version of the original network may be investigated; those results are then used to draw conclusions about the actual system. Variants of breadth-first search (BFS) sampling, which are based on epidemic processes, are widely used. Although it is well established that BFS sampling fails, in most cases, to capture the IN component(s) of directed networks, a description of the effects of BFS sampling on other topological properties is all but absent from the literature. To systematically study the effects of sampling biases on directed networks, we compare BFS sampling to random sampling on complete large-scale directed networks. We present new results and a thorough analysis of the topological properties of seven complete directed networks (prior to sampling), including three versions of Wikipedia, three different sources of sampled World Wide Web data, and an Internet-based social network. We detail the differences that sampling method and coverage can make to the structural properties of sampled versions of these seven networks. Most notably, we find that sampling method and coverage affect both the bow-tie structure and the number and structure of strongly connected components in sampled networks. In addition, at a low sampling coverage (i.e., less than 40%), the values of average degree, variance of out-degree, degree autocorrelation, and link reciprocity are overestimated by 30% or more in BFS-sampled networks and only attain values within 10% of the corresponding values in the complete networks when sampling coverage is in excess of 65%. These results may cause us to rethink what we know about the structure, function, and evolution of real-world directed networks.
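
    To make the sampling bias concrete, the toy Python sketch below builds a small random directed graph (an illustrative stand-in, not one of the seven datasets), samples roughly 30% of its nodes by breadth-first search with random restarts, and compares out-degree statistics of the induced subgraph with those of the complete graph.

```python
# Toy comparison of BFS sampling versus the complete graph on a random digraph.
# The graph model, size, and coverage are illustrative only.
import random
from collections import deque
from statistics import mean, pvariance

random.seed(42)
N, M = 5000, 30000
edges = set()
while len(edges) < M:                      # random directed graph, no self-loops
    u, v = random.randrange(N), random.randrange(N)
    if u != v:
        edges.add((u, v))
adj = {u: [] for u in range(N)}
for u, v in edges:
    adj[u].append(v)

def bfs_sample(target_nodes):
    """Breadth-first sampling with random restarts until target coverage."""
    seen, queue = set(), deque()
    while len(seen) < target_nodes:
        if not queue:                      # restart from a fresh random seed
            seed = random.randrange(N)
            if seed not in seen:
                seen.add(seed)
                queue.append(seed)
            continue
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen and len(seen) < target_nodes:
                seen.add(v)
                queue.append(v)
    return seen

sample = bfs_sample(target_nodes=int(0.3 * N))              # 30% coverage
sub_out = [sum(1 for v in adj[u] if v in sample) for u in sample]
full_out = [len(adj[u]) for u in range(N)]
print("mean out-degree, full vs sampled:", round(mean(full_out), 2), round(mean(sub_out), 2))
print("out-degree variance, full vs sampled:", round(pvariance(full_out), 2), round(pvariance(sub_out), 2))
```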

  13. The DIVA model: A neural theory of speech acquisition and production

    PubMed Central

    Tourville, Jason A.; Guenther, Frank H.

    2013-01-01

    The DIVA model of speech production provides a computationally and neuroanatomically explicit account of the network of brain regions involved in speech acquisition and production. An overview of the model is provided along with descriptions of the computations performed in the different brain regions represented in the model. The latest version of the model, which contains a new right-lateralized feedback control map in ventral premotor cortex, will be described, and experimental results that motivated this new model component will be discussed. Application of the model to the study and treatment of communication disorders will also be briefly described. PMID:23667281

  14. Identifiability of large-scale non-linear dynamic network models applied to the ADM1-case study.

    PubMed

    Nimmegeers, Philippe; Lauwers, Joost; Telen, Dries; Logist, Filip; Impe, Jan Van

    2017-06-01

    In this work, both the structural and practical identifiability of the Anaerobic Digestion Model no. 1 (ADM1) is investigated, which serves as a relevant case study of large non-linear dynamic network models. The structural identifiability is investigated using a probabilistic algorithm adapted to deal with the specifics of the case study (i.e., a large-scale non-linear dynamic system of differential and algebraic equations). The practical identifiability is analyzed using a Monte Carlo parameter estimation procedure for a 'non-informative' and an 'informative' experiment, which are heuristically designed. The model structure of ADM1 has been modified by replacing parameters by parameter combinations, to provide a generally locally structurally identifiable version of ADM1. This means that in an idealized theoretical situation, the parameters can be estimated accurately. Furthermore, the generally positive structural identifiability results can be explained by the large number of interconnections between the states in the network structure. This interconnectivity, however, is also observed in the parameter estimates, making uncorrelated parameter estimation in practice difficult. Copyright © 2017. Published by Elsevier Inc.

  15. Determination of the geophysical model function of the ERS-1 scatterometer by the use of neural networks

    NASA Astrophysics Data System (ADS)

    Mejia, Carlos; Thiria, Sylvie; Tran, Ngan; CréPon, Michel; Badran, Fouad

    1998-06-01

    We present a geophysical model function (GMF) for the ERS-1 scatterometer computed by the use of neural networks. The neural network GMF (NN GMF) is calibrated with ERS-1 scatterometer sigma 0 collocated with European Center for Medium-Range Weather Forecasts (ECMWF) analyzed wind vectors. Four different NN GMFs have been computed: one for each antenna and an average NN GMF. These NN GMFs do not present any significant differences, which means that the three antennas are quasi-identical. The NN GMFs exhibit a biharmonic dependence on the wind azimuth with a small upwind-downwind modulation, as found in previous GMFs. In order to check the validity of the NN GMF, systematic comparisons with the European Space Agency (ESA) C band model (CMOD4) GMF (version 2 of March 25, 1993) and the Institut Français de Recherche pour l'Exploitation de la Mer (IFREMER) CMOD213 GMF are done. It is found that the NN GMFs are highly accurate and relevant functions to model the ERS-1 scatterometer sigma 0.

  16. Teaching Structured Design of Network Algorithms in Enhanced Versions of SQL

    ERIC Educational Resources Information Center

    de Brock, Bert

    2004-01-01

    From time to time developers of (database) applications will encounter, explicitly or implicitly, structures such as trees, graphs, and networks. Such applications can, for instance, relate to bills of material, organization charts, networks of (rail)roads, networks of conduit pipes (e.g., plumbing, electricity), telecom networks, and data…

  17. Sharing good NEWS across the world: developing comparable scores across 12 countries for the Neighborhood Environment Walkability Scale (NEWS).

    PubMed

    Cerin, Ester; Conway, Terry L; Cain, Kelli L; Kerr, Jacqueline; De Bourdeaudhuij, Ilse; Owen, Neville; Reis, Rodrigo S; Sarmiento, Olga L; Hinckson, Erica A; Salvo, Deborah; Christiansen, Lars B; Macfarlane, Duncan J; Davey, Rachel; Mitáš, Josef; Aguinaga-Ontoso, Ines; Sallis, James F

    2013-04-08

    The IPEN (International Physical Activity and Environment Network) Adult project seeks to conduct pooled analyses of associations of perceived neighborhood environment, as measured by the Neighborhood Environment Walkability Scale (NEWS) and its abbreviated version (NEWS-A), with physical activity using data from 12 countries. As IPEN countries used adapted versions of the NEWS/NEWS-A, this paper aimed to develop scoring protocols that maximize cross-country comparability in responses. This information is also highly relevant to non-IPEN studies employing the NEWS/NEWS-A, which is one of the most popular measures of perceived environment globally. The following countries participated in the IPEN Adult study: Australia, Belgium, Brazil, Colombia, Czech Republic, Denmark, Hong Kong, Mexico, New Zealand, Spain, the United Kingdom, and the United States. Participants (N = 14,305) were recruited from neighborhoods varying in walkability and socio-economic status. Countries collected data on the perceived environment using a self- or interviewer-administered version of the NEWS/NEWS-A. Confirmatory Factor Analysis (CFA) was used to derive comparable country-specific measurement models of the NEWS/NEWS-A. The level of correspondence between standard and alternative versions of the NEWS/NEWS-A factor-analyzable subscales was determined by estimating the correlations and mean standardized difference (Cohen's d) between them using data from countries that had included items from both standard and alternative versions of the subscales. Final country-specific measurement models of the NEWS/NEWS-A provided acceptable levels of fit to the data and shared the same factorial structure with six latent factors and two single items. The correspondence between the standard and alternative versions of subscales of Land use mix - access, Infrastructure and safety for walking/cycling, and Aesthetics was high. The Brazilian version of the Traffic safety subscale was highly, while the Australian and Belgian versions were marginally, comparable to the standard version. Single-item versions of the Street connectivity subscale used in Australia and Belgium showed marginally acceptable correspondence to the standard version. We have proposed country-specific modifications to the original scoring protocol of the NEWS/NEWS-A that enhance inter-country comparability. These modifications have yielded sufficiently equivalent measurement models of the NEWS/NEWS-A. Some inter-country discrepancies remain. These need to be considered when interpreting findings from different countries.

  18. Sharing good NEWS across the world: developing comparable scores across 12 countries for the neighborhood environment walkability scale (NEWS)

    PubMed Central

    2013-01-01

    Background The IPEN (International Physical Activity and Environment Network) Adult project seeks to conduct pooled analyses of associations of perceived neighborhood environment, as measured by the Neighborhood Environment Walkability Scale (NEWS) and its abbreviated version (NEWS-A), with physical activity using data from 12 countries. As IPEN countries used adapted versions of the NEWS/NEWS-A, this paper aimed to develop scoring protocols that maximize cross-country comparability in responses. This information is also highly relevant to non-IPEN studies employing the NEWS/NEWS-A, which is one of the most popular measures of perceived environment globally. Methods The following countries participated in the IPEN Adult study: Australia, Belgium, Brazil, Colombia, Czech Republic, Denmark, Hong Kong, Mexico, New Zealand, Spain, the United Kingdom, and the United States. Participants (N = 14,305) were recruited from neighborhoods varying in walkability and socio-economic status. Countries collected data on the perceived environment using a self- or interviewer-administered version of the NEWS/NEWS-A. Confirmatory Factor Analysis (CFA) was used to derive comparable country-specific measurement models of the NEWS/NEWS-A. The level of correspondence between standard and alternative versions of the NEWS/NEWS-A factor-analyzable subscales was determined by estimating the correlations and mean standardized difference (Cohen’s d) between them using data from countries that had included items from both standard and alternative versions of the subscales. Results Final country-specific measurement models of the NEWS/NEWS-A provided acceptable levels of fit to the data and shared the same factorial structure with six latent factors and two single items. The correspondence between the standard and alternative versions of subscales of Land use mix – access, Infrastructure and safety for walking/cycling, and Aesthetics was high. The Brazilian version of the Traffic safety subscale was highly, while the Australian and Belgian versions were marginally, comparable to the standard version. Single-item versions of the Street connectivity subscale used in Australia and Belgium showed marginally acceptable correspondence to the standard version. Conclusions We have proposed country-specific modifications to the original scoring protocol of the NEWS/NEWS-A that enhance inter-country comparability. These modifications have yielded sufficiently equivalent measurement models of the NEWS/NEWS-A. Some inter-country discrepancies remain. These need to be considered when interpreting findings from different countries. PMID:23566032

  19. Coarse-Grain Bandwidth Estimation Scheme for Large-Scale Network

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Jennings, Esther H.; Sergui, John S.

    2013-01-01

    A large-scale network that supports a large number of users can have an aggregate data rate of hundreds of Mbps at any time. High-fidelity simulation of a large-scale network might be too complicated and memory-intensive for typical commercial-off-the-shelf (COTS) tools. Unlike a large commercial wide-area-network (WAN) that shares diverse network resources among diverse users and has a complex topology that requires routing mechanisms and flow control, the ground communication links of a space network operate under the assumption of a guaranteed dedicated bandwidth allocation between specific sparse endpoints in a star-like topology. This work solved the network design problem of estimating the bandwidths of a ground network architecture option that offers different service classes to meet the latency requirements of different user data types. In this work, a top-down analysis and simulation approach was created to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. These techniques were used to estimate the WAN bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network. A new analytical approach, called the "leveling scheme," was developed to model the store-and-forward mechanism of the network data flow. The term "leveling" refers to the spreading of data across a longer time horizon without violating the corresponding latency requirement of the data type. Two versions of the leveling scheme were developed: 1. A straightforward version that simply spreads the data of each data type across the time horizon and doesn't take into account the interactions among data types within a pass, or between data types across overlapping passes at a network node, and is inherently sub-optimal. 2. A two-state Markov leveling scheme that takes into account the second order behavior of the store-and-forward mechanism, and the interactions among data types within a pass. The novelty of this approach lies in the modeling of the store-and-forward mechanism of each network node. The term store-and-forward refers to the data traffic regulation technique in which data is sent to an intermediate network node where it is temporarily stored and sent at a later time to the destination node or to another intermediate node. Store-and-forward can be applied to both space-based networks that have intermittent connectivity, and ground-based networks with deterministic connectivity. For ground-based networks, the store-and-forward mechanism is used to regulate the network data flow and link resource utilization such that the user data types can be delivered to their destination nodes without violating their respective latency requirements.
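
    As a highly simplified illustration of the first, straightforward leveling idea only (the pass structure, data types, volumes, and latency allowances below are invented, and the two-state Markov refinement is not modeled), each data type's volume is spread uniformly over the time slots permitted by its latency requirement, and the required link bandwidth is read off as the peak aggregate rate.

```python
# Simplified illustration of the "straightforward" leveling idea: spread each
# data type's volume over the time slots allowed by its latency requirement,
# then size the link to the peak aggregate rate. All numbers are invented.
# Each item: (name, arrival_slot, volume in Mb, latency allowance in slots).
data_types = [
    ("telemetry", 0, 1200, 2),
    ("science",   0, 9000, 10),
    ("voice",     3,  600, 1),
    ("science",   5, 9000, 10),
]

HORIZON = 20                               # number of 1-second slots considered
rate = [0.0] * HORIZON                     # aggregate Mb per slot

for name, t0, volume, latency in data_types:
    slots = range(t0, min(t0 + latency, HORIZON))
    per_slot = volume / len(slots)         # spread ("level") across the window
    for t in slots:
        rate[t] += per_slot

for t, r in enumerate(rate):
    if r:
        print(f"slot {t:2d}: {r:7.1f} Mb")
print("required bandwidth ≈ peak rate =", max(rate), "Mbps")
```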

  20. Entangling mobility and interactions in social media.

    PubMed

    Grabowicz, Przemyslaw A; Ramasco, José J; Gonçalves, Bruno; Eguíluz, Víctor M

    2014-01-01

    Daily interactions naturally define social circles. Individuals tend to be friends with the people they spend time with and they choose to spend time with their friends, inextricably entangling physical location and social relationships. As a result, it is possible to predict not only someone's location from their friends' locations but also friendship from spatial and temporal co-occurrence. While several models have been developed to separately describe mobility and the evolution of social networks, there is a lack of studies coupling social interactions and mobility. In this work, we introduce a model that bridges this gap by explicitly considering the feedback of mobility on the formation of social ties. Data coming from three online social networks (Twitter, Gowalla and Brightkite) is used for validation. Our model reproduces various topological and physical properties of the networks not captured by models uncoupling mobility and social interactions such as: i) the total size of the connected components, ii) the distance distribution between connected users, iii) the dependence of the reciprocity on the distance, iv) the variation of the social overlap and the clustering with the distance. Besides numerical simulations, a mean-field approach is also used to study analytically the main statistical features of the networks generated by a simplified version of our model. The robustness of the results to changes in the model parameters is explored, finding that a balance between friend visits and long-range random connections is essential to reproduce the geographical features of the empirical networks.

  1. Design and Implementation of a Distributed Version of the NASA Engine Performance Program

    NASA Technical Reports Server (NTRS)

    Cours, Jeffrey T.

    1994-01-01

    Distributed NEPP is a new version of the NASA Engine Performance Program that runs in parallel on a collection of Unix workstations connected through a network. The program is fault-tolerant, efficient, and shows significant speed-up in a multi-user, heterogeneous environment. This report describes the issues involved in designing distributed NEPP, the algorithms the program uses, and the performance distributed NEPP achieves. It develops an analytical model to predict and measure the performance of the simple distribution, multiple distribution, and fault-tolerant distribution algorithms that distributed NEPP incorporates. Finally, the appendices explain how to use distributed NEPP and document the organization of the program's source code.

  2. Recent Experiments with INQUERY

    DTIC Science & Technology

    1995-11-01

    were conducted with a version of the INQUERY information retrieval system. INQUERY is based on the Bayesian inference network retrieval model. It is...corpus-based query expansion. For TREC, a subset of the adhoc document set was used to build the InFinder database. None of the...experiments that showed significant improvements in retrieval effectiveness when document rankings based on the entire document text are combined with

  3. Social Support and Well-Being at Mid-Life among Mothers of Adolescents and Adults with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Smith, Leann E.; Greenberg, Jan S.; Seltzer, Marsha Mailick

    2012-01-01

    The present study investigated the impact of social support on the psychological well-being of mothers of adolescents and adults with ASD (n = 269). Quantity of support (number of social network members) as well as valence of support (positive support and negative support) were assessed using a modified version of the "convoy model" developed by…

  4. Discrete Address Beacon System (DABS) Baseline Test and Evaluation.

    DTIC Science & Technology

    1980-04-01

    ...version of the Common International Civil Aviation Organization (ICAO) Data Interchange Network (CIDIN) protocol used in the DABS engineering model. All...grouped into two subsets, one for surveillance data communications and one for Common International Civil Aviation Organization (ICAO) Data Interchange

  5. Tuning the overlap and the cross-layer correlations in two-layer networks: Application to a susceptible-infectious-recovered model with awareness dissemination

    NASA Astrophysics Data System (ADS)

    Juher, David; Saldaña, Joan

    2018-03-01

    We study the properties of the potential overlap between two networks A, B sharing the same set of N nodes (a two-layer network) whose respective degree distributions pA(k), pB(k) are given. Defining the overlap coefficient α as the Jaccard index, we prove that α is very close to 0 when A and B are random and independently generated. We derive an upper bound αM for the maximum overlap coefficient permitted in terms of pA(k), pB(k), and N. Then we present an algorithm based on cross rewiring of links to obtain a two-layer network with any prescribed α inside the range (0, αM). A refined version of the algorithm allows us to minimize the cross-layer correlations that unavoidably appear for values of α beyond a critical overlap αc < αM. Finally, we present a very simple example of a susceptible-infectious-recovered epidemic model with information dissemination and use the algorithms to determine the impact of the overlap on the final outbreak size predicted by the model.
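
    As a small self-contained illustration of the overlap coefficient itself (graph sizes are arbitrary, and the two layers are generated independently, so α should come out close to 0, consistent with the result quoted above), the sketch below computes the Jaccard index of two edge sets defined on the same nodes.

```python
# Jaccard overlap between the edge sets of two independently generated layers
# on the same nodes; for independent random layers it should be close to 0.
import random

random.seed(7)
N, M = 2000, 8000

def random_layer(n, m):
    """Undirected Erdos-Renyi-like layer with exactly m distinct edges."""
    edges = set()
    while len(edges) < m:
        u, v = random.randrange(n), random.randrange(n)
        if u != v:
            edges.add((min(u, v), max(u, v)))
    return edges

A, B = random_layer(N, M), random_layer(N, M)
alpha = len(A & B) / len(A | B)            # Jaccard index of the two edge sets
print(f"overlap coefficient alpha = {alpha:.4f}")
```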

  6. Social dilemmas in an online social network: The structure and evolution of cooperation

    NASA Astrophysics Data System (ADS)

    Fu, Feng; Chen, Xiaojie; Liu, Lianghuan; Wang, Long

    2007-11-01

    We investigate two paradigms for studying the evolution of cooperation, the Prisoner's Dilemma and the Snowdrift game, in an online friendship network obtained from a social networking site. By structural analysis, it is revealed that the empirical social network has small-world and scale-free properties. Besides, it exhibits an assortative mixing pattern. Then, we study the evolutionary version of the two types of games on it. It is found that cooperation is substantially promoted for small values of the game matrix parameters in both games, whereas the cooperators induced by the underlying network of contacts are dramatically inhibited with increasing values of the game parameters. Further, we explore the role of assortativity in the evolution of cooperation by random edge rewiring. We find that an increasing amount of assortativity will, to a certain extent, diminish the cooperation level. We also show that connected large hubs are capable of maintaining cooperation. The evolution of cooperation on empirical networks is influenced by various network effects in a combined manner, compared with that on model networks. Our results can help understand the cooperative behaviors in human groups and society.

  7. The Systems Biology Markup Language (SBML) Level 3 Package: Qualitative Models, Version 1, Release 1.

    PubMed

    Chaouiya, Claudine; Keating, Sarah M; Berenguier, Duncan; Naldi, Aurélien; Thieffry, Denis; van Iersel, Martijn P; Le Novère, Nicolas; Helikar, Tomáš

    2015-09-04

    Quantitative methods for modelling biological networks require an in-depth knowledge of the biochemical reactions and their stoichiometric and kinetic parameters. In many practical cases, this knowledge is missing. This has led to the development of several qualitative modelling methods using information such as, for example, gene expression data coming from functional genomic experiments. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding qualitative models, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Qualitative Models package for SBML Level 3 adds features so that qualitative models can be directly and explicitly encoded. The approach taken in this package is essentially based on the definition of regulatory or influence graphs. The SBML Qualitative Models package defines the structure and syntax necessary to describe qualitative models that associate discrete levels of activities with entity pools and the transitions between states that describe the processes involved. This is particularly suited to logical models (Boolean or multi-valued) and some classes of Petri net models can be encoded with the approach.

  8. The Application and Future Direction of the SPASE Metadata Standard in the U.S. and Worldwide

    NASA Astrophysics Data System (ADS)

    King, Todd; Thieman, James; Roberts, D. Aaron

    2013-04-01

    The Space Physics Archive Search and Extract (SPASE) Metadata standard for Heliophysics and related data is now an established standard within the NASA-funded space and solar physics community and is spreading to the international groups within that community. Development of SPASE had involved a number of international partners and the current version of the SPASE Metadata Model (version 2.2.2) has been stable since January 2011. The SPASE standard has been adopted by groups such as NASA's Heliophysics division, the Canadian Space Science Data Portal (CSSDP), Canada's AUTUMN network, Japan's Inter-university Upper atmosphere Global Observation NETwork (IUGONET), Centre de Données de la Physique des Plasmas (CDPP), and the near-Earth space data infrastructure for e-Science (ESPAS). In addition, portions of the SPASE dictionary have been modeled in semantic web ontologies for use with reasoners and semantic searches. In development are modifications to accommodate simulation and model data, as well as enhancements to describe data accessibility. These additions will add features to describe a broader range of data types. In keeping with a SPASE principle of back-compatibility, these changes will not affect the data descriptions already generated for instrument-related datasets. We also look at the long term commitment by NASA to support the SPASE effort and how SPASE metadata can enable value-added services.

  9. APINetworks: A general API for the treatment of complex networks in arbitrary computational environments

    NASA Astrophysics Data System (ADS)

    Niño, Alfonso; Muñoz-Caro, Camelia; Reyes, Sebastián

    2015-11-01

    The last decade witnessed a great development of the structural and dynamic study of complex systems described as a network of elements. Therefore, systems can be described as a set of, possibly, heterogeneous entities or agents (the network nodes) interacting in, possibly, different ways (defining the network edges). In this context, it is of practical interest to model and handle not only static and homogeneous networks but also dynamic, heterogeneous ones. Depending on the size and type of the problem, these networks may require different computational approaches involving sequential, parallel or distributed systems with or without the use of disk-based data structures. In this work, we develop an Application Programming Interface (APINetworks) for the modeling and treatment of general networks in arbitrary computational environments. To minimize dependency between components, we decouple the network structure from its function using different packages for grouping sets of related tasks. The structural package, the one in charge of building and handling the network structure, is the core element of the system. In this work, we focus on the structural component of this API. We apply an object-oriented approach that makes use of inheritance and polymorphism. In this way, we can model static and dynamic networks with heterogeneous elements in the nodes and heterogeneous interactions in the edges. In addition, this approach permits a unified treatment of different computational environments. Tests performed on a C++11 version of the structural package show that, on current standard computers, the system can handle, in main memory, directed and undirected linear networks formed by tens of millions of nodes and edges. Our results compare favorably to those of existing tools.
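
    The decoupled, object-oriented structural design described above can be sketched in a few lines, here in Python rather than the package's C++11 and with class names invented for illustration: heterogeneous node and edge types are obtained through inheritance, while the container that stores the structure stays agnostic about them and carries no dynamics.

```python
# Sketch of an object-oriented network structure with heterogeneous nodes and
# edges via inheritance; class names are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int

@dataclass
class Agent(Node):                  # a heterogeneous node type
    state: str = "idle"

@dataclass
class Edge:
    source: int
    target: int

@dataclass
class WeightedEdge(Edge):           # a heterogeneous edge type
    weight: float = 1.0

class Network:
    """Structural container: it only stores nodes and edges, no dynamics."""
    def __init__(self):
        self.nodes, self.edges = {}, []
    def add_node(self, node: Node):
        self.nodes[node.node_id] = node
    def add_edge(self, edge: Edge):
        self.edges.append(edge)
    def neighbors(self, node_id: int):
        return [e.target for e in self.edges if e.source == node_id]

net = Network()
net.add_node(Node(0)); net.add_node(Agent(1, state="active"))
net.add_edge(Edge(0, 1)); net.add_edge(WeightedEdge(1, 0, weight=2.5))
print(net.neighbors(1), type(net.nodes[1]).__name__)
```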

  10. a Numerical Investigation of the Jamming Transition in Traffic Flow on Diluted Planar Networks

    NASA Astrophysics Data System (ADS)

    Achler, Gabriele; Barra, Adriano

    In order to develop a toy model for car's traffic in cities, in this paper we analyze, by means of numerical simulations, the transition among fluid regimes and a congested jammed phase of the flow of kinetically constrained hard spheres in planar random networks similar to urban roads. In order to explore as timescales as possible, at a microscopic level we implement an event driven dynamics as the infinite time limit of a class of already existing model (Follow the Leader) on an Erdos-Renyi two-dimensional graph, the crossroads being accounted by standard Kirchoff density conservations. We define a dynamical order parameter as the ratio among the moving spheres versus the total number and by varying two control parameters (density of the spheres and coordination number of the network) we study the phase transition. At a mesoscopic level it respects an, again suitable, adapted version of the Lighthill-Whitham model, which belongs to the fluid-dynamical approach to the problem. At a macroscopic level, the model seems to display a continuous transition from a fluid phase to a jammed phase when varying the density of the spheres (the amount of cars in a city-like scenario) and a discontinuous jump when varying the connectivity of the underlying network.

  11. The Social Network of Tracer Variations and O(100) Uncertain Photochemical Parameters in the Community Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Labute, M.; Chowdhary, K.; Debusschere, B.; Cameron-Smith, P. J.

    2014-12-01

    Simulating the atmospheric cycles of ozone, methane, and other radiatively important trace gases in global climate models is computationally demanding and requires the use of 100's of photochemical parameters with uncertain values. Quantitative analysis of the effects of these uncertainties on tracer distributions, radiative forcing, and other model responses is hindered by the "curse of dimensionality." We describe efforts to overcome this curse using ensemble simulations and advanced statistical methods. Uncertainties from 95 photochemical parameters in the trop-MOZART scheme were sampled using a Monte Carlo method and propagated through 10,000 simulations of the single column version of the Community Atmosphere Model (CAM). The variance of the ensemble was represented as a network with nodes and edges, and the topology and connections in the network were analyzed using lasso regression, Bayesian compressive sensing, and centrality measures from the field of social network theory. Despite the limited sample size for this high dimensional problem, our methods determined the key sources of variation and co-variation in the ensemble and identified important clusters in the network topology. Our results can be used to better understand the flow of photochemical uncertainty in simulations using CAM and other climate models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and supported by the DOE Office of Science through the Scientific Discovery Through Advanced Computing (SciDAC).
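
    As a generic illustration of using lasso regression to attribute ensemble output variation to a large set of uncertain parameters (synthetic data stands in for the CAM ensemble; this is not the study's analysis code), the sketch below fits a sparse linear surrogate to parameter samples and flags the few parameters with non-negligible coefficients.

```python
# Generic illustration: attribute variation in an ensemble output to many
# uncertain input parameters with lasso regression. Synthetic data only.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_runs, n_params = 2000, 95                  # ensemble size, uncertain parameters
X = rng.normal(size=(n_runs, n_params))      # sampled (standardised) parameters

# Hypothetical "true" response: only a handful of parameters matter.
true_coef = np.zeros(n_params)
true_coef[[3, 17, 42, 60]] = [2.0, -1.5, 1.0, 0.5]
y = X @ true_coef + 0.3 * rng.normal(size=n_runs)   # model response plus noise

model = Lasso(alpha=0.05).fit(X, y)          # sparse linear surrogate
important = np.flatnonzero(np.abs(model.coef_) > 0.1)
print("parameters flagged as influential:", important)
print("their estimated coefficients:", np.round(model.coef_[important], 2))
```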

  12. The Neurona at Home project: Simulating a large-scale cellular automata brain in a distributed computing environment

    NASA Astrophysics Data System (ADS)

    Acedo, L.; Villanueva-Oller, J.; Moraño, J. A.; Villanueva, R.-J.

    2013-01-01

    The Berkeley Open Infrastructure for Network Computing (BOINC) has become the standard open source solution for grid computing on the Internet. Volunteers use their computers to complete a small part of the task assigned by a dedicated server. We have developed a BOINC project called Neurona@Home whose objective is to simulate a cellular automata random network with at least one million neurons. We consider a cellular automata version of the integrate-and-fire model in which excitatory and inhibitory nodes can activate or deactivate neighbor nodes according to a set of probabilistic rules. Our aim is to determine the phase diagram of the model and its behaviour and to compare it with the electroencephalographic signals measured in real brains.
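
    A much smaller stand-in for this kind of model can be written in a few lines of Python: a probabilistic cellular automaton on a random network with excitatory and inhibitory nodes. The update rules, probabilities, and network size below are invented for illustration and do not reproduce Neurona@Home's actual rule set.

```python
# Tiny stand-in for a probabilistic integrate-and-fire cellular automaton on a
# random network; rules, probabilities and sizes are invented for illustration.
import random

random.seed(3)
N, K = 2000, 10                       # neurons, outgoing links per neuron
P_EXC, P_ACT, P_INH = 0.8, 0.15, 0.3  # excitatory fraction, firing / inhibition prob.

excitatory = [random.random() < P_EXC for _ in range(N)]
targets = [random.sample(range(N), K) for _ in range(N)]
active = [random.random() < 0.05 for _ in range(N)]      # initial seed activity

for step in range(20):
    nxt = [False] * N
    for i in range(N):
        if not active[i]:
            continue
        for j in targets[i]:
            if excitatory[i]:
                if random.random() < P_ACT:       # excitatory link may activate
                    nxt[j] = True
            elif random.random() < P_INH:         # inhibitory link may silence
                nxt[j] = False
    active = nxt
    print(f"step {step:2d}: {sum(active)} active neurons")
```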

  13. Pathway Tools version 13.0: integrated software for pathway/genome informatics and systems biology

    PubMed Central

    Paley, Suzanne M.; Krummenacker, Markus; Latendresse, Mario; Dale, Joseph M.; Lee, Thomas J.; Kaipa, Pallavi; Gilham, Fred; Spaulding, Aaron; Popescu, Liviu; Altman, Tomer; Paulsen, Ian; Keseler, Ingrid M.; Caspi, Ron

    2010-01-01

    Pathway Tools is a production-quality software environment for creating a type of model-organism database called a Pathway/Genome Database (PGDB). A PGDB such as EcoCyc integrates the evolving understanding of the genes, proteins, metabolic network and regulatory network of an organism. This article provides an overview of Pathway Tools capabilities. The software performs multiple computational inferences including prediction of metabolic pathways, prediction of metabolic pathway hole fillers and prediction of operons. It enables interactive editing of PGDBs by DB curators. It supports web publishing of PGDBs, and provides a large number of query and visualization tools. The software also supports comparative analyses of PGDBs, and provides several systems biology analyses of PGDBs including reachability analysis of metabolic networks, and interactive tracing of metabolites through a metabolic network. More than 800 PGDBs have been created using Pathway Tools by scientists around the world, many of which are curated DBs for important model organisms. Those PGDBs can be exchanged using a peer-to-peer DB sharing system called the PGDB Registry. PMID:19955237

  14. Causal premise semantics.

    PubMed

    Kaufmann, Stefan

    2013-08-01

    The rise of causality and the attendant graph-theoretic modeling tools in the study of counterfactual reasoning has had resounding effects in many areas of cognitive science, but it has thus far not permeated the mainstream in linguistic theory to a comparable degree. In this study I show that a version of the predominant framework for the formal semantic analysis of conditionals, Kratzer-style premise semantics, allows for a straightforward implementation of the crucial ideas and insights of Pearl-style causal networks. I spell out the details of such an implementation, focusing especially on the notions of intervention on a network and backtracking interpretations of counterfactuals. Copyright © 2013 Cognitive Science Society, Inc.

  15. The Mpi-M Aerosol Climatology (MAC)

    NASA Astrophysics Data System (ADS)

    Kinne, S.

    2014-12-01

    Monthly gridded global data-sets for aerosol optical properties (AOD, SSA and g) and for aerosol microphysical properties (CCN and IN) offer a (less complex) alternate path to include aerosol radiative effects and aerosol impacts on cloud microphysics in global simulations. Based on merging AERONET sun-/sky-photometer data onto background maps provided by AeroCom phase 1 modeling output, the MPI-M Aerosol Climatology (MAC) version 1 was developed and applied in IPCC simulations with ECHAM and as an ancillary data-set in satellite-based global data-sets. An updated version 2 of this climatology will be presented, now applying central values from the more recent AeroCom phase 2 modeling and utilizing the better global coverage of trusted sun-photometer data, including statistics from the Maritime Aerosol Network (MAN). Applications include spatial distributions of estimates for aerosol direct and aerosol indirect radiative effects.

  16. GRN2SBML: automated encoding and annotation of inferred gene regulatory networks complying with SBML.

    PubMed

    Vlaic, Sebastian; Hoffmann, Bianca; Kupfer, Peter; Weber, Michael; Dräger, Andreas

    2013-09-01

    GRN2SBML automatically encodes gene regulatory networks derived from several inference tools in systems biology markup language. Providing a graphical user interface, the networks can be annotated via the simple object access protocol (SOAP)-based application programming interface of BioMart Central Portal and minimum information required in the annotation of models registry. Additionally, we provide an R-package, which processes the output of supported inference algorithms and automatically passes all required parameters to GRN2SBML. Therefore, GRN2SBML closes a gap in the processing pipeline between the inference of gene regulatory networks and their subsequent analysis, visualization and storage. GRN2SBML is freely available under the GNU Public License version 3 and can be downloaded from http://www.hki-jena.de/index.php/0/2/490. General information on GRN2SBML, examples and tutorials are available at the tool's web page.

  17. Water Security Toolkit User Manual: Version 1.3

    EPA Pesticide Factsheets

    The Water Security Toolkit (WST) is a suite of tools that help provide the information necessary to make good decisions resulting in the minimization of further human exposure to contaminants, and the maximization of the effectiveness of intervention strategies. WST assists in the evaluation of multiple response actions in order to select the most beneficial consequence management strategy. It includes hydraulic and water quality modeling software and optimization methodologies to identify: (1) sensor locations to detect contamination, (2) locations in the network in which the contamination was introduced, (3) hydrants to remove contaminated water from the distribution system, (4) locations in the network to inject decontamination agents to inactivate, remove or destroy contaminants, (5) locations in the network to take grab samples to confirm contamination or cleanup, and (6) valves to close in order to isolate contaminated areas of the network.

  18. The NERC Vocabulary Server: Version 2.0

    NASA Astrophysics Data System (ADS)

    Leadbetter, A. M.; Lowry, R. K.

    2012-12-01

    The Natural Environment Research Council (NERC) Vocabulary Server (NVS) has been used to publish controlled vocabularies of terms relevant to marine environmental sciences since 2006 (version 0), with version 1 being introduced in 2007. It has been used for metadata mark-up with verifiable content, populating dynamic drop-down lists, semantic cross-walk between metadata schemata, so-called smart search, and the semantic enablement of Open Geospatial Consortium (OGC) Web Processing Services in the NERC Data Grid and the European Commission SeaDataNet, Geo-Seas, and European Marine Observation and Data Network (EMODnet) projects. The NVS is based on the Simple Knowledge Organization System (SKOS) model. SKOS is based on the "concept", which it defines as a "unit of thought", that is, an idea or notion such as "oil spill". Following a version change for SKOS in 2009 there was a desire to upgrade the NVS to incorporate the changes. This version of SKOS introduces the ability to aggregate concepts in both collections and schemes. The design of version 2 of the NVS uses both types of aggregation: schemes for the discovery of content through hierarchical thesauri and collections for the publication and addressing of content. Other desired changes from version 1 of the NVS included: the removal of the potential for multiple identifiers for the same concept, to ensure consistent addressing of concepts; the addition of content and technical governance information in the payload documents, to provide an audit trail to users of NVS content; the removal of XML snippets from concept definitions, in order to correctly validate XML serializations of the SKOS; the addition of the ability to map into external knowledge organization systems, in order to extend the knowledge base; a more truly RESTful approach to URL access to the NVS, to make the development of applications on top of the NVS easier; and support for multiple human languages, to increase the user base of the NVS. Version 2 of the NVS (NVS2.0) underpins the semantic layer for the Open Service Network for Marine Environmental Data (NETMAR) project, funded by the European Commission under the Seventh Framework Programme. Within NETMAR, NVS2.0 has been used for semantic validation of inputs to chained OGC Web Processing Services, smart discovery of data and services, and integration of data from distributed nodes of the International Coastal Atlas Network. Since its deployment, NVS2.0 has been adopted within the European SeaDataNet community's software products, which has significantly increased the usage of the NVS2.0 Application Programming Interface (API), as illustrated in Table 1. Here we present the results of upgrading the NVS to version 2 and show applications which have been built on top of the NVS2.0 API, including a SPARQL endpoint and a hierarchical catalogue of oceanographic hardware. [Table 1. NVS2.0 API usage by month from 467 unique IP addresses.]

  19. A Plan Recognition Model for Subdialogues in Conversations.

    DTIC Science & Technology

    1984-11-01

    82-K-0193. A simplified, shortened version appears in the Proceedings of the 10th International Conference on Computational Linguistics, Stanford... linguistic results from such work. Consider the following two dialogue fragments. Dialogue 1 was collected at an information booth in a train station in...network structures [Sidner and Bates, 1983]. Unlike Dialogue 1, the system's interaction with the user is primarily non-linguistic, with utterances only

  20. Corrigendum to "Matrix-algebra-based calculations of the time evolution of the binary spin-bath model for magnetization transfer" [J. Magn. Reson. 230 (2013) 88-97]

    NASA Astrophysics Data System (ADS)

    Müller, Dirk K.; Pampel, André; Möller, Harald E.

    2015-12-01

    In the print version of this article initially published, reference to a funding source was missing. The following information should be added to the Acknowledgements section: This work was funded (in part) by the Helmholtz Alliance ICEMED-Imaging and Curing Environmental Metabolic Diseases, through the Initiative and Networking Fund of the Helmholtz Association.

  1. A report on FY06 IPv6 deployment activities and issues at Sandia National Laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tolendino, Lawrence F.; Eldridge, John M.; Hu, Tan Chang

    2006-06-01

    Internet Protocol version 4 (IPv4) has been a mainstay of both the Internet and corporate networks for delivering network packets to the desired destination. However, rapid proliferation of network appliances, evolution of corporate networks, and the expanding Internet have begun to stress the limitations of the protocol. Internet Protocol version 6 (IPv6) is the replacement protocol that overcomes the constraints of IPv4. IPv6 deployment in government network backbones has been mandated to occur by 2008. This paper explores the readiness of the Sandia National Laboratories' network backbone to support IPv6, the issues that must be addressed before a deployment begins, and recommends the next steps to take to comply with government mandates. The paper describes a joint work effort of the Sandia National Laboratories ASC WAN project team and members of the System Analysis & Trouble Resolution and Network System Design & Implementation Departments.

  2. Resolution of ranking hierarchies in directed networks.

    PubMed

    Letizia, Elisa; Barucca, Paolo; Lillo, Fabrizio

    2018-01-01

    Identifying hierarchies and rankings of nodes in directed graphs is fundamental in many applications such as social network analysis, biology, economics, and finance. A recently proposed method identifies the hierarchy by finding the ordered partition of nodes which minimises a score function, termed agony. This function penalises the links violating the hierarchy in a way depending on the strength of the violation. To investigate the resolution of ranking hierarchies we introduce an ensemble of random graphs, the Ranked Stochastic Block Model. We find that agony may fail to identify hierarchies when the structure is not strong enough and the size of the classes is small with respect to the whole network. We analytically characterise the resolution threshold and we show that an iterated version of agony can partly overcome this resolution limit.

  3. Resolution of ranking hierarchies in directed networks

    PubMed Central

    Barucca, Paolo; Lillo, Fabrizio

    2018-01-01

    Identifying hierarchies and rankings of nodes in directed graphs is fundamental in many applications such as social network analysis, biology, economics, and finance. A recently proposed method identifies the hierarchy by finding the ordered partition of nodes which minimises a score function, termed agony. This function penalises the links violating the hierarchy in a way depending on the strength of the violation. To investigate the resolution of ranking hierarchies we introduce an ensemble of random graphs, the Ranked Stochastic Block Model. We find that agony may fail to identify hierarchies when the structure is not strong enough and the size of the classes is small with respect to the whole network. We analytically characterise the resolution threshold and we show that an iterated version of agony can partly overcome this resolution limit. PMID:29394278
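
    To make the score function concrete, the brute-force sketch below uses a commonly cited definition of the agony of an edge (u, v) under a ranking r, namely max(0, r(u) - r(v) + 1), and exhaustively searches rankings of a toy graph. It illustrates the quantity being minimised, not the efficient solvers discussed in the paper.

```python
# Brute-force illustration of agony minimisation on a toy directed graph.
# Edge (u, v) with ranks r contributes max(0, r[u] - r[v] + 1) (a commonly used
# definition); real solvers are far more efficient than exhaustive search.
from itertools import product

edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"), ("d", "a")]
nodes = sorted({n for e in edges for n in e})

def agony(rank):
    return sum(max(0, rank[u] - rank[v] + 1) for u, v in edges)

best = None
for levels in product(range(len(nodes)), repeat=len(nodes)):
    rank = dict(zip(nodes, levels))
    score = agony(rank)
    if best is None or score < best[0]:
        best = (score, rank)

print("minimum agony:", best[0])
print("one optimal ranking:", best[1])
```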

  4. [Severity classification of chronic obstructive pulmonary disease based on deep learning].

    PubMed

    Ying, Jun; Yang, Ceyuan; Li, Quanzheng; Xue, Wanguo; Li, Tanshi; Cao, Wenzhe

    2017-12-01

    In this paper, a deep learning method is proposed to build an automatic classification algorithm for the severity of chronic obstructive pulmonary disease. Large-sample clinical data used as input features were analyzed for their weights in classification. Through feature selection, model training, parameter optimization and model testing, a classification prediction model based on a deep belief network was built to predict the severity classification criteria issued by the Global Initiative for Chronic Obstructive Lung Disease (GOLD). We obtained accuracy over 90% in prediction for two different standardized versions of the severity criteria, issued in 2007 and 2011 respectively. Moreover, we also obtained the contribution ranking of the different input features by analyzing the model coefficient matrix, and confirmed that there was a certain degree of agreement between the more contributive input features and clinical diagnostic knowledge. The validity of the deep belief network model was supported by this result. This study provides an effective solution for the application of deep learning methods in automatic diagnostic decision making.

  5. Incorporating seismic phase correlations into a probabilistic model of global-scale seismology

    NASA Astrophysics Data System (ADS)

    Arora, Nimar

    2013-04-01

    We present a probabilistic model of seismic phases whereby the attributes of the body-wave phases are correlated to those of the first arriving P phase. This model has been incorporated into NET-VISA (Network processing Vertically Integrated Seismic Analysis), a probabilistic generative model of seismic events, their transmission, and detection on a global seismic network. In the earlier version of NET-VISA, seismic phases were assumed to be independent of each other. Although this didn't affect the quality of the inferred seismic bulletin for the most part, it did result in a few instances of anomalous phase association, for example, an S phase with a smaller slowness than the corresponding P phase. We demonstrate that the phase attributes are indeed highly correlated; for example, the uncertainty in the S phase travel time is significantly reduced given the P phase travel time. Our new model exploits these correlations to produce better calibrated probabilities for the events, as well as fewer anomalous associations.

  6. Random walk in degree space and the time-dependent Watts-Strogatz model

    NASA Astrophysics Data System (ADS)

    Casa Grande, H. L.; Cotacallapa, M.; Hase, M. O.

    2017-01-01

    In this work, we propose a scheme that provides an analytical estimate for the time-dependent degree distribution of some networks. This scheme maps the problem into a random walk in degree space, and then we choose the paths that are responsible for the dominant contributions. The method is illustrated on the dynamical versions of the Erdős-Rényi and Watts-Strogatz graphs, which were introduced as static models in the original formulation. We have succeeded in obtaining an analytical form for the dynamical Watts-Strogatz model, which is asymptotically exact for some regimes.

  7. Random walk in degree space and the time-dependent Watts-Strogatz model.

    PubMed

    Casa Grande, H L; Cotacallapa, M; Hase, M O

    2017-01-01

    In this work, we propose a scheme that provides an analytical estimate for the time-dependent degree distribution of some networks. This scheme maps the problem into a random walk in degree space, and then we choose the paths that are responsible for the dominant contributions. The method is illustrated on the dynamical versions of the Erdős-Rényi and Watts-Strogatz graphs, which were introduced as static models in the original formulation. We have succeeded in obtaining an analytical form for the dynamical Watts-Strogatz model, which is asymptotically exact in some regimes.

  8. Towards Meaningful Learning through Digital Video Supported, Case Based Teaching

    ERIC Educational Resources Information Center

    Hakkarainen, Paivi; Saarelainen, Tarja; Ruokamo, Heli

    2007-01-01

    This paper reports an action research case study in which a traditional lecture-based, face-to-face "Network Management" course at the University of Lapland's Faculty of Social Sciences was developed into two different course versions using case-based teaching: a face-to-face version and an online version. In the face-to-face…

  9. Multi-species Identification of Polymorphic Peptide Variants via Propagation in Spectral Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Na, Seungjin; Payne, Samuel H.; Bandeira, Nuno

    The spectral networks approach enables the detection of pairs of spectra from related peptides and thus allows for the propagation of annotations from identified peptides to unidentified spectra. Beyond allowing for unbiased discovery of unexpected post-translational modifications, spectral networks are also applicable to multi-species comparative proteomics or metaproteomics to identify numerous orthologous versions of a protein. We present algorithmic and statistical advances in spectral networks that have made it possible to rigorously assess the statistical significance of spectral pairs and accurately estimate the error rate of identifications via propagation. In the analysis of three related Cyanothece species, a model organism for biohydrogen production, spectral networks identified peptides with highly divergent sequences with up to dozens of variants per peptide, including many novel peptides in species that lack a sequenced genome. Furthermore, spectral networks strongly suggested the presence of novel peptides even in genomically characterized species (i.e. missing from databases) in that a significant portion of unidentified multi-species networks included at least two polymorphic peptide variants.

  10. Network approach to patterns in stratocumulus clouds

    NASA Astrophysics Data System (ADS)

    Glassmeier, Franziska; Feingold, Graham

    2017-10-01

    Stratocumulus clouds (Sc) have a significant impact on the amount of sunlight reflected back to space, with important implications for Earth’s climate. Representing Sc and their radiative impact is one of the largest challenges for global climate models. Sc fields self-organize into cellular patterns and thus lend themselves to analysis and quantification in terms of natural cellular networks. Based on large-eddy simulations of Sc fields, we present a first analysis of the geometric structure and self-organization of Sc patterns from this network perspective. Our network analysis shows that the Sc pattern is scale-invariant as a consequence of entropy maximization that is known as Lewis’s Law (scaling parameter: 0.16) and is largely independent of the Sc regime (cloud-free vs. cloudy cell centers). Cells are, on average, hexagonal with a neighbor number variance of about 2, and larger cells tend to be surrounded by smaller cells, as described by an Aboav-Weaire parameter of 0.9. The network structure is neither completely random nor characteristic of natural convection. Instead, it emerges from Sc-specific versions of cell division and cell merging that are shaped by cell expansion. This is shown with a heuristic model of network dynamics that incorporates our physical understanding of cloud processes.
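
    The neighbour-number statistics quoted above (variance of about 2, Aboav-Weaire behaviour) can be computed directly from a cell-adjacency graph; the sketch below shows one assumed way to do so with networkx and numpy, using a toy random graph rather than the authors' simulated cloud fields.

        # Minimal sketch (assumed workflow): given a graph whose nodes are cloud
        # cells and whose edges join neighbouring cells, compute the mean and
        # variance of the neighbour number and the Aboav-Weaire trend m(n),
        # i.e. the mean neighbour count of the cells surrounding an n-sided cell.
        import networkx as nx
        import numpy as np

        def cell_statistics(G):
            deg = dict(G.degree())
            n = np.array(list(deg.values()), dtype=float)
            m_of_n = {}
            for node, d in deg.items():
                if d == 0:
                    continue                      # skip isolated cells
                m_of_n.setdefault(d, []).append(np.mean([deg[v] for v in G[node]]))
            m_of_n = {k: float(np.mean(v)) for k, v in sorted(m_of_n.items())}
            return n.mean(), n.var(), m_of_n

        # Toy usage on a small random graph standing in for a cell network.
        G = nx.gnm_random_graph(50, 150, seed=1)
        mean_n, var_n, m_of_n = cell_statistics(G)
        print(round(mean_n, 2), round(var_n, 2), m_of_n)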

  11. Cooperation in N-person evolutionary snowdrift game in scale-free Barabási Albert networks

    NASA Astrophysics Data System (ADS)

    Lee, K. H.; Chan, Chun-Him; Hui, P. M.; Zheng, Da-Fang

    2008-09-01

    Cooperation in the N-person evolutionary snowdrift game (NESG) is studied in scale-free Barabási-Albert (BA) networks. Due to the inhomogeneity of the network, two versions of the NESG are proposed and studied. In a model where the size of the competing group varies from agent to agent, the fraction of cooperators drops as a function of the payoff parameter. The networking effect is studied via the fraction of cooperative agents among nodes with a particular degree. For small payoff parameters, it is found that small-k agents are predominantly cooperators, while large-k agents are predominantly non-cooperators. Studying the spatial correlation reveals that cooperative agents avoid being nearest neighbors and that the correlation disappears beyond the next-nearest neighbors. The behavior can be explained in terms of the networking effect and the payoffs. In another model with a fixed size of competing groups, the fraction of cooperators can show a non-monotonic behavior in the regime of small payoff parameters. This non-trivial behavior is found to be a combined effect of the many agents with the smallest degree in the BA network and the increasing fraction of cooperators among these agents with the payoff for small payoffs.
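
    A hedged sketch of the group payoff rule behind the model with degree-dependent group sizes is given below; the exact convention (benefit b shared by all, cost c split among cooperators) is a common formulation assumed for illustration rather than quoted from this record.

        # Minimal sketch of N-person snowdrift payoffs in one competing group.
        # Assumed convention: if at least one member cooperates, every member
        # receives the benefit b and the cost c is split among the cooperators.
        def nesg_payoff(is_cooperator, n_cooperators, b=1.0, c=0.5):
            if n_cooperators == 0:
                return 0.0                      # nobody clears the snowdrift
            if is_cooperator:
                return b - c / n_cooperators    # benefit minus shared cost
            return b                            # defector free-rides

        # On a Barabasi-Albert graph, the group of a focal node can be taken as
        # the node plus its k neighbours, so group size grows with degree k.
        print(nesg_payoff(True, 2), nesg_payoff(False, 2), nesg_payoff(False, 0))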

  12. Network approach to patterns in stratocumulus clouds.

    PubMed

    Glassmeier, Franziska; Feingold, Graham

    2017-10-03

    Stratocumulus clouds (Sc) have a significant impact on the amount of sunlight reflected back to space, with important implications for Earth's climate. Representing Sc and their radiative impact is one of the largest challenges for global climate models. Sc fields self-organize into cellular patterns and thus lend themselves to analysis and quantification in terms of natural cellular networks. Based on large-eddy simulations of Sc fields, we present a first analysis of the geometric structure and self-organization of Sc patterns from this network perspective. Our network analysis shows that the Sc pattern is scale-invariant as a consequence of entropy maximization that is known as Lewis's Law (scaling parameter: 0.16) and is largely independent of the Sc regime (cloud-free vs. cloudy cell centers). Cells are, on average, hexagonal with a neighbor number variance of about 2, and larger cells tend to be surrounded by smaller cells, as described by an Aboav-Weaire parameter of 0.9. The network structure is neither completely random nor characteristic of natural convection. Instead, it emerges from Sc-specific versions of cell division and cell merging that are shaped by cell expansion. This is shown with a heuristic model of network dynamics that incorporates our physical understanding of cloud processes.

  13. Network approach to patterns in stratocumulus clouds

    PubMed Central

    Feingold, Graham

    2017-01-01

    Stratocumulus clouds (Sc) have a significant impact on the amount of sunlight reflected back to space, with important implications for Earth’s climate. Representing Sc and their radiative impact is one of the largest challenges for global climate models. Sc fields self-organize into cellular patterns and thus lend themselves to analysis and quantification in terms of natural cellular networks. Based on large-eddy simulations of Sc fields, we present a first analysis of the geometric structure and self-organization of Sc patterns from this network perspective. Our network analysis shows that the Sc pattern is scale-invariant as a consequence of entropy maximization that is known as Lewis’s Law (scaling parameter: 0.16) and is largely independent of the Sc regime (cloud-free vs. cloudy cell centers). Cells are, on average, hexagonal with a neighbor number variance of about 2, and larger cells tend to be surrounded by smaller cells, as described by an Aboav–Weaire parameter of 0.9. The network structure is neither completely random nor characteristic of natural convection. Instead, it emerges from Sc-specific versions of cell division and cell merging that are shaped by cell expansion. This is shown with a heuristic model of network dynamics that incorporates our physical understanding of cloud processes. PMID:28904097

  14. Reconstruction of the regulatory network for Bacillus subtilis and reconciliation with gene expression data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faria, Jose P.; Overbeek, Ross; Taylor, Ronald C.

    Here, we introduce a manually constructed and curated regulatory network model that describes the current state of knowledge of transcriptional regulation of B. subtilis. The model corresponds to an updated and enlarged version of the regulatory model of central metabolism originally proposed in 2008. We extended the original network to the whole genome by integration of information from DBTBS, a compendium of regulatory data that includes promoters, transcription factors (TFs), binding sites, motifs and regulated operons. Additionally, we consolidated our network with all the information on regulation included in the SporeWeb and Subtiwiki community-curated resources on B. subtilis. Finally, we reconciled our network with data from RegPrecise, which recently released their own less comprehensive reconstruction of the regulatory network for B. subtilis. Our model describes 275 regulators and their target genes, representing 30 different mechanisms of regulation such as TFs, RNA switches, riboswitches and small regulatory RNAs. Overall, regulatory information is included in the model for approximately 2500 of the ~4200 genes in B. subtilis 168. In an effort to further expand our knowledge of B. subtilis regulation, we reconciled our model with expression data. For this process, we reconstructed the Atomic Regulons (ARs) for B. subtilis, which are the sets of genes that share the same “ON” and “OFF” gene expression profiles across multiple samples of experimental data. We show how atomic regulons for B. subtilis are able to capture many sets of genes corresponding to regulated operons in our manually curated network. Additionally, we demonstrate how atomic regulons can be used to help expand or validate the knowledge of the regulatory networks by looking at highly correlated genes in the ARs for which regulatory information is lacking. During this process, we were also able to infer novel stimuli for hypothetical genes by exploring the genome expression metadata relating to experimental conditions, gaining insights into novel biology.

  15. Reconstruction of the regulatory network for Bacillus subtilis and reconciliation with gene expression data

    DOE PAGES

    Faria, Jose P.; Overbeek, Ross; Taylor, Ronald C.; ...

    2016-03-18

    Here, we introduce a manually constructed and curated regulatory network model that describes the current state of knowledge of transcriptional regulation of B. subtilis. The model corresponds to an updated and enlarged version of the regulatory model of central metabolism originally proposed in 2008. We extended the original network to the whole genome by integration of information from DBTBS, a compendium of regulatory data that includes promoters, transcription factors (TFs), binding sites, motifs and regulated operons. Additionally, we consolidated our network with all the information on regulation included in the SporeWeb and Subtiwiki community-curated resources on B. subtilis. Finally, we reconciled our network with data from RegPrecise, which recently released their own less comprehensive reconstruction of the regulatory network for B. subtilis. Our model describes 275 regulators and their target genes, representing 30 different mechanisms of regulation such as TFs, RNA switches, riboswitches and small regulatory RNAs. Overall, regulatory information is included in the model for approximately 2500 of the ~4200 genes in B. subtilis 168. In an effort to further expand our knowledge of B. subtilis regulation, we reconciled our model with expression data. For this process, we reconstructed the Atomic Regulons (ARs) for B. subtilis, which are the sets of genes that share the same “ON” and “OFF” gene expression profiles across multiple samples of experimental data. We show how atomic regulons for B. subtilis are able to capture many sets of genes corresponding to regulated operons in our manually curated network. Additionally, we demonstrate how atomic regulons can be used to help expand or validate the knowledge of the regulatory networks by looking at highly correlated genes in the ARs for which regulatory information is lacking. During this process, we were also able to infer novel stimuli for hypothetical genes by exploring the genome expression metadata relating to experimental conditions, gaining insights into novel biology.

  16. Parallel and orthogonal stimulus in ultradiluted neural networks

    NASA Astrophysics Data System (ADS)

    Sobral, G. A., Jr.; Vieira, V. M.; Lyra, M. L.; da Silva, C. R.

    2006-10-01

    Extending a model due to Derrida, Gardner, and Zippelius, we have studied the recognition ability of an extreme and asymmetrically diluted version of the Hopfield model for associative memory by including the effect of a stimulus in the dynamics of the system. We obtain exact results for the dynamic evolution of the average network superposition. The stimulus field was considered proportional to the overlap of the state of the system with a particular stimulated pattern. Two situations were analyzed, namely, the external stimulus acting on the initialization pattern (parallel stimulus) and the external stimulus acting on a pattern orthogonal to the initialization one (orthogonal stimulus). In both cases, we obtained the complete phase diagram in the parameter space composed of the stimulus field, thermal noise, and network capacity. Our results show that the system improves its recognition ability for the parallel stimulus. For the orthogonal stimulus, two recognition phases emerge, with the system locking onto the initialization or the stimulated pattern. We confront our analytical results with numerical simulations for the noiseless case T=0.
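
    The sketch below is a generic, illustrative implementation of the mechanism described here: an asymmetrically diluted Hopfield network updated at zero temperature, with an external field proportional to the overlap with a stimulated pattern (the parallel-stimulus case). The dilution level, field strength and pattern load are arbitrary choices, not the values analysed in the paper.

        # Illustrative sketch (not the paper's exact equations): zero-temperature
        # dynamics of an asymmetrically diluted Hopfield network with a stimulus
        # field h proportional to the overlap with pattern 0 (parallel stimulus).
        import numpy as np

        rng = np.random.default_rng(5)
        N, P, C, h = 2000, 5, 10, 0.3            # sites, patterns, in-degree, field
        xi = rng.choice([-1, 1], size=(P, N))    # stored random patterns
        J = np.zeros((N, N))
        for i in range(N):                       # C random incoming couplings per site
            idx = rng.choice(N, size=C, replace=False)
            J[i, idx] = (xi[:, i][:, None] * xi[:, idx]).sum(axis=0) / C

        S = xi[0] * np.where(rng.random(N) < 0.8, 1, -1)   # noisy initialization
        for _ in range(15):
            m = (S * xi[0]).mean()               # overlap with the stimulated pattern
            field = J @ S + h * m * xi[0]        # local field plus parallel stimulus
            S = np.where(field >= 0, 1, -1)      # deterministic (T = 0) update
        print(round((S * xi[0]).mean(), 3))      # final overlap with the stimulated pattern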

  17. Conversion of National Health Insurance Service-National Sample Cohort (NHIS-NSC) Database into Observational Medical Outcomes Partnership-Common Data Model (OMOP-CDM).

    PubMed

    You, Seng Chan; Lee, Seongwon; Cho, Soo-Yeon; Park, Hojun; Jung, Sungjae; Cho, Jaehyeong; Yoon, Dukyong; Park, Rae Woong

    2017-01-01

    It is increasingly necessary to generate medical evidence applicable to Asian people compared to those in Western countries. Observational Health Data Sciences and Informatics (OHDSI) is an international collaborative which aims to facilitate generating high-quality evidence by creating and applying open-source data analytic solutions to a large network of health databases across countries. We aimed to incorporate Korean nationwide cohort data into the OHDSI network by converting the national sample cohort into the Observational Medical Outcomes Partnership-Common Data Model (OMOP-CDM). The data of 1.13 million subjects were converted to OMOP-CDM, resulting in an average conversion rate of 99.1%. ACHILLES, an open-source OMOP-CDM-based data profiling tool, was run on the converted database to visualize a data-driven characterization and assess the quality of the data. The OMOP-CDM version of the National Health Insurance Service-National Sample Cohort (NHIS-NSC) can be a valuable tool for multiple aspects of medical research through incorporation into the OHDSI research network.

  18. Correlations induced by depressing synapses in critically self-organized networks with quenched dynamics

    NASA Astrophysics Data System (ADS)

    Campos, João Guilherme Ferreira; Costa, Ariadne de Andrade; Copelli, Mauro; Kinouchi, Osame

    2017-04-01

    In a recent work, mean-field analysis and computer simulations were employed to analyze critical self-organization in networks of excitable cellular automata where randomly chosen synapses in the network were depressed after each spike (the so-called annealed dynamics). Calculations agree with simulations of the annealed version, showing that the nominal branching ratio σ converges to unity in the thermodynamic limit, as expected of a self-organized critical system. However, the question remains whether the same results apply to the biological case where only the synapses of firing neurons are depressed (the so-called quenched dynamics). We show that simulations of the quenched model yield significant deviations from σ =1 due to spatial correlations. However, the model is shown to be critical, as the largest eigenvalue of the synaptic matrix approaches unity in the thermodynamic limit, that is, λc = 1. We also study the finite-size effects near the critical state as a function of the parameters of the synaptic dynamics.
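
    A minimal simulation sketch of the quenched rule discussed here (depressing only the outgoing synapses of neurons that fired, with slow recovery) is given below; the network layout, depression fraction and recovery time are illustrative assumptions, not the parameters of the study.

        # Illustrative sketch of the quenched synaptic-depression rule: only the
        # outgoing synapses of neurons that actually fired are weakened, and all
        # synapses recover slowly towards their baseline value.
        import numpy as np

        rng = np.random.default_rng(0)
        N, K = 1000, 10                          # neurons, out-degree
        u, tau = 0.1, 1000.0                     # depression fraction, recovery time
        W0 = 1.0 / K                             # baseline transmission probability
        targets = rng.integers(0, N, size=(N, K))
        W = np.full((N, K), W0)

        active = rng.random(N) < 0.01            # seed a little activity
        for _ in range(500):
            drive = np.zeros(N, dtype=bool)
            for i in np.flatnonzero(active):
                fired = rng.random(K) < W[i]     # probabilistic transmission
                drive[targets[i, fired]] = True
                W[i] *= 1.0 - u                  # quenched rule: depress firing neuron i
            W += (W0 - W) / tau                  # slow recovery everywhere
            active = drive                       # excitable units spike, then rest
        print(active.sum(), "active sites after 500 steps")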

  19. The NERC Vocabulary Server: Version 2.0

    NASA Astrophysics Data System (ADS)

    Leadbetter, A.; Lowry, R.; Clements, O.

    2012-04-01

    The NERC Vocabulary Server (NVS) has been used to publish controlled vocabularies of terms relevant to the marine environmental sciences domain since 2006 (version 0), with version 1 being introduced in 2007. It has been used for • metadata mark-up with verifiable content • populating dynamic drop-down lists • semantic cross-walk between metadata schemata • so-called smart search • and the semantic enablement of Open Geospatial Consortium Web Processing Services in projects including: the NERC Data Grid; SeaDataNet; Geo-Seas; and the European Marine Observation and Data Network (EMODnet). The NVS is based on the Simple Knowledge Organization System (SKOS) model, and following a version change for SKOS in 2009 there was a desire to upgrade the NVS to incorporate the changes in this standard. SKOS is based on the "concept", which it defines as a "unit of thought", that is, an idea or notion such as "oil spill". The latest version of SKOS introduces the ability to aggregate concepts in both collections and schemes. The design of version 2 of the NVS uses both types of aggregation: schemes for the discovery of content through hierarchical thesauri, and collections for the publication and addressing of content. Other desired changes from version 1 of the NVS included: • the removal of the potential for multiple Uniform Resource Names for the same concept, to ensure consistent identification of concepts • the addition of content and technical governance information in the payload documents, to provide an audit trail to users of NVS content • the removal of XML snippets from concept definitions, in order to correctly validate XML serializations of the SKOS • the addition of the ability to map into external knowledge organization systems, in order to extend the knowledge base • a more truly RESTful approach to URL access to the NVS, to make the development of applications on top of the NVS easier • and support for multiple human languages, to increase the user base of the NVS. Version 2 of the NVS underpins the semantic layer for the Open Service Network for Marine Environmental Data (NETMAR) project, funded by the European Commission under the Seventh Framework Programme. Here we present the results of upgrading the NVS from version 1 to 2 and show applications which have been built on top of the NVS using its Application Programming Interface, including a demonstration version of a SPARQL interface.

  20. An Overview of the Micro Pulse Lidar Network (MPLNET)

    NASA Technical Reports Server (NTRS)

    Welton, Ellsworth

    2010-01-01

    The NASA Micro Pulse Lidar Network (MPLNET) is a federated network of Micro Pulse Lidar (MPL) systems designed to measure aerosol and cloud vertical structure continuously, day and night, over the long time periods required to contribute to climate change studies and to provide ground validation for models and satellite sensors in the NASA Earth Observing System (EOS). At present, there are eighteen active sites worldwide, and several more in the planning stage. Numerous temporary sites are deployed in support of various field campaigns. Most MPLNET sites are co-located with sites in the NASA Aerosol Robotic Network (AERONET) to provide both column and vertically resolved aerosol and cloud data. MPLNET data and more information on the project are available at http://mplnet.gsfc.nasa.gov . Here we present a summary of the first ten years of MPLNET, along with an overview of our current status, specifically our version two data products and applications. Future network plans will be presented, with a focus on our activities in South East Asia.

  1. Evolution of Boolean networks under selection for a robust response to external inputs yields an extensive neutral space

    NASA Astrophysics Data System (ADS)

    Szejka, Agnes; Drossel, Barbara

    2010-02-01

    We study the evolution of Boolean networks as model systems for gene regulation. Inspired by biological networks, we select simultaneously for robust attractors and for the ability to respond to external inputs by changing the attractor. Mutations change the connections between the nodes and the update functions. In order to investigate the influence of the type of update functions, we perform our simulations with canalizing as well as with threshold functions. We compare the properties of the fitness landscapes that result for different versions of the selection criterion and the update functions. We find that for all studied cases the fitness landscape has a plateau of maximum fitness, meaning that structurally very different networks are able to fulfill the same task and are connected by neutral paths in network (“genotype”) space. We furthermore find a connection between attractor length and mutational robustness, and an extremely long memory of the initial evolutionary stage.
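
    For concreteness, the sketch below implements one of the two update-function classes mentioned above, a threshold rule, and follows a random network until its trajectory closes on an attractor; the connectivity, signed weights and threshold value are illustrative assumptions rather than the evolved networks of the study.

        # Minimal sketch of a Boolean network with threshold update functions:
        # each node sums signed inputs from K regulators and switches on when
        # the sum reaches the threshold (here 1).
        import numpy as np

        rng = np.random.default_rng(1)
        N, K = 20, 3
        inputs = rng.integers(0, N, size=(N, K))     # K regulators per node
        weights = rng.choice([-1, 1], size=(N, K))   # activating / repressing links
        x = rng.integers(0, 2, size=N)               # random initial state

        def step(state):
            s = (weights * state[inputs]).sum(axis=1)
            return (s >= 1).astype(int)              # threshold update

        seen = {}
        for t in range(10_000):                      # iterate until a state repeats
            key = x.tobytes()
            if key in seen:
                print("attractor of length", t - seen[key])
                break
            seen[key] = t
            x = step(x)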

  2. Using the ACR/NEMA standard with TCP/IP and Ethernet

    NASA Astrophysics Data System (ADS)

    Chimiak, William J.; Williams, Rodney C.

    1991-07-01

    There is a need for a consolidated picture archival and communications system (PACS) in hospitals. At the Bowman Gray School of Medicine of Wake Forest University (BGSM), the authors are enhancing the ACR/NEMA Version 2 protocol using UNIX sockets and TCP/IP to greatly improve connectivity. Initially, nuclear medicine studies using gamma cameras are to be sent to PACS. The ACR/NEMA Version 2 protocol provides the functionality of the upper three layers of the open system interconnection (OSI) model in this implementation. The images, imaging equipment information, and patient information are then sent in ACR/NEMA format to a software socket. From there, the data are handed to the TCP/IP protocol, which provides the transport and network service. TCP/IP, in turn, uses the services of IEEE 802.3 (Ethernet) to complete the connectivity. The advantage of this implementation is threefold: (1) Only one I/O port is consumed by numerous nuclear medicine cameras, instead of a physical port for each camera. (2) Standard protocols are used which maximize interoperability with ACR/NEMA compliant PACSs. (3) The use of sockets allows a migration path to the transport and networking services of OSI's TP4 and connectionless network service as well as the high-performance protocol being considered by the American National Standards Institute (ANSI) and the International Standards Organization (ISO) -- the Xpress Transfer Protocol (XTP). The use of sockets also gives access to ANSI's Fiber Distributed Data Interface (FDDI) as well as other high-speed network standards.

  3. Network architectures and circuit function: testing alternative hypotheses in multifunctional networks.

    PubMed

    Leonard, J L

    2000-05-01

    Understanding how species-typical movement patterns are organized in the nervous system is a central question in neurobiology. The current explanations involve 'alphabet' models in which an individual neuron may participate in the circuit for several behaviors but each behavior is specified by a specific neural circuit. However, not all of the well-studied model systems fit the 'alphabet' model. The 'equation' model provides an alternative possibility, whereby a system of parallel motor neurons, each with a unique (but overlapping) field of innervation, can account for the production of stereotyped behavior patterns by variable circuits. That is, it is possible for such patterns to arise as emergent properties of a generalized neural network in the absence of feedback, a simple version of a 'self-organizing' behavioral system. Comparison of systems of identified neurons suggests that the 'alphabet' model may account for most observations where CPGs act to organize motor patterns. Other well-known model systems, involving architectures corresponding to feed-forward neural networks with a hidden layer, may organize patterned behavior in a manner consistent with the 'equation' model. Such architectures are found in the Mauthner and reticulospinal circuits, 'escape' locomotion in cockroaches, CNS control of Aplysia gill, and may also be important in the coordination of sensory information and motor systems in insect mushroom bodies and the vertebrate hippocampus. The hidden layer of such networks may serve as an 'internal representation' of the behavioral state and/or body position of the animal, allowing the animal to fine-tune oriented, or particularly context-sensitive, movements to the prevalent conditions. Experiments designed to distinguish between the two models in cases where they make mutually exclusive predictions provide an opportunity to elucidate the neural mechanisms by which behavior is organized in vivo and in vitro. Copyright 2000 S. Karger AG, Basel

  4. Multifractal geometry in analysis and processing of digital retinal photographs for early diagnosis of human diabetic macular edema.

    PubMed

    Tălu, Stefan

    2013-07-01

    The purpose of this paper is to provide a quantitative assessment of the human retinal vascular network architecture for patients with diabetic macular edema (DME). Multifractal geometry and lacunarity parameters are used in this study. A set of 10 segmented and skeletonized human retinal images, corresponding to both normal (five images) and DME states of the retina (five images), from the DRIVE database was analyzed using the ImageJ software. Statistical analyses were performed using Microsoft Office Excel 2003 and GraphPad InStat software. The human retinal vascular network architecture has a multifractal geometry. The average of the generalized dimensions (Dq) for q = 0, 1, 2 of the normal images (segmented versions) is similar to that of the DME cases (segmented versions). The average of the generalized dimensions (Dq) for q = 0, 1 of the normal images (skeletonized versions) is slightly greater than that of the DME cases (skeletonized versions). However, the average of D2 for the normal images (skeletonized versions) is similar to that of the DME images. The average of the lacunarity parameter, Λ, for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for the DME images (segmented and skeletonized versions). The multifractal and lacunarity analysis provides a non-invasive predictive complementary tool for an early diagnosis of patients with DME.

  5. The Systems Biology Markup Language (SBML) Level 3 Package: Layout, Version 1 Core.

    PubMed

    Gauges, Ralph; Rost, Ursula; Sahle, Sven; Wengler, Katja; Bergmann, Frank T

    2015-06-01

    Many software tools provide facilities for depicting reaction network diagrams in a visual form. Two aspects of such a visual diagram can be distinguished: the layout (i.e.: the positioning and connections) of the elements in the diagram, and the graphical form of the elements (for example, the glyphs used for symbols, the properties of the lines connecting them, and so on). For software tools that also read and write models in SBML (Systems Biology Markup Language) format, a common need is to store the network diagram together with the SBML representation of the model. This in turn raises the question of how to encode the layout and the rendering of these diagrams. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding diagrams, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The Layout package for SBML Level 3 adds the necessary features to SBML so that diagram layouts can be encoded in SBML files, and a companion package called SBML Rendering specifies how the graphical rendering of elements can be encoded. The SBML Layout package is based on the principle that reaction network diagrams should be described as representations of entities such as species and reactions (with direct links to the underlying SBML elements), and not as arbitrary drawings or graphs; for this reason, existing languages for the description of vector drawings (such as SVG) or general graphs (such as GraphML) cannot be used.

  6. The Systems Biology Markup Language (SBML) Level 3 Package: Layout, Version 1 Core.

    PubMed

    Gauges, Ralph; Rost, Ursula; Sahle, Sven; Wengler, Katja; Bergmann, Frank Thomas

    2015-09-04

    Many software tools provide facilities for depicting reaction network diagrams in a visual form. Two aspects of such a visual diagram can be distinguished: the layout (i.e.: the positioning and connections) of the elements in the diagram, and the graphical form of the elements (for example, the glyphs used for symbols, the properties of the lines connecting them, and so on). For software tools that also read and write models in SBML (Systems Biology Markup Language) format, a common need is to store the network diagram together with the SBML representation of the model. This in turn raises the question of how to encode the layout and the rendering of these diagrams. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding diagrams, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The Layout package for SBML Level 3 adds the necessary features to SBML so that diagram layouts can be encoded in SBML files, and a companion package called SBML Rendering specifies how the graphical rendering of elements can be encoded. The SBML Layout package is based on the principle that reaction network diagrams should be described as representations of entities such as species and reactions (with direct links to the underlying SBML elements), and not as arbitrary drawings or graphs; for this reason, existing languages for the description of vector drawings (such as SVG) or general graphs (such as GraphML) cannot be used.

  7. A longitudinal analysis of the attention networks in 6- to 11-year-old children.

    PubMed

    Lewis, Frances C; Reeve, Robert A; Johnson, Katherine A

    2018-02-01

    Attention is critical for everyday functioning. Posner and Petersen's model of attention describes three neural networks involved in attention control-the alerting network for arousal, the orienting network for selecting sensory input and reorienting attention, and the executive network for the regulatory control of attention. No longitudinal research has examined relative change in these networks in children. A modified version of the attention network task (ANT) was used to examine changes in the three attention networks, three times over 12 months, in 114 6-, 8- and 10-year-olds. Findings showed that the alerting network continued to develop over this period, the orienting network had stabilized by 6 years, and the conflict network had largely stabilized by 7 years. The reorienting of attention was also assessed using invalid cues, which showed a similar developmental trajectory to the orienting attention network and had stabilized by 6 years. The results confirm that age 6 to 7 years is a critical period in the development of attention, in particular executive attention. The largest improvement over the evaluation period was between 6 and 7 years; however, subtle changes were found in attention beyond 8 years of age.

  8. Topology design and performance analysis of an integrated communication network

    NASA Technical Reports Server (NTRS)

    Li, V. O. K.; Lam, Y. F.; Hou, T. C.; Yuen, J. H.

    1985-01-01

    A research study on topology design and performance analysis for the Space Station Information System (SSIS) network is conducted. It begins with a survey of existing research efforts in network topology design. Then a new approach to topology design is presented. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. The algorithm for generating subsets is described in detail, and various aspects of the overall design procedure are discussed. Two more efficient versions of this algorithm (applicable in specific situations) are also given. Next, two important aspects of network performance analysis, network reliability and message delays, are discussed. A new model is introduced to study the reliability of a network with dependent failures. For message delays, a collection of formulas from existing research results is given to compute or estimate the delays of messages in a communication network without making the independence assumption. The design algorithm, coded in PASCAL, is included as an appendix.

  9. SiGN-SSM: open source parallel software for estimating gene networks with state space models.

    PubMed

    Tamada, Yoshinori; Yamaguchi, Rui; Imoto, Seiya; Hirose, Osamu; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru

    2011-04-15

    SiGN-SSM is an open-source gene network estimation software package able to run in parallel on PCs and massively parallel supercomputers. The software estimates a state space model (SSM), a statistical dynamic model suitable for analyzing short and/or replicated time series gene expression profiles. SiGN-SSM implements a novel parameter constraint that is effective in stabilizing the estimated models. Also, by using a supercomputer, it is able to determine the gene network structure by a statistical permutation test in a practical time. SiGN-SSM is applicable not only to analyzing temporal regulatory dependencies between genes, but also to extracting the differentially regulated genes from time series expression profiles. SiGN-SSM is distributed under the GNU Affero General Public Licence (GNU AGPL) version 3 and can be downloaded at http://sign.hgc.jp/signssm/. Pre-compiled binaries for some architectures are available in addition to the source code, and pre-installed binaries are also available on the Human Genome Center supercomputer system. The online manual and the supplementary information for SiGN-SSM are available on our web site. tamada@ims.u-tokyo.ac.jp.

  10. Smart-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Venkat K; Palmintier, Bryan S; Hodge, Brian S

    The National Renewable Energy Laboratory (NREL), in collaboration with the Massachusetts Institute of Technology (MIT), Universidad Pontificia Comillas (Comillas-IIT, Spain) and GE Grid Solutions, is working on an ARPA-E GRID DATA project, titled Smart-DS, to create: 1) high-quality, realistic, synthetic distribution network models, and 2) advanced tools for automated scenario generation based on high-resolution weather data and generation growth projections. Through these advancements, the Smart-DS project is envisioned to accelerate the development, testing, and adoption of advanced algorithms, approaches, and technologies for sustainable and resilient electric power systems, especially in the realm of U.S. distribution systems. This talk will present the goals and overall approach of the Smart-DS project, including the process of creating the synthetic distribution datasets using the reference network model (RNM) and the comprehensive validation process to ensure network realism, feasibility, and applicability to advanced use cases. The talk will provide demonstrations of early versions of the synthetic models, along with the lessons learnt from expert engagements to enhance future iterations. Finally, the scenario generation framework, its development plans, and co-ordination with GRID DATA repository teams to house these datasets for public access will also be discussed.

  11. BIOLOGICAL NETWORK EXPLORATION WITH CYTOSCAPE 3

    PubMed Central

    Su, Gang; Morris, John H.; Demchak, Barry; Bader, Gary D.

    2014-01-01

    Cytoscape is one of the most popular open-source software tools for the visual exploration of biomedical networks composed of protein, gene and other types of interactions. It offers researchers a versatile and interactive visualization interface for exploring complex biological interconnections supported by diverse annotation and experimental data, thereby facilitating research tasks such as predicting gene function and pathway construction. Cytoscape provides core functionality to load, visualize, search, filter and save networks, and hundreds of Apps extend this functionality to address specific research needs. The latest generation of Cytoscape (version 3.0 and later) has substantial improvements in function, user interface and performance relative to previous versions. This protocol aims to jump-start new users with specific protocols for basic Cytoscape functions, such as installing Cytoscape and Cytoscape Apps, loading data, visualizing and navigating the network, visualizing network associated data (attributes) and identifying clusters. It also highlights new features that benefit experienced users. PMID:25199793

  12. An Improved, Bias-Reduced Probabilistic Functional Gene Network of Baker's Yeast, Saccharomyces cerevisiae

    PubMed Central

    Lee, Insuk; Li, Zhihua; Marcotte, Edward M.

    2007-01-01

    Background Probabilistic functional gene networks are powerful theoretical frameworks for integrating heterogeneous functional genomics and proteomics data into objective models of cellular systems. Such networks provide syntheses of millions of discrete experimental observations, spanning DNA microarray experiments, physical protein interactions, genetic interactions, and comparative genomics; the resulting networks can then be easily applied to generate testable hypotheses regarding specific gene functions and associations. Methodology/Principal Findings We report a significantly improved version (v. 2) of a probabilistic functional gene network [1] of the baker's yeast, Saccharomyces cerevisiae. We describe our optimization methods and illustrate their effects in three major areas: the reduction of functional bias in network training reference sets, the application of a probabilistic model for calculating confidences in pair-wise protein physical or genetic interactions, and the introduction of simple thresholds that eliminate many false positive mRNA co-expression relationships. Using the network, we predict and experimentally verify the function of the yeast RNA binding protein Puf6 in 60S ribosomal subunit biogenesis. Conclusions/Significance YeastNet v. 2, constructed using these optimizations together with additional data, shows significant reduction in bias and improvements in precision and recall, in total covering 102,803 linkages among 5,483 yeast proteins (95% of the validated proteome). YeastNet is available from http://www.yeastnet.org. PMID:17912365

  13. Feature selection in feature network models: finding predictive subsets of features with the Positive Lasso.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2008-05-01

    A set of features is the basis for the network representation of proximity data achieved by feature network models (FNMs). Features are binary variables that characterize the objects in an experiment, with some measure of proximity as the response variable. Sometimes features are provided by theory and play an important role in the construction of the experimental conditions. In some research settings, the features are not known a priori. This paper shows how to generate features in this situation and how to select an adequate subset of features that takes into account a good compromise between model fit and model complexity, using a new version of least angle regression that restricts coefficients to be non-negative, called the Positive Lasso. It will be shown that features can be generated efficiently with Gray codes that are naturally linked to the FNMs. The model selection strategy makes use of the fact that the FNM can be considered a univariate multiple regression model. A simulation study shows that the proposed strategy leads to satisfactory results if the number of objects is less than or equal to 22. If the number of objects is larger than 22, the number of features selected by our method exceeds the true number of features in some conditions.
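
    The sketch below illustrates the kind of non-negative penalised fit described above, using scikit-learn's least-angle-regression lasso with the positive-coefficient option as a stand-in for the Positive Lasso; the binary feature matrix and response are synthetic and only for illustration.

        # Minimal sketch: lasso via least angle regression restricted to
        # non-negative coefficients, applied to binary candidate features.
        import numpy as np
        from sklearn.linear_model import LassoLars

        rng = np.random.default_rng(2)
        F = rng.integers(0, 2, size=(60, 12)).astype(float)   # objects x binary features
        true_w = np.array([0.8, 0.0, 0.5, 0.0, 0.0, 0.3] + [0.0] * 6)
        y = F @ true_w + 0.05 * rng.standard_normal(60)        # proximity-like response

        model = LassoLars(alpha=0.01, positive=True).fit(F, y)
        selected = np.flatnonzero(model.coef_ > 0)             # features kept by the penalty
        print(selected, np.round(model.coef_[selected], 2))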

  14. Coarse-graining and self-dissimilarity of complex networks

    NASA Astrophysics Data System (ADS)

    Itzkovitz, Shalev; Levitt, Reuven; Kashtan, Nadav; Milo, Ron; Itzkovitz, Michael; Alon, Uri

    2005-01-01

    Can complex engineered and biological networks be coarse-grained into smaller and more understandable versions in which each node represents an entire pattern in the original network? To address this, we define coarse-graining units as connectivity patterns which can serve as the nodes of a coarse-grained network and present algorithms to detect them. We use this approach to systematically reverse-engineer electronic circuits, forming understandable high-level maps from incomprehensible transistor wiring: first, a coarse-grained version in which each node is a gate made of several transistors is established. Then the coarse-grained network is itself coarse-grained, resulting in a high-level blueprint in which each node is a circuit module made of many gates. We apply our approach also to a mammalian protein signal-transduction network, to find a simplified coarse-grained network with three main signaling channels that resemble multi-layered perceptrons made of cross-interacting MAP-kinase cascades. We find that both biological and electronic networks are “self-dissimilar,” with different network motifs at each level. The present approach may be used to simplify a variety of directed and nondirected, natural and designed networks.

  15. A report on IPv6 deployment activities and issues at Sandia National Laboratories:FY2007.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tolendino, Lawrence F.; Eldridge, John M.; Hu, Tan Chang

    2007-06-01

    Internet Protocol version 4 (IPv4) has been a mainstay of both the Internet and corporate networks for delivering network packets to the desired destination. However, the rapid proliferation of network appliances, the evolution of corporate networks, and the expanding Internet have begun to stress the limitations of the protocol. Internet Protocol version 6 (IPv6) is the replacement protocol that overcomes the constraints of IPv4. As the emerging Internet network protocol, SNL needs to prepare for its eventual deployment in international, national, customer, and local networks. Additionally, the United States Office of Management and Budget has mandated that IPv6 deployment in government network backbones occur by 2008. This paper explores the readiness of the Sandia National Laboratories network backbone to support IPv6, the issues that must be addressed before a deployment begins, and recommends the next steps to take to comply with government mandates. The paper describes a joint work effort of the Sandia National Laboratories ASC WAN project team and members of the System Analysis & Trouble Resolution, the Communication & Network Systems, and the Network System Design & Implementation Departments.

  16. A hydrologic network supporting spatially referenced regression modeling in the Chesapeake Bay watershed

    USGS Publications Warehouse

    Brakebill, J.W.; Preston, S.D.

    2003-01-01

    The U.S. Geological Survey has developed a methodology for statistically relating nutrient sources and land-surface characteristics to nutrient loads of streams. The methodology is referred to as SPAtially Referenced Regressions On Watershed attributes (SPARROW), and relates measured stream nutrient loads to nutrient sources using nonlinear statistical regression models. A spatially detailed digital hydrologic network of stream reaches, stream-reach characteristics such as mean streamflow, water velocity, reach length, and travel time, and their associated watersheds supports the regression models. This network serves as the primary framework for spatially referencing potential nutrient source information such as atmospheric deposition, septic systems, point-sources, land use, land cover, and agricultural sources and land-surface characteristics such as land use, land cover, average-annual precipitation and temperature, slope, and soil permeability. In the Chesapeake Bay watershed that covers parts of Delaware, Maryland, Pennsylvania, New York, Virginia, West Virginia, and Washington D.C., SPARROW was used to generate models estimating loads of total nitrogen and total phosphorus representing 1987 and 1992 land-surface conditions. The 1987 models used a hydrologic network derived from an enhanced version of the U.S. Environmental Protection Agency's digital River Reach File, and coarse-resolution Digital Elevation Models (DEMs). A new hydrologic network was created to support the 1992 models by generating stream reaches representing surface-water pathways defined by flow direction and flow accumulation algorithms from higher resolution DEMs. On a reach-by-reach basis, stream reach characteristics essential to the modeling were transferred to the newly generated pathways or reaches from the enhanced River Reach File used to support the 1987 models. To complete the new network, watersheds for each reach were generated using the direction of surface-water flow derived from the DEMs. This network improves upon existing digital stream data by increasing the level of spatial detail and providing consistency between the reach locations and topography. The hydrologic network also aids in illustrating the spatial patterns of predicted nutrient loads and sources contributed locally to each stream, and the percentages of nutrient load that reach Chesapeake Bay.

  17. Testing the Factorial Invariance of the English and Filipino Versions of the Inventory of School Motivation with Bilingual Students in the Philippines

    ERIC Educational Resources Information Center

    Ganotice, Fraide A., Jr.; Bernardo, Allan B. I.; King, Ronnel B.

    2012-01-01

    The study explored the invariance of Filipino and English versions of the Inventory of School Motivation (ISM) for Filipino-English bilingual students. There was invariance in the factor structure and factor loadings across the two language versions. Between-network construct validation showed consistent associations between ISM-mastery goals and…

  18. Employing Tropospheric Numerical Weather Prediction Model for High-Precision GNSS Positioning

    NASA Astrophysics Data System (ADS)

    Alves, Daniele; Gouveia, Tayna; Abreu, Pedro; Magário, Jackes

    2014-05-01

    In recent years the need for high-accuracy positioning has been increasing, and spatial technologies are being widely used to meet it. GNSS (Global Navigation Satellite System) has revolutionized geodetic positioning. Among the existing methods, one can emphasize Precise Point Positioning (PPP) and network-based positioning. To obtain high accuracy with these methods, especially in real time, appropriate atmospheric modeling (ionosphere and troposphere) is indispensable. For the troposphere there are empirical models (for example Saastamoinen and Hopfield), but when highly accurate results (errors of a few centimeters) are required, these models may not be appropriate for the Brazilian reality. To minimize this limitation, NWP (Numerical Weather Prediction) models can be used. In Brazil, CPTEC/INPE (Center for Weather Prediction and Climate Studies / Brazilian Institute for Space Research) provides a regional NWP model, currently used to produce Zenithal Tropospheric Delay (ZTD) predictions (http://satelite.cptec.inpe.br/zenital/). The current version, called the eta15km model, has a spatial resolution of 15 km and a temporal resolution of 3 hours. The main goal of this paper is to carry out experiments and analyses concerning the use of the tropospheric NWP model (eta15km model) in PPP and network-based positioning. For PPP, data from dozens of stations over the Brazilian territory, including the Amazon forest, were used. The results obtained with the NWP model were compared with the Hopfield model, and the NWP model presented the best results in all experiments. For network-based positioning, data from the GNSS/SP Network in São Paulo State, Brazil, were used. This network presents the best configuration in the country for this kind of positioning and is currently composed of twenty stations (http://www.fct.unesp.br/#!/pesquisa/grupos-de-estudo-e-pesquisa/gege//gnss-sp-network2789/). The results obtained employing the NWP model were again compared with the Hopfield model and were very encouraging. The theoretical concepts, experiments, results and analyses are presented in this paper.
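
    For reference, the empirical Hopfield model used here as the comparison baseline can be evaluated from surface meteorological data; the sketch below uses the textbook form of the Hopfield zenith delay (coefficients as commonly tabulated, taken as an assumption rather than from this record).

        # Minimal sketch of the empirical Hopfield zenith tropospheric delay
        # (P in hPa, T in kelvin, e = water-vapour partial pressure in hPa).
        def hopfield_ztd(P, T, e):
            Nd = 77.64 * P / T                            # dry surface refractivity
            Nw = -12.96 * e / T + 3.718e5 * e / T**2      # wet surface refractivity
            hd = 40136.0 + 148.72 * (T - 273.16)          # effective dry-layer height [m]
            hw = 11000.0                                  # effective wet-layer height [m]
            return 1e-6 / 5.0 * (Nd * hd + Nw * hw)       # zenith total delay [m]

        print(round(hopfield_ztd(P=1013.25, T=293.15, e=11.0), 3))  # roughly 2.4 m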

  19. A Compact Synchronous Cellular Model of Nonlinear Calcium Dynamics: Simulation and FPGA Synthesis Results.

    PubMed

    Soleimani, Hamid; Drakakis, Emmanuel M

    2017-06-01

    Recent studies have demonstrated that calcium is a widespread intracellular ion that controls a wide range of temporal dynamics in the mammalian body. The simulation and validation of such studies using experimental data would benefit from a fast large scale simulation and modelling tool. This paper presents a compact and fully reconfigurable cellular calcium model capable of mimicking Hopf bifurcation phenomenon and various nonlinear responses of the biological calcium dynamics. The proposed cellular model is synthesized on a digital platform for a single unit and a network model. Hardware synthesis, physical implementation on FPGA, and theoretical analysis confirm that the proposed cellular model can mimic the biological calcium behaviors with considerably low hardware overhead. The approach has the potential to speed up large-scale simulations of slow intracellular dynamics by sharing more cellular units in real-time. To this end, various networks constructed by pipelining 10 k to 40 k cellular calcium units are compared with an equivalent simulation run on a standard PC workstation. Results show that the cellular hardware model is, on average, 83 times faster than the CPU version.

  20. Analysis of multigrid methods on massively parallel computers: Architectural implications

    NASA Technical Reports Server (NTRS)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages (up to 1000 words), or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, Brian; Brightwell, Ronald B.; Grant, Ryan

    This report presents a specification for the Portals 4 network programming interface. Portals 4 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4 is well suited to massively parallel processing and embedded systems. Portals 4 represents an adaption of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.

  2. Gas Chromatography Data Classification Based on Complex Coefficients of an Autoregressive Model

    DOE PAGES

    Zhao, Weixiang; Morgan, Joshua T.; Davis, Cristina E.

    2008-01-01

    This paper introduces autoregressive (AR) modeling as a novel method to classify outputs from gas chromatography (GC). The inverse Fourier transformation was applied to the original sensor data, and an AR model was then fitted to the transformed data to generate complex AR model coefficients. This series of coefficients effectively contains a compressed version of all of the information in the original GC signal output. We applied this method to chromatograms resulting from proliferating bacteria species grown in culture. Three types of neural networks were used to classify the AR coefficients: a backward propagating neural network (BPNN), a radial basis function-principal component analysis (RBF-PCA) approach, and a radial basis function-partial least squares regression (RBF-PLSR) approach. This exploratory study demonstrates the feasibility of using complex root coefficient patterns to distinguish various classes of experimental data, such as those from the different bacteria species. This recognition approach also proved to be robust and potentially useful for freeing us from time alignment of GC signals.
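
    A hedged sketch of the feature-extraction step described above is given below: the chromatogram is inverse-Fourier-transformed and a low-order AR model is fitted to the resulting complex sequence by least squares, with the complex coefficients serving as the classifier input. The AR order, the toy trace and the least-squares fitting route are illustrative assumptions.

        # Illustrative sketch: complex AR coefficients of an inverse-FFT'd
        # chromatogram, usable as a compact feature vector for a classifier.
        import numpy as np

        def ar_features(signal, order=6):
            z = np.fft.ifft(signal)                    # complex time-domain sequence
            # Regression z[t] = a1*z[t-1] + ... + a_p*z[t-p], solved by least squares.
            X = np.column_stack([z[order - k - 1 : len(z) - k - 1] for k in range(order)])
            y = z[order:]
            coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
            return coeffs                              # p complex AR coefficients

        # Toy usage: two noisy Gaussian "peaks" standing in for a GC trace.
        t = np.linspace(0.0, 1.0, 512)
        trace = np.exp(-((t - 0.3) / 0.02) ** 2) + 0.6 * np.exp(-((t - 0.7) / 0.03) ** 2)
        print(np.round(ar_features(trace), 3))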

  3. A biological network-based regularized artificial neural network model for robust phenotype prediction from gene expression data.

    PubMed

    Kang, Tianyu; Ding, Wei; Zhang, Luoyan; Ziemek, Daniel; Zarringhalam, Kourosh

    2017-12-19

    Stratification of patient subpopulations that respond favorably to treatment or experience an adverse reaction is an essential step toward the development of new personalized therapies and diagnostics. It is currently feasible to generate omic-scale biological measurements for all patients in a study, providing an opportunity for machine learning models to identify molecular markers for disease diagnosis and progression. However, the high variability of genetic background in human populations hampers the reproducibility of omic-scale markers. In this paper, we develop a biological network-based regularized artificial neural network model for prediction of phenotype from transcriptomic measurements in clinical trials. To improve model sparsity and the overall reproducibility of the model, we incorporate regularization for simultaneous shrinkage of gene sets based on active upstream regulatory mechanisms into the model. We benchmark our method against various regression, support vector machine and artificial neural network models and demonstrate the ability of our method to predict clinical outcomes using clinical trial data on acute rejection in kidney transplantation and response to Infliximab in ulcerative colitis. We show that integrating prior biological knowledge into the classification, as developed in this paper, significantly improves the robustness and generalizability of predictions to independent datasets. We provide Java code for our algorithm along with a parsed version of the STRING database. In summary, we present a method for prediction of clinical phenotypes using baseline genome-wide expression data that makes use of prior biological knowledge on gene-regulatory interactions in order to increase the robustness and reproducibility of omic-scale markers. The integrated group-wise regularization method increases the interpretability of biological signatures and gives stable performance estimates across independent test sets.

  4. Quicksilver: Fast predictive image registration - A deep learning approach.

    PubMed

    Yang, Xiao; Kwitt, Roland; Styner, Martin; Niethammer, Marc

    2017-09-01

    This paper introduces Quicksilver, a fast deformable image registration method. Quicksilver registration for image-pairs works by patch-wise prediction of a deformation model based directly on image appearance. A deep encoder-decoder network is used as the prediction model. While the prediction strategy is general, we focus on predictions for the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model. Specifically, we predict the momentum-parameterization of LDDMM, which facilitates a patch-wise prediction strategy while maintaining the theoretical properties of LDDMM, such as guaranteed diffeomorphic mappings for sufficiently strong regularization. We also provide a probabilistic version of our prediction network which can be sampled during the testing time to calculate uncertainties in the predicted deformations. Finally, we introduce a new correction network which greatly increases the prediction accuracy of an already existing prediction network. We show experimental results for uni-modal atlas-to-image as well as uni-/multi-modal image-to-image registrations. These experiments demonstrate that our method accurately predicts registrations obtained by numerical optimization, is very fast, achieves state-of-the-art registration results on four standard validation datasets, and can jointly learn an image similarity measure. Quicksilver is freely available as an open-source software. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Application of artificial neural network in precise prediction of cement elements percentages based on the neutron activation analysis

    NASA Astrophysics Data System (ADS)

    Eftekhari Zadeh, E.; Feghhi, S. A. H.; Roshani, G. H.; Rezaei, A.

    2016-05-01

    Variation of the neutron energy spectrum in the target sample during the activation process, together with peak overlap caused by the Compton effect of gamma radiation emitted from the activated elements, changes the background and yields a complex gamma spectrum during the measurement process, which ultimately makes quantitative analysis problematic. Since there is no simple analytical correlation between peak counts and element concentrations, an artificial neural network for analyzing spectra can be a helpful tool. This work describes a study on the application of a neural network to determine the percentages of cement elements (mainly Ca, Si, Al, and Fe) using the neutron capture delayed gamma-ray spectra of the substance emitted by the activated nuclei as patterns, which were simulated via the Monte Carlo N-particle transport code, version 2.7. A Radial Basis Function (RBF) network was developed with four specific peaks related to Ca, Si, Al and Fe extracted as inputs. The proposed RBF model is developed and trained with MATLAB 7.8 software. To obtain the optimal RBF model, several structures were constructed and tested. The comparison between simulated and predicted values using the proposed RBF model shows good agreement between them.
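
    A minimal sketch of an RBF regression network of the kind described above: Gaussian hidden units centred on a subset of training samples and output weights solved by linear least squares. The four "peak count" inputs and the element-percentage targets below are synthetic placeholders, not data from the study.

    ```python
    # Toy RBF network: Gaussian hidden layer + linear output weights.
    import numpy as np

    def rbf_design(X, centers, width):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))

    rng = np.random.default_rng(0)
    X_train = rng.uniform(0, 1, size=(50, 4))      # 4 peak counts (Ca, Si, Al, Fe)
    true_W = rng.uniform(0, 1, size=(4, 4))
    Y_train = X_train @ true_W                     # toy element-percentage targets

    centers = X_train[:10]                         # hidden-unit centres
    Phi = rbf_design(X_train, centers, width=0.3)
    W_out, *_ = np.linalg.lstsq(Phi, Y_train, rcond=None)

    X_test = rng.uniform(0, 1, size=(5, 4))
    Y_pred = rbf_design(X_test, centers, width=0.3) @ W_out
    print(Y_pred.shape)
    ```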

  6. Quantization and training of object detection networks with low-precision weights and activations

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie

    2018-01-01

    As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of weights and activations of each layer are first directly estimated using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of weights and activations. As the most computationally expensive convolutions can be replaced by effective fixed-point operations, the proposed method can drastically reduce computation complexity and memory footprint. Applied to the tiny you-only-look-once (YOLO) and YOLO architectures, the proposed method achieves accuracy comparable to that of their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision (mAP) of 62.6% and 63.9%, respectively, on the Pascal Visual Object Classes 2012 test dataset; the mAP of the 32-bit full-precision baseline model is 64.0%.
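
    A hedged sketch of per-layer uniform quantization: clip weights to a range derived from their estimated distribution, then round to 2^bits levels. The mean +/- 3 sigma clipping rule is an illustrative stand-in for the paper's piecewise-Gaussian fit, not its actual procedure.

    ```python
    # Uniform k-bit quantization of a weight tensor based on its estimated distribution.
    import numpy as np

    def quantize(weights, bits=4, n_sigma=3.0):
        mu, sigma = weights.mean(), weights.std()
        lo, hi = mu - n_sigma * sigma, mu + n_sigma * sigma   # clipping range from the fit
        levels = 2 ** bits - 1
        step = (hi - lo) / levels
        q = np.round((np.clip(weights, lo, hi) - lo) / step)
        return q * step + lo                                  # de-quantized, 2^bits distinct values

    w = np.random.randn(3, 3, 64, 64).astype(np.float32)      # toy conv-layer weight tensor
    w4 = quantize(w, bits=4)
    print(np.unique(w4).size, np.abs(w - w4).mean())          # number of levels, mean error
    ```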

  7. An optimized inverse modelling method for determining the location and strength of a point source releasing airborne material in urban environment

    NASA Astrophysics Data System (ADS)

    Efthimiou, George C.; Kovalets, Ivan V.; Venetsanos, Alexandros; Andronopoulos, Spyros; Argyropoulos, Christos D.; Kakosimos, Konstantinos

    2017-12-01

    An improved inverse modelling method to estimate the location and the emission rate of an unknown stationary point source of passive atmospheric pollutant in a complex urban geometry is incorporated in the Computational Fluid Dynamics code ADREA-HF and presented in this paper. The key improvement in relation to the previous version of the method lies in a two-step segregated approach. At first, only the source coordinates are analysed using a correlation function of measured and calculated concentrations. In the second step, the source rate is identified by minimizing a quadratic cost function. The validation of the new algorithm is performed by simulating the MUST wind tunnel experiment. A grid-independent flow field solution is first attained by applying successive refinements of the computational mesh, and the final wind flow is validated against the measurements quantitatively and qualitatively. The old and new versions of the source term estimation method are tested on a coarse and a fine mesh. The new method appeared to be more robust, giving satisfactory estimations of source location and emission rate on both grids. The performance of the old version of the method varied between failure and success and appeared to be sensitive to the selection of the model error magnitude that needs to be inserted in its quadratic cost function. The performance of the method also depends on the number and the placement of sensors constituting the measurement network. Of significant interest for the practical application of the method in urban settings is the number of concentration sensors required to obtain a "satisfactory" determination of the source. The probability of obtaining a satisfactory solution - according to specified criteria - by the new method has been assessed as a function of the number of sensors that constitute the measurement network.
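
    A sketch of the two-step idea described above: (1) rank candidate source locations by the correlation between measured and model-predicted concentrations, and (2) for the best location obtain the emission rate in closed form by minimizing a quadratic cost. The Gaussian dispersion kernel below is a hypothetical stand-in for a CFD-computed source-receptor relationship.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    sensors = rng.uniform(0, 100, size=(8, 2))           # sensor coordinates
    candidates = rng.uniform(0, 100, size=(200, 2))       # candidate source locations

    def unit_footprint(src, receptors):
        """Concentration at receptors per unit emission rate (toy kernel)."""
        d = np.linalg.norm(receptors - src, axis=1)
        return np.exp(-d / 20.0)

    true_src, true_q = np.array([42.0, 57.0]), 5.0
    measured = true_q * unit_footprint(true_src, sensors) + 0.01 * rng.standard_normal(8)

    # Step 1: pick the candidate maximizing the correlation coefficient.
    corr = [np.corrcoef(measured, unit_footprint(c, sensors))[0, 1] for c in candidates]
    best = candidates[int(np.argmax(corr))]

    # Step 2: emission rate minimizing ||measured - q * footprint||^2 (closed form).
    f = unit_footprint(best, sensors)
    q_hat = float(f @ measured / (f @ f))
    print(best, q_hat)
    ```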

  8. Retrieval and Validation of Zenith and Slant Path Delays From the Irish GPS Network

    NASA Astrophysics Data System (ADS)

    Hanafin, Jennifer; Jennings, S. Gerard; O'Dowd, Colin; McGrath, Ray; Whelan, Eoin

    2010-05-01

    Retrieval of atmospheric integrated water vapour (IWV) from ground-based GPS receivers and provision of this data product for meteorological applications has been the focus of a number of Europe-wide networks and projects, most recently the EUMETNET GPS water vapour programme. The results presented here are from a project to provide such information about the state of the atmosphere around Ireland for climate monitoring and improved numerical weather prediction. Two geodetic reference GPS receivers have been deployed at Valentia Observatory in Co. Kerry and Mace Head Atmospheric Research Station in Co. Galway, Ireland. These two receivers supplement the existing Ordnance Survey Ireland active network of 17 permanent ground-based receivers. A system to retrieve column-integrated atmospheric water vapour from the data provided by this network has been developed, based on the GPS Analysis at MIT (GAMIT) software package. The data quality of the zenith retrievals has been assessed using co-located radiosondes at the Valentia site and observations from a microwave profiling radiometer at the Mace Head site. Validation of the slant path retrievals requires a numerical weather prediction model and HIRLAM (High-Resolution Limited Area Model) version 7.2, the current operational forecast model in use at Met Éireann for the region, has been used for this validation work. Results from the data processing and comparisons with the independent observations and model will be presented.

  9. Version 2 of the IASI NH3 neural network retrieval algorithm: near-real-time and reanalysed datasets

    NASA Astrophysics Data System (ADS)

    Van Damme, Martin; Whitburn, Simon; Clarisse, Lieven; Clerbaux, Cathy; Hurtmans, Daniel; Coheur, Pierre-François

    2017-12-01

    Recently, Whitburn et al. (2016) presented a neural-network-based algorithm for retrieving atmospheric ammonia (NH3) columns from Infrared Atmospheric Sounding Interferometer (IASI) satellite observations. In the past year, several improvements have been introduced, and the resulting new baseline version, Artificial Neural Network for IASI (ANNI)-NH3-v2.1, is documented here. One of the main changes to the algorithm is that separate neural networks were trained for land and sea observations, resulting in a better training performance for both groups. By reducing and transforming the input parameter space, performance is now also better for observations associated with favourable sounding conditions (i.e. enhanced thermal contrasts). Other changes relate to the introduction of a bias correction over land and sea and the treatment of the satellite zenith angle. In addition to these algorithmic changes, new recommendations for post-filtering the data and for averaging data in time or space are formulated. We also introduce a second dataset (ANNI-NH3-v2.1R-I) which relies on ERA-Interim ECMWF meteorological input data, along with surface temperature retrieved from a dedicated network, rather than the operationally provided Eumetsat IASI Level 2 (L2) data used for the standard near-real-time version. The need for such a dataset emerged after a series of sharp discontinuities were identified in the NH3 time series, which could be traced back to incremental changes in the IASI L2 algorithms for temperature and clouds. The reanalysed dataset is coherent in time and can therefore be used to study trends. Furthermore, both datasets agree reasonably well in the mean on recent data, after the date when the IASI meteorological L2 version 6 became operational (30 September 2014).

  10. An Examination of the Design, Development, and Implementation of an Internet Protocol Version 6 Network: The ADTRAN Inc. Case Study

    ERIC Educational Resources Information Center

    Perigo, Levi

    2013-01-01

    In this dissertation, the author examined the capabilities of Internet Protocol version 6 (IPv6) in regard to replacing Internet Protocol version 4 (IPv4) as the internetworking technology for Medium-sized Businesses (MBs) in the Information Systems (IS) field. Transition to IPv6 is inevitable, and, thus, organizations are adopting this protocol…

  11. Secure and Cost-Effective Distributed Aggregation for Mobile Sensor Networks

    PubMed Central

    Guo, Kehua; Zhang, Ping; Ma, Jianhua

    2016-01-01

    Secure data aggregation (SDA) schemes are widely used in distributed applications, such as mobile sensor networks, to reduce communication cost, prolong the network life cycle and provide security. However, most SDA schemes are only suited for a single type of statistic (i.e., summation-based or comparison-based statistics) and are not applicable to obtaining multiple statistical results. Most SDA schemes are also inefficient for dynamic networks. This paper presents multi-functional secure data aggregation (MFSDA), in which a mapping step and a coding step are introduced to provide value preservation and order preservation and, later, to enable support for arbitrary statistics within the same query. MFSDA is suited for dynamic networks because active nodes can be counted directly from the aggregation data. The proposed scheme is tolerant to many types of attacks. The network load of the proposed scheme is balanced, and no significant bottleneck exists. MFSDA includes two versions: MFSDA-I and MFSDA-II. The first can obtain accurate results, while the second is a more generalized version that can significantly reduce network traffic at the expense of a small loss in accuracy. PMID:27120599

  12. Neural-network classifiers for automatic real-world aerial image recognition

    NASA Astrophysics Data System (ADS)

    Greenberg, Shlomo; Guterman, Hugo

    1996-08-01

    We describe the application of the multilayer perceptron (MLP) network and a version of the adaptive resonance theory version 2-A (ART 2-A) network to the problem of automatic aerial image recognition (AAIR). The classification of aerial images, independent of their positions and orientations, is required for automatic tracking and target recognition. Invariance is achieved by the use of different invariant feature spaces in combination with supervised and unsupervised neural networks. The performance of neural-network-based classifiers in conjunction with several types of invariant AAIR global features, such as the Fourier-transform space, Zernike moments, central moments, and polar transforms, are examined. The advantages of this approach are discussed. The performance of the MLP network is compared with that of a classical correlator. The MLP neural-network correlator outperformed the binary phase-only filter (BPOF) correlator. It was found that the ART 2-A distinguished itself with its speed and its low number of required training vectors. However, only the MLP classifier was able to deal with a combination of shift and rotation geometric distortions.

  13. Neural-network classifiers for automatic real-world aerial image recognition.

    PubMed

    Greenberg, S; Guterman, H

    1996-08-10

    We describe the application of the multilayer perceptron (MLP) network and a version of the adaptive resonance theory version 2-A (ART 2-A) network to the problem of automatic aerial image recognition (AAIR). The classification of aerial images, independent of their positions and orientations, is required for automatic tracking and target recognition. Invariance is achieved by the use of different invariant feature spaces in combination with supervised and unsupervised neural networks. The performance of neural-network-based classifiers in conjunction with several types of invariant AAIR global features, such as the Fourier-transform space, Zernike moments, central moments, and polar transforms, are examined. The advantages of this approach are discussed. The performance of the MLP network is compared with that of a classical correlator. The MLP neural-network correlator outperformed the binary phase-only filter (BPOF) correlator. It was found that the ART 2-A distinguished itself with its speed and its low number of required training vectors. However, only the MLP classifier was able to deal with a combination of shift and rotation geometric distortions.

  14. Solving gap metabolites and blocked reactions in genome-scale models: application to the metabolic network of Blattabacterium cuenoti.

    PubMed

    Ponce-de-León, Miguel; Montero, Francisco; Peretó, Juli

    2013-10-31

    Metabolic reconstruction is the computational process that aims to elucidate the network of metabolites interconnected through reactions catalyzed by activities assigned to one or more genes. Reconstructed models may contain inconsistencies that appear as gap metabolites and blocked reactions. Although automatic methods for solving this problem have been previously developed, there are many situations where manual curation is still needed. We introduce a general definition of gap metabolite that allows its detection in a straightforward manner. Moreover, a method for the detection of Unconnected Modules, defined as isolated sets of blocked reactions connected through gap metabolites, is proposed. The method has been successfully applied to the curation of iCG238, the genome-scale metabolic model for the bacterium Blattabacterium cuenoti, an obligate endosymbiont of cockroaches. We found the proposed approach to be a valuable tool for the curation of genome-scale metabolic models. The outcome of its application to the genome-scale model B. cuenoti iCG238 is a more accurate model version named B. cuenoti iMP240.

  15. Author Correction: The physics of spreading processes in multilayer networks

    NASA Astrophysics Data System (ADS)

    De Domenico, Manlio; Granell, Clara; Porter, Mason A.; Arenas, Alex

    2018-05-01

    In the version of this Progress Article originally published, the left and right panels of Fig. 3, clarifying the details indicated within the centre panel, were mistakenly interchanged. This has now been corrected in all versions of the Progress Article.

  16. Is First-Order Vector Autoregressive Model Optimal for fMRI Data?

    PubMed

    Ting, Chee-Ming; Seghouane, Abd-Krim; Khalid, Muhammad Usman; Salleh, Sh-Hussain

    2015-09-01

    We consider the problem of selecting the optimal orders of vector autoregressive (VAR) models for fMRI data. Many previous studies used model order of one and ignored that it may vary considerably across data sets depending on different data dimensions, subjects, tasks, and experimental designs. In addition, the classical information criteria (IC) used (e.g., the Akaike IC (AIC)) are biased and inappropriate for the high-dimensional fMRI data typically with a small sample size. We examine the mixed results on the optimal VAR orders for fMRI, especially the validity of the order-one hypothesis, by a comprehensive evaluation using different model selection criteria over three typical data types--a resting state, an event-related design, and a block design data set--with varying time series dimensions obtained from distinct functional brain networks. We use a more balanced criterion, Kullback's IC (KIC) based on Kullback's symmetric divergence combining two directed divergences. We also consider the bias-corrected versions (AICc and KICc) to improve VAR model selection in small samples. Simulation results show better small-sample selection performance of the proposed criteria over the classical ones. Both bias-corrected ICs provide more accurate and consistent model order choices than their biased counterparts, which suffer from overfitting, with KICc performing the best. Results on real data show that orders greater than one were selected by all criteria across all data sets for the small to moderate dimensions, particularly from small, specific networks such as the resting-state default mode network and the task-related motor networks, whereas low orders close to one but not necessarily one were chosen for the large dimensions of full-brain networks.
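
    A sketch of the order-selection workflow discussed above: fit VAR(p) by least squares for several candidate orders and score each with an information criterion. The AIC form below is a common multivariate version; the paper's KIC/KICc penalties (and the small-sample AICc correction) would be substituted at the marked line.

    ```python
    import numpy as np

    def fit_var(X, p):
        """X: (T, K) multivariate series. Returns the residual covariance of a VAR(p)."""
        T, K = X.shape
        Z = np.hstack([X[p - i - 1:T - i - 1] for i in range(p)])   # lagged regressors
        Z = np.hstack([np.ones((T - p, 1)), Z])                     # intercept column
        Y = X[p:]
        B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
        E = Y - Z @ B
        return E.T @ E / (T - p)

    def criterion(X, p):
        T, K = X.shape
        sigma = fit_var(X, p)
        n_params = K * (K * p + 1)
        # Swap this penalty for the KIC/KICc/AICc penalties to reproduce the comparison.
        return (T - p) * np.log(np.linalg.det(sigma)) + 2 * n_params

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 3))                 # toy stand-in for fMRI time series
    scores = {p: criterion(X, p) for p in range(1, 6)}
    print(min(scores, key=scores.get))                # selected model order
    ```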

  17. AERONET-Based Nonspherical Dust Optical Models and Effects on the VIIRS Deep Blue/SOAR Over Water Aerosol Product

    NASA Astrophysics Data System (ADS)

    Lee, Jaehwa; Hsu, N. Christina; Sayer, Andrew M.; Bettenhausen, Corey; Yang, Ping

    2017-10-01

    Aerosol Robotic Network (AERONET)-based nonspherical dust optical models are developed and applied to the Satellite Ocean Aerosol Retrieval (SOAR) algorithm as part of the Version 1 Visible Infrared Imaging Radiometer Suite (VIIRS) NASA "Deep Blue" aerosol data product suite. The optical models are created using Version 2 AERONET inversion data at six distinct sites influenced frequently by dust aerosols from different source regions. The same spheroid shape distribution as used in the AERONET inversion algorithm is assumed to account for the nonspherical characteristics of mineral dust, which ensures the consistency between the bulk scattering properties of the developed optical models and the AERONET-retrieved microphysical and optical properties. For the Version 1 SOAR aerosol product, the dust optical model representative of the Capo Verde site is used, considering the strong influence of Saharan dust over the global ocean in terms of amount and spatial coverage. Comparisons of the VIIRS-retrieved aerosol optical properties against AERONET direct-Sun observations at five island/coastal sites suggest that the use of nonspherical dust optical models significantly improves the retrievals of aerosol optical depth (AOD) and Ångström exponent by mitigating the well-known artifact of scattering angle dependence of the variables, which is observed when spherical dust is incorrectly assumed. Removing these artifacts yields a more natural spatial pattern of AOD along the transport path of Saharan dust to the Atlantic Ocean; that is, AOD decreases with increasing distance transported, whereas the spherical assumption leads to a strong wave pattern due to the spurious scattering angle dependence of AOD.

  18. Distributed Visualization Project

    NASA Technical Reports Server (NTRS)

    Craig, Douglas; Conroy, Michael; Kickbusch, Tracey; Mazone, Rebecca

    2016-01-01

    Distributed Visualization allows anyone, anywhere to see any simulation at any time. Development focuses on algorithms, software, data formats, data systems and processes to enable sharing simulation-based information across temporal and spatial boundaries without requiring stakeholders to possess highly-specialized and very expensive display systems. It also introduces abstraction between the native and shared data, which allows teams to share results without giving away proprietary or sensitive data. The initial implementation of this capability is the Distributed Observer Network (DON) version 3.1. DON 3.1 is available for public release in the NASA Software Store (https://software.nasa.gov/software/KSC-13775) and works with version 3.0 of the Model Process Control specification (an XML Simulation Data Representation and Communication Language) to display complex graphical information and associated Meta-Data.

  19. Rule-based modeling with Virtual Cell

    PubMed Central

    Schaff, James C.; Vasilescu, Dan; Moraru, Ion I.; Loew, Leslie M.; Blinov, Michael L.

    2016-01-01

    Summary: Rule-based modeling is invaluable when the number of possible species and reactions in a model become too large to allow convenient manual specification. The popular rule-based software tools BioNetGen and NFSim provide powerful modeling and simulation capabilities at the cost of learning a complex scripting language which is used to specify these models. Here, we introduce a modeling tool that combines new graphical rule-based model specification with existing simulation engines in a seamless way within the familiar Virtual Cell (VCell) modeling environment. A mathematical model can be built integrating explicit reaction networks with reaction rules. In addition to offering a large choice of ODE and stochastic solvers, a model can be simulated using a network free approach through the NFSim simulation engine. Availability and implementation: Available as VCell (versions 6.0 and later) at the Virtual Cell web site (http://vcell.org/). The application installs and runs on all major platforms and does not require registration for use on the user’s computer. Tutorials are available at the Virtual Cell website and Help is provided within the software. Source code is available at Sourceforge. Contact: vcell_support@uchc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27497444

  20. Microphone Array Phased Processing System (MAPPS): Version 4.0 Manual

    NASA Technical Reports Server (NTRS)

    Watts, Michael E.; Mosher, Marianne; Barnes, Michael; Bardina, Jorge

    1999-01-01

    A processing system has been developed to meet increasing demands for detailed noise measurement of individual model components. The Microphone Array Phased Processing System (MAPPS) uses graphical user interfaces to control all aspects of data processing and visualization. The system uses networked parallel computers to provide noise maps at selected frequencies in a near real-time testing environment. The system has been successfully used in the NASA Ames 7- by 10-Foot Wind Tunnel.

  1. Assimilating Tropospheric Airborne Meteorological Data Reporting (TAMDAR) Observations and the Relative Value of Other Observation Types

    DTIC Science & Technology

    2014-08-01

    Using real-time weather data from an unmanned aircraft system to support the advanced research version of the Weather Research and Forecasting model... system that is used to transmit some MDCRS observations, the Aircraft Communications Addressing and Reporting System (ACARS). A new network of aircraft... Technical Analysis and Applications Center, and AirDat LLC developed a modified TAMDAR sensor referred to as TAMDAR-Unmanned Aerial System (TAMDAR-U) for

  2. From quiescence to proliferation: Cdk oscillations drive the mammalian cell cycle

    PubMed Central

    Gérard, Claude; Goldbeter, Albert

    2012-01-01

    We recently proposed a detailed model describing the dynamics of the network of cyclin-dependent kinases (Cdks) driving the mammalian cell cycle (Gérard and Goldbeter, 2009). The model contains four modules, each centered around one cyclin/Cdk complex. Cyclin D/Cdk4–6 and cyclin E/Cdk2 promote progression in G1 and elicit the G1/S transition, respectively; cyclin A/Cdk2 ensures progression in S and the transition S/G2, while the activity of cyclin B/Cdk1 brings about the G2/M transition. This model shows that in the presence of sufficient amounts of growth factor the Cdk network is capable of temporal self-organization in the form of sustained oscillations, which correspond to the ordered, sequential activation of the various cyclin/Cdk complexes that control the successive phases of the cell cycle. The results suggest that the switch from cellular quiescence to cell proliferation corresponds to the transition from a stable steady state to sustained oscillations in the Cdk network. The transition depends on a finely tuned balance between factors that promote or hinder progression in the cell cycle. We show that the transition from quiescence to proliferation can occur in multiple ways that alter this balance. By resorting to bifurcation diagrams, we analyze the mechanism of oscillations in the Cdk network. Finally, we show that the complexity of the detailed model can be greatly reduced, without losing its key dynamical properties, by considering a skeleton model for the Cdk network. Using such a skeleton model for the mammalian cell cycle we show that positive feedback (PF) loops enhance the amplitude and the robustness of Cdk oscillations with respect to molecular noise. We compare the relative merits of the detailed and skeleton versions of the model for the Cdk network driving the mammalian cell cycle. PMID:23130001

  3. Development of an Aeromedical Scientific Information System for Aviation Safety

    DTIC Science & Technology

    2008-01-01

    mathematics, engineering, computer hardware, software, and networking, was assembled to glean the most knowledge from the complicated aeromedical... 9, SPlus Enterprise Developer 8, and Insightful Miner version 7. Process flow charts were done with SmartDraw Suite Edition version 7. Static and

  4. MPLNET V3 Cloud and Planetary Boundary Layer Detection

    NASA Technical Reports Server (NTRS)

    Lewis, Jasper R.; Welton, Ellsworth J.; Campbell, James R.; Haftings, Phillip C.

    2016-01-01

    The NASA Micropulse Lidar Network Version 3 algorithms for planetary boundary layer and cloud detection are described and differences relative to the previous Version 2 algorithms are highlighted. A year of data from the Goddard Space Flight Center site in Greenbelt, MD consisting of diurnal and seasonal trends is used to demonstrate the results. Both the planetary boundary layer and cloud algorithms show significant improvement of the previous version.

  5. Simulation of an SEIR infectious disease model on the dynamic contact network of conference attendees

    PubMed Central

    2011-01-01

    Background The spread of infectious diseases crucially depends on the pattern of contacts between individuals. Knowledge of these patterns is thus essential to inform models and computational efforts. However, there are few empirical studies available that provide estimates of the number and duration of contacts between social groups. Moreover, their space and time resolutions are limited, so that data are not explicit at the person-to-person level, and the dynamic nature of the contacts is disregarded. In this study, we aimed to assess the role of data-driven dynamic contact patterns between individuals, and in particular of their temporal aspects, in shaping the spread of a simulated epidemic in the population. Methods We considered high-resolution data about face-to-face interactions between the attendees at a conference, obtained from the deployment of an infrastructure based on radiofrequency identification (RFID) devices that assessed mutual face-to-face proximity. The spread of epidemics along these interactions was simulated using an SEIR (Susceptible, Exposed, Infectious, Recovered) model, using both the dynamic network of contacts defined by the collected data, and two aggregated versions of such networks, to assess the role of the data temporal aspects. Results We show that, on the timescales considered, an aggregated network taking into account the daily duration of contacts is a good approximation to the full resolution network, whereas a homogeneous representation that retains only the topology of the contact network fails to reproduce the size of the epidemic. Conclusions These results have important implications for understanding the level of detail needed to correctly inform computational models for the study and management of real epidemics. Please see related article BMC Medicine, 2011, 9:88 PMID:21771290
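
    A minimal sketch of a stochastic SEIR simulation driven by a time-stamped contact list (t, i, j), comparable in spirit to running the model on the full-resolution dynamic network. The contact list and rate parameters below are synthetic placeholders, not the conference data.

    ```python
    import random

    def seir_on_contacts(contacts, n_nodes, beta=0.3, sigma=0.2, gamma=0.1, seed_node=0):
        state = {i: "S" for i in range(n_nodes)}
        state[seed_node] = "E"
        last_t = None
        for t, i, j in sorted(contacts):
            if t != last_t:                       # once per time step: E->I and I->R transitions
                for n, s in state.items():
                    if s == "E" and random.random() < sigma:
                        state[n] = "I"
                    elif s == "I" and random.random() < gamma:
                        state[n] = "R"
                last_t = t
            for a, b in ((i, j), (j, i)):         # transmission along this contact
                if state[a] == "I" and state[b] == "S" and random.random() < beta:
                    state[b] = "E"
        return sum(s != "S" for s in state.values())   # final epidemic size

    contacts = [(t, random.randrange(50), random.randrange(50))
                for t in range(200) for _ in range(5)]
    print(seir_on_contacts(contacts, n_nodes=50))
    ```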

  6. Chaos in a dynamic model of traffic flows in an origin-destination network.

    PubMed

    Zhang, Xiaoyan; Jarrett, David F.

    1998-06-01

    In this paper we investigate the dynamic behavior of road traffic flows in an area represented by an origin-destination (O-D) network. Probably the most widely used model for estimating the distribution of O-D flows is the gravity model, [J. de D. Ortuzar and L. G. Willumsen, Modelling Transport (Wiley, New York, 1990)] which originated from an analogy with Newton's gravitational law. The conventional gravity model, however, is static. The investigation in this paper is based on a dynamic version of the gravity model proposed by Dendrinos and Sonis by modifying the conventional gravity model [D. S. Dendrinos and M. Sonis, Chaos and Social-Spatial Dynamics (Springer-Verlag, Berlin, 1990)]. The dynamic model describes the variations of O-D flows over discrete-time periods, such as each day, each week, and so on. It is shown that when the dimension of the system is one or two, the O-D flow pattern either approaches an equilibrium or oscillates. When the dimension is higher, the behavior found in the model includes equilibria, oscillations, periodic doubling, and chaos. Chaotic attractors are characterized by (positive) Liapunov exponents and fractal dimensions.(c) 1998 American Institute of Physics.
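
    A heavily hedged sketch of a discrete-time relative-dynamics map of the kind discussed above: each O-D flow share is updated as F_i(x)/sum_j F_j(x) with log-linear F_i. The exponent matrix, prefactors and dimension are illustrative choices, not parameters from the paper, so the observed behaviour (fixed point, cycle or irregular orbit) will vary with them.

    ```python
    import numpy as np

    def step(x, A, exponents):
        x = np.maximum(x, 1e-9)                       # numerical floor to avoid overflow
        F = A * np.prod(x[None, :] ** exponents, axis=1)
        return F / F.sum()                            # flows remain positive and sum to one

    rng = np.random.default_rng(3)
    n = 3                                             # three O-D flow shares
    A = np.array([1.0, 1.2, 0.8])
    exponents = rng.uniform(-2.0, 2.0, size=(n, n))

    x = np.full(n, 1.0 / n)
    trajectory = []
    for _ in range(2000):
        x = step(x, A, exponents)
        trajectory.append(x.copy())
    print(np.array(trajectory[-5:]))                  # inspect the long-term behaviour
    ```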

  7. Simulating the 2012 High Plains Drought Using Three Single Column Models (SCM)

    NASA Astrophysics Data System (ADS)

    Medina, I. D.; Baker, I. T.; Denning, S.; Dazlich, D. A.

    2015-12-01

    The impact of changes in the frequency and severity of drought on fresh water sustainability is a great concern for many regions of the world. One such location is the High Plains, where the local economy is primarily driven by fresh water withdrawals from the Ogallala Aquifer, which accounts for approximately 30% of total irrigation withdrawals from all U.S. aquifers combined. Modeling studies that focus on the feedback mechanisms that control the climate and eco-hydrology during times of drought are limited, and have used conventional General Circulation Models (GCMs) with grid length scales ranging from one hundred to several hundred kilometers. Additionally, these models utilize crude statistical parameterizations of cloud processes for estimating sub-grid fluxes of heat and moisture and have a poor representation of land surface heterogeneity. For this research, we focus on the 2012 High Plains drought and perform numerical simulations using three single column model (SCM) versions of BUGS5 (Colorado State University (CSU) GCM coupled to the Simple Biosphere Model (SiB3)). In the first version of BUGS5, the model is used in its standard bulk setting (single atmospheric column coupled to a single instance of SiB3), secondly, the Super-Parameterized Community Atmospheric Model (SP-CAM), a cloud resolving model (CRM) (CRM consists of 32 atmospheric columns), replaces the single CSU GCM atmospheric parameterization and is coupled to a single instance of SiB3, and for the third version of BUGS5, an instance of SiB3 is coupled to each CRM column of the SP-CAM (32 CRM columns coupled to 32 instances of SiB3). To assess the physical realism of the land-atmosphere feedbacks simulated by all three versions of BUGS5, differences in simulated energy and moisture fluxes are computed between the 2011 and 2012 period and are compared to those calculated using observational data from the AmeriFlux Tower Network for the same period at the ARM Site in Lamont, OK. This research will provide a better understanding of model deficiencies in reproducing and predicting droughts in the future, which is essential to the economic, ecologic and social well being of the High Plains.

  8. A Multilayer perspective for the analysis of urban transportation systems

    PubMed Central

    Aleta, Alberto; Meloni, Sandro; Moreno, Yamir

    2017-01-01

    Public urban mobility systems are composed of several transportation modes connected together. Most studies in urban mobility and planning ignore the multi-layer nature of transportation systems, considering only aggregated versions of this complex scenario. In this work we present a model for the representation of the transportation system of an entire city as a multiplex network. Using two different perspectives, one in which each line is a layer and one in which lines of the same transportation mode are grouped together, we study the interconnected structure of 9 different cities in Europe ranging from small towns to mega-cities like London and Berlin, highlighting their vulnerabilities and possible improvements. Finally, for the city of Zaragoza in Spain, we also consider data about service schedule and waiting times, which allow us to create a simple yet realistic model for urban mobility able to reproduce real-world facts and to test for network improvements. PMID:28295015

  9. A Multilayer perspective for the analysis of urban transportation systems.

    PubMed

    Aleta, Alberto; Meloni, Sandro; Moreno, Yamir

    2017-03-15

    Public urban mobility systems are composed of several transportation modes connected together. Most studies in urban mobility and planning ignore the multi-layer nature of transportation systems, considering only aggregated versions of this complex scenario. In this work we present a model for the representation of the transportation system of an entire city as a multiplex network. Using two different perspectives, one in which each line is a layer and one in which lines of the same transportation mode are grouped together, we study the interconnected structure of 9 different cities in Europe ranging from small towns to mega-cities like London and Berlin, highlighting their vulnerabilities and possible improvements. Finally, for the city of Zaragoza in Spain, we also consider data about service schedule and waiting times, which allow us to create a simple yet realistic model for urban mobility able to reproduce real-world facts and to test for network improvements.

  10. Air Traffic Control Improvement Using Prioritized CSMA

    NASA Technical Reports Server (NTRS)

    Robinson, Daryl C.

    2001-01-01

    Simulations performed with Version 7 of the industry-standard network simulation software OPNET are presented for two applications of the Aeronautical Telecommunications Network (ATN), Controller Pilot Data Link Communications (CPDLC) and Automatic Dependent Surveillance-Broadcast mode (ADS-B), over VHF Data Link mode 2 (VDL-2). Communication is modeled for air traffic between just three cities. All aircraft are assumed to have the same equipage. The simulation involves Air Traffic Control (ATC) ground stations and 105 aircraft taking off, flying realistic free-flight trajectories, and landing in a 24-hr period. All communication is modeled as unreliable. Collision-less, prioritized carrier sense multiple access (CSMA) is successfully tested. The statistics presented include latency, queue length, and packet loss. This research suggests that a communications system simpler than the currently envisioned standard may not only suffice but may also surpass the standard's performance at a lower deployment cost.

  11. Estimating surface longwave radiative fluxes from satellites utilizing artificial neural networks

    NASA Astrophysics Data System (ADS)

    Nussbaumer, Eric A.; Pinker, Rachel T.

    2012-04-01

    A novel approach for calculating downwelling surface longwave (DSLW) radiation under all sky conditions is presented. The DSLW model (hereafter DSLW/UMD v2), like its predecessor DSLW/UMD v1, is driven with a combination of Moderate Resolution Imaging Spectroradiometer (MODIS) level-3 cloud parameters and information from the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim model. To compute the clear sky component of DSLW, a two-layer feed-forward artificial neural network with sigmoid hidden neurons and linear output neurons is implemented; it is trained with simulations derived from runs of the Rapid Radiative Transfer Model (RRTM). When computing the cloud contribution to DSLW, the cloud base temperature is estimated by using an independent artificial neural network approach of similar architecture as previously mentioned, and parameterizations. The cloud base temperature neural network is trained using spatially and temporally co-located MODIS and CloudSat Cloud Profiling Radar (CPR) and the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) observations. Daily average estimates of DSLW from 2003 to 2009 are compared against ground measurements from the Baseline Surface Radiation Network (BSRN), giving an overall correlation coefficient of 0.98, root mean square error (rmse) of 15.84 W m-2, and a bias of -0.39 W m-2. This is an improvement over an earlier version of the model (DSLW/UMD v1), which for the same time period has an overall correlation coefficient of 0.97, rmse of 17.27 W m-2, and a bias of 0.73 W m-2.
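
    A minimal sketch of the network architecture described above (sigmoid hidden layer, linear output), trained with scikit-learn on synthetic data. The six predictors standing in for cloud/reanalysis inputs and the toy DSLW target are made up, not the study's training set.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(2000, 6))                     # e.g. cloud fraction, temperature, water vapour, ...
    y = 300 + 60 * X[:, 0] + 40 * X[:, 1] ** 2 + rng.normal(0, 5, 2000)   # toy DSLW in W m^-2

    # Sigmoid ("logistic") hidden units; MLPRegressor uses a linear (identity) output layer.
    model = MLPRegressor(hidden_layer_sizes=(20,), activation="logistic",
                         solver="adam", max_iter=2000, random_state=0)
    model.fit(X[:1500], y[:1500])
    print(model.score(X[1500:], y[1500:]))              # R^2 on held-out samples
    ```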

  12. Commercial vehicle information systems and networks (CVISN) glossary : baseline version

    DOT National Transportation Integrated Search

    1999-01-01

    This document defines terms and acronyms used in current Commercial Vehicle Information Systems and Networks (CVISN) documents and used in activities relevant to development of a national Intelligent Transportation System (ITS) system architecture fo...

  13. Development of Attention Networks and Their Interactions in Childhood

    ERIC Educational Resources Information Center

    Pozuelos, Joan P.; Paz-Alonso, Pedro M.; Castillo, Alejandro; Fuentes, Luis J.; Rueda, M. Rosario

    2014-01-01

    In the present study, we investigated developmental trajectories of alerting, orienting, and executive attention networks and their interactions over childhood. Two cross-sectional experiments were conducted with different samples of 6-to 12-year-old children using modified versions of the attention network task (ANT). In Experiment 1 (N = 106),…

  14. 78 FR 78493 - National Rural Transportation Assistance Program: Solicitation for Proposals

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-26

    ... 5. Task 5: RTAP Rural Resource Center 6. Task 6: Peer-to-Peer Networking 7. Task 7: Research and... for networking with State RTAP managers while establishing communication for information dissemination... Community Edition (DNN) version 05.06.02 (144). 6. Task 6: Peer-to-Peer Networking The recipient will...

  15. Inhibitory neurons promote robust critical firing dynamics in networks of integrate-and-fire neurons.

    PubMed

    Lu, Zhixin; Squires, Shane; Ott, Edward; Girvan, Michelle

    2016-12-01

    We study the firing dynamics of a discrete-state and discrete-time version of an integrate-and-fire neuronal network model with both excitatory and inhibitory neurons. When the integer-valued state of a neuron exceeds a threshold value, the neuron fires, sends out state-changing signals to its connected neurons, and returns to the resting state. In this model, a continuous phase transition from non-ceaseless firing to ceaseless firing is observed. At criticality, power-law distributions of avalanche size and duration with the previously derived exponents, -3/2 and -2, respectively, are observed. Using a mean-field approach, we show analytically how the critical point depends on model parameters. Our main result is that the combined presence of both inhibitory neurons and integrate-and-fire dynamics greatly enhances the robustness of critical power-law behavior (i.e., there is an increased range of parameters, including both sub- and supercritical values, for which several decades of power-law behavior occurs).
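
    A hedged sketch of a discrete-state, discrete-time integrate-and-fire network with excitatory and inhibitory neurons, driven slowly and measuring avalanche sizes. The parameters, random wiring and the avalanche-duration cap are illustrative, not the paper's exact construction.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, threshold, p_conn, frac_inhib = 200, 4, 0.02, 0.2
    sign = np.where(rng.random(N) < frac_inhib, -1, 1)       # -1 inhibitory, +1 excitatory
    conn = rng.random((N, N)) < p_conn                       # directed random wiring
    np.fill_diagonal(conn, False)

    state = rng.integers(0, threshold, size=N)               # integer-valued neuron states
    avalanche_sizes = []
    for _ in range(5000):
        state[rng.integers(N)] += 1                          # slow external drive to one neuron
        size, steps = 0, 0
        firing = np.flatnonzero(state >= threshold)
        while firing.size and steps < 1000:                  # cap guards against ceaseless firing
            steps += 1
            size += firing.size
            state[firing] = 0                                # firing neurons return to rest
            for n in firing:                                 # send signed signals downstream
                state[conn[n]] += sign[n]
            state = np.maximum(state, 0)
            firing = np.flatnonzero(state >= threshold)
        avalanche_sizes.append(size)
    print(np.mean(avalanche_sizes), max(avalanche_sizes))
    ```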

  16. Greenhouse gas network design using backward Lagrangian particle dispersion modelling - Part 1: Methodology and Australian test case

    NASA Astrophysics Data System (ADS)

    Ziehn, T.; Nickless, A.; Rayner, P. J.; Law, R. M.; Roff, G.; Fraser, P.

    2014-09-01

    This paper describes the generation of optimal atmospheric measurement networks for determining carbon dioxide fluxes over Australia using inverse methods. A Lagrangian particle dispersion model is used in reverse mode together with a Bayesian inverse modelling framework to calculate the relationship between weekly surface fluxes, comprising contributions from the biosphere and fossil fuel combustion, and hourly concentration observations for the Australian continent. Meteorological driving fields are provided by the regional version of the Australian Community Climate and Earth System Simulator (ACCESS) at 12 km resolution at an hourly timescale. Prior uncertainties are derived on a weekly timescale for biosphere fluxes and fossil fuel emissions from high-resolution model runs using the Community Atmosphere Biosphere Land Exchange (CABLE) model and the Fossil Fuel Data Assimilation System (FFDAS) respectively. The influence from outside the modelled domain is investigated, but proves to be negligible for the network design. Existing ground-based measurement stations in Australia are assessed in terms of their ability to constrain local flux estimates from the land. We find that the six stations that are currently operational are already able to reduce the uncertainties on surface flux estimates by about 30%. A candidate list of 59 stations is generated based on logistic constraints and an incremental optimisation scheme is used to extend the network of existing stations. In order to achieve an uncertainty reduction of about 50%, we need to double the number of measurement stations in Australia. Assuming equal data uncertainties for all sites, new stations would be mainly located in the northern and eastern part of the continent.
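
    A hedged sketch of the incremental design idea: given a linear flux-to-concentration relationship H and prior/observation covariances, greedily add the candidate station whose observations most reduce the trace of the posterior flux covariance. Dimensions, covariances and the synthetic H rows are toy values, not the study's configuration.

    ```python
    import numpy as np

    def posterior_trace(H_rows, P_prior, r_var):
        """Trace of (P_prior^-1 + H^T R^-1 H)^-1 for the stacked observation operator."""
        if len(H_rows) == 0:
            return np.trace(P_prior)
        H = np.vstack(H_rows)
        P_post = np.linalg.inv(np.linalg.inv(P_prior) + H.T @ H / r_var)
        return np.trace(P_post)

    rng = np.random.default_rng(0)
    n_flux, n_candidates = 30, 15
    P_prior = np.eye(n_flux)                                     # prior flux covariance (toy)
    candidate_H = [rng.standard_normal((24, n_flux)) * 0.1       # hourly footprints per station
                   for _ in range(n_candidates)]

    chosen, rows = [], []
    for _ in range(5):                                           # add five stations greedily
        remaining = [i for i in range(n_candidates) if i not in chosen]
        scores = [posterior_trace(rows + [candidate_H[i]], P_prior, r_var=1.0) for i in remaining]
        idx = remaining[int(np.argmin(scores))]
        chosen.append(idx)
        rows.append(candidate_H[idx])
        print(chosen, round(posterior_trace(rows, P_prior, 1.0), 2))
    ```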

  17. Field-theoretic approach to fluctuation effects in neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buice, Michael A.; Cowan, Jack D.; Mathematics Department, University of Chicago, Chicago, Illinois 60637

    A well-defined stochastic theory for neural activity, which permits the calculation of arbitrary statistical moments and equations governing them, is a potentially valuable tool for theoretical neuroscience. We produce such a theory by analyzing the dynamics of neural activity using field theoretic methods for nonequilibrium statistical processes. Assuming that neural network activity is Markovian, we construct the effective spike model, which describes both neural fluctuations and response. This analysis leads to a systematic expansion of corrections to mean field theory, which for the effective spike model is a simple version of the Wilson-Cowan equation. We argue that neural activity governed by this model exhibits a dynamical phase transition which is in the universality class of directed percolation. More general models (which may incorporate refractoriness) can exhibit other universality classes, such as dynamic isotropic percolation. Because of the extremely high connectivity in typical networks, it is expected that higher-order terms in the systematic expansion are small for experimentally accessible measurements, and thus, consistent with measurements in neocortical slice preparations, we expect mean field exponents for the transition. We provide a quantitative criterion for the relative magnitude of each term in the systematic expansion, analogous to the Ginzburg criterion. Experimental identification of dynamic universality classes in vivo is an outstanding and important question for neuroscience.

  18. A model of gene expression based on random dynamical systems reveals modularity properties of gene regulatory networks.

    PubMed

    Antoneli, Fernando; Ferreira, Renata C; Briones, Marcelo R S

    2016-06-01

    Here we propose a new approach to modeling gene expression based on the theory of random dynamical systems (RDS) that provides a general coupling prescription between the nodes of any given regulatory network, provided the dynamics of each node is modeled by an RDS. The main virtues of this approach are the following: (i) it provides a natural way to obtain arbitrarily large networks by coupling together simple basic pieces, thus revealing the modularity of regulatory networks; (ii) the assumptions about the stochastic processes used in the modeling are fairly general, in the sense that the only requirement is stationarity; (iii) there is a well developed mathematical theory, which is a blend of smooth dynamical systems theory, ergodic theory and stochastic analysis, that allows one to extract relevant dynamical and statistical information without solving the system; (iv) one may obtain the classical rate equations from the corresponding stochastic version by averaging the dynamic random variables (small noise limit). It is important to emphasize that unlike the deterministic case, where coupling two equations is a trivial matter, coupling two RDS is non-trivial, especially in our case, where the coupling is performed between a state variable of one gene and the switching stochastic process of another gene and, hence, it is not a priori true that the resulting coupled system will satisfy the definition of a random dynamical system. We shall provide the necessary arguments that ensure that our coupling prescription does indeed furnish a coupled regulatory network of random dynamical systems. Finally, the fact that classical rate equations are the small noise limit of our stochastic model ensures that any validation or prediction made on the basis of the classical theory is also a validation or prediction of our model. We illustrate our framework with some simple examples of single-gene systems and network motifs. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Design and development of compact monitoring system for disaster remote health centres.

    PubMed

    Santhi, S; Sadasivam, G S

    2015-02-01

    To enhance speedy communication between the patient and the doctor through a newly proposed routing protocol at the mobile node. The proposed model is applied to a telemedicine application during disaster recovery management. In this paper, the Energy Efficient Link Stability Routing Protocol (EELSRP) has been developed and evaluated both in simulation and in real time. This framework is designed for the immediate care of affected persons in remote areas, especially at the time of a disaster when there is no hospital in proximity. In case of disasters, there might be an outbreak of infectious diseases. In such cases, the patient's medical record is also transferred by the field operator from the disaster site to the hospital to facilitate the identification of the disease-causing agent and to prescribe the necessary medication. The heterogeneous networking framework provides reliable, energy-efficient and speedy communication between the patient and the doctor using the proposed routing protocol at the mobile node. The performance of the simulation and real-time versions of the EELSRP protocol has been analyzed. Experimental results demonstrate the efficiency of the real-time version of the EELSRP protocol. The packet delivery ratio and throughput of the real-time version of the EELSRP protocol are increased by 3% and 10%, respectively, when compared to the simulated version of EELSRP. The end-to-end delay and energy consumption are reduced by 10% and 2% in the real-time version of EELSRP.

  20. Sensitivity of Assimilated Tropical Tropospheric Ozone to the Meteorological Analyses

    NASA Technical Reports Server (NTRS)

    Hayashi, Hiroo; Stajner, Ivanka; Pawson, Steven; Thompson, Anne M.

    2002-01-01

    Tropical tropospheric ozone fields from two different experiments performed with an off-line ozone assimilation system developed in NASA's Data Assimilation Office (DAO) are examined. Assimilated ozone fields from the two experiments are compared with the collocated ozone profiles from the Southern Hemispheric Additional Ozonesondes (SHADOZ) network. Results are presented for 1998. The ozone assimilation system includes a chemistry-transport model, which uses analyzed winds from the Goddard Earth Observing System (GEOS) Data Assimilation System (DAS). The two experiments use wind fields from different versions of GEOS DAS: an operational version of the GEOS-2 system and a prototype of the GEOS-4 system. While both versions of the DAS utilize the Physical-space Statistical Analysis System and use comparable observations, they use entirely different general circulation models and data insertion techniques. The shape of the annual-mean vertical profile of the assimilated ozone fields is sensitive to the meteorological analyses, with the GEOS-4-based ozone being closest to the observations. This indicates that the resolved transport in GEOS-4 is more realistic than in GEOS-2. Remaining uncertainties include quantification of the representation of sub-grid-scale processes in the transport calculations, which plays an important role in the locations and seasons where convection dominates the transport.

  1. Design and Test of Pseudorandom Number Generator Using a Star Network of Lorenz Oscillators

    NASA Astrophysics Data System (ADS)

    Cho, Kenichiro; Miyano, Takaya

    We have recently developed a chaos-based stream cipher based on augmented Lorenz equations as a star network of Lorenz subsystems. In our method, the augmented Lorenz equations are used as a pseudorandom number generator. In this study, we propose a new method based on the augmented Lorenz equations for generating binary pseudorandom numbers and evaluate its security using the statistical tests of SP800-22 published by the National Institute of Standards and Technology, in comparison with the performances of other chaotic dynamical models used as binary pseudorandom number generators. We further propose a faster version of the proposed method and evaluate its security using the statistical tests of TestU01 published by L’Ecuyer and Simard.
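
    A hedged sketch of the general chaos-based pseudorandom-bit idea: integrate a single Lorenz oscillator and extract bits by thresholding one state variable. This illustrates the concept only; the paper's augmented (star-coupled) Lorenz system and its actual bit-extraction rule are not reproduced here, and the step size, burn-in and decimation are arbitrary choices.

    ```python
    import numpy as np

    def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        dx, dy, dz = sigma * (y - x), x * (rho - z) - y, x * y - beta * z
        return np.array([x + dt * dx, y + dt * dy, z + dt * dz])   # simple Euler step

    def lorenz_bits(n_bits, burn_in=10000, decimation=20):
        state = np.array([1.0, 1.0, 1.0])
        for _ in range(burn_in):                 # discard the transient
            state = lorenz_step(state)
        bits = []
        while len(bits) < n_bits:
            for _ in range(decimation):          # decimate to reduce short-range correlations
                state = lorenz_step(state)
            bits.append(1 if state[0] > 0 else 0)
        return bits

    stream = lorenz_bits(1024)
    print(sum(stream) / len(stream))             # near 0.5 for a roughly balanced stream
    ```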

  2. EPANET VERSION 2.0

    EPA Science Inventory

    EPANET is a Windows program that performs extended period simulation of hydraulic and water-quality behavior within pressurized pipe networks. A network can consist of pipes, nodes (pipe junctions), pumps, valves and storage tanks or reservoirs. EPANET tracks the flow of water in...

  3. Multiplexing of Radio-Frequency Single Electron Transistors

    NASA Technical Reports Server (NTRS)

    Stevenson, Thomas R.; Pellerano, F. A.; Stahle, C. M.; Aidala, K.; Schoelkopf, R. J.; Krebs, Carolyn (Technical Monitor)

    2001-01-01

    We present results on wavelength division multiplexing of radio-frequency single electron transistors. We use a network of resonant impedance matching circuits to direct applied rf carrier waves to different transistors depending on carrier frequency. A two-channel demonstration of this concept using discrete components successfully reconstructed input signals with small levels of cross coupling. A lithographic version of the rf circuits had measured parameters in agreement with electromagnetic modeling, with reduced cross capacitance and inductance, and should allow 20 to 50 channels to be multiplexed.

  4. Artificial neural network retrained to detect myocardial ischemia using a Japanese multicenter database.

    PubMed

    Nakajima, Kenichi; Okuda, Koichi; Watanabe, Satoru; Matsuo, Shinro; Kinuya, Seigo; Toth, Karin; Edenbrandt, Lars

    2018-03-07

    An artificial neural network (ANN) has been applied to detect myocardial perfusion defects and ischemia. The present study compares the diagnostic accuracy of a more recent ANN version (1.1) with the initial version 1.0. We examined 106 patients (age, 77 ± 10 years) with coronary angiographic findings, comprising multi-vessel disease (≥ 50% stenosis) (52%) or old myocardial infarction (27%), or who had undergone coronary revascularization (30%). The ANN versions 1.0 and 1.1 were trained in Sweden (n = 1051) and Japan (n = 1001), respectively, using 99m Tc-methoxyisobutylisonitrile myocardial perfusion images. The ANN probabilities (from 0.0 to 1.0) of stress defects and ischemia were calculated in candidate regions of abnormalities. The diagnostic accuracy was compared using receiver-operating characteristics (ROC) analysis and the calculated area under the ROC curve (AUC) using expert interpretation as the gold standard. Although the AUC for stress defects was 0.95 and 0.93 (p = 0.27) for versions 1.1 and 1.0, respectively, that for detecting ischemia was significantly improved in version 1.1 (p = 0.0055): AUC 0.96 for version 1.1 (sensitivity 87%, specificity 96%) vs. 0.89 for version 1.0 (sensitivity 78%, specificity 97%). The improvement in the AUC shown by version 1.1 was also significant for patients with neither coronary revascularization nor old myocardial infarction (p = 0.0093): AUC = 0.98 for version 1.1 (sensitivity 88%, specificity 100%) and 0.88 for version 1.0 (sensitivity 76%, specificity 100%). Intermediate ANN probability between 0.1 and 0.7 was more often calculated by version 1.1 compared with version 1.0, which contributed to the improved diagnostic accuracy. The diagnostic accuracy of the new version was also improved in patients with either single-vessel disease or no stenosis (n = 47; AUC, 0.81 vs. 0.66 vs. p = 0.0060) when coronary stenosis was used as a gold standard. The diagnostic ability of the ANN version 1.1 was improved by retraining using the Japanese database, particularly for identifying ischemia.

  5. Inhibiting diffusion of complex contagions in social networks: theoretical and experimental results

    PubMed Central

    Anil Kumar, V.S.; Marathe, Madhav V.; Ravi, S.S.; Rosenkrantz, Daniel J.

    2014-01-01

    We consider the problem of inhibiting undesirable contagions (e.g. rumors, spread of mob behavior) in social networks. Much of the work in this context has been carried out under the 1-threshold model, where diffusion occurs when a node has just one neighbor with the contagion. We study the problem of inhibiting more complex contagions in social networks where nodes may have thresholds larger than 1. The goal is to minimize the propagation of the contagion by removing a small number of nodes (called critical nodes) from the network. We study several versions of this problem and prove that, in general, they cannot even be efficiently approximated to within any factor ρ ≥ 1, unless P = NP. We develop efficient and practical heuristics for these problems and carry out an experimental study of their performance on three well known social networks, namely epinions, wikipedia and slashdot. Our results show that these heuristics perform significantly better than five other known methods. We also establish an efficiently computable upper bound on the number of nodes to which a contagion can spread and evaluate this bound on many real and synthetic networks. PMID:25750583
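
    A sketch of one simple baseline heuristic for the critical-node problem described above: greedily remove the highest-degree nodes, then measure how far a threshold-2 contagion spreads from random seeds before and after removal. This is a generic degree-based baseline, not one of the paper's specific heuristics, and the graph and parameters are synthetic.

    ```python
    import random
    import networkx as nx

    def spread_size(G, seeds, threshold=2):
        """Deterministic threshold contagion: a node activates once >= threshold neighbors are active."""
        active = set(seeds)
        changed = True
        while changed:
            changed = False
            for v in G:
                if v not in active and sum(u in active for u in G[v]) >= threshold:
                    active.add(v)
                    changed = True
        return len(active)

    G = nx.barabasi_albert_graph(500, 4, seed=1)
    seeds = random.Random(1).sample(list(G), 20)

    critical = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:25]   # top-degree nodes
    H = G.copy()
    H.remove_nodes_from(n for n, _ in critical)
    remaining_seeds = [s for s in seeds if s in H]

    print(spread_size(G, seeds), spread_size(H, remaining_seeds))          # before vs after removal
    ```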

  6. An empirical Bayes approach to network recovery using external knowledge.

    PubMed

    Kpogbezan, Gino B; van der Vaart, Aad W; van Wieringen, Wessel N; Leday, Gwenaël G R; van de Wiel, Mark A

    2017-09-01

    Reconstruction of a high-dimensional network may benefit substantially from the inclusion of prior knowledge on the network topology. In the case of gene interaction networks, such knowledge may come, for instance, from pathway repositories like KEGG, or be inferred from data of a pilot study. The Bayesian framework provides a natural means of including such prior knowledge. Based on a Bayesian Simultaneous Equation Model, we develop an appealing Empirical Bayes (EB) procedure that automatically assesses the agreement of the prior knowledge used with the data at hand. We use a variational Bayes method to approximate posterior densities and compare its accuracy with that of a Gibbs sampling strategy. Our method is computationally fast and can outperform known competitors. In a simulation study, we show that accurate prior data can greatly improve the reconstruction of the network, while wrong prior data need not harm the reconstruction. We demonstrate the benefits of the method in an analysis of gene expression data from GEO. In particular, the edges of the recovered network have superior reproducibility (compared to that of competitors) over resampled versions of the data. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Characterization of Combustion Dynamics, Detection, and Prevention of an Unstable Combustion State Based on a Complex-Network Theory

    NASA Astrophysics Data System (ADS)

    Gotoda, Hiroshi; Kinugawa, Hikaru; Tsujimoto, Ryosuke; Domen, Shohei; Okuno, Yuta

    2017-04-01

    Complex-network theory has attracted considerable attention for nearly a decade, and it enables us to encompass our understanding of nonlinear dynamics in complex systems in a wide range of fields, including applied physics and mechanical, chemical, and electrical engineering. We conduct an experimental study using a pragmatic online detection methodology based on complex-network theory to prevent a limiting unstable state such as blowout in a confined turbulent combustion system. This study introduces a modified version of the natural visibility algorithm based on the idea of a visibility limit to serve as a pragmatic online detector. The average degree of the modified version of the natural visibility graph allows us to detect the onset of blowout, resulting in online prevention.
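
    A minimal sketch of how a visibility-limited natural visibility graph and its average degree might be computed from a scalar time series: the visibility test is the standard natural visibility condition, while the window limit and the synthetic test signal are illustrative assumptions, not the authors' implementation or combustion data.

        def visibility_graph_average_degree(x, limit):
            """Average degree of the natural visibility graph of series x, testing
            only pairs at most `limit` samples apart (the visibility limit)."""
            n = len(x)
            degree = [0] * n
            for i in range(n):
                for j in range(i + 1, min(i + limit + 1, n)):
                    # natural visibility: every intermediate sample lies below the line i--j
                    visible = all(
                        x[k] < x[j] + (x[i] - x[j]) * (j - k) / (j - i)
                        for k in range(i + 1, j)
                    )
                    if visible:
                        degree[i] += 1
                        degree[j] += 1
            return sum(degree) / n

        if __name__ == "__main__":
            import math, random
            random.seed(0)
            series = [math.sin(0.2 * t) + 0.3 * random.random() for t in range(500)]
            print("average degree:", visibility_graph_average_degree(series, limit=20))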

  8. A non-linear model of economic production processes

    NASA Astrophysics Data System (ADS)

    Ponzi, A.; Yasutomi, A.; Kaneko, K.

    2003-06-01

    We present a new two phase model of economic production processes which is a non-linear dynamical version of von Neumann's neoclassical model of production, including a market price-setting phase as well as a production phase. The rate of an economic production process is observed, for the first time, to depend on the minimum of its input supplies. This creates highly non-linear supply and demand dynamics. By numerical simulation, production networks are shown to become unstable when the ratio of different products to total processes increases. This provides some insight into observed stability of competitive capitalist economies in comparison to monopolistic economies. Capitalist economies are also shown to have low unemployment.

  9. An algorithm to detect and communicate the differences in computational models describing biological systems.

    PubMed

    Scharm, Martin; Wolkenhauer, Olaf; Waltemath, Dagmar

    2016-02-15

    Repositories support the reuse of models and ensure transparency about results in publications linked to those models. With thousands of models available in repositories, such as the BioModels database or the Physiome Model Repository, a framework to track the differences between models and their versions is essential to compare and combine models. Difference detection not only allows users to study the history of models but also helps in the detection of errors and inconsistencies. Existing repositories lack algorithms to track a model's development over time. Focusing on SBML and CellML, we present an algorithm to accurately detect and describe differences between coexisting versions of a model with respect to (i) the models' encoding, (ii) the structure of biological networks and (iii) mathematical expressions. This algorithm is implemented in a comprehensive and open source library called BiVeS. BiVeS helps to identify and characterize changes in computational models and thereby contributes to the documentation of a model's history. Our work facilitates the reuse and extension of existing models and supports collaborative modelling. Finally, it contributes to better reproducibility of modelling results and to the challenge of model provenance. The workflow described in this article is implemented in BiVeS. BiVeS is freely available as source code and binary from sems.uni-rostock.de. The web interface BudHat demonstrates the capabilities of BiVeS at budhat.sems.uni-rostock.de. © The Author 2015. Published by Oxford University Press.

  10. CREST v2.1 Refined by a Distributed Linear Reservoir Routing Scheme

    NASA Astrophysics Data System (ADS)

    Shen, X.; Hong, Y.; Zhang, K.; Hao, Z.; Wang, D.

    2014-12-01

    Hydrologic modeling is important in water resources management and in flood disaster warning and management. The routing scheme is among the most important components of a hydrologic model. In this study, we replace the lumped LRR (linear reservoir routing) scheme used in previous versions of the distributed hydrological model CREST (coupled routing and excess storage) with a newly proposed distributed LRR method, which is theoretically more suitable for distributed hydrological models. Consequently, we have effectively solved the problems of 1) low channel flow values in daily simulations, 2) discontinuous flow values along the river network during flood events, and 3) unrealistic model parameters. The CREST model equipped with both routing schemes has been tested in the Gan River basin. The distributed LRR scheme has been confirmed to outperform its lumped counterpart by two comparisons, hydrograph validation and visual inspection of the continuity of stream flow along the river: 1) CREST v2.1 (version 2.1), with the implementation of the distributed LRR, achieved an excellent result of [NSCE (Nash coefficient), CC (correlation coefficient), bias] = [0.897, 0.947, -1.57%], while the original CREST v2.0 produced only a negative NSCE, a CC close to zero, and a large bias. 2) CREST v2.1 produced a more naturally smooth river flow pattern along the river network, while v2.0 simulated bumpy and discontinuous discharge along the mainstream. Moreover, we further observe that by using the distributed LRR method, 1) all model parameters fell within their reasonable ranges after automatic optimization; and 2) CREST forced by satellite-based precipitation and PET products produces reasonably good results, i.e., (NSCE, CC, bias) = (0.756, 0.871, -0.669%) in the case study, although there is still room for improvement given the low spatial resolution of the satellite products and their underestimation of heavy rainfall events.
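
    The linear reservoir idea behind both routing schemes can be sketched as follows: each reservoir obeys S = K·Q, so its outflow relaxes toward the inflow with time constant K. The cell-to-cell cascade below is only an illustrative rendering of a distributed LRR; the parameter values, grid, and flow directions are invented and do not reproduce the CREST v2.1 implementation.

        def linear_reservoir_step(storage, inflow, K, dt):
            """One explicit Euler step of dS/dt = I - Q with Q = S / K (a linear reservoir)."""
            outflow = storage / K
            storage = max(storage + (inflow - outflow) * dt, 0.0)
            return storage, outflow

        def route_downstream(local_runoff, downstream, K, dt, n_steps):
            """Toy distributed routing: each cell is its own linear reservoir and drains
            into the cell indexed by downstream[c] (None marks the basin outlet)."""
            n = len(local_runoff)
            storage = [0.0] * n
            lateral = [0.0] * n            # inflow routed from upstream cells (previous step)
            outlet_hydrograph = []
            for _ in range(n_steps):
                new_lateral = [0.0] * n
                outlet_q = 0.0
                for c in range(n):
                    inflow = local_runoff[c] + lateral[c]
                    storage[c], q = linear_reservoir_step(storage[c], inflow, K, dt)
                    if downstream[c] is None:
                        outlet_q += q
                    else:
                        new_lateral[downstream[c]] += q
                lateral = new_lateral
                outlet_hydrograph.append(outlet_q)
            return outlet_hydrograph

        if __name__ == "__main__":
            # three cells in a chain (0 -> 1 -> 2 -> outlet), constant runoff on cell 0
            hydrograph = route_downstream([1.0, 0.0, 0.0], [1, 2, None], K=5.0, dt=1.0, n_steps=30)
            print([round(q, 3) for q in hydrograph])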

  11. Seluge++: A Secure Over-the-Air Programming Scheme in Wireless Sensor Networks

    PubMed Central

    Doroodgar, Farzan; Razzaque, Mohammad Abdur; Isnin, Ismail Fauzi

    2014-01-01

    Over-the-air dissemination of code updates in wireless sensor networks has been a point of interest for researchers in the last few years, and, more importantly, security challenges in the remote propagation of code updates have occupied the majority of efforts in this context. Many security models have been proposed to establish a balance between energy consumption and security strength, concentrating on the constrained nature of wireless sensor network (WSN) nodes. For authentication purposes, most of them have used a Merkle hash tree to avoid using multiple public cryptography operations. These models have mostly assumed an environment in which security has to be at a standard level. Therefore, they have not investigated the tree structure for mission-critical situations in which security has to be at the maximum possible level (e.g., military applications, healthcare). Considering this, we investigate existing security models used in over-the-air dissemination of code updates for possible vulnerabilities, and then we provide a set of countermeasures, correspondingly named Security Model Requirements. Based on the investigation, we concentrate on Seluge, one of the existing over-the-air programming schemes, and we propose an improved version of it, named Seluge++, which complies with the Security Model Requirements and replaces the use of the inefficient Merkle tree with a novel method. Analytical and simulation results show the improvements in Seluge++ compared to Seluge. PMID:24618781
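
    The Merkle hash tree used for packet authentication in Seluge-like schemes works roughly as sketched below: leaves are packet hashes, each internal node hashes its two children, and a packet is verified against the (signed) root using a short path of sibling hashes. This is a generic illustration of the data structure, not the Seluge or Seluge++ construction.

        import hashlib

        def h(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def build_tree(packets):
            """Return the list of tree levels, from leaf hashes up to the root."""
            level = [h(p) for p in packets]
            levels = [level]
            while len(level) > 1:
                if len(level) % 2:                  # duplicate the last node on odd levels
                    level = level + [level[-1]]
                level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
                levels.append(level)
            return levels

        def auth_path(levels, index):
            """Sibling hashes needed to verify the leaf at `index` against the root."""
            path = []
            for level in levels[:-1]:
                if len(level) % 2:
                    level = level + [level[-1]]
                sibling = index ^ 1
                path.append((sibling < index, level[sibling]))
                index //= 2
            return path

        def verify(packet, path, root):
            node = h(packet)
            for sibling_is_left, sibling in path:
                node = h(sibling + node) if sibling_is_left else h(node + sibling)
            return node == root

        if __name__ == "__main__":
            packets = [b"page0", b"page1", b"page2", b"page3", b"page4"]
            levels = build_tree(packets)
            root = levels[-1][0]                    # in such schemes the root is signed once
            path = auth_path(levels, 2)
            print(verify(b"page2", path, root))     # True
            print(verify(b"tampered", path, root))  # False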

  12. Seluge++: a secure over-the-air programming scheme in wireless sensor networks.

    PubMed

    Doroodgar, Farzan; Abdur Razzaque, Mohammad; Isnin, Ismail Fauzi

    2014-03-11

    Over-the-air dissemination of code updates in wireless sensor networks has been a point of interest for researchers in the last few years, and, more importantly, security challenges in the remote propagation of code updates have occupied the majority of efforts in this context. Many security models have been proposed to establish a balance between energy consumption and security strength, concentrating on the constrained nature of wireless sensor network (WSN) nodes. For authentication purposes, most of them have used a Merkle hash tree to avoid using multiple public cryptography operations. These models have mostly assumed an environment in which security has to be at a standard level. Therefore, they have not investigated the tree structure for mission-critical situations in which security has to be at the maximum possible level (e.g., military applications, healthcare). Considering this, we investigate existing security models used in over-the-air dissemination of code updates for possible vulnerabilities, and then we provide a set of countermeasures, correspondingly named Security Model Requirements. Based on the investigation, we concentrate on Seluge, one of the existing over-the-air programming schemes, and we propose an improved version of it, named Seluge++, which complies with the Security Model Requirements and replaces the use of the inefficient Merkle tree with a novel method. Analytical and simulation results show the improvements in Seluge++ compared to Seluge.

  13. Structured prediction models for RNN based sequence labeling in clinical text.

    PubMed

    Jagannatha, Abhyuday N; Yu, Hong

    2016-11-01

    Sequence labeling is a widely used method for named entity recognition and information extraction from unstructured natural language data. In the clinical domain, one major application of sequence labeling involves extraction of medical entities such as medication, indication, and side-effects from Electronic Health Record narratives. Sequence labeling in this domain presents its own set of challenges and objectives. In this work we experimented with various CRF-based structured learning models combined with recurrent neural networks. We extend the previously studied LSTM-CRF models with explicit modeling of pairwise potentials. We also propose an approximate version of skip-chain CRF inference with RNN potentials. We use these methodologies for structured prediction in order to improve the exact phrase detection of various medical entities.

  14. Structured prediction models for RNN based sequence labeling in clinical text

    PubMed Central

    Jagannatha, Abhyuday N; Yu, Hong

    2016-01-01

    Sequence labeling is a widely used method for named entity recognition and information extraction from unstructured natural language data. In the clinical domain, one major application of sequence labeling involves extraction of medical entities such as medication, indication, and side-effects from Electronic Health Record narratives. Sequence labeling in this domain presents its own set of challenges and objectives. In this work we experimented with various CRF-based structured learning models combined with recurrent neural networks. We extend the previously studied LSTM-CRF models with explicit modeling of pairwise potentials. We also propose an approximate version of skip-chain CRF inference with RNN potentials. We use these methodologies for structured prediction in order to improve the exact phrase detection of various medical entities. PMID:28004040
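
    One concrete reading of "explicit modeling of pairwise potentials" is a linear-chain CRF layer on top of per-token RNN scores, decoded with Viterbi. The sketch below uses random emission scores in place of LSTM outputs and an invented tag set; it illustrates the decoding step only and is not the authors' model.

        import numpy as np

        def viterbi(emissions, transitions):
            """Most likely tag sequence given per-token emission scores (T x K)
            and pairwise transition scores (K x K), as in a linear-chain CRF."""
            T, K = emissions.shape
            score = emissions[0].copy()
            backptr = np.zeros((T, K), dtype=int)
            for t in range(1, T):
                # total[i, j] = best score ending with tag i at t-1 followed by tag j at t
                total = score[:, None] + transitions + emissions[t][None, :]
                backptr[t] = total.argmax(axis=0)
                score = total.max(axis=0)
            tags = [int(score.argmax())]
            for t in range(T - 1, 0, -1):
                tags.append(int(backptr[t, tags[-1]]))
            return tags[::-1]

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            tagset = ["O", "B-Medication", "I-Medication", "B-ADE", "I-ADE"]  # illustrative
            emissions = rng.normal(size=(8, len(tagset)))      # stand-in for LSTM token scores
            transitions = rng.normal(size=(len(tagset), len(tagset)))
            print([tagset[i] for i in viterbi(emissions, transitions)])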

  15. NetMOD Version 2.0 User's Manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merchant, Bion J.

    2015-10-01

    NetMOD (Network Monitoring for Optimal Detection) is a Java-based software package for conducting simulations of seismic, hydroacoustic, and infrasonic networks. Specifically, NetMOD simulates the detection capabilities of monitoring networks. Network simulations have long been used to study network resilience to station outages and to determine where additional stations are needed to reduce monitoring thresholds. NetMOD makes use of geophysical models to determine the source characteristics, the signal attenuation along the path between the source and station, and the performance and noise properties of the station. These geophysical models are combined to simulate the relative amplitudes of signal and noise that are observed at each of the stations. From these signal-to-noise ratios (SNR), the probability of detection can be computed given a detection threshold. This manual describes how to configure and operate NetMOD to perform detection simulations. In addition, NetMOD is distributed with simulation datasets for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) International Monitoring System (IMS) seismic, hydroacoustic, and infrasonic networks for the purpose of demonstrating NetMOD's capabilities and providing user training. The tutorial sections of this manual use these datasets when describing the steps involved in running a simulation.
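
    The core computation described above, combining predicted signal and noise amplitudes into a detection probability at a threshold, can be sketched as follows. The Gaussian log-amplitude error model, the station independence assumption, and all numbers are illustrative; they are not NetMOD's geophysical models or configuration.

        import math

        def detection_probability(log_signal, log_noise, threshold, sigma):
            """Probability that the log SNR exceeds a detection threshold, assuming the
            prediction error of the log amplitudes is Gaussian with standard deviation sigma."""
            snr = log_signal - log_noise
            z = (snr - threshold) / sigma
            return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

        def network_detection_probability(station_probs, n_required=3):
            """Probability that at least n_required stations detect, assuming independent
            stations (a common simplification in capability studies)."""
            n = len(station_probs)
            dist = [1.0] + [0.0] * n          # dist[k] = P(exactly k detections so far)
            for p in station_probs:
                for k in range(n, 0, -1):
                    dist[k] = dist[k] * (1 - p) + dist[k - 1] * p
                dist[0] *= (1 - p)
            return sum(dist[n_required:])

        if __name__ == "__main__":
            stations = [(2.1, 0.6), (1.8, 0.9), (2.4, 0.7), (1.5, 1.0)]   # (log signal, log noise)
            probs = [detection_probability(s, n, threshold=1.0, sigma=0.35) for s, n in stations]
            print("per-station:", [round(p, 3) for p in probs])
            print("network (>= 3 stations):", round(network_detection_probability(probs, 3), 3))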

  16. Greenhouse gas network design using backward Lagrangian particle dispersion modelling - Part 1: Methodology and Australian test case

    NASA Astrophysics Data System (ADS)

    Ziehn, T.; Nickless, A.; Rayner, P. J.; Law, R. M.; Roff, G.; Fraser, P.

    2014-03-01

    This paper describes the generation of optimal atmospheric measurement networks for determining carbon dioxide fluxes over Australia using inverse methods. A Lagrangian particle dispersion model is used in reverse mode together with a Bayesian inverse modelling framework to calculate the relationship between weekly surface fluxes and hourly concentration observations for the Australian continent. Meteorological driving fields are provided by the regional version of the Australian Community Climate and Earth System Simulator (ACCESS) at 12 km resolution at an hourly time scale. Prior uncertainties are derived on a weekly time scale for biosphere fluxes and fossil fuel emissions from high resolution BIOS2 model runs and from the Fossil Fuel Data Assimilation System (FFDAS), respectively. The influence from outside the modelled domain is investigated, but proves to be negligible for the network design. Existing ground based measurement stations in Australia are assessed in terms of their ability to constrain local flux estimates from the land. We find that the six stations that are currently operational are already able to reduce the uncertainties on surface flux estimates by about 30%. A candidate list of 59 stations is generated based on logistic constraints and an incremental optimization scheme is used to extend the network of existing stations. In order to achieve an uncertainty reduction of about 50% we need to double the number of measurement stations in Australia. Assuming equal data uncertainties for all sites, new stations would be mainly located in the northern and eastern part of the continent.
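
    The incremental optimization scheme can be illustrated, in a highly simplified form, as a greedy loop that repeatedly adds the candidate station giving the largest reduction in posterior flux uncertainty under a linear Bayesian inversion. The toy Jacobian, prior covariance, and observation error below are invented for the sketch and are unrelated to the ACCESS, BIOS2, or FFDAS setup.

        import numpy as np

        def posterior_uncertainty(H, prior_cov, obs_var):
            """Total posterior flux variance for observation operator H (one row per station)."""
            if H.shape[0] == 0:
                return float(np.trace(prior_cov))
            R_inv = np.eye(H.shape[0]) / obs_var
            post_cov = np.linalg.inv(np.linalg.inv(prior_cov) + H.T @ R_inv @ H)
            return float(np.trace(post_cov))

        def greedy_network(H_candidates, prior_cov, obs_var, n_new):
            """Incrementally pick the candidate rows of H that most reduce posterior variance."""
            chosen, current_u = [], None
            for _ in range(n_new):
                best, best_u = None, None
                for i in range(H_candidates.shape[0]):
                    if i in chosen:
                        continue
                    u = posterior_uncertainty(H_candidates[chosen + [i]], prior_cov, obs_var)
                    if best_u is None or u < best_u:
                        best, best_u = i, u
                chosen.append(best)
                current_u = best_u
            return chosen, current_u

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            n_flux, n_candidates = 20, 12
            H = rng.normal(size=(n_candidates, n_flux))     # toy station sensitivities
            prior = np.eye(n_flux)
            chosen, post_u = greedy_network(H, prior, obs_var=0.5, n_new=4)
            print("chosen stations:", chosen)
            print("uncertainty reduction: %.1f%%" % (100 * (1 - post_u / np.trace(prior))))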

  17. Soil Moisture Active Passive Mission L4_SM Data Product Assessment (Version 2 Validated Release)

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf Helmut; De Lannoy, Gabrielle J. M.; Liu, Qing; Ardizzone, Joseph V.; Chen, Fan; Colliander, Andreas; Conaty, Austin; Crow, Wade; Jackson, Thomas; Kimball, John

    2016-01-01

    During the post-launch SMAP calibration and validation (Cal/Val) phase, there are two objectives for each science data product team: 1) calibrate, verify, and improve the performance of the science algorithm, and 2) validate the accuracy of the science data product as specified in the science requirements and according to the Cal/Val schedule. This report provides an assessment of the SMAP Level 4 Surface and Root Zone Soil Moisture Passive (L4_SM) product specifically for the product's public Version 2 validated release scheduled for 29 April 2016. The assessment of the Version 2 L4_SM data product includes comparisons of SMAP L4_SM soil moisture estimates with in situ soil moisture observations from core validation sites and sparse networks. The assessment further includes a global evaluation of the internal diagnostics from the ensemble-based data assimilation system that is used to generate the L4_SM product. This evaluation focuses on the statistics of the observation-minus-forecast (O-F) residuals and the analysis increments. Together, the core validation site comparisons and the statistics of the assimilation diagnostics are considered primary validation methodologies for the L4_SM product. Comparisons against in situ measurements from regional-scale sparse networks are considered a secondary validation methodology because such in situ measurements are subject to up-scaling errors from the point-scale to the grid cell scale of the data product. Based on the limited set of core validation sites, the wide geographic range of the sparse network sites, and the global assessment of the assimilation diagnostics, the assessment presented here meets the criteria established by the Committee on Earth Observing Satellites for Stage 2 validation and supports the validated release of the data. An analysis of the time-averaged surface and root zone soil moisture shows that the global pattern of arid and humid regions is captured by the L4_SM estimates. Results from the core validation site comparisons indicate that "Version 2" of the L4_SM data product meets the self-imposed L4_SM accuracy requirement, which is formulated in terms of the ubRMSE: the RMSE (Root Mean Square Error) after removal of the long-term mean difference. The overall ubRMSE of the 3-hourly L4_SM surface soil moisture at the 9 km scale is 0.035 cubic meters per cubic meter. The corresponding ubRMSE for L4_SM root zone soil moisture is 0.024 cubic meters per cubic meter. Both of these metrics are comfortably below the 0.04 cubic meters per cubic meter requirement. The L4_SM estimates are an improvement over estimates from a model-only SMAP Nature Run version 4 (NRv4), which demonstrates the beneficial impact of the SMAP brightness temperature data. L4_SM surface soil moisture estimates are consistently more skillful than NRv4 estimates, although not by a statistically significant margin. The lack of statistical significance is not surprising given the limited data record available to date. Root zone soil moisture estimates from L4_SM and NRv4 have similar skill. Results from comparisons of the L4_SM product to in situ measurements from nearly 400 sparse network sites corroborate the core validation site results. The instantaneous soil moisture and soil temperature analysis increments are within a reasonable range and result in spatially smooth soil moisture analyses.
The O-F residuals exhibit only small biases on the order of 1-3 degrees Kelvin between the (re-scaled) SMAP brightness temperature observations and the L4_SM model forecast, which indicates that the assimilation system is largely unbiased. The spatially averaged time series standard deviation of the O-F residuals is 5.9 degrees Kelvin, which reduces to 4.0 degrees Kelvin for the observation-minus-analysis (O-A) residuals, reflecting the impact of the SMAP observations on the L4_SM system. Averaged globally, the time series standard deviation of the normalized O-F residuals is close to unity, which would suggest that the magnitude of the modeled errors approximately reflects that of the actual errors. The assessment report also notes several limitations of the "Version 2" L4_SM data product and science algorithm calibration that will be addressed in future releases. Regionally, the time series standard deviation of the normalized O-F residuals deviates considerably from unity, which indicates that the L4_SM assimilation algorithm either over- or under-estimates the actual errors that are present in the system. Planned improvements include revised land model parameters, revised error parameters for the land model and the assimilated SMAP observations, and revised surface meteorological forcing data for the operational period and underlying climatological data. Moreover, a refined analysis of the impact of SMAP observations will be facilitated by the construction of additional variants of the model-only reference data. Nevertheless, the “Version 2” validated release of the L4_SM product is sufficiently mature and of adequate quality for distribution to and use by the larger science and application communities.
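
    For reference, the ubRMSE metric used throughout this assessment is simply the RMSE computed after removing the long-term mean difference (the bias) between the two series. A quick illustration with made-up numbers, not SMAP or in situ data:

        import math

        def ubrmse(estimates, observations):
            """Unbiased RMSE: RMSE after removing the long-term mean difference (bias)."""
            n = len(estimates)
            bias = sum(e - o for e, o in zip(estimates, observations)) / n
            return math.sqrt(sum((e - o - bias) ** 2 for e, o in zip(estimates, observations)) / n)

        if __name__ == "__main__":
            est = [0.26, 0.31, 0.28, 0.24, 0.30, 0.27]   # made-up soil moisture series (m3/m3)
            obs = [0.22, 0.27, 0.26, 0.21, 0.25, 0.24]
            print("ubRMSE = %.3f m3/m3" % ubrmse(est, obs))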

  18. Mesoscopic Effects in an Agent-Based Bargaining Model in Regular Lattices

    PubMed Central

    Poza, David J.; Santos, José I.; Galán, José M.; López-Paredes, Adolfo

    2011-01-01

    The effect of spatial structure has been proved very relevant in repeated games. In this work we propose an agent based model where a fixed finite population of tagged agents play iteratively the Nash demand game in a regular lattice. The model extends the multiagent bargaining model by Axtell, Epstein and Young [1] modifying the assumption of global interaction. Each agent is endowed with a memory and plays the best reply against the opponent's most frequent demand. We focus our analysis on the transient dynamics of the system, studying by computer simulation the set of states in which the system spends a considerable fraction of the time. The results show that all the possible persistent regimes in the global interaction model can also be observed in this spatial version. We also find that the mesoscopic properties of the interaction networks that the spatial distribution induces in the model have a significant impact on the diffusion of strategies, and can lead to new persistent regimes different from those found in previous research. In particular, community structure in the intratype interaction networks may cause that communities reach different persistent regimes as a consequence of the hindering diffusion effect of fluctuating agents at their borders. PMID:21408019
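
    The core update rule, each agent demanding the best reply to its opponent's most frequent remembered demand, can be sketched as below. The lattice size, memory length, and three-demand split (30/50/70 of a pie of 100) are illustrative choices and do not reproduce the paper's exact parameterization or tag dynamics.

        import random
        from collections import Counter, deque

        DEMANDS = (30, 50, 70)        # low / medium / high shares of a pie of 100

        def best_reply(opponent_history):
            """Demand as much as possible while remaining compatible with the opponent's
            most frequent remembered demand."""
            if not opponent_history:
                return random.choice(DEMANDS)
            modal = Counter(opponent_history).most_common(1)[0][0]
            feasible = [d for d in DEMANDS if d + modal <= 100]
            return max(feasible) if feasible else min(DEMANDS)

        def simulate(size=10, memory=5, rounds=20000, seed=0):
            random.seed(seed)
            # one memory per lattice site, storing neighbours' past demands
            mem = {(i, j): deque(maxlen=memory) for i in range(size) for j in range(size)}
            for _ in range(rounds):
                i, j = random.randrange(size), random.randrange(size)
                di, dj = random.choice([(0, 1), (1, 0), (0, -1), (-1, 0)])
                nb = ((i + di) % size, (j + dj) % size)   # von Neumann neighbour on a torus
                a, b = best_reply(mem[(i, j)]), best_reply(mem[nb])
                mem[(i, j)].append(b)
                mem[nb].append(a)
            return Counter(best_reply(m) for m in mem.values())

        if __name__ == "__main__":
            print(simulate())         # distribution of current demands across the lattice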

  19. Mesoscopic effects in an agent-based bargaining model in regular lattices.

    PubMed

    Poza, David J; Santos, José I; Galán, José M; López-Paredes, Adolfo

    2011-03-09

    The effect of spatial structure has been proved very relevant in repeated games. In this work we propose an agent based model where a fixed finite population of tagged agents play iteratively the Nash demand game in a regular lattice. The model extends the multiagent bargaining model by Axtell, Epstein and Young modifying the assumption of global interaction. Each agent is endowed with a memory and plays the best reply against the opponent's most frequent demand. We focus our analysis on the transient dynamics of the system, studying by computer simulation the set of states in which the system spends a considerable fraction of the time. The results show that all the possible persistent regimes in the global interaction model can also be observed in this spatial version. We also find that the mesoscopic properties of the interaction networks that the spatial distribution induces in the model have a significant impact on the diffusion of strategies, and can lead to new persistent regimes different from those found in previous research. In particular, community structure in the intratype interaction networks may cause that communities reach different persistent regimes as a consequence of the hindering diffusion effect of fluctuating agents at their borders.

  20. Youth-Nominated Support Team for Suicidal Adolescents (Version 1): A Randomized Controlled Trial

    ERIC Educational Resources Information Center

    King, Cheryl A.; Kramer, Anne; Preuss, Lesli; Kerr, David C. R.; Weisse, Lois; Venkataraman, Sanjeev

    2006-01-01

    In this study, the authors investigated the efficacy of the Youth-Nominated Support Team-Version 1 (YST-1), a psychoeducational social network intervention, with 289 suicidal, psychiatrically hospitalized adolescents (197 girls, 92 boys). Adolescents were randomly assigned to treatment-as-usual plus YST-1 or treatment-as-usual only. Assessments…

  1. NCCN Guidelines Insights: Multiple Myeloma, Version 3.2018.

    PubMed

    Kumar, Shaji K; Callander, Natalie S; Alsina, Melissa; Atanackovic, Djordje; Biermann, J Sybil; Castillo, Jorge; Chandler, Jason C; Costello, Caitlin; Faiman, Matthew; Fung, Henry C; Godby, Kelly; Hofmeister, Craig; Holmberg, Leona; Holstein, Sarah; Huff, Carol Ann; Kang, Yubin; Kassim, Adetola; Liedtke, Michaela; Malek, Ehsan; Martin, Thomas; Neppalli, Vishala T; Omel, James; Raje, Noopur; Singhal, Seema; Somlo, George; Stockerl-Goldstein, Keith; Weber, Donna; Yahalom, Joachim; Kumar, Rashmi; Shead, Dorothy A

    2018-01-01

    The NCCN Guidelines for Multiple Myeloma provide recommendations for diagnosis, evaluation, treatment (including supportive care), and follow-up for patients with myeloma. These NCCN Guidelines Insights highlight the important updates/changes specific to the myeloma therapy options in the 2018 version of the NCCN Guidelines. Copyright © 2018 by the National Comprehensive Cancer Network.

  2. Phase transitions in cooperative coinfections: Simulation results for networks and lattices

    NASA Astrophysics Data System (ADS)

    Grassberger, Peter; Chen, Li; Ghanbarnejad, Fakhteh; Cai, Weiran

    2016-04-01

    We study the spreading of two mutually cooperative diseases on different network topologies, and with two microscopic realizations, both of which are stochastic versions of a susceptible-infected-removed type model studied by us recently in mean field approximation. There it had been found that cooperativity can lead to first order transitions from spreading to extinction. However, due to the rapid mixing implied by the mean field assumption, first order transitions required nonzero initial densities of sick individuals. For the stochastic model studied here the results depend strongly on the underlying network. First order transitions are found when there are few short but many long loops: (i) No first order transitions exist on trees and on 2-d lattices with local contacts. (ii) They do exist on Erdős-Rényi (ER) networks, on d -dimensional lattices with d ≥4 , and on 2-d lattices with sufficiently long-ranged contacts. (iii) On 3-d lattices with local contacts the results depend on the microscopic details of the implementation. (iv) While single infected seeds can always lead to infinite epidemics on regular lattices, on ER networks one sometimes needs finite initial densities of infected nodes. (v) In all cases the first order transitions are actually "hybrid"; i.e., they display also power law scaling usually associated with second order transitions. On regular lattices, our model can also be interpreted as the growth of an interface due to cooperative attachment of two species of particles. Critically pinned interfaces in this model seem to be in different universality classes than standard critically pinned interfaces in models with forbidden overhangs. Finally, the detailed results mentioned above hold only when both diseases propagate along the same network of links. If they use different links, results can be rather different in detail, but are similar overall.

  3. Factor Structure Evaluation of the French Version of the Digital Natives Assessment Scale.

    PubMed

    Wagner, Vincent; Acier, Didier

    2017-03-01

    "Digital natives" concept defines young adults particularly familiar with emerging technologies such as computers, smartphones, or Internet. This notion is still controversial and so far, the primary identifying criterion was to consider their date of birth. However, literature highlighted the need to describe specific characteristics. The purpose of this research was to evaluate the factor structure of a French version of the Digital Natives Assessment Scale (DNAS). The sample of this study includes 590 participants from a 6-week massive open online course and from Web sites, electronic forums, and social networks. The DNAS was translated in French and then back-translated to English. A principal component analysis with orthogonal rotation followed by a confirmatory factorial analysis showed that a 15-item four-correlated component model provided the best fit for the data of our sample. Factor structure of this French-translated version of the DNAS was rather similar than those found in earlier studies. This study provides evidence of the DNAS robustness through cross-cultural and cross-generational validation. The French version of the DNAS appears to be appropriate as a quick and effective questionnaire to assess digital natives. More studies are needed to better define further features of this particular group.

  4. TRANSMISSION NETWORK PLANNING METHOD FOR COMPARATIVE STUDIES (JOURNAL VERSION)

    EPA Science Inventory

    An automated transmission network planning method for comparative studies is presented. This method employs logical steps that may closely parallel those taken in practice by the planning engineers. Use is made of a sensitivity matrix to simulate the engineers' experience in sele...

  5. Mobile Virtual Private Networking

    NASA Astrophysics Data System (ADS)

    Pulkkis, Göran; Grahn, Kaj; Mårtens, Mathias; Mattsson, Jonny

    Mobile Virtual Private Networking (VPN) solutions based on the Internet Security Protocol (IPSec), Transport Layer Security/Secure Socket Layer (SSL/TLS), Secure Shell (SSH), 3G/GPRS cellular networks, Mobile IP, and the presently experimental Host Identity Protocol (HIP) are described, compared and evaluated. Mobile VPN solutions based on HIP are recommended for future networking because of superior processing efficiency and network capacity demand features. Mobile VPN implementation issues associated with the IP protocol versions IPv4 and IPv6 are also evaluated. Mobile VPN implementation experiences are presented and discussed.

  6. Monthly and seasonally verification of precipitation in Poland

    NASA Astrophysics Data System (ADS)

    Starosta, K.; Linkowska, J.

    2009-04-01

    The national meteorological service of Poland, the Institute of Meteorology and Water Management (IMWM), joined COSMO, the Consortium for Small Scale Modelling, in July 2004. In Poland, the COSMO_PL model version 3.5 ran until June 2007; since July 2007, model version 4.0 has been running. The model runs in an operational mode at 14-km grid spacing, twice a day (00 UTC, 12 UTC). For scientific research, a model with 7-km grid spacing is also run. Monthly and seasonal verification of the 24-hour (06 UTC - 06 UTC) accumulated precipitation is presented in this paper. The precipitation field of COSMO_LM was verified against the rain gauge network (308 points). The verification was made for every month and all seasons from December 2007 to December 2008, for three forecast days and for selected thresholds: 0.5, 1, 2.5, 5, 10, 20, 25, 30 mm. The following indices from the contingency table were calculated: FBI (bias), POD (probability of detection), PON (probability of detection of non-events), FAR (false alarm rate), TSS (true skill statistic), HSS (Heidke skill score), ETS (equitable skill score). Percentile ranks and the ROC (relative operating characteristic) are also presented. The ROC is a graph of the hit rate (Y-axis) against the false alarm rate (X-axis) for different decision thresholds.

  7. Monthly and seasonally verification of precipitation in Poland

    NASA Astrophysics Data System (ADS)

    Starosta, K.; Linkowska, J.

    2009-04-01

    The national meteorological service of Poland, the Institute of Meteorology and Water Management (IMWM), joined COSMO, the Consortium for Small Scale Modelling, in July 2004. In Poland, the COSMO_PL model version 3.5 ran until June 2007; since July 2007, model version 4.0 has been running. The model runs in an operational mode at 14-km grid spacing, twice a day (00 UTC, 12 UTC). For scientific research, a model with 7-km grid spacing is also run. Monthly and seasonal verification of the 24-hour (06 UTC - 06 UTC) accumulated precipitation is presented in this paper. The precipitation field of COSMO_LM was verified against the rain gauge network (308 points). The verification was made for every month and all seasons from December 2007 to December 2008, for three forecast days and for selected thresholds: 0.5, 1, 2.5, 5, 10, 20, 25, 30 mm. The following indices from the contingency table were calculated: FBI (bias), POD (probability of detection), PON (probability of detection of non-events), FAR (false alarm rate), TSS (true skill statistic), HSS (Heidke skill score), ETS (equitable skill score). Percentile ranks and the ROC (relative operating characteristic) are also presented. The ROC is a graph of the hit rate (Y-axis) against the false alarm rate (X-axis) for different decision thresholds.
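
    For reference, the categorical indices listed above are all derived from a 2x2 contingency table of forecast versus observed threshold exceedance. A minimal sketch with invented counts, not the paper's verification data:

        def verification_indices(hits, false_alarms, misses, correct_negatives):
            """Standard 2x2 contingency-table scores for one precipitation threshold."""
            a, b, c, d = hits, false_alarms, misses, correct_negatives
            n = a + b + c + d
            fbi = (a + b) / (a + c)                        # frequency bias (FBI)
            pod = a / (a + c)                              # probability of detection
            far = b / (a + b)                              # false alarm ratio
            pon = d / (b + d)                              # prob. of detection of non-events
            tss = pod - b / (b + d)                        # true skill statistic
            a_random = (a + b) * (a + c) / n               # hits expected by chance
            ets = (a - a_random) / (a + b + c - a_random)  # equitable threat score (ETS)
            hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))  # Heidke skill score
            return dict(FBI=fbi, POD=pod, FAR=far, PON=pon, TSS=tss, HSS=hss, ETS=ets)

        if __name__ == "__main__":
            # invented counts for one threshold and one forecast day
            scores = verification_indices(hits=42, false_alarms=18, misses=11, correct_negatives=237)
            for name, value in scores.items():
                print(f"{name}: {value:.3f}")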

  8. Neural Networks for Segregation of Multiple Objects: Visual Figure-Ground Separation and Auditory Pitch Perception.

    NASA Astrophysics Data System (ADS)

    Wyse, Lonce

    An important component of perceptual object recognition is the segmentation into coherent perceptual units of the "blooming buzzing confusion" that bombards the senses. The work presented herein develops neural network models of some key processes of pre-attentive vision and audition that serve this goal. A neural network model, called an FBF (Feature -Boundary-Feature) network, is proposed for automatic parallel separation of multiple figures from each other and their backgrounds in noisy images. Figure-ground separation is accomplished by iterating operations of a Boundary Contour System (BCS) that generates a boundary segmentation of a scene, and a Feature Contour System (FCS) that compensates for variable illumination and fills-in surface properties using boundary signals. A key new feature is the use of the FBF filling-in process for the figure-ground separation of connected regions, which are subsequently more easily recognized. The new CORT-X 2 model is a feed-forward version of the BCS that is designed to detect, regularize, and complete boundaries in up to 50 percent noise. It also exploits the complementary properties of on-cells and off -cells to generate boundary segmentations and to compensate for boundary gaps during filling-in. In the realm of audition, many sounds are dominated by energy at integer multiples, or "harmonics", of a fundamental frequency. For such sounds (e.g., vowels in speech), the individual frequency components fuse, so that they are perceived as one sound source with a pitch at the fundamental frequency. Pitch is integral to separating auditory sources, as well as to speaker identification and speech understanding. A neural network model of pitch perception called SPINET (SPatial PItch NETwork) is developed and used to simulate a broader range of perceptual data than previous spectral models. The model employs a bank of narrowband filters as a simple model of basilar membrane mechanics, spectral on-center off-surround competitive interactions, and a "harmonic sieve" mechanism whereby the strength of a pitch depends only on spectral regions near harmonics. The model is evaluated using data involving mistuned components, shifted harmonics, complex tones with varying phase relationships, and continuous spectra such as rippled noise and narrow noise bands.

  9. Proactive Alleviation Procedure to Handle Black Hole Attack and Its Version

    PubMed Central

    Babu, M. Rajesh; Dian, S. Moses; Chelladurai, Siva; Palaniappan, Mathiyalagan

    2015-01-01

    The world is moving towards a new realm of computing such as Internet of Things. The Internet of Things, however, envisions connecting almost all objects within the world to the Internet by recognizing them as smart objects. In doing so, the existing networks which include wired, wireless, and ad hoc networks should be utilized. Moreover, apart from other networks, the ad hoc network is full of security challenges. For instance, the MANET (mobile ad hoc network) is susceptible to various attacks in which the black hole attacks and its versions do serious damage to the entire MANET infrastructure. The severity of this attack increases, when the compromised MANET nodes work in cooperation with each other to make a cooperative black hole attack. Therefore this paper proposes an alleviation procedure which consists of timely mandate procedure, hole detection algorithm, and sensitive guard procedure to detect the maliciously behaving nodes. It has been observed that the proposed procedure is cost-effective and ensures QoS guarantee by assuring resource availability thus making the MANET appropriate for Internet of Things. PMID:26495430

  10. Proactive Alleviation Procedure to Handle Black Hole Attack and Its Version.

    PubMed

    Babu, M Rajesh; Dian, S Moses; Chelladurai, Siva; Palaniappan, Mathiyalagan

    2015-01-01

    The world is moving towards a new realm of computing such as Internet of Things. The Internet of Things, however, envisions connecting almost all objects within the world to the Internet by recognizing them as smart objects. In doing so, the existing networks which include wired, wireless, and ad hoc networks should be utilized. Moreover, apart from other networks, the ad hoc network is full of security challenges. For instance, the MANET (mobile ad hoc network) is susceptible to various attacks in which the black hole attacks and its versions do serious damage to the entire MANET infrastructure. The severity of this attack increases, when the compromised MANET nodes work in cooperation with each other to make a cooperative black hole attack. Therefore this paper proposes an alleviation procedure which consists of timely mandate procedure, hole detection algorithm, and sensitive guard procedure to detect the maliciously behaving nodes. It has been observed that the proposed procedure is cost-effective and ensures QoS guarantee by assuring resource availability thus making the MANET appropriate for Internet of Things.

  11. Repeated Measurement of the Components of Attention with Young Children Using the Attention Network Test: Stability, Isolability, Robustness, and Reliability

    ERIC Educational Resources Information Center

    Ishigami, Yoko; Klein, Raymond M.

    2015-01-01

    The current study examined the robustness, stability, reliability, and isolability of the attention network scores (alerting, orienting, and executive control) when young children experienced repeated administrations of the child version of the Attention Network Test (ANT; Rueda et al., 2004). Ten test sessions of the ANT were administered to 12…

  12. Internet and Intranet Use with a PC: Effects of Adapter Cards, Windows Versions and TCP/IP Software on Networking Performance.

    ERIC Educational Resources Information Center

    Nieuwenhuysen, Paul

    1997-01-01

    Explores data transfer speeds obtained with various combinations of hardware and software components through a study of access to the Internet from a notebook computer connected to a local area network based on Ethernet and TCP/IP (transmission control protocol/Internet protocol) network protocols. Upgrading is recommended for higher transfer…

  13. A Digital Hydrologic Network Supporting NAWQA MRB SPARROW Modeling--MRB_E2RF1WS

    USGS Publications Warehouse

    Brakebill, J.W.; Terziotti, S.E.

    2011-01-01

    A digital hydrologic network was developed to support SPAtially Referenced Regression on Watershed attributes (SPARROW) models within selected regions of the United States. These regions correspond with the U.S. Geological Survey's National Water Quality Assessment (NAWQA) Program Major River Basin (MRB) study units 2, 3, 4, 5, and 7 (Preston and others, 2009). MRB2, covers the South Atlantic-Gulf and Tennessee River basins. MRB3, covers the Great Lakes, Ohio, Upper Mississippi, and Souris-Red-Rainy River basins. MRB4, covers the Missouri River basins. MRB5, covers the Lower Mississippi, Arkansas-White-Red, and Texas-Gulf River basins. MRB7, covers the Pacific Northwest River basins. The digital hydrologic network described here represents surface-water pathways (MRB_E2RF1) and associated catchments (MRB_E2RF1WS). It serves as the fundamental framework to spatially reference and summarize explanatory information supporting nutrient SPARROW models (Brakebill and others, 2011; Wieczorek and LaMotte, 2011). The principal geospatial dataset used to support this regional effort was based on an enhanced version of a 1:500,000 scale digital stream-reach network (ERF1_2) (Nolan et al., 2002). Enhancements included associating over 3,500 water-quality monitoring sites to the reach network, improving physical locations of stream reaches at or near monitoring locations, and generating drainage catchments based on 100m elevation data. A unique number (MRB_ID) identifies each reach as a single unit. This unique number is also shared by the catchment area drained by the reach, thus spatially linking the hydrologically connected streams and the respective drainage area characteristics. In addition, other relevant physical, environmental, and monitoring information can be associated to the common network and accessed using the unique identification number.

  14. A Digital Hydrologic Network Supporting NAWQA MRB SPARROW Modeling--MRB_E2RF1

    USGS Publications Warehouse

    Brakebill, J.W.; Terziotti, S.E.

    2011-01-01

    A digital hydrologic network was developed to support SPAtially Referenced Regression on Watershed attributes (SPARROW) models within selected regions of the United States. These regions correspond with the U.S. Geological Survey's National Water Quality Assessment (NAWQA) Program Major River Basin (MRB) study units 2, 3, 4, 5, and 7 (Preston and others, 2009). MRB2, covers the South Atlantic-Gulf and Tennessee River basins. MRB3, covers the Great Lakes, Ohio, Upper Mississippi, and Souris-Red-Rainy River basins. MRB4, covers the Missouri River basins. MRB5, covers the Lower Mississippi, Arkansas-White-Red, and Texas-Gulf River basins. MRB7, covers the Pacific Northwest River basins. The digital hydrologic network described here represents surface-water pathways (MRB_E2RF1) and associated catchments (MRB_E2RF1WS). It serves as the fundamental framework to spatially reference and summarize explanatory information supporting nutrient SPARROW models (Brakebill and others, 2011; Wieczorek and LaMotte, 2011). The principal geospatial dataset used to support this regional effort was based on an enhanced version of a 1:500,000 scale digital stream-reach network (ERF1_2) (Nolan et al., 2002). Enhancements included associating over 3,500 water-quality monitoring sites to the reach network, improving physical locations of stream reaches at or near monitoring locations, and generating drainage catchments based on 100m elevation data. A unique number (MRB_ID) identifies each reach as a single unit. This unique number is also shared by the catchment area drained by the reach, thus spatially linking the hydrologically connected streams and the respective drainage area characteristics. In addition, other relevant physical, environmental, and monitoring information can be associated to the common network and accessed using the unique identification number.

  15. A statistical learning framework for groundwater nitrate models of the Central Valley, California, USA

    USGS Publications Warehouse

    Nolan, Bernard T.; Fienen, Michael N.; Lorenz, David L.

    2015-01-01

    We used a statistical learning framework to evaluate the ability of three machine-learning methods to predict nitrate concentration in shallow groundwater of the Central Valley, California: boosted regression trees (BRT), artificial neural networks (ANN), and Bayesian networks (BN). Machine learning methods can learn complex patterns in the data but because of overfitting may not generalize well to new data. The statistical learning framework involves cross-validation (CV) training and testing data and a separate hold-out data set for model evaluation, with the goal of optimizing predictive performance by controlling for model overfit. The order of prediction performance according to both CV testing R2 and that for the hold-out data set was BRT > BN > ANN. For each method we identified two models based on CV testing results: that with maximum testing R2 and a version with R2 within one standard error of the maximum (the 1SE model). The former yielded CV training R2 values of 0.94–1.0. Cross-validation testing R2 values indicate predictive performance, and these were 0.22–0.39 for the maximum R2 models and 0.19–0.36 for the 1SE models. Evaluation with hold-out data suggested that the 1SE BRT and ANN models predicted better for an independent data set compared with the maximum R2 versions, which is relevant to extrapolation by mapping. Scatterplots of predicted vs. observed hold-out data obtained for final models helped identify prediction bias, which was fairly pronounced for ANN and BN. Lastly, the models were compared with multiple linear regression (MLR) and a previous random forest regression (RFR) model. Whereas BRT results were comparable to RFR, MLR had low hold-out R2 (0.07) and explained less than half the variation in the training data. Spatial patterns of predictions by the final, 1SE BRT model agreed reasonably well with previously observed patterns of nitrate occurrence in groundwater of the Central Valley.
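
    The "1SE model" selection used here can be illustrated as follows: among candidate model settings, keep the simplest one whose mean cross-validated R2 lies within one standard error of the maximum. The candidate grid and fold-wise R2 values below are invented for illustration and are not the study's results.

        import statistics

        def one_se_choice(cv_results):
            """cv_results: list of (complexity, [R2 per CV fold]); returns the max-R2 and 1SE models."""
            summaries = []
            for complexity, fold_r2 in cv_results:
                mean_r2 = statistics.mean(fold_r2)
                se = statistics.stdev(fold_r2) / len(fold_r2) ** 0.5
                summaries.append((complexity, mean_r2, se))
            best = max(summaries, key=lambda s: s[1])
            cutoff = best[1] - best[2]                     # within one SE of the maximum
            one_se = min((s for s in summaries if s[1] >= cutoff), key=lambda s: s[0])
            return best, one_se

        if __name__ == "__main__":
            # (number of trees, R2 across 10 CV folds) -- invented numbers
            grid = [
                (100,  [0.18, 0.22, 0.20, 0.25, 0.19, 0.21, 0.23, 0.17, 0.24, 0.20]),
                (500,  [0.31, 0.35, 0.29, 0.33, 0.32, 0.30, 0.36, 0.28, 0.33, 0.31]),
                (2000, [0.31, 0.36, 0.29, 0.34, 0.33, 0.30, 0.37, 0.28, 0.33, 0.31]),
            ]
            best, one_se = one_se_choice(grid)
            print("max-R2 model:", best)
            print("1SE model:   ", one_se)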

  16. NCCN Guidelines Insights: Breast Cancer, Version 1.2017.

    PubMed

    Gradishar, William J; Anderson, Benjamin O; Balassanian, Ron; Blair, Sarah L; Burstein, Harold J; Cyr, Amy; Elias, Anthony D; Farrar, William B; Forero, Andres; Giordano, Sharon Hermes; Goetz, Matthew P; Goldstein, Lori J; Isakoff, Steven J; Lyons, Janice; Marcom, P Kelly; Mayer, Ingrid A; McCormick, Beryl; Moran, Meena S; O'Regan, Ruth M; Patel, Sameer A; Pierce, Lori J; Reed, Elizabeth C; Salerno, Kilian E; Schwartzberg, Lee S; Sitapati, Amy; Smith, Karen Lisa; Smith, Mary Lou; Soliman, Hatem; Somlo, George; Telli, Melinda; Ward, John H; Shead, Dorothy A; Kumar, Rashmi

    2017-04-01

    These NCCN Guidelines Insights highlight the important updates/changes to the surgical axillary staging, radiation therapy, and systemic therapy recommendations for hormone receptor-positive disease in the 1.2017 version of the NCCN Guidelines for Breast Cancer. This report summarizes these updates and discusses the rationale behind them. Updates on new drug approvals, not available at press time, can be found in the most recent version of these guidelines at NCCN.org. Copyright © 2017 by the National Comprehensive Cancer Network.

  17. Aerosol Optical Depth Changes in Version 4 CALIPSO Level 2 Product

    NASA Technical Reports Server (NTRS)

    Kim, Man-Hae; Omar, Ali H.; Tackett, Jason L.; Vaughan, Mark A.; Winker, David M.; Trepte, Charles R.; Hu, Yongxiang; Liu, Zhaoyan

    2017-01-01

    The Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) version 4.10 (V4) products were released in November 2016 with substantial enhancements. Several factors have improved the V4 CALIOP level 2 aerosol optical depth (AOD) relative to V3 (version 3). The change in AOD from V3 to V4 is investigated by separating these factors. CALIOP AOD was compared with the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Aerosol Robotic Network (AERONET) for both V3 and V4.

  18. A model of the regulatory network involved in the control of the cell cycle and cell differentiation in the Caenorhabditis elegans vulva.

    PubMed

    Weinstein, Nathan; Ortiz-Gutiérrez, Elizabeth; Muñoz, Stalin; Rosenblueth, David A; Álvarez-Buylla, Elena R; Mendoza, Luis

    2015-03-13

    There are recent experimental reports on the cross-regulation between molecules involved in the control of the cell cycle and the differentiation of the vulval precursor cells (VPCs) of Caenorhabditis elegans. Such discoveries provide novel clues on how the molecular mechanisms involved in the cell cycle and cell differentiation processes are coordinated during vulval development. Dynamic computational models are helpful to understand the integrated regulatory mechanisms affecting these cellular processes. Here we propose a simplified model of the regulatory network that includes sufficient molecules involved in the control of both the cell cycle and cell differentiation in the C. elegans vulva to recover their dynamic behavior. We first infer both the topology and the update rules of the cell cycle module from an expected time series. Next, we use a symbolic algorithmic approach to find which interactions must be included in the regulatory network. Finally, we use a continuous-time version of the update rules for the cell cycle module to validate the cyclic behavior of the network, as well as to rule out the presence of potential artifacts due to the synchronous updating of the discrete model. We analyze the dynamical behavior of the model for the wild type and several mutants, finding that most of the results are consistent with published experimental results. Our model shows that the regulation of Notch signaling by the cell cycle preserves the potential of the VPCs and the three vulval fates to differentiate and de-differentiate, allowing them to remain completely responsive to the concentration of LIN-3 and lateral signal in the extracellular microenvironment.
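
    The discrete part of such a model is a Boolean regulatory network under synchronous updating. The tiny three-node example below, with invented update rules, only illustrates the mechanics of iterating the rules and finding a cyclic attractor; it is not the published vulva network.

        from itertools import product

        # Invented three-node toy rules (not the published C. elegans vulva network)
        RULES = {
            "CycE":   lambda s: not s["LIN-35"],
            "LIN-35": lambda s: not s["CycE"] and not s["Notch"],
            "Notch":  lambda s: s["CycE"],
        }

        def step(state):
            """Synchronous update: every node applies its rule to the current state."""
            return {node: bool(rule(state)) for node, rule in RULES.items()}

        def attractor(state, max_steps=64):
            """Iterate until a previously seen state recurs; return the cycle."""
            seen = []
            while state not in seen and len(seen) < max_steps:
                seen.append(state)
                state = step(state)
            return seen[seen.index(state):]

        if __name__ == "__main__":
            for values in product([False, True], repeat=3):
                start = dict(zip(RULES, values))
                print(values, "-> attractor of length", len(attractor(start)))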

  19. Robust Resilience of the Frontotemporal Syntax System to Aging

    PubMed Central

    Samu, Dávid; Davis, Simon W.; Geerligs, Linda; Mustafa, Abdur; Tyler, Lorraine K.

    2016-01-01

    Brain function is thought to become less specialized with age. However, this view is largely based on findings of increased activation during tasks that fail to separate task-related processes (e.g., attention, decision making) from the cognitive process under examination. Here we take a systems-level approach to separate processes specific to language comprehension from those related to general task demands and to examine age differences in functional connectivity both within and between those systems. A large population-based sample (N = 111; 22–87 years) from the Cambridge Centre for Aging and Neuroscience (Cam-CAN) was scanned using functional MRI during two versions of an experiment: a natural listening version in which participants simply listened to spoken sentences and an explicit task version in which they rated the acceptability of the same sentences. Independent components analysis across the combined data from both versions showed that although task-free language comprehension activates only the auditory and frontotemporal (FTN) syntax networks, performing a simple task with the same sentences recruits several additional networks. Remarkably, functionality of the critical FTN is maintained across age groups, showing no difference in within-network connectivity or responsivity to syntactic processing demands despite gray matter loss and reduced connectivity to task-related networks. We found no evidence for reduced specialization or compensation with age. Overt task performance was maintained across the lifespan and performance in older, but not younger, adults related to crystallized knowledge, suggesting that decreased between-network connectivity may be compensated for by older adults' richer knowledge base. SIGNIFICANCE STATEMENT Understanding spoken language requires the rapid integration of information at many different levels of analysis. Given the complexity and speed of this process, it is remarkably well preserved with age. Although previous work claims that this preserved functionality is due to compensatory activation of regions outside the frontotemporal language network, we use a novel systems-level approach to show that these “compensatory” activations simply reflect age differences in response to experimental task demands. Natural, task-free language comprehension solely recruits auditory and frontotemporal networks, the latter of which is similarly responsive to language-processing demands across the lifespan. These findings challenge the conventional approach to neurocognitive aging by showing that the neural underpinnings of a given cognitive function depend on how you test it. PMID:27170120

  20. Network capability estimation. Vela network evaluation and automatic processing research. Technical report. [NETWORTH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snell, N.S.

    1976-09-24

    NETWORTH is a computer program which calculates the detection and location capability of seismic networks. A modified version of NETWORTH has been developed. This program has been used to evaluate the effect of station 'downtime', the signal amplitude variance, and the station detection threshold upon network detection capability. In this version all parameters may be changed separately for individual stations. The capability of using signal amplitude corrections has been added. The function of amplitude corrections is to remove possible bias in the magnitude estimate due to inhomogeneous signal attenuation. These corrections may be applied to individual stations, individual epicenters, or individual station/epicenter combinations. An option has been added to calculate the effect of station 'downtime' upon network capability. This study indicates that, if capability loss due to detection errors can be minimized, then station detection threshold and station reliability will be the fundamental limits to network performance. An evaluation of a baseline network of thirteen stations has been performed. These stations are as follows: Alaskan Long Period Array, (ALPA); Ankara, (ANK); Chiang Mai, (CHG); Korean Seismic Research Station, (KSRS); Large Aperture Seismic Array, (LASA); Mashhad, (MSH); Mundaring, (MUN); Norwegian Seismic Array, (NORSAR); New Delhi, (NWDEL); Red Knife, Ontario, (RK-ON); Shillong, (SHL); Taipei, (TAP); and White Horse, Yukon, (WH-YK).

  1. Ada technology support for NASA-GSFC

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Utilization of the Ada programming language and environments to perform directorate functions was reviewed. The Mission and Data Operations Directorate Network (MNET) conversion effort was chosen as the first task for evaluation and assistance. The MNET project required the rewriting of the existing Network Control Program (NCP) in the Ada programming language. The DEC Ada compiler running on the VAX under VMS was used for the initial development efforts. Stress tests were performed on the newly delivered version of the DEC Ada compiler. The new Alsys Ada compiler was purchased for the IBM PC AT. A prevalidated version of the compiler was obtained. The compiler was then validated.

  2. Flight Test of an Intelligent Flight-Control System

    NASA Technical Reports Server (NTRS)

    Davidson, Ron; Bosworth, John T.; Jacobson, Steven R.; Thomson, Michael P.; Jorgensen, Charles C.

    2003-01-01

    The F-15 Advanced Controls Technology for Integrated Vehicles (ACTIVE) airplane (see figure) was the test bed for a flight test of an intelligent flight control system (IFCS). This IFCS utilizes a neural network to determine critical stability and control derivatives for a control law, the real-time gains of which are computed by an algorithm that solves the Riccati equation. These derivatives are also used to identify the parameters of a dynamic model of the airplane. The model is used in a model-following portion of the control law, in order to provide specific vehicle handling characteristics. The flight test of the IFCS marks the initiation of the Intelligent Flight Control System Advanced Concept Program (IFCS ACP), which is a collaboration between NASA and Boeing Phantom Works. The goals of the IFCS ACP are to (1) develop the concept of a flight-control system that uses neural-network technology to identify aircraft characteristics to provide optimal aircraft performance, (2) develop a self-training neural network to update estimates of aircraft properties in flight, and (3) demonstrate the aforementioned concepts on the F-15 ACTIVE airplane in flight. The activities of the initial IFCS ACP were divided into three Phases, each devoted to the attainment of a different objective. The objective of Phase I was to develop a pre-trained neural network to store and recall the wind-tunnel-based stability and control derivatives of the vehicle. The objective of Phase II was to develop a neural network that can learn how to adjust the stability and control derivatives to account for failures or modeling deficiencies. The objective of Phase III was to develop a flight control system that uses the neural network outputs as a basis for controlling the aircraft. The flight test of the IFCS was performed in stages. In the first stage, the Phase I version of the pre-trained neural network was flown in a passive mode. The neural network software was running using flight data inputs with the outputs provided to instrumentation only. The IFCS was not used to control the airplane. In another stage of the flight test, the Phase I pre-trained neural network was integrated into a Phase III version of the flight control system. The Phase I pretrained neural network provided realtime stability and control derivatives to a Phase III controller that was based on a stochastic optimal feedforward and feedback technique (SOFFT). This combined Phase I/III system was operated together with the research flight-control system (RFCS) of the F-15 ACTIVE during the flight test. The RFCS enables the pilot to switch quickly from the experimental- research flight mode back to the safe conventional mode. These initial IFCS ACP flight tests were completed in April 1999. The Phase I/III flight test milestone was to demonstrate, across a range of subsonic and supersonic flight conditions, that the pre-trained neural network could be used to supply real-time aerodynamic stability and control derivatives to the closed-loop optimal SOFFT flight controller. Additional objectives attained in the flight test included (1) flight qualification of a neural-network-based control system; (2) the use of a combined neural-network/closed-loop optimal flight-control system to obtain level-one handling qualities; and (3) demonstration, through variation of control gains, that different handling qualities can be achieved by setting new target parameters. 
In addition, data for the Phase-II (on-line-learning) neural network were collected during stacked-frequency-sweep excitation for post-flight analysis. Initial analysis of these data showed the potential for future flight tests that will incorporate the real-time identification and on-line learning aspects of the IFCS.
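
    The gain computation described above (solving the Riccati equation for the control law) can be illustrated with a small, self-contained sketch. The linearized short-period matrices, weights and the LQR-style formulation below are illustrative assumptions, not the F-15 ACTIVE flight code:

        # Sketch: feedback gains from the continuous algebraic Riccati equation,
        # in the spirit of the gain-computation step described above.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Hypothetical linearized short-period dynamics: x = [alpha, q], u = [elevator]
        A = np.array([[-1.0, 1.0],
                      [-4.0, -1.5]])
        B = np.array([[0.0],
                      [-6.0]])
        Q = np.diag([10.0, 1.0])   # state weighting
        R = np.array([[1.0]])      # control weighting

        P = solve_continuous_are(A, B, Q, R)   # solves A'P + PA - P B R^(-1) B'P + Q = 0
        K = np.linalg.solve(R, B.T @ P)        # feedback gain, u = -K x
        print("gain matrix K:", K)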

  3. Coexistence: Threat to the Performance of Heterogeneous Network

    NASA Astrophysics Data System (ADS)

    Sharma, Neetu; Kaur, Amanpreet

    2010-11-01

    Wireless technology is gaining broad acceptance as users opt for the freedom that only a wireless network can provide. Well-accepted wireless communication technologies generally operate in frequency bands that are shared among several users, often using different RF schemes. This is true in particular for WiFi, Bluetooth, and more recently ZigBee. All three operate in the unlicensed 2.4 GHz band, also known as the ISM band, which has been key to the development of a competitive and innovative market for wireless embedded devices. But, as with any resource held in common, it is crucial that these technologies coexist peacefully to allow each user of the band to fulfill its communication goals. This has led to an increase in wireless devices intended for use in IEEE 802.11 wireless local area networks (WLANs) and wireless personal area networks (WPANs), both of which support operation in the crowded 2.4-GHz industrial, scientific and medical (ISM) band. Despite efforts made by standardization bodies to ensure smooth coexistence, technologies transmitting at very different power levels may still interfere with each other. In particular, it has been pointed out that ZigBee could potentially experience interference from WiFi traffic given that, while both protocols can transmit on the same channel, WiFi transmissions usually occur at a much higher power level. In this work, we considered a heterogeneous network and analyzed the impact of coexistence between IEEE 802.15.4 and IEEE 802.11b. To evaluate the performance of this network, a simulation study was conducted in the QualNet network simulator, version 5.0. The model is analyzed for different placement models (topologies): Random, Grid and Uniform. Performance is analyzed on the basis of characteristics such as throughput, average jitter and average end-to-end delay. The impact of varying the antenna gain and shadowing model for this heterogeneous network is also considered in the analysis.
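
    As a rough illustration of how the reported metrics can be computed from a packet trace (the trace format and values below are hypothetical, not QualNet output):

        # Throughput, average end-to-end delay and average jitter from a list of
        # delivered packets, each recorded as (send_time_s, recv_time_s, size_bytes).
        def link_metrics(trace, duration_s):
            delays = [rx - tx for tx, rx, _ in trace]
            avg_delay = sum(delays) / len(delays)
            # jitter taken here as the mean absolute difference of consecutive delays
            jitter = sum(abs(b - a) for a, b in zip(delays, delays[1:])) / max(len(delays) - 1, 1)
            throughput_bps = 8 * sum(size for _, _, size in trace) / duration_s
            return throughput_bps, avg_delay, jitter

        trace = [(0.00, 0.012, 70), (0.10, 0.115, 70), (0.20, 0.219, 70)]
        print(link_metrics(trace, duration_s=1.0))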

  4. Construction and modelling of an inducible positive feedback loop stably integrated in a mammalian cell-line.

    PubMed

    Siciliano, Velia; Menolascina, Filippo; Marucci, Lucia; Fracassi, Chiara; Garzilli, Immacolata; Moretti, Maria Nicoletta; di Bernardo, Diego

    2011-06-01

    Understanding the relationship between topology and dynamics of transcriptional regulatory networks in mammalian cells is essential to elucidate the biology of complex regulatory and signaling pathways. Here, we characterised, via a synthetic biology approach, a transcriptional positive feedback loop (PFL) by generating a clonal population of mammalian cells (CHO) carrying a stable integration of the construct. The PFL network consists of the Tetracycline-controlled transactivator (tTA), whose expression is regulated by a tTA responsive promoter (CMV-TET), thus giving rise to a positive feedback. The same CMV-TET promoter drives also the expression of a destabilised yellow fluorescent protein (d2EYFP), thus the dynamic behaviour can be followed by time-lapse microscopy. The PFL network was compared to an engineered version of the network lacking the positive feedback loop (NOPFL), by expressing the tTA mRNA from a constitutive promoter. Doxycycline was used to repress tTA activation (switch off), and the resulting changes in fluorescence intensity for both the PFL and NOPFL networks were followed for up to 43 h. We observed a striking difference in the dynamics of the PFL and NOPFL networks. Using non-linear dynamical models, able to recapitulate experimental observations, we demonstrated a link between network topology and network dynamics. Namely, transcriptional positive autoregulation can significantly slow down the "switch off" times, as compared to the non-autoregulated system. Doxycycline concentration can modulate the response times of the PFL, whereas the NOPFL always switches off with the same dynamics. Moreover, the PFL can exhibit bistability for a range of Doxycycline concentrations. Since the PFL motif is often found in naturally occurring transcriptional and signaling pathways, we believe our work can be instrumental to characterise their behaviour.
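
    A minimal sketch of the kind of non-linear ODE comparison described above is given below; the Hill-type promoter function, the Doxycycline term and all parameter values are invented for illustration and are not the fitted model of the paper:

        # Toy switch-off model: the CMV-TET promoter drives the reporter d2EYFP, and
        # either tTA itself (PFL) or a constitutive source supplies tTA (NOPFL).
        # Doxycycline reduces the pool of active tTA.
        import numpy as np
        from scipy.integrate import solve_ivp

        def model(t, y, autoreg, dox):
            tta, yfp = y
            active = tta / (1.0 + dox)                 # Dox inactivates tTA
            promoter = active**2 / (0.5 + active**2)   # CMV-TET activation (Hill, n = 2)
            prod_tta = promoter if autoreg else 0.6    # PFL vs constitutive expression
            return [prod_tta - 0.3 * tta,              # tTA synthesis minus degradation
                    promoter - 0.8 * yfp]              # d2EYFP synthesis minus fast decay

        for autoreg in (True, False):
            sol = solve_ivp(model, (0.0, 43.0), [2.0, 1.0], args=(autoreg, 10.0),
                            t_eval=np.linspace(0.0, 43.0, 5))
            print("PFL  " if autoreg else "NOPFL", np.round(sol.y[1], 3))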

  5. Simulating the 2012 High Plains drought using three single column versions (SCM) of BUGS5

    NASA Astrophysics Data System (ADS)

    Medina, I. D.; Denning, S.

    2013-12-01

    The impact of changes in the frequency and severity of drought on fresh water sustainability is a great concern for many regions of the world. One such location is the High Plains, where the local economy is primarily driven by fresh water withdrawals from the Ogallala Aquifer, which accounts for approximately 30% of total irrigation withdrawals from all U.S. aquifers combined. Modeling studies that focus on the feedback mechanisms that control the climate and eco-hydrology during times of drought are limited, and have used conventional General Circulation Models (GCMs) with grid length scales ranging from one hundred to several hundred kilometers. Additionally, these models utilize crude statistical parameterizations of cloud processes for estimating sub-grid fluxes of heat and moisture and have a poor representation of land surface heterogeneity. For this research, we will focus on the 2012 High Plains drought and will perform numerical simulations using three single column versions (SCM) of BUGS5 (the Colorado State University (CSU) GCM coupled to the Simple Biosphere Model (SiB3)) at multiple sites overlying the Ogallala Aquifer for the 2011-2012 period. In the first version of BUGS5, the model will be used in its standard bulk setting (a single atmospheric column coupled to a single instance of SiB3); in the second, the Super-Parameterized Community Atmospheric Model (SP-CAM), a cloud-resolving model (CRM) consisting of 64 atmospheric columns, will replace the single CSU GCM atmospheric parameterization and will be coupled to a single instance of SiB3; and in the third version of BUGS5, an instance of SiB3 will be coupled to each CRM column of the SP-CAM (64 CRM columns coupled to 64 instances of SiB3). To assess the physical realism of the land-atmosphere feedbacks simulated at each site by all versions of BUGS5, differences in simulated energy and moisture fluxes will be computed between the 2011 and 2012 periods and will be compared to differences calculated using observational data from the AmeriFlux tower network for the same period. These results will give some insight into the land-atmosphere feedbacks GCMs may produce when atmospheric and land surface heterogeneity are included within a single framework. Furthermore, this research will provide a better understanding of model deficiencies in reproducing and predicting droughts in the future, which is essential to the economic, ecologic and social well-being of the High Plains.

  6. O*NET[TM] Career Exploration Tools. Version 3.0.

    ERIC Educational Resources Information Center

    Employment and Training Administration (DOL), Washington, DC.

    Developed by the U.S. Department of Labor's Occupational Information Network (O*NET) team, the O*NET[TM] Career Exploration Tools (Version 3.0) consist of three main parts: (1) the Interest Profiler; (2) the Work Importance Locator; and (3) the O*NET[TM] Occupations Combined List. The Interest Profiler is a self-assessment career exploration tool…

  7. Introducing a Compendium of Psychological Literacy Case Studies: Reflections on Psychological Literacy in Practice

    ERIC Educational Resources Information Center

    Taylor, Jacqui; Hulme, Julie

    2015-01-01

    This article introduces a set of case studies that were submitted to us following requests in psychology conferences and publications, and through professional networks. The full versions of the case studies make up the first version of a Psychological Literacy Compendium of Practice that is available online at www.psychologicalliteracy.com. The…

  8. Master Teachers as Professional Developers: Managing Conflicting Versions of Professionalism

    ERIC Educational Resources Information Center

    Montecinos, Carmen; Pino, Mauricio; Campos-Martinez, Javier; Domínguez, Rosario; Carreño, Claudia

    2014-01-01

    As education's main workforce, teachers have been the target of policies designed to shape and affirm new versions of professionalism. This paper examines this issue as it is exemplified by the Teachers of Teachers Network (TTN), a program developed by Chile's Ministry of Education. As a program designed to identify and reward high quality…

  9. Dynamical System Approach for Edge Detection Using Coupled FitzHugh-Nagumo Neurons.

    PubMed

    Li, Shaobai; Dasmahapatra, Srinandan; Maharatna, Koushik

    2015-12-01

    The prospect of emulating the impressive computational capabilities of biological systems has led to considerable interest in the design of analog circuits that are potentially implementable in very large scale integration CMOS technology and are guided by biologically motivated models. For example, simple image processing tasks, such as the detection of edges in binary and grayscale images, have been performed by networks of FitzHugh-Nagumo-type neurons using the reaction-diffusion models. However, in these studies, the one-to-one mapping of image pixels to component neurons makes the size of the network a critical factor in any such implementation. In this paper, we develop a simplified version of the employed reaction-diffusion model in three steps. In the first step, we perform a detailed study to locate this threshold using continuous Lyapunov exponents from dynamical system theory. Furthermore, we render the diffusion in the system to be anisotropic, with the degree of anisotropy being set by the gradients of grayscale values in each image. The final step involves a simplification of the model that is achieved by eliminating the terms that couple the membrane potentials of adjacent neurons. We apply our technique to detect edges in data sets of artificially generated and real images, and we demonstrate that the performance is as good if not better than that of the previous methods without increasing the size of the network.
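
    A toy grid of diffusively coupled FitzHugh-Nagumo neurons in the spirit of the reaction-diffusion edge detectors discussed above is sketched below (one neuron per pixel, image intensity used as the input current); the isotropic coupling, forward-Euler integration and thresholds are simplifications chosen for illustration, not the method of the paper:

        # Each pixel is an FHN neuron; after integration, pixels whose potential differs
        # strongly from their neighbours are flagged as edges.
        import numpy as np

        def fhn_edges(img, steps=2000, dt=0.01, a=0.7, b=0.8, eps=0.08, D=0.5):
            v = img.astype(float).copy()          # membrane potentials seeded by the image
            w = np.zeros_like(v)
            for _ in range(steps):
                # 4-neighbour Laplacian (zero-flux boundaries via edge padding)
                p = np.pad(v, 1, mode="edge")
                lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * v
                dv = v - v**3 / 3 - w + img + D * lap
                dw = eps * (v + a - b * w)
                v, w = v + dt * dv, w + dt * dw
            p = np.pad(v, 1, mode="edge")
            grad = np.abs(p[1:-1, 2:] - p[1:-1, :-2]) + np.abs(p[2:, 1:-1] - p[:-2, 1:-1])
            return grad > grad.mean() + grad.std()

        img = np.zeros((16, 16)); img[:, 8:] = 1.0   # binary step image
        print(fhn_edges(img).astype(int))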

  10. Performance of the High Sensitivity Open Source Multi-GNSS Assisted GNSS Reference Server.

    NASA Astrophysics Data System (ADS)

    Sarwar, Ali; Rizos, Chris; Glennon, Eamonn

    2015-06-01

    The Open Source GNSS Reference Server (OSGRS) exploits the GNSS Reference Interface Protocol (GRIP) to provide assistance data to GPS receivers. Assistance can be in terms of signal acquisition and in the processing of the measurement data. The data transfer protocol is based on an Extensible Mark-up Language (XML) schema. The first version of the OSGRS required a direct hardware connection to a GPS device to acquire the data necessary to generate the appropriate assistance. Scenarios of interest for OSGRS users are weak signal strength indoors, obstructed outdoors or heavy multipath environments. This paper describes an improved version of the OSGRS that provides alternative assistance support from a number of Global Navigation Satellite Systems (GNSS). The underlying protocol to transfer GNSS assistance data from global casters is the Networked Transport of RTCM (Radio Technical Commission for Maritime Services) over Internet Protocol (NTRIP), and/or the RINEX (Receiver Independent Exchange) format. This expands the assistance and support model of the OSGRS to globally available GNSS data servers connected via internet casters. A variety of formats and versions of RINEX and RTCM streams become available, which strengthens the assistance provisioning capability of the OSGRS platform. The prime motivation for this work was to enhance the system architecture of the OSGRS to take advantage of globally available GNSS data sources. Open source software architectures and assistance models provide acquisition and data processing assistance for GNSS receivers operating in weak signal environments. This paper describes test scenarios to benchmark OSGRSv2 performance against other Assisted-GNSS solutions. Benchmarking devices include the SPOT satellite messenger, MS-Based & MS-Assisted GNSS, HSGNSS (SiRFstar-III) and Wireless Sensor Networks Assisted-GNSS. Benchmarked parameters include the number of tracked satellites, the Time To First Fix (TTFF), navigation availability and accuracy. Three different configurations of Multi-GNSS assistance servers were used, namely Cloud-Client-Server, Demilitarized Zone (DMZ) Client-Server and PC-Client-Server, distinguished by the connectivity location of client and server. The impact on performance of server and/or client initiation, hardware capability, network latency, processing delay and computation time, together with storage, scalability, processing and load-sharing capabilities, was analysed. The performance of the OSGRS is compared against commercial GNSS, Assisted-GNSS and WSN-enabled GNSS devices. The OSGRS system demonstrated lower TTFF and higher availability.

  11. Integrated Farm System Model Version 4.1 and Dairy Gas Emissions Model Version 3.1 software release and distribution

    USDA-ARS?s Scientific Manuscript database

    Animal facilities are significant contributors of gaseous emissions including ammonia (NH3) and nitrous oxide (N2O). Previous versions of the Integrated Farm System Model (IFSM version 4.0) and Dairy Gas Emissions Model (DairyGEM version 3.0), two whole-farm simulation models developed by USDA-ARS, ...

  12. A global gridded dataset of daily precipitation going back to 1950, ideal for analysing precipitation extremes

    NASA Astrophysics Data System (ADS)

    Contractor, S.; Donat, M.; Alexander, L. V.

    2017-12-01

    Reliable observations of precipitation are necessary to determine past changes in precipitation and validate models, allowing for reliable future projections. Existing gauge-based gridded datasets of daily precipitation and satellite-based observations contain artefacts and have a short length of record, making them unsuitable for analysing precipitation extremes. The main limiting factor for gauge-based datasets is the availability of a dense and reliable station network. Currently, there are two major data archives of global in situ daily rainfall data: the first is the Global Historical Climatology Network (GHCN-Daily), hosted by the National Oceanic and Atmospheric Administration (NOAA), and the other is held by the Global Precipitation Climatology Centre (GPCC), part of the Deutscher Wetterdienst (DWD). We combine the two data archives and use automated quality control techniques to create a reliable long-term network of raw station data, which we then interpolate using block kriging to create a global gridded dataset of daily precipitation going back to 1950. We compare our interpolated dataset with existing global gridded data of daily precipitation: NOAA Climate Prediction Centre (CPC) Global V1.0 and GPCC Full Data Daily Version 1.0, as well as various regional datasets. We find that our raw station density is much higher than that of other datasets. To avoid artefacts due to station network variability, we provide multiple versions of our dataset based on various completeness criteria, as well as the standard deviation, kriging error and number of stations for each grid cell and timestep, to encourage responsible use of our dataset. Despite our efforts to increase the raw data density, the in situ station network remains sparse in India after the 1960s and in Africa throughout the timespan of the dataset. Our dataset would allow for more reliable global analyses of rainfall, including its extremes, and pave the way for better global precipitation observations with lower and more transparent uncertainties.
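
    For illustration, the kriging step amounts to solving a small linear system per grid cell; the sketch below performs ordinary kriging to a single point with a made-up exponential variogram (block kriging, as used for the dataset, averages such estimates over each grid cell):

        # Ordinary kriging of daily gauge totals onto one target location, reporting
        # both the estimate and the kriging variance (the error measure provided per
        # grid cell in the dataset). Variogram parameters and data are illustrative.
        import numpy as np

        def variogram(h, sill=1.0, rng=50.0):
            return sill * (1.0 - np.exp(-h / rng))      # exponential model, gamma(0) = 0

        def ordinary_krige(xy, z, target):
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            n = len(z)
            A = np.ones((n + 1, n + 1)); A[:n, :n] = variogram(d); A[n, n] = 0.0
            b = np.ones(n + 1); b[:n] = variogram(np.linalg.norm(xy - target, axis=1))
            w = np.linalg.solve(A, b)                   # weights plus Lagrange multiplier
            return w[:n] @ z, w @ b                     # estimate, kriging variance

        xy = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 40.0], [45.0, 35.0]])  # station coords (km)
        z = np.array([2.0, 0.0, 5.5, 1.2])                                    # daily totals (mm)
        print(ordinary_krige(xy, z, target=np.array([20.0, 20.0])))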

  13. Overview of MPLNET Version 3 Cloud Detection

    NASA Technical Reports Server (NTRS)

    Lewis, Jasper R.; Campbell, James; Welton, Ellsworth J.; Stewart, Sebastian A.; Haftings, Phillip

    2016-01-01

    The National Aeronautics and Space Administration Micro Pulse Lidar Network, version 3, cloud detection algorithm is described and differences relative to the previous version are highlighted. Clouds are identified from normalized level 1 signal profiles using two complementary methods. The first method considers vertical signal derivatives for detecting low-level clouds. The second method, which detects high-level clouds like cirrus, is based on signal uncertainties necessitated by the relatively low signal-to-noise ratio exhibited in the upper troposphere by eye-safe network instruments, especially during daytime. Furthermore, a multitemporal averaging scheme is used to improve cloud detection under conditions of a weak signal-to-noise ratio. Diurnal and seasonal cycles of cloud occurrence frequency based on one year of measurements at the Goddard Space Flight Center (Greenbelt, Maryland) site are compared for the new and previous versions. The largest differences, and perceived improvement, in detection occur for high clouds (above 5 km MSL), which increase in occurrence by over 5%. There is also an increase in the detection of multilayered cloud profiles from 9% to 19%. Macrophysical properties and estimates of cloud optical depth are presented for a transparent cirrus dataset. However, the limit to which the cirrus cloud optical depth could be reliably estimated occurs between 0.5 and 0.8. A comparison using collocated CALIPSO measurements at the Goddard Space Flight Center and Singapore Micro Pulse Lidar Network (MPLNET) sites indicates improvements in cloud occurrence frequencies and layer heights.
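
    A much-simplified picture of the derivative-based screening for low clouds is sketched below; the threshold, noise estimate and synthetic profile are invented, and the actual MPLNET Version 3 algorithm additionally uses uncertainty-based tests for high clouds and multitemporal averaging:

        # Flag range bins where the vertical signal gradient exceeds a few times the
        # noise level estimated from the top of a normalized lidar profile.
        import numpy as np

        def simple_cloud_mask(signal, z, k=4.0):
            grad = np.gradient(signal, z)
            noise = np.std(grad[-50:])          # noise estimate from the uppermost bins
            return grad > k * noise             # strong positive gradients mark cloud bases

        rng = np.random.default_rng(0)
        z = np.arange(0.0, 15.0, 0.03)                     # height (km), 30 m bins
        signal = np.exp(-z / 8.0) + 0.01 * rng.standard_normal(z.size)
        signal[(z > 9.0) & (z < 9.5)] += 0.8               # synthetic cirrus layer
        mask = simple_cloud_mask(signal, z)
        print("flagged heights (km):", np.round(z[mask], 2))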

  14. Visualization of metabolic interaction networks in microbial communities using VisANT 5.0

    DOE PAGES

    Granger, Brian R.; Chang, Yi -Chien; Wang, Yan; ...

    2016-04-15

    Here, the complexity of metabolic networks in microbial communities poses an unresolved visualization and interpretation challenge. We address this challenge in the newly expanded version of a software tool for the analysis of biological networks, VisANT 5.0. We focus in particular on facilitating the visual exploration of metabolic interaction between microbes in a community, e.g. as predicted by COMETS (Computation of Microbial Ecosystems in Time and Space), a dynamic stoichiometric modeling framework. Using VisANT's unique meta-graph implementation, we show how one can use VisANT 5.0 to explore different time-dependent ecosystem-level metabolic networks. In particular, we analyze the metabolic interaction network between two bacteria previously shown to display an obligate cross-feeding interdependency. In addition, we illustrate how a putative minimal gut microbiome community could be represented in our framework, making it possible to highlight interactions across multiple coexisting species. We envisage that the "symbiotic layout" of VisANT can be employed as a general tool for the analysis of metabolism in complex microbial communities as well as heterogeneous human tissues.

  15. Development and application of a regional-scale atmospheric mercury model based on WRF/Chem: a Mediterranean area investigation.

    PubMed

    Gencarelli, Christian Natale; De Simone, Francesco; Hedgecock, Ian Michael; Sprovieri, Francesca; Pirrone, Nicola

    2014-03-01

    The emission, transport, deposition and eventual fate of mercury (Hg) in the Mediterranean area has been studied using a modified version of the Weather Research and Forecasting model coupled with Chemistry (WRF/Chem). This model version has been developed specifically with the aim to simulate the atmospheric processes determining atmospheric Hg emissions, concentrations and deposition online at high spatial resolution. For this purpose, the gas phase chemistry of Hg and a parametrised representation of atmospheric Hg aqueous chemistry have been added to the regional acid deposition model version 2 chemical mechanism in WRF/Chem. Anthropogenic mercury emissions from the Arctic Monitoring and Assessment Programme included in the emissions preprocessor, mercury evasion from the sea surface and Hg released from biomass burning have also been included. Dry and wet deposition processes for Hg have been implemented. The model has been tested for the whole of 2009 using measurements of total gaseous mercury from the European Monitoring and Evaluation Programme monitoring network. Speciated measurement data of atmospheric elemental Hg, gaseous oxidised Hg and Hg associated with particulate matter, from a Mediterranean oceanographic campaign (June 2009), has permitted the model's ability to simulate the atmospheric redox chemistry of Hg to be assessed. The model results highlight the importance of both the boundary conditions employed and the accuracy of the mercury speciation in the emission database. The model has permitted the reevaluation of the deposition to, and the emission from, the Mediterranean Sea. In light of the well-known high concentrations of methylmercury in a number of Mediterranean fish species, this information is important in establishing the mass balance of Hg for the Mediterranean Sea. The model results support the idea that the Mediterranean Sea is a net source of Hg to the atmosphere and suggest that the net flux is ≈30 Mg year(-1) of elemental Hg.

  16. Task 28: Web Accessible APIs in the Cloud Trade Study

    NASA Technical Reports Server (NTRS)

    Gallagher, James; Habermann, Ted; Jelenak, Aleksandar; Lee, Joe; Potter, Nathan; Yang, Muqun

    2017-01-01

    This study explored three candidate architectures for serving NASA Earth Science Hierarchical Data Format Version 5 (HDF5) data via Hyrax running on Amazon Web Services (AWS). We studied the cost and performance for each architecture using several representative use cases. The objectives of the project are to: (1) conduct a trade study to identify one or more high-performance integrated solutions for storing and retrieving NASA HDF5 and Network Common Data Format Version 4 (netCDF4) data in a cloud (web object store) environment, with the Amazon Web Services (AWS) Simple Storage Service (S3) as the target environment; (2) conduct the level of software development needed to properly evaluate solutions in the trade study and to obtain the benchmarking metrics required as input into a government decision on potential follow-on prototyping; and (3) develop a cloud cost model for the preferred data storage solution (or solutions) that accounts for different granulation and aggregation schemes as well as cost and performance trades.

  17. Publisher Correction: Hierarchical self-entangled carbon nanotube tube networks.

    PubMed

    Schütt, Fabian; Signetti, Stefano; Krüger, Helge; Röder, Sarah; Smazna, Daria; Kaps, Sören; Gorb, Stanislav N; Mishra, Yogendra Kumar; Pugno, Nicola M; Adelung, Rainer

    2018-01-09

    The original version of this Article was missing the ORCID ID of Professor Nicola Pugno. Also in the original version of this Article, the third to last sentence of the fourth paragraph of the Results incorrectly read 'However, the stepwise addition of CNTs increases the self-entanglement and thereby the compressive strength value as well as the Young's modulus (up to 2.5 MPa (normalized by density 6.4) and 24.5 MPa (normalized by density 62 MPa cm³ g⁻¹).' The correct version adds the units 'MPa cm³ g⁻¹' to '6.4'. Finally, in the original version of this Article, the y-axis label of Figure 3f incorrectly read 'Comp. strengthy'. The new version corrects that to 'Comp. Strength'. These errors have now been corrected in both the PDF and the HTML versions of the Article.

  18. Visualization of Metabolic Interaction Networks in Microbial Communities Using VisANT 5.0

    PubMed Central

    Wang, Yan; DeLisi, Charles; Segrè, Daniel; Hu, Zhenjun

    2016-01-01

    The complexity of metabolic networks in microbial communities poses an unresolved visualization and interpretation challenge. We address this challenge in the newly expanded version of a software tool for the analysis of biological networks, VisANT 5.0. We focus in particular on facilitating the visual exploration of metabolic interaction between microbes in a community, e.g. as predicted by COMETS (Computation of Microbial Ecosystems in Time and Space), a dynamic stoichiometric modeling framework. Using VisANT’s unique metagraph implementation, we show how one can use VisANT 5.0 to explore different time-dependent ecosystem-level metabolic networks. In particular, we analyze the metabolic interaction network between two bacteria previously shown to display an obligate cross-feeding interdependency. In addition, we illustrate how a putative minimal gut microbiome community could be represented in our framework, making it possible to highlight interactions across multiple coexisting species. We envisage that the “symbiotic layout” of VisANT can be employed as a general tool for the analysis of metabolism in complex microbial communities as well as heterogeneous human tissues. VisANT is freely available at: http://visant.bu.edu and COMETS at http://comets.bu.edu. PMID:27081850

  19. Visualization of Metabolic Interaction Networks in Microbial Communities Using VisANT 5.0.

    PubMed

    Granger, Brian R; Chang, Yi-Chien; Wang, Yan; DeLisi, Charles; Segrè, Daniel; Hu, Zhenjun

    2016-04-01

    The complexity of metabolic networks in microbial communities poses an unresolved visualization and interpretation challenge. We address this challenge in the newly expanded version of a software tool for the analysis of biological networks, VisANT 5.0. We focus in particular on facilitating the visual exploration of metabolic interaction between microbes in a community, e.g. as predicted by COMETS (Computation of Microbial Ecosystems in Time and Space), a dynamic stoichiometric modeling framework. Using VisANT's unique metagraph implementation, we show how one can use VisANT 5.0 to explore different time-dependent ecosystem-level metabolic networks. In particular, we analyze the metabolic interaction network between two bacteria previously shown to display an obligate cross-feeding interdependency. In addition, we illustrate how a putative minimal gut microbiome community could be represented in our framework, making it possible to highlight interactions across multiple coexisting species. We envisage that the "symbiotic layout" of VisANT can be employed as a general tool for the analysis of metabolism in complex microbial communities as well as heterogeneous human tissues. VisANT is freely available at: http://visant.bu.edu and COMETS at http://comets.bu.edu.

  20. Motif structure and cooperation in real-world complex networks

    NASA Astrophysics Data System (ADS)

    Salehi, Mostafa; Rabiee, Hamid R.; Jalili, Mahdi

    2010-12-01

    Networks of dynamical nodes serve as generic models for real-world systems in many branches of science ranging from mathematics to physics, technology, sociology and biology. Collective behavior of agents interacting over complex networks is important in many applications. The cooperation between selfish individuals is one of the most interesting collective phenomena. In this paper we address the interplay between the motifs’ cooperation properties and their abundance in a number of real-world networks including yeast protein-protein interaction, human brain, protein structure, email communication, dolphins’ social interaction, Zachary karate club and Net-science coauthorship networks. First, the amount of cooperativity for all possible undirected subgraphs with three to six nodes is calculated. To this end, the evolutionary dynamics of the Prisoner’s Dilemma game is considered and the cooperativity of each subgraph is calculated as the percentage of cooperating agents at the end of the simulation time. Then, the three- to six-node motifs are extracted for each network. The significance of the abundance of a motif, represented by a Z-value, is obtained by comparing them with some properly randomized versions of the original network. We found that there is always a group of motifs showing a significant inverse correlation between their cooperativity amount and Z-value, i.e. the more the Z-value the less the amount of cooperativity. This suggests that networks composed of well-structured units do not have good cooperativity properties.
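
    As an illustration of the cooperativity measurement, the sketch below runs an imitate-the-best Prisoner's Dilemma on a small undirected motif and reports the final fraction of cooperators averaged over random initial conditions; the payoff values (T = b, R = 1, P = S = 0) and the update rule are one common, illustrative choice rather than the exact protocol of the paper:

        import random

        def pd_payoff(me, other, b):
            if me and other:
                return 1.0          # mutual cooperation (R = 1)
            if (not me) and other:
                return b            # defector exploits a cooperator (T = b)
            return 0.0              # P = S = 0

        def cooperativity(edges, n, b=1.4, rounds=50, trials=200):
            nbrs = {i: set() for i in range(n)}
            for u, v in edges:
                nbrs[u].add(v)
                nbrs[v].add(u)
            total = 0.0
            for _ in range(trials):
                state = {i: random.random() < 0.5 for i in range(n)}   # True = cooperate
                for _ in range(rounds):
                    payoff = {i: sum(pd_payoff(state[i], state[j], b) for j in nbrs[i])
                              for i in range(n)}
                    # synchronous update: copy the strategy of the best-scoring neighbour (or self)
                    state = {i: state[max(nbrs[i] | {i}, key=payoff.get)] for i in range(n)}
                total += sum(state.values()) / n
            return total / trials

        random.seed(1)
        triangle = [(0, 1), (1, 2), (2, 0)]
        chain = [(0, 1), (1, 2)]
        print(cooperativity(triangle, 3), cooperativity(chain, 3))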

  1. Department of the Navy Naval Networking Environment (NNE)-2016. Strategic Definition, Scope and Strategy Paper, Version 1.1

    DTIC Science & Technology

    2008-05-13

    IA capabilities applied to protect, defend, and respond to them. This will provide decision makers and network operators, at all command levels...procedures to recognize, react, and respond to potential system and network compromises must be in place and provide control sufficient to protect the...to respond to and track users’ needs. • Information Service Visibility. Interview responses described a need for the reporting of network status and

  2. Global 30m Height Above the Nearest Drainage

    NASA Astrophysics Data System (ADS)

    Donchyts, Gennadii; Winsemius, Hessel; Schellekens, Jaap; Erickson, Tyler; Gao, Hongkai; Savenije, Hubert; van de Giesen, Nick

    2016-04-01

    Variability of the Earth surface is the primary characteristics affecting the flow of surface and subsurface water. Digital elevation models, usually represented as height maps above some well-defined vertical datum, are used a lot to compute hydrologic parameters such as local flow directions, drainage area, drainage network pattern, and many others. Usually, it requires a significant effort to derive these parameters at a global scale. One hydrological characteristic introduced in the last decade is Height Above the Nearest Drainage (HAND): a digital elevation model normalized using nearest drainage. This parameter has been shown to be useful for many hydrological and more general purpose applications, such as landscape hazard mapping, landform classification, remote sensing and rainfall-runoff modeling. One of the essential characteristics of HAND is its ability to capture heterogeneities in local environments, difficult to measure or model otherwise. While many applications of HAND were published in the academic literature, no studies analyze its variability on a global scale, especially, using higher resolution DEMs, such as the new, one arc-second (approximately 30m) resolution version of SRTM. In this work, we will present the first global version of HAND computed using a mosaic of two DEMS: 30m SRTM and Viewfinderpanorama DEM (90m). The lower resolution DEM was used to cover latitudes above 60 degrees north and below 56 degrees south where SRTM is not available. We compute HAND using the unmodified version of the input DEMs to ensure consistency with the original elevation model. We have parallelized processing by generating a homogenized, equal-area version of HydroBASINS catchments. The resulting catchment boundaries were used to perform processing using 30m resolution DEM. To compute HAND, a new version of D8 local drainage directions as well as flow accumulation were calculated. The latter was used to estimate river head by incorporating fixed and variable thresholding methods. The resulting HAND dataset was analyzed regarding its spatial variability and to assess the global distribution of the main landform types: valley, ecotone, slope, and plateau. The method used to compute HAND was implemented using PCRaster software, running on Google Compute Engine platform running under Ubuntu Linux. The Google Earth Engine was used to perform mosaicing and clipping of the original DEMs as well as to provide access to the final product. The effort took about three months of computing time on eight core CPU virtual machine.
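
    In miniature, the HAND calculation can be written as a walk along D8 steepest-descent directions until a drainage cell is reached; the toy DEM, the drainage mask and the absence of pit filling below are simplifications of the global workflow described above:

        # HAND for each cell = its elevation minus the elevation of the drainage cell
        # reached by following D8 steepest descent; cells trapped in pits stay NaN.
        import numpy as np

        def hand(dem, drainage):
            rows, cols = dem.shape
            offsets = [(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]
            out = np.full(dem.shape, np.nan)
            for r in range(rows):
                for c in range(cols):
                    i, j, seen = r, c, set()
                    while not drainage[i, j]:
                        seen.add((i, j))
                        steps = [(dem[i+di, j+dj], i+di, j+dj) for di, dj in offsets
                                 if 0 <= i+di < rows and 0 <= j+dj < cols]
                        z, ni, nj = min(steps)
                        if z >= dem[i, j] or (ni, nj) in seen:
                            break                      # pit or loop: leave as NaN
                        i, j = ni, nj
                    else:
                        out[r, c] = dem[r, c] - dem[i, j]
            return out

        dem = np.array([[5., 4., 3.],
                        [4., 3., 2.],
                        [3., 2., 1.]])
        drainage = dem <= 1.0                          # toy "river": the lowest cell
        print(hand(dem, drainage))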

  3. Static Chemistry in Disks or Clouds

    NASA Astrophysics Data System (ADS)

    Semenov, D.; Wiebe, D.

    2006-11-01

    This FORTRAN77 code can be used to model static, time-dependent chemistry in the ISM and circumstellar disks. The current version is based on the OSU'06 gas-grain astrochemical network with all updates to the reaction rates, and includes surface chemistry from Hasegawa & Herbst (1993) and Hasegawa, Herbst, and Leung (1992). Surface chemistry can be modeled with either the standard rate equation approach or a modified rate equation approach (useful in disks). Gas-grain interactions include sticking of neutral molecules to grains, dissociative recombination of ions on grains, as well as thermal, UV, X-ray, and CRP-induced desorption of frozen species. An advanced X-ray chemistry and three grain sizes with a power-law size distribution are also included. A deuterium extension to this chemical model is available.

  4. Resource Tracking Model Updates and Trade Studies

    NASA Technical Reports Server (NTRS)

    Chambliss, Joe; Stambaugh, Imelda; Moore, Michael

    2016-01-01

    The Resource Tracking Model has been updated to capture system manager and project manager inputs. Both the Trick/General Use Nodal Network Solver Resource Tracking Model (RTM) simulator and the RTM mass balance spreadsheet have been revised to address inputs from system managers and to refine the way mass balance is illustrated. The revisions to the RTM included the addition of a Plasma Pyrolysis Assembly (PPA) to recover hydrogen from Sabatier Reactor methane, which was vented in the prior version of the RTM. The effect of the PPA on the overall balance of resources in an exploration vehicle is illustrated in the increased recycle of vehicle oxygen. Case studies have been run to show the relative effect of performance changes on vehicle resources.

  5. Multichannel Convolutional Neural Network for Biological Relation Extraction.

    PubMed

    Quan, Chanqin; Hua, Lei; Sun, Xiao; Bai, Wenjun

    2016-01-01

    The plethora of biomedical relations which are embedded in medical logs (records) demands researchers' attention. Previous theoretical and practical work was largely restricted to traditional machine learning techniques. However, these methods are susceptible to the "vocabulary gap" and data sparseness issues and do not automate feature extraction. To address the aforementioned issues, in this work, we propose a multichannel convolutional neural network (MCCNN) for automated biomedical relation extraction. The proposed model has the following two contributions: (1) it enables the fusion of multiple (e.g., five) versions of word embeddings; (2) the need for manual feature engineering can be obviated by automated feature learning with a convolutional neural network (CNN). We evaluated our model on two biomedical relation extraction tasks: drug-drug interaction (DDI) extraction and protein-protein interaction (PPI) extraction. For the DDI task, our system achieved an overall f-score of 70.2% compared to a standard linear SVM-based system (67.0%) on the DDIExtraction 2013 challenge dataset. For the PPI task, we evaluated our system on the AIMed and BioInfer PPI corpora; our system exceeded the state-of-the-art ensemble SVM system by 2.7% and 5.6% in f-score.

  6. IntNetDB v1.0: an integrated protein-protein interaction network database generated by a probabilistic model

    PubMed Central

    Xia, Kai; Dong, Dong; Han, Jing-Dong J

    2006-01-01

    Background Although protein-protein interaction (PPI) networks have been explored by various experimental methods, the maps so built are still limited in coverage and accuracy. To further expand the PPI network and to extract more accurate information from existing maps, studies have been carried out to integrate various types of functional relationship data. A frequently updated database of computationally analyzed potential PPIs to provide biological researchers with rapid and easy access to analyze original data as a biological network is still lacking. Results By applying a probabilistic model, we integrated 27 heterogeneous genomic, proteomic and functional annotation datasets to predict PPI networks in human. In addition to previously studied data types, we show that phenotypic distances and genetic interactions can also be integrated to predict PPIs. We further built an easy-to-use, updatable integrated PPI database, the Integrated Network Database (IntNetDB) online, to provide automatic prediction and visualization of PPI network among genes of interest. The networks can be visualized in SVG (Scalable Vector Graphics) format for zooming in or out. IntNetDB also provides a tool to extract topologically highly connected network neighborhoods from a specific network for further exploration and research. Using the MCODE (Molecular Complex Detections) algorithm, 190 such neighborhoods were detected among all the predicted interactions. The predicted PPIs can also be mapped to worm, fly and mouse interologs. Conclusion IntNetDB includes 180,010 predicted protein-protein interactions among 9,901 human proteins and represents a useful resource for the research community. Our study has increased prediction coverage by five-fold. IntNetDB also provides easy-to-use network visualization and analysis tools that allow biological researchers unfamiliar with computational biology to access and analyze data over the internet. The web interface of IntNetDB is freely accessible at . Visualization requires Mozilla version 1.8 (or higher) or Internet Explorer with installation of SVGviewer. PMID:17112386
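
    The evidence-integration step can be illustrated with a naive-Bayes combination of likelihood ratios; the ratios and prior odds below are invented numbers, not IntNetDB's trained values:

        # Combine independent datasets via naive-Bayes likelihood ratios to score a
        # candidate protein pair as interacting.
        def interaction_posterior(likelihood_ratios, prior_odds=1.0 / 600.0):
            odds = prior_odds
            for lr in likelihood_ratios:           # datasets assumed to contribute independently
                odds *= lr
            return odds / (1.0 + odds)

        # hypothetical evidence: co-expression, phenotypic distance, genetic interaction
        print(round(interaction_posterior([12.0, 4.5, 30.0]), 3))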

  7. Wavelength Division Multiplexing Scheme for Radio-Frequency Single Electron Transistors

    NASA Technical Reports Server (NTRS)

    Stevenson, Thomas R.; Pellerano, F. A.; Stahle, C. M.; Aidala, K.; Schoelkopf, R. J.; Krebs, Carolyn (Technical Monitor)

    2001-01-01

    We describe work on a wavelength division multiplexing scheme for radio-frequency single electron transistors. We use a network of resonant impedance matching circuits to direct applied rf carrier waves to different transistors depending on carrier frequency. Using discrete components, we made a two-channel demonstration of this concept and successfully reconstructed input signals with small levels of cross coupling. A lithographic version of the rf circuits had measured parameters in agreement with electromagnetic modeling, with reduced cross capacitance and inductance, and should allow 20 to 50 channels to be multiplexed.

  8. Modeling and Multiresponse Optimization for Anaerobic Codigestion of Oil Refinery Wastewater and Chicken Manure by Using Artificial Neural Network and the Taguchi Method

    PubMed Central

    Hemmat, Abbas; Kafashan, Jalal; Huang, Hongying

    2017-01-01

    To study the optimum process conditions for pretreatments and anaerobic codigestion of oil refinery wastewater (ORWW) with chicken manure, a Taguchi L9 (3⁴) orthogonal array was applied. The biogas production (BGP), biomethane content (BMP), and chemical oxygen demand solubilization (CODS) in stabilization rate were evaluated as the process outputs. The optimum conditions were obtained by using Design Expert software (Version 7.0.0). The results indicated that the optimum conditions could be achieved with 44% ORWW, 36°C temperature, 30 min sonication, and 6% TS in the digester. The optimum BGP, BMP, and CODS removal rates by using the optimum conditions were 294.76 mL/gVS, 151.95 mL/gVS, and 70.22%, respectively, as concluded by the experimental results. In addition, the artificial neural network (ANN) technique was implemented to develop an ANN model for predicting BGP yield and BMP content. The Levenberg-Marquardt algorithm was utilized to train the ANN, and a 9-19-2 architecture was obtained for the ANN model.
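
    The 9-19-2 feed-forward architecture can be sketched as below with scikit-learn; note the substitution of L-BFGS for Levenberg-Marquardt training (which scikit-learn does not provide) and the use of random placeholder data rather than the digestion experiments:

        # 9 process inputs -> 19 hidden units -> 2 outputs (BGP yield, BMP content).
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.random((60, 9))                    # placeholder process conditions
        y = rng.random((60, 2))                    # placeholder BGP and BMP values
        model = MLPRegressor(hidden_layer_sizes=(19,), solver="lbfgs", max_iter=5000)
        model.fit(X, y)
        print("training R^2:", round(model.score(X, y), 3))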

  9. A Spectral Element Ocean Model on the Cray T3D: the interannual variability of the Mediterranean Sea general circulation

    NASA Astrophysics Data System (ADS)

    Molcard, A. J.; Pinardi, N.; Ansaloni, R.

    A new numerical model, SEOM (Spectral Element Ocean Model; Iskandarani et al., 1994), has been implemented in the Mediterranean Sea. Spectral element methods combine the geometric flexibility of finite element techniques with the rapid convergence rate of spectral schemes. The current version solves the shallow water equations with a fifth- (or sixth-) order accurate spectral scheme and about 50,000 nodes. The domain decomposition philosophy makes it possible to exploit the power of parallel machines. The original MIMD master/slave version of SEOM, written in F90 and PVM, has been ported to the Cray T3D. When critical for performance, Cray-specific high-performance one-sided communication routines (SHMEM) have been adopted to fully exploit the Cray T3D interprocessor network. Tests performed with highly unstructured and irregular grids, on up to 128 processors, show almost linear scalability even with unoptimized domain decomposition techniques. Results from various case studies on the Mediterranean Sea are shown, involving realistic coastline geometry and monthly mean 1000 mb winds from the ECMWF atmospheric model operational analysis for the period January 1987 to December 1994. The simulation results show that variability in the wind forcing considerably affects the circulation dynamics of the Mediterranean Sea.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sobral, G. A. Jr.; Vieira, V. M.; Lyra, M. L.

    Extending a model due to Derrida, Gardner, and Zippelius, we have studied the recognition ability of an extreme and asymmetrically diluted version of the Hopfield model for associative memory by including the effect of a stimulus in the dynamics of the system. We obtain exact results for the dynamic evolution of the average network superposition. The stimulus field was considered as proportional to the overlapping of the state of the system with a particular stimulated pattern. Two situations were analyzed, namely, the external stimulus acting on the initialization pattern (parallel stimulus) and the external stimulus acting on a pattern orthogonal to the initialization one (orthogonal stimulus). In both cases, we obtained the complete phase diagram in the parameter space composed of the stimulus field, thermal noise, and network capacity. Our results show that the system improves its recognition ability for parallel stimulus. For orthogonal stimulus two recognition phases emerge with the system locking at the initialization or stimulated pattern. We confront our analytical results with numerical simulations for the noiseless case T=0.

  11. Influence of sub-kilometer precipitation datasets on simulated snowpack and glacier winter balance in alpine terrain.

    NASA Astrophysics Data System (ADS)

    Vionnet, Vincent; Six, Delphine; Auger, Ludovic; Lafaysse, Matthieu; Quéno, Louis; Réveillet, Marion; Dombrowski-Etchevers, Ingrid; Thibert, Emmanuel; Dumont, Marie

    2017-04-01

    Capturing spatial and temporal variabilities of meteorological conditions at fine scale is necessary for modelling snowpack and glacier winter mass balance in alpine terrain. In particular, precipitation amount and phase are strongly influenced by the complex topography. In this study, we assess the impact of three sub-kilometer precipitation datasets (rainfall and snowfall) on distributed simulations of snowpack and glacier winter mass balance with the detailed snowpack model Crocus for winter 2011-2012. The different precipitation datasets at 500-m grid spacing over part of the French Alps (200 × 200 km² area) come either from (i) the SAFRAN precipitation analysis specially developed for alpine terrain, (ii) operational outputs of the atmospheric model AROME at 2.5-km grid spacing downscaled to 500 m with a fixed lapse rate, or (iii) a version of the atmospheric model AROME at 500-m grid spacing. Other atmospheric forcings (air temperature and humidity, incoming longwave and shortwave radiation, wind speed) are taken from the AROME simulations at 500-m grid spacing. These atmospheric forcings are first compared against a network of automatic weather stations. Results are analysed with respect to station location (valley, mid- and high-altitude). The spatial pattern of seasonal snowfall and its dependency on elevation is then analysed for the different precipitation datasets. Large differences between SAFRAN and the two versions of AROME are found at high altitude. Finally, results of Crocus snowpack simulations are evaluated against (i) point-scale in-situ measurements of snow depth and snow water equivalent, and (ii) maps of snow-covered areas retrieved from optical satellite data (MODIS). Measurements of winter accumulation on six glaciers of the French Alps are also used and provide very valuable information on precipitation at high altitude, where the conventional observation network is scarce. This study illustrates the potential and limitations of high-resolution atmospheric models to drive simulations of snowpack and glacier winter mass balance in alpine terrain.

  12. Evolution of Linux operating system network

    NASA Astrophysics Data System (ADS)

    Xiao, Guanping; Zheng, Zheng; Wang, Haoqin

    2017-01-01

    Linux operating system (LOS) is a sophisticated man-made system and one of the most ubiquitous operating systems. However, there is little research on the structure and functionality evolution of LOS from the perspective of networks. In this paper, we investigate the evolution of the LOS network. 62 major releases of LOS ranging from versions 1.0 to 4.1 are modeled as directed networks in which functions are denoted by nodes and function calls are denoted by edges. It is found that the size of the LOS network grows almost linearly, while the clustering coefficient monotonically decays. The degree distributions are almost the same: the out-degree follows an exponential distribution while both in-degree and undirected degree follow power-law distributions. We further explore the functionality evolution of the LOS network. It is observed that the evolution of functional modules is shown as a sequence of seven events (changes) succeeding each other, including continuing, growth, contraction, birth, splitting, death and merging events. By means of a statistical analysis of these events in the top 4 largest components (i.e., arch, drivers, fs and net), it is shown that continuing, growth and contraction events occupy more than 95% of events. Our work exemplifies a better understanding and description of the dynamics of LOS evolution.
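
    The kind of call-graph analysis described above can be reproduced in a few lines once the call list is available; the toy call list below is invented, whereas in the study the edges come from parsing each kernel release:

        # Functions as nodes, calls as directed edges; then degree distributions and
        # clustering of the undirected projection.
        import networkx as nx
        from collections import Counter

        calls = [("sys_open", "do_sys_open"), ("do_sys_open", "getname"),
                 ("do_sys_open", "fd_install"), ("sys_read", "vfs_read"),
                 ("vfs_read", "rw_verify_area"), ("vfs_read", "file_pos_read")]

        g = nx.DiGraph()
        g.add_edges_from(calls)

        print("nodes:", g.number_of_nodes(), "edges:", g.number_of_edges())
        print("out-degree distribution:", Counter(d for _, d in g.out_degree()))
        print("in-degree distribution:", Counter(d for _, d in g.in_degree()))
        print("clustering coefficient:", nx.average_clustering(g.to_undirected()))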

  13. Dynamical systems approach to the study of a sociophysics agent-based model

    NASA Astrophysics Data System (ADS)

    Timpanaro, André M.; Prado, Carmen P. C.

    2011-03-01

    The Sznajd model is a Potts-like model that has been studied in the context of sociophysics [1,2] (where spins are interpreted as opinions). In a recent work [3], we generalized the Sznajd model to include asymmetric interactions between the spins (interpreted as biases towards opinions) and used dynamical systems techniques to tackle its mean-field version, given by the flow η̇_σ = ∑_{σ'=1}^{M} η_σ η_{σ'} (η_σ ρ_{σ'→σ} − η_{σ'} ρ_{σ→σ'}), where η_σ is the proportion of agents with opinion (spin) σ, M is the number of opinions and ρ_{σ→σ'} is the probability weight for an agent with opinion σ being convinced by another agent with opinion σ'. We made Monte Carlo simulations of the model in a complex network (using Barabási-Albert networks [4]) and they displayed the same attractors as the mean-field. Using linear stability analysis, we were able to determine the mean-field attractor structure analytically and to show that it has connections with well-known graph theory problems (maximal independent sets and positive fluxes in directed graphs). Our dynamical systems approach is quite simple and can also be used in other models, like the voter model.
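
    The mean-field flow quoted above can be integrated directly; the sketch below does so for M = 3 opinions with an arbitrary, made-up bias matrix ρ:

        # d eta_s/dt = sum_u eta_s * eta_u * (eta_s * rho[u, s] - eta_u * rho[s, u]),
        # with rho[s, t] the bias weight for opinion s being converted to opinion t.
        import numpy as np
        from scipy.integrate import solve_ivp

        rho = np.array([[0.0, 1.0, 0.5],
                        [0.3, 0.0, 1.0],
                        [1.0, 0.6, 0.0]])    # illustrative biases

        def flow(t, eta):
            return [sum(eta[s] * eta[u] * (eta[s] * rho[u, s] - eta[u] * rho[s, u])
                        for u in range(len(eta))) for s in range(len(eta))]

        eta0 = [0.4, 0.35, 0.25]
        sol = solve_ivp(flow, (0.0, 200.0), eta0, rtol=1e-8)
        print("final opinion fractions:", np.round(sol.y[:, -1], 3))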

  14. Dynamical systems approach to the study of a sociophysics agent-based model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timpanaro, Andre M.; Prado, Carmen P. C.

    2011-03-24

    The Sznajd model is a Potts-like model that has been studied in the context of sociophysics [1,2] (where spins are interpreted as opinions). In a recent work [3], we generalized the Sznajd model to include asymmetric interactions between the spins (interpreted as biases towards opinions) and used dynamical systems techniques to tackle its mean-field version, given by the flow η̇_σ = ∑_{σ'=1}^{M} η_σ η_{σ'} (η_σ ρ_{σ'→σ} − η_{σ'} ρ_{σ→σ'}), where η_σ is the proportion of agents with opinion (spin) σ, M is the number of opinions and ρ_{σ→σ'} is the probability weight for an agent with opinion σ being convinced by another agent with opinion σ'. We made Monte Carlo simulations of the model in a complex network (using Barabasi-Albert networks [4]) and they displayed the same attractors as the mean-field. Using linear stability analysis, we were able to determine the mean-field attractor structure analytically and to show that it has connections with well-known graph theory problems (maximal independent sets and positive fluxes in directed graphs). Our dynamical systems approach is quite simple and can also be used in other models, like the voter model.

  15. Arachne User Guide. Version 1.2.

    DTIC Science & Technology

    1980-04-01

    Arachne is an experimental operating system for controlling a network of microcomputers. It is currently implemented on a network of five Digital Equipment...exception to be raised. 4.1 Alias (Library Routine) int alias(fslink,fnamel,fname2) char *fnamel, *fname2; The new name "fname2" is associated with

  16. Neural methods based on modified reputation rules for detection and identification of intrusion attacks in wireless ad hoc sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2010-04-01

    Determining methods to secure the process of data fusion against attacks by compromised nodes in wireless sensor networks (WSNs) and to quantify the uncertainty that may exist in the aggregation results is a critical issue in mitigating the effects of intrusion attacks. Published research has introduced the concept of the trustworthiness (reputation) of a single sensor node. Reputation is evaluated using an information-theoretic concept, the Kullback-Leibler (KL) distance. Reputation is added to the set of security features. In data aggregation, an opinion, a metric of the degree of belief, is generated to represent the uncertainty in the aggregation result. As aggregate information is disseminated along routes to the sink node(s), its corresponding opinion is propagated and regulated by Josang's belief model. By applying subjective logic on the opinion to manage trust propagation, the uncertainty inherent in aggregation results can be quantified for use in decision making. The concepts of reputation and opinion are modified to allow their application to a class of dynamic WSNs. Using reputation as a factor in determining interim aggregate information is equivalent to implementation of a reputation-based security filter at each processing stage of data fusion, thereby improving the intrusion detection and identification results based on unsupervised techniques. In particular, the reputation-based version of the probabilistic neural network (PNN) learns the signature of normal network traffic, with the random probability weights normally used in the PNN replaced by the trust-based quantified reputations of sensor data or subsequent aggregation results generated by the sequential implementation of a version of Josang's belief model. A two-stage intrusion detection and identification algorithm is implemented to overcome the problems of large sensor data loads and resource restrictions in WSNs. Performance of the two-stage algorithm is assessed in simulations of WSN scenarios with multiple sensors at edge nodes for known intrusion attacks. Simulation results show improved robustness of the two-stage design based on reputation-based NNs to intrusion anomalies from compromised nodes and external intrusion attacks.
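
    The reputation idea can be illustrated by scoring each sensor with the KL distance between its reading distribution and the consensus and mapping that distance to a [0, 1] weight; the binning, the exp(-KL) mapping and the synthetic readings below are illustrative choices, not the algorithm of the paper:

        import numpy as np

        def kl(p, q, eps=1e-12):
            p, q = p + eps, q + eps
            return float(np.sum(p * np.log(p / q)))

        def reputations(readings, bins=10):
            edges = np.histogram_bin_edges(np.concatenate(readings), bins=bins)
            hists = [np.histogram(r, bins=edges)[0].astype(float) for r in readings]
            hists = [h / h.sum() for h in hists]
            consensus = np.mean(hists, axis=0)          # aggregate reading distribution
            return [np.exp(-kl(h, consensus)) for h in hists]

        rng = np.random.default_rng(0)
        honest = [rng.normal(20.0, 1.0, 200) for _ in range(4)]
        compromised = [rng.normal(27.0, 1.0, 200)]      # node injecting biased data
        print(np.round(reputations(honest + compromised), 3))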

  17. An international standard for observation data

    NASA Astrophysics Data System (ADS)

    Cox, Simon

    2010-05-01

    A generic information model for observations and related features supports data exchange both within and between different scientific and technical communities. Observations and Measurements (O&M) formalizes a neutral terminology for observation data and metadata. It was based on a model developed for medical observations, and draws on experience from geology and mineral exploration, in-situ monitoring, remote sensing, intelligence, biodiversity studies, ocean observations and climate simulations. Hundreds of current deployments of Sensor Observation Services (SOS), covering multiple disciplines, provide validation of the O&M model. A W3C Incubator group on 'Semantic Sensor Networks' is now using O&M as one of the bases for development of a formal ontology for sensor networks. O&M defines the information describing observation acts and their results, including the following key terms: observation, result, observed-property, feature-of-interest, procedure, phenomenon-time, and result-time. The model separates of the (meta-)data associated with the observation procedure, the observed feature, and the observation event itself. Observation results may take various forms, including scalar quantities, categories, vectors, grids, or any data structure required to represent the value of some property of some observed feature. O&M follows the ISO/TC 211 General Feature Model so non-geometric properties must be associated with typed feature instances. This requires formalization of information that may be trivial when working within some earth-science sub-disciplines (e.g. temperature, pressure etc. are associated with the atmosphere or ocean, and not just a location) but is critical to cross-disciplinary applications. It also allows the same structure and terminology to be used for in-situ, ex-situ and remote sensing observations, as well as for simulations. For example: a stream level observation is an in-situ monitoring application where the feature-of-interest is a reach, the observed property is water-level, and the result is a time-series of heights; stream quality is usually determined by ex-situ observation where the feature-of-interest is a specimen that is recovered from the stream, the observed property is water-quality, and the result is a set of measures of various parameters, or an assessment derived from these; on the other hand, distribution of surface temperature of a water body is typically determined through remote-sensing, where at observation time the procedure is located distant from the feature-of-interest, and the result is an image or grid. Observations usually involve sampling of an ultimate feature-of-interest. In the environmental sciences common sampling strategies are used. Spatial sampling is classified primarily by topological dimension (point, curve, surface, volume) and is supported by standard processing and visualisation tools. Specimens are used for ex-situ processing in most disciplines. Sampling features are often part of complexes (e.g. specimens are sub-divided; specimens are retrieved from points along a transect; sections are taken across tracts), so relationships between instances must be recorded. And observational campaigns involve collections of sampling features. The sampling feature model is a core part of O&M, and application experience has shown that describing the relationships between sampling features and observations is generally critical to successful use of the model. 
O&M was developed through the Open Geospatial Consortium (OGC) as part of the Sensor Web Enablement (SWE) initiative. Other SWE standards include SensorML, SOS and the Sensor Planning Service (SPS). The OGC O&M standard (Version 1) had two parts: part 1 describes observation events, and part 2 provides a schema for sampling features. A revised version of O&M (Version 2) is to be published in a single document as ISO 19156. O&M Version 1 included an XML encoding for data exchange, which is used as the payload for SOS responses. The new version will provide a UML model only. Since an XML encoding may be generated following a rule, such as that presented in ISO 19136 (GML 3.2), it is not included in the standard directly. O&M Version 2 thus supports multiple physical implementations and versions.
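
    As an informal illustration only (not the normative ISO 19156 schema or XML encoding), the key O&M terms listed above can be pictured as a plain data structure:

        # An observation act links a procedure and an observed property to a feature of
        # interest and carries a result plus phenomenon and result times.
        from dataclasses import dataclass
        from datetime import datetime
        from typing import Any

        @dataclass
        class Observation:
            feature_of_interest: str      # e.g. a sampling feature such as a stream reach
            observed_property: str
            procedure: str
            phenomenon_time: datetime     # when the property applies to the feature
            result_time: datetime         # when the result became available
            result: Any                   # scalar, category, time series, grid, ...

        obs = Observation(
            feature_of_interest="stream-reach-42",
            observed_property="water-level",
            procedure="pressure-gauge-7",
            phenomenon_time=datetime(2009, 6, 1, 12, 0),
            result_time=datetime(2009, 6, 1, 12, 5),
            result=[(datetime(2009, 6, 1, 12, 0), 1.82)],   # (time, height in metres)
        )
        print(obs.observed_property, obs.result)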

  18. Spectra of Adjacency Matrices in Networks of Extreme Introverts and Extroverts

    NASA Astrophysics Data System (ADS)

    Bassler, Kevin E.; Ezzatabadipour, Mohammadmehdi; Zia, R. K. P.

    Many interesting properties were discovered in recent studies of preferred degree networks, suitable for describing social behavior of individuals who tend to prefer a certain number of contacts. In an extreme version (coined the XIE model), introverts always cut links while extroverts always add them. While the intra-group links are static, the cross-links are dynamic and lead to an ensemble of bipartite graphs, with extraordinary correlations between elements of the incidence matrix n_ij. In the steady state, this system can be regarded as one in thermal equilibrium with long-ranged interactions between the n_ij's, and displays an extreme Thouless effect. Here, we report simulation studies of a different aspect of these networks, namely, the spectra associated with this ensemble of adjacency matrices {a_ij}. As a baseline, we first consider the spectra associated with a simple random (Erdős-Rényi) ensemble of bipartite graphs, where simulation results can be understood analytically. Work supported by the NSF through Grant DMR-1507371.
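
    As an illustration of the baseline computation mentioned above, the following Python sketch pools eigenvalue spectra over an Erdős-Rényi ensemble of bipartite adjacency matrices (group sizes, link probability and ensemble size are arbitrary placeholder values, not those of the study):

        import numpy as np

        rng = np.random.default_rng(0)
        N_I, N_E, p = 100, 100, 0.5   # introvert/extrovert group sizes, cross-link probability

        eigenvalues = []
        for _ in range(200):                       # ensemble of random bipartite graphs
            n = rng.random((N_I, N_E)) < p         # incidence matrix n_ij (cross-links only)
            a = np.zeros((N_I + N_E, N_I + N_E))   # full adjacency matrix {a_ij}
            a[:N_I, N_I:] = n
            a[N_I:, :N_I] = n.T
            eigenvalues.append(np.linalg.eigvalsh(a))

        spectrum = np.concatenate(eigenvalues)     # pooled eigenvalues estimate the spectral density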

  19. Assessing the benefit of snow data assimilation for runoff modeling in Alpine catchments

    NASA Astrophysics Data System (ADS)

    Griessinger, Nena; Seibert, Jan; Magnusson, Jan; Jonas, Tobias

    2016-09-01

    In Alpine catchments, snowmelt is often a major contribution to runoff. Therefore, modeling snow processes is important when concerned with flood or drought forecasting, reservoir operation and inland waterway management. In this study, we address the question of how sensitive hydrological models are to the representation of snow cover dynamics and whether the performance of a hydrological model can be enhanced by integrating data from a dedicated external snow monitoring system. As a framework for our tests we have used the hydrological model HBV (Hydrologiska Byråns Vattenbalansavdelning) in the version HBV-light, which has been applied in many hydrological studies and is also in use for operational purposes. While HBV originally follows a temperature-index approach with time-invariant calibrated degree-day factors to represent snowmelt, in this study the HBV model was modified to use snowmelt time series from an external and spatially distributed snow model as model input. The external snow model integrates three-dimensional sequential assimilation of snow monitoring data with a snowmelt model, which is also based on the temperature-index approach but uses a time-variant degree-day factor. The following three variations of this external snow model were applied: (a) the full model with assimilation of observational snow data from a dense monitoring network, (b) the same snow model but with data assimilation switched off and (c) a downgraded version of the same snow model representing snowmelt with a time-invariant degree-day factor. Model runs were conducted for 20 catchments at different elevations within Switzerland for 15 years. Our results show that at low and mid-elevations the performance of the runoff simulations did not vary considerably with the snow model version chosen. At higher elevations, however, best performance in terms of simulated runoff was obtained when using the snowmelt time series from the snow model, which utilized data assimilation. This was especially true for snow-rich years. These findings suggest that with increasing elevation and the correspondingly increased contribution of snowmelt to runoff, the accurate estimation of snow water equivalent (SWE) and snowmelt rates has gained importance.
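
    The temperature-index (degree-day) melt formulation common to HBV and the external snow model can be written very compactly; the Python sketch below contrasts a time-invariant with a time-variant degree-day factor (all parameter values are illustrative placeholders, not calibrated values from the study):

        import numpy as np

        def degree_day_melt(temp, swe, ddf, t_threshold=0.0):
            """Daily snowmelt (mm) from a temperature-index model.

            temp        : array of daily mean air temperatures (deg C)
            swe         : initial snow water equivalent (mm)
            ddf         : degree-day factor, scalar (time-invariant) or array (time-variant), mm/(deg C day)
            t_threshold : temperature above which melt occurs (deg C)
            """
            ddf = np.broadcast_to(ddf, temp.shape)
            melt = np.zeros_like(temp, dtype=float)
            for t in range(len(temp)):
                potential = ddf[t] * max(temp[t] - t_threshold, 0.0)
                melt[t] = min(potential, swe)      # melt limited by available snow
                swe -= melt[t]
            return melt

        temp = np.array([-2.0, 1.5, 3.0, 4.5, 2.0])
        melt_const = degree_day_melt(temp, swe=50.0, ddf=3.0)                       # time-invariant factor
        melt_var = degree_day_melt(temp, swe=50.0, ddf=np.linspace(2.0, 4.0, 5))    # time-variant factor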

  20. Stochastic theory of large-scale enzyme-reaction networks: Finite copy number corrections to rate equation models

    NASA Astrophysics Data System (ADS)

    Thomas, Philipp; Straube, Arthur V.; Grima, Ramon

    2010-11-01

    Chemical reactions inside cells occur in compartment volumes in the range of atto- to femtoliters. Physiological concentrations realized in such small volumes imply low copy numbers of interacting molecules with the consequence of considerable fluctuations in the concentrations. In contrast, rate equation models are based on the implicit assumption of infinitely large numbers of interacting molecules, or equivalently, that reactions occur in infinite volumes at constant macroscopic concentrations. In this article we compute the finite-volume corrections (or equivalently the finite copy number corrections) to the solutions of the rate equations for chemical reaction networks composed of arbitrarily large numbers of enzyme-catalyzed reactions which are confined inside a small subcellular compartment. This is achieved by applying a mesoscopic version of the quasisteady-state assumption to the exact Fokker-Planck equation associated with the Poisson representation of the chemical master equation. The procedure yields impressively simple and compact expressions for the finite-volume corrections. We prove that the predictions of the rate equations will always underestimate the actual steady-state substrate concentrations for an enzyme-reaction network confined in a small volume. In particular we show that the finite-volume corrections increase with decreasing subcellular volume, decreasing Michaelis-Menten constants, and increasing enzyme saturation. The magnitude of the corrections depends sensitively on the topology of the network. The predictions of the theory are shown to be in excellent agreement with stochastic simulations for two types of networks typically associated with protein methylation and metabolism.
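
    The copy-number fluctuations that such corrections account for can be illustrated, although not by the paper's analytic quasisteady-state procedure itself, with a plain Gillespie simulation of a single enzyme module E + S <-> C -> E + P in a small compartment (rate constants and copy numbers below are arbitrary):

        import numpy as np

        rng = np.random.default_rng(1)

        def gillespie_mm(s0=50, e0=10, k1=0.01, km1=0.1, k2=0.1, t_end=200.0):
            """Exact stochastic simulation of E + S <-> C -> E + P (copy numbers, not concentrations)."""
            s, e, c, p, t = s0, e0, 0, 0, 0.0
            while t < t_end:
                rates = np.array([k1 * e * s, km1 * c, k2 * c])   # binding, unbinding, catalysis
                total = rates.sum()
                if total == 0:
                    break
                t += rng.exponential(1.0 / total)
                r = rng.choice(3, p=rates / total)
                if r == 0:
                    e, s, c = e - 1, s - 1, c + 1
                elif r == 1:
                    e, s, c = e + 1, s + 1, c - 1
                else:
                    e, c, p = e + 1, c - 1, p + 1
            return s, c, p

        # Repeated runs reveal the fluctuations that rate equations (infinite-volume limit) ignore.
        final_substrate = [gillespie_mm()[0] for _ in range(100)]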

  1. Representations of the Stratospheric Polar Vortices in Versions 1 and 2 of the Goddard Earth Observing System Chemistry-Climate Model (GEOS CCM)

    NASA Technical Reports Server (NTRS)

    Pawson, S.; Stolarski, R.S.; Nielsen, J.E.; Perlwitz, J.; Oman, L.; Waugh, D.

    2009-01-01

    This study will document the behavior of the polar vortices in two versions of the GEOS CCM. Both versions of the model include the same stratospheric chemistry. They differ in the underlying circulation model. Version 1 of the GEOS CCM is based on the Goddard Earth Observing System, Version 4, general circulation model, which includes the finite-volume (Lin-Rood) dynamical core and physical parameterizations from the Community Climate Model, Version 3. GEOS CCM Version 2 is based on the GEOS-5 GCM, which includes a different tropospheric physics package. Baseline simulations of both models, performed at two-degree spatial resolution, show some improvements in Version 2, but also some degradation. In the Antarctic, both models show an over-persistent stratospheric polar vortex with late breakdown, but the year-to-year variations that are overestimated in Version 1 are more realistic in Version 2. The implications of this for the interactions with tropospheric climate, the Southern Annular Mode, will be discussed. In the Arctic, both model versions show a dominant dynamically forced variability, but Version 2 has a persistent warm bias in the low stratosphere and there are seasonal differences in the simulations. These differences will be quantified in terms of climate change and ozone loss. Impacts of model resolution, using simulations at one-degree and half-degree resolution, and changes in physical parameterizations (especially the gravity wave drag) will be discussed.

  2. The Deployment of IPv6 in an IPv4 World and Transition Strategies.

    ERIC Educational Resources Information Center

    Bouras, C.; Ganos, P.; Karaliotas, A.

    2003-01-01

    The current version of the IP protocol, IPv4, is the most widely used protocol in computer networks. This article describes mechanisms that can be used to facilitate the transition to the new version of the IP protocol, IPv6, and examines usability, usefulness and manageability. Describes how some of these mechanisms were applied to the Greek…

  3. Tweaked residual convolutional network for face alignment

    NASA Astrophysics Data System (ADS)

    Du, Wenchao; Li, Ke; Zhao, Qijun; Zhang, Yi; Chen, Hu

    2017-08-01

    We propose a novel Tweaked Residual Convolutional Network approach for face alignment with a two-level convolutional network architecture. Specifically, the first-level Tweaked Convolutional Network (TCN) module quickly produces a preliminary but sufficiently accurate landmark prediction by taking a low-resolution version of the detected face holistically as input. The subsequent Residual Convolutional Network (RCN) module progressively refines each landmark by taking as input the local patch extracted around the predicted landmark, which allows the Convolutional Neural Network (CNN) to extract local shape-indexed features and fine-tune the landmark position. Extensive evaluations show that the proposed Tweaked Residual Convolutional Network approach outperforms existing methods.

  4. Networked sensors for the combat forces

    NASA Astrophysics Data System (ADS)

    Klager, Gene

    2004-11-01

    Real-time and detailed information is critical to the success of ground combat forces. Current manned reconnaissance, surveillance, and target acquisition (RSTA) capabilities are not sufficient to cover battlefield intelligence gaps, provide Beyond-Line-of-Sight (BLOS) targeting, or supply the ambush-avoidance information necessary for combat forces operating in hostile situations or complex terrain and conducting military operations in urban terrain. This paper describes a current US Army program developing advanced networked unmanned/unattended sensor systems to survey these gaps and provide the Commander with real-time, pertinent information. Networked Sensors for the Combat Forces plans to develop and demonstrate a new generation of low cost distributed unmanned sensor systems organic to the RSTA Element. Networked unmanned sensors will provide remote monitoring of gaps, will increase a unit's area of coverage, and will provide the commander organic assets to complete his Battlefield Situational Awareness (BSA) picture for direct and indirect fire weapons, early warning, and threat avoidance. Current efforts include developing sensor packages for unmanned ground vehicles, small unmanned aerial vehicles, and unattended ground sensors using advanced sensor technologies. These sensors will be integrated with robust networked communications and Battle Command tools for mission planning, intelligence "reachback", and sensor data management. The network architecture design is based on a model that identifies a three-part modular design: 1) standardized sensor message protocols, 2) Sensor Data Management, and 3) Service Oriented Architecture. This simple model provides maximum flexibility for data exchange, information management and distribution. Products include: Sensor suites optimized for unmanned platforms, stationary and mobile versions of the Sensor Data Management Center, Battle Command planning tools, networked communications, and sensor management software. Details of these products and recent test results will be presented.

  5. Evaluation of multisectional and two-section particulate matter photochemical grid models in the Western United States.

    PubMed

    Morris, Ralph; Koo, Bonyoung; Yarwood, Greg

    2005-11-01

    Version 4.10s of the comprehensive air-quality model with extensions (CAMx) photochemical grid model has been developed, which includes two options for representing particulate matter (PM) size distribution: (1) a two-section representation that consists of fine (PM2.5) and coarse (PM2.5-10) modes that has no interactions between the sections and assumes all of the secondary PM is fine; and (2) a multisectional representation that divides the PM size distribution into N sections (e.g., N = 10) and simulates the mass transfer between sections because of coagulation, accumulation, evaporation, and other processes. The model was applied to Southern California using the two-section and multisection representation of PM size distribution, and we found that allowing secondary PM to grow into the coarse mode had a substantial effect on PM concentration estimates. CAMx was then applied to the Western United States for the 1996 annual period with a 36-km grid resolution using both the two-section and multisection PM representation. The Community Multiscale Air Quality (CMAQ) and Regional Modeling for Aerosol and Deposition (REMSAD) models were also applied to the 1996 annual period. Similar model performance was exhibited by the four models across the Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network monitoring networks. All four of the models exhibited fairly low annual bias for secondary PM sulfate and nitrate but with a winter overestimation and summer underestimation bias. The CAMx multisectional model estimated that coarse mode secondary sulfate and nitrate typically contribute <10% of the total sulfate and nitrate when averaged across the more rural IMPROVE monitoring network.

  6. Developing a New Wireless Sensor Network Platform and Its Application in Precision Agriculture

    PubMed Central

    Aquino-Santos, Raúl; González-Potes, Apolinar; Edwards-Block, Arthur; Virgen-Ortiz, Raúl Alejandro

    2011-01-01

    Wireless sensor networks are gaining greater attention from the research community and industrial professionals because these small pieces of “smart dust” offer great advantages due to their small size, low power consumption, easy integration and support for “green” applications. Green applications are considered a hot topic in intelligent environments, ubiquitous and pervasive computing. This work evaluates a new wireless sensor network platform and its application in precision agriculture, including its embedded operating system and its routing algorithm. To validate the technological platform and the embedded operating system, two different routing strategies were compared: hierarchical and flat. Both of these routing algorithms were tested in a small-scale network applied to a watermelon field. However, we strongly believe that this technological platform can be also applied to precision agriculture because it incorporates a modified version of LORA-CBF, a wireless location-based routing algorithm that uses cluster-based flooding. Cluster-based flooding addresses the scalability concerns of wireless sensor networks, while the modified LORA-CBF routing algorithm includes a metric to monitor residual battery energy. Furthermore, results show that the modified version of LORA-CBF functions well with both the flat and hierarchical algorithms, although it functions better with the flat algorithm in a small-scale agricultural network. PMID:22346622

  7. Developing a new wireless sensor network platform and its application in precision agriculture.

    PubMed

    Aquino-Santos, Raúl; González-Potes, Apolinar; Edwards-Block, Arthur; Virgen-Ortiz, Raúl Alejandro

    2011-01-01

    Wireless sensor networks are gaining greater attention from the research community and industrial professionals because these small pieces of "smart dust" offer great advantages due to their small size, low power consumption, easy integration and support for "green" applications. Green applications are considered a hot topic in intelligent environments, ubiquitous and pervasive computing. This work evaluates a new wireless sensor network platform and its application in precision agriculture, including its embedded operating system and its routing algorithm. To validate the technological platform and the embedded operating system, two different routing strategies were compared: hierarchical and flat. Both of these routing algorithms were tested in a small-scale network applied to a watermelon field. However, we strongly believe that this technological platform can be also applied to precision agriculture because it incorporates a modified version of LORA-CBF, a wireless location-based routing algorithm that uses cluster-based flooding. Cluster-based flooding addresses the scalability concerns of wireless sensor networks, while the modified LORA-CBF routing algorithm includes a metric to monitor residual battery energy. Furthermore, results show that the modified version of LORA-CBF functions well with both the flat and hierarchical algorithms, although it functions better with the flat algorithm in a small-scale agricultural network.

  8. A network approach to the geometric structure of shallow cloud fields

    NASA Astrophysics Data System (ADS)

    Glassmeier, F.; Feingold, G.

    2017-12-01

    The representation of shallow clouds and their radiative impact is one of the largest challenges for global climate models. While the bulk properties of cloud fields, including effects of organization, are a very active area of research, the potential of the geometric arrangement of cloud fields for the development of new parameterizations has hardly been explored. Self-organized patterns are particularly evident in the cellular structure of Stratocumulus (Sc) clouds so readily visible in satellite imagery. Inspired by similar patterns in biology and physics, we approach pattern formation in Sc fields from the perspective of natural cellular networks. Our network analysis is based on large-eddy simulations of open- and closed-cell Sc cases. We find the network structure to be neither random nor characteristic of natural convection. It is independent of macroscopic cloud field properties like the Sc regime (open vs closed) and its typical length scale (boundary layer height). The latter is a consequence of entropy maximization (Lewis's Law with parameter 0.16). The cellular pattern is on average hexagonal, where non-6 sided cells occur according to a neighbor-number distribution variance of about 2. Reflecting the continuously renewing dynamics of Sc fields, large (many-sided) cells tend to neighbor small (few-sided) cells (Aboav-Weaire Law with parameter 0.9). These macroscopic network properties emerge independent of the Sc regime because the different processes governing the evolution of closed as compared to open cells correspond to topologically equivalent network dynamics. By developing a heuristic model, we show that open- and closed-cell dynamics can both be mimicked by versions of cell division and cell disappearance and are biased towards the expansion of smaller cells. This model offers for the first time a fundamental and universal explanation for the geometric pattern of Sc clouds. It may contribute to the development of advanced Sc parameterizations. As an outlook, we discuss how a similar network approach can be applied to describe and quantify the geometric structure of shallow cumulus cloud fields.
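
    For reference, the two empirical laws cited above are commonly written in the following standard forms (a textbook parametrisation, not necessarily the exact one used in the paper; mapping the quoted values 0.16 and 0.9 onto \lambda and a is an assumption here):

        \bar{A}_n \approx \bar{A}\,\bigl[1 + \lambda\,(n - 6)\bigr] \qquad \text{(Lewis's Law)}

        m_n \approx (6 - a) + \frac{6a + \mu_2}{n} \qquad \text{(Aboav-Weaire Law)}

    where \bar{A}_n is the mean area of n-sided cells, m_n is the mean number of sides of the neighbours of an n-sided cell, and \mu_2 is the variance of the neighbour-number distribution (about 2 for the Sc networks described above).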

  9. Validation of Community Models: Identifying Events in Space Weather Model Timelines

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter

    2009-01-01

    I develop and document a set of procedures which test the quality of predictions of solar wind speed and polarity of the interplanetary magnetic field (IMF) made by coupled models of the ambient solar corona and heliosphere. The Wang-Sheeley-Arge (WSA) model is used to illustrate the application of these validation procedures. I present an algorithm which detects transitions of the solar wind from slow to high speed. I also present an algorithm which processes the measured polarity of the outward directed component of the IMF. This removes high-frequency variations to expose the longer-scale changes that reflect IMF sector changes. I apply these algorithms to WSA model predictions made using a small set of photospheric synoptic magnetograms obtained by the Global Oscillation Network Group as input to the model. The results of this preliminary validation of the WSA model (version 1.6) are summarized.
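
    The two detection steps described above can be mimicked in a few lines of Python; this is a generic sketch (thresholds and window length are illustrative, not the values used in the documented procedures): a hysteresis detector for slow-to-fast wind transitions and a median filter that suppresses high-frequency polarity flips.

        import numpy as np

        def detect_speed_transitions(speed, slow=400.0, fast=500.0):
            """Return indices where the wind switches from the slow to the fast state.

            A two-threshold (hysteresis) detector: the wind must drop below `slow`
            (km/s) before a later rise above `fast` counts as a new transition.
            """
            events, armed = [], True
            for i, v in enumerate(speed):
                if armed and v >= fast:
                    events.append(i)
                    armed = False
                elif not armed and v <= slow:
                    armed = True
            return events

        def smooth_polarity(bx_sign, window=25):
            """Median-filter the IMF polarity (+1/-1) to expose longer-scale sector changes."""
            bx_sign = np.asarray(bx_sign, dtype=float)
            half = window // 2
            padded = np.pad(bx_sign, half, mode="edge")
            return np.sign([np.median(padded[i:i + window]) for i in range(len(bx_sign))])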

  10. A provisional regulatory gene network for specification of endomesoderm in the sea urchin embryo

    NASA Technical Reports Server (NTRS)

    Davidson, Eric H.; Rast, Jonathan P.; Oliveri, Paola; Ransick, Andrew; Calestani, Cristina; Yuh, Chiou-Hwa; Minokawa, Takuya; Amore, Gabriele; Hinman, Veronica; Arenas-Mena, Cesar

    2002-01-01

    We present the current form of a provisional DNA sequence-based regulatory gene network that explains in outline how endomesodermal specification in the sea urchin embryo is controlled. The model of the network is in a continuous process of revision and growth as new genes are added and new experimental results become available; see http://www.its.caltech.edu/mirsky/endomeso.htm (End-mes Gene Network Update) for the latest version. The network contains over 40 genes at present, many newly uncovered in the course of this work, and most encoding DNA-binding transcriptional regulatory factors. The architecture of the network was approached initially by construction of a logic model that integrated the extensive experimental evidence now available on endomesoderm specification. The internal linkages between genes in the network have been determined functionally, by measurement of the effects of regulatory perturbations on the expression of all relevant genes in the network. Five kinds of perturbation have been applied: (1) use of morpholino antisense oligonucleotides targeted to many of the key regulatory genes in the network; (2) transformation of other regulatory factors into dominant repressors by construction of Engrailed repressor domain fusions; (3) ectopic expression of given regulatory factors, from genetic expression constructs and from injected mRNAs; (4) blockade of the beta-catenin/Tcf pathway by introduction of mRNA encoding the intracellular domain of cadherin; and (5) blockade of the Notch signaling pathway by introduction of mRNA encoding the extracellular domain of the Notch receptor. The network model predicts the cis-regulatory inputs that link each gene into the network. Therefore, its architecture is testable by cis-regulatory analysis. Strongylocentrotus purpuratus and Lytechinus variegatus genomic BAC recombinants that include a large number of the genes in the network have been sequenced and annotated. Tests of the cis-regulatory predictions of the model are greatly facilitated by interspecific computational sequence comparison, which affords a rapid identification of likely cis-regulatory elements in advance of experimental analysis. The network specifies genomically encoded regulatory processes between early cleavage and gastrula stages. These control the specification of the micromere lineage and of the initial veg(2) endomesodermal domain; the blastula-stage separation of the central veg(2) mesodermal domain (i.e., the secondary mesenchyme progenitor field) from the peripheral veg(2) endodermal domain; the stabilization of specification state within these domains; and activation of some downstream differentiation genes. Each of the temporal-spatial phases of specification is represented in a subelement of the network model, that treats regulatory events within the relevant embryonic nuclei at particular stages. (c) 2002 Elsevier Science (USA).

  11. Qualities and Inequalities in Online Social Networks through the Lens of the Generalized Friendship Paradox.

    PubMed

    Momeni, Naghmeh; Rabbat, Michael

    2016-01-01

    The friendship paradox is the phenomenon that in social networks, people on average have fewer friends than their friends do. The generalized friendship paradox is an extension to attributes other than the number of friends. The friendship paradox and its generalized version have gathered recent attention due to the information they provide about network structure and local inequalities. In this paper, we propose several measures of nodal qualities which capture different aspects of their activities and influence in online social networks. Using these measures we analyse the prevalence of the generalized friendship paradox over Twitter and we report high levels of prevalence (up to over 90% of nodes). We contend that this prevalence of the friendship paradox and its generalized version arise because of the hierarchical nature of the connections in the network. This hierarchy is nested as opposed to being star-like. We conclude that these paradoxes are collective phenomena not created merely by a minority of well-connected or high-attribute nodes. Moreover, our results show that a large fraction of individuals can experience the generalized friendship paradox even in the absence of a significant correlation between degrees and attributes.
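
    Checking the (generalized) friendship paradox on a graph is straightforward; the sketch below uses networkx to compare each node's degree, or an arbitrary attribute, with the mean over its neighbours (the nodal quality measures in the paper are more elaborate, so this is only a minimal illustration):

        import networkx as nx
        import numpy as np

        def paradox_fraction(G, attr=None):
            """Fraction of nodes whose neighbours have, on average, a higher degree
            (or higher `attr` value) than the node itself."""
            values = dict(G.degree()) if attr is None else nx.get_node_attributes(G, attr)
            count = 0
            for v in G:
                nbrs = list(G[v])
                if nbrs and np.mean([values[u] for u in nbrs]) > values[v]:
                    count += 1
            return count / G.number_of_nodes()

        G = nx.barabasi_albert_graph(10000, 3)      # heavy-tailed degree distribution
        print(paradox_fraction(G))                  # typically well above 0.5

        # Generalized version: attach an attribute correlated with degree.
        for v in G:
            G.nodes[v]["activity"] = G.degree(v) + np.random.rand()
        print(paradox_fraction(G, attr="activity"))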

  12. A Software Package for Neural Network Applications Development

    NASA Technical Reports Server (NTRS)

    Baran, Robert H.

    1993-01-01

    Original Backprop (Version 1.2) is an MS-DOS package of four stand-alone C-language programs that enable users to develop neural network solutions to a variety of practical problems. Original Backprop generates three-layer, feed-forward (series-coupled) networks which map fixed-length input vectors into fixed-length output vectors through an intermediate (hidden) layer of binary threshold units. Version 1.2 can handle up to 200 input vectors at a time, each having up to 128 real-valued components. The first subprogram, TSET, appends a number (up to 16) of classification bits to each input, thus creating a training set of input-output pairs. The second subprogram, BACKPROP, creates a trilayer network to do the prescribed mapping and modifies the weights of its connections incrementally until the training set is learned. The learning algorithm is the 'back-propagating error correction' procedure first described by F. Rosenblatt in 1961. The third subprogram, VIEWNET, lets the trained network be examined, tested, and 'pruned' (by the deletion of unnecessary hidden units). The fourth subprogram, DONET, creates a TSR (terminate-and-stay-resident) routine by which the finished product of the neural net design-and-training exercise can be consulted under other MS-DOS applications.

  13. MetNetMaker: a free and open-source tool for the creation of novel metabolic networks in SBML format.

    PubMed

    Forth, Thomas; McConkey, Glenn A; Westhead, David R

    2010-09-15

    An application has been developed to help with the creation and editing of Systems Biology Markup Language (SBML) format metabolic networks up to the organism scale. Networks are defined as a collection of Kyoto Encyclopedia of Genes and Genomes (KEGG) LIGAND reactions with an optional associated Enzyme Classification (EC) number for each reaction. Additional custom reactions can be defined by the user. Reactions within the network can be assigned flux constraints and compartmentalization is supported for each reaction in addition to the support for reactions that occur across compartment boundaries. Exported networks are fully SBML L2V4 compatible with an optional L2V1 export for compatibility with old versions of the COBRA toolbox. The software runs in the free Microsoft Access 2007 Runtime (Microsoft Inc.), which is included with the installer and works on Windows XP SP2 or better. Full source code is viewable in the full version of Access 2007 or 2010. Users must have a license to use the KEGG LIGAND database (free academic licensing is available). Please go to www.bioinformatics.leeds.ac.uk/~pytf/metnetmaker for software download, help and tutorials.

  14. OXlearn: a new MATLAB-based simulation tool for connectionist models.

    PubMed

    Ruh, Nicolas; Westermann, Gert

    2009-11-01

    OXlearn is a free, platform-independent MATLAB toolbox in which standard connectionist neural network models can be set up, run, and analyzed by means of a user-friendly graphical interface. Due to its seamless integration with the MATLAB programming environment, the inner workings of the simulation tool can be easily inspected and/or extended using native MATLAB commands or components. This combination of usability, transparency, and extendability makes OXlearn an efficient tool for the implementation of basic research projects or the prototyping of more complex research endeavors, as well as for teaching. Both the MATLAB toolbox and a compiled version that does not require access to MATLAB can be downloaded from http://psych.brookes.ac.uk/oxlearn/.

  15. Dynamic patterning by the Drosophila pair-rule network reconciles long-germ and short-germ segmentation

    PubMed Central

    2017-01-01

    Drosophila segmentation is a well-established paradigm for developmental pattern formation. However, the later stages of segment patterning, regulated by the “pair-rule” genes, are still not well understood at the system level. Building on established genetic interactions, I construct a logical model of the Drosophila pair-rule system that takes into account the demonstrated stage-specific architecture of the pair-rule gene network. Simulation of this model can accurately recapitulate the observed spatiotemporal expression of the pair-rule genes, but only when the system is provided with dynamic “gap” inputs. This result suggests that dynamic shifts of pair-rule stripes are essential for segment patterning in the trunk and provides a functional role for observed posterior-to-anterior gap domain shifts that occur during cellularisation. The model also suggests revised patterning mechanisms for the parasegment boundaries and explains the aetiology of the even-skipped null mutant phenotype. Strikingly, a slightly modified version of the model is able to pattern segments in either simultaneous or sequential modes, depending only on initial conditions. This suggests that fundamentally similar mechanisms may underlie segmentation in short-germ and long-germ arthropods. PMID:28953896

  16. A Two-dimensional Version of the Niblett-Bostick Transformation for Magnetotelluric Interpretations

    NASA Astrophysics Data System (ADS)

    Esparza, F.

    2005-05-01

    An imaging technique for two-dimensional magnetotelluric interpretations is developed following the well known Niblett-Bostick transformation for one-dimensional profiles. The algorithm uses a Hopfield artificial neural network to process series and parallel magnetotelluric impedances along with their analytical influence functions. The adaptive, weighted average approximation preserves part of the nonlinearity of the original problem. No initial model in the usual sense is required for the recovery of a functional model. Rather, the built-in relationship between model and data considers automatically, all at the same time, many half spaces whose electrical conductivities vary according to the data. The use of series and parallel impedances, a self-contained pair of invariants of the impedance tensor, avoids the need to decide on best angles of rotation for TE and TM separations. Field data from a given profile can thus be fed directly into the algorithm without much processing. The solutions offered by the Hopfield neural network correspond to spatial averages computed through rectangular windows that can be chosen at will. Applications of the algorithm to simple synthetic models and to the COPROD2 data set illustrate the performance of the approximation.

  17. Biocharts: a visual formalism for complex biological systems

    PubMed Central

    Kugler, Hillel; Larjo, Antti; Harel, David

    2010-01-01

    We address one of the central issues in devising languages, methods and tools for the modelling and analysis of complex biological systems, that of linking high-level (e.g. intercellular) information with lower-level (e.g. intracellular) information. Adequate ways of dealing with this issue are crucial for understanding biological networks and pathways, which typically contain huge amounts of data that continue to grow as our knowledge and understanding of a system increases. Trying to comprehend such data using the standard methods currently in use is often virtually impossible. We propose a two-tier compound visual language, which we call Biocharts, that is geared towards building fully executable models of biological systems. One of the main goals of our approach is to enable biologists to actively participate in the computational modelling effort, in a natural way. The high-level part of our language is a version of statecharts, which have been shown to be extremely successful in software and systems engineering. The statecharts can be combined with any appropriately well-defined language (preferably a diagrammatic one) for specifying the low-level dynamics of the pathways and networks. We illustrate the language and our general modelling approach using the well-studied process of bacterial chemotaxis. PMID:20022895

  18. Service-Based Extensions to an OAIS Archive for Science Data Management

    NASA Astrophysics Data System (ADS)

    Flathers, E.; Seamon, E.; Gessler, P. E.

    2014-12-01

    With new data management mandates from major funding sources such as the National Institutes for Health and the National Science Foundation, architecture of science data archive systems is becoming a critical concern for research institutions. The Consultative Committee for Space Data Systems (CCSDS), in 2002, released their first version of a Reference Model for an Open Archival Information System (OAIS). The CCSDS document (now an ISO standard) was updated in 2012 with additional focus on verifying the authenticity of data and developing concepts of access rights and a security model. The OAIS model is a good fit for research data archives, having been designed to support data collections of heterogeneous types, disciplines, storage formats, etc. for the space sciences. As fast, reliable, persistent Internet connectivity spreads, new network-available resources have been developed that can support the science data archive. A natural extension of an OAIS archive is the interconnection with network- or cloud-based services and resources. We use the Service Oriented Architecture (SOA) design paradigm to describe a set of extensions to an OAIS-type archive: purpose and justification for each extension, where and how each extension connects to the model, and an example of a specific service that meets the purpose.

  19. Improvement and speed optimization of numerical tsunami modelling program using OpenMP technology

    NASA Astrophysics Data System (ADS)

    Chernov, A.; Zaytsev, A.; Yalciner, A.; Kurkin, A.

    2009-04-01

    Currently, the basic problem of tsunami modeling is the low speed of calculations, which is unacceptable for operational warning services. Existing algorithms for numerical modeling of the hydrodynamic processes of tsunami waves were developed without taking advantage of modern computer hardware. Considerable acceleration of the calculations can be achieved by using parallel algorithms. We discuss here a new approach to parallelizing a tsunami modeling code using OpenMP technology (for multiprocessor systems with shared memory). Nowadays, multiprocessor systems are easily accessible to everyone, and their cost is much lower than that of clusters. This allows programmers to apply multithreaded algorithms on researchers' desktop computers. Another important advantage of this approach is the shared-memory mechanism: there is no need to send data over slow networks (for example, Ethernet). All memory is common to all computing threads, which yields almost linear scalability of the program. In the new version of NAMI DANCE, OpenMP multithreading provides an 80% speed gain over the single-threaded version on a dual-processor unit, and a 320% gain on a four-core processor. Thus, it was possible to considerably reduce calculation times on scientific workstations (desktops) without a complete rewrite of the program and user interfaces. Further modernization of the algorithms for preparing initial data and processing results using OpenMP looks reasonable. The final version of NAMI DANCE with increased computational speed can be used not only for research purposes but also in real-time Tsunami Warning Systems.

  20. Performance Improvements of the CYCOFOS Flow Model

    NASA Astrophysics Data System (ADS)

    Radhakrishnan, Hari; Moulitsas, Irene; Syrakos, Alexandros; Zodiatis, George; Nikolaides, Andreas; Hayes, Daniel; Georgiou, Georgios C.

    2013-04-01

    The CYCOFOS-Cyprus Coastal Ocean Forecasting and Observing System has been operational since early 2002, providing daily sea current, temperature, salinity and sea level forecasts for the next 4 and 10 days to end-users in the Levantine Basin, as needed for operational applications in marine safety, particularly oil spill and floating object predictions. The CYCOFOS flow model, like most of the coastal and sub-regional operational hydrodynamic forecasting systems of MONGOOS (the Mediterranean Oceanographic Network for the Global Ocean Observing System), is based on the POM (Princeton Ocean Model). CYCOFOS is nested with the MyOcean Mediterranean regional forecasting data and with SKIRON and ECMWF for surface forcing. The increasing demand for ever higher resolution data for coastal and offshore downstream applications motivated the parallelization of the CYCOFOS POM model. This development was carried out in the framework of the IPcycofos project, funded by the Cyprus Research Promotion Foundation. Parallel processing provides a viable solution to satisfy these demands without sacrificing accuracy or omitting any physical phenomena. Prior to the IPcycofos project there had been several attempts to parallelize the POM, for example MP-POM. These existing parallel codes rely on specific outdated hardware architectures and associated software. The objective of the IPcycofos project is to produce an operational parallel version of the CYCOFOS POM code that can replicate the results of the serial version of the POM code used in CYCOFOS. The parallelization of the CYCOFOS POM model uses the Message Passing Interface (MPI), implemented on commodity computing clusters running open source software and not depending on any specialized vendor hardware. The parallel CYCOFOS POM code is constructed in a modular fashion, allowing a fast, re-locatable, downscaled implementation. The MPI implementation takes advantage of the Cartesian nature of the POM mesh and uses the built-in functionality of MPI routines to split the mesh, using a weighting scheme, along longitude and latitude among the processors. Each processor works on its part of the model domain, following domain decomposition techniques. The new parallel CYCOFOS POM code has been benchmarked against the serial POM version of CYCOFOS for speed, accuracy, and resolution, and the results are more than satisfactory. With a higher-resolution CYCOFOS Levantine model domain, the forecasts require much less time than the coarser serial CYCOFOS POM version, with identical accuracy.

  1. Extending Transfer Entropy Improves Identification of Effective Connectivity in a Spiking Cortical Network Model

    PubMed Central

    Ito, Shinya; Hansen, Michael E.; Heiland, Randy; Lumsdaine, Andrew; Litke, Alan M.; Beggs, John M.

    2011-01-01

    Transfer entropy (TE) is an information-theoretic measure which has received recent attention in neuroscience for its potential to identify effective connectivity between neurons. Calculating TE for large ensembles of spiking neurons is computationally intensive, and has caused most investigators to probe neural interactions at only a single time delay and at a message length of only a single time bin. This is problematic, as synaptic delays between cortical neurons, for example, range from one to tens of milliseconds. In addition, neurons produce bursts of spikes spanning multiple time bins. To address these issues, here we introduce a free software package that allows TE to be measured at multiple delays and message lengths. To assess performance, we applied these extensions of TE to a spiking cortical network model (Izhikevich, 2006) with known connectivity and a range of synaptic delays. For comparison, we also investigated single-delay TE, at a message length of one bin (D1TE), and cross-correlation (CC) methods. We found that D1TE could identify 36% of true connections when evaluated at a false positive rate of 1%. For extended versions of TE, this dramatically improved to 73% of true connections. In addition, the connections correctly identified by extended versions of TE accounted for 85% of the total synaptic weight in the network. Cross correlation methods generally performed more poorly than extended TE, but were useful when data length was short. A computational performance analysis demonstrated that the algorithm for extended TE, when used on currently available desktop computers, could extract effective connectivity from 1 hr recordings containing 200 neurons in ∼5 min. We conclude that extending TE to multiple delays and message lengths improves its ability to assess effective connectivity between spiking neurons. These extensions to TE soon could become practical tools for experimentalists who record hundreds of spiking neurons. PMID:22102894
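
    A minimal, unoptimized illustration of delay-resolved transfer entropy for binary spike trains, restricted to one-bin histories and one-bin messages (the D1TE setting above), is sketched below in Python; the authors' package is far more efficient and also supports longer messages.

        import numpy as np
        from collections import Counter

        def transfer_entropy(x, y, delay=1):
            """Delay-d transfer entropy TE_{X->Y} (bits) for binary spike trains,
            using one-bin histories and one-bin messages."""
            x, y = np.asarray(x, dtype=int), np.asarray(y, dtype=int)
            triples, n = Counter(), 0
            for t in range(delay, len(y) - 1):
                triples[(y[t + 1], y[t], x[t + 1 - delay])] += 1
                n += 1
            te = 0.0
            for (y1, y0, x0), c in triples.items():
                p_xyz = c / n
                p_y0x0 = sum(v for k, v in triples.items() if k[1] == y0 and k[2] == x0) / n
                p_y1y0 = sum(v for k, v in triples.items() if k[0] == y1 and k[1] == y0) / n
                p_y0 = sum(v for k, v in triples.items() if k[1] == y0) / n
                te += p_xyz * np.log2((p_xyz / p_y0x0) / (p_y1y0 / p_y0))
            return te

        def extended_te(x, y, max_delay=16):
            """Extended TE: scan delays and keep the maximum."""
            return max(transfer_entropy(x, y, d) for d in range(1, max_delay + 1))

        x = (np.random.rand(10000) < 0.1).astype(int)
        y = np.roll(x, 3)                      # y driven by x with a 3-bin synaptic delay
        print(extended_te(x, y))               # peaks at delay 3, missed by a delay-1-only measure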

  2. A comparison of Monte Carlo-based Bayesian parameter estimation methods for stochastic models of genetic networks

    PubMed Central

    Zaikin, Alexey; Míguez, Joaquín

    2017-01-01

    We compare three state-of-the-art Bayesian inference methods for the estimation of the unknown parameters in a stochastic model of a genetic network. In particular, we introduce a stochastic version of the paradigmatic synthetic multicellular clock model proposed by Ullner et al., 2007. By introducing dynamical noise in the model and assuming that the partial observations of the system are contaminated by additive noise, we enable a principled mechanism to represent experimental uncertainties in the synthesis of the multicellular system and pave the way for the design of probabilistic methods for the estimation of any unknowns in the model. Within this setup, we tackle the Bayesian estimation of a subset of the model parameters. Specifically, we compare three Monte Carlo based numerical methods for the approximation of the posterior probability density function of the unknown parameters given a set of partial and noisy observations of the system. The schemes we assess are the particle Metropolis-Hastings (PMH) algorithm, the nonlinear population Monte Carlo (NPMC) method and the approximate Bayesian computation sequential Monte Carlo (ABC-SMC) scheme. We present an extensive numerical simulation study, which shows that while the three techniques can effectively solve the problem there are significant differences both in estimation accuracy and computational efficiency. PMID:28797087

  3. Computationally-efficient stochastic cluster dynamics method for modeling damage accumulation in irradiated materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoang, Tuan L.; Physical and Life Sciences Directorate, Lawrence Livermore National Laboratory, CA 94550; Marian, Jaime, E-mail: jmarian@ucla.edu

    2015-11-01

    An improved version of a recently developed stochastic cluster dynamics (SCD) method (Marian and Bulatov, 2012) [6] is introduced as an alternative to rate theory (RT) methods for solving coupled ordinary differential equation (ODE) systems for irradiation damage simulations. SCD circumvents by design the curse of dimensionality of the variable space that renders traditional ODE-based RT approaches inefficient when handling complex defect population comprised of multiple (more than two) defect species. Several improvements introduced here enable efficient and accurate simulations of irradiated materials up to realistic (high) damage doses characteristic of next-generation nuclear systems. The first improvement is a procedure for efficiently updating the defect reaction-network and event selection in the context of a dynamically expanding reaction-network. Next is a novel implementation of the τ-leaping method that speeds up SCD simulations by advancing the state of the reaction network in large time increments when appropriate. Lastly, a volume rescaling procedure is introduced to control the computational complexity of the expanding reaction-network through occasional reductions of the defect population while maintaining accurate statistics. The enhanced SCD method is then applied to model defect cluster accumulation in iron thin films subjected to triple ion-beam (Fe3+, He+ and H+) irradiations, for which standard RT or spatially-resolved kinetic Monte Carlo simulations are prohibitively expensive.

  4. Computationally-efficient stochastic cluster dynamics method for modeling damage accumulation in irradiated materials

    NASA Astrophysics Data System (ADS)

    Hoang, Tuan L.; Marian, Jaime; Bulatov, Vasily V.; Hosemann, Peter

    2015-11-01

    An improved version of a recently developed stochastic cluster dynamics (SCD) method (Marian and Bulatov, 2012) [6] is introduced as an alternative to rate theory (RT) methods for solving coupled ordinary differential equation (ODE) systems for irradiation damage simulations. SCD circumvents by design the curse of dimensionality of the variable space that renders traditional ODE-based RT approaches inefficient when handling complex defect population comprised of multiple (more than two) defect species. Several improvements introduced here enable efficient and accurate simulations of irradiated materials up to realistic (high) damage doses characteristic of next-generation nuclear systems. The first improvement is a procedure for efficiently updating the defect reaction-network and event selection in the context of a dynamically expanding reaction-network. Next is a novel implementation of the τ-leaping method that speeds up SCD simulations by advancing the state of the reaction network in large time increments when appropriate. Lastly, a volume rescaling procedure is introduced to control the computational complexity of the expanding reaction-network through occasional reductions of the defect population while maintaining accurate statistics. The enhanced SCD method is then applied to model defect cluster accumulation in iron thin films subjected to triple ion-beam (Fe3+, He+ and H+) irradiations, for which standard RT or spatially-resolved kinetic Monte Carlo simulations are prohibitively expensive.
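
    The τ-leaping step mentioned above advances all reaction channels at once by drawing Poisson-distributed firing counts over an interval τ; a generic sketch (not the SCD implementation, and with an arbitrary two-species example) is:

        import numpy as np

        rng = np.random.default_rng(2)

        def tau_leap(x, propensity, stoich, tau, n_steps):
            """Advance a reaction network by `n_steps` tau-leaps.

            x          : initial copy numbers, shape (n_species,)
            propensity : function x -> rates a_j(x), shape (n_reactions,)
            stoich     : stoichiometry matrix, shape (n_reactions, n_species)
            """
            x = np.array(x, dtype=int)
            for _ in range(n_steps):
                a = propensity(x)
                k = rng.poisson(a * tau)            # firings of each channel during tau
                x = np.maximum(x + k @ stoich, 0)   # update, clipping unphysical negatives
            return x

        # Example: A -> B (rate c1*A) and B -> A (rate c2*B)
        stoich = np.array([[-1, 1],
                           [1, -1]])
        prop = lambda x: np.array([0.1 * x[0], 0.05 * x[1]])
        print(tau_leap([1000, 0], prop, stoich, tau=0.1, n_steps=100))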

  5. Rapid social network assessment for predicting HIV and STI risk among men attending bars and clubs in San Diego, California.

    PubMed

    Drumright, Lydia N; Frost, Simon D W

    2010-12-01

    To test the use of a rapid assessment tool to determine social network size, and to test whether social networks with a high density of HIV/sexually transmitted infection (STI) or substance-using persons were independent predictors of HIV and STI status among men who have sex with men (MSM). We interviewed 609 MSM from 14 bars in San Diego, California, USA, using an enhanced version of the Priorities for Local AIDS Control Efforts (PLACE) methodology. Social network size was assessed using a series of 19 questions of the form 'How many people do you know that have the name X?', where X included specific male and female names (eg, Keith), use illicit substances, and have HIV. Generalised linear models were used to estimate average and group-specific network sizes, and their association with HIV status, STI history and methamphetamine use. Despite possible errors in ascertaining network size, average reported network sizes were larger for larger groups. Those who reported having HIV infection or having past STI reported significantly more HIV-infected and methamphetamine- or popper-using individuals in their social network. There was a dose-dependent effect of social network size of HIV-infected individuals on self-reported HIV status, past STI and use of methamphetamine in the last 12 months, after controlling for age, ethnicity and numbers of sexual partners in the last year. Relatively simple measures of social networks are associated with HIV/STI risk, and may provide a useful tool for targeting HIV/STI surveillance and prevention.

  6. Integrated Farm System Model Version 4.3 and Dairy Gas Emissions Model Version 3.3 Software development and distribution

    USDA-ARS?s Scientific Manuscript database

    Modeling routines of the Integrated Farm System Model (IFSM version 4.2) and Dairy Gas Emission Model (DairyGEM version 3.2), two whole-farm simulation models developed and maintained by USDA-ARS, were revised with new components for: (1) simulation of ammonia (NH3) and greenhouse gas emissions gene...

  7. Configuring a Context-Aware Middleware for Wireless Sensor Networks

    PubMed Central

    Gámez, Nadia; Cubo, Javier; Fuentes, Lidia; Pimentel, Ernesto

    2012-01-01

    In the Future Internet, applications based on Wireless Sensor Networks will have to support reconfiguration with minimal human intervention, depending on dynamic context changes in their environment. These situations create a need to build such applications as adaptive software and to include techniques that allow context acquisition and decisions about adaptation. However, contexts are usually made up of complex information acquired from heterogeneous devices and user characteristics, making them difficult to manage. So, instead of building context-aware applications from scratch, we propose to use FamiWare, a family of middleware for Ambient Intelligence specifically designed to be aware of contexts in sensor and smartphone devices. It provides both several monitoring services to acquire contexts from devices and users, and a context-awareness service to analyze and detect context changes. However, the current version of FamiWare does not allow new contexts to be automatically incorporated into the FamiWare family. To overcome this shortcoming, in this work we first present how to model the context using a metamodel to define the contexts that must be taken into account in an instantiation of FamiWare for a certain Ambient Intelligence system. Then, to configure a new context-aware version of FamiWare and to generate code ready to install on heterogeneous devices, we define a mapping that automatically transforms metamodel elements defining contexts into elements of the FamiWare family, and we also use the FamiWare configuration process to customize the new context-aware variant. Finally, we evaluate the benefits of our process, and we verify both that the new version of the middleware works as expected and that it manages contexts efficiently. PMID:23012505

  8. Minimizing the Diameter of a Network Using Shortcut Edges

    NASA Astrophysics Data System (ADS)

    Demaine, Erik D.; Zadimoghaddam, Morteza

    We study the problem of minimizing the diameter of a graph by adding k shortcut edges, for speeding up communication in an existing network design. We develop constant-factor approximation algorithms for different variations of this problem. We also show how to improve the approximation ratios using resource augmentation to allow more than k shortcut edges. We observe a close relation between the single-source version of the problem, where we want to minimize the largest distance from a given source vertex, and the well-known k-median problem. First we show that our constant-factor approximation algorithms for the general case solve the single-source problem within a constant factor. Then, using a linear-programming formulation for the single-source version, we find a (1 + ɛ)-approximation using O(k log n) shortcut edges. To show the tightness of our result, we prove that any (3/2 - ɛ)-approximation for the single-source version must use Ω(k log n) shortcut edges assuming P ≠ NP.
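
    The paper's constant-factor algorithms are not reproduced here, but the problem itself is easy to state in code: the networkx sketch below greedily adds the shortcut edge that most reduces the current diameter, a brute-force baseline that is feasible only for small graphs.

        import itertools
        import networkx as nx

        def greedy_shortcuts(G, k):
            """Add k shortcut edges, each time choosing the non-edge that minimizes
            the resulting diameter (brute-force greedy evaluation of every candidate)."""
            G = G.copy()
            for _ in range(k):
                best_edge, best_diam = None, nx.diameter(G)
                for u, v in itertools.combinations(G.nodes, 2):
                    if G.has_edge(u, v):
                        continue
                    G.add_edge(u, v)
                    d = nx.diameter(G)
                    if d < best_diam:
                        best_edge, best_diam = (u, v), d
                    G.remove_edge(u, v)
                if best_edge is None:
                    break
                G.add_edge(*best_edge)
            return G

        G = nx.path_graph(20)
        print(nx.diameter(G), nx.diameter(greedy_shortcuts(G, 3)))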

  9. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections

    PubMed Central

    Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.

    2018-01-01

    Background Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737

  10. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    PubMed

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited.

  11. Social support and well-being at mid-life among mothers of adolescents and adults with autism spectrum disorders.

    PubMed

    Smith, Leann E; Greenberg, Jan S; Seltzer, Marsha Mailick

    2012-09-01

    The present study investigated the impact of social support on the psychological well-being of mothers of adolescents and adults with ASD (n = 269). Quantity of support (number of social network members) as well as valence of support (positive support and negative support) were assessed using a modified version of the "convoy model" developed by Antonucci and Akiyama (1987). Having a larger social network was associated with improvements in maternal well-being over an 18-month period. Higher levels of negative support as well as increases in negative support over the study period were associated with increases in depressive symptoms and negative affect and decreases in positive affect. Social support predicted changes in well-being above and beyond the impact of child behavior problems. Implications for clinical practice are discussed.

  12. HONTIOR - HIGHER-ORDER NEURAL NETWORK FOR TRANSFORMATION INVARIANT OBJECT RECOGNITION

    NASA Technical Reports Server (NTRS)

    Spirkovska, L.

    1994-01-01

    Neural networks have been applied in numerous fields, including transformation invariant object recognition, wherein an object is recognized despite changes in the object's position in the input field, size, or rotation. One of the more successful neural network methods used in invariant object recognition is the higher-order neural network (HONN) method. With a HONN, known relationships are exploited and the desired invariances are built directly into the architecture of the network, eliminating the need for the network to learn invariance to transformations. This results in a significant reduction in the training time required, since the network needs to be trained on only one view of each object, not on numerous transformed views. Moreover, one hundred percent accuracy is guaranteed for images characterized by the built-in distortions, provided noise is not introduced through pixelation. The program HONTIOR implements a third-order neural network having invariance to translation, scale, and in-plane rotation built directly into the architecture. Thus, for 2-D transformation invariance, the network needs to be trained on only one view of each object. HONTIOR can also be used for 3-D transformation invariant object recognition by training the network only on a set of out-of-plane rotated views. Historically, the major drawback of HONNs has been that the size of the input field was limited to the memory required for the large number of interconnections in a fully connected network. HONTIOR solves this problem by coarse coding the input images (coding an image as a set of overlapping but offset coarser images). Using this scheme, large input fields (4096 x 4096 pixels) can easily be represented using very little virtual memory (30Mb). The HONTIOR distribution consists of three main programs. The first program contains the training and testing routines for a third-order neural network. The second program contains the same training and testing procedures as the first, but it also contains a number of functions to display and edit training and test images. Finally, the third program is an auxiliary program which calculates the included angles for a given input field size. HONTIOR is written in C language, and was originally developed for Sun3 and Sun4 series computers. Both graphic and command line versions of the program are provided. The command line version has been successfully compiled and executed both on computers running the UNIX operating system and on DEC VAX series computers running VMS. The graphic version requires the SunTools windowing environment, and therefore runs only on Sun series computers. The executable for the graphics version of HONTIOR requires 1Mb of RAM. The standard distribution medium for HONTIOR is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The package includes sample input and output data. HONTIOR was developed in 1991. Sun, Sun3 and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation.

  13. Model-Based Analysis for Qualitative Data: An Application in Drosophila Germline Stem Cell Regulation

    PubMed Central

    Pargett, Michael; Rundell, Ann E.; Buzzard, Gregery T.; Umulis, David M.

    2014-01-01

    Discovery in developmental biology is often driven by intuition that relies on the integration of multiple types of data such as fluorescent images, phenotypes, and the outcomes of biochemical assays. Mathematical modeling helps elucidate the biological mechanisms at play as the networks become increasingly large and complex. However, the available data is frequently under-utilized due to incompatibility with quantitative model tuning techniques. This is the case for stem cell regulation mechanisms explored in the Drosophila germarium through fluorescent immunohistochemistry. To enable better integration of biological data with modeling in this and similar situations, we have developed a general parameter estimation process to quantitatively optimize models with qualitative data. The process employs a modified version of the Optimal Scaling method from social and behavioral sciences, and multi-objective optimization to evaluate the trade-off between fitting different datasets (e.g. wild type vs. mutant). Using only published imaging data in the germarium, we first evaluated support for a published intracellular regulatory network by considering alternative connections of the same regulatory players. Simply screening networks against wild type data identified hundreds of feasible alternatives. Of these, five parsimonious variants were found and compared by multi-objective analysis including mutant data and dynamic constraints. With these data, the current model is supported over the alternatives, but support for a biochemically observed feedback element is weak (i.e. these data do not measure the feedback effect well). When also comparing new hypothetical models, the available data do not discriminate. To begin addressing the limitations in data, we performed a model-based experiment design and provide recommendations for experiments to refine model parameters and discriminate increasingly complex hypotheses. PMID:24626201
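    A minimal sketch of the Optimal Scaling idea referenced above (illustrative only, not the authors' pipeline): ordinal image scores are mapped onto the simulation scale by a monotone transformation fitted with isotonic regression, and the remaining residual serves as the qualitative-data objective. The arrays below are hypothetical.

# Optimal-scaling-style comparison of simulated output against ordinal data.
import numpy as np
from sklearn.isotonic import IsotonicRegression

# simulated protein levels from a candidate network model (arbitrary units)
simulated = np.array([0.12, 0.30, 0.33, 0.55, 0.80, 0.95])
# qualitative staining scores for the same conditions (ordinal categories 1..4)
ordinal_scores = np.array([1, 1, 2, 2, 3, 4])

# best monotone (order-preserving) mapping of the ordinal scores onto the simulation scale
scaler = IsotonicRegression()
scaled_scores = scaler.fit_transform(ordinal_scores, simulated)

# the residual after optimal scaling is the qualitative-data objective for this model
objective = np.sum((scaled_scores - simulated) ** 2)
print("optimally scaled scores:", scaled_scores, "objective:", objective)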

  14. Predicting CD4 count changes among patients on antiretroviral treatment: Application of data mining techniques.

    PubMed

    Kebede, Mihiretu; Zegeye, Desalegn Tigabu; Zeleke, Berihun Megabiaw

    2017-12-01

    To monitor the progress of therapy and disease progression, periodic CD4 counts are required throughout the course of HIV/AIDS care and support. The demand for CD4 count measurement has increased as ART programs have expanded over the last decade. This study aimed to predict CD4 count changes and to identify the predictors of CD4 count changes among patients on ART. A cross-sectional study was conducted at the University of Gondar Hospital from 3,104 adult patients on ART with CD4 counts measured at least twice (baseline and most recent). Data were retrieved from the HIV care clinic electronic database and patients' charts. Descriptive data were analyzed by SPSS version 20. Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology was followed to undertake the study. WEKA version 3.8 was used to conduct the predictive data mining. Before building the predictive data mining models, information gain values and correlation-based Feature Selection methods were used for attribute selection. Variables were ranked according to their relevance based on their information gain values. J48, Neural Network, and Random Forest algorithms were evaluated to assess model accuracy. The median duration of ART was 191.5 weeks. The mean CD4 count change was 243 (SD 191.14) cells per microliter. Overall, 2427 (78.2%) patients had their CD4 counts increased by at least 100 cells per microliter, while 4% had a decline from the baseline CD4 value. Baseline variables including age, educational status, CD8 count, ART regimen, and hemoglobin levels predicted CD4 count changes with predictive accuracies of J48, Neural Network, and Random Forest being 87.1%, 83.5%, and 99.8%, respectively. The Random Forest algorithm achieved higher accuracy than both J48 and the artificial neural network. The precision, sensitivity and recall values of Random Forest were also more than 99%. Highly accurate prediction results were obtained using the Random Forest algorithm. This algorithm could be used in a low-resource setting to build a web-based prediction model for CD4 count changes. Copyright © 2017 Elsevier B.V. All rights reserved.
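    A rough scikit-learn analogue of the workflow above (the study used WEKA, so this is only a sketch of the same idea, not the authors' setup; the CSV file and column names are hypothetical):

# Predict CD4 count change categories with a Random Forest and 10-fold cross-validation.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

data = pd.read_csv("art_cohort.csv")
features = ["age", "educational_status", "baseline_cd8", "art_regimen", "hemoglobin"]
X = pd.get_dummies(data[features])          # one-hot encode categorical predictors
y = data["cd4_change_category"]             # e.g., "increase>=100" vs "decline"

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validated accuracy
print(f"Mean 10-fold accuracy: {scores.mean():.3f}")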

  15. NQS - NETWORK QUEUING SYSTEM, VERSION 2.0 (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Walter, H.

    1994-01-01

    The Network Queuing System, NQS, is a versatile batch and device queuing facility for a single Unix computer or a group of networked computers. With the Unix operating system as a common interface, the user can invoke the NQS collection of user-space programs to move batch and device jobs freely around the different computer hardware tied into the network. NQS provides facilities for remote queuing, request routing, remote status, queue status controls, batch request resource quota limits, and remote output return. This program was developed as part of an effort aimed at tying together diverse UNIX based machines into NASA's Numerical Aerodynamic Simulator Processing System Network. This revision of NQS allows for creating, deleting, adding and setting of complexes that aid in limiting the number of requests to be handled at one time. It also has improved device-oriented queues along with some revision of the displays. NQS was designed to meet the following goals: 1) Provide for the full support of both batch and device requests. 2) Support all of the resource quotas enforceable by the underlying UNIX kernel implementation that are relevant to any particular batch request and its corresponding batch queue. 3) Support remote queuing and routing of batch and device requests throughout the NQS network. 4) Support queue access restrictions through user and group access lists for all queues. 5) Enable networked output return of both output and error files to possibly remote machines. 6) Allow mapping of accounts across machine boundaries. 7) Provide friendly configuration and modification mechanisms for each installation. 8) Support status operations across the network, without requiring a user to log in on remote target machines. 9) Provide for file staging or copying of files for movement to the actual execution machine. To support batch and device requests, NQS v.2 implements three queue types--batch, device and pipe. Batch queues hold and prioritize batch requests; device queues hold and prioritize device requests; pipe queues transport both batch and device requests to other batch, device, or pipe queues at local or remote machines. Unique to batch queues are resource quota limits that restrict the amounts of different resources that a batch request can consume during execution. Unique to each device queue is a set of one or more devices, such as a line printer, to which requests can be sent for execution. Pipe queues have associated destinations to which they route and deliver requests. If the proper destination machine is down or unreachable, pipe queues are able to requeue the request and deliver it later when the destination is available. All NQS network conversations are performed using the Berkeley socket mechanism as ported into the respective vendor kernels. NQS is written in C language. The generic UNIX version (ARC-13179) has been successfully implemented on a variety of UNIX platforms, including Sun3 and Sun4 series computers, SGI IRIS computers running IRIX 3.3, DEC computers running ULTRIX 4.1, AMDAHL computers running UTS 1.3 and 2.1, platforms running BSD 4.3 UNIX. The IBM RS/6000 AIX version (COS-10042) is a vendor port. NQS 2.0 will also communicate with the Cray Research, Inc. and Convex, Inc. versions of NQS. The standard distribution medium for either machine version of NQS 2.0 is a 60Mb, QIC-24, .25 inch streaming magnetic tape cartridge in UNIX tar format. Upon request the generic UNIX version (ARC-13179) can be provided in UNIX tar format on alternate media. 
Please contact COSMIC to discuss the availability and cost of media to meet your specific needs. An electronic copy of the NQS 2.0 documentation is included on the program media. NQS 2.0 was released in 1991. The IBM RS/6000 port of NQS was developed in 1992. IRIX is a trademark of Silicon Graphics Inc. IRIS is a registered trademark of Silicon Graphics Inc. UNIX is a registered trademark of UNIX System Laboratories Inc. Sun3 and Sun4 are trademarks of Sun Microsystems Inc. DEC and ULTRIX are trademarks of Digital Equipment Corporation.

  16. Trusted Network Interpretation of the Trusted Computer System Evaluation Criteria. Version 1.

    DTIC Science & Technology

    1987-07-01

    for Secure Computer Systems, MTR-3153, The MITRE Corporation, Bedford, MA, June 1975. See, for example, M. D. Abrams and H. J. Podell, Tutorial ... References: Abrams, M. D. and H. J. Podell, Tutorial: Computer and Network Security, IEEE Computer Society Press, 1987. Addendum to the ...

  17. Children of Katrina: Lessons Learned about Postdisaster Symptoms and Recovery Patterns

    ERIC Educational Resources Information Center

    Kronenberg, Mindy E.; Hansel, Tonya Cross; Brennan, Adrianne M.; Osofsky, Howard J.; Osofsky, Joy D.; Lawrason, Beverly

    2010-01-01

    Trauma symptoms, recovery patterns, and life stressors of children between the ages of 9 and 18 (n = 387) following Hurricane Katrina were assessed using an adapted version of the National Child Traumatic Stress Network Hurricane Assessment and Referral Tool for Children and Adolescents (National Child Traumatic Stress Network, 2005). Based on…

  18. GeoSciML v3.0 - a significant upgrade of the CGI-IUGS geoscience data model

    NASA Astrophysics Data System (ADS)

    Raymond, O.; Duclaux, G.; Boisvert, E.; Cipolloni, C.; Cox, S.; Laxton, J.; Letourneau, F.; Richard, S.; Ritchie, A.; Sen, M.; Serrano, J.-J.; Simons, B.; Vuollo, J.

    2012-04-01

    GeoSciML version 3.0 (http://www.geosciml.org), released in late 2011, is the latest version of the CGI-IUGS* Interoperability Working Group geoscience data interchange standard. The new version is a significant upgrade and refactoring of GeoSciML v2 which was released in 2008. GeoSciML v3 has already been adopted by several major international interoperability initiatives, including OneGeology, the EU INSPIRE program, and the US Geoscience Information Network, as their standard data exchange format for geoscience data. GeoSciML v3 makes use of recently upgraded versions of several Open Geospatial Consortium (OGC) and ISO data transfer standards, including GML v3.2, SWE Common v2.0, and Observations and Measurements v2 (ISO 19156). The GeoSciML v3 data model has been refactored from a single large application schema with many packages, into a number of smaller, but related, application schema modules with individual namespaces. This refactoring allows the use and future development of modules of GeoSciML (e.g., GeologicUnit, GeologicStructure, GeologicAge, Borehole) in smaller, more manageable units. As a result of this refactoring and the integration with new OGC and ISO standards, GeoSciML v3 is not backward compatible with previous GeoSciML versions. The scope of GeoSciML has been extended in version 3.0 to include new models for geomorphological data (a Geomorphology application schema), and for geological specimens, geochronological interpretations, and metadata for geochemical and geochronological analyses (a LaboratoryAnalysis-Specimen application schema). In addition, there is better support for borehole data, and the PhysicalProperties model now supports a wider range of petrophysical measurements. The previously used CGI_Value data type has been superseded in favour of externally governed data types provided by OGC's SWE Common v2 and GML v3.2 data standards. The GeoSciML v3 release includes worked examples of best practice in delivering geochemical analytical data using the Observations and Measurements (ISO19156) and SWE Common v2 models. The GeoSciML v3 data model does not include vocabularies to support the data model. However, it does provide a standard pattern to reference controlled vocabulary concepts using HTTP-URIs. The international GeoSciML community has developed distributed RDF-based geoscience vocabularies that can be accessed by GeoSciML web services using the standard pattern recommended in GeoSciML v3. GeoSciML v3 is the first version of GeoSciML that will be accompanied by web service validation tools using Schematron rules. For example, these validation tools may check for compliance of a web service to a particular profile of GeoSciML, or for logical consistency of data content that cannot be enforced by the application schemas. This validation process will support accreditation of GeoSciML services and a higher degree of semantic interoperability. * International Union of Geological Sciences Commission for Management and Application of Geoscience Information (CGI-IUGS)

  19. CUNY+ Web: Usability Study of the Web-Based GUI Version of the Bibliographic Database of the City University of New York (CUNY).

    ERIC Educational Resources Information Center

    Oulanov, Alexei; Pajarillo, Edmund J. Y.

    2002-01-01

    Describes the usability evaluation of the CUNY (City University of New York) information system in Web and Graphical User Interface (GUI) versions. Compares results to an earlier usability study of the basic information database available on CUNY's wide-area network and describes the applicability of the previous usability instrument to this…

  20. Publisher Correction: Evolutionary adaptations to new environments generally reverse plastic phenotypic changes.

    PubMed

    Ho, Wei-Chin; Zhang, Jianzhi

    2018-02-21

    The originally published HTML version of this Article contained errors in the three equations in the Methods sub-section 'Metabolic network analysis', whereby the Greek letter eta (η) was inadvertently used in place of beta (β) during the production process. These errors have now been corrected in the HTML version of the Article; the PDF was correct at the time of publication.

  1. Standardized reporting for rapid relative effectiveness assessments of pharmaceuticals.

    PubMed

    Kleijnen, Sarah; Pasternack, Iris; Van de Casteele, Marc; Rossi, Bernardette; Cangini, Agnese; Di Bidino, Rossella; Jelenc, Marjetka; Abrishami, Payam; Autti-Rämö, Ilona; Seyfried, Hans; Wildbacher, Ingrid; Goettsch, Wim G

    2014-11-01

    Many European countries perform rapid assessments of the relative effectiveness (RE) of pharmaceuticals as part of the reimbursement decision making process. Increased sharing of information on RE across countries may save costs and reduce duplication of work. The objective of this article is to describe the development of a tool for rapid assessment of RE of new pharmaceuticals that enter the market, the HTA Core Model® for Rapid Relative Effectiveness Assessment (REA) of Pharmaceuticals. Eighteen member organisations of the European Network of Health Technology Assessment (EUnetHTA) participated in the development of the model. Different versions of the model were developed and piloted in this collaboration and adjusted accordingly based on feedback on the content and feasibility of the model. The final model deviates from the traditional HTA Core Model® used for assessing other types of technologies. This is due to the limited scope (strong focus on RE), the timing of the assessment (just after market authorisation), and strict timelines (e.g. 90 days) required for performing the assessment. The number of domains and assessment elements was limited and it was decided that the primary information sources should preferably be a submission file provided by the marketing authorisation holder and the European Public Assessment Report. The HTA Core Model® for Rapid REA (version 3.0) was developed to produce standardised transparent RE information of pharmaceuticals. Further piloting can provide input for possible improvements, such as further refining the assessment elements and new methodological guidance on relevant areas.

  2. Expansion of the Center for Network Innovation and Experimentation (CENETIX) Network to a Worldwide Presence

    DTIC Science & Technology

    2006-09-01

    data transform set contains: the security protocol (AH and/or ESP), connection mode (tunnel or transport), encryption information (DES, 3DES, AES) ... Management Information Base, version 2 (MIB-2) objects are variables that contain data about the system. They are defined as part of the Simple Network ... Avon Park was configured for access on the concentrator. Security Association (SA): a security association contains all of the information ...

  3. A Fresh Look at Internet Protocol Version 6 (IPv6) for Department of Defense (DoD) Networks

    DTIC Science & Technology

    2010-08-01

    since system administration practices (such as the use of security appliances) depend heavily on tools for network management, diagnosis and protection ... are mobile ad hoc networks (MANETs), and yet there is limited practical experience with MANETs and their performance. Further, the interaction between ... Systems; FCS, Future Combat System; IETF, Internet Engineering Task Force; ISAT, Information Science and Technology; BAST, Board on Army Science and ...

  4. The DSFPN, a new neural network for optical character recognition.

    PubMed

    Morns, L P; Dlay, S S

    1999-01-01

    A new type of neural network for recognition tasks is presented in this paper. The network, called the dynamic supervised forward-propagation network (DSFPN), is based on the forward-only version of the counterpropagation network (CPN). The DSFPN trains using a supervised algorithm and can grow dynamically during training, allowing subclasses in the training data to be learnt in an unsupervised manner. It is shown to train in times comparable to the CPN while giving better classification accuracies than the popular backpropagation network. Both Fourier descriptors and wavelet descriptors are used for image preprocessing, and the wavelet descriptors are shown to give far better performance.
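    An illustrative sketch of the kind of contour preprocessing mentioned above (not the paper's code): translation-, scale- and rotation-invariant Fourier descriptors computed from a character boundary. The input `contour` is assumed to be an ordered (N, 2) array of boundary pixel coordinates.

# Fourier descriptors of a closed character contour, with simple invariance normalisation.
import numpy as np

def fourier_descriptors(contour: np.ndarray, n_coeffs: int = 16) -> np.ndarray:
    z = contour[:, 0] + 1j * contour[:, 1]    # boundary as a complex signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                           # drop DC term -> translation invariance
    mags = np.abs(coeffs)                     # magnitudes -> rotation/start-point invariance
    mags /= mags[1] if mags[1] != 0 else 1.0  # normalise by first harmonic -> scale invariance
    return mags[1:n_coeffs + 1]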

  5. SSER: Species specific essential reactions database.

    PubMed

    Labena, Abraham A; Ye, Yuan-Nong; Dong, Chuan; Zhang, Fa-Z; Guo, Feng-Biao

    2017-04-19

    Essential reactions are vital components of cellular networks. They are the foundations of synthetic biology and are potential candidate targets for antimetabolic drug design. In particular, if a single reaction is catalyzed by multiple enzymes, inhibiting the reaction would be a better option than targeting the enzymes or the corresponding enzyme-encoding genes. Existing databases such as BRENDA, BiGG, KEGG, Bio-models, Biosilico, and many others offer useful and comprehensive information on biochemical reactions. However, none of these databases focuses specifically on essential reactions. Therefore, building a centralized repository for this class of reactions would be of great value. Here, we present a species-specific essential reactions database (SSER). The current version comprises essential biochemical and transport reactions of twenty-six organisms, which are identified via flux balance analysis (FBA) combined with manual curation on experimentally validated metabolic network models. Quantitative data on the number of essential reactions, the number of essential reactions associated with their respective enzyme-encoding genes, and shared essential reactions across organisms are the main contents of the database. SSER would be a prime source to obtain essential reactions data and related gene and metabolite information, and it can significantly facilitate metabolic network model reconstruction and analysis, and drug target discovery studies. Users can browse, search, compare and download the essential reactions of organisms of their interest through the website http://cefg.uestc.edu.cn/sser.
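    A toy illustration of the FBA-based essentiality test described above (not the SSER pipeline): maximise a biomass-like flux subject to steady state S·v = 0, then knock each reaction out and check whether growth collapses. The stoichiometric matrix, bounds and threshold are hypothetical.

# Reaction essentiality by flux balance analysis on a toy network.
import numpy as np
from scipy.optimize import linprog

S = np.array([[ 1, -1,  0,  0],     # 3 metabolites x 4 reactions (columns = reactions)
              [ 0,  1, -1, -1],
              [ 0,  0,  1, -1]])
bounds = [(0, 10)] * 4
biomass = 3                          # index of the reaction treated as the growth objective

def max_growth(knockout=None):
    b = list(bounds)
    if knockout is not None:
        b[knockout] = (0, 0)         # force zero flux through the knocked-out reaction
    res = linprog(c=-np.eye(4)[biomass], A_eq=S, b_eq=np.zeros(3), bounds=b)
    return -res.fun if res.success else 0.0

wild_type = max_growth()
essential = [j for j in range(4) if j != biomass
             and max_growth(knockout=j) < 0.05 * wild_type]  # <5% of wild-type growth
print("Essential reaction indices:", essential)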

  6. An approach to verification and validation of a reliable multicasting protocol: Extended Abstract

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1995-01-01

    This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. This initial version did not handle off-nominal cases such as network partitions or site failures. Meanwhile, the V&V team concurrently developed a formal model of the requirements using a variant of SCR-based state tables. Based on these requirements tables, the V&V team developed test cases to exercise the implementation. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test in the model and implementation agreed, then the test either found a potential problem or verified a required behavior. However, if the execution of a test was different in the model and implementation, then the differences helped identify inconsistencies between the model and implementation. In either case, the dialogue between both teams drove the co-evolution of the model and implementation. We have found that this interactive, iterative approach to development allows software designers to focus on delivery of nominal functionality while the V&V team can focus on analysis of off nominal cases. Testing serves as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP. Although RMP has provided our research effort with a rich set of test cases, it also has practical applications within NASA. For example, RMP is being considered for use in the NASA EOSDIS project due to its significant performance benefits in applications that need to replicate large amounts of data to many network sites.

  7. Multi-model perspectives and inter-comparison of soil moisture and evapotranspiration in East Africa—an application of Famine Early Warning Systems Network (FEWS NET) Land Data Assimilation System (FLDAS)

    NASA Astrophysics Data System (ADS)

    Pervez, M. S.; McNally, A.; Arsenault, K. R.

    2017-12-01

    Convergence of evidence from different agro-hydrologic sources is particularly important for drought monitoring in data-sparse regions. In Africa, a combination of remote sensing and land surface modeling experiments is used to evaluate past, present and future drought conditions. The Famine Early Warning Systems Network (FEWS NET) Land Data Assimilation System (FLDAS) routinely simulates daily soil moisture, evapotranspiration (ET) and other variables over Africa using multiple models and inputs. We found that Noah 3.3, Variable Infiltration Capacity (VIC) 4.1.2, and Catchment Land Surface Model-based FLDAS simulations of monthly soil moisture percentile maps captured concurrent drought and water surplus episodes effectively over East Africa. However, the results are sensitive to the selection of land surface model and hydrometeorological forcings. We seek to identify sources of uncertainty (input, model, parameter) to eventually improve the accuracy of FLDAS outputs. In the absence of in situ data, previous work used European Space Agency Climate Change Initiative Soil Moisture (CCI-SM) data derived from merged active-passive microwave remote sensing to evaluate FLDAS soil moisture, and found that during the high rainfall months of April-May and November-December Noah-based soil moisture correlates well with CCI-SM over the Greater Horn of Africa region. We have found good correlations (r>0.6) for FLDAS Noah 3.3 ET anomalies and Operational Simplified Surface Energy Balance (SSEBop) ET over East Africa. Recently, SSEBop ET estimates (version 4) were improved by implementing a land surface temperature correction factor. We re-evaluate the correlations between FLDAS ET and version 4 SSEBop ET. To further investigate the reasons for differences between models, we evaluate FLDAS soil moisture with Advanced Scatterometer and SMAP soil moisture, and FLDAS outputs with MODIS and AVHRR normalized difference vegetation index. By exploring longer historical time series and near-real-time products, we aim to strengthen the convergence of evidence for understanding historical drought, improving monitoring and forecasting, and better characterizing the uncertainties of water availability estimates over Africa.

  8. Development of attention networks and their interactions in childhood.

    PubMed

    Pozuelos, Joan P; Paz-Alonso, Pedro M; Castillo, Alejandro; Fuentes, Luis J; Rueda, M Rosario

    2014-10-01

    In the present study, we investigated developmental trajectories of alerting, orienting, and executive attention networks and their interactions over childhood. Two cross-sectional experiments were conducted with different samples of 6- to 12-year-old children using modified versions of the attention network task (ANT). In Experiment 1 (N = 106), alerting and orienting cues were independently manipulated, thus allowing examination of interactions between these 2 networks, as well as between them and the executive attention network. In Experiment 2 (N = 159), additional changes were made to the task in order to foster exogenous orienting cues. Results from both studies consistently revealed separate developmental trajectories for each attention network. Children younger than 7 years exhibited stronger benefits from having an alerting auditory signal prior to the target presentation. Developmental changes in orienting were mostly observed on response accuracy between middle and late childhood, whereas executive attention showed increases in efficiency between 7 years and older ages, and further improvements in late childhood. Of importance, across both experiments, significant interactions between alerting and orienting, as well as between each of these and the executive attention network, were observed. Alerting cues led to speeding shifts of attention and enhancing orienting processes. Also, both alerting and orienting cues modulated the magnitude of the flanker interference effect. These findings inform current theoretical models of human attention and its development, characterizing for the first time, the age-related course of attention networks interactions that, present in adults, stem from further refinements over childhood.

  9. Effects of substrate network topologies on competition dynamics

    NASA Astrophysics Data System (ADS)

    Lee, Sang Hoon; Jeong, Hawoong

    2006-08-01

    We study competition dynamics, based on the minority game, on various substrate network structures. We observe the effects of the network topologies by investigating the volatility of the system and the structure of follower networks. The topology of the substrate structures significantly influences system efficiency, as represented by the volatility, and such substrate networks are shown to amplify the herding effect and cause inefficiency in most cases. The follower networks emerging from the leadership structure show a power-law incoming degree distribution. This study shows the emergence of scale-free structures of leadership in the minority game and the effects of the interaction among players on the networked version of the game.
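    A baseline minority game sketch, to show how the volatility sigma^2/N referred to above is typically measured; the substrate-network coupling studied in the paper is not included, and the parameters are illustrative.

# Standard minority game: agents pick +1/-1 from random strategy tables; the minority side wins.
import numpy as np

rng = np.random.default_rng(0)
N, M, S, T = 101, 3, 2, 2000                              # agents, memory, strategies, rounds
strategies = rng.choice([-1, 1], size=(N, S, 2 ** M))     # random lookup tables over histories
scores = np.zeros((N, S))
history = rng.integers(0, 2 ** M)                         # encoded history of past minority sides
attendance = []

for _ in range(T):
    best = scores.argmax(axis=1)                          # each agent's currently best strategy
    actions = strategies[np.arange(N), best, history]     # -1 or +1
    A = actions.sum()
    attendance.append(A)
    minority = -np.sign(A) if A != 0 else rng.choice([-1, 1])
    scores += (strategies[:, :, history] == minority)     # reward strategies predicting the minority
    history = ((history << 1) | (minority > 0)) % (2 ** M)

print("volatility sigma^2/N =", np.var(attendance) / N)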

  10. A recurrent network mechanism of time integration in perceptual decisions.

    PubMed

    Wong, Kong-Fatt; Wang, Xiao-Jing

    2006-01-25

    Recent physiological studies using behaving monkeys revealed that, in a two-alternative forced-choice visual motion discrimination task, reaction time was correlated with ramping of spike activity of lateral intraparietal cortical neurons. The ramping activity appears to reflect temporal accumulation, on a timescale of hundreds of milliseconds, of sensory evidence before a decision is reached. To elucidate the cellular and circuit basis of such integration times, we developed and investigated a simplified two-variable version of a biophysically realistic cortical network model of decision making. In this model, slow time integration can be achieved robustly if excitatory reverberation is primarily mediated by NMDA receptors; our model with only fast AMPA receptors at recurrent synapses produces decision times that are not comparable with experimental observations. Moreover, we found two distinct modes of network behavior, in which decision computation by winner-take-all competition is instantiated with or without attractor states for working memory. Decision process is closely linked to the local dynamics, in the "decision space" of the system, in the vicinity of an unstable saddle steady state that separates the basins of attraction for the two alternative choices. This picture provides a rigorous and quantitative explanation for the dependence of performance and response time on the degree of task difficulty, and the reason for which reaction times are longer in error trials than in correct trials as observed in the monkey experiment. Our reduced two-variable neural model offers a simple yet biophysically plausible framework for studying perceptual decision making in general.
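    A schematic two-variable winner-take-all integration, illustrating the kind of reduced dynamics discussed above; the equations, gain function and parameters below are generic assumptions, not the paper's exact reduced model.

# Two mutually inhibiting, self-exciting units; stimulus coherence biases one choice.
import numpy as np

def simulate(coherence=0.1, T=2.0, dt=0.001, tau=0.1, noise=0.02, seed=0):
    rng = np.random.default_rng(seed)
    s1 = s2 = 0.1
    f = lambda x: 1.0 / (1.0 + np.exp(-10 * (x - 0.5)))   # sigmoidal gain
    for _ in range(int(T / dt)):
        i1 = 0.3 * (1 + coherence)                         # input favouring choice 1
        i2 = 0.3 * (1 - coherence)
        ds1 = (-s1 + f(1.2 * s1 - 1.0 * s2 + i1)) / tau
        ds2 = (-s2 + f(1.2 * s2 - 1.0 * s1 + i2)) / tau
        s1 += dt * ds1 + noise * np.sqrt(dt) * rng.standard_normal()
        s2 += dt * ds2 + noise * np.sqrt(dt) * rng.standard_normal()
    return 1 if s1 > s2 else 2                             # choice = unit that wins the competition

choices = [simulate(seed=k) for k in range(20)]
print("fraction choosing the favoured alternative:", choices.count(1) / len(choices))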

  11. Complex life cycles in a pond food web: effects of life stage structure and parasites on network properties, trophic positions and the fit of a probabilistic niche model.

    PubMed

    Preston, Daniel L; Jacobs, Abigail Z; Orlofske, Sarah A; Johnson, Pieter T J

    2014-03-01

    Most food webs use taxonomic or trophic species as building blocks, thereby collapsing variability in feeding linkages that occurs during the growth and development of individuals. This issue is particularly relevant to integrating parasites into food webs because parasites often undergo extreme ontogenetic niche shifts. Here, we used three versions of a freshwater pond food web with varying levels of node resolution (from taxonomic species to life stages) to examine how complex life cycles and parasites alter web properties, the perceived trophic position of organisms, and the fit of a probabilistic niche model. Consistent with prior studies, parasites increased most measures of web complexity in the taxonomic species web; however, when nodes were disaggregated into life stages, the effects of parasites on several network properties (e.g., connectance and nestedness) were reversed, due in part to the lower trophic generality of parasite life stages relative to free-living life stages. Disaggregation also reduced the trophic level of organisms with either complex or direct life cycles and was particularly useful when including predation on parasites, which can inflate trophic positions when life stages are collapsed. Contrary to predictions, disaggregation decreased network intervality and did not enhance the fit of a probabilistic niche model to the food webs with parasites. Although the most useful level of biological organization in food webs will vary with the questions of interest, our results suggest that disaggregating species-level nodes may refine our perception of how parasites and other complex life cycle organisms influence ecological networks.
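    A small sketch of two of the web properties discussed above, computed from a binary predation matrix A (A[i, j] = 1 when consumer j eats resource i); the node set and matrix here are hypothetical.

# Connectance, generality and prey-averaged trophic level from a toy predation matrix.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
S = A.shape[0]

connectance = A.sum() / S ** 2           # links / S^2
generality = A.sum(axis=0)               # number of resources per consumer

# prey-averaged trophic level: TL_j = 1 + mean TL of j's resources
TL = np.ones(S)
for _ in range(100):                     # simple fixed-point iteration
    for j in range(S):
        prey = np.nonzero(A[:, j])[0]
        TL[j] = 1 + (TL[prey].mean() if prey.size else 0)

print(connectance, generality, TL)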

  12. Application of the simplex method to the optimal adjustment of the parameters of a ventilation network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamba, G.M.; Jacques, E.; Patigny, J.

    1995-12-31

    Literature is rather abundant on the topic of steady-state network analysis programs. Many versions exist; some of them have real extended facilities such as full graphical manipulation, fire simulation in motion, etc. These programs are certainly of great help to any ventilation planning and often assist the ventilation engineer in his operational decision making. However, whatever the efficiency of the calculation algorithms might be, their weak point still is the overall validity of the model. This numerical model, apart from maybe the questionable application of some physical laws, depends directly on the quality of the data used to identify its most influential parameters such as the passive (resistance) or active (fan) characteristic of each of the branches in the network. Considering the non-linear character of the problem and the great number of variables involved, finding the closest numerical model of a real mine ventilation network is without any doubt a very difficult problem. This problem, often referred to as the parameter adjustment problem, is in almost every practical case solved on an experimental and "feeling" basis. Only a few papers put forward a mathematical solution based on a least-squares approach as the best fit criterion. The aim of this paper is to examine the possibility of applying the well-known simplex method to this problem. The performance of this method and its capability to reach the global optimum which corresponds to the best fit is discussed and compared to that of other methods.

  13. Python Processing and Version Control using VisTrails for the Netherlands Hydrological Instrument (Invited)

    NASA Astrophysics Data System (ADS)

    Verkaik, J.

    2013-12-01

    The Netherlands Hydrological Instrument (NHI) model predicts water demands in periods of drought, supporting the Dutch decision makers in taking operational as well as long-term decisions with respect to the water supply. Other applications of NHI are predicting fresh-salt interaction, nutrient loadings, and agriculture change. The NHI model consists of several coupled models: a saturated groundwater model (MODFLOW), an unsaturated groundwater model (MetaSWAP), a sub-catchment surface water model (MOZART), and a distribution network of surface waters model (DM/SOBEK). Each of these models requires specific, usually large, input data that may be the result of sophisticated schematization workflows. Input data can also be dependent on each other; for example, the precipitation data is input for the unsaturated zone model (cells) as well as for the surface water models (polygons). For efficient data management, we developed several Python tools such that the modeler or stakeholder can use the model in a user-friendly manner, and data is managed in a consistent, transparent and reproducible way. Two open source Python tools are presented here: the data version control module for the workflow manager VisTrails called FileSync, and the NHI model control script that uses FileSync. VisTrails is an open-source scientific workflow and provenance management system that provides support for simulations, data exploration and visualization. Since VisTrails does not directly support version control, we developed a version control module called FileSync. With this generic module, the user can synchronize data from and to his workflow through a dialog window. The FileSync dialog calls the FileSync script, which is command-line based and performs the actual data synchronization. This script allows the user to easily create a model repository, upload and download data, create releases and define scenarios. The data synchronization approach applied here differs from systems such as Subversion or Git, since these systems do not perform well for large (binary) model data files. For this reason, a new concept of parameterization and data splitting has been implemented. Each file, or set of files, is uniquely labeled as a parameter, and for this parameter metadata is maintained by Subversion. The metadata contains file hashes to identify data content and the location where the actual bulk data are stored, which can be reached by FTP. The NHI model control script is a command-line driven Python script for pre-processing, running, and post-processing the NHI model and uses one single configuration file for all computational kernels. This configuration file is an easy-to-use, keyword-driven, Windows INI-file, having separate sections for all the kernels. It also includes a FileSync data section where the user can specify version controlled model data to be used as input. The NHI control script keeps all the data consistent during the pre-processing. Furthermore, this script is able to perform model state handling when the NHI model is used for ensemble forecasting.
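    A conceptual sketch of the metadata/bulk-data split described above (not the actual FileSync module): version control keeps a lightweight INI metadata file with file hashes for each parameter, while the bulk binary data live on an FTP server. Paths, function names and the metadata layout are hypothetical.

# Write per-parameter metadata (file hashes + FTP location) suitable for version control.
import configparser
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    return hashlib.sha1(path.read_bytes()).hexdigest()

def write_parameter_metadata(name: str, files: list[Path], ftp_url: str,
                             out_dir: Path) -> Path:
    """Write an INI metadata file for one model parameter (a set of data files)."""
    meta = configparser.ConfigParser()
    meta["parameter"] = {"name": name, "ftp_location": ftp_url}
    meta["hashes"] = {f.name: file_hash(f) for f in files}
    out = out_dir / f"{name}.ini"
    with out.open("w") as fh:
        meta.write(fh)
    return out   # this small file, not the bulk data, goes into Subversion

# Example: register the precipitation input shared by MetaSWAP and MOZART
# write_parameter_metadata("precipitation", [Path("precip_2013.idf")],
#                          "ftp://example.org/nhi/precipitation/", Path("meta"))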

  14. Overview and Evaluation of the Community Multiscale Air Quality (CMAQ) Modeling System Version 5.2

    EPA Science Inventory

    A new version of the Community Multiscale Air Quality (CMAQ) model, version 5.2 (CMAQv5.2), is currently being developed, with a planned release date in 2017. The new model includes numerous updates from the previous version of the model (CMAQv5.1). Specific updates include a new...

  15. Using Rasch-models to compare the 30-, 20-, and 12-items version of the general health questionnaire taking four recoding schemes into account.

    PubMed

    Alexandrowicz, Rainer W; Friedrich, Fabian; Jahn, Rebecca; Soulier, Nathalie

    2015-01-01

    The present study compares the 30-, 20-, and 12-item versions of the General Health Questionnaire (GHQ) in the original coding and four different recoding schemes (Bimodal, Chronic, Modified Likert and a newly proposed Modified Chronic) with respect to their psychometric qualities. The dichotomized versions (i.e. Bimodal, Chronic and Modified Chronic) were evaluated with the Rasch model, and the polytomous original version and the Modified Likert version were evaluated with the Partial Credit Model. In general, the versions under consideration showed agreement with the model assumptions. However, the recoded versions exhibited some deficits with respect to the Outfit index. Because of the item deficits and for theoretical reasons, we argue in favor of using any of the three length versions with the original four-category coding scheme. Nevertheless, any of the versions appears apt for clinical use from a psychometric perspective.

  16. Generalized hamming networks and applications.

    PubMed

    Koutroumbas, Konstantinos; Kalouptsidis, Nicholas

    2005-09-01

    In this paper the classical Hamming network is generalized in various ways. First, for the Hamming maxnet, a generalized model is proposed, which covers most of the existing versions of the Hamming maxnet. The network dynamics are time-varying, and the commonly used ramp function may be replaced by a much more general non-linear function. Also, the weight parameters of the network are time-varying. A detailed convergence analysis is provided. A bound on the number of iterations required for convergence is derived, and its distribution functions are given for the cases where the initial values of the nodes of the Hamming maxnet stem from the uniform and the peak distributions. Stabilization mechanisms aiming to prevent the node(s) with the maximum initial value from diverging to infinity or decaying to zero are described. Simulations demonstrate the advantages of the proposed extension. Also, a rough comparison between the proposed generalized scheme and the original Hamming maxnet and its variants is carried out in terms of the time required for convergence in hardware implementations. Finally, the other two parts of the Hamming network, namely the competitor-generating module and the decoding module, are briefly considered in the framework of various applications such as classification/clustering, vector quantization and function optimization.
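    For orientation, a minimal classical maxnet iteration, i.e. the baseline that the paper generalises: each node inhibits the others until only the maximum survives. The fixed inhibition weight and the plain ramp nonlinearity below are the standard textbook choices, not the time-varying ones proposed in the paper.

# Classical maxnet: iterative winner-take-all over a vector of initial activations.
import numpy as np

def maxnet(x: np.ndarray, eps=None, max_iter: int = 1000) -> int:
    x = x.astype(float).copy()
    n = x.size
    if eps is None:
        eps = 1.0 / n                        # common choice: eps < 1/(n-1)
    ramp = lambda v: np.maximum(v, 0.0)      # ramp activation
    for _ in range(max_iter):
        x = ramp(x - eps * (x.sum() - x))    # subtract inhibition from all other nodes
        if np.count_nonzero(x) <= 1:
            break
    return int(np.argmax(x))

print(maxnet(np.array([0.2, 0.9, 0.4, 0.7])))   # -> index 1 (the maximum)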

  17. ANALYSIS OF CLINICAL AND DERMOSCOPIC FEATURES FOR BASAL CELL CARCINOMA NEURAL NETWORK CLASSIFICATION

    PubMed Central

    Cheng, Beibei; Stanley, R. Joe; Stoecker, William V; Stricklin, Sherea M.; Hinton, Kristen A.; Nguyen, Thanh K.; Rader, Ryan K.; Rabinovitz, Harold S.; Oliviero, Margaret; Moss, Randy H.

    2012-01-01

    Background: Basal cell carcinoma (BCC) is the most commonly diagnosed cancer in the United States. In this research, we examine four different feature categories used for diagnostic decisions, including patient personal profile (patient age, gender, etc.), general exam (lesion size and location), common dermoscopic (blue-gray ovoids, leaf-structure dirt trails, etc.), and specific dermoscopic lesion (white/pink areas, semitranslucency, etc.). Specific dermoscopic features are more restricted versions of the common dermoscopic features. Methods: Combinations of the four feature categories are analyzed over a data set of 700 lesions, with 350 BCCs and 350 benign lesions, for lesion discrimination using neural network-based techniques, including Evolving Artificial Neural Networks and Evolving Artificial Neural Network Ensembles. Results: Experiment results based on ten-fold cross validation for training and testing the different neural network-based techniques yielded an area under the receiver operating characteristic curve as high as 0.981 when all features were combined. The common dermoscopic lesion features generally yielded higher discrimination results than other individual feature categories. Conclusions: Experimental results show that combining clinical and image information provides enhanced lesion discrimination capability over either information source separately. This research highlights the potential of data fusion as a model for the diagnostic process. PMID:22724561
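    A hedged scikit-learn sketch of the evaluation idea above, i.e. comparing feature categories with ten-fold cross-validated AUC; the study used evolving artificial neural networks, so this generic MLP, the DataFrame and the column-group names are all illustrative assumptions.

# Per-feature-category AUC via ten-fold cross-validation.
import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

lesions = pd.read_csv("bcc_lesions.csv")
groups = {
    "profile": ["age", "gender"],
    "exam": ["lesion_size", "location"],
    "dermoscopic": ["blue_gray_ovoids", "leaf_structures", "semitranslucency"],
}
y = lesions["is_bcc"]

for name, cols in groups.items():
    X = pd.get_dummies(lesions[cols])
    probs = cross_val_predict(MLPClassifier(max_iter=1000, random_state=0), X, y,
                              cv=10, method="predict_proba")[:, 1]
    print(name, "AUC =", round(roc_auc_score(y, probs), 3))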

  18. Bridging: Locating Critical Connectors in a Network

    PubMed Central

    Valente, Thomas W.; Fujimoto, Kayo

    2010-01-01

    This paper proposes several measures for bridging in networks derived from Granovetter's (1973) insight that links which reduce distances in a network are important structural bridges. Bridging is calculated by systematically deleting links and calculating the resultant changes in network cohesion (measured as the inverse average path length). The average change for each node's links provides an individual level measure of bridging. We also present a normalized version which controls for network size and a network level bridging index. Bridging properties are demonstrated on hypothetical networks, empirical networks, and a set of 100 randomly generated networks to show how the bridging measure correlates with existing network measures such as degree, personal network density, constraint, closeness centrality, betweenness centrality, and vitality. Bridging and the accompanying methodology provide a family of new network measures useful for studying network structure, network dynamics, and network effects on substantive behavioral phenomenon. PMID:20582157
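    An illustrative re-implementation of the link-deletion idea above using networkx (not the authors' code). Cohesion is taken as the inverse average shortest path length; when deleting a link disconnects the graph, cohesion is crudely set to zero, which may differ from the paper's exact treatment.

# Edge- and node-level bridging scores from systematic link deletion.
import networkx as nx

def cohesion(G: nx.Graph) -> float:
    if not nx.is_connected(G):
        return 0.0                      # crude handling of disconnection (assumption)
    return 1.0 / nx.average_shortest_path_length(G)

def edge_bridging(G: nx.Graph) -> dict:
    """Drop in cohesion caused by deleting each edge (larger drop = stronger bridge)."""
    base = cohesion(G)
    scores = {}
    for u, v in list(G.edges()):
        H = G.copy()
        H.remove_edge(u, v)
        scores[(u, v)] = base - cohesion(H)
    return scores

def node_bridging(G: nx.Graph) -> dict:
    """Average bridging over each node's incident links."""
    edges = edge_bridging(G)
    return {n: sum(s for (u, v), s in edges.items() if n in (u, v)) / G.degree(n)
            for n in G.nodes() if G.degree(n) > 0}

G = nx.krackhardt_kite_graph()
print(node_bridging(G))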

  19. Advanced Networks in Motion Mobile Sensorweb

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Stewart, David H.

    2011-01-01

    Advanced mobile networking technology applicable to mobile sensor platforms was developed, deployed and demonstrated. A two-tier sensorweb design was developed. The first tier utilized mobile network technology to provide mobility. The second tier, which sits above the first tier, utilizes 6LowPAN (Internet Protocol version 6 Low Power Wireless Personal Area Networks) sensors. The entire network was IPv6 enabled. Successful mobile sensorweb system field tests took place in late August and early September of 2009. The entire network utilized IPv6 and was monitored and controlled using a remote Web browser via IPv6 technology. This paper describes the mobile networking and 6LowPAN sensorweb design, implementation, deployment and testing as well as wireless systems and network monitoring software developed to support testing and validation.

  20. Low-Dimensional Models for Physiological Systems: Nonlinear Coupling of Gas and Liquid Flows

    NASA Astrophysics Data System (ADS)

    Staples, A. E.; Oran, E. S.; Boris, J. P.; Kailasanath, K.

    2006-11-01

    Current computational models of biological organisms focus on the details of a specific component of the organism. For example, very detailed models of the human heart, an aorta, a vein, or part of the respiratory or digestive system, are considered either independently from the rest of the body, or as interacting simply with other systems and components in the body. In actual biological organisms, these components and systems are strongly coupled and interact in complex, nonlinear ways leading to complicated global behavior. Here we describe a low-order computational model of two physiological systems, based loosely on a circulatory and respiratory system. Each system is represented as a one-dimensional fluid system with an interconnected series of mass sources, pumps, valves, and other network components, as appropriate, representing different physical organs and system components. Preliminary results from a first version of this model system are presented.

  1. A stochastic spatiotemporal model of a response-regulator network in the Caulobacter crescentus cell cycle

    NASA Astrophysics Data System (ADS)

    Li, Fei; Subramanian, Kartik; Chen, Minghan; Tyson, John J.; Cao, Yang

    2016-06-01

    The asymmetric cell division cycle in Caulobacter crescentus is controlled by an elaborate molecular mechanism governing the production, activation and spatial localization of a host of interacting proteins. In previous work, we proposed a deterministic mathematical model for the spatiotemporal dynamics of six major regulatory proteins. In this paper, we study a stochastic version of the model, which takes into account molecular fluctuations of these regulatory proteins in space and time during early stages of the cell cycle of wild-type Caulobacter cells. We test the stochastic model with regard to experimental observations of increased variability of cycle time in cells depleted of the divJ gene product. The deterministic model predicts that overexpression of the divK gene blocks cell cycle progression in the stalked stage; however, stochastic simulations suggest that a small fraction of the mutant cells do complete the cell cycle normally.
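    A generic Gillespie stochastic simulation sketch showing how a deterministic reaction scheme is turned into a molecule-count stochastic version; the toy two-species production/degradation/phosphorylation scheme, species labels and rate constants below are illustrative, not the actual Caulobacter regulatory network.

# Gillespie (stochastic simulation algorithm) for a toy two-species scheme.
import numpy as np

rng = np.random.default_rng(1)
x = np.array([50, 0])                     # counts: [protein, phosphorylated protein]
stoich = np.array([[ 1,  0],              # synthesis of species 0
                   [-1,  0],              # degradation of species 0
                   [-1,  1],              # phosphorylation 0 -> 1
                   [ 1, -1]])             # dephosphorylation 1 -> 0
k = np.array([5.0, 0.05, 0.1, 0.08])      # rate constants (arbitrary units)

t, t_end = 0.0, 200.0
times, traj = [0.0], [x.copy()]
while t < t_end:
    a = k * np.array([1.0, x[0], x[0], x[1]])    # reaction propensities
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)               # waiting time to the next reaction
    x = x + stoich[rng.choice(4, p=a / a0)]      # fire one reaction chosen by propensity
    times.append(t)
    traj.append(x.copy())

print("final counts:", traj[-1])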

  2. Multiplex PageRank.

    PubMed

    Halu, Arda; Mondragón, Raúl J; Panzarasa, Pietro; Bianconi, Ginestra

    2013-01-01

    Many complex systems can be described as multiplex networks in which the same nodes can interact with one another in different layers, thus forming a set of interacting and co-evolving networks. Examples of such multiplex systems are social networks where people are involved in different types of relationships and interact through various forms of communication media. The ranking of nodes in multiplex networks is one of the most pressing and challenging tasks that research on complex networks is currently facing. When pairs of nodes can be connected through multiple links and in multiple layers, the ranking of nodes should necessarily reflect the importance of nodes in one layer as well as their importance in other interdependent layers. In this paper, we draw on the idea of biased random walks to define the Multiplex PageRank centrality measure in which the effects of the interplay between networks on the centrality of nodes are directly taken into account. In particular, depending on the intensity of the interaction between layers, we define the Additive, Multiplicative, Combined, and Neutral versions of Multiplex PageRank, and show how each version reflects the extent to which the importance of a node in one layer affects the importance the node can gain in another layer. We discuss these measures and apply them to an online multiplex social network. Findings indicate that taking the multiplex nature of the network into account helps uncover the emergence of rankings of nodes that differ from the rankings obtained from one single layer. Results provide support in favor of the salience of multiplex centrality measures, like Multiplex PageRank, for assessing the prominence of nodes embedded in multiple interacting networks, and for shedding a new light on structural properties that would otherwise remain undetected if each of the interacting networks were analyzed in isolation.
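    A simplified sketch in the spirit of the measures described above, not the paper's exact Additive/Multiplicative/Combined/Neutral definitions: importance earned in layer A biases the teleportation vector of a PageRank run on layer B, with the exponent beta controlling how strongly the layers interact.

# Layer-A centrality biases PageRank on layer B (toy multiplex with two layers).
import networkx as nx

def biased_multiplex_pagerank(layer_a: nx.DiGraph, layer_b: nx.DiGraph,
                              beta: float = 1.0, alpha: float = 0.85) -> dict:
    x_a = nx.pagerank(layer_a, alpha=alpha)
    # bias layer-B teleportation by layer-A centrality raised to beta
    bias = {n: x_a.get(n, 0.0) ** beta + 1e-12 for n in layer_b.nodes()}
    return nx.pagerank(layer_b, alpha=alpha, personalization=bias)

# Two toy layers over the same node set (e.g., two communication media)
A = nx.gnp_random_graph(30, 0.1, seed=1, directed=True)
B = nx.gnp_random_graph(30, 0.1, seed=2, directed=True)
ranks = biased_multiplex_pagerank(A, B)
print(sorted(ranks, key=ranks.get, reverse=True)[:5])   # five most central nodes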

  3. Predicting the Success of Off-Network Television Programs in the Syndication Marketplace: The Case of Broadcast Syndication.

    ERIC Educational Resources Information Center

    Robinson, Karla Salmon

    Like other industries, television has its own version of the used-car dealership or second-hand store: off-network syndication. Since researchers who study television have rarely investigated the market for these programs, a study examined program and marketplace characteristics to determine which contributes most to the successful syndication of…

  4. Method Accelerates Training Of Some Neural Networks

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O.

    1992-01-01

    Three-layer networks trained faster provided two conditions are satisfied: numbers of neurons in layers are such that majority of work done in synaptic connections between input and hidden layers, and number of neurons in input layer at least as great as number of training pairs of input and output vectors. Based on modified version of back-propagation method.

  5. 77 FR 28857 - Development of the State and Local Implementation Grant Program for the Nationwide Public Safety...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-16

    ..., .pdf, or Word format (please specify version), which should be labeled with the name and organizational ... workers have long been hindered by incompatible, and often outdated, communications equipment and this Act ... broadband network (PSBN), based on a single, national network architecture. FirstNet is responsible for ...

  6. Integration of Wireless Sensor Networks into a Commercial Off-the-Shelf (COTS) Multimedia Network

    DTIC Science & Technology

    2008-12-01

    IDE) which supports Java ME, or in a basic text editor. The simplest IDE to use is the NetBeans IDE, which is supported by Sun Microsystems ... discussion," https://www.sunspotworld.com/forums, November 2008. [28] Netbeans.org, "NetBeans IDE version 6.5 download," http://www.netbeans.org

  7. A multiscale model of distributed fracture and permeability in solids in all-round compression

    NASA Astrophysics Data System (ADS)

    De Bellis, Maria Laura; Della Vecchia, Gabriele; Ortiz, Michael; Pandolfi, Anna

    2017-07-01

    We present a microstructural model of permeability in fractured solids, where the fractures are described in terms of recursive families of parallel, equidistant cohesive faults. Faults originate upon the attainment of tensile or shear strength in the undamaged material. Secondary faults may form in a hierarchical organization, creating a complex network of connected fractures that modify the permeability of the solid. The undamaged solid may possess initial porosity and permeability. The particular geometry of the superposed micro-faults lends itself to an explicit analytical quantification of the porosity and permeability of the damaged material. The model is the finite kinematics version of a recently proposed porous material model, applied with success to the simulation of laboratory tests and excavation problems [De Bellis, M. L., Della Vecchia, G., Ortiz, M., Pandolfi, A., 2016. A linearized porous brittle damage material model with distributed frictional-cohesive faults. Engineering Geology 215, 10-24. doi:10.1016/j.enggeo.2016.10.010]. The extension goes beyond the linearized kinematics version, addressing problems characterized by large deformations localized in narrow zones, while the remainder of the solid undergoes small deformations, as typically observed in soil and rock mechanics problems. The approach is particularly appealing as a means of modeling a wide scope of engineering problems, ranging from the prevention of water or gas outburst into underground mines, to the prediction of the integrity of reservoirs for CO2 sequestration or hazardous waste storage, to hydraulic fracturing processes.

  8. Long-range correlations improve understanding of the influence of network structure on contact dynamics.

    PubMed

    Peyrard, N; Dieckmann, U; Franc, A

    2008-05-01

    Models of infectious diseases are characterized by a phase transition between extinction and persistence. A challenge in contemporary epidemiology is to understand how the geometry of a host's interaction network influences disease dynamics close to the critical point of such a transition. Here we address this challenge with the help of moment closures. Traditional moment closures, however, do not provide satisfactory predictions close to such critical points. We therefore introduce a new method for incorporating longer-range correlations into existing closures. Our method is technically simple, remains computationally tractable and significantly improves the approximation's performance. Our extended closures thus provide an innovative tool for quantifying the influence of interaction networks on spatially or socially structured disease dynamics. In particular, we examine the effects of a network's clustering coefficient, as well as of new geometrical measures, such as a network's square clustering coefficients. We compare the relative performance of different closures from the literature, with or without our long-range extension. In this way, we demonstrate that the normalized version of the Bethe approximation-extended to incorporate long-range correlations according to our method-is an especially good candidate for studying influences of network structure. Our numerical results highlight the importance of the clustering coefficient and the square clustering coefficient for predicting disease dynamics at low and intermediate values of transmission rate, and demonstrate the significance of path redundancy for disease persistence.
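
    Illustrative only: the network measures mentioned above (the clustering coefficient and the square clustering coefficient) can be computed directly with networkx; the graph used here is an arbitrary stand-in, not one of the study's interaction networks.

```python
# Compute the two geometrical measures discussed above for a random graph.
import networkx as nx

G = nx.watts_strogatz_graph(n=500, k=6, p=0.05, seed=1)

triangle_cc = nx.average_clustering(G)                        # standard clustering coefficient
square_cc = sum(nx.square_clustering(G).values()) / G.number_of_nodes()

print(f"clustering coefficient:        {triangle_cc:.3f}")
print(f"square clustering coefficient: {square_cc:.3f}")
```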

  9. Research in the Aloha system

    NASA Technical Reports Server (NTRS)

    Abramson, N.

    1974-01-01

    The Aloha system was studied and developed and extended to advanced forms of computer communications networks. Theoretical and simulation studies of Aloha type radio channels for use in packet switched communications networks were performed. Improved versions of the Aloha communications techniques and their extensions were tested experimentally. A packet radio repeater suitable for use with the Aloha system operational network was developed. General studies of the organization of multiprocessor systems centered on the development of the BCC 500 computer were concluded.

  10. Cross-Layer Service Discovery Mechanism for OLSRv2 Mobile Ad Hoc Networks.

    PubMed

    Vara, M Isabel; Campo, Celeste

    2015-07-20

    Service discovery plays an important role in mobile ad hoc networks (MANETs). The lack of central infrastructure, limited resources and high mobility make service discovery a challenging issue for this kind of network. This article proposes a new service discovery mechanism for discovering and advertising services integrated into the Optimized Link State Routing Protocol Version 2 (OLSRv2). In previous studies, we demonstrated the validity of a similar service discovery mechanism integrated into the previous version of OLSR (OLSRv1). In order to advertise services, we have added a new type-length-value structure (TLV) to the OLSRv2 protocol, called service discovery message (SDM), according to the Generalized MANET Packet/Message Format defined in Request For Comments (RFC) 5444. Each node in the ad hoc network only advertises its own services. The advertisement frequency is a user-configurable parameter, so that it can be modified depending on the user requirements. Each node maintains two service tables, one to store information about its own services and another one to store information about the services it discovers in the network. We present simulation results, that compare our service discovery integrated into OLSRv2 with the one defined for OLSRv1 and with the integration of service discovery in Ad hoc On-demand Distance Vector (AODV) protocol, in terms of service discovery ratio, service latency and network overhead.
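
    A schematic illustration of a type-length-value (TLV) structure such as the SDM described above. The field layout and type code here are hypothetical and simplified; they are not the actual RFC 5444 encoding used by OLSRv2.

```python
# Toy TLV packing/unpacking: 1-byte type, 2-byte length, then the value bytes.
import struct

def pack_tlv(tlv_type: int, value: bytes) -> bytes:
    """Pack a toy TLV in network byte order."""
    return struct.pack("!BH", tlv_type, len(value)) + value

def unpack_tlv(buf: bytes):
    tlv_type, length = struct.unpack("!BH", buf[:3])
    return tlv_type, buf[3:3 + length]

SDM_TYPE = 42  # hypothetical type code for a service discovery message
service = b"printer://192.168.1.10:631"
msg = pack_tlv(SDM_TYPE, service)
assert unpack_tlv(msg) == (SDM_TYPE, service)
print(msg.hex())
```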

  11. Cross-Layer Service Discovery Mechanism for OLSRv2 Mobile Ad Hoc Networks

    PubMed Central

    Vara, M. Isabel; Campo, Celeste

    2015-01-01

    Service discovery plays an important role in mobile ad hoc networks (MANETs). The lack of central infrastructure, limited resources and high mobility make service discovery a challenging issue for this kind of network. This article proposes a new service discovery mechanism for discovering and advertising services integrated into the Optimized Link State Routing Protocol Version 2 (OLSRv2). In previous studies, we demonstrated the validity of a similar service discovery mechanism integrated into the previous version of OLSR (OLSRv1). In order to advertise services, we have added a new type-length-value structure (TLV) to the OLSRv2 protocol, called service discovery message (SDM), according to the Generalized MANET Packet/Message Format defined in Request For Comments (RFC) 5444. Each node in the ad hoc network only advertises its own services. The advertisement frequency is a user-configurable parameter, so that it can be modified depending on the user requirements. Each node maintains two service tables, one to store information about its own services and another one to store information about the services it discovers in the network. We present simulation results, that compare our service discovery integrated into OLSRv2 with the one defined for OLSRv1 and with the integration of service discovery in Ad hoc On-demand Distance Vector (AODV) protocol, in terms of service discovery ratio, service latency and network overhead. PMID:26205272

  12. Unidata LDM-7: a Hybrid Multicast/unicast System for Highly Efficient and Reliable Real-Time Data Distribution

    NASA Astrophysics Data System (ADS)

    Emmerson, S. R.; Veeraraghavan, M.; Chen, S.; Ji, X.

    2015-12-01

    Results of a pilot deployment of a major new version of the Unidata Local Data Manager (LDM-7) are presented. The Unidata LDM was developed by the University Corporation for Atmospheric Research (UCAR) and comprises a suite of software for the distribution and local processing of data in near real-time. It is widely used in the geoscience community to distribute observational data and model output, most notably as the foundation of the Unidata Internet Data Distribution (IDD) system run by UCAR, but also in private networks operated by NOAA, NASA, USGS, etc. The current version, LDM-6, uses at least one unicast TCP connection per receiving host. With over 900 connections, the bit-rate of total outgoing IDD traffic from UCAR averages approximately 3.0 Gbps, with peak data rates exceeding 6.6 Gbps. Expected increases in data volume suggest that a more efficient distribution mechanism will be required in the near future. LDM-7 greatly reduces the outgoing bandwidth requirement by incorporating a recently-developed "semi-reliable" IP multicast protocol while retaining the unicast TCP mechanism for reliability. During the summer of 2015, UCAR and the University of Virginia conducted a pilot deployment of the Unidata LDM-7 among U.S. university participants with access to the Internet2 network. Results of this pilot program, along with comparisons to the existing Unidata LDM-6 system, are presented.

  13. Optimizing one-shot learning with binary synapses.

    PubMed

    Romani, Sandro; Amit, Daniel J; Amit, Yali

    2008-08-01

    A network of excitatory synapses trained with a conservative version of Hebbian learning is used as a model for recognizing the familiarity of thousands of once-seen stimuli from those never seen before. Such networks were initially proposed for modeling memory retrieval (selective delay activity). We show that the same framework allows the incorporation of both familiarity recognition and memory retrieval, and estimate the network's capacity. In the case of binary neurons, we extend the analysis of Amit and Fusi (1994) to obtain capacity limits based on computations of signal-to-noise ratio of the field difference between selective and non-selective neurons of learned signals. We show that with fast learning (potentiation probability approximately 1), the most recently learned patterns can be retrieved in working memory (selective delay activity). A much higher number of once-seen learned patterns elicit a realistic familiarity signal in the presence of an external field. With potentiation probability much less than 1 (slow learning), memory retrieval disappears, whereas familiarity recognition capacity is maintained at a similarly high level. This analysis is corroborated in simulations. For analog neurons, where such analysis is more difficult, we simplify the capacity analysis by studying the excess number of potentiated synapses above the steady-state distribution. In this framework, we derive the optimal constraint between potentiation and depression probabilities that maximizes the capacity.
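
    A toy numerical sketch in the spirit of the signal-to-noise analysis described above (not the authors' model or code): binary synapses are potentiated or depressed stochastically for a sequence of once-seen patterns, and we count how many past patterns still leave a detectable familiarity signal under fast versus slow learning. Population size, pattern count, and the detection threshold are illustrative assumptions.

```python
# Binary synapses with stochastic one-shot updates; slow learning preserves
# weak traces of many more once-seen patterns than fast learning does.
import numpy as np

rng = np.random.default_rng(7)
n_syn, n_patterns = 100_000, 50

def familiar_count(q):
    w = rng.random(n_syn) < 0.5                        # binary synaptic states
    patterns = rng.random((n_patterns, n_syn)) < 0.5
    for p in patterns:                                 # one-shot, stochastic updates
        w = np.where(p, w | (rng.random(n_syn) < q),   # potentiate with probability q
                        w & (rng.random(n_syn) >= q))  # depress with probability q
    # familiarity signal: mean weight on a pattern's "high" synapses minus its "low" ones
    signals = np.array([w[p].mean() - w[~p].mean() for p in patterns])
    threshold = 3.0 / np.sqrt(n_syn)                   # rough noise-based detection threshold
    return int((signals > threshold).sum())

print("patterns still recognized, fast learning (q=1.0):", familiar_count(1.0))
print("patterns still recognized, slow learning (q=0.1):", familiar_count(0.1))
```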

  14. Performance of Versions 1,2 and 3 of the Goddard Earth Observing System (GEOS) Chemistry-Climate Model (CCM)

    NASA Technical Reports Server (NTRS)

    Pawson, Steven; Stolarski, Richard S.; Nielsen, J. Eric; Duncan, Bryan N.

    2008-01-01

    Version 1 of the Goddard Earth Observing System Chemistry-Climate Model (GEOS CCM) was used in the first CCMVal model evaluation and forms the basis for several studies of links between ozone and the circulation. That version of the CCM was based on the GEOS-4 GCM. Versions 2 and 3 of the GEOS CCM are based on the GEOS-5 GCM, which retains the "Lin-Rood" dynamical core but has a totally different set of physical parameterizations from GEOS-4. In Version 2 of the GEOS CCM the Goddard stratospheric chemistry module is retained. Differences between Versions 1 and 2 thus reflect the physics changes of the underlying GCMs. Several comparisons between these two models are made, a number of which reveal improvements in Version 2 (including a more realistic representation of the interannual variability of the Antarctic vortex). In Version 3 of the GEOS CCM, the stratospheric chemistry mechanism is replaced by the "GMI COMBO" code that includes tropospheric chemistry and different computational approaches. An advantage of this model version is the reduction of high ozone biases that prevail at low chlorine loadings in Versions 1 and 2. This poster will compare and contrast various aspects of the three model versions that are relevant for understanding interactions between ozone and climate.

  15. Revealing determinants of two-phase dynamics of P53 network under gamma irradiation based on a reduced 2D relaxation oscillator model.

    PubMed

    Demirkıran, Gökhan; Kalaycı Demir, Güleser; Güzeliş, Cüneyt

    2018-02-01

    This study proposes a two-dimensional (2D) oscillator model of p53 network, which is derived via reducing the multidimensional two-phase dynamics model into a model of ataxia telangiectasia mutated (ATM) and Wip1 variables, and studies the impact of p53-regulators on cell fate decision. First, the authors identify a 6D core oscillator module, then reduce this module into a 2D oscillator model while preserving the qualitative behaviours. The introduced 2D model is shown to be an excitable relaxation oscillator. This oscillator provides a mechanism that leads diverse modes underpinning cell fate, each corresponding to a cell state. To investigate the effects of p53 inhibitors and the intrinsic time delay of Wip1 on the characteristics of oscillations, they introduce also a delay differential equation version of the 2D oscillator. They observe that the suppression of p53 inhibitors decreases the amplitudes of p53 oscillation, though the suppression increases the sustained level of p53. They identify Wip1 and P53DINP1 as possible targets for cancer therapies considering their impact on the oscillator, supported by biological findings. They model some mutations as critical changes of the phase space characteristics. Possible cancer therapeutic strategies are then proposed for preventing these mutations' effects using the phase space approach.
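
    Illustration only: integrating a generic two-variable excitable relaxation oscillator (FitzHugh-Nagumo) with scipy. This is a stand-in for the kind of reduced 2D model described above; it is not the paper's ATM/Wip1 equations, and all parameter values are assumptions.

```python
# Generic 2D excitable relaxation oscillator integrated with solve_ivp.
import numpy as np
from scipy.integrate import solve_ivp

def fhn(t, y, I_ext=0.5, a=0.7, b=0.8, eps=0.08):
    v, w = y                       # fast (excitation) and slow (recovery) variables
    dv = v - v**3 / 3 - w + I_ext
    dw = eps * (v + a - b * w)
    return [dv, dw]

sol = solve_ivp(fhn, (0.0, 200.0), [0.0, 0.0], max_step=0.1)
v = sol.y[0]
# count sustained oscillations via upward threshold crossings of the fast variable
pulses = np.sum((v[:-1] < 1.0) & (v[1:] >= 1.0))
print("pulses in 200 time units:", int(pulses))
```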

  16. Integrating Information in Biological Ontologies and Molecular Networks to Infer Novel Terms.

    PubMed

    Li, Le; Yip, Kevin Y

    2016-12-15

    Currently most terms and term-term relationships in Gene Ontology (GO) are defined manually, which creates cost, consistency and completeness issues. Recent studies have demonstrated the feasibility of inferring GO automatically from biological networks, which represents an important complementary approach to GO construction. These methods (NeXO and CliXO) are unsupervised, which means 1) they cannot use the information contained in existing GO, 2) the way they integrate biological networks may not optimize the accuracy, and 3) they are not customized to infer the three different sub-ontologies of GO. Here we present a semi-supervised method called Unicorn that extends these previous methods to tackle the three problems. Unicorn uses a sub-tree of an existing GO sub-ontology as training part to learn parameters in integrating multiple networks. Cross-validation results show that Unicorn reliably inferred the left-out parts of each specific GO sub-ontology. In addition, by training Unicorn with an old version of GO together with biological networks, it successfully re-discovered some terms and term-term relationships present only in a new version of GO. Unicorn also successfully inferred some novel terms that were not contained in GO but have biological meanings well-supported by the literature. Source code of Unicorn is available at http://yiplab.cse.cuhk.edu.hk/unicorn/.

  17. An open-source wireless sensor stack: from Arduino to SDI-12 to Water One Flow

    NASA Astrophysics Data System (ADS)

    Hicks, S.; Damiano, S. G.; Smith, K. M.; Olexy, J.; Horsburgh, J. S.; Mayorga, E.; Aufdenkampe, A. K.

    2013-12-01

    Implementing a large-scale streaming environmental sensor network has previously been limited by the high cost of the datalogging and data communication infrastructure. The Christina River Basin Critical Zone Observatory (CRB-CZO) is overcoming the obstacles to large near-real-time data collection networks by using Arduino, an open source electronics platform, in combination with XBee ZigBee wireless radio modules. These extremely low-cost and easy-to-use open source electronics are at the heart of the new DIY movement and have provided solutions to countless projects by over half a million users worldwide. However, their use in environmental sensing is in its infancy. At present a primary limitation to widespread deployment of open-source electronics for environmental sensing is the lack of a simple, open-source software stack to manage streaming data from heterogeneous sensor networks. Here we present a functioning prototype software stack that receives sensor data over a self-meshing ZigBee wireless network from over a hundred sensors, stores the data locally and serves it on demand as a CUAHSI Water One Flow (WOF) web service. We highlight a few new, innovative components, including: (1) a versatile open data logger design based on the Arduino electronics platform and ZigBee radios; (2) a software library implementing the SDI-12 communication protocol between any Arduino platform and SDI12-enabled sensors without the need for additional hardware (https://github.com/StroudCenter/Arduino-SDI-12); and (3) 'midStream', a light-weight set of Python code that receives streaming sensor data, appends it with metadata on the fly by querying a relational database structured on an early version of the Observations Data Model version 2.0 (ODM2), and uses the WOFpy library to serve the data as WaterML via SOAP and REST web services.

  18. Using Satellite Data and Land Surface Models to Monitor and Forecast Drought Conditions in Africa and Middle East

    NASA Astrophysics Data System (ADS)

    Arsenault, K. R.; Shukla, S.; Getirana, A.; Peters-Lidard, C. D.; Kumar, S.; McNally, A.; Zaitchik, B. F.; Badr, H. S.; Funk, C. C.; Koster, R. D.; Narapusetty, B.; Jung, H. C.; Roningen, J. M.

    2017-12-01

    Drought and water scarcity are among the important issues facing several regions within Africa and the Middle East. In addition, these regions typically have sparse ground-based data networks, where sometimes remotely sensed observations may be the only data available. Long-term satellite records can help with determining historic and current drought conditions. In recent years, several new satellites have come on-line that monitor different hydrological variables, including soil moisture and terrestrial water storage. Though these recent data records may be considered too short for the use in identifying major droughts, they do provide additional information that can better characterize where water deficits may occur. We utilize recent satellite data records of Gravity Recovery and Climate Experiment (GRACE) terrestrial water storage (TWS) and the European Space Agency's Advanced Scatterometer (ASCAT) soil moisture retrievals. Combining these records with land surface models (LSMs), NASA's Catchment and the Noah Multi-Physics (MP), is aimed at improving the land model states and initialization for seasonal drought forecasts. The LSMs' total runoff is routed through the Hydrological Modeling and Analysis Platform (HyMAP) to simulate surface water dynamics, which can provide an additional means of validation against in situ streamflow data. The NASA Land Information System (LIS) software framework drives the LSMs and HyMAP and also supports the capability to assimilate these satellite retrievals, such as soil moisture and TWS. The LSMs are driven for 30+ years with NASA's Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), and the USGS/UCSB Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) rainfall dataset. The seasonal water deficit forecasts are generated using downscaled and bias-corrected versions of NASA's Goddard Earth Observing System Model (GEOS-5), and NOAA's Climate Forecast System (CFSv2) forecasts. These combined satellite and model records and forecasts are intended for use in different decision support tools, like the Famine Early Warning Systems Network (FEWS NET) and the Middle East-North Africa (MENA) Regional Drought Management System, for aiding and forecasting in water and food insecure regions.

  19. Online Simulations of Global Aerosol Distributions in the NASA GEOS-4 Model and Comparisons to Satellite and Ground-Based Aerosol Optical Depth

    NASA Technical Reports Server (NTRS)

    Colarco, Peter; daSilva, Arlindo; Chin, Mian; Diehl, Thomas

    2010-01-01

    We have implemented a module for tropospheric aerosols (GOCART) online in the NASA Goddard Earth Observing System version 4 model and simulated global aerosol distributions for the period 2000-2006. The new online system offers several advantages over the previous offline version, providing a platform for aerosol data assimilation, aerosol-chemistry-climate interaction studies, and short-range chemical weather forecasting and climate prediction. We introduce as well a methodology for sampling model output consistently with satellite aerosol optical thickness (AOT) retrievals to facilitate model-satellite comparison. Our results are similar to the offline GOCART model and to the models participating in the AeroCom intercomparison. The simulated AOT has similar seasonal and regional variability and magnitude to Aerosol Robotic Network (AERONET), Moderate Resolution Imaging Spectroradiometer, and Multiangle Imaging Spectroradiometer observations. The model AOT and Angstrom parameter are consistently low relative to AERONET in biomass-burning-dominated regions, where emissions appear to be underestimated, consistent with the results of the offline GOCART model. In contrast, the model AOT is biased high in sulfate-dominated regions of North America and Europe. Our model-satellite comparison methodology shows that diurnal variability in aerosol loading is unimportant compared to sampling the model where the satellite has cloud-free observations, particularly in sulfate-dominated regions. Simulated sea salt burden and optical thickness are high by a factor of 2-3 relative to other models, and agreement between model and satellite over-ocean AOT is improved by reducing the model sea salt burden by a factor of 2. The best agreement in both AOT magnitude and variability occurs immediately downwind of the Saharan dust plume.

  20. The CEOS International Directory Network: Progress and Plans, Spring, 1999

    NASA Technical Reports Server (NTRS)

    Olsen, Lola M.

    1999-01-01

    The Global Change Master Directory (GCMD) serves as the software development hub for the Committee on Earth Observation Satellites' (CEOS) International Directory Network (IDN). The GCMD has upgraded the software for the IDN nodes as Version 7 of the GCMD: MD7-Oracle and MD7-Isite, as well as three other MD7 experimental interfaces. The contribution by DLR representatives (Germany) of the DLR Thesaurus will be demonstrated as an educational tool for use with MD7-Isite. The software will be installed at twelve nodes around the world: Brazil, Argentina, the Netherlands, Canada, France, Germany, Italy, Japan, Australia, New Zealand, Switzerland, and several sites in the United States. Representing NASA for the International Directory Network and the CEOS Data Access Subgroup, NASA's contribution to this international interoperability effort will be updated. Discussion will include interoperability with the CEOS Interoperability Protocol (CIP), features of the latest version of the software, including upgraded capabilities for distributed input by the IDN nodes, installation logistics, "mirroring", population objectives, and future plans.

  1. The CEOS International Directory Network Progress and Plans: Spring, 1999

    NASA Technical Reports Server (NTRS)

    Olsen, Lola M.

    1999-01-01

    The Global Change Master Directory (GCMD) serves as the software development hub for the Committee on Earth Observation Satellites' (CEOS) International Directory Network (IDN). The GCMD has upgraded the software for the IDN nodes as Version 7 of the GCMD: MD7-Oracle and MD7-Isite, as well as three other MD7 experimental interfaces. The contribution by DLR representatives (Germany) of the DLR Thesaurus will be demonstrated as an educational tool for use with MD7-Isite. The software will be installed at twelve nodes around the world: Brazil, Argentina, the Netherlands, Canada, France, Germany, Italy, Japan, Australia, New Zealand, Switzerland, and several sites in the United States. Representing NASA for the International Directory Network and the CEOS Data Access Subgroup, NASA's contribution to this international interoperability effort will be updated. Discussion will include interoperability with the CEOS Interoperability Protocol (CIP), features of the latest version of the software, including upgraded capabilities for distributed input by the IDN nodes, installation logistics, "mirroring", population objectives, and future plans.

  2. An engineering design approach to systems biology.

    PubMed

    Janes, Kevin A; Chandran, Preethi L; Ford, Roseanne M; Lazzara, Matthew J; Papin, Jason A; Peirce, Shayn M; Saucerman, Jeffrey J; Lauffenburger, Douglas A

    2017-07-17

    Measuring and modeling the integrated behavior of biomolecular-cellular networks is central to systems biology. Over several decades, systems biology has been shaped by quantitative biologists, physicists, mathematicians, and engineers in different ways. However, the basic and applied versions of systems biology are not typically distinguished, which blurs the separate aspirations of the field and its potential for real-world impact. Here, we articulate an engineering approach to systems biology, which applies educational philosophy, engineering design, and predictive models to solve contemporary problems in an age of biomedical Big Data. A concerted effort to train systems bioengineers will provide a versatile workforce capable of tackling the diverse challenges faced by the biotechnological and pharmaceutical sectors in a modern, information-dense economy.

  3. Parallelization of KENO-Va Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Ramón, Javier; Peña, Jorge

    1995-07-01

    KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation through the Monte Carlo method. It is being used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code have been generated: one for shared-memory machines and another for distributed-memory systems using the PVM message-passing library. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, advanced seeds for the random numbers were used. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared-memory version. An FDDI network of 6 HP9000/735 workstations was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.
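
    A minimal sketch (not KENO-Va itself, and not PVM) of the reproducibility idea described above: each parallel worker tracks its share of histories with an independent, deterministically derived random stream, so the combined result does not depend on scheduling or worker order. The "transport" here is a placeholder tally.

```python
# Reproducible parallel Monte Carlo via per-worker random streams.
import numpy as np
from multiprocessing import Pool

def track_histories(args):
    seed_seq, n_histories = args
    rng = np.random.default_rng(seed_seq)          # independent stream per worker
    # toy "transport": score a tally from exponential path lengths
    return rng.exponential(scale=1.0, size=n_histories).sum()

if __name__ == "__main__":
    n_workers, n_per_worker = 4, 250_000
    child_seeds = np.random.SeedSequence(20090625).spawn(n_workers)
    with Pool(n_workers) as pool:
        tallies = pool.map(track_histories, [(s, n_per_worker) for s in child_seeds])
    # the combined tally is identical run-to-run, independent of worker scheduling
    print("combined tally:", sum(tallies))
```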

  4. Community structure from spectral properties in complex networks

    NASA Astrophysics Data System (ADS)

    Servedio, V. D. P.; Colaiori, F.; Capocci, A.; Caldarelli, G.

    2005-06-01

    We analyze the spectral properties of complex networks, focusing on their relation to community structure, and develop an algorithm based on correlations among components of different eigenvectors. The algorithm applies to general weighted networks and, in a suitably modified version, to directed networks. Our method correctly detects communities in sharply partitioned graphs, but it is also useful for analyzing more complex networks without a well-defined cluster structure, such as social and information networks. As an example, we test the algorithm on a large-scale data set from a psychological experiment on free word association, where it proves successful both in clustering words and in uncovering mental association patterns.
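
    A simplified sketch inspired by the approach described above: each node is represented by its components in a few leading eigenvectors of the adjacency matrix, and nodes with strongly similar component profiles are grouped together. For robustness this sketch uses cosine similarity of the profiles as a stand-in for the correlation measure; the test graph, eigenvector choice, and clustering settings are all illustrative assumptions.

```python
# Group nodes by the similarity of their eigenvector component profiles.
import numpy as np
import networkx as nx
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# planted partition graph: 4 dense groups, sparse links between groups
G = nx.planted_partition_graph(l=4, k=25, p_in=0.5, p_out=0.02, seed=0)
A = nx.to_numpy_array(G)

vals, vecs = np.linalg.eigh(A)          # ascending eigenvalues
profiles = vecs[:, -4:-1]               # components of the 2nd-4th largest eigenvectors

dist = pdist(profiles, metric="cosine")                       # condensed distance matrix
labels = fcluster(linkage(dist, method="average"), t=4, criterion="maxclust")
print("detected community sizes:", np.bincount(labels)[1:])
```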

  5. Noise-induced volatility of collective dynamics

    NASA Astrophysics Data System (ADS)

    Harras, Georges; Tessone, Claudio J.; Sornette, Didier

    2012-01-01

    Noise-induced volatility refers to a phenomenon of increased level of fluctuations in the collective dynamics of bistable units in the presence of a rapidly varying external signal, and intermediate noise levels. The archetypical signature of this phenomenon is that—beyond the increase in the level of fluctuations—the response of the system becomes uncorrelated with the external driving force, making it different from stochastic resonance. Numerical simulations and an analytical theory of a stochastic dynamical version of the Ising model on regular and random networks demonstrate the ubiquity and robustness of this phenomenon, which is argued to be a possible cause of excess volatility in financial markets, of enhanced effective temperatures in a variety of out-of-equilibrium systems, and of strong selective responses of immune systems of complex biological organisms. Extensive numerical simulations are compared with a mean-field theory for different network topologies.
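
    A toy Glauber-dynamics sketch (not the authors' code) of an Ising model on a random network driven by a rapidly switching external field, recording how strongly the collective state fluctuates at different noise (temperature) levels. Network size, drive amplitude, and temperatures are illustrative assumptions.

```python
# Driven Ising model on a random network: magnetization fluctuations vs. noise level.
import numpy as np
import networkx as nx

rng = np.random.default_rng(11)
G = nx.erdos_renyi_graph(n=400, p=0.02, seed=11)
A = nx.to_numpy_array(G)

def magnetization_std(T, steps=80_000, h0=0.3, period=20):
    s = rng.choice([-1.0, 1.0], size=A.shape[0])
    m_trace = []
    for t in range(steps):
        h = h0 * np.sign(np.sin(2 * np.pi * t / period))   # rapidly switching drive
        i = rng.integers(A.shape[0])
        field = A[i] @ s + h
        p_up = 1.0 / (1.0 + np.exp(-2.0 * field / T))      # Glauber flip probability
        s[i] = 1.0 if rng.random() < p_up else -1.0
        m_trace.append(s.mean())
    return float(np.std(m_trace[steps // 2:]))              # fluctuations after burn-in

for T in (1.0, 8.0, 30.0):
    print(f"T={T:>4}: std of magnetization = {magnetization_std(T):.3f}")
```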

  6. Projected power iteration for network alignment

    NASA Astrophysics Data System (ADS)

    Onaran, Efe; Villar, Soledad

    2017-08-01

    The network alignment problem asks for the best correspondence between two given graphs, so that the largest possible number of edges is matched. This problem appears in many scientific settings (such as the study of protein-protein interactions) and is very closely related to the quadratic assignment problem, which has the graph isomorphism, traveling salesman, and minimum bisection problems as particular cases. The graph matching problem is NP-hard in general. However, under some restrictive models for the graphs, algorithms can approximate the alignment efficiently. In that spirit, recent work by Feizi and collaborators introduces EigenAlign, a fast spectral method with convergence guarantees for Erdős-Rényi graphs. In this work we propose the algorithm Projected Power Alignment, which is a projected power iteration version of EigenAlign. We numerically show that it improves the recovery rates of EigenAlign, and we describe the theory that may be used to provide performance guarantees for Projected Power Alignment.
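
    A compact sketch of a projected power iteration for graph alignment, in the spirit of the method described above; this is an illustration, not the authors' EigenAlign or Projected Power Alignment code. The similarity iterate is repeatedly updated by a power step and pulled toward its projection onto permutation matrices, computed with the Hungarian algorithm. The mixing weight and graph parameters are assumptions.

```python
# Projected-power-iteration-style alignment of two isomorphic random graphs.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(5)
n = 100
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T                      # random undirected graph

perm = rng.permutation(n)                           # hidden correspondence
B = A[np.ix_(perm, perm)]                           # isomorphic copy with relabeled nodes

X = np.ones((n, n)) / n                             # flat initial similarity matrix
for _ in range(40):
    X = A @ X @ B                                   # power-iteration step
    X /= np.linalg.norm(X)                          # renormalize
    rows, cols = linear_sum_assignment(-X)          # project onto permutation matrices
    P = np.zeros_like(X); P[rows, cols] = 1.0
    X = 0.95 * X + 0.05 * P / np.sqrt(n)            # gently pull iterate toward its projection

rows, cols = linear_sum_assignment(-X)              # final matching: A-node i <-> B-node cols[i]
accuracy = np.mean(perm[cols] == np.arange(n))
print(f"fraction of nodes matched correctly: {accuracy:.2f}")
```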

  7. Seasonal scale water deficit forecasting in Africa and the Middle East using NASA's Land Information System (LIS)

    NASA Astrophysics Data System (ADS)

    Shukla, Shraddhanand; Arsenault, Kristi R.; Getirana, Augusto; Kumar, Sujay V.; Roningen, Jeanne; Zaitchik, Ben; McNally, Amy; Koster, Randal D.; Peters-Lidard, Christa

    2017-04-01

    Drought and water scarcity are among the important issues facing several regions within Africa and the Middle East. A seamless and effective monitoring and early warning system is needed by regional/national stakeholders. Such a system should support a proactive drought management approach and mitigate socio-economic losses to the extent possible. In this presentation, we report on the ongoing development and validation of a seasonal scale water deficit forecasting system based on NASA's Land Information System (LIS) and seasonal climate forecasts. First, our presentation will focus on the implementation and validation of the LIS models used for drought and water availability monitoring in the region. The second part will focus on evaluating drought and water availability forecasts. Finally, details will be provided of our ongoing collaboration with end-user partners in the region (e.g., USAID's Famine Early Warning Systems Network, FEWS NET) on formulating meaningful early warning indicators, effective communication, and seamless dissemination of the monitoring and forecasting products through NASA's web-services. The water deficit forecasting system thus far incorporates NOAA's Noah land surface model (LSM), version 3.3, the Variable Infiltration Capacity (VIC) model, version 4.12, NASA GMAO's Catchment LSM, and the Noah Multi-Physics (MP) LSM (the latter two incorporate prognostic water table schemes). In addition, the LSMs' surface and subsurface runoff are routed through the Hydrological Modeling and Analysis Platform (HyMAP) to simulate surface water dynamics. The LSMs are driven by NASA/GMAO's Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), and the USGS and UCSB Climate Hazards Group InfraRed Precipitation with Station (CHIRPS) daily rainfall dataset. The LIS software framework integrates these forcing datasets and drives the four LSMs and HyMAP. The Land Verification Toolkit (LVT) is used for the evaluation of the LSMs, as it provides model ensemble metrics and the ability to compare against a variety of remotely sensed measurements, like different evapotranspiration (ET) and soil moisture products, and other reanalysis datasets that are available for this region. Comparison of the models' energy and hydrological budgets will be shown for this region (and sub-basin level, e.g., Blue Nile River) and time period (1981-2015), along with evaluating ET, streamflow, groundwater storage and soil moisture, using evaluation metrics (e.g., anomaly correlation, RMSE, etc.). The system uses seasonal climate forecasts from NASA's GMAO (the Goddard Earth Observing System Model, version 5) and NCEP's Climate Forecast System, version 2, and it produces forecasts of soil moisture, ET and streamflow out to 6 months in the future. Forecasts of those variables are formulated in terms of indicators to provide forecasts of drought and water availability in the region.

  8. MODEL VERSION CONTROL FOR GREAT LAKES MODELS ON UNIX SYSTEMS

    EPA Science Inventory

    Scientific results of the Lake Michigan Mass Balance Project were provided where atrazine was measured and modeled. The presentation also provided the model version control system which has been used for models at Grosse Ile for approximately a decade and contains various version...

  9. Evidence for the construct validity of self-motivation as a correlate of exercise adherence in French older adults.

    PubMed

    André, Nathalie; Dishman, Rod K

    2012-04-01

    Exercise adherence involves a number of sociocognitive factors that influence the adoption and maintenance of regular physical activity. Among trait-like factors, self-motivation is believed to be a unique predictor of persistence during behavior change. The aim of this study was to validate the factor structure of a French version of the Self-Motivation Inventory (SMI) and to provide initial convergent and discriminant evidence for its construct validity as a correlate of exercise adherence. Four hundred seventy-one elderly were recruited and administered the SMI-10. Structural equation modeling tested the relation of SMI-10 scores with exercise adherence in a correlated network that included decisional balance and perceived quality of life. Acceptable evidence was found to support the factor validity and measurement equivalence of the French version of the SMI-10. Moreover, self-motivation was related to exercise adherence independently of decisional balance and perceived quality of life, providing initial evidence for construct validity.

  10. Optimization of the graph model of the water conduit network, based on the approach of search space reducing

    NASA Astrophysics Data System (ADS)

    Korovin, Iakov S.; Tkachenko, Maxim G.

    2018-03-01

    In this paper we present a heuristic approach that improves the efficiency of methods used to design efficient architectures for water distribution networks. The essence of the approach is a search space reduction procedure that limits the range of available pipe diameters that can be used for each edge of the network graph. To perform the reduction, two opposite boundary scenarios for the distribution of flows are analysed, after which the resulting range is further narrowed by applying a flow rate limitation for each edge of the network. The first boundary scenario provides the most uniform distribution of flow in the network; the opposite scenario creates the network with the highest possible flow level. The parameters of both distributions are calculated by optimizing systems of quadratic functions in a confined space, which can be performed effectively at small computational cost. This approach was used to modify a genetic algorithm (GA). The proposed GA allows a variable number of variants for each gene, according to the number of diameters in the per-edge list, taking flow restrictions into account. The proposed approach was applied to a well-known test network, the Hanoi water distribution network [1], and the results were compared with a classical GA with an unlimited search space. On the test data, the proposed approach significantly reduced the search space and provided faster and clearer convergence in comparison with the classical version of the GA.
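
    A minimal GA sketch of the search-space idea described above: each edge has its own restricted list of candidate diameters, and the genetic operators only ever pick values from that per-edge list. The diameter lists, cost coefficient, and penalty are placeholders; a real application would couple the GA to a hydraulic solver rather than this toy fitness function.

```python
# GA with per-edge restricted diameter alphabets (toy fitness, illustrative only).
import random

random.seed(1)

# hypothetical per-edge candidate diameters (mm), already narrowed by flow bounds
allowed = [
    [300, 400, 500],
    [200, 300, 400],
    [400, 500, 600, 700],
    [150, 200, 300],
    [300, 400],
]
COST_PER_MM = 2.0          # placeholder cost coefficient
MIN_TOTAL = 1800           # placeholder stand-in for a hydraulic feasibility constraint

def fitness(genome):
    total = sum(genome)
    penalty = max(0, MIN_TOTAL - total) * 10     # penalize "infeasible" designs
    return COST_PER_MM * total + penalty         # lower is better

def random_genome():
    return [random.choice(options) for options in allowed]

def mutate(genome, rate=0.2):
    return [random.choice(options) if random.random() < rate else g
            for g, options in zip(genome, allowed)]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [random_genome() for _ in range(40)]
for gen in range(60):
    pop.sort(key=fitness)                                     # ascending: best designs first
    parents = pop[:10]                                        # simple truncation selection
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(30)]

best = min(pop, key=fitness)
print("best diameters:", best, "cost:", fitness(best))
```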

  11. SINDA'85/FLUINT - SYSTEMS IMPROVED NUMERICAL DIFFERENCING ANALYZER AND FLUID INTEGRATOR (CONVEX VERSION)

    NASA Technical Reports Server (NTRS)

    Cullimore, B.

    1994-01-01

    SINDA, the Systems Improved Numerical Differencing Analyzer, is a software system for solving lumped parameter representations of physical problems governed by diffusion-type equations. SINDA was originally designed for analyzing thermal systems represented in electrical analog, lumped parameter form, although its use may be extended to include other classes of physical systems which can be modeled in this form. As a thermal analyzer, SINDA can handle such interrelated phenomena as sublimation, diffuse radiation within enclosures, transport delay effects, and sensitivity analysis. FLUINT, the FLUid INTegrator, is an advanced one-dimensional fluid analysis program that solves arbitrary fluid flow networks. The working fluids can be single phase vapor, single phase liquid, or two phase. The SINDA'85/FLUINT system permits the mutual influences of thermal and fluid problems to be analyzed. The SINDA system consists of a programming language, a preprocessor, and a subroutine library. The SINDA language is designed for working with lumped parameter representations and finite difference solution techniques. The preprocessor accepts programs written in the SINDA language and converts them into standard FORTRAN. The SINDA library consists of a large number of FORTRAN subroutines that perform a variety of commonly needed actions. The use of these subroutines can greatly reduce the programming effort required to solve many problems. A complete run of a SINDA'85/FLUINT model is a four step process. First, the user's desired model is run through the preprocessor which writes out data files for the processor to read and translates the user's program code. Second, the translated code is compiled. The third step requires linking the user's code with the processor library. Finally, the processor is executed. SINDA'85/FLUINT program features include 20,000 nodes, 100,000 conductors, 100 thermal submodels, and 10 fluid submodels. SINDA'85/FLUINT can also model two phase flow, capillary devices, user defined fluids, gravity and acceleration body forces on a fluid, and variable volumes. SINDA'85/FLUINT offers the following numerical solution techniques. The Finite difference formulation of the explicit method is the Forward-difference explicit approximation. The formulation of the implicit method is the Crank-Nicolson approximation. The program allows simulation of non-uniform heating and facilitates modeling thin-walled heat exchangers. The ability to model non-equilibrium behavior within two-phase volumes is included. Recent improvements to the program were made in modeling real evaporator-pumps and other capillary-assist evaporators. SINDA'85/FLUINT is available by license for a period of ten (10) years to approved licensees. The licensed program product includes the source code and one copy of the supporting documentation. Additional copies of the documentation may be purchased separately at any time. SINDA'85/FLUINT is written in FORTRAN 77. Version 2.3 has been implemented on Cray series computers running UNICOS, CONVEX computers running CONVEX OS, and DEC RISC computers running ULTRIX. Binaries are included with the Cray version only. The Cray version of SINDA'85/FLUINT also contains SINGE, an additional graphics program developed at Johnson Space Flight Center. Both source and executable code are provided for SINGE. Users wishing to create their own SINGE executable will also need the NASA Device Independent Graphics Library (NASADIG, previously known as SMDDIG; UNIX version, MSC-22001). 
The Cray and CONVEX versions of SINDA'85/FLUINT are available on 9-track 1600 BPI UNIX tar format magnetic tapes. The CONVEX version is also available on a .25 inch streaming magnetic tape cartridge in UNIX tar format. The DEC RISC ULTRIX version is available on a TK50 magnetic tape cartridge in UNIX tar format. SINDA was developed in 1971, and first had fluid capability added in 1975. SINDA'85/FLUINT version 2.3 was released in 1990.

  12. The effect of happiness and sadness on alerting, orienting, and executive attention.

    PubMed

    Finucane, Anne M; Whiteman, Martha C; Power, Mick J

    2010-05-01

    According to the attention network approach, attention is best understood in terms of three functionally and neuroanatomically distinct networks: alerting, orienting, and executive attention. An important question is whether the experience of emotion differentially influences the efficiency of these networks. In this study, 180 participants were randomly assigned to a happy, sad, or control condition and undertook a modified version of the Attention Network Test. The results showed no effect of happiness or sadness on alerting, orienting, or executive attention. However, sad participants showed reduced intrinsic alertness. This suggests that sadness reduces general alertness rather than impairing the efficiency of specific attention networks.

  13. A Research on Development of The Multi-mode Flood Forecasting System Version Management

    NASA Astrophysics Data System (ADS)

    Shen, J.-C.; Chang, C. H.; Lien, H. C.; Wu, S. J.; Horng, M. J.

    2009-04-01

    With global economic and technological development, urbanization and population density continue to rise. At the same time, natural buffer spaces and resources are weakened year after year, so that potential environmental disasters become increasingly serious and the economic and environmental losses caused by disasters continue to grow. In view of this, countries worldwide are actively pursuing cross-sectoral integration of disaster prevention technology: specialized disaster prevention research, network integration, high-speed data transmission, information support for disaster management decision-making, and cross-border global disaster information networks have become international trends in disaster prevention science and technology. In Taiwan, where natural disasters have repeatedly caused losses of life and property, establishing such disaster prevention applications has become very important. FEWS_Taiwan, a flood warning system developed by Delft Hydraulics and introduced by the Water Resources Agency (WRA), provides functionality for users to modify its contents and to add basins, regions, data sources, models, and so on. Despite this advantage, version differences arising from different users or teams create difficulties in synchronization and integration; different research teams also add different types of meteorological and hydrological data. From the perspective of the WRA, standard operating procedures for system integration are needed so that the effort spent on version control due to version differences is reduced or eliminated. For FEWS_Taiwan, this paper proposes feasible avenues and solutions for smoothly integrating different configurations from different teams. The current system covers the basic flood forecasting structure for 20 of Taiwan's main rivers, with regular updating of the relevant parameters using new survey results in order to obtain better flood forecasts.

  14. Stream classification of the Apalachicola-Chattahoochee-Flint River System to support modeling of aquatic habitat response to climate change

    USGS Publications Warehouse

    Elliott, Caroline M.; Jacobson, Robert B.; Freeman, Mary C.

    2014-01-01

    A stream classification and associated datasets were developed for the Apalachicola-Chattahoochee-Flint River Basin to support biological modeling of species response to climate change in the southeastern United States. The U.S. Geological Survey and the Department of the Interior’s National Climate Change and Wildlife Science Center established the Southeast Regional Assessment Project (SERAP) which used downscaled general circulation models to develop landscape-scale assessments of climate change and subsequent effects on land cover, ecosystems, and priority species in the southeastern United States. The SERAP aquatic and hydrologic dynamics modeling efforts involve multiscale watershed hydrology, stream-temperature, and fish-occupancy models, which all are based on the same stream network. Models were developed for the Apalachicola-Chattahoochee-Flint River Basin and subbasins in Alabama, Florida, and Georgia, and for the Upper Roanoke River Basin in Virginia. The stream network was used as the spatial scheme through which information was shared across the various models within SERAP. Because these models operate at different scales, coordinated pair versions of the network were delineated, characterized, and parameterized for coarse- and fine-scale hydrologic and biologic modeling. The stream network used for the SERAP aquatic models was extracted from a 30-meter (m) scale digital elevation model (DEM) using standard topographic analysis of flow accumulation. At the finer scale, reaches were delineated to represent lengths of stream channel with fairly homogenous physical characteristics (mean reach length = 350 m). Every reach in the network is designated with geomorphic attributes including upstream drainage basin area, channel gradient, channel width, valley width, Strahler and Shreve stream order, stream power, and measures of stream confinement. The reach network was aggregated from tributary junction to tributary junction to define segments for the benefit of hydrological, soil erosion, and coarser ecological modeling. Reach attributes are summarized for each segment. In six subbasins segments are assigned additional attributes about barriers (usually impoundments) to fish migration and stream isolation. Segments in the six sub-basins are also attributed with percent urban area for the watershed upstream from the stream segment for each decade from 2010–2100 from models of urban growth. On a broader scale, for application in a coarse-scale species-response model, the stream-network information is aggregated and summarized by 256 drainage subbasins (Hydrologic Response Units) used for watershed hydrologic and stream-temperature models. A model of soil erodibility based on the Revised Universal Soil Loss Equation also was developed at this scale to parameterize a model to evaluate stream condition. The reach-scale network was classified using multivariate clustering based on modeled channel width, valley width, and mean reach gradient as variables. The resulting classification consists of a 6-cluster and a 12-cluster classification for every reach in the Apalachicola-Chattahoochee-Flint Basin. We present an example of the utility of the classification that was tested using the occurrence of two species of darters and two species of minnows in the Apalachicola-Chattahoochee-Flint River Basin, the blackbanded darter and Halloween darter, and the bluestripe shiner and blacktail shiner.
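
    Illustrative only: multivariate clustering of reach attributes (channel width, valley width, and mean reach gradient) into 6 and 12 classes, in the spirit of the classification described above. The data here are synthetic stand-ins, not the Apalachicola-Chattahoochee-Flint reach network, and the preprocessing choices are assumptions.

```python
# Cluster synthetic reach attributes into 6- and 12-class stream classifications.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_reaches = 5000
reaches = np.column_stack([
    rng.lognormal(mean=3.0, sigma=0.6, size=n_reaches),   # channel width (m)
    rng.lognormal(mean=5.0, sigma=0.8, size=n_reaches),   # valley width (m)
    rng.lognormal(mean=-6.0, sigma=1.0, size=n_reaches),  # mean reach gradient (m/m)
])

X = StandardScaler().fit_transform(np.log(reaches))        # log-transform and standardize
for k in (6, 12):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"{k}-cluster classification, reaches per class:", np.bincount(labels))
```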

  15. The Plant Genome Integrative Explorer Resource: PlantGenIE.org.

    PubMed

    Sundell, David; Mannapperuma, Chanaka; Netotea, Sergiu; Delhomme, Nicolas; Lin, Yao-Cheng; Sjödin, Andreas; Van de Peer, Yves; Jansson, Stefan; Hvidsten, Torgeir R; Street, Nathaniel R

    2015-12-01

    Accessing and exploring large-scale genomics data sets remains a significant challenge to researchers without specialist bioinformatics training. We present the integrated PlantGenIE.org platform for exploration of Populus, conifer and Arabidopsis genomics data, which includes expression networks and associated visualization tools. Standard features of a model organism database are provided, including genome browsers, gene list annotation, Blast homology searches and gene information pages. Community annotation updating is supported via integration of WebApollo. We have produced an RNA-sequencing (RNA-Seq) expression atlas for Populus tremula and have integrated these data within the expression tools. An updated version of the ComPlEx resource for performing comparative plant expression analyses of gene coexpression network conservation between species has also been integrated. The PlantGenIE.org platform provides intuitive access to large-scale and genome-wide genomics data from model forest tree species, facilitating both community contributions to annotation improvement and tools supporting use of the included data resources to inform biological insight. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.

  16. Wellness partners: design and evaluation of a web-based physical activity diary with social gaming features for adults.

    PubMed

    Gotsis, Marientina; Wang, Hua; Spruijt-Metz, Donna; Jordan-Marsh, Maryalice; Valente, Thomas William

    2013-02-01

    The United States is currently in an age of obesity and inactivity despite increasing public awareness and scientific knowledge of detrimental long-term health effects of this lifestyle. Behavior-tracking diaries offer an effective strategy for physical activity adherence and weight management. Furthermore, Web-based physical activity diaries can engage meaningful partners in people's social networks through fun online gaming interactions and generate motivational mechanisms for effective behavioral change and positive health outcomes. Wellness Partners (WP) is a Web-based intervention in the form of a physical activity diary with social networking and game features. Two versions were designed and developed for the purpose of this study-"Diary" only and "Diary+Game". The objectives of this study included pilot testing the research process of this intervention design, implementation, evaluation, and exploring the effectiveness of social gaming features on adult participants' physical activity and anthropometric measures. We conducted a field experiment with randomized crossover design. Assessments occurred at baseline, first follow-up (FU, 5-8 weeks after using one version of WP), and second FU (5-8 weeks of using the other version of WP). In the control condition, participants started with the "Diary" version of WP while in the experimental condition, participants started with the "Diary+Game" version of WP. A total of 54 adults (egos) ages 44-88, and their family and friends (alters) ages 17-69 participated in the study in ego-network groups. Both egos and their alters completed online surveys about their exercise habits. In addition, egos completed anthropometric measurements of BMI, fat percentage, and fat mass by bioimpedance. From October 2009 to May 2010, flyers, emails, and Web advertisements yielded 335 volunteers who were screened. Rolling recruitment resulted in enrollment of 142 qualified participants in 54 ego-network groups, which were randomly assigned to a study condition. The final analytic sample included 87 individuals from 41 groups. Data were collected from December 2009 to August 2010, and data analysis was completed in 2011. Overall, the participants were given access to the intervention for 10-13 weeks. Statistical analysis suggested an increase in self-reported exercise frequency (mean days per week) from baseline (2.57, SD 1.92) to first FU (3.21, SD 1.74) in both conditions. Stronger effects were seen in the condition where Diary+Game was played first, especially in network groups with larger age variation between the alters and egos. Overall, the decrease in egos' BMI was statistically significant from baseline to first FU, with greater decrease for those in the Diary+Game first condition (-0.26 vs -0.16 in the Diary first condition). The Wellness Partners program increased physical activity among participants and resulted in health benefits among the egos. Web-based diary interventions designed with social gaming features hold potential to promote active lifestyles for middle-age adults and people in their social networks.

  17. A Semi-Structured MODFLOW-USG Model to Evaluate Local Water Sources to Wells for Decision Support.

    PubMed

    Feinstein, Daniel T; Fienen, Michael N; Reeves, Howard W; Langevin, Christian D

    2016-07-01

    In order to better represent the configuration of the stream network and simulate local groundwater-surface water interactions, a version of MODFLOW with refined spacing in the topmost layer was applied to a Lake Michigan Basin (LMB) regional groundwater-flow model developed by the U.S. Geological Survey. Regional MODFLOW models commonly use coarse grids over large areas; this coarse spacing precludes model application to local management issues (e.g., surface-water depletion by wells) without recourse to labor-intensive inset models. Implementation of an unstructured formulation within the MODFLOW framework (MODFLOW-USG) allows application of regional models to address local problems. A "semi-structured" approach (uniform lateral spacing within layers, different lateral spacing among layers) was tested using the LMB regional model. The parent 20-layer model with uniform 5000-foot (1524-m) lateral spacing was converted to 4 layers with 500-foot (152-m) spacing in the top glacial (Quaternary) layer, where surface water features are located, overlying coarser resolution layers representing deeper deposits. This semi-structured version of the LMB model reproduces regional flow conditions, whereas the finer resolution in the top layer improves the accuracy of the simulated response of surface water to shallow wells. One application of the semi-structured LMB model is to provide statistical measures of the correlation between modeled inputs and the simulated amount of water that wells derive from local surface water. The relations identified in this paper serve as the basis for metamodels to predict (with uncertainty) surface-water depletion in response to shallow pumping within and potentially beyond the modeled area; see Fienen et al. (2015a). Published 2016. This article is a U.S. Government work and is in the public domain in the USA.

  18. A semi-structured MODFLOW-USG model to evaluate local water sources to wells for decision support

    USGS Publications Warehouse

    Feinstein, Daniel T.; Fienen, Michael N.; Reeves, Howard W.; Langevin, Christian D.

    2016-01-01

    In order to better represent the configuration of the stream network and simulate local groundwater-surface water interactions, a version of MODFLOW with refined spacing in the topmost layer was applied to a Lake Michigan Basin (LMB) regional groundwater-flow model developed by the U.S. Geological Survey. Regional MODFLOW models commonly use coarse grids over large areas; this coarse spacing precludes model application to local management issues (e.g., surface-water depletion by wells) without recourse to labor-intensive inset models. Implementation of an unstructured formulation within the MODFLOW framework (MODFLOW-USG) allows application of regional models to address local problems. A “semi-structured” approach (uniform lateral spacing within layers, different lateral spacing among layers) was tested using the LMB regional model. The parent 20-layer model with uniform 5000-foot (1524-m) lateral spacing was converted to 4 layers with 500-foot (152-m) spacing in the top glacial (Quaternary) layer, where surface water features are located, overlying coarser resolution layers representing deeper deposits. This semi-structured version of the LMB model reproduces regional flow conditions, whereas the finer resolution in the top layer improves the accuracy of the simulated response of surface water to shallow wells. One application of the semi-structured LMB model is to provide statistical measures of the correlation between modeled inputs and the simulated amount of water that wells derive from local surface water. The relations identified in this paper serve as the basis for metamodels to predict (with uncertainty) surface-water depletion in response to shallow pumping within and potentially beyond the modeled area; see Fienen et al. (2015a).

  19. Synaptic behaviors of a single metal-oxide-metal resistive device

    NASA Astrophysics Data System (ADS)

    Choi, Sang-Jun; Kim, Guk-Bae; Lee, Kyoobin; Kim, Ki-Hong; Yang, Woo-Young; Cho, Soohaeng; Bae, Hyung-Jin; Seo, Dong-Seok; Kim, Sang-Il; Lee, Kyung-Jin

    2011-03-01

    The mammalian brain is far superior to today's electronic circuits in intelligence and efficiency. Its functions are realized by the network of neurons connected via synapses. Much effort has been expended in finding satisfactory electronic neural networks that act like brains, and especially in finding an electronic version of the synapse that is capable of weight control and independent of external data storage. We demonstrate experimentally that a single metal-oxide-metal structure successfully stores biological synaptic weight variations (synaptic plasticity) without any external storage node or circuit. Our device also demonstrates the reliability of this plasticity experimentally, using a model that considers the time dependence of spikes. All these properties are embodied by the change of resistance level corresponding to the history of injected voltage-pulse signals. Moreover, we prove the capability of second-order learning of the multi-resistive device by applying it to a circuit composed of transistors. We anticipate that our demonstration will invigorate the study of electronic neural networks using non-volatile multi-resistive devices, which are simpler and superior compared to other storage devices.
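
    A generic phenomenological sketch (not the device model from the paper): a single two-terminal element whose conductance is nudged up or down by each voltage pulse, with soft bounds, so the pulse history is stored as an analog synaptic weight. The conductance range and update rate are illustrative assumptions.

```python
# Phenomenological memristive synapse: conductance state tracks pulse history.
import numpy as np

class MemristiveSynapse:
    def __init__(self, g_min=1e-6, g_max=1e-4, g0=5e-5, alpha=0.05):
        self.g_min, self.g_max, self.alpha = g_min, g_max, alpha
        self.g = g0                                  # conductance state (siemens)

    def apply_pulse(self, v):
        """Potentiating (v > 0) or depressing (v < 0) pulse with soft saturation."""
        if v > 0:
            self.g += self.alpha * (self.g_max - self.g)
        elif v < 0:
            self.g -= self.alpha * (self.g - self.g_min)
        return self.g

syn = MemristiveSynapse()
pulse_train = [+1] * 10 + [-1] * 4 + [+1] * 6        # arbitrary stimulation history
trace = [syn.apply_pulse(v) for v in pulse_train]
print("conductance after each pulse (uS):", np.round(np.array(trace) * 1e6, 2))
```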

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granger, Brian R.; Chang, Yi -Chien; Wang, Yan

    Here, the complexity of metabolic networks in microbial communities poses an unresolved visualization and interpretation challenge. We address this challenge in the newly expanded version of a software tool for the analysis of biological networks, VisANT 5.0. We focus in particular on facilitating the visual exploration of metabolic interaction between microbes in a community, e.g. as predicted by COMETS (Computation of Microbial Ecosystems in Time and Space), a dynamic stoichiometric modeling framework. Using VisANT's unique meta-graph implementation, we show how one can use VisANT 5.0 to explore different time-dependent ecosystem-level metabolic networks. In particular, we analyze the metabolic interaction network between two bacteria previously shown to display an obligate cross-feeding interdependency. In addition, we illustrate how a putative minimal gut microbiome community could be represented in our framework, making it possible to highlight interactions across multiple coexisting species. We envisage that the "symbiotic layout" of VisANT can be employed as a general tool for the analysis of metabolism in complex microbial communities as well as heterogeneous human tissues.
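
    A minimal sketch of the kind of ecosystem-level interaction graph discussed above, built with networkx rather than VisANT itself; the organism and metabolite names are illustrative placeholders, and the real COMETS/VisANT meta-graphs carry time-resolved fluxes:

      import networkx as nx

      # Toy ecosystem-level interaction graph: organisms and metabolites as nodes,
      # secretion/uptake as directed edges (names illustrative, fluxes omitted).
      G = nx.DiGraph()
      G.add_edge("organism_A", "metabolite_X", role="secretes")
      G.add_edge("metabolite_X", "organism_B", role="consumes")
      G.add_edge("organism_B", "metabolite_Y", role="secretes")
      G.add_edge("metabolite_Y", "organism_A", role="consumes")  # obligate cross-feeding loop

      for u, v, data in G.edges(data=True):
          print(f"{u} --{data['role']}--> {v}")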

  1. Deep greedy learning under thermal variability in full diurnal cycles

    NASA Astrophysics Data System (ADS)

    Rauss, Patrick; Rosario, Dalton

    2017-08-01

    We study the generalization and scalability behavior of a deep belief network (DBN) applied to a challenging long-wave infrared hyperspectral dataset, consisting of radiance from several manmade and natural materials within a fixed site located 500 m from an observation tower. The collections cover multiple full diurnal cycles and include different atmospheric conditions. Using complementary priors, a DBN uses a greedy algorithm that learns deep, directed belief networks one layer at a time, with the top two layers forming an undirected associative memory. The greedy algorithm initializes a slower learning procedure, which fine-tunes the weights, using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of spectral data and their labels, despite significant data variability between and within classes due to environmental and temperature variation occurring within and between full diurnal cycles. We argue, however, that more questions than answers are raised regarding the generalization capacity of these deep nets through experiments aimed at investigating their training and augmented learning behavior.
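
    The following is a minimal sketch of greedy layer-wise pretraining in the spirit of a DBN, using scikit-learn RBMs on toy binary data rather than the hyperspectral dataset and contrastive wake-sleep fine-tuning described above; each RBM is trained on the activations of the layer below, and a logistic read-out stands in for supervised fine-tuning:

      import numpy as np
      from sklearn.neural_network import BernoulliRBM
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import Pipeline

      rng = np.random.default_rng(0)
      X = (rng.random((200, 64)) > 0.5).astype(float)  # toy binary "spectra"
      y = rng.integers(0, 2, size=200)                  # toy material labels

      # Greedy layer-wise stack: each RBM is fit on the activations of the layer below;
      # the logistic read-out stands in for the supervised fine-tuning stage.
      model = Pipeline([
          ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=10, random_state=0)),
          ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=10, random_state=0)),
          ("clf", LogisticRegression(max_iter=1000)),
      ])
      model.fit(X, y)
      print("training accuracy on toy data:", model.score(X, y))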

  2. Outcomes from the GLEON fellowship program. Training graduate students in data driven network science.

    NASA Astrophysics Data System (ADS)

    Dugan, H.; Hanson, P. C.; Weathers, K. C.

    2016-12-01

    In the water sciences there is a massive need for graduate students who possess the analytical and technical skills to deal with large datasets and function in the new paradigm of open, collaborative science. The Global Lake Ecological Observatory Network (GLEON) graduate fellowship program (GFP) was developed as an interdisciplinary training program to supplement the intensive disciplinary training of traditional graduate education. The primary goal of the GFP was to train a diverse cohort of graduate students in network science, open-web technologies, collaboration, and data analytics, and importantly to provide the opportunity to use these skills to conduct collaborative research resulting in publishable scientific products. The GFP is run as a series of three week-long workshops over two years that brings together a cohort of twelve students. In addition, fellows are expected to attend and contribute to at least one international GLEON all-hands meeting. Here, we provide examples of training modules in the GFP (model building, data QA/QC, information management, Bayesian modeling, open coding/version control, national data programs), as well as scientific outputs (manuscripts, software products, and new global datasets) produced by the fellows, and the process by which this team science was catalyzed. Data-driven education that lets students apply learned skills to real research projects reinforces concepts, provides motivation, and can benefit their publication record. This program design is extendable to other institutions and networks.

  3. Retina Image Screening and Analysis Software Version 2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tobin, Jr., Kenneth W.; Karnowski, Thomas P.; Aykac, Deniz

    2009-04-01

    The software allows physicians or researchers to ground-truth images of retinas, identifying key physiological features and lesions that are indicative of disease. The software features methods to automatically detect the physiological features and lesions. The software contains code to measure the quality of images received from a telemedicine network; create and populate a database for a telemedicine network; review and report the diagnosis of a set of images; and also contains components to transmit images from a Zeiss camera to the network through SFTP.

  4. DoD Electronic Data Interchange (EDI) Convention: ASC X12 Transaction Set 836 Contract Award (Version 003010)

    DTIC Science & Technology

    1993-01-01

    upon designation of DoD Activity Address Code (DoDAAC) or other code coordinated with the value-added network (VAN). Mandatory ISA06 106 Interchange...coordinated with the value-added network (VAN). Non-DoD activities use identification code qualified by ISA05 and coordinated with the VAN. Mandatory...designation of DoD Activity Address Code (DoDAAC) or other code coordinated with the value-added network (VAN). Mandatory ISA08 107 Interchange Receiver

  5. Development and assessment of a higher-spatial-resolution (4.4 km) MISR aerosol optical depth product using AERONET-DRAGON data

    NASA Astrophysics Data System (ADS)

    Garay, Michael J.; Kalashnikova, Olga V.; Bull, Michael A.

    2017-04-01

    Since early 2000, the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite has been acquiring data that have been used to produce aerosol optical depth (AOD) and particle property retrievals at 17.6 km spatial resolution. Capitalizing on the capabilities provided by multi-angle viewing, the current operational (Version 22) MISR algorithm performs well, with about 75 % of MISR AOD retrievals globally falling within 0.05 or 20 % × AOD of paired validation data from the ground-based Aerosol Robotic Network (AERONET). This paper describes the development and assessment of a prototype version of a higher-spatial-resolution 4.4 km MISR aerosol optical depth product compared against multiple AERONET Distributed Regional Aerosol Gridded Observations Network (DRAGON) deployments around the globe. In comparisons with AERONET-DRAGON AODs, the 4.4 km resolution retrievals show improved correlation (r = 0.9595), smaller RMSE (0.0768), reduced bias (-0.0208), and a larger fraction within the expected error envelope (80.92 %) relative to the Version 22 MISR retrievals.
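
    A minimal Python sketch of the validation statistics quoted above (correlation, RMSE, bias, and the fraction of matched retrievals falling inside the 0.05 or 20 % × AOD envelope), computed here for a few illustrative AOD pairs rather than the actual DRAGON matchups:

      import numpy as np

      def validate(misr_aod, aeronet_aod):
          """Correlation, RMSE, bias and fraction within the 0.05 or 20% x AOD envelope."""
          misr_aod, aeronet_aod = np.asarray(misr_aod), np.asarray(aeronet_aod)
          diff = misr_aod - aeronet_aod
          r = np.corrcoef(misr_aod, aeronet_aod)[0, 1]
          rmse = np.sqrt(np.mean(diff ** 2))
          bias = np.mean(diff)
          envelope = np.maximum(0.05, 0.20 * aeronet_aod)
          frac_in = np.mean(np.abs(diff) <= envelope)
          return r, rmse, bias, frac_in

      # Illustrative matched pairs only, not actual MISR/AERONET values.
      r, rmse, bias, frac = validate([0.12, 0.31, 0.08, 0.55], [0.10, 0.33, 0.11, 0.50])
      print(f"r={r:.3f}  RMSE={rmse:.3f}  bias={bias:+.3f}  in-envelope={frac:.0%}")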

  6. The Department of Defense’s Transition of Program of Record (POR) Systems from Internet Protocol Version Four (IPv4) to Internet Protocol Version Six (IPv6)

    DTIC Science & Technology

    2006-12-01

  7. Improving the Air Mobility Command’s Air Refueler Route Building Capabilities

    DTIC Science & Technology

    2014-03-27

    routing tool. Sundar and Rathinam [18] also study a traveling salesman version of the problem in the unmanned aerial vehicle realm. Their focus is on...constrained shortest path with fuel limitations. The objective is to minimize the distance traveled. Some aircraft routing problems involve...radius and network density their only limitations. O'Rourke et al. [15] examine a traveling salesman version of aircraft routing in the unmanned aerial

  8. The Development of a Small-World Network of Higher Education Students, Using a Large-Group Problem-Solving Method

    ERIC Educational Resources Information Center

    Sousa, Fernando Cardoso; Monteiro, Ileana Pardal; Pellissier, René

    2014-01-01

    This article presents the development of a small-world network using an adapted version of the large-group problem-solving method "Future Search." Two management classes in a higher education setting were selected and required to plan a project. The students completed a survey focused on the frequency of communications before and after…

  9. Dr. Dan Arvizu Keynote Presentation Text Version | NREL

    Science.gov Websites

    stakeholders, investors, cleantech start-ups, all looking to network, get to know one another, find avenues for 2010, attending the forum and really getting more acquainted with this ecosystem. The one-on-one networking session kicks off the event and it's primarily three hours of one-on-one meetings between start

  10. Creating an Online Laboratory

    DTIC Science & Technology

    2015-03-18

    Problem (TSP) to solve, a canonical computer science problem that involves identifying the shortest itinerary for a hypothetical salesman traveling among a...also created working versions of the travelling salesperson problem, prisoners’ dilemma, public goods game, ultimatum game, word ladders, and...the task within networks of others performing the task. Thus, we built five problems which could be embedded in networks: the traveling salesperson
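
    For reference, a minimal brute-force sketch of the travelling-salesperson task mentioned in the snippet above, enumerating every closed tour over a handful of illustrative city coordinates; the study's networked, human-subject version of the task is not reproduced here:

      from itertools import permutations
      from math import dist

      # Illustrative city coordinates; enumerate all closed tours and keep the shortest.
      cities = [(0, 0), (2, 1), (1, 3), (4, 2), (3, 0)]

      def tour_length(order):
          return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
                     for i in range(len(order)))

      best = min(permutations(range(len(cities))), key=tour_length)
      print("best tour:", best, "length:", round(tour_length(best), 3))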

  11. At-Least Version of the Generalized Minimum Spanning Tree Problem: Optimization Through Ant Colony System and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Janich, Karl W.

    2005-01-01

    The At-Least version of the Generalized Minimum Spanning Tree Problem (L-GMST) is a problem in which the optimal solution connects all defined clusters of nodes in a given network at a minimum cost. The L-GMST is NP-hard; therefore, metaheuristic algorithms have been used to find reasonable solutions to the problem as opposed to computationally feasible exact algorithms, which many believe do not exist for such a problem. One such metaheuristic uses a swarm-intelligent Ant Colony System (ACS) algorithm, in which agents converge on a solution through the weighing of local heuristics, such as the shortest available path and the number of agents that recently used a given path. However, in a network using a solution derived from the ACS algorithm, some nodes may move around to different clusters and cause small changes in the network makeup. Rerunning the algorithm from scratch would be inefficient given how minor these changes are, so a genetic algorithm based on the top few solutions found in the ACS algorithm is proposed to quickly and efficiently adapt the network to these small changes.
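
    A minimal sketch of the Ant Colony System edge-choice rule referred to above, in which an agent picks its next node with probability proportional to pheromone strength weighted against distance; the pheromone and distance values, and the alpha/beta exponents, are illustrative assumptions rather than the settings used in the L-GMST study:

      import random

      def choose_next(current, candidates, pheromone, distance, alpha=1.0, beta=2.0):
          """Pick the next node with probability ~ pheromone^alpha * (1/distance)^beta."""
          weights = [pheromone[(current, c)] ** alpha * (1.0 / distance[(current, c)]) ** beta
                     for c in candidates]
          total = sum(weights)
          return random.choices(candidates, weights=[w / total for w in weights])[0]

      # Illustrative pheromone and distance tables for a single decision step.
      pheromone = {("A", "B"): 0.8, ("A", "C"): 0.3}
      distance = {("A", "B"): 2.0, ("A", "C"): 1.0}
      print("agent moves from A to", choose_next("A", ["B", "C"], pheromone, distance))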

  12. Telescope networking and user support via Remote Telescope Markup Language

    NASA Astrophysics Data System (ADS)

    Hessman, Frederic V.; Pennypacker, Carlton R.; Romero-Colmenero, Encarni; Tuparev, Georg

    2004-09-01

    Remote Telescope Markup Language (RTML) is an XML-based interface/document format designed to facilitate the exchange of astronomical observing requests and results between investigators and observatories as well as within networks of observatories. While originally created to support simple imaging telescope requests (Versions 1.0-2.1), RTML Version 3.0 now supports a wide range of applications, from request preparation, exposure calculation, spectroscopy, and observation reports to remote telescope scheduling, target-of-opportunity observations and telescope network administration. The elegance of RTML is that all of this is made possible using a public XML Schema which provides a general-purpose, easily parsed, and syntax-checked medium for the exchange of astronomical and user information while not restricting or otherwise constraining the use of the information at either end. Thus, RTML can be used to connect heterogeneous systems and their users without requiring major changes in existing local resources and procedures. Projects as very different as a number of advanced amateur observatories, the global Hands-On Universe project, the MONET network (robotic imaging), the STELLA consortium (robotic spectroscopy), and the 11-m Southern African Large Telescope are now using or intending to use RTML in various forms and for various purposes.
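
    A minimal sketch of an RTML-style observing request assembled with Python's ElementTree; the element and attribute names below are illustrative placeholders and do not follow the published RTML 3.0 schema:

      import xml.etree.ElementTree as ET

      # Illustrative element/attribute names only; not the published RTML 3.0 schema.
      request = ET.Element("RTML", version="3.0", mode="request")
      target = ET.SubElement(request, "Target", name="example-target")
      ET.SubElement(target, "RightAscension").text = "12:34:56.7"
      ET.SubElement(target, "Declination").text = "-01:23:45"
      schedule = ET.SubElement(request, "Schedule")
      ET.SubElement(schedule, "Exposure", unit="seconds").text = "300"

      # A receiving observatory would validate such a document against the public XML
      # Schema before scheduling it, which is what keeps heterogeneous systems interoperable.
      print(ET.tostring(request, encoding="unicode"))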

  13. Impact of office productivity cloud computing on energy consumption and greenhouse gas emissions.

    PubMed

    Williams, Daniel R; Tang, Yinshan

    2013-05-07

    Cloud computing is usually regarded as being energy efficient and thus emitting fewer greenhouse gases (GHG) than traditional forms of computing. When the energy consumption of Microsoft's cloud computing Office 365 (O365) and traditional Office 2010 (O2010) software suites was tested and modeled, some cloud services were found to consume more energy than the traditional form. The developed model in this research took into consideration the energy consumption at the three main stages of data transmission: data center, network, and end-user device. Comparable products from each suite were selected and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the networking and user device stages were measured directly. A new measurement and software apportionment approach was defined and utilized, allowing the power consumption of cloud services to be directly measured for the user device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than their standalone counterparts. The power consumption of the cloud-based Outlook (8%) and Excel (17%) was lower than their traditional counterparts. However, the power consumption of the cloud version of Word was 17% higher than its traditional equivalent. A third mixed access method was also measured for Word, which emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward to reduce energy consumption and GHG. Direct conversion from the standalone package into the cloud provision platform can now consider energy and GHG emissions at the software development and cloud service design stage using the methods described in this research.
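
    A minimal sketch of the three-stage accounting described above: the energy attributed to a task is the sum of data-center, network, and end-user-device consumption. The numbers below are illustrative placeholders, not the confidential measurements used in the study:

      # Illustrative placeholder numbers (Wh per task), not the study's measurements.
      def total_energy_wh(data_center_wh, network_wh, device_wh):
          """Total energy attributed to one task across the three measured stages."""
          return data_center_wh + network_wh + device_wh

      cloud_word = total_energy_wh(data_center_wh=1.2, network_wh=0.8, device_wh=3.0)
      standalone_word = total_energy_wh(data_center_wh=0.0, network_wh=0.0, device_wh=4.3)
      change = (cloud_word - standalone_word) / standalone_word
      print(f"cloud vs. standalone energy change for this toy case: {change:+.0%}")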

  14. Development of a State-Wide 3-D Seismic Tomography Velocity Model for California

    NASA Astrophysics Data System (ADS)

    Thurber, C. H.; Lin, G.; Zhang, H.; Hauksson, E.; Shearer, P.; Waldhauser, F.; Hardebeck, J.; Brocher, T.

    2007-12-01

    We report on progress towards the development of a state-wide tomographic model of the P-wave velocity for the crust and uppermost mantle of California. The dataset combines first arrival times from earthquakes and quarry blasts recorded on regional network stations and travel times of first arrivals from explosions and airguns recorded on profile receivers and network stations. The principal active-source datasets are Geysers-San Pablo Bay, Imperial Valley, Livermore, W. Mojave, Gilroy-Coyote Lake, Shasta region, Great Valley, Morro Bay, Mono Craters-Long Valley, PACE, S. Sierras, LARSE 1 and 2, Loma Prieta, BASIX, San Francisco Peninsula and Parkfield. Our beta-version model is coarse (uniform 30 km horizontal and variable vertical gridding) but is able to image the principal features in previous separate regional models for northern and southern California, such as the high-velocity subducting Gorda Plate, upper to middle crustal velocity highs beneath the Sierra Nevada and much of the Coast Ranges, the deep low-velocity basins of the Great Valley, Ventura, and Los Angeles, and a high- velocity body in the lower crust underlying the Great Valley. The new state-wide model has improved areal coverage compared to the previous models, and extends to greater depth due to the data at large epicentral distances. We plan a series of steps to improve the model. We are enlarging and calibrating the active-source dataset as we obtain additional picks from investigators and perform quality control analyses on the existing and new picks. We will also be adding data from more quarry blasts, mainly in northern California, following an identification and calibration procedure similar to Lin et al. (2006). Composite event construction (Lin et al., in press) will be carried out for northern California for use in conventional tomography. A major contribution of the state-wide model is the identification of earthquakes yielding arrival times at both the Northern California Seismic Network and the Southern California Seismic Network. These events are critical to the determination of the seismic velocity model in central California, in the former `no-mans-land' between the Northern and Southern California networks. Ultimately, a combination of active-source datasets, composite events, original catalog picks, and differential times from both waveform cross-correlation and catalog picks will be used in a double-difference tomography inversion.

  15. Cart'Eaux: an automatic mapping procedure for wastewater networks using machine learning and data mining

    NASA Astrophysics Data System (ADS)

    Bailly, J. S.; Delenne, C.; Chahinian, N.; Bringay, S.; Commandré, B.; Chaumont, M.; Derras, M.; Deruelle, L.; Roche, M.; Rodriguez, F.; Subsol, G.; Teisseire, M.

    2017-12-01

    In France, local government institutions must establish a detailed description of wastewater networks. The information should be available, but it remains fragmented (different formats held by different stakeholders) and incomplete. In the "Cart'Eaux" project, a multidisciplinary team, including an industrial partner, develops a global methodology using Machine Learning and Data Mining approaches applied to various types of large data to recover information in the aim of mapping urban sewage systems for hydraulic modelling. Deep-learning is first applied using a Convolution Neural Network to localize manhole covers on 5 cm resolution aerial RGB images. The detected manhole covers are then automatically connected using a tree-shaped graph constrained by industry rules. Based on a Delaunay triangulation, connections are chosen to minimize a cost function depending on pipe length, slope and possible intersection with roads or buildings. A stochastic version of this algorithm is currently being developed to account for positional uncertainty and detection errors, and generate sets of probable networks. As more information is required for hydraulic modeling (slopes, diameters, materials, etc.), text data mining is used to extract network characteristics from data posted on the Web or available through governmental or specific databases. Using an appropriate list of keywords, the web is scoured for documents which are saved in text format. The thematic entities are identified and linked to the surrounding spatial and temporal entities. The methodology is developed and tested on two towns in southern France. The primary results are encouraging: 54% of manhole covers are detected with few false detections, enabling the reconstruction of probable networks. The data mining results are still being investigated. It is clear at this stage that getting numerical values on specific pipes will be challenging. Thus, when no information is found, decision rules will be used to assign admissible numerical values to enable the final hydraulic modelling. Consequently, sensitivity analysis of the hydraulic model will be performed to take into account the uncertainty associated with each piece of information. Project funded by the European Regional Development Fund and the Occitanie Region.
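
    A minimal sketch of the network-reconstruction step described above: candidate pipes come from a Delaunay triangulation of detected manhole positions, and a tree-shaped network is extracted by minimising a cost that is reduced here to pipe length alone (the slope and road/building terms, and the stochastic variant, are omitted). Coordinates are illustrative:

      import numpy as np
      import networkx as nx
      from scipy.spatial import Delaunay

      # Illustrative manhole-cover positions (metres); real inputs come from CNN detections.
      points = np.array([[0, 0], [50, 10], [90, 5], [40, 60], [80, 70]], dtype=float)
      tri = Delaunay(points)

      graph = nx.Graph()
      for simplex in tri.simplices:            # each triangle contributes its three edges
          for i in range(3):
              a, b = int(simplex[i]), int(simplex[(i + 1) % 3])
              length = float(np.linalg.norm(points[a] - points[b]))
              graph.add_edge(a, b, weight=length)

      # Tree-shaped network minimising total pipe length over the Delaunay candidates.
      network = nx.minimum_spanning_tree(graph)
      print(sorted(network.edges(data="weight")))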

  16. Multilayer neural networks for reduced-rank approximation.

    PubMed

    Diamantaras, K I; Kung, S Y

    1994-01-01

    This paper is developed in two parts. First, the authors formulate the solution to the general reduced-rank linear approximation problem relaxing the invertibility assumption of the input autocorrelation matrix used by previous authors. The authors' treatment unifies linear regression, Wiener filtering, full rank approximation, auto-association networks, SVD and principal component analysis (PCA) as special cases. The authors' analysis also shows that two-layer linear neural networks with reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the generalized singular value decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary the linear two-layer backpropagation model with reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, the authors investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem) the authors are able to find that a sequential version of linear BP can extract the exact generalized eigenvector components. The advantage of this approach is that it's easier to update the model structure by adding one more unit or pruning one or more units when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units trained in such a way as to enforce orthogonality among the upper- and lower-layer weights. The authors call this the lateral orthogonalization network (LON) and show via theoretical analysis-and verify via simulation-that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion so that the components are extracted concurrently. Finally, the authors show the application of their results to the solution of the identification problem of systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility assumption of the input autocorrelation, therefore they can not be applied to this case.
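
    A minimal numpy sketch of the auto-association case discussed above: a two-layer linear network with k hidden units trained by least squares spans the same subspace as the top k principal components, so the optimal weights can be read directly off the SVD of the centered data matrix. The toy data are illustrative:

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 10))
      X -= X.mean(axis=0)                      # centred toy data

      k = 3                                    # number of hidden units / retained components
      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      W_encode = Vt[:k].T                      # input -> hidden weights (10 x k)
      W_decode = Vt[:k]                        # hidden -> output weights (k x 10)
      X_hat = X @ W_encode @ W_decode          # rank-k auto-associative reconstruction

      residual = np.linalg.norm(X - X_hat) ** 2
      discarded = float(np.sum(s[k:] ** 2))    # energy in the discarded singular values
      print(f"rank-{k} reconstruction error {residual:.2f} vs. discarded energy {discarded:.2f}")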

  17. Performance and Evaluation of the Global Modeling and Assimilation Office Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki; Errico, R. M.; Carvalho, D.

    2018-01-01

    The National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO) has spent more than a decade developing and implementing a global Observing System Simulation Experiment framework for use in evaluating both new observation types and the behavior of data assimilation systems. The NASA/GMAO OSSE has constantly evolved to reflect changes in the Gridpoint Statistical Interpolation data assimilation system, the Goddard Earth Observing System Model, Version 5 (GEOS-5), and the real-world observational network. Software and observational datasets for the GMAO OSSE are publicly available, along with a technical report. Substantial modifications have recently been made to the NASA/GMAO OSSE framework, including the character of synthetic observation errors, new instrument types, and more sophisticated atmospheric wind vectors. These improvements will be described, along with the overall performance of the current OSSE. Lessons learned from investigations into correlated errors and model error will be discussed.

  18. Evolving a lingua franca and associated software infrastructure for computational systems biology: the Systems Biology Markup Language (SBML) project.

    PubMed

    Hucka, M; Finney, A; Bornstein, B J; Keating, S M; Shapiro, B E; Matthews, J; Kovitz, B L; Schilstra, M J; Funahashi, A; Doyle, J C; Kitano, H

    2004-06-01

    Biologists are increasingly recognising that computational modelling is crucial for making sense of the vast quantities of complex experimental data that are now being collected. The systems biology field needs agreed-upon information standards if models are to be shared, evaluated and developed cooperatively. Over the last four years, our team has been developing the Systems Biology Markup Language (SBML) in collaboration with an international community of modellers and software developers. SBML has become a de facto standard format for representing formal, quantitative and qualitative models at the level of biochemical reactions and regulatory networks. In this article, we summarise the current and upcoming versions of SBML and our efforts at developing software infrastructure for supporting and broadening its use. We also provide a brief overview of the many SBML-compatible software tools available today.

  19. A catalog of selected compact radio sources for the construction of an extragalactic radio/optical reference frame (Argue et al. 1984): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This document describes the machine readable version of the Selected Compact Radio Source Catalog as it is currently being distributed from the international network of astronomical data centers. It is intended to enable users to read and process the computerized catalog. The catalog contains 233 strong, compact extragalactic radio sources having identified optical counterparts. The machine version contains the same data as the published catalog and includes source identifications, equatorial positions at J2000.0 and their mean errors, object classifications, visual magnitudes, redshift, 5-GHz flux densities, and comments.

  20. Comparison of fMRI data from passive listening and active-response story processing tasks in children

    PubMed Central

    Vannest, Jennifer J.; Karunanayaka, Prasanna R.; Altaye, Mekibib; Schmithorst, Vincent J.; Plante, Elena M.; Eaton, Kenneth J.; Rasmussen, Jerod M.; Holland, Scott K.

    2009-01-01

    Purpose To use functional MRI methods to visualize a network of auditory and language-processing brain regions associated with processing an aurally-presented story. We compare a passive listening (PL) story paradigm to an active-response (AR) version including on-line performance monitoring and a sparse acquisition technique. Materials/Methods Twenty children (ages 11-13) completed PL and AR story processing tasks. The PL version presented alternating 30-second blocks of stories and tones; the AR version presented story segments, comprehension questions, and 5-second tone sequences, with fMRI acquisitions between stimuli. fMRI data was analyzed using a general linear model approach and paired t-test identifying significant group activation. Results Both tasks showed activation in the primary auditory cortex, superior temporal gyrus bilaterally, and left inferior frontal gyrus. The AR task demonstrated more extensive activation, including dorsolateral prefrontal cortex and anterior/posterior cingulate cortex. Comparison of effect size in each paradigm showed a larger effect for the AR paradigm in a left inferior frontal ROI. Conclusion Activation patterns for story processing in children are similar in passive listening and active-response tasks. Increases in extent and magnitude of activation in the AR task are likely associated with memory and attention resources engaged across acquisition intervals. PMID:19306445

  1. Comparison of fMRI data from passive listening and active-response story processing tasks in children.

    PubMed

    Vannest, Jennifer J; Karunanayaka, Prasanna R; Altaye, Mekibib; Schmithorst, Vincent J; Plante, Elena M; Eaton, Kenneth J; Rasmussen, Jerod M; Holland, Scott K

    2009-04-01

    To use functional MRI (fMRI) methods to visualize a network of auditory and language-processing brain regions associated with processing an aurally-presented story. We compare a passive listening (PL) story paradigm to an active-response (AR) version including online performance monitoring and a sparse acquisition technique. Twenty children (ages 11-13 years) completed PL and AR story processing tasks. The PL version presented alternating 30-second blocks of stories and tones; the AR version presented story segments, comprehension questions, and 5-second tone sequences, with fMRI acquisitions between stimuli. fMRI data was analyzed using a general linear model approach and paired t-test identifying significant group activation. Both tasks showed activation in the primary auditory cortex, superior temporal gyrus bilaterally, and left inferior frontal gyrus (IFG). The AR task demonstrated more extensive activation, including the dorsolateral prefrontal cortex and anterior/posterior cingulate cortex. Comparison of effect size in each paradigm showed a larger effect for the AR paradigm in a left inferior frontal region-of-interest (ROI). Activation patterns for story processing in children are similar in PL and AR tasks. Increases in extent and magnitude of activation in the AR task are likely associated with memory and attention resources engaged across acquisition intervals.

  2. What's new in the Atmospheric Model Evaluation Tool (AMET) version 1.3

    EPA Science Inventory

    A new version of the Atmospheric Model Evaluation Tool (AMET) has been released. The new version of AMET, version 1.3 (AMETv1.3), contains a number of updates and changes from the previous version of AMET (v1.2) released in 2012. First, the Perl scripts used in the previous ve...

  3. Opinion formation on multiplex scale-free networks

    NASA Astrophysics Data System (ADS)

    Nguyen, Vu Xuan; Xiao, Gaoxi; Xu, Xin-Jian; Li, Guoqi; Wang, Zhen

    2018-01-01

    Most individuals, if not all, live in various social networks. The formation of opinion systems is an outcome of social interactions and information propagation occurring in such networks. We study the opinion formation with a new rule of pairwise interactions in the novel version of the well-known Deffuant model on multiplex networks composed of two layers, each of which is a scale-free network. It is found that in a duplex network composed of two identical layers, the presence of the multiplexity helps either diminish or enhance opinion diversity depending on the relative magnitudes of tolerance ranges characterizing the degree of openness/tolerance on both layers: there is a steady separation between different regions of tolerance range values on two network layers where multiplexity plays two different roles, respectively. Additionally, the two critical tolerance ranges follow a one-sum rule; that is, each of the layers reaches a complete consensus only if the sum of the tolerance ranges on the two layers is greater than a constant approximately equaling 1, the double of the critical bound on a corresponding isolated network. A further investigation of the coupling between constituent layers quantified by a link overlap parameter reveals that as the layers are loosely coupled, the two opinion systems co-evolve independently, but when the inter-layer coupling is sufficiently strong, a monotonic behavior is observed: an increase in the tolerance range of a layer causes a decline in the opinion diversity on the other layer regardless of the magnitudes of tolerance ranges associated with the layers in question.
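
    A minimal sketch of the bounded-confidence pairwise update underlying the model above (a Deffuant-style rule on a single layer; the multiplex version applies such updates on each layer of the duplex network). The tolerance d, convergence parameter mu, and the complete-graph stand-in for a scale-free layer are illustrative assumptions:

      import random

      def deffuant_step(opinions, edges, d=0.3, mu=0.5):
          """One pairwise update: interact only if opinions are within tolerance d."""
          i, j = random.choice(edges)
          if abs(opinions[i] - opinions[j]) < d:
              delta = mu * (opinions[j] - opinions[i])
              opinions[i] += delta
              opinions[j] -= delta

      random.seed(1)
      n = 50
      opinions = [random.random() for _ in range(n)]
      edges = [(i, j) for i in range(n) for j in range(i + 1, n)]  # complete-graph stand-in
      for _ in range(20000):
          deffuant_step(opinions, edges)
      print("final opinions span", round(min(opinions), 2), "to", round(max(opinions), 2))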

  4. Prostate cancer - treatment

    MedlinePlus

    ... usually painless. Treatment is done in a radiation oncology center that is usually connected to a hospital. ... Cancer Network website. NCCN clinical practice guidelines in oncology (NCCN guidelines): prostate cancer. Version 2.2017. www. ...

  5. Hepatocellular carcinoma

    MedlinePlus

    ... JH, Kastan MB, Tepper JE, eds. Abeloff's Clinical Oncology . 5th ed. Philadelphia, PA: Elsevier Saunders; 2014:chap ... Cancer Network website. NCCN clinical practice guidelines in oncology: hepatobiliary cancers. Version 3.2017. www.nccn.org/ ...

  6. Pancreatic cancer

    MedlinePlus

    ... JH, Kastan MB, Tepper JE, eds. Abeloff's Clinical Oncology . 5th ed. Philadelphia, PA: Elsevier Saunders; 2014:chap ... Cancer Network website. NCCN clinical practice guidelines in oncology: pancreatic adenocarcinoma. Version 1.2018. www.nccn.org/ ...

  7. Deep nets vs expert designed features in medical physics: An IMRT QA case study.

    PubMed

    Interian, Yannet; Rideout, Vincent; Kearney, Vasant P; Gennatas, Efstathios; Morin, Olivier; Cheung, Joey; Solberg, Timothy; Valdes, Gilmer

    2018-03-30

    The purpose of this study was to compare the performance of Deep Neural Networks against a technique designed by domain experts in the prediction of gamma passing rates for Intensity Modulated Radiation Therapy Quality Assurance (IMRT QA). A total of 498 IMRT plans across all treatment sites were planned in Eclipse version 11 and delivered using a dynamic sliding window technique on Clinac iX or TrueBeam Linacs. Measurements were performed using a commercial 2D diode array, and passing rates for 3%/3 mm local dose/distance-to-agreement (DTA) were recorded. Separately, fluence maps calculated for each plan were used as inputs to a convolution neural network (CNN). The CNNs were trained to predict IMRT QA gamma passing rates using TensorFlow and Keras. A set of model architectures, inspired by the convolutional blocks of the VGG-16 ImageNet model, were constructed and implemented. Synthetic data, created by rotating and translating the fluence maps during training, was created to boost the performance of the CNNs. Dropout, batch normalization, and data augmentation were utilized to help train the model. The performance of the CNNs was compared to a generalized Poisson regression model, previously developed for this application, which used 78 expert designed features. Deep Neural Networks without domain knowledge achieved comparable performance to a baseline system designed by domain experts in the prediction of 3%/3 mm Local gamma passing rates. An ensemble of neural nets resulted in a mean absolute error (MAE) of 0.70 ± 0.05 and the domain expert model resulted in a 0.74 ± 0.06. Convolutional neural networks (CNNs) with transfer learning can predict IMRT QA passing rates by automatically designing features from the fluence maps without human expert supervision. Predictions from CNNs are comparable to a system carefully designed by physicist experts. © 2018 American Association of Physicists in Medicine.
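
    A minimal tf.keras sketch of the approach above: a small convolutional network regresses a gamma passing rate directly from a fluence map. The layer sizes, input shape, and random toy data are illustrative; the study used VGG-inspired blocks, data augmentation, transfer learning, and an ensemble:

      import numpy as np
      import tensorflow as tf

      def build_model(input_shape=(64, 64, 1)):
          """Small CNN regressing a single passing rate from a fluence map."""
          return tf.keras.Sequential([
              tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same",
                                     input_shape=input_shape),
              tf.keras.layers.MaxPooling2D(),
              tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
              tf.keras.layers.GlobalAveragePooling2D(),
              tf.keras.layers.Dense(1),
          ])

      model = build_model()
      model.compile(optimizer="adam", loss="mae")                 # MAE, as reported above

      fluence = np.random.rand(32, 64, 64, 1).astype("float32")   # toy fluence maps
      passing = (95 + 5 * np.random.rand(32)).astype("float32")   # toy passing rates (%)
      model.fit(fluence, passing, epochs=2, batch_size=8, verbose=0)
      print(model.predict(fluence[:2], verbose=0).ravel())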

  8. Distorted Character Recognition Via An Associative Neural Network

    NASA Astrophysics Data System (ADS)

    Messner, Richard A.; Szu, Harold H.

    1987-03-01

    The purpose of this paper is two-fold. First, it is intended to provide some preliminary results of a character recognition scheme which has foundations in on-going neural network architecture modeling, and secondly, to apply some of the neural network results in a real application area where thirty years of effort has had little effect on providing the machine with an ability to recognize distorted objects within the same object class. It is the author's belief that the time is ripe to start applying in earnest the results of over twenty years of effort in neural modeling to some of the more difficult problems which seem so hard to solve by conventional means. The character recognition scheme proposed utilizes a preprocessing stage which performs a 2-dimensional Walsh transform of an input Cartesian image field, then sequency filters this spectrum into three feature bands. Various features are then extracted and organized into three sets of feature vectors. These vector patterns are then stored and recalled associatively. Two possible associative neural memory models are proposed for further investigation. The first is an outer-product linear matrix associative memory with a threshold function controlling the strength of the output pattern (similar to Kohonen's crosscorrelation approach [1]). The second approach is based upon a modified version of Grossberg's neural architecture [2] which provides better self-organizing properties due to its adaptive nature. Preliminary results of the sequency filtering and feature extraction preprocessing stage and discussion about the use of the proposed neural architectures are included.
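
    A minimal numpy sketch of the outer-product associative memory mentioned above: patterns are stored as a sum of outer products and recalled by multiplying a distorted probe by the memory matrix and thresholding the result. Bipolar toy vectors stand in for the Walsh-domain feature vectors described in the paper:

      import numpy as np

      rng = np.random.default_rng(0)
      patterns = rng.choice([-1.0, 1.0], size=(3, 32))     # three stored bipolar feature vectors
      M = sum(np.outer(p, p) for p in patterns)            # outer-product (Hebbian) storage
      np.fill_diagonal(M, 0.0)

      probe = patterns[1].copy()
      probe[:6] *= -1                                      # distort part of the pattern
      recalled = np.sign(M @ probe)                        # threshold the weighted sum
      overlap = int(recalled @ patterns[1])
      print(f"overlap with the stored pattern: {overlap} of {patterns.shape[1]}")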

  9. A neural network model of semantic memory linking feature-based object representation and words.

    PubMed

    Cuppini, C; Magosso, E; Ursino, M

    2009-06-01

    Recent theories in cognitive neuroscience suggest that semantic memory is a distributed process, which involves many cortical areas and is based on a multimodal representation of objects. The aim of this work is to extend a previous model of object representation to realize a semantic memory, in which sensory-motor representations of objects are linked with words. The model assumes that each object is described as a collection of features, coded in different cortical areas via a topological organization. Features in different objects are segmented via gamma-band synchronization of neural oscillators. The feature areas are further connected with a lexical area, devoted to the representation of words. Synapses among the feature areas, and among the lexical area and the feature areas are trained via a time-dependent Hebbian rule, during a period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from acoustic inputs), can correctly associate objects with words and segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories, and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits).
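
    A minimal sketch of the Hebbian linkage described above: when a word unit and the feature units describing an object are active together, the connecting weights grow in proportion to their co-activation, so that at retrieval the word reactivates its feature pattern. The dimensions, learning rate, and object-feature assignments are illustrative, and the oscillatory, time-dependent aspects of the full model are omitted:

      import numpy as np

      n_words, n_features, eta = 3, 12, 0.5
      W = np.zeros((n_words, n_features))                   # lexical-to-feature synapses

      # Feature indices active for each object (illustrative assignments).
      objects = {0: [0, 1, 2, 5], 1: [3, 4, 5, 9], 2: [6, 7, 8, 11]}
      for word, feats in objects.items():
          word_vec = np.eye(n_words)[word]                  # word unit active during training
          feat_vec = np.zeros(n_features)
          feat_vec[feats] = 1.0                             # object's features active together
          W += eta * np.outer(word_vec, feat_vec)           # Hebbian co-activation rule

      retrieved = np.nonzero(W[1] > 0)[0]
      print("features reactivated by word 1:", retrieved.tolist())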

  10. Development of a robust analytical framework for assessing landbird trends, dynamics and relationships with environmental covariates in the North Coast and Cascades Network

    USGS Publications Warehouse

    Ray, Chris; Saracco, James; Jenkins, Kurt J.; Huff, Mark; Happe, Patricia J.; Ransom, Jason I.

    2017-01-01

    During 2015-2016, we completed development of a new analytical framework for landbird population monitoring data from the National Park Service (NPS) North Coast and Cascades Inventory and Monitoring Network (NCCN). This new tool for analysis combines several recent advances in modeling population status and trends using point-count data and is designed to supersede the approach previously slated for analysis of trends in the NCCN and other networks, including the Sierra Nevada Network (SIEN). Advances supported by the new model-based approach include 1) the use of combined data on distance and time of detection to estimate detection probability without assuming perfect detection at zero distance, 2) seamless accommodation of variation in sampling effort and missing data, and 3) straightforward estimation of the effects of downscaled climate and other local habitat characteristics on spatial and temporal trends in landbird populations. No changes in the current field protocol are necessary to facilitate the new analyses. We applied several versions of the new model to data from each of 39 species recorded in the three mountain parks of the NCCN, estimating trends and climate relationships for each species during 2005-2014. Our methods and results are also reported in a manuscript in revision for the journal Ecosphere (hereafter, Ray et al.). Here, we summarize the methods and results outlined in depth by Ray et al., discuss benefits of the new analytical framework, and provide recommendations for its application to synthetic analyses of long-term data from the NCCN and SIEN. All code necessary for implementing the new analyses is provided within the Appendices to this report, in the form of fully annotated scripts written in the open-access programming languages R and JAGS.

  11. Sensing Models and Sensor Network Architectures for Transport Infrastructure Monitoring in Smart Cities

    NASA Astrophysics Data System (ADS)

    Simonis, Ingo

    2015-04-01

    Transport infrastructure monitoring and analysis is one of the focus areas in the context of smart cities. With the growing number of people moving into densely populated urban metro areas, precise tracking of moving people and goods is the basis for sound decision-making and future planning. With the goal of defining optimal extensions and modifications to existing transport infrastructures, multi-modal transport has to be monitored and analysed. This process is performed on the basis of sensor networks that combine a variety of sensor models, types, and deployments within the area of interest. Multi-generation networks, consisting of a number of sensor types and versions, are causing further challenges for the integration and processing of sensor observations. These challenges are not getting any smaller with the development of the Internet of Things, which brings promising opportunities, but is currently stuck in a type of protocol war between big industry players from both the hardware and network infrastructure domain. In this paper, we will highlight how the OGC suite of standards, with the Sensor Web standards developed by the Sensor Web Enablement Initiative together with the latest developments by the Sensor Web for Internet of Things community, can be applied to the monitoring and improvement of transport infrastructures. Sensor Web standards have been applied in the past to purely technical domains, but need to be broadened now in order to meet new challenges. Only cross-domain approaches will make it possible to develop satisfactory transport infrastructure solutions that take into account requirements coming from a variety of sectors such as tourism, administration, the transport industry, emergency services, or private individuals. The goal is the development of interoperable components that can be easily integrated within data infrastructures and follow well-defined information models to allow robust processing.

  12. Modeling the Inhomogeneous Response of Steady and Transient Flows of Entangled Micellar Solutions

    NASA Astrophysics Data System (ADS)

    McKinley, Gareth

    2008-03-01

    Surfactant molecules can self-assemble in solution into long flexible structures known as wormlike micelles. These structures entangle, forming a viscoelastic network similar to those in entangled polymer melts and solutions. However, in contrast to 'inert' polymeric networks, wormlike micelles continuously break and reform, leading to an additional relaxation mechanism and the name 'living polymers'. Observations in both classes of entangled fluids have shown that steady and transient shearing flows of these solutions exhibit spatial inhomogeneities such as 'shear bands' at sufficiently large applied strains. In the present work, we investigate the dynamical response of a class of two-species elastic network models which can capture, in a self-consistent manner, the creation and destruction of elastically-active network segments, as well as diffusive coupling between the microstructural conformations and the local state of stress in regions with large spatial gradients of local deformation. These models incorporate a discrete version of the micellar breakage and reforming dynamics originally proposed by Cates and capture, at least qualitatively, non-affine tube deformation and chain disentanglement. The 'flow curve' of stress versus apparent shear rate obtained under an assumption of homogeneous deformation is non-monotonic, and linear stability analysis shows that the region of non-monotonic response is unstable. Calculation of the full inhomogeneous flow field results in localized shear bands that grow linearly in extent across the gap as the apparent shear rate increases. Time-dependent calculations in step strain, large amplitude oscillatory shear (LAOS) and in start-up of steady shear flow show that the velocity profile in the gap and the total stress measured at the bounding surfaces are coupled and evolve in a complex non-monotonic manner as the shear bands develop and propagate.
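
    A minimal numerical sketch of the homogeneous 'flow curve' argument above, using an illustrative Johnson-Segalman-like constitutive relation (not the two-species micellar model itself): the branch where the stress decreases with increasing shear rate is the non-monotonic region that linear stability marks as unstable, where shear bands form:

      import numpy as np

      # Illustrative constitutive curve: sigma(rate) = rate/(1 + rate^2) + eta_s * rate.
      eta_s = 0.05
      rate = np.linspace(0.01, 10.0, 2000)
      sigma = rate / (1.0 + rate ** 2) + eta_s * rate

      dsigma = np.gradient(sigma, rate)
      unstable = rate[dsigma < 0]                 # decreasing branch of the flow curve
      print(f"unstable (shear-banding) branch spans shear rates "
            f"{unstable.min():.2f} to {unstable.max():.2f}")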

  13. Quantum Bayesian networks with application to games displaying Parrondo's paradox

    NASA Astrophysics Data System (ADS)

    Pejic, Michael

    Bayesian networks and their accompanying graphical models are widely used for prediction and analysis across many disciplines. We will reformulate these in terms of linear maps. This reformulation will suggest a natural extension, which we will show is equivalent to standard textbook quantum mechanics. Therefore, this extension will be termed quantum. However, the term quantum should not be taken to imply this extension is necessarily only of utility in situations traditionally thought of as in the domain of quantum mechanics. In principle, it may be employed in any modelling situation, say forecasting the weather or the stock market; it is up to experiment to determine if this extension is useful in practice. Even restricting to the domain of quantum mechanics, with this new formulation the advantages of Bayesian networks can be maintained for models incorporating quantum and mixed classical-quantum behavior. The use of these will be illustrated by various basic examples. Parrondo's paradox refers to the situation where two multi-round games with a fixed winning criterion, both with probability greater than one-half for one player to win, are combined. Using a possibly biased coin to determine the rule to employ for each round, paradoxically, the previously losing player now wins the combined game with probability greater than one-half. Using the extended Bayesian networks, we will formulate and analyze classical observed, classical hidden, and quantum versions of a game that displays this paradox, finding bounds for the discrepancy from naive expectations for the occurrence of the paradox. A quantum paradox inspired by Parrondo's paradox will also be analyzed. We will prove a bound for the discrepancy from naive expectations for this paradox as well. Games involving quantum walks that achieve this bound will be presented.
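
    A minimal simulation sketch of the classical (observed) Parrondo setup discussed above, using the standard textbook games rather than the specific games analysed in the thesis: game A is a slightly losing coin flip, game B depends on whether the current capital is divisible by 3, and randomly alternating the two turns losing games into a winning combination on average:

      import random

      EPS = 0.005

      def play(game, capital):
          """One round of game A (biased coin) or game B (capital-dependent coin)."""
          if game == "A":
              p = 0.5 - EPS
          else:
              p = (0.1 - EPS) if capital % 3 == 0 else (0.75 - EPS)
          return capital + (1 if random.random() < p else -1)

      def average_final_capital(strategy, rounds=500, trials=2000):
          total = 0
          for _ in range(trials):
              capital = 0
              for _ in range(rounds):
                  game = strategy if strategy in ("A", "B") else random.choice("AB")
                  capital = play(game, capital)
              total += capital
          return total / trials

      random.seed(0)
      for s in ("A", "B", "random"):
          print(s, round(average_final_capital(s), 2))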

  14. CBEO:N, Chesapeake Bay Environmental Observatory as a Cyberinfrastructure Node

    NASA Astrophysics Data System (ADS)

    Zaslavsky, I.; Piasecki, M.; Whitenack, T.; Ball, W. P.; Murphy, R.

    2008-12-01

    Chesapeake Bay Environmental Observatory (CBEO) is an NSF-supported project focused on studying hypoxia in Chesapeake Bay using advanced cyberinfrastructure (CI) technologies. The project is organized around four concurrent and interacting activities: 1) CBEO:S provides science and management context for the use of CI technologies, focusing on hypoxia and its non-linear dynamics as affected by management and climate; 2) CBEO:T constructs a locally-accessible CBEO test bed prototype centered on spatio-temporal interpolation and advanced querying of model runs; 3) CBEO:N incorporates the test bed CI into national environmental observation networks, and 4) CBEO:E develops education and outreach components of the project that translate observational science for public consumption. CBEO:N activities, which are the focus of this paper, are four-fold: - constructing an online project portal to enable researchers to publish, discover, query, visualize and integrate project-related datasets of different types. The portal is based on the technologies developed within the GEON (the Geosciences Network) project, and has established the CBEO project data server as part of the GEON network of servers; * developing a CBEO node within the WATERS network, taking advantage of the CUAHSI Hydrologic Information System (HIS) Server technology that supports online publication of observation data as web services, and ontology-assisted data discovery; *developing new data structures and metadata in order to describe water quality observational data, and model run output, obtained for the Chesapeake Bay area, using data structures adopted and modified from the Observations Data Model of CUAHSI HIS; * prototyping CBEO tools that can be re-used through the portal, in particular implementing a portal version of R-based spatial interpolation tools. The paper describes recent accomplishments in these four development areas, and demonstrates how CI approaches transform research and data sharing in environmental observing systems.

  15. rSNPBase 3.0: an updated database of SNP-related regulatory elements, element-gene pairs and SNP-based gene regulatory networks.

    PubMed

    Guo, Liyuan; Wang, Jing

    2018-01-04

    Here, we present the updated rSNPBase 3.0 database (http://rsnp3.psych.ac.cn), which provides human SNP-related regulatory elements, element-gene pairs and SNP-based regulatory networks. This database is the updated version of the SNP regulatory annotation databases rSNPBase and rVarBase. In comparison to the last two versions, there are both structural and data adjustments in rSNPBase 3.0: (i) The most significant new feature is the expansion of analysis scope from SNP-related regulatory elements to include regulatory element-target gene pairs (E-G pairs), so that it can provide SNP-based gene regulatory networks. (ii) Web function was modified according to data content and a new network search module is provided in rSNPBase 3.0 in addition to the previous regulatory SNP (rSNP) search module. The two search modules support data query for detailed information (related elements, element-gene pairs, and other extended annotations) on specific SNPs and SNP-related graphic networks constructed by interacting transcription factors (TFs), miRNAs and genes. (iii) The type of regulatory elements was modified and enriched. To the best of our knowledge, the updated rSNPBase 3.0 is the first data tool that supports SNP functional analysis from a regulatory network perspective; it will provide both a comprehensive understanding and concrete guidance for SNP-related regulatory studies. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. rSNPBase 3.0: an updated database of SNP-related regulatory elements, element-gene pairs and SNP-based gene regulatory networks

    PubMed Central

    2018-01-01

    Abstract Here, we present the updated rSNPBase 3.0 database (http://rsnp3.psych.ac.cn), which provides human SNP-related regulatory elements, element-gene pairs and SNP-based regulatory networks. This database is the updated version of the SNP regulatory annotation databases rSNPBase and rVarBase. In comparison to the last two versions, there are both structural and data adjustments in rSNPBase 3.0: (i) The most significant new feature is the expansion of analysis scope from SNP-related regulatory elements to include regulatory element–target gene pairs (E–G pairs), so that it can provide SNP-based gene regulatory networks. (ii) Web function was modified according to data content and a new network search module is provided in rSNPBase 3.0 in addition to the previous regulatory SNP (rSNP) search module. The two search modules support data query for detailed information (related elements, element-gene pairs, and other extended annotations) on specific SNPs and SNP-related graphic networks constructed by interacting transcription factors (TFs), miRNAs and genes. (iii) The type of regulatory elements was modified and enriched. To the best of our knowledge, the updated rSNPBase 3.0 is the first data tool that supports SNP functional analysis from a regulatory network perspective; it will provide both a comprehensive understanding and concrete guidance for SNP-related regulatory studies. PMID:29140525

  17. A new algorithm to construct phylogenetic networks from trees.

    PubMed

    Wang, J

    2014-03-06

    Developing appropriate methods for constructing phylogenetic networks from tree sets is an important problem, and much research is currently being undertaken in this area. BIMLR is an algorithm that constructs phylogenetic networks from tree sets. The algorithm can construct a much simpler network than other available methods. Here, we introduce an improved version of the BIMLR algorithm, QuickCass. QuickCass changes the selection strategy of the labels of leaves below the reticulate nodes, i.e., the nodes with an indegree of at least 2 in BIMLR. We show that QuickCass can construct simpler phylogenetic networks than BIMLR. Furthermore, we show that QuickCass is a polynomial-time algorithm when the output network that is constructed by QuickCass is binary.

  18. Mission Data System Java Edition Version 7

    NASA Technical Reports Server (NTRS)

    Reinholtz, William K.; Wagner, David A.

    2013-01-01

    The Mission Data System framework defines closed-loop control system abstractions from State Analysis including interfaces for state variables, goals, estimators, and controllers that can be adapted to implement a goal-oriented control system. The framework further provides an execution environment that includes a goal scheduler, execution engine, and fault monitor that support the expression of goal network activity plans. Using these frameworks, adapters can build a goal-oriented control system where activity coordination is verified before execution begins (plan time), and continually during execution. Plan failures including violations of safety constraints expressed in the plan can be handled through automatic re-planning. This version optimizes a number of key interfaces and features to minimize dependencies, performance overhead, and improve reliability. Fault diagnosis and real-time projection capabilities are incorporated. This version enhances earlier versions primarily through optimizations and quality improvements that raise the technology readiness level. Goals explicitly constrain system states over explicit time intervals to eliminate ambiguity about intent, as compared to command-oriented control that only implies persistent intent until another command is sent. A goal network scheduling and verification process ensures that all goals in the plan are achievable before starting execution. Goal failures at runtime can be detected (including predicted failures) and handled by adapted response logic. Responses can include plan repairs (try an alternate tactic to achieve the same goal), goal shedding, ignoring the fault, cancelling the plan, or safing the system.

  19. Efficient Algorithmic Frameworks via Structural Graph Theory

    DTIC Science & Technology

    2016-10-28

    centrally planned solution. Policy recommendation: Given a socioeconomic game among multiple parties (countries, armies, political parties, terrorist...etc.). 2 Graph Structure of Network Creation Games We completed the final versions of two of our papers about the graph structure inherent in...network creation games ”, which appeared in the following venues: Erik D. Demaine, MohammadTaghi Hajiaghayi, Hamid Mahini, and Morteza Zadi- moghaddam, “The

  20. Science and Technology Undergraduate Students' Use of the Internet, Cell Phones and Social Networking Sites to Access Library Information

    ERIC Educational Resources Information Center

    Salisbury, Lutishoor; Laincz, Jozef; Smith, Jeremy J.

    2012-01-01

    Many academic libraries and publishers have developed mobile-optimized versions of their web sites and catalogs. Almost all database vendors and major journal publishers have provided a way to connect to their resources via the Internet and the mobile web. In light of this pervasive use of the Internet, mobile devices and social networking, this…
