Sample records for validation network ASRVN

  1. Atmospheric correction at AERONET locations: A new science and validation data set

    USGS Publications Warehouse

    Wang, Y.; Lyapustin, A.I.; Privette, J.L.; Morisette, J.T.; Holben, B.

    2009-01-01

    This paper describes an Aerosol Robotic Network (AERONET)-based Surface Reflectance Validation Network (ASRVN) and its data set of spectral surface bidirectional reflectance and albedo based on Moderate Resolution Imaging Spectroradiometer (MODIS) TERRA and AQUA data. The ASRVN is an operational data collection and processing system. It receives 50 × 50 km² subsets of MODIS level 1B (L1B) data from the MODIS adaptive processing system and AERONET aerosol and water-vapor information. Then, it performs an atmospheric correction (AC) for about 100 AERONET sites based on accurate radiative-transfer theory with complex quality control of the input data. The ASRVN processing software consists of an L1B data gridding algorithm, a new cloud-mask (CM) algorithm based on a time-series analysis, and an AC algorithm using ancillary AERONET aerosol and water-vapor data. The AC is achieved by fitting the MODIS top-of-atmosphere measurements, accumulated for a 16-day interval, with theoretical reflectance parameterized in terms of the coefficients of the Li Sparse-Ross Thick (LSRT) model of the bidirectional reflectance factor (BRF). The ASRVN takes several steps to ensure high quality of results: 1) the filtering of opaque clouds by a CM algorithm; 2) the development of an aerosol filter to remove residual semitransparent and subpixel clouds, as well as cases with high inhomogeneity of aerosols in the processing area; 3) imposing the requirement of consistency of the new solution with the previously retrieved BRF and albedo; 4) rapid adjustment of the 16-day retrieval to surface changes using the last day of measurements; and 5) development of a seasonal backup spectral BRF database to increase data coverage. The ASRVN provides gapless or near-gapless coverage for the processing area. The gaps, caused by clouds, are filled most naturally with the latest solution for a given pixel. The ASRVN products include three parameters of the LSRT model (kL, kG, and kV), surface albedo
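The 16-day inversion described above is linear in the three LSRT coefficients, so it can be sketched as an ordinary least-squares fit. This is a minimal illustration, not the ASRVN code: the kernel values `f_geo` and `f_vol` would normally come from the Ross-Li kernel functions evaluated at each observation's sun/view geometry, and here they are synthetic placeholders.

```python
import numpy as np

# Hypothetical 16-day accumulation: one row per cloud-free observation.
rng = np.random.default_rng(0)
n_obs = 30
f_geo = rng.uniform(-0.2, 0.1, n_obs)   # geometric-optical kernel (placeholder)
f_vol = rng.uniform(0.0, 0.5, n_obs)    # volumetric kernel (placeholder)

k_true = np.array([0.05, 0.02, 0.03])   # "true" [kL, kG, kV] for the demo
A = np.column_stack([np.ones(n_obs), f_geo, f_vol])
brf = A @ k_true + rng.normal(0, 1e-4, n_obs)  # atmospherically corrected BRF

# Least-squares fit of the three LSRT coefficients over the 16-day window.
k_fit, *_ = np.linalg.lstsq(A, brf, rcond=None)
print(np.round(k_fit, 3))   # should recover k_true closely
```

With the coefficients in hand, BRF can be predicted for any geometry (e.g. the NBRF standard geometry mentioned in record 2) by evaluating the kernels there.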

  2. Atmospheric Correction at AERONET Locations: A New Science and Validation Data Set

    NASA Technical Reports Server (NTRS)

    Wang, Yujie; Lyapustin, Alexei; Privette, Jeffery L.; Morisette, Jeffery T.; Holben, Brent

    2008-01-01

    This paper describes an AERONET-based Surface Reflectance Validation Network (ASRVN) and its dataset of spectral surface bidirectional reflectance and albedo based on MODIS TERRA and AQUA data. The ASRVN is an operational data collection and processing system. It receives 50x50 square kilometer subsets of MODIS L1B data from MODAPS and AERONET aerosol and water vapor information. Then it performs an accurate atmospheric correction for about 100 AERONET sites based on radiative transfer theory with high quality control of the input data. The ASRVN processing software consists of an L1B data gridding algorithm, a new cloud mask algorithm based on a time series analysis, and an atmospheric correction algorithm. The atmospheric correction is achieved by fitting the MODIS top-of-atmosphere measurements, accumulated for a 16-day interval, with theoretical reflectance parameterized in terms of the coefficients of the LSRT BRF model. The ASRVN takes several steps to ensure high quality of results: 1) a cloud mask algorithm filters opaque clouds; 2) an aerosol filter removes residual semi-transparent and sub-pixel clouds, as well as cases with high inhomogeneity of aerosols in the processing area; 3) the new solution is required to be consistent with the previously retrieved BRF and albedo; 4) the 16-day retrieval is rapidly adjusted to surface changes using the last day of measurements; and 5) a seasonal back-up spectral BRF database increases data coverage. The ASRVN provides gapless or near-gapless coverage for the processing area. The gaps, caused by clouds, are filled most naturally with the latest solution for a given pixel. The ASRVN products include three parameters of the LSRT model (kL, kG, and kV), surface albedo, NBRF (a normalized BRF computed for a standard viewing geometry, VZA=0 deg., SZA=45 deg.), and IBRF (instantaneous, or one-angle, BRF value derived from the last day of MODIS measurement for

  3. Assessment of Biases in MODIS Surface Reflectance Due to Lambertian Approximation

    NASA Technical Reports Server (NTRS)

    Wang, Yujie; Lyapustin, Alexei I.; Privette, Jeffrey L.; Cook, Robert B.; SanthanaVannan, Suresh K.; Vermote, Eric F.; Schaaf, Crystal

    2010-01-01

    Using MODIS data and the AERONET-based Surface Reflectance Validation Network (ASRVN), this work studies errors of MODIS atmospheric correction caused by the Lambertian approximation. On one hand, this approximation greatly simplifies the radiative transfer model, reduces the size of the look-up tables, and makes the operational algorithm faster. On the other hand, uncompensated atmospheric scattering caused by the Lambertian model systematically biases the results. For example, for a typical bowl-shaped bidirectional reflectance distribution function (BRDF), the derived reflectance is underestimated at high solar or view zenith angles, where the BRDF is high, and is overestimated at low zenith angles, where the BRDF is low. The magnitude of the biases grows with the amount of scattering in the atmosphere, i.e., at shorter wavelengths and at higher aerosol concentration. The slope of the regression of Lambertian surface reflectance vs. ASRVN bidirectional reflectance factor (BRF) is about 0.85 in the red and 0.6 in the green bands. This error propagates into the MODIS BRDF/albedo algorithm, slightly reducing the overall magnitude of reflectance and the anisotropy of the BRDF. This results in a small negative bias of spectral surface albedo. An assessment for the GSFC (Greenbelt, USA) validation site shows an albedo reduction of 0.004 in the near-infrared, 0.005 in the red, and 0.008 in the green MODIS bands.

  4. Network Security Validation Using Game Theory

    NASA Astrophysics Data System (ADS)

    Papadopoulou, Vicky; Gregoriades, Andreas

    Non-functional requirements (NFRs) such as network security have recently gained widespread attention in distributed information systems. Despite their importance, however, there is no systematic approach to validating these requirements given the complexity and uncertainty characterizing modern networks. Traditionally, network security requirements specification has been the result of a reactive process. This, however, limited the immunity of the distributed systems that depended on these networks. Security requirements specification needs a proactive approach. Network infrastructure is constantly under attack by hackers and malicious software that aim to break into computers. To combat these threats, network designers need sophisticated security validation techniques that will guarantee a minimum level of security for their future networks. This paper presents a game-theoretic approach to security requirements validation. An introduction to game theory is presented along with an example that demonstrates the application of the approach.

  5. Network testbed creation and validation

    DOEpatents

    Thai, Tan Q.; Urias, Vincent; Van Leeuwen, Brian P.; Watts, Kristopher K.; Sweeney, Andrew John

    2017-03-21

    Embodiments of network testbed creation and validation processes are described herein. A "network testbed" is a replicated environment used to validate a target network or an aspect of its design. Embodiments describe a network testbed that comprises virtual testbed nodes executed via a plurality of physical infrastructure nodes. The virtual testbed nodes utilize these hardware resources as a network "fabric," thereby enabling rapid configuration and reconfiguration of the virtual testbed nodes without requiring reconfiguration of the physical infrastructure nodes. Thus, in contrast to prior art solutions, which require that a tester manually build an emulated environment of physically connected network devices, embodiments receive or derive a target network description and build out a replica of this description using virtual testbed nodes executed via the physical infrastructure nodes. This process allows for the creation of very large (e.g., tens of thousands of network elements) and/or very topologically complex test networks.

  6. Network testbed creation and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thai, Tan Q.; Urias, Vincent; Van Leeuwen, Brian P.

    Embodiments of network testbed creation and validation processes are described herein. A "network testbed" is a replicated environment used to validate a target network or an aspect of its design. Embodiments describe a network testbed that comprises virtual testbed nodes executed via a plurality of physical infrastructure nodes. The virtual testbed nodes utilize these hardware resources as a network "fabric," thereby enabling rapid configuration and reconfiguration of the virtual testbed nodes without requiring reconfiguration of the physical infrastructure nodes. Thus, in contrast to prior art solutions, which require that a tester manually build an emulated environment of physically connected network devices, embodiments receive or derive a target network description and build out a replica of this description using virtual testbed nodes executed via the physical infrastructure nodes. This process allows for the creation of very large (e.g., tens of thousands of network elements) and/or very topologically complex test networks.

  7. Validating Large Scale Networks Using Temporary Local Scale Networks

    USDA-ARS?s Scientific Manuscript database

    The USDA NRCS Soil Climate Analysis Network and NOAA Climate Reference Networks are nationwide meteorological and land surface data networks with soil moisture measurements in the top layers of soil. There is considerable interest in scaling these point measurements to larger scales for validating ...

  8. Using Neural Networks for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Mattern, Duane L.; Jaw, Link C.; Guo, Ten-Huei; Graham, Ronald; McCoy, William

    1998-01-01

    This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.

  9. Statistically Validated Networks in Bipartite Complex Systems

    PubMed Central

    Tumminello, Michele; Miccichè, Salvatore; Lillo, Fabrizio; Piilo, Jyrki; Mantegna, Rosario N.

    2011-01-01

    Many complex systems present an intrinsic bipartite structure where elements of one set link to elements of the second set. In these complex systems, such as the system of actors and movies, elements of one set are qualitatively different from elements of the other set. The properties of these complex systems are typically investigated by constructing and analyzing a projected network on one of the two sets (for example the actor network or the movie network). Complex systems are often very heterogeneous in the number of relationships that the elements of one set establish with the elements of the other set, and this heterogeneity makes it very difficult to discriminate links of the projected network that merely reflect the system's heterogeneity from links relevant to unveiling the properties of the system. Here we introduce an unsupervised method to statistically validate each link of a projected network against a null hypothesis that takes into account system heterogeneity. We apply the method to a biological, an economic, and a social complex system. The method we propose is able to detect network structures which are very informative about the organization and specialization of the investigated systems, and identifies those relationships between elements of the projected network that cannot be explained simply by system heterogeneity. We also show that our method applies to bipartite systems in which different relationships might have a different qualitative nature, generating statistically validated networks in which this difference is preserved. PMID:21483858
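One concrete way such a heterogeneity-aware null hypothesis can be instantiated (consistent with the abstract's description, though the paper's exact formulation may differ) is a hypergeometric test on the co-occurrence count behind each projected link, followed by a multiple-comparison correction over all tested pairs. The numbers below are made up:

```python
from scipy.stats import hypergeom

# Bipartite example in the actors/movies spirit: N movies in total,
# actor A appears in n_a of them, actor B in n_b, and they co-star
# in n_ab movies. Under the null, A's movies are a random draw, so
# the co-occurrence count is hypergeometrically distributed.
N, n_a, n_b, n_ab = 1000, 50, 40, 10

# p-value: probability of n_ab or more co-occurrences by chance alone.
p = hypergeom.sf(n_ab - 1, N, n_a, n_b)
print(f"p = {p:.2e}")

# The projected link A-B enters the statistically validated network only
# if p survives a correction (e.g. Bonferroni or FDR) over all pairs.
n_pairs = 1000 * 999 // 2          # illustrative number of tested pairs
bonferroni_threshold = 0.01 / n_pairs
```

Here the expected overlap under the null is only n_a * n_b / N = 2 movies, so observing 10 is strong evidence of a genuine association rather than heterogeneity.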

  10. Construct Validation of Wenger's Support Network Typology.

    PubMed

    Szabo, Agnes; Stephens, Christine; Allen, Joanne; Alpass, Fiona

    2016-10-07

    The study aimed to validate Wenger's empirically derived support network typology using responses to the Practitioner Assessment of Network Type (PANT) in an older New Zealand population. The configuration of network types was tested across ethnic groups and in the total sample. Data (N = 872, mean age = 67 years, SD = 1.56 years) from the 2006 wave of the New Zealand Health, Work and Retirement study were analyzed using latent profile analysis. In addition, demographic differences among the emerging profiles were tested. Competing models were evaluated based on a range of fit criteria, which supported a five-profile solution. The "locally integrated," "community-focused," "local self-contained," "private-restricted," and "friend- and family-dependent" network types were identified as latent profiles underlying the data. There were no differences between Māori and non-Māori in the final profile configurations. However, Māori were more likely to report integrated network types. Findings confirm the validity of Wenger's network types. However, the level to which participants endorse accessibility of family, frequency of interactions, and community engagement can be influenced by sample and contextual characteristics. Future research using the PANT items should empirically verify and derive the social support network types rather than use a predefined scoring system. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved.
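The model-selection step described above (competing latent profile models compared on fit criteria) can be sketched with a Gaussian mixture as a simplified stand-in for latent profile analysis, using BIC as the fit criterion. This is an illustration only: dedicated LPA software imposes additional constraints and reports further indices, and the data here are synthetic with two planted profiles rather than five.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Two synthetic "support network" profiles over three indicator variables
# (e.g. family contact, community engagement, friend contact - hypothetical).
X = np.vstack([
    rng.normal([4.0, 1.0, 1.5], 0.5, size=(200, 3)),
    rng.normal([1.0, 4.0, 3.5], 0.5, size=(200, 3)),
])

# Fit competing models with 1..6 profiles and keep the BIC-optimal one.
bics = {k: GaussianMixture(k, random_state=0).fit(X).bic(X) for k in range(1, 7)}
best_k = min(bics, key=bics.get)
print(best_k)  # the two planted profiles should be recovered
```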

  11. iSAFT Protocol Validation Platform for On-Board Data Networks

    NASA Astrophysics Data System (ADS)

    Tavoularis, Antonis; Kollias, Vangelis; Marinis, Kostas

    2014-08-01

    iSAFT is an integrated, powerful HW/SW environment for the simulation, validation and monitoring of satellite/spacecraft on-board data networks, simultaneously supporting a wide range of protocols (RMAP, PTP, CCSDS Space Packet, TM/TC, CANopen, etc.) and network interfaces (SpaceWire, ECSS MIL-STD-1553, ECSS CAN). It is based on over 20 years of TELETEL's experience in protocol validation in the telecommunications and aeronautical sectors, and it has been fully re-engineered by TELETEL in cooperation with ESA and space primes to comply with space on-board industrial validation requirements (ECSS, EGSE, AIT, AIV, etc.). iSAFT is highly modular and expandable to support new network interfaces and protocols, and it is based on the powerful iSAFT graphical tool chain (Protocol Analyser/Recorder, TestRunner, Device Simulator, Traffic Generator, etc.).

  12. Citizen science networks in natural history and the collective validation of biodiversity data.

    PubMed

    Turnhout, Esther; Lawrence, Anna; Turnhout, Sander

    2016-06-01

    Biodiversity data are in increasing demand to inform policy and management. A substantial portion of these data is generated in citizen science networks. To ensure the quality of biodiversity data, standards and criteria for validation have been put in place. We used interviews and document analysis from the United Kingdom and The Netherlands to examine how data validation serves as a point of connection between the diverse people and practices in natural history citizen science networks. We found that rather than a unidirectional imposition of standards, validation was performed collectively. Specifically, it was enacted in ongoing circulations of biodiversity records between recorders and validators as they jointly negotiated the biodiversity that was observed and the validity of the records. These collective validation practices contributed to the citizen science character of natural history networks and tied these networks together. However, when biodiversity records were included in biodiversity-information initiatives on different policy levels and scales, the circulation of records diminished. These initiatives took on a more extractive mode of data use. Validation ceased to be collective, with important consequences for the natural history networks involved and citizen science more generally. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.

  13. Social Network Data Validity: The Example of the Social Network of Caregivers of Older Persons with Alzheimer-Type Dementia

    ERIC Educational Resources Information Center

    Carpentier, Normand

    2007-01-01

    This article offers reflection on the validity of relational data such as used in social network analysis. Ongoing research on the transformation of the support network of caregivers of persons with an Alzheimer-type disease provides the data to fuel the debate on the validity of participant report. More specifically, we sought to understand the…

  14. Development and validation of a survey to measure features of clinical networks.

    PubMed

    Brown, Bernadette Bea; Haines, Mary; Middleton, Sandy; Paul, Christine; D'Este, Catherine; Klineberg, Emily; Elliott, Elizabeth

    2016-09-30

    Networks of clinical experts are increasingly being implemented as a strategy to improve health care processes and outcomes and achieve change in the health system. Few are ever formally evaluated and, when this is done, not all networks are equally successful in their efforts. There is a need to formatively assess the strategic and operational management and leadership of networks to identify where functioning could be improved to maximise impact. This paper outlines the development and psychometric evaluation of an Internet survey to measure features of clinical networks and provides descriptive results from a sample of members of 19 diverse clinical networks responsible for evidence-based quality improvement across a large geographical region. Instrument development was based on: a review of published and grey literature; a qualitative study of clinical network members; a program logic framework; and consultation with stakeholders. The resulting domain structure was validated for a sample of 592 clinical network members using confirmatory factor analysis. Scale reliability was assessed using Cronbach's alpha. A summary score was calculated for each domain and aggregate level means and ranges are reported. The instrument was shown to have good construct validity across seven domains as demonstrated by a high level of internal consistency, and all Cronbach's α coefficients were equal to or above 0.75. In the survey sample of network members there was strong reported commitment and belief in network-led quality improvement initiatives, which were perceived to have improved quality of care (72.8 %) and patient outcomes (63.2 %). Network managers were perceived to be effective leaders and clinical co-chairs were perceived as champions for change. Perceived external support had the lowest summary score across the seven domains. This survey, which has good construct validity and internal reliability, provides a valid instrument to use in future research related to
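The internal-consistency statistic reported above (all Cronbach's α coefficients ≥ 0.75) is straightforward to compute from an item-score matrix. A minimal sketch with synthetic Likert-style data for one survey domain (the variable names and data are made up):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Four synthetic items driven by one underlying factor, mimicking a
# single internally consistent survey domain.
rng = np.random.default_rng(0)
trait = rng.normal(size=500)
items = np.column_stack([trait + rng.normal(0, 0.8, 500) for _ in range(4)])

alpha = cronbach_alpha(items)
print(round(alpha, 2))  # should clear the 0.75 bar reported in the paper
```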

  15. Neural Network-Based Sensor Validation for Turboshaft Engines

    NASA Technical Reports Server (NTRS)

    Moller, James C.; Litt, Jonathan S.; Guo, Ten-Huei

    1998-01-01

    Sensor failure detection, isolation, and accommodation using a neural network approach is described. An auto-associative neural network is configured to perform dimensionality reduction on the sensor measurement vector and provide estimated sensor values. The sensor validation scheme is applied in a simulation of the T700 turboshaft engine in closed loop operation. Performance is evaluated based on the ability to detect faults correctly and maintain stable and responsive engine operation. The set of sensor outputs used for engine control forms the network input vector. Analytical redundancy is verified by training networks of successively smaller bottleneck layer sizes. Training data generation and strategy are discussed. The engine maintained stable behavior in the presence of sensor hard failures. With proper selection of fault determination thresholds, stability was maintained in the presence of sensor soft failures.

  16. Gene network biological validity based on gene-gene interaction relevance.

    PubMed

    Gómez-Vela, Francisco; Díaz-Díaz, Norberto

    2014-01-01

    In recent years, gene networks have become one of the most useful tools for modeling biological processes. Many gene network inference algorithms have been developed as techniques for extracting knowledge from gene expression data. Ensuring the reliability of the inferred gene relationships is a crucial task in any study in order to prove that the algorithms used are precise. Usually, this validation process can be carried out using prior biological knowledge. The metabolic pathways stored in KEGG are one of the most widely used knowledge sources for analyzing relationships between genes. This paper introduces a new methodology, GeneNetVal, to assess the biological validity of gene networks based on the relevance of the gene-gene interactions stored in KEGG metabolic pathways. Hence, a complete conversion of KEGG pathways into a gene association network and a new matching distance based on gene-gene interaction relevance are proposed. The performance of GeneNetVal was established with three different experiments. Firstly, our proposal is tested in a comparative ROC analysis. Secondly, a randomness study is presented to show the behavior of GeneNetVal when the noise is increased in the input network. Finally, the ability of GeneNetVal to detect biological functionality of the network is shown.

  17. Rationality Validation of a Layered Decision Model for Network Defense

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Huaqiang; Alves-Foss, James; Zhang, Du

    2007-08-31

    We propose a cost-effective network defense strategy built on three decision layers: security policies, defense strategies, and real-time defense tactics for countering immediate threats. A layered decision model (LDM) can be used to capture this decision process. The LDM helps decision-makers gain insight into the hierarchical relationships among interconnected entities and decision types, and supports the selection of cost-effective defense mechanisms to safeguard computer networks. To be effective as a business tool, it is first necessary to validate the rationality of the model before applying it to real-world business cases. This paper describes our efforts in validating the LDM rationality through simulation.

  18. Correcting Evaluation Bias of Relational Classifiers with Network Cross Validation

    DTIC Science & Technology

    2010-01-01

    The report compares evaluation procedures for relational classification algorithms: simple random resampling (RRS), equal-instance random resampling (ERS), and network cross-validation (NCV). The first two draw potentially overlapping test sets, whereas the NCV procedure eliminates overlap between test sets altogether by sampling k disjoint test sets for evaluation.
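The contrast drawn in the report can be sketched as follows. Only the fragments above survive in this record, so the function names and fold construction below are illustrative rather than the report's actual procedure:

```python
import random

nodes = list(range(100))   # node IDs of a hypothetical network
k = 5

def random_resampling(nodes, k, test_size=20, seed=0):
    # RRS-style: k independently drawn test sets, which may overlap.
    rng = random.Random(seed)
    return [rng.sample(nodes, test_size) for _ in range(k)]

def ncv_folds(nodes, k, seed=0):
    # NCV-style: shuffle once, then partition into k disjoint test sets.
    rng = random.Random(seed)
    shuffled = nodes[:]
    rng.shuffle(shuffled)
    return [shuffled[i::k] for i in range(k)]

folds = ncv_folds(nodes, k)
print([len(f) for f in folds])                 # → [20, 20, 20, 20, 20]
print(len(set().union(*folds)) == len(nodes))  # → True: no overlap
```

Because every node appears in exactly one test set, test-set statistics are not correlated across folds, which is the bias the report attributes to overlapping resampling.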

  19. Verification and Validation Methodology of Real-Time Adaptive Neural Networks for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Gupta, Pramod; Loparo, Kenneth; Mackall, Dale; Schumann, Johann; Soares, Fola

    2004-01-01

    Recent research has shown that adaptive neural-network-based control systems are very effective in restoring stability and control of an aircraft in the presence of damage or failures. The application of an adaptive neural network within a flight-critical control system requires a thorough and proven process to ensure safe and proper flight operation. Unique testing tools have been developed as part of a process to perform verification and validation (V&V) of the real-time adaptive neural networks used in recent adaptive flight control systems and to evaluate the performance of the online-trained neural networks. The tools will help in certification from the FAA and in the successful deployment of neural-network-based adaptive controllers in safety-critical applications. The process to perform verification and validation is evaluated against a typical neural adaptive controller and the results are discussed.

  20. A Network for Standardized Ocean Color Validation Measurements

    NASA Technical Reports Server (NTRS)

    Zibordi, Giuseppe; Holben, Brent; Hooker, Stanford; Melin, Frederic; Berthon, Jean-Francois; Slutsker, Ilya; Giles, David; Vandemark, Doug; Feng, Hui; Rutledge, Ken

    2006-01-01

    The Aerosol Robotic Network (AERONET) was developed to support atmospheric studies at various scales with measurements from worldwide distributed autonomous sunphotometers [Holben et al. 1998]. AERONET has now extended its support to marine applications through the additional capability of measuring the radiance emerging from the sea with modified sun-photometers installed on offshore platforms like lighthouses, navigation aids, and oceanographic and oil towers. The functionality of this added network component, called AERONET-Ocean Color (AERONET-OC), has been verified at different sites and deployment structures over a four-year testing phase. Continuous or occasional deployment platforms (see Fig. 1) included: the Acqua Alta Oceanographic Tower (AAOT) of the Italian National Research Council in the northern Adriatic Sea since spring 2002; the Martha's Vineyard Coastal Observatory (MVCO) tower of the Woods Hole Oceanographic Institution in the Atlantic off the Massachusetts coast for different periods since spring 2004; the TOTAL Abu-Al-Bukhoosh oil Platform (AABP, shown through an artistic rendition in Fig. 1) in the Persian (Arabian) Gulf in fall 2004; the Gustaf Dalén Lighthouse Tower (GDLT) of the Swedish Maritime Administration in the Baltic Sea in summer 2005; and the platform at the Clouds and the Earth's Radiant Energy System (CERES) Ocean Validation Experiment (COVE) site located in the Atlantic Ocean off the Virginia coast since fall 2005. Data collected during the network testing phase confirm the capability of AERONET-OC to support the validation of marine optical remote sensing products through standardized measurements of normalized water-leaving radiance, LWN, and aerosol optical thickness, τa, at multiple coastal sites.

  1. BioNetCAD: design, simulation and experimental validation of synthetic biochemical networks

    PubMed Central

    Rialle, Stéphanie; Felicori, Liza; Dias-Lopes, Camila; Pérès, Sabine; El Atia, Sanaâ; Thierry, Alain R.; Amar, Patrick; Molina, Franck

    2010-01-01

    Motivation: Synthetic biology studies how to design and construct biological systems with functions that do not exist in nature. Biochemical networks, although easier to control, have been used less frequently than genetic networks as a base on which to build a synthetic system. To date, no clear engineering principles exist for designing such cell-free biochemical networks. Results: We describe a methodology for the construction of synthetic biochemical networks based on three main steps: design, simulation and experimental validation. We developed BioNetCAD to help users go through these steps. BioNetCAD allows users to design abstract networks that can be implemented using CompuBioTicDB, a database of parts for synthetic biology. BioNetCAD also enables simulation with the HSim software and with classical ordinary differential equations (ODEs). We demonstrate with a case study that BioNetCAD can rationalize and reduce further experimental validation during the construction of a biochemical network. Availability and implementation: BioNetCAD is freely available at http://www.sysdiag.cnrs.fr/BioNetCAD. It is implemented in Java and supported on MS Windows. CompuBioTicDB is freely accessible at http://compubiotic.sysdiag.cnrs.fr/ Contact: stephanie.rialle@sysdiag.cnrs.fr; franck.molina@sysdiag.cnrs.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20628073

  2. An experimentally validated network of nine haematopoietic transcription factors reveals mechanisms of cell state stability

    PubMed Central

    Schütte, Judith; Wang, Huange; Antoniou, Stella; Jarratt, Andrew; Wilson, Nicola K; Riepsaame, Joey; Calero-Nieto, Fernando J; Moignard, Victoria; Basilico, Silvia; Kinston, Sarah J; Hannah, Rebecca L; Chan, Mun Chiang; Nürnberg, Sylvia T; Ouwehand, Willem H; Bonzanni, Nicola; de Bruijn, Marella FTR; Göttgens, Berthold

    2016-01-01

    Transcription factor (TF) networks determine cell-type identity by establishing and maintaining lineage-specific expression profiles, yet reconstruction of mammalian regulatory network models has been hampered by a lack of comprehensive functional validation of regulatory interactions. Here, we report comprehensive ChIP-Seq, transgenic and reporter gene experimental data that have allowed us to construct an experimentally validated regulatory network model for haematopoietic stem/progenitor cells (HSPCs). Model simulation coupled with subsequent experimental validation using single cell expression profiling revealed potential mechanisms for cell state stabilisation, and also how a leukaemogenic TF fusion protein perturbs key HSPC regulators. The approach presented here should help to improve our understanding of both normal physiological and disease processes. DOI: http://dx.doi.org/10.7554/eLife.11469.001 PMID:26901438

  3. Intrusion-aware alert validation algorithm for cooperative distributed intrusion detection schemes of wireless sensor networks.

    PubMed

    Shaikh, Riaz Ahmed; Jameel, Hassan; d'Auriol, Brian J; Lee, Heejo; Lee, Sungyoung; Song, Young-Jae

    2009-01-01

    Existing anomaly and intrusion detection schemes of wireless sensor networks have mainly focused on the detection of intrusions. Once an intrusion is detected, alerts or claims will be generated. However, any unidentified malicious nodes in the network could send faulty anomaly and intrusion claims about the legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in the existing cooperative-based distributed anomaly and intrusion detection schemes of wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. This algorithm utilizes the concept of intrusion-aware reliability, which helps to provide adequate reliability at a modest communication cost. In this paper, we also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm.

  4. NNvPDB: Neural Network based Protein Secondary Structure Prediction with PDB Validation.

    PubMed

    Sakthivel, Seethalakshmi; S K M, Habeeb

    2015-01-01

    The secondary structural states predicted by existing servers are not cross-validated, so information on the level of accuracy for each sequence is not reported. NNvPDB overcomes this: it not only reports a higher Q3 accuracy but also validates every prediction against homologous PDB entries. NNvPDB is based on a neural network, with a new approach of training the network each time with five PDB structures that are similar to the query sequence. The average accuracy is 76% for helix, 71% for beta sheet and 66% overall (helix, sheet and coil). http://bit.srmuniv.ac.in/cgi-bin/bit/cfpdb/nnsecstruct.pl.
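
    Q3, the headline metric in this record, is simply the fraction of residues whose predicted three-state label matches the observed one; a minimal sketch (the sequences are made-up examples):

```python
def q3(predicted, observed):
    """Q3 accuracy: fraction of residues whose predicted three-state
    label (H = helix, E = sheet, C = coil) matches the observed one."""
    assert len(predicted) == len(observed)
    return sum(p == o for p, o in zip(predicted, observed)) / len(observed)

score = q3("HHHEECCC", "HHHEECCE")   # 7 of 8 residues agree
```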

  5. Intrusion-Aware Alert Validation Algorithm for Cooperative Distributed Intrusion Detection Schemes of Wireless Sensor Networks

    PubMed Central

    Shaikh, Riaz Ahmed; Jameel, Hassan; d’Auriol, Brian J.; Lee, Heejo; Lee, Sungyoung; Song, Young-Jae

    2009-01-01

    Existing anomaly and intrusion detection schemes for wireless sensor networks have mainly focused on the detection of intrusions. Once an intrusion is detected, an alert or claim is generated. However, any unidentified malicious node in the network could send faulty anomaly and intrusion claims about legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in the existing cooperative distributed anomaly and intrusion detection schemes for wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. The algorithm utilizes the concept of intrusion-aware reliability, which helps to provide adequate reliability at a modest communication cost. We also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm. PMID:22454568

  6. Disruption Tolerant Networking Flight Validation Experiment on NASA's EPOXI Mission

    NASA Technical Reports Server (NTRS)

    Wyatt, Jay; Burleigh, Scott; Jones, Ross; Torgerson, Leigh; Wissler, Steve

    2009-01-01

    In October and November of 2008, the Jet Propulsion Laboratory installed and tested essential elements of Delay/Disruption Tolerant Networking (DTN) technology on the Deep Impact spacecraft. This experiment, called Deep Impact Network Experiment (DINET), was performed in close cooperation with the EPOXI project which has responsibility for the spacecraft. During DINET some 300 images were transmitted from the JPL nodes to the spacecraft. Then they were automatically forwarded from the spacecraft back to the JPL nodes, exercising DTN's bundle origination, transmission, acquisition, dynamic route computation, congestion control, prioritization, custody transfer, and automatic retransmission procedures, both on the spacecraft and on the ground, over a period of 27 days. All transmitted bundles were successfully received, without corruption. The DINET experiment demonstrated DTN readiness for operational use in space missions. This activity was part of a larger NASA space DTN development program to mature DTN to flight readiness for a wide variety of mission types by the end of 2011. This paper describes the DTN protocols, the flight demo implementation, validation metrics which were created for the experiment, and validation results.

  7. A method for validating Rent's rule for technological and biological networks.

    PubMed

    Alcalde Cuesta, Fernando; González Sequeiros, Pablo; Lozano Rojo, Álvaro

    2017-07-14

    Rent's rule is an empirical power law introduced in an effort to describe and optimize the wiring complexity of computer logic graphs. It is known that brain and neuronal networks also obey Rent's rule, which is consistent with the idea that wiring costs play a fundamental role in brain evolution and development. Here we propose a method to validate this power law for a certain range of network partitions. The method is based on the bifurcation phenomenon that appears when the network is subjected to random alterations preserving its degree distribution. It has been tested on a set of VLSI circuits and real networks, both biological and technological. We also analyzed the effect of different types of random alterations on the Rentian scaling in order to test the influence of the degree distribution. Some network architectures are quite sensitive to these randomization procedures, showing significant increases in the values of the Rent exponents.
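
    Rent's rule relates the number of external terminals T of a partition to its block count G as T = t·G^p. Independent of the authors' bifurcation-based validation, the exponent p is conventionally estimated by a log-log least-squares fit; a self-contained sketch on synthetic partitions:

```python
import math

def fit_rent_exponent(gates, terminals):
    """Least-squares fit of Rent's rule T = t * G^p in log-log space.

    gates[i]: number of logic blocks in partition i;
    terminals[i]: external connections crossing that partition's boundary.
    Returns (t, p), with p the Rent exponent."""
    xs = [math.log(g) for g in gates]
    ys = [math.log(T) for T in terminals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    p = sxy / sxx              # slope -> Rent exponent
    t = math.exp(my - p * mx)  # intercept -> Rent coefficient
    return t, p

# Synthetic partitions obeying T = 4 * G^0.6 exactly:
gates = [2, 4, 8, 16, 32]
terminals = [4 * g ** 0.6 for g in gates]
t, p = fit_rent_exponent(gates, terminals)
```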

  8. Statistically validated network of portfolio overlaps and systemic risk.

    PubMed

    Gualdi, Stanislao; Cimini, Giulio; Primicerio, Kevin; Di Clemente, Riccardo; Challet, Damien

    2016-12-21

    Common asset holding by financial institutions (portfolio overlap) is nowadays regarded as an important channel for financial contagion with the potential to trigger fire sales and severe losses at the systemic level. We propose a method to assess the statistical significance of the overlap between heterogeneously diversified portfolios, which we use to build a validated network of financial institutions where links indicate potential contagion channels. The method is implemented on a historical database of institutional holdings ranging from 1999 to the end of 2013, but can be applied to any bipartite network. We find that the proportion of validated links (i.e. of significant overlaps) increased steadily before the 2007-2008 financial crisis and reached a maximum when the crisis occurred. We argue that the nature of this measure implies that systemic risk from fire sales liquidation was maximal at that time. After a sharp drop in 2008, systemic risk resumed its growth in 2009, with a notable acceleration in 2013. We finally show that market trends tend to be amplified in the portfolios identified by the algorithm, such that it is possible to have an informative signal about institutions that are about to suffer (enjoy) the most significant losses (gains).
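
    As a simplified illustration (the paper's null model accounts for heterogeneously diversified portfolios; a plain hypergeometric null over a fixed asset universe is assumed here), the significance of an observed overlap between two portfolios can be scored as a tail probability:

```python
from math import comb

def overlap_pvalue(N, d1, d2, k):
    """P(overlap >= k) when two portfolios holding d1 and d2 of N assets
    are drawn uniformly at random (hypergeometric null model)."""
    total = comb(N, d2)
    return sum(comb(d1, x) * comb(N - d1, d2 - x)
               for x in range(k, min(d1, d2) + 1)) / total

# Two 50-asset portfolios in a 500-asset universe share 20 names;
# the expected random overlap is only 5, so the link would be validated:
p = overlap_pvalue(500, 50, 50, 20)
```

    Links whose p-value falls below a chosen significance level would be kept in the validated network; the rest are discarded as noise.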

  9. Statistically validated network of portfolio overlaps and systemic risk

    PubMed Central

    Gualdi, Stanislao; Cimini, Giulio; Primicerio, Kevin; Di Clemente, Riccardo; Challet, Damien

    2016-01-01

    Common asset holding by financial institutions (portfolio overlap) is nowadays regarded as an important channel for financial contagion with the potential to trigger fire sales and severe losses at the systemic level. We propose a method to assess the statistical significance of the overlap between heterogeneously diversified portfolios, which we use to build a validated network of financial institutions where links indicate potential contagion channels. The method is implemented on a historical database of institutional holdings ranging from 1999 to the end of 2013, but can be applied to any bipartite network. We find that the proportion of validated links (i.e. of significant overlaps) increased steadily before the 2007–2008 financial crisis and reached a maximum when the crisis occurred. We argue that the nature of this measure implies that systemic risk from fire sales liquidation was maximal at that time. After a sharp drop in 2008, systemic risk resumed its growth in 2009, with a notable acceleration in 2013. We finally show that market trends tend to be amplified in the portfolios identified by the algorithm, such that it is possible to have an informative signal about institutions that are about to suffer (enjoy) the most significant losses (gains). PMID:28000764

  10. Validation of the thermal challenge problem using Bayesian Belief Networks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McFarland, John; Swiler, Laura Painton

    The thermal challenge problem has been developed at Sandia National Laboratories as a testbed for demonstrating various types of validation approaches and prediction methods. This report discusses one particular methodology to assess the validity of a computational model given experimental data. The methodology is based on Bayesian Belief Networks (BBNs) and can incorporate uncertainty in experimental measurements, in physical quantities, and in the model itself. The approach uses the prior and posterior distributions of model output to compute a validation metric based on Bayesian hypothesis testing (a Bayes' factor). This report discusses various aspects of the BBN, specifically in the context of the thermal challenge problem. A BBN is developed for a given set of experimental data in a particular experimental configuration. The development of the BBN and the method for "solving" the BBN to obtain the posterior distribution of model output through Markov chain Monte Carlo sampling are discussed in detail. The use of the BBN to compute a Bayes' factor is demonstrated.
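
    A Bayes' factor compares how strongly the data support the model against an alternative. Stripped of the BBN and MCMC machinery of the report, a point-hypothesis sketch with Gaussian measurement error (all numbers below are illustrative) looks like:

```python
import math

def gaussian_loglik(data, mu, sigma):
    """Log-likelihood of i.i.d. data under N(mu, sigma^2)."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

def bayes_factor(data, model_mu, alt_mu, sigma):
    """Evidence ratio for 'measurements centred on the model prediction'
    versus an alternative centred elsewhere (point hypotheses only)."""
    return math.exp(gaussian_loglik(data, model_mu, sigma)
                    - gaussian_loglik(data, alt_mu, sigma))

# Measurements sitting on the model prediction of 1.0 strongly favour it:
data = [1.02, 0.98, 1.01, 0.99]
bf = bayes_factor(data, model_mu=1.0, alt_mu=1.5, sigma=0.1)
```

    A Bayes' factor much greater than 1 supports model validity; the report's BBN generalizes this by propagating uncertainty through the full network of quantities.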

  11. Towards a Methodology for Validation of Centrality Measures in Complex Networks

    PubMed Central

    2014-01-01

    Background Living systems are associated with social networks: networks made up of nodes, some of which may be more important in various respects than others. While different quantitative measures labeled as “centralities” have previously been used in the network analysis community to identify influential nodes in a network, it is debatable how valid the centrality measures actually are. In other words, the research question that remains unanswered is: how exactly do these measures perform in the real world? For example, if the centrality of a particular node identifies it as important, is the node actually important? Purpose The goal of this paper is not just to perform a traditional social network analysis but rather to evaluate different centrality measures by conducting an empirical study of exactly how network centralities correlate with data from published multidisciplinary network data sets. Method We take standard published network data sets while using a random network to establish a baseline. These data sets included Zachary's Karate Club network, a dolphin social network and a neural network of the nematode Caenorhabditis elegans. Each of the data sets was analyzed in terms of different centrality measures and compared with existing knowledge from associated published articles to review the role of each centrality measure in the determination of influential nodes. Results Our empirical analysis demonstrates that in the chosen network data sets, nodes with a high Closeness Centrality also had a high Eccentricity Centrality, and a high Degree Centrality likewise correlated closely with a high Eigenvector Centrality, whereas Betweenness Centrality varied according to network topology and did not demonstrate any noticeable pattern. In terms of identification of key nodes, we discovered that, compared with other centrality measures, Eigenvector and Eccentricity Centralities were better able to identify important nodes.
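
    The measures being compared are standard graph quantities. As a small self-contained example (a toy star graph, not the paper's data sets), degree and closeness centrality agree on the hub, illustrating the kind of cross-measure agreement the study tests empirically:

```python
from collections import deque

def closeness(adj, v):
    """Closeness centrality: (n - 1) / sum of BFS distances from v."""
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return (len(adj) - 1) / sum(d for u, d in dist.items() if u != v)

# Star graph: hub 0 joined to leaves 1-4.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
degree = {v: len(adj[v]) for v in adj}
close = {v: closeness(adj, v) for v in adj}
# Both measures rank the hub (node 0) first.
```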

  12. Validation of the Social Networking Activity Intensity Scale among Junior Middle School Students in China.

    PubMed

    Li, Jibin; Lau, Joseph T F; Mo, Phoenix K H; Su, Xuefen; Wu, Anise M S; Tang, Jie; Qin, Zuguo

    2016-01-01

    Online social networking use has been integrated into adolescents' daily life and the intensity of online social networking use may have important consequences on adolescents' well-being. However, there are few validated instruments to measure social networking use intensity. The present study aims to develop the Social Networking Activity Intensity Scale (SNAIS) and validate it among junior middle school students in China. A total of 910 students who were social networking users were recruited from two junior middle schools in Guangzhou, and 114 students were retested after two weeks to examine the test-retest reliability. The psychometrics of the SNAIS were estimated using appropriate statistical methods. Two factors, Social Function Use Intensity (SFUI) and Entertainment Function Use Intensity (EFUI), were clearly identified by both exploratory and confirmatory factor analyses. No ceiling or floor effects were observed for the SNAIS and its two subscales. The SNAIS and its two subscales exhibited acceptable reliability (Cronbach's alpha = 0.89, 0.90 and 0.60, and test-retest Intra-class Correlation Coefficient = 0.85, 0.87 and 0.67 for Overall scale, SFUI and EFUI subscale, respectively, p<0.001). As expected, the SNAIS and its subscale scores were correlated significantly with emotional connection to social networking, social networking addiction, Internet addiction, and characteristics related to social networking use. The SNAIS is an easily self-administered scale with good psychometric properties. It would facilitate more research in this field worldwide and specifically in the Chinese population.
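
    The reliability figures quoted are Cronbach's alpha, which is computable directly from an item-by-respondent score matrix; a minimal sketch with made-up scores:

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """Cronbach's alpha; items[i][j] is respondent j's score on item i."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(it) for it in items)
                          / variance(totals))

# Three perfectly parallel items give the maximum alpha of 1:
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```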

  13. Genotet: An Interactive Web-based Visual Exploration Framework to Support Validation of Gene Regulatory Networks.

    PubMed

    Yu, Bowen; Doraiswamy, Harish; Chen, Xi; Miraldi, Emily; Arrieta-Ortiz, Mario Luis; Hafemeister, Christoph; Madar, Aviv; Bonneau, Richard; Silva, Cláudio T

    2014-12-01

    Elucidation of transcriptional regulatory networks (TRNs) is a fundamental goal in biology, and one of the most important components of TRNs are transcription factors (TFs), proteins that specifically bind to gene promoter and enhancer regions to alter target gene expression patterns. Advances in genomic technologies as well as advances in computational biology have led to multiple large regulatory network models (directed networks) each with a large corpus of supporting data and gene-annotation. There are multiple possible biological motivations for exploring large regulatory network models, including: validating TF-target gene relationships, figuring out co-regulation patterns, and exploring the coordination of cell processes in response to changes in cell state or environment. Here we focus on queries aimed at validating regulatory network models, and on coordinating visualization of primary data and directed weighted gene regulatory networks. The large size of both the network models and the primary data can make such coordinated queries cumbersome with existing tools and, in particular, inhibits the sharing of results between collaborators. In this work, we develop and demonstrate a web-based framework for coordinating visualization and exploration of expression data (RNA-seq, microarray), network models and gene-binding data (ChIP-seq). Using specialized data structures and multiple coordinated views, we design an efficient querying model to support interactive analysis of the data. Finally, we show the effectiveness of our framework through case studies for the mouse immune system (a dataset focused on a subset of key cellular functions) and a model bacteria (a small genome with high data-completeness).

  14. Validation of the Social Networking Activity Intensity Scale among Junior Middle School Students in China

    PubMed Central

    Li, Jibin; Lau, Joseph T. F.; Mo, Phoenix K. H.; Su, Xuefen; Wu, Anise M. S.; Tang, Jie; Qin, Zuguo

    2016-01-01

    Background Online social networking use has been integrated into adolescents’ daily life and the intensity of online social networking use may have important consequences on adolescents’ well-being. However, there are few validated instruments to measure social networking use intensity. The present study aims to develop the Social Networking Activity Intensity Scale (SNAIS) and validate it among junior middle school students in China. Methods A total of 910 students who were social networking users were recruited from two junior middle schools in Guangzhou, and 114 students were retested after two weeks to examine the test-retest reliability. The psychometrics of the SNAIS were estimated using appropriate statistical methods. Results Two factors, Social Function Use Intensity (SFUI) and Entertainment Function Use Intensity (EFUI), were clearly identified by both exploratory and confirmatory factor analyses. No ceiling or floor effects were observed for the SNAIS and its two subscales. The SNAIS and its two subscales exhibited acceptable reliability (Cronbach’s alpha = 0.89, 0.90 and 0.60, and test-retest Intra-class Correlation Coefficient = 0.85, 0.87 and 0.67 for Overall scale, SFUI and EFUI subscale, respectively, p<0.001). As expected, the SNAIS and its subscale scores were correlated significantly with emotional connection to social networking, social networking addiction, Internet addiction, and characteristics related to social networking use. Conclusions The SNAIS is an easily self-administered scale with good psychometric properties. It would facilitate more research in this field worldwide and specifically in the Chinese population. PMID:27798699

  15. Correcting evaluation bias of relational classifiers with network cross validation

    DOE PAGES

    Neville, Jennifer; Gallagher, Brian; Eliassi-Rad, Tina; ...

    2011-01-04

    Recently, a number of modeling techniques have been developed for data mining and machine learning in relational and network domains where the instances are not independent and identically distributed (i.i.d.). These methods specifically exploit the statistical dependencies among instances in order to improve classification accuracy. However, there has been little focus on how these same dependencies affect our ability to draw accurate conclusions about the performance of the models. More specifically, the complex link structure and attribute dependencies in relational data violate the assumptions of many conventional statistical tests and make it difficult to use these tests to assess the models in an unbiased manner. In this work, we examine the task of within-network classification and the question of whether two algorithms will learn models that result in significantly different levels of performance. We show that the commonly used form of evaluation (a paired t-test on overlapping network samples) can result in an unacceptable level of Type I error. Furthermore, we show that Type I error increases as (1) the correlation among instances increases and (2) the size of the evaluation set increases (i.e., the proportion of labeled nodes in the network decreases). Lastly, we propose a method for network cross-validation that, combined with paired t-tests, produces more acceptable levels of Type I error while still providing reasonable levels of statistical power (i.e., 1 - Type II error).
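
    The core of the remedy is evaluating on disjoint node samples rather than overlapping ones. A bare-bones sketch of such a partition (the paper's full NCV procedure additionally controls the labeled/unlabeled proportions; the helper below is illustrative):

```python
import random

def network_cv_folds(nodes, k, seed=0):
    """Split nodes into k disjoint test sets, so each pairwise model
    comparison is made on non-overlapping network samples."""
    nodes = list(nodes)
    random.Random(seed).shuffle(nodes)
    return [set(nodes[i::k]) for i in range(k)]

# 100 nodes split into 5 mutually disjoint test folds:
folds = network_cv_folds(range(100), 5)
```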

  16. Enhanced data validation strategy of air quality monitoring network.

    PubMed

    Harkat, Mohamed-Faouzi; Mansouri, Majdi; Nounou, Mohamed; Nounou, Hazem

    2018-01-01

    Quick validation and detection of faults in measured air quality data is a crucial step towards achieving the objectives of air quality networks. The objectives of this paper are therefore threefold: (i) to develop a modeling technique that can be used to predict the normal behavior of air quality variables and provide an accurate reference for monitoring purposes; (ii) to develop a fault detection method that can effectively and quickly detect any anomalies in measured air quality data. For this purpose, a new fault detection method based on the combination of the generalized likelihood ratio test (GLRT) and the exponentially weighted moving average (EWMA) is developed. GLRT is a well-known statistical fault detection method that relies on maximizing the detection probability for a given false alarm rate. We propose a GLRT-based EWMA fault detection method that is able to detect changes in the values of certain air quality variables; (iii) to develop a fault isolation and identification method that allows defining the fault source(s) in order to apply appropriate corrective actions. A reconstruction approach based on a Midpoint-Radii Principal Component Analysis (MRPCA) model is developed to handle the types of data and models associated with air quality monitoring networks. All air quality modeling, fault detection, fault isolation and reconstruction methods developed in this paper are validated using real air quality data (such as particulate matter, ozone, and nitrogen and carbon oxide measurements). Copyright © 2017 Elsevier Inc. All rights reserved.
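
    The EWMA half of the proposed detector flags a fault when the smoothed residual leaves its control band. A sketch of that component alone (the GLRT combination and MRPCA reconstruction are omitted, and all parameters below are illustrative):

```python
def ewma_detect(residuals, lam=0.2, L=3.0, sigma=1.0):
    """Flag samples whose EWMA-smoothed residual leaves the control band
    of width L asymptotic standard deviations."""
    limit = L * sigma * (lam / (2 - lam)) ** 0.5
    z, flags = 0.0, []
    for r in residuals:
        z = lam * r + (1 - lam) * z   # exponential smoothing
        flags.append(abs(z) > limit)
    return flags

# In-control noise followed by a persistent +4 sensor bias:
flags = ewma_detect([0.1, -0.2, 0.05, 0.1, -0.1] + [4.0] * 5)
```

    The smoothing makes the chart sensitive to small persistent shifts: the bias is flagged from its second sample onward while the in-control noise never trips the limit.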

  17. Use of Bayesian Networks to Probabilistically Model and Improve the Likelihood of Validation of Microarray Findings by RT-PCR

    PubMed Central

    English, Sangeeta B.; Shih, Shou-Ching; Ramoni, Marco F.; Smith, Lois E.; Butte, Atul J.

    2014-01-01

    Though genome-wide technologies, such as microarrays, are widely used, data from these methods are considered noisy; there is still varied success in downstream biological validation. We report a method that increases the likelihood of successfully validating microarray findings using real time RT-PCR, including genes at low expression levels and with small differences. We use a Bayesian network to identify the most relevant sources of noise based on the successes and failures in validation for an initial set of selected genes, and then improve our subsequent selection of genes for validation based on eliminating these sources of noise. The network displays the significant sources of noise in an experiment, and scores the likelihood of validation for every gene. We show how the method can significantly increase validation success rates. In conclusion, in this study, we have successfully added a new automated step to determine the contributory sources of noise that determine successful or unsuccessful downstream biological validation. PMID:18790084

  18. Validation of electronic medical record-based phenotyping algorithms: results and lessons learned from the eMERGE network.

    PubMed

    Newton, Katherine M; Peissig, Peggy L; Kho, Abel Ngo; Bielinski, Suzette J; Berg, Richard L; Choudhary, Vidhu; Basford, Melissa; Chute, Christopher G; Kullo, Iftikhar J; Li, Rongling; Pacheco, Jennifer A; Rasmussen, Luke V; Spangler, Leslie; Denny, Joshua C

    2013-06-01

    Genetic studies require precise phenotype definitions, but electronic medical record (EMR) phenotype data are recorded inconsistently and in a variety of formats. To present lessons learned about validation of EMR-based phenotypes from the Electronic Medical Records and Genomics (eMERGE) studies. The eMERGE network created and validated 13 EMR-derived phenotype algorithms. Network sites are Group Health, Marshfield Clinic, Mayo Clinic, Northwestern University, and Vanderbilt University. By validating EMR-derived phenotypes we learned that: (1) multisite validation improves phenotype algorithm accuracy; (2) targets for validation should be carefully considered and defined; (3) specifying time frames for review of variables eases validation time and improves accuracy; (4) using repeated measures requires defining the relevant time period and specifying the most meaningful value to be studied; (5) patient movement in and out of the health plan (transience) can result in incomplete or fragmented data; (6) the review scope should be defined carefully; (7) particular care is required in combining EMR and research data; (8) medication data can be assessed using claims, medications dispensed, or medications prescribed; (9) algorithm development and validation work best as an iterative process; and (10) validation by content experts or structured chart review can provide accurate results. Despite the diverse structure of the five EMRs of the eMERGE sites, we developed, validated, and successfully deployed 13 electronic phenotype algorithms. Validation is a worthwhile process that not only measures phenotype performance but also strengthens phenotype algorithm definitions and enhances their inter-institutional sharing.

  19. Validating the Use of Deep Learning Neural Networks for Correction of Large Hydrometric Datasets

    NASA Astrophysics Data System (ADS)

    Frazier, N.; Ogden, F. L.; Regina, J. A.; Cheng, Y.

    2017-12-01

    Collection and validation of Earth systems data can be time consuming and labor intensive. In particular, high-resolution hydrometric data, including rainfall and streamflow measurements, are difficult to obtain due to a multitude of complicating factors. Measurement equipment is subject to clogs, environmental disturbances, and sensor drift. Manual intervention is typically required to identify, correct, and validate these data. Weirs can become clogged, and the pressure transducer may float or drift over time. We typically employ a graphical tool called Time Series Editor to manually remove clogs and sensor drift from the data. However, this process is highly subjective and requires hydrological expertise: two different people may produce two different data sets. To use these data for scientific discovery and model validation, a more consistent processing method is needed. Deep learning neural networks have proved to be excellent mechanisms for recognizing patterns in data. We explore the use of Recurrent Neural Networks (RNNs) to capture the patterns in the data over time, using various gating mechanisms (LSTM and GRU), network architectures, and hyper-parameters to build an automated data correction model. We also explore how much manually corrected training data is required to train the network to reasonable accuracy. The benefits of this approach are that the time to process a data set is significantly reduced, and the results are 100% reproducible after training is complete. Additionally, we train the RNN and calibrate a physically based hydrological model against the same portion of data. Both the RNN and the model are applied to the remaining data using a split-sample methodology. Performance of the machine-learning model is evaluated for plausibility by comparison with the output of the hydrological model, and this analysis identifies potential periods where additional investigation is warranted.
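
    The gating mechanisms mentioned (LSTM, GRU) decide at each time step how much of the past state to keep versus overwrite. A scalar GRU step with hypothetical, untrained weights, purely to illustrate the gating (not the authors' trained network):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w):
    """One GRU step for scalar input x and state h; w holds the six
    scalar input/recurrent weights of the update gate, reset gate and
    candidate state (illustrative, untrained values)."""
    z = sigmoid(w["wz"] * x + w["uz"] * h)             # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h)             # reset gate
    hbar = math.tanh(w["wh"] * x + w["uh"] * (r * h))  # candidate state
    return (1 - z) * h + z * hbar                      # gated mix

w = {"wz": 1.0, "uz": 0.0, "wr": 1.0, "ur": 0.0, "wh": 1.0, "uh": 1.0}
h = 0.0
for x in [0.5, 0.6, 0.55]:   # a short stage series
    h = gru_step(x, h, w)
```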

  20. Validating module network learning algorithms using simulated data.

    PubMed

    Michoel, Tom; Maere, Steven; Bonnet, Eric; Joshi, Anagha; Saeys, Yvan; Van den Bulcke, Tim; Van Leemput, Koenraad; van Remortel, Piet; Kuiper, Martin; Marchal, Kathleen; Van de Peer, Yves

    2007-05-03

    In recent years, several authors have used probabilistic graphical models to learn expression modules and their regulatory programs from gene expression data. Despite the demonstrated success of such algorithms in uncovering biologically relevant regulatory relations, further developments in the area are hampered by a lack of tools to compare the performance of alternative module network learning strategies. Here, we demonstrate the use of the synthetic data generator SynTReN for the purpose of testing and comparing module network learning algorithms. We introduce a software package for learning module networks, called LeMoNe, which incorporates a novel strategy for learning regulatory programs. Novelties include the use of bottom-up Bayesian hierarchical clustering to construct the regulatory programs, and the use of a conditional entropy measure to assign regulators to the regulation program nodes. Using SynTReN data, we test the performance of LeMoNe in a completely controlled situation and assess the effect of the methodological changes we made with respect to an existing software package, namely Genomica. Additionally, we assess the effect of various parameters, such as the size of the data set and the amount of noise, on the inference performance. Overall, application of Genomica and LeMoNe to simulated data sets gave comparable results. However, LeMoNe offers some advantages, one of them being that the learning process is considerably faster for larger data sets. Additionally, we show that the location of the regulators in the LeMoNe regulation programs and their conditional entropy may be used to prioritize regulators for functional validation, and that the combination of the bottom-up clustering strategy with the conditional entropy-based assignment of regulators improves the handling of missing or hidden regulators. We show that data simulators such as SynTReN are very well suited for the purpose of developing, testing and improving module network learning algorithms.
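
    The conditional entropy used to assign regulators measures how much uncertainty about a target's state remains once the regulator's state is known; for discrete observations it can be computed directly (a generic sketch, not LeMoNe's implementation):

```python
import math
from collections import Counter

def conditional_entropy(pairs):
    """H(Y|X) in bits for (x, y) observations: remaining uncertainty in
    the target Y once the regulator X is known."""
    n = len(pairs)
    joint = Counter(pairs)
    marg_x = Counter(x for x, _ in pairs)
    return -sum(c / n * math.log2(c / marg_x[x])
                for (x, _), c in joint.items())

h_det = conditional_entropy([(0, 0), (0, 0), (1, 1), (1, 1)])  # Y = X
h_ind = conditional_entropy([(0, 0), (0, 1), (1, 0), (1, 1)])  # independent
```

    A regulator that determines its target yields zero conditional entropy, while an uninformative one leaves the target's full entropy; lower values thus prioritize regulators for validation.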

  1. An overview of mesoscale aerosol processes, comparisons, and validation studies from DRAGON networks

    NASA Astrophysics Data System (ADS)

    Holben, Brent N.; Kim, Jhoon; Sano, Itaru; Mukai, Sonoyo; Eck, Thomas F.; Giles, David M.; Schafer, Joel S.; Sinyuk, Aliaksandr; Slutsker, Ilya; Smirnov, Alexander; Sorokin, Mikhail; Anderson, Bruce E.; Che, Huizheng; Choi, Myungje; Crawford, James H.; Ferrare, Richard A.; Garay, Michael J.; Jeong, Ukkyo; Kim, Mijin; Kim, Woogyung; Knox, Nichola; Li, Zhengqiang; Lim, Hwee S.; Liu, Yang; Maring, Hal; Nakata, Makiko; Pickering, Kenneth E.; Piketh, Stuart; Redemann, Jens; Reid, Jeffrey S.; Salinas, Santo; Seo, Sora; Tan, Fuyi; Tripathi, Sachchida N.; Toon, Owen B.; Xiao, Qingyang

    2018-01-01

    Over the past 24 years, the AErosol RObotic NETwork (AERONET) program has provided highly accurate remote-sensing characterization of aerosol optical and physical properties for an increasingly extensive geographic distribution including all continents and many oceanic island and coastal sites. The measurements and retrievals from the AERONET global network have addressed satellite and model validation needs very well, but there have been challenges in making comparisons to similar parameters from in situ surface and airborne measurements. Additionally, with improved spatial and temporal satellite remote sensing of aerosols, there is a need for higher spatial-resolution ground-based remote-sensing networks. An effort to address these needs resulted in a number of field campaign networks called Distributed Regional Aerosol Gridded Observation Networks (DRAGONs) that were designed to provide a database for in situ and remote-sensing comparison and analysis of local to mesoscale variability in aerosol properties. This paper describes the DRAGON deployments that will continue to contribute to the growing body of research related to meso- and microscale aerosol features and processes. The research presented in this special issue illustrates the diversity of topics that has resulted from the application of data from these networks.

  2. Valid approximation of spatially distributed grain size distributions - A priori information encoded to a feedforward network

    NASA Astrophysics Data System (ADS)

    Berthold, T.; Milbradt, P.; Berkhahn, V.

    2018-04-01

    This paper presents a model for the approximation of multiple, spatially distributed grain size distributions based on a feedforward neural network. Since a classical feedforward network does not guarantee to produce valid cumulative distribution functions, a priori information is incorporated into the model by applying weight and architecture constraints. The model is derived in two steps. First, a model is presented that is able to produce a valid distribution function for a single sediment sample. Although initially developed for sediment samples, the model is not limited in its application; it can also be used to approximate any other multimodal continuous distribution function. In the second part, the network is extended in order to capture the spatial variation of the sediment samples that have been obtained from 48 locations in the investigation area. Results show that the model provides an adequate approximation of grain size distributions, satisfying the requirements of a cumulative distribution function.
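
    The exact weight and architecture constraints are the paper's own; as a generic illustration of the idea, squaring the weights of a single hidden layer is one way to force a network output to be non-decreasing in its input, a necessary property of a CDF (all parameters below are illustrative):

```python
import math

def monotone_cdf_net(x, weights, biases):
    """One-hidden-layer net that is non-decreasing in x by construction:
    every weight is squared (hence non-negative) before use, and the
    final sigmoid keeps outputs in (0, 1), as a CDF requires."""
    s = sum((w ** 2) * math.tanh(x + b) for w, b in zip(weights, biases))
    return 1.0 / (1.0 + math.exp(-s))

ws, bs = [0.8, 0.5], [-1.0, 0.5]   # illustrative parameters
values = [monotone_cdf_net(x, ws, bs) for x in (-2.0, 0.0, 2.0)]
# values increase with x and stay inside (0, 1)
```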

  3. Validating the TeleStroke Mimic Score: A Prediction Rule for Identifying Stroke Mimics Evaluated Over Telestroke Networks.

    PubMed

    Ali, Syed F; Hubert, Gordian J; Switzer, Jeffrey A; Majersik, Jennifer J; Backhaus, Roland; Shepard, L Wylie; Vedala, Kishore; Schwamm, Lee H

    2018-03-01

    Up to 30% of acute stroke evaluations are deemed stroke mimics, and these are common in telestroke as well. We recently published a risk prediction score, derived and validated in the Partners TeleStroke Network, for use during telestroke encounters to differentiate stroke mimics from ischemic cerebrovascular disease. Using data from 3 distinct US and European telestroke networks, we sought to externally validate the TeleStroke Mimic (TM) score in a broader population. We evaluated the TM score in 1930 telestroke consults from the University of Utah, Georgia Regents University, and the German TeleMedical Project for Integrative Stroke Care Network. We report the area under the curve in receiver-operating characteristic curve analysis with 95% confidence interval for our previously derived TM score, in which lower TM scores correspond with a higher likelihood of being a stroke mimic. Based on final diagnosis at the end of the telestroke consultation, there were 630 of 1930 (32.6%) stroke mimics in the external validation cohort. All 6 variables included in the score were significantly different between patients with ischemic cerebrovascular disease versus stroke mimics. The TM score performed well (area under curve, 0.72; 95% confidence interval, 0.70-0.73; P<0.001), similar to our prior external validation in the Partners National Telestroke Network. The TM score's ability to predict the presence of a stroke mimic during telestroke consultation in these diverse cohorts was similar to its performance in our original cohort. Predictive decision-support tools like the TM score may help highlight key clinical differences between mimics and patients with stroke during complex, time-critical telestroke evaluations. © 2018 American Heart Association, Inc.

  4. Validating Farmers' Indigenous Social Networks for Local Seed Supply in Central Rift Valley of Ethiopia.

    ERIC Educational Resources Information Center

    Seboka, B.; Deressa, A.

    2000-01-01

    Indigenous social networks of Ethiopian farmers participate in seed exchange based on mutual interdependence and trust. A government-imposed extension program must validate the role of local seed systems in developing a national seed industry. (SK)

  5. Validating neural-network refinements of nuclear mass models

    NASA Astrophysics Data System (ADS)

    Utama, R.; Piekarewicz, J.

    2018-01-01

    Background: Nuclear astrophysics centers on the role of nuclear physics in the cosmos. In particular, nuclear masses at the limits of stability are critical in the development of stellar structure and the origin of the elements. Purpose: We aim to test and validate the predictions of recently refined nuclear mass models against the newly published AME2016 compilation. Methods: The basic paradigm underlying the recently refined nuclear mass models is based on existing state-of-the-art models that are subsequently refined through the training of an artificial neural network. Bayesian inference is used to determine the parameters of the neural network so that statistical uncertainties are provided for all model predictions. Results: We observe a significant improvement in the Bayesian neural network (BNN) predictions relative to the corresponding "bare" models when compared to the nearly 50 new masses reported in the AME2016 compilation. Further, AME2016 estimates for the handful of impactful isotopes in the determination of r-process abundances are found to be in fairly good agreement with our theoretical predictions. Indeed, the BNN-improved Duflo-Zuker model predicts a root-mean-square deviation relative to experiment of σrms ≃ 400 keV. Conclusions: Given the excellent performance of the BNN refinement in confronting the recently published AME2016 compilation, we are confident of its critical role in our quest for mass models of the highest quality. Moreover, as uncertainty quantification is at the core of the BNN approach, the improved mass models are in a unique position to identify those nuclei that will have the strongest impact in resolving some of the outstanding questions in nuclear astrophysics.
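    The refinement paradigm described here, a base "bare" mass model plus a learned correction to its residuals, can be illustrated schematically. This sketch substitutes a linear least-squares fit for the paper's Bayesian neural network, and the toy "bare model", features, and data are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "experimental" masses and a deliberately incomplete bare model.
Z = rng.integers(20, 80, size=200)              # proton number
N = rng.integers(20, 120, size=200)             # neutron number
m_exp = 8.0 * (Z + N) + 0.05 * (N - Z) ** 2 / (Z + N)
m_bare = 8.0 * (Z + N)                          # bare model misses the asymmetry term

# Learn the residual m_exp - m_bare from simple features of (Z, N),
# standing in for the paper's neural network trained on residuals.
X = np.column_stack([np.ones_like(Z), Z, N, (N - Z) ** 2 / (Z + N)])
coef, *_ = np.linalg.lstsq(X, m_exp - m_bare, rcond=None)
m_refined = m_bare + X @ coef                   # refined model = bare + correction

rms_bare = np.sqrt(np.mean((m_exp - m_bare) ** 2))
rms_refined = np.sqrt(np.mean((m_exp - m_refined) ** 2))
assert rms_refined < rms_bare                   # refinement reduces the rms deviation
```

    The Bayesian step in the actual work replaces the point estimate `coef` with a posterior distribution over network parameters, which is what yields the statistical uncertainties on every predicted mass.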

  6. [Systemic validation of clinical practice guidelines: the AGREE network].

    PubMed

    Hannes, K; Van Royen, P; Aertgeerts, B; Buntinx, F; Ramaekers, D; Chevalier, P

    2005-12-01

    Over recent decades, the number of available clinical practice guidelines has grown enormously. Guidelines should meet specific quality criteria to ensure that they are of good quality. There is a growing need for the development of a set of criteria to ensure that potential biases inherent in guideline development have been properly addressed and that the recommendations for practice are valid and reliable. The AGREE collaboration is an international network that developed an instrument to critically appraise the methodological quality of guidelines. AGREE promotes a clear strategy to produce, disseminate and evaluate guidelines of high quality. In the first phase of the international project the AGREE instrument was tested in 11 different countries. Based on this experience the instrument was refined and optimised. In the second phase it was disseminated, promoted and evaluated in 18 participating countries. Belgium was one of them. The Belgian partner in the AGREE project developed 3 workshops and established 13 validation committees to validate guidelines from Belgian developer groups. We collected 33 questionnaires from participants of the workshops and the validation committees, in which we asked about their initial experiences and about the usefulness and applicability of the instrument. We were also interested in the shortcomings of the instrument and potential strategies to address them. More efforts should be made to train methodological experts in the skills required for critical appraisal of clinical practice guidelines. Promoting the AGREE instrument will lead to a broader knowledge and use of quality criteria in guideline development and appraisal. The development and dissemination of an international list of criteria to appraise the quality of guidelines will stimulate the development of methodologically sound guidelines. International comparisons between existing guidelines will lead to a better collaboration between guideline developers throughout the world.

  7. Multiple Score Comparison: a network meta-analysis approach to comparison and external validation of prognostic scores.

    PubMed

    Haile, Sarah R; Guerra, Beniamino; Soriano, Joan B; Puhan, Milo A

    2017-12-21

    Prediction models and prognostic scores have been increasingly popular in both clinical practice and clinical research settings, for example to aid in risk-based decision making or control for confounding. In many medical fields, a large number of prognostic scores are available, but practitioners may find it difficult to choose between them due to lack of external validation as well as lack of comparisons between them. Borrowing methodology from network meta-analysis, we describe an approach to Multiple Score Comparison meta-analysis (MSC) which permits concurrent external validation and comparisons of prognostic scores using individual patient data (IPD) arising from a large-scale international collaboration. We describe the challenges in adapting network meta-analysis to the MSC setting, for instance the need to explicitly include correlations between the scores on a cohort level, and how to deal with many multi-score studies. We propose first using IPD to make cohort-level aggregate discrimination or calibration scores, comparing all to a common comparator. Then, standard network meta-analysis techniques can be applied, taking care to consider correlation structures in cohorts with multiple scores. Transitivity, consistency and heterogeneity are also examined. We provide a clinical application, comparing prognostic scores for 3-year mortality in patients with chronic obstructive pulmonary disease using data from a large-scale collaborative initiative. We focus on the discriminative properties of the prognostic scores. Our results show clear differences in performance, with ADO and eBODE showing higher discrimination with respect to mortality than other considered scores. The assumptions of transitivity and local and global consistency were not violated. Heterogeneity was small. We applied a network meta-analytic methodology to externally validate and concurrently compare the prognostic properties of clinical scores. 
Our large-scale external validation indicates

  8. Validation of OMPS Ozone Profile Data with Expanded Dataset from Brewer and Automated Dobson Network.

    NASA Astrophysics Data System (ADS)

    Petropavlovskikh, I.; Weatherhead, E.; Cede, A.; Oltmans, S. J.; Kireev, S.; Maillard, E.; Bhartia, P. K.; Flynn, L. E.

    2005-12-01

    The first NPOESS satellite is scheduled to be launched in 2010 and will carry the Ozone Mapping and Profiler Suite (OMPS) instruments for ozone monitoring. Prior to this, the OMPS instruments and algorithms will be tested by flight on the NPOESS/NPP satellite, scheduled for launch in 2008. Pre-launch planning for validation, post-launch data validation and verification of the nadir and limb profile algorithm are key components for ensuring that the NPOESS will produce a high-quality, reliable ozone profile data set. The heritage of satellite instrument validation (TOMS, SBUV, GOME, SCIAMACHY, SAGE, HALOE, ATMOS, etc.) has always relied upon surface-based observations. While the global coverage of satellite observations is appealing for validating another satellite, there is no substitute for the hard reference point of a ground-based system such as the Dobson or Brewer network, whose instruments are routinely calibrated and intercompared to standard references. The standard solar occultation instruments, SAGE II and HALOE, are well beyond their planned lifetimes and might be inoperative during the OMPS period. The Umkehr network has been one of the key data sets for stratospheric ozone trend calculations and has earned its place as a benchmark network for stratospheric ozone profile observations. The normalization of measurements at different solar zenith angles (SZAs) to the measurement at the smallest SZA cancels out many calibration parameters, including the extra-terrestrial solar flux and instrumental constant, thus providing a "self-calibrating" technique in the same manner relied upon by the occultation sensors on satellites. Moreover, the ground-based Umkehr measurement is the only technique that provides data with the same altitude resolution and in the same units (DU) as do the UV-nadir instruments (SBUV-2, GOME-2, OMPS-nadir), i.e., as ozone amount in pressure layers, whereas occultation instruments measure ozone density with height. A new Umkehr algorithm

  9. The South Fork Experimental Watershed: Soil moisture and precipitation network for satellite validation

    NASA Astrophysics Data System (ADS)

    Cosh, M. H.; Prueger, J. H.; McKee, L.; Bindlish, R.

    2013-12-01

    A long-term network for the study of soil moisture and precipitation was recently deployed in north-central Iowa in cooperation between USDA and NASA. This site will serve as a joint calibration/validation network for the Soil Moisture Active Passive (SMAP) and Global Precipitation Measurement (GPM) missions. A total of 20 dual-gauge precipitation stations were established across a watershed landscape with an area of approximately 600 km2. In addition, four soil moisture probes were installed in profile at 5, 10, 20, and 50 cm. The network was installed in April 2013, at the initiation of the Iowa Flood Study (IFloodS), a six-week intensive ground-based radar observation period conducted in coordination with NASA and the University of Iowa. This site is a member watershed of the Group on Earth Observations Joint Experiments on Crop Assessment and Monitoring (GEO-JECAM) program. A variety of quality-control procedures, along with the spatial and temporal stability of the network, are examined. Initial comparisons of the watershed to soil moisture estimates from satellites are also conducted.

  10. Statistically validated mobile communication networks: the evolution of motifs in European and Chinese data

    NASA Astrophysics Data System (ADS)

    Li, Ming-Xia; Palchykov, Vasyl; Jiang, Zhi-Qiang; Kaski, Kimmo; Kertész, János; Miccichè, Salvatore; Tumminello, Michele; Zhou, Wei-Xing; Mantegna, Rosario N.

    2014-08-01

    Big data open up unprecedented opportunities for investigating complex systems, including society. In particular, communication data serve as major sources for computational social sciences, but they have to be cleaned and filtered as they may contain spurious information due to recording errors as well as interactions, like commercial and marketing activities, not directly related to the social network. The network constructed from communication data can only be considered as a proxy for the network of social relationships. Here we apply a systematic method, based on multiple-hypothesis testing, to statistically validate the links and then construct the corresponding Bonferroni network, generalized to the directed case. We study two large datasets of mobile phone records, one from Europe and the other from China. For both datasets we compare the raw data networks with the corresponding Bonferroni networks and point out significant differences in the structures and in the basic network measures. We show evidence that the Bonferroni network provides a better proxy for the network of social interactions than the original one. Using the filtered networks, we investigated the statistics and temporal evolution of small directed 3-motifs and concluded that closed communication triads have a formation time scale, which is quite fast and typically intraday. We also find that open communication triads preferentially evolve into other open triads with a higher fraction of reciprocated calls. These stylized facts were observed for both datasets.
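    The validation step described above can be sketched in simplified form: score each directed link against a null model in which callers and callees match at random, and keep only links whose one-sided p-value clears the Bonferroni-corrected threshold. The Poisson null, test statistic, and data below are illustrative assumptions, not the authors' exact procedure:

```python
import math

def poisson_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu), via the complement of the CDF."""
    cdf = sum(math.exp(-mu) * mu ** i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

def bonferroni_validate(calls, alpha=0.01):
    """Keep directed links whose call count is unexpectedly high.

    calls: dict {(caller, callee): n_calls}. Null hypothesis: caller i and
    callee j connect at random, so the expected count is out_i * in_j / total.
    A link survives if its one-sided Poisson p-value is below alpha divided
    by the number of tested links (the Bonferroni correction).
    """
    total = sum(calls.values())
    out, inn = {}, {}
    for (i, j), n in calls.items():
        out[i] = out.get(i, 0) + n
        inn[j] = inn.get(j, 0) + n
    threshold = alpha / len(calls)               # Bonferroni-corrected level
    kept = {}
    for (i, j), n in calls.items():
        mu = out[i] * inn[j] / total             # expected count under the null
        if poisson_sf(n, mu) < threshold:
            kept[(i, j)] = n
    return kept

# Invented toy data: two heavy reciprocal links and two incidental calls.
calls = {("a", "b"): 40, ("b", "a"): 38, ("a", "c"): 1, ("c", "b"): 1}
validated = bonferroni_validate(calls)
assert ("a", "b") in validated and ("a", "c") not in validated
```

    The surviving links form the "Bonferroni network" of the abstract; incidental contacts, whose counts are consistent with random matching, are filtered out.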

  11. Retrieval and Validation of Zenith and Slant Path Delays From the Irish GPS Network

    NASA Astrophysics Data System (ADS)

    Hanafin, Jennifer; Jennings, S. Gerard; O'Dowd, Colin; McGrath, Ray; Whelan, Eoin

    2010-05-01

    Retrieval of atmospheric integrated water vapour (IWV) from ground-based GPS receivers and provision of this data product for meteorological applications has been the focus of a number of Europe-wide networks and projects, most recently the EUMETNET GPS water vapour programme. The results presented here are from a project to provide such information about the state of the atmosphere around Ireland for climate monitoring and improved numerical weather prediction. Two geodetic reference GPS receivers have been deployed at Valentia Observatory in Co. Kerry and Mace Head Atmospheric Research Station in Co. Galway, Ireland. These two receivers supplement the existing Ordnance Survey Ireland active network of 17 permanent ground-based receivers. A system to retrieve column-integrated atmospheric water vapour from the data provided by this network has been developed, based on the GPS Analysis at MIT (GAMIT) software package. The data quality of the zenith retrievals has been assessed using co-located radiosondes at the Valentia site and observations from a microwave profiling radiometer at the Mace Head site. Validation of the slant path retrievals requires a numerical weather prediction model and HIRLAM (High-Resolution Limited Area Model) version 7.2, the current operational forecast model in use at Met Éireann for the region, has been used for this validation work. Results from the data processing and comparisons with the independent observations and model will be presented.

  12. Fracture network created by 3D printer and its validation using CT images

    NASA Astrophysics Data System (ADS)

    Suzuki, A.; Watanabe, N.; Li, K.; Horne, R. N.

    2017-12-01

    Understanding flow mechanisms in fractured media is essential for geoscientific research and geological development industries. This study used 3D printed fracture networks in order to control the properties of fracture distributions inside the sample. The accuracy and appropriateness of creating samples by the 3D printer were investigated using an X-ray CT scanner. The CT scan images suggest that the 3D printer is able to reproduce complex three-dimensional spatial distributions of fracture networks. Use of hexane after printing was found to be an effective way to remove wax during post-treatment. Local permeability was obtained by the cubic law and used to calculate the global mean. The experimental value of the permeability was between the arithmetic and geometric means of the numerical results, which is consistent with conventional studies. This methodology based on 3D printed fracture networks can help validate existing flow modeling and numerical methods.
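    The local-permeability step follows the cubic law for a parallel-plate fracture, under which the equivalent permeability of an aperture b is b²/12 (the "cubic" refers to volumetric flow scaling as b³). A sketch with an invented aperture map, showing how the arithmetic and geometric means bracket the estimate as the abstract describes:

```python
import numpy as np

def cubic_law_permeability(aperture):
    """Local fracture permeability from the cubic law: k = b^2 / 12."""
    return aperture ** 2 / 12.0

# Hypothetical aperture map (metres) for a printed fracture sample.
rng = np.random.default_rng(2)
b = rng.lognormal(mean=np.log(1e-4), sigma=0.5, size=(50, 50))
k = cubic_law_permeability(b)

k_arith = k.mean()                        # arithmetic mean: upper bound
k_geom = np.exp(np.log(k).mean())         # geometric mean: lower bound
assert k_geom <= k_arith                  # by the AM-GM inequality
```

    For layered or heterogeneous media, the effective permeability generally falls between these two means, which is why the experimental value landing in that interval is read as consistency with conventional studies.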

  13. Validation of a metabolic network for Saccharomyces cerevisiae using mixed substrate studies.

    PubMed

    Vanrolleghem, P A; de Jong-Gubbels, P; van Gulik, W M; Pronk, J T; van Dijken, J P; Heijnen, S

    1996-01-01

    Setting up a metabolic network model for respiratory growth of Saccharomyces cerevisiae requires the estimation of only two (energetic) stoichiometric parameters: (1) the operational PO ratio and (2) a growth-related maintenance factor k. It is shown, both theoretically and practically, how chemostat cultivations with different mixtures of two substrates allow unique values to be given to these unknowns of the proposed metabolic model. For the yeast and model considered, an effective PO ratio of 1.09 mol of ATP/mol of O (95% confidence interval 1.07-1.11) and a k factor of 0.415 mol of ATP/C-mol of biomass (0.385-0.445) were obtained from biomass substrate yield data on glucose/ethanol mixtures. Symbolic manipulation software proved very valuable in this study as it supported the proof of theoretical identifiability and significantly reduced the necessary computations for parameter estimation. In the transition from 100% glucose to 100% ethanol in the feed, four metabolic regimes occur. Switching between these regimes is determined by cessation of an irreversible reaction and initiation of an alternative reaction. Metabolic network predictions of these metabolic switches compared well with activity measurements of key enzymes. As a second validation of the network, the biomass yield of S. cerevisiae on acetate was also compared to the network prediction. An excellent agreement was found for a network in which acetate transport was modeled with a proton symport, while passive diffusion of acetate gave significantly higher yield predictions.

  14. A Ground Validation Network for the Global Precipitation Measurement Mission

    NASA Technical Reports Server (NTRS)

    Schwaller, Mathew R.; Morris, K. Robert

    2011-01-01

    A prototype Validation Network (VN) is currently operating as part of the Ground Validation System for NASA's Global Precipitation Measurement (GPM) mission. The VN supports precipitation retrieval algorithm development in the GPM prelaunch era. Postlaunch, the VN will be used to validate GPM spacecraft instrument measurements and retrieved precipitation data products. The period of record for the VN prototype starts on 8 August 2006 and runs to the present day. The VN database includes spacecraft data from the Tropical Rainfall Measuring Mission (TRMM) precipitation radar (PR) and coincident ground radar (GR) data from operational meteorological networks in the United States, Australia, Korea, and the Kwajalein Atoll in the Marshall Islands. Satellite and ground radar data products are collected whenever the PR satellite track crosses within 200 km of a VN ground radar, and these data are stored permanently in the VN database. VN products are generated from coincident PR and GR observations when a significant rain event occurs. The VN algorithm matches PR and GR radar data (including retrieved precipitation data in the case of the PR) by calculating averages of PR reflectivity (both raw and attenuation corrected) and rain rate, and GR reflectivity at the geometric intersection of the PR rays with the individual GR elevation sweeps. The algorithm thus averages the minimum PR and GR sample volumes needed to "matchup" the spatially coincident PR and GR data types. The result of this technique is a set of vertical profiles for a given rainfall event, with coincident PR and GR samples matched at specified heights throughout the profile. VN data can be used to validate satellite measurements and to track ground radar calibration over time. 
A comparison of matched TRMM PR and GR radar reflectivity factor data found a remarkably small difference between the PR and GR radar reflectivity factor averaged over this period of record in stratiform and convective rain cases when
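    One practical detail in averaging the matched reflectivity samples, stated here as a common radar convention rather than the VN algorithm's exact code, is that reflectivity in dBZ must be converted to linear units before averaging and converted back afterwards, since dBZ is logarithmic:

```python
import numpy as np

def mean_dbz(dbz_samples):
    """Average radar reflectivity correctly: convert dBZ to linear Z
    (mm^6 m^-3), average, then convert back to dBZ."""
    z = 10.0 ** (np.asarray(dbz_samples) / 10.0)
    return 10.0 * np.log10(z.mean())

samples = [30.0, 40.0]                        # dBZ values in one matched volume
# Averaging in dB space would give 35.0 dBZ; the linear-space average is higher.
assert mean_dbz(samples) > np.mean(samples)
```

    Averaging directly in dB space would systematically underweight the strongest echoes in a volume, biasing any PR-versus-GR comparison.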

  15. Creating, generating and comparing random network models with NetworkRandomizer.

    PubMed

    Tosadori, Gabriele; Bestvina, Ivan; Spoto, Fausto; Laudanna, Carlo; Scardoni, Giovanni

    2016-01-01

    Biological networks are becoming a fundamental tool for the investigation of high-throughput data in several fields of biology and biotechnology. With the increasing amount of information, network-based models are gaining more and more interest and new techniques are required in order to mine the information and to validate the results. To fill the validation gap we present an app for the Cytoscape platform that creates randomised networks and randomises existing, real networks. Since tools for performing such operations are lacking, our app enables researchers to exploit different, well-known random network models as benchmarks for validating real, biological datasets. We also propose a novel methodology for creating random weighted networks, i.e. the multiplication algorithm, starting from real, quantitative data. Finally, the app provides a statistical tool that compares real versus randomly computed attributes, in order to validate the numerical findings. In summary, our app aims at establishing a standardised methodology for the validation of results in the context of the Cytoscape platform.
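    A minimal sketch of one such null model, the degree-preserving double edge swap, illustrates the kind of randomization such a benchmark relies on. This is a generic implementation for directed edge lists, not the app's code:

```python
import random

def degree_preserving_rewire(edges, n_swaps, seed=0):
    """Randomize a directed edge list while keeping every node's
    in- and out-degree fixed (the classic double-edge-swap null model).

    Two edges (a, b) and (c, d) are rewired to (a, d) and (c, b);
    swaps that would create self-loops or duplicate edges are rejected.
    """
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = set(edges)
    done, tries = 0, 0
    while done < n_swaps and tries < 100 * n_swaps:  # bounded rejection sampling
        tries += 1
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue                                 # shared node: self-loop risk
        if (a, d) in edge_set or (c, b) in edge_set:
            continue                                 # would duplicate an edge
        edge_set -= {(a, b), (c, d)}
        edge_set |= {(a, d), (c, b)}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges

real = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
randomized = degree_preserving_rewire(real, n_swaps=10)

def out_in_degrees(edge_list):
    out, inn = {}, {}
    for a, b in edge_list:
        out[a] = out.get(a, 0) + 1
        inn[b] = inn.get(b, 0) + 1
    return out, inn

assert out_in_degrees(real) == out_in_degrees(randomized)
```

    Comparing an observed network attribute against its distribution over many such rewired replicas is the standard way to decide whether the attribute is a genuine feature or a consequence of the degree sequence alone.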

  16. Pedestrian Validation in Infrared Images by Means of Active Contours and Neural Networks

    DTIC Science & Technology

    2010-01-01

    Research Article: Pedestrian Validation in Infrared Images by Means of Active Contours and Neural Networks. Massimo Bertozzi, Pietro Cerri, Mirko Felisa (VisLab, Dipartimento di Ingegneria dell'Informazione, Università di Parma, 43124 Parma, Italy); Stefano Ghidoni (IAS-Lab, Dipartimento di Ingegneria dell'Informazione, Università di Padova, 35131 Padova, Italy); Michael Del Rose (Vetronics Research Center, U.S. Army TARDEC, MI 48397, USA)

  17. CTD2 Dashboard: a searchable web interface to connect validated results from the Cancer Target Discovery and Development Network

    PubMed Central

    Aksoy, Bülent Arman; Dančík, Vlado; Smith, Kenneth; Mazerik, Jessica N.; Ji, Zhou; Gross, Benjamin; Nikolova, Olga; Jaber, Nadia; Califano, Andrea; Schreiber, Stuart L.; Gerhard, Daniela S.; Hermida, Leandro C.; Jagu, Subhashini

    2017-01-01

    Abstract The Cancer Target Discovery and Development (CTD2) Network aims to use functional genomics to accelerate the translation of high-throughput and high-content genomic and small-molecule data towards use in precision oncology. As part of this goal, and to share its conclusions with the research community, the Network developed the ‘CTD2 Dashboard’ [https://ctd2-dashboard.nci.nih.gov/], which compiles CTD2 Network-generated conclusions, termed ‘observations’, associated with experimental entities, collected by its member groups (‘Centers’). Any researcher interested in learning about a given gene, protein, or compound (a ‘subject’) studied by the Network can come to the CTD2 Dashboard to quickly and easily find, review, and understand Network-generated experimental results. In particular, the Dashboard allows visitors to connect experiments about the same target, biomarker, etc., carried out by multiple Centers in the Network. The Dashboard’s unique knowledge representation allows information to be compiled around a subject, so as to become greater than the sum of the individual contributions. The CTD2 Network has broadly defined levels of validation for evidence (‘Tiers’) pertaining to a particular finding, and the CTD2 Dashboard uses these Tiers to indicate the extent to which results have been validated. Researchers can use the Network’s insights and tools to develop a new hypothesis or confirm existing hypotheses, in turn advancing the findings towards clinical applications. Database URL: https://ctd2-dashboard.nci.nih.gov/ PMID:29220450

  18. The meaning and validation of social support networks for close family of persons with advanced cancer.

    PubMed

    Sjolander, Catarina; Ahlstrom, Gerd

    2012-09-17

    To strengthen the mental well-being of close family of persons newly diagnosed as having cancer, it is necessary to acquire a greater understanding of their experiences of social support networks, so as to better assess what resources are available to them from such networks and what professional measures are required. The main aim of the present study was to explore the meaning of these networks for close family of adult persons in the early stage of treatment for advanced lung or gastrointestinal cancer. An additional aim was to validate the study's empirical findings by means of the Finfgeld-Connett conceptual model for social support. The intention was to investigate whether these findings were in accordance with previous research in nursing. Seventeen family members with a relative who 8-14 weeks earlier had been diagnosed as having lung or gastrointestinal cancer were interviewed. The data were subjected to qualitative latent content analysis and validated by means of identifying antecedents and critical attributes. The meaning or main attribute of the social support network was expressed by the theme Confirmation through togetherness, based on six subthemes covering emotional and, to a lesser extent, instrumental support. Confirmation through togetherness derived principally from information, understanding, encouragement, involvement and spiritual community. Three subthemes were identified as the antecedents to social support: Need of support, Desire for a deeper relationship with relatives, Network to turn to. Social support involves reciprocal exchange of verbal and non-verbal information provided mainly by lay persons. The study provides knowledge of the antecedents and attributes of social support networks, particularly from the perspective of close family of adult persons with advanced lung or gastrointestinal cancer. There is a need for measurement instruments that could encourage nurses and other health-care professionals to focus on family members

  19. The meaning and validation of social support networks for close family of persons with advanced cancer

    PubMed Central

    2012-01-01

    Background: To strengthen the mental well-being of close family of persons newly diagnosed as having cancer, it is necessary to acquire a greater understanding of their experiences of social support networks, so as to better assess what resources are available to them from such networks and what professional measures are required. The main aim of the present study was to explore the meaning of these networks for close family of adult persons in the early stage of treatment for advanced lung or gastrointestinal cancer. An additional aim was to validate the study’s empirical findings by means of the Finfgeld-Connett conceptual model for social support. The intention was to investigate whether these findings were in accordance with previous research in nursing. Methods: Seventeen family members with a relative who 8–14 weeks earlier had been diagnosed as having lung or gastrointestinal cancer were interviewed. The data were subjected to qualitative latent content analysis and validated by means of identifying antecedents and critical attributes. Results: The meaning or main attribute of the social support network was expressed by the theme Confirmation through togetherness, based on six subthemes covering emotional and, to a lesser extent, instrumental support. Confirmation through togetherness derived principally from information, understanding, encouragement, involvement and spiritual community. Three subthemes were identified as the antecedents to social support: Need of support, Desire for a deeper relationship with relatives, Network to turn to. Social support involves reciprocal exchange of verbal and non-verbal information provided mainly by lay persons. Conclusions: The study provides knowledge of the antecedents and attributes of social support networks, particularly from the perspective of close family of adult persons with advanced lung or gastrointestinal cancer. There is a need for measurement instruments that could encourage nurses and other health

  20. Validation of in situ networks via field sampling: case study in the South Fork Experimental Watershed

    USDA-ARS?s Scientific Manuscript database

    The calibration and validation of soil moisture remote sensing products is complicated by the logistics of installing a soil moisture network for a long term period in an active landscape. Therefore, these stations are located along field boundaries or in non-representative sites with regards to so...

  1. Social networking addiction, attachment style, and validation of the Italian version of the Bergen Social Media Addiction Scale

    PubMed Central

    Monacis, Lucia; de Palo, Valeria; Griffiths, Mark D.; Sinatra, Maria

    2017-01-01

    Aim: Research into social networking addiction has greatly increased over the last decade. However, the number of validated instruments assessing addiction to social networking sites (SNSs) remains few, and none have been validated in the Italian language. Consequently, this study tested the psychometric properties of the Italian version of the Bergen Social Media Addiction Scale (BSMAS), as well as providing empirical data concerning the relationship between attachment styles and SNS addiction. Methods: A total of 769 participants were recruited to this study. Confirmatory factor analysis (CFA) and multigroup analyses were applied to assess construct validity of the Italian version of the BSMAS. Reliability analyses comprised the average variance extracted, the standard error of measurement, and the factor determinacy coefficient. Results: Indices obtained from the CFA showed the Italian version of the BSMAS to have an excellent fit of the model to the data, thus confirming the single-factor structure of the instrument. Measurement invariance was established at configural, metric, and strict invariances across age groups, and at configural and metric levels across gender groups. Internal consistency was supported by several indicators. In addition, the theoretical associations between SNS addiction and attachment styles were generally supported. Conclusion: This study provides evidence that the Italian version of the BSMAS is a psychometrically robust tool that can be used in future Italian research into social networking addiction. PMID:28494648

  2. Social networking addiction, attachment style, and validation of the Italian version of the Bergen Social Media Addiction Scale.

    PubMed

    Monacis, Lucia; de Palo, Valeria; Griffiths, Mark D; Sinatra, Maria

    2017-06-01

    Aim: Research into social networking addiction has greatly increased over the last decade. However, the number of validated instruments assessing addiction to social networking sites (SNSs) remains few, and none have been validated in the Italian language. Consequently, this study tested the psychometric properties of the Italian version of the Bergen Social Media Addiction Scale (BSMAS), as well as providing empirical data concerning the relationship between attachment styles and SNS addiction. Methods: A total of 769 participants were recruited to this study. Confirmatory factor analysis (CFA) and multigroup analyses were applied to assess construct validity of the Italian version of the BSMAS. Reliability analyses comprised the average variance extracted, the standard error of measurement, and the factor determinacy coefficient. Results: Indices obtained from the CFA showed the Italian version of the BSMAS to have an excellent fit of the model to the data, thus confirming the single-factor structure of the instrument. Measurement invariance was established at configural, metric, and strict invariances across age groups, and at configural and metric levels across gender groups. Internal consistency was supported by several indicators. In addition, the theoretical associations between SNS addiction and attachment styles were generally supported. Conclusion: This study provides evidence that the Italian version of the BSMAS is a psychometrically robust tool that can be used in future Italian research into social networking addiction.

  3. Validation of intensive care unit-acquired infection surveillance in the Italian SPIN-UTI network.

    PubMed

    Masia, M D; Barchitta, M; Liperi, G; Cantù, A P; Alliata, E; Auxilia, F; Torregrossa, V; Mura, I; Agodi, A

    2010-10-01

    Validity is one of the most critical factors concerning surveillance of nosocomial infections (NIs). This article describes the first validation study of the Italian Nosocomial Infections Surveillance in Intensive Care Units (ICUs) project (SPIN-UTI) surveillance data. The objective was to validate infection data and thus to determine the sensitivity, specificity, and positive and negative predictive values of NI data reported on patients in the ICUs participating in the SPIN-UTI network. A validation study was performed at the end of the surveillance period. All medical records, including all clinical and laboratory data, were reviewed retrospectively by the trained physicians of the validation team, and the positive predictive value (PPV), negative predictive value (NPV), sensitivity, and specificity were calculated. Eight ICUs (16.3%) were randomly chosen from all 49 SPIN-UTI ICUs for the validation study. In total, the validation team reviewed 832 patient charts (27.3% of the SPIN-UTI patients). The PPV was 83.5% and the NPV was 97.3%. The overall sensitivity was 82.3% and the overall specificity was 97.2%. Over- and under-reporting of NIs were related to misinterpretation of the case definitions and deviations from the protocol despite previous training and instructions. The results of this study are useful for identifying methodological problems within a surveillance system and have been used to plan retraining for surveillance personnel and to design and implement the second phase of the SPIN-UTI project.
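
    The four validity measures above follow from a 2x2 cross-tabulation of surveillance reports against the gold-standard chart review. A minimal sketch in Python, with hypothetical counts (not the SPIN-UTI data):

```python
def surveillance_validity(tp, fp, fn, tn):
    """Validity measures for surveillance reports vs. gold-standard review."""
    sensitivity = tp / (tp + fn)   # reported infections among true infections
    specificity = tn / (tn + fp)   # non-reports among true non-infections
    ppv = tp / (tp + fp)           # true infections among reported ones
    npv = tn / (tn + fn)           # true non-infections among non-reports
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for illustration only
sens, spec, ppv, npv = surveillance_validity(tp=80, fp=16, fn=17, tn=719)
```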

  4. Assessment of the coordination of integrated health service delivery networks by the primary health care: COPAS questionnaire validation in the Brazilian context.

    PubMed

    Rodrigues, Ludmila Barbosa Bandeira; Dos Santos, Claudia Benedita; Goyatá, Sueli Leiko Takamatsu; Popolin, Marcela Paschoal; Yamamura, Mellina; Deon, Keila Christiane; Lapão, Luis Miguel Veles; Santos Neto, Marcelino; Uchoa, Severina Alice da Costa; Arcêncio, Ricardo Alexandre

    2015-07-22

    Health systems organized as networks and coordinated by Primary Health Care (PHC) may contribute to the improvement of clinical care, sanitary conditions, and patient satisfaction, and to the reduction of local budget expenditures. The aim of this study was to adapt and validate a questionnaire - COPAS - to assess the coordination of Integrated Health Service Delivery Networks by Primary Health Care. A cross-sectional approach was used. The population was pooled from Family Health Strategy healthcare professionals of the Alfenas region (Minas Gerais, Brazil). Data collection was performed from August to October 2013. The results were checked for the presence of floor and ceiling effects, and internal consistency was measured through Cronbach's alpha. Construct validity was verified through convergent and discriminant values following Multitrait-Multimethod (MTMM) analysis. Floor and ceiling effects were absent. The internal consistency of the instrument was satisfactory, as was the convergent validity, with a few correlations lower than 0.30. The discriminant validity values of the majority of items, with respect to their own dimension, were found to be higher or significantly higher than their correlations with the dimensions to which they did not belong. The results showed that the COPAS instrument has satisfactory initial psychometric properties and may be used by healthcare managers and workers to assess PHC coordination performance within the Integrated Health Service Delivery Network.
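
    Cronbach's alpha, used above for internal consistency, can be computed directly from item-level scores. A minimal sketch with toy data (not the COPAS responses):

```python
def cronbach_alpha(items):
    """items: one list of respondent scores per questionnaire item."""
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(sample_var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / sample_var(totals))

# Two perfectly correlated toy items with unequal variances
alpha = cronbach_alpha([[1, 2, 3], [2, 4, 6]])  # 8/9, about 0.89
```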

  5. Data Visualization and Analysis Tools for the Global Precipitation Measurement (GPM) Validation Network

    NASA Technical Reports Server (NTRS)

    Morris, Kenneth R.; Schwaller, Mathew

    2010-01-01

    The Validation Network (VN) prototype for the Global Precipitation Measurement (GPM) Mission compares data from the Tropical Rainfall Measuring Mission (TRMM) satellite Precipitation Radar (PR) to similar measurements from U.S. and international operational weather radars. This prototype is a major component of the GPM Ground Validation System (GVS). The VN provides a means for the precipitation measurement community to identify and resolve significant discrepancies between the ground radar (GR) observations and similar satellite observations. The VN prototype is based on research results and computer code described by Anagnostou et al. (2001), Bolen and Chandrasekar (2000), and Liao et al. (2001), and has previously been described by Morris et al. (2007). Morris and Schwaller (2009) describe the PR-GR volume-matching algorithm used to create the VN match-up data set for the comparisons. This paper describes software tools that have been developed for visualization and statistical analysis of the original and volume-matched PR and GR data.

  6. Development and Validation of a Deep Neural Network Model for Prediction of Postoperative In-hospital Mortality.

    PubMed

    Lee, Christine K; Hofer, Ira; Gabel, Eilon; Baldi, Pierre; Cannesson, Maxime

    2018-04-17

    The authors tested the hypothesis that deep neural networks trained on intraoperative features can predict postoperative in-hospital mortality. The data used to train and validate the algorithm consist of 59,985 patients with 87 features extracted at the end of surgery. Feed-forward networks with a logistic output were trained using stochastic gradient descent with momentum. The deep neural networks were trained on 80% of the data, with 20% reserved for testing. The authors assessed improvement of the deep neural network by adding American Society of Anesthesiologists (ASA) Physical Status Classification and robustness of the deep neural network to a reduced feature set. The networks were then compared to ASA Physical Status, logistic regression, and other published clinical scores including the Surgical Apgar, Preoperative Score to Predict Postoperative Mortality, Risk Quantification Index, and the Risk Stratification Index. In-hospital mortality in the training and test sets was 0.81% and 0.73%, respectively. The deep neural network with a reduced feature set and ASA Physical Status classification had the highest area under the receiver operating characteristic curve, 0.91 (95% CI, 0.88 to 0.93). The highest logistic regression area under the curve was found with a reduced feature set and ASA Physical Status (0.90, 95% CI, 0.87 to 0.93). The Risk Stratification Index had the highest area under the receiver operating characteristic curve, at 0.97 (95% CI, 0.94 to 0.99). Deep neural networks can predict in-hospital mortality based on automatically extractable intraoperative data, but are not (yet) superior to existing methods.
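
    The ingredients named above (a feed-forward network with a logistic output, gradient descent with momentum, evaluation by area under the ROC curve) can be illustrated at miniature scale. Everything below is invented for the sketch: a toy 5-feature dataset, a single hidden layer, full-batch rather than stochastic updates, and arbitrary hyperparameters, with no relation to the study's 87-feature data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for intraoperative features: the label depends only on the
# sum of the first two features.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# One hidden layer with a logistic output, trained by (full-batch)
# gradient descent with momentum.
W1 = rng.normal(scale=0.5, size=(5, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]
lr, mom = 0.1, 0.9

for _ in range(300):
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # logistic output
    err = (p - y[:, None]) / len(X)           # dLoss/dLogit for log-loss
    dh = (err @ W2.T) * (1 - h ** 2)          # backprop through tanh
    grads = [X.T @ dh, dh.sum(0), h.T @ err, err.sum(0)]
    for prm, v, g in zip((W1, b1, W2, b2), vel, grads):
        v *= mom; v -= lr * g; prm += v       # momentum update

def auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = (1 / (1 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2)))).ravel()
train_auc = auc(y, scores)   # training-set AUC of the toy network
```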

  7. A standalone perfusion platform for drug testing and target validation in micro-vessel networks

    PubMed Central

    Zhang, Boyang; Peticone, Carlotta; Murthy, Shashi K.; Radisic, Milica

    2013-01-01

    Studying the effects of pharmacological agents on human endothelium includes the routine use of cell monolayers cultivated in multi-well plates. This configuration fails to recapitulate the complex architecture of vascular networks in vivo and does not capture the relationship between shear stress (i.e. flow) experienced by the cells and dose of the applied pharmacological agents. Microfluidic platforms have been applied extensively to create vascular systems in vitro; however, they rely on bulky external hardware to operate, which hinders the wide application of microfluidic chips by non-microfluidic experts. Here, we have developed a standalone perfusion platform where multiple devices were perfused at a time with a single miniaturized peristaltic pump. Using the platform, multiple micro-vessel networks containing three levels of branching structures were created by culturing endothelial cells within circular micro-channel networks mimicking the geometrical configuration of natural blood vessels. To demonstrate the feasibility of our platform for drug testing and validation assays, a drug-induced nitric oxide assay was performed on the engineered micro-vessel network using a panel of vaso-active drugs (acetylcholine, phenylephrine, atorvastatin, and sildenafil), showing both flow and drug dose dependent responses. The interactive effects between flow and drug dose for sildenafil could not be captured by a simple straight rectangular channel coated with endothelial cells, but they were captured in a more physiological branching circular network. A monocyte adhesion assay was also demonstrated with and without stimulation by an inflammatory cytokine, tumor necrosis factor-α. PMID:24404058

  8. A regional-scale network for geoid monitoring and satellite gravimetry validation

    NASA Astrophysics Data System (ADS)

    Winester, D.; Pool, D.; Kennedy, J.

    2010-12-01

    In the past two decades, improved measurements of acceleration due to gravity have allowed for accurate detection of temporal gravity change. Terrestrial absolute gravimeters (for example, Micro-g LaCoste FG5 or A-10) can sense changes of gravity induced by elevation or mass changes, including local effects that may bias regional studies. Satellite instrumentation (e.g. GRACE) can detect large-scale mass changes on a regular basis. However, the Nyquist wave number for satellite observations is often much too small for the size of regional studies. Also, satellites are limited by their deployment lifetime. Both techniques are used to (in)validate change models generated from other geophysical observations, including water storage (underground and glacial), geoid definition, isostatic adjustments, and tectonic (magmatic and faulting) activity. The gap between terrestrial and satellite gravity observations (and between satellite missions) might be bridged by developing a terrestrial network of sites of various observation techniques that define a representative sample of a given regional study area. This information could then be statistically extrapolated to the extent of the region. The Southern High Plains Aquifer is such a region, since it has widespread relatively uniform geology, has relatively flat topography, and is well monitored for groundwater levels and soil moisture. Each site would have extensive instrumentation for monitoring, at a minimum, gravity (periodic and continuous) using absolute and tidal gravimeters, soil moisture, precipitation, depths to water in wells, evapotranspiration, air pressure, and land surface (GPS). Where possible, the network would build upon existing data-collection infrastructure. Preferably, the region would also have seismic tomography or crustal seismic reflection observations to characterize Moho-depth mass changes and have regional Bouguer anomaly mapping. In addition to information on local hydrology and geology, data

  9. Verification and Validation of KBS with Neural Network Components

    NASA Technical Reports Server (NTRS)

    Wen, Wu; Callahan, John

    1996-01-01

    Artificial Neural Networks (ANNs) play an important role in developing robust Knowledge Based Systems (KBS). The ANN-based components used in these systems learn to give appropriate predictions through training with correct input-output data patterns. Unlike a traditional KBS that depends on a rule database and a production engine, an ANN-based system mimics the decisions of an expert without explicitly formulating if-then rules. In fact, ANNs demonstrate their superiority when such if-then rules are hard for a human expert to generate. Verification of a traditional knowledge-based system is based on proof of the consistency and completeness of the rule knowledge base and the correctness of the production engine. These techniques, however, cannot be directly applied to ANN-based components. In this position paper, we propose a verification and validation procedure for KBS with ANN-based components. The essence of the procedure is to obtain an accurate system specification through incremental modification of the specifications using an ANN rule-extraction algorithm.

  10. Genexpi: a toolset for identifying regulons and validating gene regulatory networks using time-course expression data.

    PubMed

    Modrák, Martin; Vohradský, Jiří

    2018-04-13

    Identifying regulons of sigma factors is a vital subtask of gene network inference. Integrating multiple sources of data is essential for correct identification of regulons and complete gene regulatory networks. Time series of expression data measured with microarrays or RNA-seq combined with static binding experiments (e.g., ChIP-seq) or literature mining may be used for inference of sigma factor regulatory networks. We introduce Genexpi: a tool to identify sigma factors by combining candidates obtained from ChIP experiments or literature mining with time-course gene expression data. While Genexpi can be used to infer other types of regulatory interactions, it was designed and validated on real biological data from bacterial regulons. In this paper, we put primary focus on CyGenexpi: a plugin integrating Genexpi with the Cytoscape software for ease of use. As a part of this effort, a plugin for handling time series data in Cytoscape called CyDataseries has been developed and made available. Genexpi is also available as a standalone command line tool and an R package. Genexpi is a useful part of gene network inference toolbox. It provides meaningful information about the composition of regulons and delivers biologically interpretable results.
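
    As a deliberately naive illustration of combining candidate lists with time-course expression data (this is not the inference model Genexpi implements), candidate targets whose expression profile fails to track the regulator's can be screened out by correlation. All gene names and profiles below are made up:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length time courses."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def screen_candidates(regulator, candidates, threshold=0.8):
    """Keep candidates whose time course tracks (or mirrors) the regulator's."""
    scores = {name: pearson(regulator, prof) for name, prof in candidates.items()}
    return {name: r for name, r in scores.items() if abs(r) >= threshold}
```

A real regulon-inference tool fits a regulatory model to the profiles rather than thresholding a correlation; this pre-filter only shows why time-course data constrains the candidate set.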

  11. Development and field validation of a community-engaged particulate matter air quality monitoring network in Imperial, California, USA.

    PubMed

    Carvlin, Graeme N; Lugo, Humberto; Olmedo, Luis; Bejarano, Ester; Wilkie, Alexa; Meltzer, Dan; Wong, Michelle; King, Galatea; Northcross, Amanda; Jerrett, Michael; English, Paul B; Hammond, Donald; Seto, Edmund

    2017-12-01

    The Imperial County Community Air Monitoring Network was developed as part of a community-engaged research study to provide real-time particulate matter (PM) air quality information at a high spatial resolution in Imperial County, California. The network augmented the few existing regulatory monitors and increased monitoring near susceptible populations. Monitors were both calibrated and field validated, a key component of evaluating the quality of the data produced by the community monitoring network. This paper examines the performance of a customized version of the low-cost Dylos optical particle counter used in the community air monitors compared with both PM2.5 and PM10 (particulate matter with aerodynamic diameters <2.5 and <10 μm, respectively) federal equivalent method (FEM) beta-attenuation monitors (BAMs) and federal reference method (FRM) gravimetric filters at a collocation site in the study area. A conversion equation was developed that estimates particle mass concentrations from the native Dylos particle counts, taking into account relative humidity. The R² for converted hourly averaged Dylos mass measurements versus a PM2.5 BAM was 0.79 and that versus a PM10 BAM was 0.78. The performance of the conversion equation was evaluated at six other sites with collocated PM2.5 environmental beta-attenuation monitors (EBAMs) located throughout Imperial County. The agreement of the Dylos with the EBAMs was moderate to high (R² = 0.35-0.81). The performance of low-cost air quality sensors in community networks is currently not well documented. This paper provides a methodology for quantifying the performance of a next-generation Dylos PM sensor used in the Imperial County Community Air Monitoring Network. This air quality network provides data at a much finer spatial and temporal resolution than has previously been possible with government monitoring efforts. Once calibrated and validated, these high-resolution data may provide more information on
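
    A conversion of this kind can be sketched as a least-squares fit of particle counts plus a relative-humidity term against a collocated reference monitor. The functional form and the hygroscopic-growth term below are assumptions chosen for illustration, not the equation derived in the study:

```python
import numpy as np

def design(counts, rh):
    """Design matrix: intercept, raw counts, and an RH growth term.
    The 1/(100 - RH) style term is one common way to model hygroscopic
    particle growth; it is an assumption here, not the study's equation."""
    return np.column_stack([np.ones_like(counts), counts, rh / (100.0 - rh)])

def fit_conversion(counts, rh, ref_mass):
    """Least-squares fit of sensor counts + RH against a reference (e.g. a BAM)."""
    beta, *_ = np.linalg.lstsq(design(counts, rh), ref_mass, rcond=None)
    return beta

def to_mass(beta, counts, rh):
    """Apply the fitted conversion to new counts and RH readings."""
    return design(counts, rh) @ beta
```

In practice the fit would be evaluated on held-out collocation data (as the study does with R² against the EBAMs), not on the training period itself.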

  12. Validation of Malayalam Version of National Comprehensive Cancer Network Distress Thermometer and its Feasibility in Oncology Patients.

    PubMed

    Biji, M S; Dessai, Sampada; Sindhu, N; Aravind, Sithara; Satheesan, B

    2018-01-01

    This study was designed to translate and validate the National Comprehensive Cancer Network (NCCN) distress thermometer (DT) in the regional language "Malayalam" and to assess the feasibility of using it in our patients. (1) To translate and validate the NCCN DT. (2) To study the feasibility of using the validated Malayalam-translated DT at Malabar Cancer Centre. This is a single-arm prospective observational study. The study was conducted at the authors' institution between December 8, 2015, and January 20, 2016, in the Department of Cancer Palliative Medicine, and was carried out in two phases. In Phase 1, the linguistic validation of the NCCN DT was done. In Phase 2, the feasibility, face validity, and utility of the translated NCCN DT were assessed in accordance with the QQ-10 tool. SPSS version 16 (SPSS Inc., Chicago) was used for analysis. Ten patients were enrolled in Phase 2. The median age was 51.5 years and 40% of patients were male. All patients had completed at least basic education up to the primary level. The primary site of cancer was heterogeneous. The NCCN DT completion rate was 100%. The face validity, utility, reliability, and feasibility were 100%, 100%, 100%, and 90%, respectively. It can be concluded that the validated Malayalam DT has high face validity and utility, and is feasible for use.

  13. Spectral characteristics of the Hellenic vertical network - Validation over Central and Northern Greece using GOCE/GRACE global geopotential models

    NASA Astrophysics Data System (ADS)

    Andritsanos, Vassilios D.; Vergos, George S.; Grigoriadis, Vassilios N.; Pagounis, Vassilios; Tziavos, Ilias N.

    2014-05-01

    The Elevation project, funded by the action "Archimedes III - Funding of research groups in T.E.I." and co-financed by the E.U. (European Social Fund) and national funds under the Operational Program "Education and Lifelong Learning 2007-2013", aims mainly at the validation of the Hellenic vertical datum. This validation is carried out over two study areas, one in Central and one in Northern Greece. During the first stage of the validation process, satellite-only as well as combined satellite-terrestrial models of the Earth's geopotential are used. GOCE and GRACE satellite information is compared against recently measured GPS/levelling observations at specific benchmarks of the vertical network in Attiki (Central Greece) and Thessaloniki (Northern Greece). A spectral enhancement approach is followed where, given the GOCE/GRACE GGM truncation degree, EGM2008 is used to fill in the medium- and high-frequency content, along with RTM effects for the high and ultra-high part. The second stage is based on the localization of possible blunders in the vertical network using the spectral information derived previously. The undoubted accuracy of the contemporary global models in the low-frequency band leads to some initial conclusions about the consistency of the Hellenic vertical datum.

  14. Multicentre study for validation of the French addictovigilance network reports assessment tool

    PubMed Central

    Hardouin, Jean Benoit; Rousselet, Morgane; Gerardin, Marie; Guerlais, Marylène; Guillou, Morgane; Bronnec, Marie; Sébille, Véronique; Jolliet, Pascale

    2016-01-01

    Aims The French health authority (ANSM) is responsible for monitoring medicinal and other drug dependencies. To support these activities, the ANSM manages a network of 13 drug dependence evaluation and information centres (Centres d'Evaluation et d'Information sur la Pharmacodépendance - Addictovigilance - CEIP-A) throughout France. In 2006, the Nantes CEIP-A created a new tool called the EGAP (Echelle de GrAvité de la Pharmacodépendance - drug dependence severity scale) based on DSM-IV criteria. This tool allows the creation of a substance use profile that enables the drug dependence severity to be homogeneously quantified by assigning a score to each substance indicated in the reports from health professionals. This article describes the validation and psychometric properties of the drug dependence severity score obtained from the scale (ClinicalTrials.gov NCT01052675). Method The construct validity of the EGAP, the concurrent validity and discriminative ability of the EGAP score, the consistency of answers to EGAP items, and the internal consistency and inter-rater reliability of the EGAP score were assessed using statistical methods that are generally used for psychometric tests. Results The total EGAP score was a reliable and precise measure for evaluating drug dependence (Cronbach's alpha = 0.84; ASI correlation = 0.70; global ICC = 0.92). In addition to its good psychometric properties, the EGAP is a simple and efficient tool that can be easily specified on the official ANSM notification form. Conclusion The good psychometric properties of the total EGAP score justify its use for evaluating the severity of drug dependence. PMID:27302554

  15. Real-time sensor data validation

    NASA Technical Reports Server (NTRS)

    Bickmore, Timothy W.

    1994-01-01

    This report describes the status of an on-going effort to develop software capable of detecting sensor failures on rocket engines in real time. This software could be used in a rocket engine controller to prevent the erroneous shutdown of an engine due to sensor failures which would otherwise be interpreted as engine failures by the control software. The approach taken combines analytical redundancy with Bayesian belief networks to provide a solution which has well defined real-time characteristics and well-defined error rates. Analytical redundancy is a technique in which a sensor's value is predicted by using values from other sensors and known or empirically derived mathematical relations. A set of sensors and a set of relations among them form a network of cross-checks which can be used to periodically validate all of the sensors in the network. Bayesian belief networks provide a method of determining if each of the sensors in the network is valid, given the results of the cross-checks. This approach has been successfully demonstrated on the Technology Test Bed Engine at the NASA Marshall Space Flight Center. Current efforts are focused on extending the system to provide a validation capability for 100 sensors on the Space Shuttle Main Engine.
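
    The network of cross-checks can be illustrated with a toy pairwise consistency vote. This is a sketch of the analytical-redundancy step only; the actual system combines check results in a Bayesian belief network rather than a hard vote, and the sensor names and tolerance below are invented:

```python
import itertools

def cross_validate(readings, tol):
    """readings: dict sensor_name -> redundant estimate of one quantity.
    Each pair of estimates is cross-checked; a sensor that disagrees with
    most of its peers is flagged invalid."""
    fails = {name: 0 for name in readings}
    for a, b in itertools.combinations(readings, 2):
        if abs(readings[a] - readings[b]) > tol:
            fails[a] += 1
            fails[b] += 1
    quorum = len(readings) - 2  # tolerate disagreement with at most one peer
    return {name: count <= quorum for name, count in fails.items()}

status = cross_validate({"s1": 100.1, "s2": 99.8, "s3": 250.0}, tol=1.0)
# s3 disagrees with both peers and is flagged; s1 and s2 validate each other
```

In the real system the "estimates" come from analytical relations among different physical sensors (e.g. predicting one sensor's value from others), so a single failed check implicates a relation, not yet a specific sensor; that ambiguity is what the belief network resolves.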

  16. Validation workflow for a clinical Bayesian network model in multidisciplinary decision making in head and neck oncology treatment.

    PubMed

    Cypko, Mario A; Stoehr, Matthaeus; Kozniewski, Marcin; Druzdzel, Marek J; Dietz, Andreas; Berliner, Leonard; Lemke, Heinz U

    2017-11-01

    Oncological treatment is becoming increasingly complex, and therefore decision making in multidisciplinary teams is becoming the key activity in clinical pathways. The increased complexity is related to the number and variability of possible treatment decisions that may be relevant to a patient. In this paper, we describe validation of a multidisciplinary cancer treatment decision in the clinical domain of head and neck oncology. Probabilistic graphical models and corresponding inference algorithms, in the form of Bayesian networks (BNs), can support complex decision-making processes by providing mathematically reproducible and transparent advice. The quality of BN-based advice depends on the quality of the model. Therefore, it is vital to validate the model before it is applied in practice. For an example BN subnetwork of laryngeal cancer with 303 variables, we evaluated 66 patient records. To validate the model on this dataset, a validation workflow was applied in combination with quantitative and qualitative analyses. In the subsequent analyses, we observed four sources of imprecise predictions: incorrect data, incomplete patient data, outvoting of relevant observations, and an incorrect model. Finally, the four problems were solved by modifying the data and the model. The presented validation effort is related to the model complexity: for simpler models the validation workflow is the same, although it may require fewer validation methods. Validation success is related to the model's well-founded knowledge base. The remaining laryngeal cancer model may disclose additional sources of imprecise predictions.

  17. The Development and Validation of the Social Networking Experiences Questionnaire: A Measure of Adolescent Cyberbullying and Its Impact.

    PubMed

    Dredge, Rebecca; Gleeson, John; Garcia, Xochitl de la Piedad

    2015-01-01

    The measurement of cyberbullying has been marked by several inconsistencies that lead to difficulties in cross-study comparisons of the frequency of occurrence and the impact of cyberbullying. Consequently, the first aim of this study was to develop a measure of experience with, and impact of, cyberbullying victimization on social networking sites in adolescents. The second aim was to investigate the psychometric properties of a purpose-built measure (Social Networking Experiences Questionnaire [SNEQ]). Exploratory factor analysis on 253 adolescent social networking site users produced a six-factor model of impact. However, one factor was removed because of low internal consistency. Cronbach's alpha was higher than .76 for the victimization and remaining five impact subscales. Furthermore, correlation coefficients for the Victimization scale and related dimensions showed good construct validity. The utility of the SNEQ for victim support personnel, research, and cyberbullying education/prevention programs is discussed.

  18. A network model of genomic hormone interactions underlying dementia and its translational validation through serendipitous off-target effect

    PubMed Central

    2013-01-01

    Background While the majority of studies have focused on the association between sex hormones and dementia, emerging evidence supports the role of other hormone signals in increasing dementia risk. However, due to the lack of an integrated view on mechanistic interactions of hormone signaling pathways associated with dementia, molecular mechanisms through which hormones contribute to the increased risk of dementia has remained unclear and capacity of translating hormone signals to potential therapeutic and diagnostic applications in relation to dementia has been undervalued. Methods Using an integrative knowledge- and data-driven approach, a global hormone interaction network in the context of dementia was constructed, which was further filtered down to a model of convergent hormone signaling pathways. This model was evaluated for its biological and clinical relevance through pathway recovery test, evidence-based analysis, and biomarker-guided analysis. Translational validation of the model was performed using the proposed novel mechanism discovery approach based on ‘serendipitous off-target effects’. Results Our results reveal the existence of a well-connected hormone interaction network underlying dementia. Seven hormone signaling pathways converge at the core of the hormone interaction network, which are shown to be mechanistically linked to the risk of dementia. Amongst these pathways, estrogen signaling pathway takes the major part in the model and insulin signaling pathway is analyzed for its association to learning and memory functions. Validation of the model through serendipitous off-target effects suggests that hormone signaling pathways substantially contribute to the pathogenesis of dementia. Conclusions The integrated network model of hormone interactions underlying dementia may serve as an initial translational platform for identifying potential therapeutic targets and candidate biomarkers for dementia-spectrum disorders such as Alzheimer

  19. Prototype of NASA's Global Precipitation Measurement Mission Ground Validation System

    NASA Technical Reports Server (NTRS)

    Schwaller, M. R.; Morris, K. R.; Petersen, W. A.

    2007-01-01

    NASA is developing a Ground Validation System (GVS) as one of its contributions to the Global Precipitation Mission (GPM). The GPM GVS provides an independent means for evaluation, diagnosis, and ultimately improvement of GPM spaceborne measurements and precipitation products. NASA's GPM GVS consists of three elements: field campaigns/physical validation, direct network validation, and modeling and simulation. The GVS prototype of direct network validation compares Tropical Rainfall Measuring Mission (TRMM) satellite-borne radar data to similar measurements from the U.S. national network of operational weather radars. A prototype field campaign has also been conducted; modeling and simulation prototypes are under consideration.

  20. Qualitative validation of the reduction from two reciprocally coupled neurons to one self-coupled neuron in a respiratory network model.

    PubMed

    Dunmyre, Justin R

    2011-06-01

    The pre-Bötzinger complex of the mammalian brainstem is a heterogeneous neuronal network, and individual neurons within the network have varying strengths of the persistent sodium and calcium-activated nonspecific cationic currents. Individually, these currents have been the focus of modeling efforts. Previously, Dunmyre et al. (J Comput Neurosci 1-24, 2011) proposed a model and studied the interactions of these currents within one self-coupled neuron. In this work, I consider two identical, reciprocally coupled model neurons and validate the reduction to the self-coupled case. I find that all of the dynamics of the two model neuron network and the regions of parameter space where these distinct dynamics are found are qualitatively preserved in the reduction to the self-coupled case.

  1. Improved Diagnostic Accuracy of Alzheimer's Disease by Combining Regional Cortical Thickness and Default Mode Network Functional Connectivity: Validated in the Alzheimer's Disease Neuroimaging Initiative Set.

    PubMed

    Park, Ji Eun; Park, Bumwoo; Kim, Sang Joon; Kim, Ho Sung; Choi, Choong Gon; Jung, Seung Chai; Oh, Joo Young; Lee, Jae-Hong; Roh, Jee Hoon; Shim, Woo Hyun

    2017-01-01

    To identify potential imaging biomarkers of Alzheimer's disease by combining brain cortical thickness (CThk) and functional connectivity and to validate this model's diagnostic accuracy in a validation set. Data from 98 subjects was retrospectively reviewed, including a study set (n = 63) and a validation set from the Alzheimer's Disease Neuroimaging Initiative (n = 35). From each subject, data for CThk and functional connectivity of the default mode network was extracted from structural T1-weighted and resting-state functional magnetic resonance imaging. Cortical regions with significant differences between patients and healthy controls in the correlation of CThk and functional connectivity were identified in the study set. The diagnostic accuracy of functional connectivity measures combined with CThk in the identified regions was evaluated against that in the medial temporal lobes using the validation set and application of a support vector machine. Group-wise differences in the correlation of CThk and default mode network functional connectivity were identified in the superior temporal (p < 0.001) and supramarginal gyrus (p = 0.007) of the left cerebral hemisphere. Default mode network functional connectivity combined with the CThk of those two regions was more accurate than that combined with the CThk of both medial temporal lobes (91.7% vs. 75%). Combining functional information with CThk of the superior temporal and supramarginal gyri in the left cerebral hemisphere improves diagnostic accuracy, making it a potential imaging biomarker for Alzheimer's disease.

  2. Improved Diagnostic Accuracy of Alzheimer's Disease by Combining Regional Cortical Thickness and Default Mode Network Functional Connectivity: Validated in the Alzheimer's Disease Neuroimaging Initiative Set

    PubMed Central

    Park, Ji Eun; Park, Bumwoo; Kim, Ho Sung; Choi, Choong Gon; Jung, Seung Chai; Oh, Joo Young; Lee, Jae-Hong; Roh, Jee Hoon; Shim, Woo Hyun

    2017-01-01

    Objective To identify potential imaging biomarkers of Alzheimer's disease by combining brain cortical thickness (CThk) and functional connectivity and to validate this model's diagnostic accuracy in a validation set. Materials and Methods Data from 98 subjects were retrospectively reviewed, including a study set (n = 63) and a validation set from the Alzheimer's Disease Neuroimaging Initiative (n = 35). From each subject, data for CThk and functional connectivity of the default mode network were extracted from structural T1-weighted and resting-state functional magnetic resonance imaging. Cortical regions with significant differences between patients and healthy controls in the correlation of CThk and functional connectivity were identified in the study set. The diagnostic accuracy of functional connectivity measures combined with CThk in the identified regions was evaluated against that in the medial temporal lobes using the validation set and application of a support vector machine. Results Group-wise differences in the correlation of CThk and default mode network functional connectivity were identified in the superior temporal (p < 0.001) and supramarginal gyrus (p = 0.007) of the left cerebral hemisphere. Default mode network functional connectivity combined with the CThk of those two regions was more accurate than that combined with the CThk of both medial temporal lobes (91.7% vs. 75%). Conclusion Combining functional information with CThk of the superior temporal and supramarginal gyri in the left cerebral hemisphere improves diagnostic accuracy, making it a potential imaging biomarker for Alzheimer's disease. PMID:29089831

  3. Causality within the Epileptic Network: An EEG-fMRI Study Validated by Intracranial EEG.

    PubMed

    Vaudano, Anna Elisabetta; Avanzini, Pietro; Tassi, Laura; Ruggieri, Andrea; Cantalupo, Gaetano; Benuzzi, Francesca; Nichelli, Paolo; Lemieux, Louis; Meletti, Stefano

    2013-01-01

    Accurate localization of the Seizure Onset Zone (SOZ) is crucial in patients with drug-resistant focal epilepsy. Simultaneous EEG and fMRI recording (EEG-fMRI) has been proposed as a complementary non-invasive tool that can give useful additional information in the pre-surgical work-up. However, fMRI maps related to interictal epileptiform activities (IED) often show multiple regions of signal change, or "networks," rather than highly focal ones. Effective connectivity approaches such as Dynamic Causal Modeling (DCM) applied to fMRI data potentially offer a framework to address which brain regions drive the generation of seizures and IED within an epileptic network. Here, we present a first attempt to validate DCM on EEG-fMRI data in one patient affected by frontal lobe epilepsy. Pre-surgical EEG-fMRI demonstrated two distinct clusters of blood oxygenation level dependent (BOLD) signal increases linked to IED, one located in the left frontal pole and the other in the ipsilateral dorso-lateral frontal cortex. DCM of the IED-related BOLD signal favored a model in which the left dorso-lateral frontal cortex drives changes in the fronto-polar region. The validity of DCM was supported by: (a) the results of two different non-invasive analyses of the same dataset, EEG source imaging (ESI) and "psycho-physiological interaction" analysis; (b) the failure of a first surgical intervention limited to the fronto-polar region; and (c) the results of the intracranial EEG monitoring performed after the first surgical intervention, which confirmed a SOZ located over the dorso-lateral frontal cortex. These results add evidence that EEG-fMRI together with advanced methods of BOLD signal analysis is a promising tool that can give relevant information within the epilepsy surgery diagnostic work-up.

  4. Brief Report: Independent Validation of Autism Spectrum Disorder Case Status in the Utah Autism and Developmental Disabilities Monitoring (ADDM) Network Site

    ERIC Educational Resources Information Center

    Bakian, Amanda V.; Bilder, Deborah A.; Carbone, Paul S.; Hunt, Tyler D.; Petersen, Brent; Rice, Catherine E.

    2015-01-01

    An independent validation was conducted of the Utah Autism and Developmental Disabilities Monitoring Network's (UT-ADDM) classification of children with autism spectrum disorder (ASD). UT-ADDM final case status (n = 90) was compared with final case status as determined by independent external expert reviewers (EERs). Inter-rater reliability…

  5. A methodology to estimate representativeness of LAI station observation for validation: a case study with Chinese Ecosystem Research Network (CERN) in situ data

    NASA Astrophysics Data System (ADS)

    Xu, Baodong; Li, Jing; Liu, Qinhuo; Zeng, Yelu; Yin, Gaofei

    2014-11-01

    Leaf Area Index (LAI) is a key vegetation biophysical variable. To use remote sensing LAI products effectively in various disciplines, it is critical to understand their accuracy. The common method for validating LAI products is first to establish an empirical relationship between field data and high-resolution imagery to derive LAI maps, and then to aggregate the high-resolution LAI maps to match the moderate-resolution LAI products. This method is suited only to small regions, and its measurement frequency is limited. Continuous LAI observations from ground station networks are therefore important for validating multi-temporal LAI products. However, the scale mismatch between point observations at ground stations and the pixel observations of the products introduces scale error into direct comparisons, so the representativeness of station measurements at the product pixel scale must be evaluated for a sound validation. In this paper, a case study with Chinese Ecosystem Research Network (CERN) in situ data introduces a methodology to estimate the representativeness of LAI station observations for validating LAI products. We first analyzed indicators for evaluating observation representativeness and then graded the station measurement data. Finally, the LAI measurements that represent the pixel scale were used to validate the MODIS, GLASS and GEOV1 LAI products. The results show that the best agreement is reached between GLASS and GEOV1, while the lowest uncertainty is achieved by GEOV1, followed by GLASS and MODIS. We conclude that ground station measurement data can validate multi-temporal LAI products objectively when the representativeness of station observations is evaluated, which also improves the reliability of remote sensing product validation.

  6. Formal Specification and Validation of a Hybrid Connectivity Restoration Algorithm for Wireless Sensor and Actor Networks

    PubMed Central

    Imran, Muhammad; Zafar, Nazir Ahmad

    2012-01-01

    Maintaining inter-actor connectivity is extremely crucial in mission-critical applications of Wireless Sensor and Actor Networks (WSANs), as actors have to quickly plan optimal coordinated responses to detected events. Failure of a critical actor partitions the inter-actor network into disjoint segments besides leaving a coverage hole, and thus hinders the network operation. This paper presents a Partitioning detection and Connectivity Restoration (PCR) algorithm to tolerate critical actor failure. As part of pre-failure planning, PCR determines critical/non-critical actors based on localized information and designates each critical node with an appropriate backup (preferably non-critical). The pre-designated backup detects the failure of its primary actor and initiates a post-failure recovery process that may involve coordinated multi-actor relocation. To prove its correctness, we construct a formal specification of PCR using Z notation: we model the WSAN topology as a dynamic graph and transform PCR into a corresponding formal specification, which is analyzed and validated using the Z/EVES tool. Moreover, we simulate the specification to quantitatively analyze the efficiency of PCR. Simulation results confirm the effectiveness of PCR and show that it outperforms contemporary schemes found in the literature.
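
    PCR's notion of a "critical" actor (one whose failure partitions the inter-actor network) corresponds to a cut vertex of the topology graph. A minimal sketch on a hypothetical topology; the DFS below is the standard articulation-point algorithm, not the paper's localized procedure:

```python
# Find actors whose failure disconnects the network: the articulation
# points (cut vertices) of the topology graph, via Hopcroft-Tarjan DFS.
# The topology and node names are illustrative.

def articulation_points(graph):
    """Cut vertices of an undirected graph given as {node: set(neighbors)}."""
    disc, low, cuts = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in graph[u]:
            if v == parent:
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])   # back edge
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # no descendant of v reaches above u -> u is a cut vertex
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)
        if parent is None and children > 1:      # root rule
            cuts.add(u)

    for node in graph:
        if node not in disc:
            dfs(node, None)
    return cuts

# A bridge topology: actor C connects two otherwise disjoint segments.
topology = {
    "A": {"B", "C"}, "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"C", "E"}, "E": {"D"},
}
print(sorted(articulation_points(topology)))  # ['C', 'D']
```

    In PCR terms, C and D would each be pre-assigned a backup, while A, B and E are non-critical and need none.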

  7. SoilSCAPE in-Situ Observations of Soil Moisture for SMAP Validation: Pushing the Envelopes of Spatial Coverage and Energy Efficiency in Sparse Wireless Sensor Networks (Invited)

    NASA Astrophysics Data System (ADS)

    Moghaddam, M.; Silva, A.; Clewley, D.; Akbar, R.; Entekhabi, D.

    2013-12-01

    Soil Moisture Sensing Controller and oPtimal Estimator (SoilSCAPE) is a wireless in-situ sensor network technology, developed under the support of NASA ESTO/AIST program, for multi-scale validation of soil moisture retrievals from the Soil Moisture Active and Passive (SMAP) mission. The SMAP sensor suite is expected to produce soil moisture retrievals at 3 km scale from the radar instrument, at 36 km from the radiometer, and at 10 km from the combination of the two sensors. To validate the retrieved soil moisture maps at any of these scales, it is necessary to perform in-situ observations at multiple scales (tens, hundreds, and thousands of meters), representative of the true spatial variability of soil moisture fields. The most recent SoilSCAPE network, deployed in the California Central Valley, has been designed, built, and deployed to accomplish this goal, and is expected to become a core validation site for SMAP. The network consists of up to 150 sensor nodes, each comprised of 3-4 soil moisture sensors at various depths, deployed over a spatial extent of 36 km by 36 km. The network contains multiple sub-networks, each having up to 30 nodes, whose location is selected in part based on maximizing the land cover diversity within the 36 km cell. The network has achieved unprecedented energy efficiency, longevity, and spatial coverage using custom-designed hardware and software protocols. The network architecture utilizes a nested strategy, where a number of end devices (EDs) communicate to a local coordinator (LC) using our recently developed hardware with ultra-efficient circuitry and best-effort-timeslot allocation communication protocol. The LCs in turn communicate with the base station (BS) via text messages and a new compression scheme. The hardware and software technologies required to implement this latest deployment of the SoilSCAPE network will be presented in this paper, and several data sets resulting from the measurements will be shown. The data are

  8. Bibliometrics for Social Validation.

    PubMed

    Hicks, Daniel J

    2016-01-01

    This paper introduces a bibliometric, citation network-based method for assessing the social validation of novel research, and applies this method to the development of high-throughput toxicology research at the US Environmental Protection Agency. Social validation refers to the acceptance of novel research methods by a relevant scientific community; it is formally independent of the technical validation of methods, and is frequently studied in history, philosophy, and social studies of science using qualitative methods. The quantitative methods introduced here find that high-throughput toxicology methods are spread throughout a large and well-connected research community, which suggests high social validation. Further assessment of social validation involving mixed qualitative and quantitative methods is discussed in the conclusion.
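
    The claim about a "large and well-connected research community" can be quantified in many ways; one crude stdlib proxy is the share of papers in the largest connected component of the (undirected) citation graph. The paper IDs and edges below are made up:

```python
# Share of nodes in the largest connected component of a citation graph,
# as a rough connectedness indicator. Toy data, purely illustrative.
from collections import deque

def largest_component_share(edges, nodes):
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        comp, queue = 0, deque([start])
        seen.add(start)
        while queue:                 # BFS over one component
            u = queue.popleft()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, comp)
    return best / len(nodes)

papers = ["p1", "p2", "p3", "p4", "p5"]
citations = [("p1", "p2"), ("p2", "p3"), ("p3", "p1"), ("p4", "p5")]
print(largest_component_share(citations, papers))  # 0.6
```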

  9. Bibliometrics for Social Validation

    PubMed Central

    2016-01-01

    This paper introduces a bibliometric, citation network-based method for assessing the social validation of novel research, and applies this method to the development of high-throughput toxicology research at the US Environmental Protection Agency. Social validation refers to the acceptance of novel research methods by a relevant scientific community; it is formally independent of the technical validation of methods, and is frequently studied in history, philosophy, and social studies of science using qualitative methods. The quantitative methods introduced here find that high-throughput toxicology methods are spread throughout a large and well-connected research community, which suggests high social validation. Further assessment of social validation involving mixed qualitative and quantitative methods is discussed in the conclusion. PMID:28005974

  10. Cultural Geography Model Validation

    DTIC Science & Technology

    2010-03-01

    the Cultural Geography Model (CGM), a government owned, open source multi-agent system utilizing Bayesian networks, queuing systems, the Theory of...referent determined either from theory or SME opinion. 4. CGM Overview The CGM is a government-owned, open source, data driven multi-agent social...HSCB, validation, social network analysis ABSTRACT: In the current warfighting environment, the military needs robust modeling and simulation (M&S

  11. Serial Network Flow Monitor

    NASA Technical Reports Server (NTRS)

    Robinson, Julie A.; Tate-Brown, Judy M.

    2009-01-01

    Using a commercial software CD and minimal up-mass, SNFM monitors the Payload local area network (LAN) to analyze and troubleshoot LAN data traffic. Validating LAN traffic models may allow for faster and more reliable computer networks to sustain systems and science on future space missions. Research Summary: This experiment studies the function of the computer network onboard the ISS. On-orbit packet statistics are captured and used to validate ground based medium rate data link models and enhance the way that the local area network (LAN) is monitored. This information will allow monitoring and improvement in the data transfer capabilities of on-orbit computer networks. The Serial Network Flow Monitor (SNFM) experiment attempts to characterize the network equivalent of traffic jams on board ISS. The SNFM team is able to specifically target historical problem areas including the SAMS (Space Acceleration Measurement System) communication issues, data transmissions from the ISS to the ground teams, and multiple users on the network at the same time. By looking at how various users interact with each other on the network, conflicts can be identified and work can begin on solutions. SNFM comprises a commercial off-the-shelf software package that monitors packet traffic through the payload Ethernet LANs (local area networks) on board ISS.

  12. Cascade Back-Propagation Learning in Neural Networks

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    2003-01-01

    The cascade back-propagation (CBP) algorithm is the basis of a conceptual design for accelerating learning in artificial neural networks. The neural networks would be implemented as analog very-large-scale integrated (VLSI) circuits, and circuits to implement the CBP algorithm would be fabricated on the same VLSI circuit chips with the neural networks. Heretofore, artificial neural networks have learned slowly because it has been necessary to train them via software, for lack of a good on-chip learning technique. The CBP algorithm is an on-chip technique that provides for continuous learning in real time. Artificial neural networks are trained by example: A network is presented with training inputs for which the correct outputs are known, and the algorithm strives to adjust the weights of synaptic connections in the network to make the actual outputs approach the correct outputs. The input data are generally divided into three parts. Two of the parts, called the "training" and "cross-validation" sets, respectively, must be such that the corresponding input/output pairs are known. During training, the cross-validation set enables verification of the status of the input-to-output transformation learned by the network to avoid over-learning. The third part of the data, termed the "test" set, consists of the inputs that are required to be transformed into outputs; this set may or may not include the training set and/or the cross-validation set. Proposed neural-network circuitry for on-chip learning would be divided into two distinct networks: one for training and one for validation. Both networks would share the same synaptic weights.
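
    The three-way division of input data described above can be sketched as a deterministic split: a training set and a cross-validation set (both with known outputs) plus a test set whose outputs are withheld. The split fractions are illustrative, not taken from the paper:

```python
# Deterministic training / cross-validation / test split of (input, output)
# pairs, mirroring the three-part division described in the abstract.

def three_way_split(pairs, train_frac=0.6, cv_frac=0.2):
    """Return (train, cv, test); train and cv keep their outputs,
    the remainder forms the test set of bare inputs."""
    n = len(pairs)
    n_train = int(n * train_frac)
    n_cv = int(n * cv_frac)
    train = pairs[:n_train]
    cv = pairs[n_train:n_train + n_cv]
    test = [x for x, _ in pairs[n_train + n_cv:]]  # outputs withheld
    return train, cv, test

data = [(i, 2 * i) for i in range(10)]   # toy input/output pairs
train, cv, test = three_way_split(data)
print(len(train), len(cv), len(test))    # 6 2 2
```

    During training, the `cv` pairs would be used only to monitor generalization (to catch over-learning), never to update the weights.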

  13. AERONET-OC: Strengths and Weaknesses of a Network for the Validation of Satellite Coastal Radiometric Products

    NASA Technical Reports Server (NTRS)

    Zibordi, Giuseppe; Holben, Brent; Slutsker, Ilya; Giles, David; D'Alimonte, Davide; Melin, Frederic; Berthon, Jean-Francois; Vandemark, Doug; Feng, Hui; Schuster, Gregory

    2008-01-01

    The Ocean Color component of the Aerosol Robotic Network (AERONET-OC) has been implemented to support long-term satellite ocean color investigations through cross-site consistent and accurate measurements collected by autonomous radiometer systems deployed on offshore fixed platforms. The ultimate purpose of AERONET-OC is the production of standardized measurements performed at different sites with identical measuring systems and protocols, calibrated using a single reference source and method, and processed with the same code. The AERONET-OC primary data product is the normalized water-leaving radiance determined at center-wavelengths of interest for satellite ocean color applications, with an uncertainty lower than 5% in the blue-green spectral regions and higher than 8% in the red. Measurements collected at six sites, including the northern Adriatic Sea, the Baltic Proper, the Gulf of Finland, the Persian Gulf, and the northern and southern margins of the Middle Atlantic Bight, have shown the capability of producing quality-assured data over a wide range of bio-optical conditions including Case-2 yellow-substance- and sediment-dominated waters. This work briefly introduces network elements like: deployment sites, measurement method, instrument calibration, processing scheme, quality-assurance, uncertainties, data archive and products accessibility. Emphasis is given to those elements which underline the network strengths (i.e., mostly standardization of any network element) and its weaknesses (i.e., the use of consolidated, but old-fashioned technology). The work also addresses the application of AERONET-OC data to the validation of primary satellite radiometric products over a variety of complex coastal waters and finally provides elements for the identification of new deployment sites most suitable to support satellite ocean color missions.

  14. Spanish Museum Libraries Network.

    ERIC Educational Resources Information Center

    Lopez de Prado, Rosario

    This paper describes the creation of an automated network of museum libraries in Spain. The only way the world's specialized libraries can remain active and continue to offer valid information today is to automate their services and create library networks with cooperative plans. The network can be configured with different…

  15. Validation of the Mindful Coping Scale

    ERIC Educational Resources Information Center

    Tharaldsen, Kjersti B.; Bru, Edvin

    2011-01-01

    The aim of this research is to develop and validate a self-report measure of mindfulness and coping, the mindful coping scale (MCS). Dimensions of mindful coping were theoretically deduced from mindfulness theory and coping theory. The MCS was empirically evaluated by use of factor analyses, reliability testing and nomological network validation.…

  16. Thermodynamic Constraints Improve Metabolic Networks.

    PubMed

    Krumholz, Elias W; Libourel, Igor G L

    2017-08-08

    In pursuit of establishing a realistic metabolic phenotypic space, the reversibility of reactions is thermodynamically constrained in modern metabolic networks. The reversibility constraints follow from heuristic thermodynamic poise approximations that take anticipated cellular metabolite concentration ranges into account. Because constraints reduce the feasible space, draft metabolic network reconstructions may need more extensive reconciliation, and a larger number of genes may become essential. Notwithstanding ubiquitous application, the effect of reversibility constraints on the predictive capabilities of metabolic networks has not been investigated in detail. Instead, work has focused on the implementation and validation of the thermodynamic poise calculation itself. With the advance of fast linear programming-based network reconciliation, assessing the effects of reversibility constraints on network reconciliation and gene essentiality predictions has become feasible and is the subject of this study. Networks with thermodynamically informed reversibility constraints produced better gene essentiality predictions than networks constrained with randomly shuffled constraints. Unconstrained networks predicted gene essentiality as accurately as thermodynamically constrained networks, but predicted substantially fewer essential genes. Networks that were reconciled with sequence similarity data and strongly enforced reversibility constraints outperformed all other networks. We conclude that metabolic network analysis confirmed the validity of the thermodynamic constraints, and that thermodynamic poise information is actionable during network reconciliation. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
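
    The heuristic poise idea can be sketched as follows: a reaction is held irreversible if its transformed Gibbs energy stays one-signed over the assumed cellular concentration range. The standard energies and the concentration window below are illustrative placeholders, not measured values:

```python
# Classify a reaction's directionality from the range of
# dG = dG0 + RT*ln(Q) when every metabolite concentration may sit
# anywhere in an assumed cellular window. Illustrative values only.
import math

R, T = 8.314e-3, 298.15          # kJ/(mol*K), K
C_MIN, C_MAX = 1e-6, 1e-2        # assumed metabolite range, mol/L

def dg_range(dg0, n_substrates, n_products):
    """Min/max of dG over the concentration window, for a reaction with
    the given numbers of (unit-stoichiometry) substrates and products."""
    rt = R * T
    lo = dg0 + rt * (n_products * math.log(C_MIN) - n_substrates * math.log(C_MAX))
    hi = dg0 + rt * (n_products * math.log(C_MAX) - n_substrates * math.log(C_MIN))
    return lo, hi

def direction(dg0, n_s, n_p):
    lo, hi = dg_range(dg0, n_s, n_p)
    if hi < 0:
        return "forward only"    # exergonic at every poise
    if lo > 0:
        return "backward only"
    return "reversible"

print(direction(-60.0, 1, 1))  # strongly exergonic -> forward only
print(direction(0.0, 1, 1))    # near equilibrium -> reversible
```

    In a real reconstruction this classification would feed the flux bounds of a linear program; randomly shuffling the resulting constraints is what the study compares against.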

  17. What do you mean "drunk"? Convergent validation of multiple methods of mapping alcohol expectancy memory networks.

    PubMed

    Reich, Richard R; Ariel, Idan; Darkes, Jack; Goldman, Mark S

    2012-09-01

    The configuration and activation of memory networks have been theorized as mechanisms that underlie the often observed link between alcohol expectancies and drinking. A key component of this network is the expectancy "drunk." The memory network configuration of "drunk" was mapped by using cluster analysis of data gathered from the paired-similarities task (PST) and the Alcohol Expectancy Multi-Axial Assessment (AEMAX). A third task, the free associates task (FA), assessed participants' strongest alcohol expectancy associates and was used as a validity check for the cluster analyses. Six hundred forty-seven 18-19-year-olds completed these measures and a measure of alcohol consumption at baseline assessment for a 5-year longitudinal study. For both the PST and AEMAX, "drunk" clustered with mainly negative and sedating effects (e.g., "sick," "dizzy," "sleepy") in lighter drinkers and with more positive and arousing effects (e.g., "happy," "horny," "outgoing") in heavier drinkers, showing that the cognitive organization of expectancies reflected drinker type (and might influence the choice to drink). Consistent with the cluster analyses, in participants who gave "drunk" as an FA response, heavier drinkers rated the word as more positive and arousing than lighter drinkers. Additionally, gender did not account for the observed drinker-type differences. These results support the notion that for some emerging adults, drinking may be linked to what they mean by the word "drunk." PsycINFO Database Record (c) 2012 APA, all rights reserved.

  18. Feasibility of a Networked Air Traffic Infrastructure Validation Environment for Advanced NextGen Concepts

    NASA Technical Reports Server (NTRS)

    McCormack, Michael J.; Gibson, Alec K.; Dennis, Noah E.; Underwood, Matthew C.; Miller, Lana B.; Ballin, Mark G.

    2013-01-01

    Next Generation Air Transportation System (NextGen) applications reliant upon aircraft data links such as Automatic Dependent Surveillance-Broadcast (ADS-B) offer a sweeping modernization of the National Airspace System (NAS), but the aviation stakeholder community has not yet established a positive business case for equipage and message content standards remain in flux. It is necessary to transition promising Air Traffic Management (ATM) Concepts of Operations (ConOps) from simulation environments to full-scale flight tests in order to validate user benefits and solidify message standards. However, flight tests are prohibitively expensive and message standards for Commercial-off-the-Shelf (COTS) systems cannot support many advanced ConOps. It is therefore proposed to simulate future aircraft surveillance and communications equipage and employ an existing commercial data link to exchange data during dedicated flight tests. This capability, referred to as the Networked Air Traffic Infrastructure Validation Environment (NATIVE), would emulate aircraft data links such as ADS-B using in-flight Internet and easily-installed test equipment. By utilizing low-cost equipment that is easy to install and certify for testing, advanced ATM ConOps can be validated, message content standards can be solidified, and new standards can be established through full-scale flight trials without unnecessary or expensive equipage or extensive flight test preparation. This paper presents results of a feasibility study of the NATIVE concept. To determine requirements, six NATIVE design configurations were developed for two NASA ConOps that rely on ADS-B. The performance characteristics of three existing in-flight Internet services were investigated to determine whether performance is adequate to support the concept. Next, a study of requisite hardware and software was conducted to examine whether and how the NATIVE concept might be realized. Finally, to determine a business case

  19. The Network Information Management System (NIMS) in the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Wales, K. J.

    1983-01-01

    In an effort to better manage enormous amounts of administrative, engineering, and management data distributed worldwide, a study was conducted which identified the need for a network support system. The Network Information Management System (NIMS) will provide the Deep Space Network with the tools to maintain an easily accessible source of valid information in support of management activities, and with a more cost-effective method of acquiring, maintaining, and retrieving data.

  20. Computing preimages of Boolean networks.

    PubMed

    Klotz, Johannes; Bossert, Martin; Schober, Steffen

    2013-01-01

    In this paper we present an algorithm based on the sum-product algorithm that finds elements in the preimage of a feed-forward Boolean network given an output of the network. Our probabilistic method runs in linear time with respect to the number of nodes in the network. We evaluate our algorithm for randomly constructed Boolean networks and a regulatory network of Escherichia coli and find that it gives a valid solution in most cases.
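
    The preimage problem itself is easy to state: given the network's output, find the input assignments that produce it. The paper's sum-product method runs in linear time; the exhaustive search below merely illustrates the problem on a tiny hypothetical feed-forward network with three inputs:

```python
# Brute-force preimage enumeration for a toy feed-forward Boolean network.
# This illustrates the problem, not the paper's linear-time algorithm.
from itertools import product

def network(x1, x2, x3):
    # two hidden nodes feeding one output node (hypothetical wiring)
    h1 = x1 and x2
    h2 = x2 or x3
    return int(h1 or not h2)

def preimage(target):
    """All input triples that the network maps to `target`."""
    return [bits for bits in product([0, 1], repeat=3)
            if network(*bits) == target]

print(preimage(1))  # four of the eight inputs map to 1
```

    The exponential cost of this enumeration (2^n input assignments) is exactly what the sum-product formulation avoids.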

  1. Validation of the Social Appearance Anxiety Scale in Patients with Systemic Sclerosis: A Scleroderma Patient-centered Intervention Network Cohort Study.

    PubMed

    Mills, Sarah D; Kwakkenbos, Linda; Carrier, Marie-Eve; Gholizadeh, Shadi; Fox, Rina S; Jewett, Lisa R; Gottesman, Karen; Roesch, Scott C; Thombs, Brett D; Malcarne, Vanessa L

    2018-01-17

    Systemic sclerosis (SSc) is an autoimmune disease that can cause disfiguring changes in appearance. This study examined the structural validity, internal consistency reliability, convergent validity, and measurement equivalence of the Social Appearance Anxiety Scale (SAAS) across SSc disease subtypes. Patients enrolled in the Scleroderma Patient-centered Intervention Network Cohort completed the SAAS and measures of appearance-related concerns and psychological distress. Confirmatory factor analysis (CFA) was used to examine the structural validity of the SAAS. Multiple-group CFA was used to determine if SAAS scores can be compared across patients with limited and diffuse disease subtypes. Cronbach's alpha was used to examine internal consistency reliability. Correlations of SAAS scores with measures of body image dissatisfaction, fear of negative evaluation, social anxiety, and depression were used to examine convergent validity. SAAS scores were hypothesized to be positively associated with all convergent validity measures, with correlations significant and moderate to large in size. A total of 938 patients with SSc were included. CFA supported a one-factor structure (CFI: .92; SRMR: .04; RMSEA: .08), and multiple-group CFA indicated that the scalar invariance model best fit the data. Internal consistency reliability was good in the total sample (α = .96) and in disease subgroups. Overall, evidence of convergent validity was found with measures of body image dissatisfaction, fear of negative evaluation, social anxiety, and depression. The SAAS can be reliably and validly used to assess fear of appearance evaluation in patients with SSc, and SAAS scores can be meaningfully compared across disease subtypes. This article is protected by copyright. All rights reserved.
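
    The internal consistency figure reported above (α = .96) is Cronbach's alpha, which is straightforward to compute from per-item scores. The item scores below are made-up illustrations, not SAAS data:

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/var(totals)).
# Toy item scores for five respondents; purely illustrative.

def variance(xs):
    """Sample variance (n-1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of per-item score lists, one score per respondent."""
    k = len(items)
    respondents = list(zip(*items))
    totals = [sum(r) for r in respondents]     # each respondent's sum score
    item_var = sum(variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three highly correlated items -> alpha near 1.
items = [[1, 2, 3, 4, 5],
         [2, 2, 3, 4, 5],
         [1, 3, 3, 4, 5]]
print(round(cronbach_alpha(items), 2))  # 0.98
```

    High alpha indicates the items move together across respondents; it says nothing by itself about validity, which is why the study also reports CFA fit and convergent correlations.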

  2. Validation of Copernicus Height-resolved Ozone data Products from Sentinel-5P TROPOMI using global sonde and lidar networks (CHEOPS-5P)

    NASA Astrophysics Data System (ADS)

    Keppens, Arno; Lambert, Jean-Christopher; Hubert, Daan; Verhoelst, Tijl; Granville, José; Ancellet, Gérard; Balis, Dimitris; Delcloo, Andy; Duflot, Valentin; Godin-Beekmann, Sophie; Koukouli, Marilisa; Leblanc, Thierry; Stavrakou, Trissevgeni; Steinbrecht, Wolfgang; Stübi, Réné; Thompson, Anne

    2017-04-01

    Monitoring of and research on air quality, stratospheric ozone and climate change require global and long-term observation of the vertical distribution of atmospheric ozone, at ever-improving resolution and accuracy. Global tropospheric and stratospheric ozone profile measurement capabilities from space have therefore improved substantially over the last decades. Being a part of the space segment of the Copernicus Atmosphere and Climate Services that is currently under implementation, the upcoming Sentinel-5 Precursor (S5P) mission with its imaging spectrometer TROPOMI (Tropospheric Monitoring Instrument) is dedicated to the measurement of nadir atmospheric radiance and solar irradiance in the UV-VIS-NIR-SWIR spectral range. Ozone profile and tropospheric ozone column data will be retrieved from these measurements by use of several complementary retrieval methods. The geophysical validation of the enhanced height-resolved ozone data products, as well as support to the continuous evolution of the associated retrieval algorithms, is a key objective of the CHEOPS-5P project, a contributor to the ESA-led S5P Validation Team (S5PVT). This work describes the principles and implementation of the CHEOPS-5P quality assessment (QA) and validation system. The QA/validation methodology relies on the analysis of S5P retrieval diagnostics and on comparisons of S5P data with reference ozone profile measurements. The latter are collected from ozonesonde, stratospheric lidar and tropospheric lidar stations performing network operation in the context of WMO's Global Atmosphere Watch, including the NDACC global and SHADOZ tropical networks. 
After adaptation of the Multi-TASTE versatile satellite validation environment currently operational in the context of ESA's CCI, EUMETSAT O3M-SAF, and CEOS and SPARC initiatives, a list of S5P data Quality Indicators (QI) will be derived from complementary investigations: (1) data content and information content studies of the S5P data retrievals

  3. Revealing the Effects of the Herbal Pair of Euphorbia kansui and Glycyrrhiza on Hepatocellular Carcinoma Ascites with Integrating Network Target Analysis and Experimental Validation

    PubMed Central

    Zhang, Yanqiong; Lin, Ya; Zhao, Haiyu; Guo, Qiuyan; Yan, Chen; Lin, Na

    2016-01-01

    Although the herbal pair of Euphorbia kansui (GS) and Glycyrrhiza (GC) is one of the so-called "eighteen antagonistic medicaments" in Chinese medicinal literature, it is prescribed in a classic Traditional Chinese Medicine (TCM) formula Gansui-Banxia-Tang for cancerous ascites, suggesting that GS and GC may exhibit synergistic or antagonistic effects in different combination designs. Here, we modeled the effects of the GS/GC combination with a target interaction network and clarified the associations between the network topologies involving the drug targets and the drug combination effects. Moreover, the "edge-betweenness" values, defined as the frequency with which an edge lies on the shortest paths between all pairs of modules in the network, were calculated, and the ADRB1-PIK3CG interaction exhibited the greatest edge-betweenness value, suggesting its crucial role in connecting the other edges in the network. Because ADRB1 and PIK3CG were putative targets of GS and GC, respectively, and both had functional interactions with AVPR2, an approved therapeutic target for ascites, we proposed that the ADRB1-PIK3CG-AVPR2 signal axis might be involved in the effects of the GS-GC combination on ascites. This proposal was further experimentally validated in a H22 hepatocellular carcinoma (HCC) ascites model. Collectively, this systems-level investigation integrated drug target prediction and network analysis to reveal the combination principles of the herbal pair of GS and GC. Experimental validation in an in vivo system provided convincing evidence that different combination designs of GS and GC might result in synergistic or antagonistic effects on HCC ascites that might be partially related to their regulation of the ADRB1-PIK3CG-AVPR2 signal axis. PMID:27143956
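
    The edge-betweenness measure described above can be sketched on a toy graph. This is a hedged illustration, not the study's data: node names other than ADRB1 and PIK3CG are invented placeholders, and the plain-BFS counting assumes unique shortest paths, which holds in this small example.

```python
from collections import deque
from itertools import combinations

# Toy undirected target-interaction network: two triangles joined by a
# single bridging edge. Names "T1".."T4" are hypothetical stand-ins.
edges = [
    ("ADRB1", "PIK3CG"),                                  # candidate bridge
    ("ADRB1", "T1"), ("ADRB1", "T2"), ("T1", "T2"),       # module 1
    ("PIK3CG", "T3"), ("PIK3CG", "T4"), ("T3", "T4"),     # module 2
]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def shortest_path(src, dst):
    """BFS returning one shortest path (paths are unique in this toy graph)."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for w in adj[u]:
            if w not in prev:
                prev[w] = u
                q.append(w)
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

# Edge betweenness (unnormalized): how many all-pairs shortest paths
# traverse each edge.
usage = {frozenset(e): 0 for e in edges}
for a, b in combinations(adj, 2):
    p = shortest_path(a, b)
    for e in zip(p, p[1:]):
        usage[frozenset(e)] += 1

bridge = max(usage, key=usage.get)
```

    All nine cross-module node pairs must traverse the ADRB1-PIK3CG edge, so it receives the highest count, mirroring the bridging role the abstract attributes to that interaction.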

  4. Verification and Validation of Neural Networks for Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Mackall, Dale; Nelson, Stacy; Schumann, Johann

    2002-01-01

    The Dryden Flight Research Center V&V working group and NASA Ames Research Center Automated Software Engineering (ASE) group collaborated to prepare this report. The purpose is to describe V&V processes and methods for certification of neural networks for aerospace applications, particularly adaptive flight control systems like Intelligent Flight Control Systems (IFCS) that use neural networks. This report is divided into the following two sections: Overview of Adaptive Systems and V&V Processes/Methods.

  5. Reverse Engineering Validation using a Benchmark Synthetic Gene Circuit in Human Cells

    PubMed Central

    Kang, Taek; White, Jacob T.; Xie, Zhen; Benenson, Yaakov; Sontag, Eduardo; Bleris, Leonidas

    2013-01-01

    Multi-component biological networks are often understood incompletely, in large part due to the lack of reliable and robust methodologies for network reverse engineering and characterization. As a consequence, developing automated and rigorously validated methodologies for unraveling the complexity of biomolecular networks in human cells remains a central challenge to life scientists and engineers. Today, when it comes to experimental and analytical requirements, there exists a great deal of diversity in reverse engineering methods, which renders the independent validation and comparison of their predictive capabilities difficult. In this work we introduce an experimental platform customized for the development and verification of reverse engineering and pathway characterization algorithms in mammalian cells. Specifically, we stably integrate a synthetic gene network in human kidney cells and use it as a benchmark for validating reverse engineering methodologies. The network, which is orthogonal to endogenous cellular signaling, contains a small set of regulatory interactions that can be used to quantify the reconstruction performance. By performing successive perturbations to each modular component of the network and comparing protein and RNA measurements, we study the conditions under which we can reliably reconstruct the causal relationships of the integrated synthetic network. PMID:23654266

  6. Reverse engineering validation using a benchmark synthetic gene circuit in human cells.

    PubMed

    Kang, Taek; White, Jacob T; Xie, Zhen; Benenson, Yaakov; Sontag, Eduardo; Bleris, Leonidas

    2013-05-17

    Multicomponent biological networks are often understood incompletely, in large part due to the lack of reliable and robust methodologies for network reverse engineering and characterization. As a consequence, developing automated and rigorously validated methodologies for unraveling the complexity of biomolecular networks in human cells remains a central challenge to life scientists and engineers. Today, when it comes to experimental and analytical requirements, there exists a great deal of diversity in reverse engineering methods, which renders the independent validation and comparison of their predictive capabilities difficult. In this work we introduce an experimental platform customized for the development and verification of reverse engineering and pathway characterization algorithms in mammalian cells. Specifically, we stably integrate a synthetic gene network in human kidney cells and use it as a benchmark for validating reverse engineering methodologies. The network, which is orthogonal to endogenous cellular signaling, contains a small set of regulatory interactions that can be used to quantify the reconstruction performance. By performing successive perturbations to each modular component of the network and comparing protein and RNA measurements, we study the conditions under which we can reliably reconstruct the causal relationships of the integrated synthetic network.

  7. Validation and understanding of Moderate Resolution Imaging Spectroradiometer aerosol products (C5) using ground-based measurements from the handheld Sun photometer network in China

    Treesearch

    Zhanqing Li; Feng Niu; Kwon-Ho Lee; Jinyuan Xin; Wei Min Hao; Bryce L. Nordgren; Yuesi Wang; Pucai Wang

    2007-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) currently provides the most extensive aerosol retrievals on a global basis, but validation is limited to a small number of ground stations. This study presents a comprehensive evaluation of Collection 4 and 5 MODIS aerosol products using ground measurements from the Chinese Sun Hazemeter Network (CSHNET). The...

  8. Network Hardware Virtualization for Application Provisioning in Core Networks

    DOE PAGES

    Gumaste, Ashwin; Das, Tamal; Khandwala, Kandarp; ...

    2017-02-03

    Service providers and vendors are moving toward a network-virtualized core, whereby multiple applications would be treated on their own merit in programmable hardware. Such a network would have the advantage of being customized for user requirements and allow provisioning of next-generation services that are built specifically to meet user needs. In this article, we articulate the impact of network virtualization on networks that provide customized services and how a provider's business can grow with network virtualization. We outline a decision map that allows mapping of applications with technology that is supported in network-virtualization-oriented equipment. Analogies to the world of virtual machines and generic virtualization show that hardware supporting network virtualization will facilitate new customer needs while optimizing the provider network from the cost and performance perspectives. A key conclusion of the article is that growth would yield sizable revenue when providers plan ahead in terms of supporting network-virtualization-oriented technology in their networks. To be precise, providers have to incorporate into their growth plans network elements capable of new service deployments while protecting network neutrality. Finally, a simulation study validates our NV-induced model.

  9. Network Hardware Virtualization for Application Provisioning in Core Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gumaste, Ashwin; Das, Tamal; Khandwala, Kandarp

    Service providers and vendors are moving toward a network-virtualized core, whereby multiple applications would be treated on their own merit in programmable hardware. Such a network would have the advantage of being customized for user requirements and allow provisioning of next-generation services that are built specifically to meet user needs. In this article, we articulate the impact of network virtualization on networks that provide customized services and how a provider's business can grow with network virtualization. We outline a decision map that allows mapping of applications with technology that is supported in network-virtualization-oriented equipment. Analogies to the world of virtual machines and generic virtualization show that hardware supporting network virtualization will facilitate new customer needs while optimizing the provider network from the cost and performance perspectives. A key conclusion of the article is that growth would yield sizable revenue when providers plan ahead in terms of supporting network-virtualization-oriented technology in their networks. To be precise, providers have to incorporate into their growth plans network elements capable of new service deployments while protecting network neutrality. Finally, a simulation study validates our NV-induced model.

  10. Verification and Validation of Neural Networks for Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Mackall, Dale; Nelson, Stacy; Schumman, Johann; Clancy, Daniel (Technical Monitor)

    2002-01-01

    The Dryden Flight Research Center V&V working group and NASA Ames Research Center Automated Software Engineering (ASE) group collaborated to prepare this report. The purpose is to describe V&V processes and methods for certification of neural networks for aerospace applications, particularly adaptive flight control systems like Intelligent Flight Control Systems (IFCS) that use neural networks. This report is divided into the following two sections: 1) Overview of Adaptive Systems; and 2) V&V Processes/Methods.

  11. Using Social Network Methods to Study School Leadership

    ERIC Educational Resources Information Center

    Pitts, Virginia M.; Spillane, James P.

    2009-01-01

    Social network analysis is increasingly used in the study of policy implementation and school leadership. A key question that remains is that of instrument validity--that is, the question of whether these social network survey instruments measure what they purport to measure. In this paper, we describe our work to examine the validity of the…

  12. Default mode network, motor network, dorsal and ventral basal ganglia networks in the rat brain: comparison to human networks using resting state-fMRI.

    PubMed

    Sierakowiak, Adam; Monnot, Cyril; Aski, Sahar Nikkhou; Uppman, Martin; Li, Tie-Qiang; Damberg, Peter; Brené, Stefan

    2015-01-01

    Rodent models are developed to enhance understanding of the underlying biology of different brain disorders. However, before interpreting findings from animal models in a translational aspect to understand human disease, a fundamental step is to first have knowledge of similarities and differences of the biological systems studied. In this study, we analyzed and verified four known networks termed: default mode network, motor network, dorsal basal ganglia network, and ventral basal ganglia network using resting state functional MRI (rsfMRI) in humans and rats. Our work supports the notion that humans and rats have common robust resting state brain networks and that rsfMRI can be used as a translational tool when validating animal models of brain disorders. In the future, rsfMRI may be used, in addition to short-term interventions, to characterize longitudinal effects on functional brain networks after long-term intervention in humans and rats.

  13. Default Mode Network, Motor Network, Dorsal and Ventral Basal Ganglia Networks in the Rat Brain: Comparison to Human Networks Using Resting State-fMRI

    PubMed Central

    Sierakowiak, Adam; Monnot, Cyril; Aski, Sahar Nikkhou; Uppman, Martin; Li, Tie-Qiang; Damberg, Peter; Brené, Stefan

    2015-01-01

    Rodent models are developed to enhance understanding of the underlying biology of different brain disorders. However, before interpreting findings from animal models in a translational aspect to understand human disease, a fundamental step is to first have knowledge of similarities and differences of the biological systems studied. In this study, we analyzed and verified four known networks termed: default mode network, motor network, dorsal basal ganglia network, and ventral basal ganglia network using resting state functional MRI (rsfMRI) in humans and rats. Our work supports the notion that humans and rats have common robust resting state brain networks and that rsfMRI can be used as a translational tool when validating animal models of brain disorders. In the future, rsfMRI may be used, in addition to short-term interventions, to characterize longitudinal effects on functional brain networks after long-term intervention in humans and rats. PMID:25789862

  14. Reliability, Convergent Validity and Time Invariance of Default Mode Network Deviations in Early Adult Major Depressive Disorder.

    PubMed

    Bessette, Katie L; Jenkins, Lisanne M; Skerrett, Kristy A; Gowins, Jennifer R; DelDonno, Sophie R; Zubieta, Jon-Kar; McInnis, Melvin G; Jacobs, Rachel H; Ajilore, Olusola; Langenecker, Scott A

    2018-01-01

    There is substantial variability across studies of default mode network (DMN) connectivity in major depressive disorder, and reliability and time-invariance are not reported. This study evaluates whether DMN dysconnectivity in remitted depression (rMDD) is reliable over time and symptom-independent, and explores convergent relationships with cognitive features of depression. A longitudinal study was conducted with 82 young adults free of psychotropic medications (47 rMDD, 35 healthy controls) who completed clinical structured interviews, neuropsychological assessments, and 2 resting-state fMRI scans across 2 study sites. Functional connectivity analyses from bilateral posterior cingulate and anterior hippocampal formation seeds in DMN were conducted at both time points within a repeated-measures analysis of variance to compare groups and evaluate reliability of group-level connectivity findings. Eleven hyper- (from posterior cingulate) and six hypo- (from hippocampal formation) connectivity clusters in rMDD were obtained with moderate to adequate reliability in all but one cluster (ICC range = 0.50 to 0.76 for 16 of 17). The significant clusters were reduced with a principal component analysis (5 components obtained) to explore these connectivity components, and were then correlated with cognitive features (rumination, cognitive control, learning and memory, and explicit emotion identification). At the exploratory level, for convergent validity, components consisting of posterior cingulate with cognitive control network hyperconnectivity in rMDD were related to cognitive control (inverse) and rumination (positive). Components consisting of anterior hippocampal formation with social emotional network and DMN hypoconnectivity were related to memory (inverse) and happy emotion identification (positive). Thus, time-invariant DMN connectivity differences exist early in the lifespan course of depression and are reliable.
The nuanced results suggest a ventral within-network

  15. Disruption Tolerant Network Technology Flight Validation Report: DINET

    NASA Technical Reports Server (NTRS)

    Jones, Ross M.

    2009-01-01

    In October and November of 2008, the Jet Propulsion Laboratory installed and tested essential elements of Delay/Disruption Tolerant Networking (DTN) technology on the Deep Impact spacecraft. This experiment, called Deep Impact Network Experiment (DINET), was performed in close cooperation with the EPOXI project which has responsibility for the spacecraft. During DINET some 300 images were transmitted from the JPL nodes to the spacecraft. Then, they were automatically forwarded from the spacecraft back to the JPL nodes, exercising DTN's bundle origination, transmission, acquisition, dynamic route computation, congestion control, prioritization, custody transfer, and automatic retransmission procedures, both on the spacecraft and on the ground, over a period of 27 days. All transmitted bundles were successfully received, without corruption. The DINET experiment demonstrated DTN readiness for operational use in space missions.

  16. Disruption Tolerant Network Technology Flight Validation Report: DINET

    NASA Technical Reports Server (NTRS)

    Jones, Ross M.

    2009-01-01

    In October and November of 2008, the Jet Propulsion Laboratory installed and tested essential elements of Delay/Disruption Tolerant Networking (DTN) technology on the Deep Impact spacecraft. This experiment, called Deep Impact Network Experiment (DINET), was performed in close cooperation with the EPOXI project which has responsibility for the spacecraft. During DINET some 300 images were transmitted from the JPL nodes to the spacecraft. Then, they were automatically forwarded from the spacecraft back to the JPL nodes, exercising DTN's bundle origination, transmission, acquisition, dynamic route computation, congestion control, prioritization, custody transfer, and automatic retransmission procedures, both on the spacecraft and on the ground, over a period of 27 days. All transmitted bundles were successfully received, without corruption. The DINET experiment demonstrated DTN readiness for operational use in space missions.

  17. Directed network modules

    NASA Astrophysics Data System (ADS)

    Palla, Gergely; Farkas, Illés J.; Pollner, Péter; Derényi, Imre; Vicsek, Tamás

    2007-06-01

    A search technique locating network modules, i.e. internally densely connected groups of nodes, in directed networks is introduced by extending the clique percolation method originally proposed for undirected networks. After giving a suitable definition for directed modules we investigate their percolation transition in the Erdős-Rényi graph both analytically and numerically. We also analyse four real-world directed networks, including Google's own web-pages, an email network, a word association graph and the transcriptional regulatory network of the yeast Saccharomyces cerevisiae. The obtained directed modules are validated by additional information available for the nodes. We find that directed modules of real-world graphs inherently overlap and the investigated networks can be classified into two major groups in terms of the overlaps between the modules. Accordingly, in the word-association network and Google's web-pages, overlaps are likely to contain in-hubs, whereas the modules in the email and transcriptional regulatory network tend to overlap via out-hubs.
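
    The undirected clique percolation method that the paper extends can be sketched in a few lines: enumerate k-cliques, link cliques sharing k-1 nodes, and read modules off the connected components of that clique graph. A minimal stdlib sketch on an invented graph (the directed generalization is not shown):

```python
from itertools import combinations

# Toy undirected graph: two triangle-based modules joined by one weak edge.
edges = {frozenset(e) for e in [
    ("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d"),  # module 1
    ("d", "e"),                                                  # weak link
    ("e", "f"), ("e", "g"), ("f", "g"),                          # module 2
]}
nodes = sorted({n for e in edges for n in e})

# Step 1: enumerate all k-cliques (k = 3, i.e. triangles).
k = 3
cliques = [frozenset(c) for c in combinations(nodes, k)
           if all(frozenset(p) in edges for p in combinations(c, 2))]

# Step 2: two k-cliques are adjacent if they share k-1 nodes; modules are
# connected components of this clique graph (union-find with path halving).
parent = {c: c for c in cliques}
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x
for c1, c2 in combinations(cliques, 2):
    if len(c1 & c2) == k - 1:
        parent[find(c1)] = find(c2)

modules = {}
for c in cliques:
    modules.setdefault(find(c), set()).update(c)
communities = sorted(sorted(m) for m in modules.values())
# communities -> [['a', 'b', 'c', 'd'], ['e', 'f', 'g']]
```

    The single d-e edge belongs to no triangle, so the two modules stay separate; overlapping modules would emerge if a node participated in triangles of both components.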

  18. Link and Network Layers Design for Ultra-High-Speed Terahertz-Band Communications Networks

    DTIC Science & Technology

    2017-01-01

    …throughput, and identify the optimal parameter values for their design (Sec. 6.2.3). Moreover, we validate and test the scheme with experimental data obtained… [Final technical report, State University of New York (SUNY) at Buffalo; dates covered: Feb 2015 – Sep 2016.]

  19. Validation of a smartphone app to map social networks of proximity

    PubMed Central

    Larsen, Mark E.; Townsend, Samuel; Christensen, Helen

    2017-01-01

    Social network analysis is a prominent approach to investigate interpersonal relationships. Most studies use self-report data to quantify the connections between participants and construct social networks. In recent years smartphones have been used as an alternative to map networks by assessing the proximity between participants based on Bluetooth and GPS data. While most studies have handed out specially programmed smartphones to study participants, we developed an application for iOS and Android to collect Bluetooth data from participants’ own smartphones. In this study, we compared the networks estimated with the smartphone app to those obtained from sociometric badges and self-report data. Participants (n = 21) installed the app on their phone and wore a sociometric badge during office hours. Proximity data were collected for 4 weeks. A contingency table revealed a significant association between the two sources of proximity data (ϕ = 0.17, p<0.0001), but the marginal odds were higher for the app (8.6%) than for the badges (1.3%), indicating that dyads were more often detected by the app. We then compared the networks that were estimated using the proximity and self-report data. All three networks were significantly correlated, although the correlation with self-reported data was lower for the app (ρ = 0.25) than for the badges (ρ = 0.67). The scanning rates of the app varied considerably between devices and were lower on iOS than on Android. The association between the app and the badges increased when the network was estimated between participants whose app recorded more regularly. These findings suggest that the accuracy of proximity networks can be further improved by reducing missing data and restricting the interpersonal distance at which interactions are detected. PMID:29261782
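
    The ϕ (phi) coefficient reported above is the standard association measure for a 2x2 contingency table of dyad detections (app detected vs. badge detected). A minimal sketch with made-up counts, not the study's data:

```python
import math

# 2x2 contingency table of dyad-detection outcomes (illustrative counts):
# a = detected by both, b = app only, c = badge only, d = neither.
a, b, c, d = 40, 360, 15, 3585

# Phi coefficient: (ad - bc) / sqrt of the product of the four marginals.
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

# Marginal detection rates, analogous to the "marginal odds" compared above.
app_rate = (a + b) / (a + b + c + d)      # 0.10
badge_rate = (a + c) / (a + b + c + d)    # 0.01375
```
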

  20. Sparse brain network using penalized linear regression

    NASA Astrophysics Data System (ADS)

    Lee, Hyekyoung; Lee, Dong Soo; Kang, Hyejin; Kim, Boong-Nyun; Chung, Moo K.

    2011-03-01

    Sparse partial correlation is a useful connectivity measure for brain networks when it is difficult to compute the exact partial correlation in the small-n large-p setting. In this paper, we formulate the problem of estimating partial correlation as a sparse linear regression with an l1-norm penalty. The method is applied to a brain network consisting of parcellated regions of interest (ROIs), which are obtained from FDG-PET images of autism spectrum disorder (ASD) children and pediatric control (PedCon) subjects. To validate the results, we check the reproducibility of the obtained brain networks by leave-one-out cross-validation and compare the clustered structures derived from the brain networks of ASD and PedCon.
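
    The l1-penalized regression idea can be sketched as lasso "neighborhood selection": regress each node on all others and propose an edge where a coefficient is nonzero. This is a hedged sketch on synthetic data with a plain ISTA lasso solver, not the authors' exact formulation:

```python
import numpy as np

# Synthetic small-n large-p stand-in for ROI measurements: 20 subjects,
# 10 ROIs, with ROI 1 tracking ROI 0 and the rest independent noise.
rng = np.random.default_rng(0)
n, p = 20, 10
X = rng.standard_normal((n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.standard_normal(n)
X -= X.mean(axis=0)

def lasso(A, y, alpha, iters=2000):
    """Plain ISTA for (1/2n)||y - A b||^2 + alpha * ||b||_1."""
    L = np.linalg.eigvalsh(A.T @ A / len(y)).max()   # Lipschitz constant
    b = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ b - y) / len(y)               # smooth gradient
        z = b - g / L
        b = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)
    return b

# Regress each ROI on all others; nonzero weights propose edges, kept
# only when selected in both directions (the conservative "AND" rule).
adj = np.zeros((p, p), dtype=bool)
for j in range(p):
    others = [i for i in range(p) if i != j]
    adj[j, others] = lasso(X[:, others], X[:, j], alpha=0.1) != 0.0

edges = adj & adj.T
```

    With this seed the strongly coupled pair (ROI 0, ROI 1) is recovered as an edge; the soft-thresholding step is what yields exact zeros and hence a sparse network.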

  1. Physical-layer network coding for passive optical interconnect in datacenter networks.

    PubMed

    Lin, Rui; Cheng, Yuxin; Guan, Xun; Tang, Ming; Liu, Deming; Chan, Chun-Kit; Chen, Jiajia

    2017-07-24

    We introduce the physical-layer network coding (PLNC) technique in a passive optical interconnect (POI) architecture for datacenter networks. The implementation of PLNC in the POI at 2.5 Gb/s and 10 Gb/s has been experimentally validated, while the gains in terms of network-layer performance have been investigated by simulation. The results reveal that, in order to realize negligible packet drop, wavelength usage can be reduced by half, while a significant improvement in packet delay, especially under high traffic load, can be achieved by employing PLNC over the POI.

  2. Networked Participatory Scholarship: Emergent Techno-Cultural Pressures toward Open and Digital Scholarship in Online Networks

    ERIC Educational Resources Information Center

    Veletsianos, George; Kimmons, Royce

    2012-01-01

    We examine the relationship between scholarly practice and participatory technologies and explore how such technologies invite and reflect the emergence of a new form of scholarship that we call "Networked Participatory Scholarship": scholars' participation in online social networks to share, reflect upon, critique, improve, validate, and…

  3. Spatial Upscaling of Long-term In Situ LAI Measurements from Global Network Sites for Validation of Remotely Sensed Products

    NASA Astrophysics Data System (ADS)

    Xu, B.; Jing, L.; Qinhuo, L.; Zeng, Y.; Yin, G.; Fan, W.; Zhao, J.

    2015-12-01

    Leaf area index (LAI) is a key parameter in terrestrial ecosystem models, and a series of global LAI products have been derived from satellite data. To effectively apply these LAI products, it is necessary to evaluate their accuracy reasonably. The long-term LAI measurements from the global network sites are an important supplement to the product validation dataset. However, the spatial scale mismatch between the site measurement and the pixel grid hinders the utilization of these measurements in LAI product validation. In this study, a pragmatic approach based on the Bayesian linear regression between long-term LAI measurements and high-resolution images is presented for upscaling the point-scale measurements to the pixel scale. The algorithm was evaluated using high-resolution LAI reference maps provided by the VALERI project at the Järvselja site and was implemented to upscale the long-term LAI measurements at the global network sites. Results indicate that the spatial scaling algorithm can reduce the root mean square error (RMSE) from 0.42 before upscaling to 0.21 after upscaling compared with the aggregated LAI reference maps at the pixel scale. Meanwhile, the algorithm shows better reliability and robustness than the ordinary least squares (OLS) method for upscaling some LAI measurements acquired at specific dates without high-resolution images. The upscaled LAI measurements were employed to validate three global LAI products, including MODIS, GLASS and GEOV1. Results indicate that (i) GLASS and GEOV1 show consistent temporal profiles over most sites, while MODIS exhibits temporal instability over a few forest sites. The RMSE of seasonality between products and upscaled LAI measurements is 0.25-1.72 for MODIS, 0.17-1.29 for GLASS and 0.36-1.35 for GEOV1 across the different sites. (ii) The uncertainty for products varies over different months. The lowest and highest uncertainty for MODIS are 0.67 in March and 1.53 in August, for GLASS are 0.67 in November
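
    The upscaling idea can be sketched as a conjugate Bayesian linear regression between point-scale LAI and a co-located high-resolution predictor, then predicting at the pixel scale. Everything below is synthetic and hedged: the vegetation-index predictor, prior settings, and numbers are invented stand-ins, not the paper's actual data or model:

```python
import numpy as np

# Synthetic point-scale calibration data: a high-resolution predictor
# (e.g. a vegetation index) versus in situ LAI, with small noise.
rng = np.random.default_rng(42)
vi_points = rng.uniform(0.2, 0.9, 30)
lai_points = 4.0 * vi_points + 0.3 + 0.1 * rng.standard_normal(30)

# Conjugate Bayesian linear regression: Gaussian prior N(0, 1/alpha * I)
# on weights, noise precision beta. Posterior mean is the ridge-like
# solution  m = beta * (alpha*I + beta*X^T X)^{-1} X^T y.
alpha, beta = 1e-3, 100.0
X = np.column_stack([vi_points, np.ones_like(vi_points)])  # slope + intercept
S_inv = alpha * np.eye(2) + beta * X.T @ X                 # posterior precision
w_mean = beta * np.linalg.solve(S_inv, X.T @ lai_points)

# Upscale: plug in the mean predictor over the coarse pixel to get a
# pixel-scale LAI estimate comparable with the satellite product.
pixel_vi = 0.55
pixel_lai = w_mean @ np.array([pixel_vi, 1.0])
```

    With the true synthetic relation LAI = 4.0*VI + 0.3, the pixel-scale estimate lands near 2.5; the Gaussian prior is what distinguishes this from the plain OLS baseline mentioned above.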

  4. Gene expression complex networks: synthesis, identification, and analysis.

    PubMed

    Lopes, Fabrício M; Cesar, Roberto M; Costa, Luciano Da F

    2011-10-01

    Thanks to recent advances in molecular biology, allied to an ever-increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously by using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning high-throughput new experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem regards how to validate such approaches and their results. This work presents an objective approach for validation of gene network modeling and identification which comprises the following three main aspects: (1) Artificial Gene Networks (AGNs) model generation through theoretical models of complex networks, which is used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, which is founded on a feature selection approach where a target gene is fixed and the expression profile is observed for all other genes in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used in order to simulate temporal expression data. The results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly-random Erdős-Rényi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabási-Albert (BA), and geographical networks (GG). The experimental results indicate that the inference
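
    The validation loop described in steps (1)-(3) can be sketched end to end: generate an artificial ground-truth network, run an identification step, and score the reconstruction against the truth. This is a hedged toy: the Erdős-Rényi generator stands in for the AGN models, and the "identification" is a noisy copy of the truth rather than inference from simulated expression data:

```python
import random
from itertools import combinations

# Step 1: artificial gene network (Erdős-Rényi with edge probability 0.1;
# WS/BA/geographical generators would slot in the same way).
random.seed(1)
genes = range(30)
pairs = [frozenset(p) for p in combinations(genes, 2)]
truth = {p for p in pairs if random.random() < 0.1}

# Step 2 (stand-in): a noisy recovery of the truth. A real pipeline would
# infer these edges from simulated temporal expression profiles.
inferred = {p for p in truth if random.random() < 0.8}                 # misses
inferred |= {p for p in pairs
             if p not in truth and random.random() < 0.02}             # false hits

# Step 3: validate the identified network against the original one.
tp = len(truth & inferred)
precision = tp / len(inferred)
recall = tp / len(truth)
```

    Precision and recall (or derived scores) quantify how faithfully the identification step recovers the generated AGN, which is the comparison the framework above formalizes.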

  5. Validation of SCIAMACHY HDO/H2O measurements using the TCCON and NDACC-MUSICA networks

    NASA Astrophysics Data System (ADS)

    Scheepmaker, R. A.; Frankenberg, C.; Deutscher, N. M.; Schneider, M.; Barthlott, S.; Blumenstock, T.; Garcia, O. E.; Hase, F.; Jones, N.; Mahieu, E.; Notholt, J.; Velazco, V.; Landgraf, J.; Aben, I.

    2015-04-01

    Measurements of the atmospheric HDO/H2O ratio help us to better understand the hydrological cycle and improve models to correctly simulate tropospheric humidity and therefore climate change. We present an updated version of the column-averaged HDO/H2O ratio data set from the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY). The data set is extended with 2 additional years, now covering 2003-2007, and is validated against co-located ground-based total column δD measurements from Fourier transform spectrometers (FTS) of the Total Carbon Column Observing Network (TCCON) and the Network for the Detection of Atmospheric Composition Change (NDACC, produced within the framework of the MUSICA project). Even though the time overlap among the available data is not yet ideal, we determined a mean negative bias in SCIAMACHY δD of -35 ± 30‰ compared to TCCON and -69 ± 15‰ compared to MUSICA (the uncertainty indicating the station-to-station standard deviation). The bias shows a latitudinal dependency, being largest (∼ -60 to -80‰) at the highest latitudes and smallest (∼ -20 to -30‰) at the lowest latitudes. We have tested the impact of an offset correction to the SCIAMACHY HDO and H2O columns. This correction leads to a humidity- and latitude-dependent shift in δD and an improvement of the bias by 27‰, although it does not lead to an improved correlation with the FTS measurements nor to a strong reduction of the latitudinal dependency of the bias. The correction might be an improvement for dry, high-altitude areas, such as the Tibetan Plateau and the Andes region. For these areas, however, validation is currently impossible due to a lack of ground stations. The mean standard deviation of single-sounding SCIAMACHY-FTS differences is ∼ 115‰, which is

  6. Validation of SCIAMACHY HDO/H2O measurements using the TCCON and NDACC-MUSICA networks

    NASA Astrophysics Data System (ADS)

    Scheepmaker, R. A.; Frankenberg, C.; Deutscher, N. M.; Schneider, M.; Barthlott, S.; Blumenstock, T.; Garcia, O. E.; Hase, F.; Jones, N.; Mahieu, E.; Notholt, J.; Velazco, V.; Landgraf, J.; Aben, I.

    2014-11-01

    Measurements of the atmospheric HDO/H2O ratio help us to better understand the hydrological cycle and improve models to correctly simulate tropospheric humidity and therefore climate change. We present an updated version of the column-averaged HDO/H2O ratio dataset from the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY). The dataset is extended with two additional years, now covering 2003-2007, and is validated against co-located ground-based total column δD measurements from Fourier-Transform Spectrometers (FTS) of the Total Carbon Column Observing Network (TCCON) and the Network for the Detection of Atmospheric Composition Change (NDACC, produced within the framework of the MUSICA project). Even though the time overlap between the available data is not yet ideal, we determined a mean negative bias in SCIAMACHY δD of -35±30‰ compared to TCCON and -69±15‰ compared to MUSICA (the uncertainty indicating the station-to-station standard deviation). The bias shows a latitudinal dependency, being largest (∼ -60 to -80‰) at the highest latitudes and smallest (∼ -20 to -30‰) at the lowest latitudes. We have tested the impact of an offset correction to the SCIAMACHY HDO and H2O columns. This correction leads to a humidity and latitude dependent shift in δD and an improvement of the bias by 27‰, although it does not lead to an improved correlation with the FTS measurements nor to a strong reduction of the latitudinal dependency of the bias. The correction might be an improvement for dry, high-altitude areas, such as the Tibetan Plateau and the Andes region. For these areas, however, validation is currently impossible due to a lack of ground stations. The mean standard deviation of single-sounding SCIAMACHY-FTS differences is ∼ 115‰, which is reduced
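
    The δD quantity compared in the two records above expresses the HDO/H2O ratio relative to the VSMOW standard, in per mil. A minimal sketch of that definition; the column values plugged in are illustrative, not SCIAMACHY data:

```python
# Standard HDO/H2O ratio: twice the D/H ratio of VSMOW (1.5576e-4),
# since each HDO molecule carries one deuterium atom.
R_STD = 3.1152e-4

def delta_d(hdo_column, h2o_column):
    """Column-averaged deltaD in per mil from HDO and H2O total columns."""
    return (hdo_column / h2o_column / R_STD - 1.0) * 1000.0

# Illustrative columns (molecules/cm^2): a scene whose HDO/H2O ratio is
# 10% below the standard gives deltaD = -100 per mil; dry, high-altitude
# scenes like those discussed above are far more depleted still.
dd = delta_d(2.3e19 * R_STD * 0.9, 2.3e19)
```
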

  7. Discovery and validation of a glioblastoma co-expressed gene module

    PubMed Central

    Dunwoodie, Leland J.; Poehlman, William L.; Ficklin, Stephen P.; Feltus, Frank Alexander

    2018-01-01

    Tumors exhibit complex patterns of aberrant gene expression. Using a knowledge-independent, noise-reducing gene co-expression network construction software called KINC, we created multiple RNAseq-based gene co-expression networks relevant to brain and glioblastoma biology. In this report, we describe the discovery and validation of a glioblastoma-specific gene module that contains 22 co-expressed genes. The genes are upregulated in glioblastoma relative to normal brain and lower grade glioma samples; they are also hypomethylated in glioblastoma relative to lower grade glioma tumors. Among the proneural, neural, mesenchymal, and classical glioblastoma subtypes, these genes are most highly expressed in the mesenchymal subtype. Furthermore, high expression of these genes is associated with decreased survival across each glioblastoma subtype. These genes are of interest to glioblastoma biology and our gene interaction discovery and validation workflow can be used to discover and validate co-expressed gene modules derived from any co-expression network. PMID:29541392
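
The basic co-expression step that network-construction tools such as KINC build on can be sketched as a thresholded Pearson correlation matrix. This is a toy illustration with random data, one planted correlated gene pair, and an arbitrary cutoff, not the KINC algorithm itself:

```python
# Toy co-expression network: pairwise Pearson correlation across samples,
# keeping edges above a threshold. Data and cutoff are illustrative.
import numpy as np

rng = np.random.default_rng(0)
genes, samples = 50, 30
expr = rng.normal(size=(genes, samples))             # random expression matrix
expr[1] = expr[0] + 0.1 * rng.normal(size=samples)   # plant one co-expressed pair

corr = np.corrcoef(expr)                             # gene-by-gene correlations
threshold = 0.85                                     # illustrative cutoff
edges = [(i, j) for i in range(genes) for j in range(i + 1, genes)
         if abs(corr[i, j]) >= threshold]
```

Connected components (or denser community-detection) over `edges` would then yield candidate modules like the 22-gene module described above.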

  8. Discovery and validation of a glioblastoma co-expressed gene module.

    PubMed

    Dunwoodie, Leland J; Poehlman, William L; Ficklin, Stephen P; Feltus, Frank Alexander

    2018-02-16

    Tumors exhibit complex patterns of aberrant gene expression. Using a knowledge-independent, noise-reducing gene co-expression network construction software called KINC, we created multiple RNAseq-based gene co-expression networks relevant to brain and glioblastoma biology. In this report, we describe the discovery and validation of a glioblastoma-specific gene module that contains 22 co-expressed genes. The genes are upregulated in glioblastoma relative to normal brain and lower grade glioma samples; they are also hypomethylated in glioblastoma relative to lower grade glioma tumors. Among the proneural, neural, mesenchymal, and classical glioblastoma subtypes, these genes are most highly expressed in the mesenchymal subtype. Furthermore, high expression of these genes is associated with decreased survival across each glioblastoma subtype. These genes are of interest to glioblastoma biology and our gene interaction discovery and validation workflow can be used to discover and validate co-expressed gene modules derived from any co-expression network.

  9. Scintillometer networks for calibration and validation of energy balance and soil moisture remote sensing algorithms

    NASA Astrophysics Data System (ADS)

    Hendrickx, Jan M. H.; Kleissl, Jan; Gómez Vélez, Jesús D.; Hong, Sung-ho; Fábrega Duque, José R.; Vega, David; Moreno Ramírez, Hernán A.; Ogden, Fred L.

    2007-04-01

    Accurate estimation of sensible and latent heat fluxes, as well as soil moisture, from remotely sensed satellite images poses a great challenge. Yet it is critical to face this challenge, since estimating the spatial and temporal distributions of these parameters over large areas is impossible using ground measurements alone. A major difficulty for the calibration and validation of operational remote sensing methods such as SEBAL, METRIC, and ALEXI is the ground measurement of sensible heat fluxes at a scale similar to the spatial resolution of the remote sensing image. While the spatial length scale of remote sensing images covers a range from 30 m (Landsat) to 1000 m (MODIS), direct methods to measure sensible heat fluxes, such as eddy covariance (EC), only provide point measurements at a scale that may be considerably smaller than the estimate obtained from a remote sensing method. The large-aperture scintillometer (LAS) flux footprint area is larger (up to 5000 m long) and its spatial extent better constrained than that of EC systems. Therefore, scintillometers offer the unique possibility of measuring the vertical flux of sensible heat averaged over areas comparable with several pixels of a satellite image (up to about 40 Landsat thermal pixels or about 5 MODIS thermal pixels). The objective of this paper is to present our experiences with an existing network of seven scintillometers in New Mexico and a planned network of three scintillometers in the humid tropics of Panama and Colombia.

  10. Design and validation of a real-time spiking-neural-network decoder for brain-machine interfaces.

    PubMed

    Dethier, Julie; Nuyujukian, Paul; Ryu, Stephen I; Shenoy, Krishna V; Boahen, Kwabena

    2013-06-01

    Cortically-controlled motor prostheses aim to restore functions lost to neurological disease and injury. Several proof of concept demonstrations have shown encouraging results, but barriers to clinical translation still remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage cortex. One possible solution is to use ultra-low power neuromorphic chips to decode neural signals for these intracortical implants. The first step is to explore in simulation the feasibility of translating decoding algorithms for brain-machine interface (BMI) applications into spiking neural networks (SNNs). Here we demonstrate the validity of the approach by implementing an existing Kalman-filter-based decoder in a simulated SNN using the Neural Engineering Framework (NEF), a general method for mapping control algorithms onto SNNs. To measure this system's robustness and generalization, we tested it online in closed-loop BMI experiments with two rhesus monkeys. Across both monkeys, a Kalman filter implemented using a 2000-neuron SNN has comparable performance to that of a Kalman filter implemented using standard floating point techniques. These results demonstrate the tractability of SNN implementations of statistical signal processing algorithms on different monkeys and for several tasks, suggesting that a SNN decoder, implemented on a neuromorphic chip, may be a feasible computational platform for low-power fully-implanted prostheses. The validation of this closed-loop decoder system and the demonstration of its robustness and generalization hold promise for SNN implementations on an ultra-low power neuromorphic chip using the NEF.
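
The Kalman-filter decoder that the SNN re-implements follows the standard predict/update recursion, with cursor kinematics as the hidden state and binned firing rates as the observations. A minimal sketch; the matrices below are small illustrative placeholders, not values fitted to neural data:

```python
# One predict/update cycle of a Kalman-filter BMI decoder.
# All matrices are illustrative placeholders (a real decoder learns them
# from training data); 3 "neurons" observe a 2-D velocity state.
import numpy as np

A = np.array([[0.95, 0.0], [0.0, 0.95]])            # velocity state transition
W = 0.01 * np.eye(2)                                # process noise covariance
C = np.array([[1.0, 0.2], [0.1, 0.9], [0.5, 0.5]])  # neuron-tuning matrix
Q = 0.05 * np.eye(3)                                # observation noise covariance

def kalman_step(x, P, y):
    """Predict the state forward, then correct it with firing-rate vector y."""
    x_pred, P_pred = A @ x, A @ P @ A.T + W
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + Q)   # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(2) - K @ C) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, np.array([0.5, 0.3, 0.4]))  # one bin of firing rates
```

Mapping this recursion onto a 2000-neuron SNN via the NEF, as the paper does, amounts to approximating the linear algebra above with population-coded dynamics.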

  11. Cross-Cultural Validation of the Five-Factor Structure of Social Goals: A Filipino Investigation

    ERIC Educational Resources Information Center

    King, Ronnel B.; Watkins, David A.

    2012-01-01

    The aim of the present study was to test the cross-cultural validity of the five-factor structure of social goals that Dowson and McInerney proposed. Using both between-network and within-network approaches to construct validation, 1,147 Filipino high school students participated in the study. Confirmatory factor analysis indicated that the…

  12. Optimization of multilayer neural network parameters for speaker recognition

    NASA Astrophysics Data System (ADS)

    Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka

    2016-05-01

    This article discusses the impact of multilayer neural network parameters on speaker identification. The main task of speaker identification is to find a specific person in a known set of speakers, i.e., to determine whether the voice of an unknown speaker (the wanted person) belongs to one of the reference speakers in the voice database. One of the requirements was to develop a text-independent system, which means classifying the wanted person regardless of content and language. A multilayer neural network was used for speaker identification in this research. An artificial neural network (ANN) requires setting parameters such as the activation function of the neurons, the steepness of the activation functions, the learning rate, the maximum number of iterations, and the number of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by these parameter settings, and different tasks require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings. The goal was to find the parameters that give the neural network the highest precision and shortest validation time. The input data of the neural network are Mel-frequency cepstral coefficients (MFCCs), which describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment. The data were split into training, testing, and validation sets of 70, 15, and 15%. The result of the research described in this article is a recommended parameter setting for the multilayer neural network for four speakers.
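
The parameter search described above can be sketched with a small one-hidden-layer network trained by backpropagation. Everything below is illustrative: Gaussian clusters stand in for the MFCC feature vectors, and the grid of hidden sizes and learning rates is not the authors' parameter set:

```python
# Toy grid search over MLP parameters for a 4-speaker identification task.
# Synthetic 13-dimensional "MFCC" clusters; 70/15/15 train/test/validation split.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(100, 13)) for i in range(4)])
y = np.repeat(np.arange(4), 100)
X = (X - X.mean(0)) / X.std(0)          # standardize features
idx = rng.permutation(len(X))
X, y = X[idx], y[idx]
X_tr, X_te, X_va = X[:280], X[280:340], X[340:]
y_tr, y_te, y_va = y[:280], y[280:340], y[340:]

def train_mlp(hidden, lr, epochs=300):
    """One-hidden-layer network trained with plain batch backpropagation."""
    W1 = rng.normal(0, 0.1, (13, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, 4));  b2 = np.zeros(4)
    T = np.eye(4)[y_tr]                                  # one-hot targets
    for _ in range(epochs):
        H = np.tanh(X_tr @ W1 + b1)
        Z = H @ W2 + b2
        P = np.exp(Z - Z.max(1, keepdims=True))
        P /= P.sum(1, keepdims=True)                     # softmax output
        dZ = (P - T) / len(X_tr)
        dH = (dZ @ W2.T) * (1 - H**2)                    # tanh derivative
        W2 -= lr * H.T @ dZ; b2 -= lr * dZ.sum(0)
        W1 -= lr * X_tr.T @ dH; b1 -= lr * dH.sum(0)
    return W1, b1, W2, b2

def accuracy(params, Xs, ys):
    W1, b1, W2, b2 = params
    return np.mean((np.tanh(Xs @ W1 + b1) @ W2 + b2).argmax(1) == ys)

# Grid over hidden-layer size and learning rate, scored on the validation set
best = max((accuracy(train_mlp(h, lr), X_va, y_va), h, lr)
           for h in (8, 16) for lr in (0.1, 0.5))
```

In the study, validation time would be measured alongside accuracy for each grid point; here only accuracy selects the winner.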

  13. Brief report: The Brief Alcohol Social Density Assessment (BASDA): convergent, criterion-related, and incremental validity.

    PubMed

    MacKillop, James; Acker, John D; Bollinger, Jared; Clifton, Allan; Miller, Joshua D; Campbell, W Keith; Goodie, Adam S

    2013-09-01

    Alcohol misuse is substantially influenced by social factors, but systematic assessments of social network drinking are typically lengthy. The goal of the present study was to provide further validation of a brief measure of social network alcohol use, the Brief Alcohol Social Density Assessment (BASDA), in a sample of emerging adults. Specifically, the study sought to examine the BASDA's convergent, criterion, and incremental validity in relation to well-established measures of drinking motives and problematic drinking. Participants were 354 undergraduates who were assessed using the BASDA, the Alcohol Use Disorders Identification Test (AUDIT), and the Drinking Motives Questionnaire. Significant associations were observed between the BASDA index of alcohol-related social density and alcohol misuse, social motives, and conformity motives, supporting convergent validity. Criterion-related validity was supported by evidence that significantly greater alcohol involvement was present in the social networks of individuals scoring at or above an AUDIT score of 8, a validated criterion for hazardous drinking. Finally, the BASDA index was significantly associated with alcohol misuse above and beyond drinking motives in relation to AUDIT scores, supporting incremental validity. Taken together, these findings provide further support for the BASDA as an efficient measure of drinking in an individual's social network. Methodological considerations as well as recommendations for future investigations in this area are discussed.

  14. A Global Lake Ecological Observatory Network (GLEON) for synthesising high-frequency sensor data for validation of deterministic ecological models

    USGS Publications Warehouse

    Hamilton, David P.; Carey, Cayelan C.; Arvola, Lauri; Arzberger, Peter; Brewer, Carol A.; Cole, Jon J; Gaiser, Evelyn; Hanson, Paul C.; Ibelings, Bas W; Jennings, Eleanor; Kratz, Tim K; Lin, Fang-Pang; McBride, Christopher G.; de Motta Marques, David; Muraoka, Kohji; Nishri, Ami; Qin, Boqiang; Read, Jordan S.; Rose, Kevin C.; Ryder, Elizabeth; Weathers, Kathleen C.; Zhu, Guangwei; Trolle, Dennis; Brookes, Justin D

    2014-01-01

    A Global Lake Ecological Observatory Network (GLEON; www.gleon.org) has formed to provide a coordinated response to the need for scientific understanding of lake processes, utilising technological advances available from autonomous sensors. The organisation embraces a grassroots approach to engage researchers from varying disciplines, sites spanning geographic and ecological gradients, and novel sensor and cyberinfrastructure to synthesise high-frequency lake data at scales ranging from local to global. The high-frequency data provide a platform to rigorously validate process-based ecological models because model simulation time steps are better aligned with sensor measurements than with lower-frequency, manual samples. Two case studies from Trout Bog, Wisconsin, USA, and Lake Rotoehu, North Island, New Zealand, are presented to demonstrate that in the past, ecological model outputs (e.g., temperature, chlorophyll) have been relatively poorly validated based on a limited number of directly comparable measurements, both in time and space. The case studies demonstrate some of the difficulties of mapping sensor measurements directly to model state variable outputs as well as the opportunities to use deviations between sensor measurements and model simulations to better inform process understanding. Well-validated ecological models provide a mechanism to extrapolate high-frequency sensor data in space and time, thereby potentially creating a fully 3-dimensional simulation of key variables of interest.
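
The core validation step the text describes, aligning coarser model output with high-frequency sensor samples and scoring the deviation, can be sketched with simple interpolation; the temperature series below are synthetic stand-ins:

```python
# Toy model-vs-sensor comparison: interpolate hourly model output onto
# 10-minute sensor timestamps and summarize the deviation as an RMSE.
# Both series are synthetic stand-ins for lake water temperature.
import numpy as np

rng = np.random.default_rng(0)
model_t = np.arange(0.0, 49.0, 1.0)                  # hourly model steps (h)
model_temp = 20 + 2 * np.sin(2 * np.pi * model_t / 24)
sensor_t = np.arange(0.0, 48.0, 1 / 6)               # 10-minute sensor samples
sensor_temp = (20 + 2 * np.sin(2 * np.pi * sensor_t / 24)
               + rng.normal(0, 0.1, sensor_t.size))  # sensor noise

model_on_sensor = np.interp(sensor_t, model_t, model_temp)
rmse = float(np.sqrt(np.mean((model_on_sensor - sensor_temp) ** 2)))
```

The deviation series `model_on_sensor - sensor_temp`, rather than the single RMSE, is what the case studies use to inform process understanding.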

  15. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    PubMed

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performance than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
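
The two-phase scheme can be sketched as a winner-take-all competitive learning loop that places the RBF centres, followed by a least-squares fit of the output weights. The data, centre count, and kernel width below are illustrative placeholders, not the study's descriptors or settings:

```python
# Phase 1: simple competitive learning places RBF centres.
# Phase 2: a Gaussian RBF network with those centres is fitted by least squares.
# Synthetic 2-D "descriptors" and a smooth stand-in "activity" are used.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))          # descriptor vectors
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]        # stand-in biological activity

# Phase 1: online winner-take-all updates (the essence of SCL)
k, lr = 10, 0.1
centres = X[rng.choice(len(X), k, replace=False)].copy()
for epoch in range(20):
    for x in X[rng.permutation(len(X))]:
        w = np.argmin(((centres - x) ** 2).sum(axis=1))  # winning centre
        centres[w] += lr * (x - centres[w])              # move winner toward x

# Phase 2: Gaussian basis functions at the learned centres, linear read-out
sigma = 0.5
G = np.exp(-((X[:, None, :] - centres[None]) ** 2).sum(-1) / (2 * sigma**2))
weights, *_ = np.linalg.lstsq(G, y, rcond=None)
pred = G @ weights
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

External validation, as in the paper, would score `pred` on compounds held out from both phases rather than on the training set.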

  16. Optimal network alignment with graphlet degree vectors.

    PubMed

    Milenković, Tijana; Ng, Weng Leong; Hayes, Wayne; Przulj, Natasa

    2010-06-30

    Important biological information is encoded in the topology of biological networks. Comparative analyses of biological networks are proving to be valuable, as they can lead to the transfer of knowledge between species and give deeper insights into biological function, disease, and evolution. We introduce a new method that uses the Hungarian algorithm to produce an optimal global alignment between two networks under any cost function. We design a cost function based solely on network topology and use it in our network alignment. Our method can be applied to any two networks, not just biological ones, since it is based only on network topology. We use our new method to align protein-protein interaction networks of two eukaryotic species and demonstrate that our alignment exposes large and topologically complex regions of network similarity. At the same time, our alignment is biologically valid, since many of the aligned protein pairs perform the same biological function. From the alignment, we predict the function of yet-unannotated proteins, many of which we validate in the literature. We also apply our method to find topological similarities between metabolic networks of different species and build phylogenetic trees based on our network alignment score. The phylogenetic trees obtained in this way bear a striking resemblance to those obtained by sequence alignments. Our method detects statistically significant topologically similar regions in large networks, and it does this independently of protein sequence or any other information external to network topology.
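
A minimal sketch of topology-only alignment via optimal assignment, using plain node degrees as a toy stand-in for the paper's graphlet degree vectors:

```python
# Toy topology-only network alignment: build a node-to-node cost matrix from a
# topological signature and solve the optimal assignment problem (the problem
# the Hungarian algorithm solves; scipy's solver is used here).
# Node degree stands in for the 73-component graphlet degree vectors.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Two toy 4-node networks given as adjacency matrices
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]])
B = np.array([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]])

deg_a, deg_b = A.sum(1), B.sum(1)
# Cost of aligning node i of A with node j of B: signature difference
cost = np.abs(deg_a[:, None] - deg_b[None, :]).astype(float)

rows, cols = linear_sum_assignment(cost)   # optimal one-to-one alignment
total_cost = float(cost[rows, cols].sum())
```

Because any cost matrix can be plugged in, richer signatures (graphlet degree vectors, or combinations with sequence scores) slot into the same assignment step.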

  17. Wayfinding in Social Networks

    NASA Astrophysics Data System (ADS)

    Liben-Nowell, David

    With the recent explosion of popularity of commercial social-networking sites like Facebook and MySpace, the size of social networks that can be studied scientifically has passed from the scale traditionally studied by sociologists and anthropologists to the scale of networks more typically studied by computer scientists. In this chapter, I will highlight a recent line of computational research into the modeling and analysis of the small-world phenomenon - the observation that typical pairs of people in a social network are connected by very short chains of intermediate friends - and the ability of members of a large social network to collectively find efficient routes to reach individuals in the network. I will survey several recent mathematical models of social networks that account for these phenomena, with an emphasis on both the provable properties of these social-network models and the empirical validation of the models against real large-scale social-network data.

  18. Validation of a Social Networks and Support Measurement Tool for Use in International Aging Research: The International Mobility in Aging Study.

    PubMed

    Ahmed, Tamer; Belanger, Emmanuelle; Vafaei, Afshin; Koné, Georges K; Alvarado, Beatriz; Béland, François; Zunzunegui, Maria Victoria

    2018-03-01

    The purpose of this study was to develop and validate a new instrument to assess social networks and social support (IMIAS-SNSS) for different types of social ties in an international sample of older adults. The study sample included n = 1995 community-dwelling older people aged between 65 and 74 years from the baseline of the longitudinal International Mobility in Aging Study (IMIAS). To measure social networks for each type of social tie, participants were asked about the number of contacts, the number of contacts they see at least once a month, have a very good relationship with, or speak with at least once a month. For social support, participants rated the level of social support provided by the four types of contacts on five Likert-scale items. Confirmatory factor analysis (CFA) was conducted to determine the goodness of fit of the measurement models. Satisfactory goodness-of-fit indices confirmed the factorial structure of the IMIAS-SNSS instrument. Reliability coefficients were 0.80, 0.81, 0.85, and 0.88 for the friends, children, family, and partner models, respectively, and the models were confirmed by CFA for each type of social tie. Moreover, IMIAS-SNSS detected gender differences in the older adult populations of IMIAS. These results provide evidence that IMIAS-SNSS is a psychometrically sound, valid and reliable instrument for international populations of older adults.

  19. The application of neural networks to the SSME startup transient

    NASA Technical Reports Server (NTRS)

    Meyer, Claudia M.; Maul, William A.

    1991-01-01

    Feedforward neural networks were used to model three parameters during the Space Shuttle Main Engine (SSME) startup transient. The three parameters were the main combustion chamber pressure, a controlled parameter; the high pressure oxidizer turbine discharge temperature, a redlined parameter; and the high pressure fuel pump discharge pressure, a failure-indicating performance parameter. Network inputs consisted of time windows of data from engine measurements that correlated highly with the modeled parameter. A standard backpropagation algorithm was used to train the feedforward networks on two nominal firings. Each trained network was validated with four additional nominal firings. For all three parameters, the neural networks were able to accurately predict the data in the validation sets as well as the training set.
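
The time-window input construction described above can be sketched as follows; the series are synthetic, and a linear least-squares read-out stands in for the backpropagation-trained network:

```python
# Sketch of time-window inputs for transient modeling: each training sample
# stacks the last w values of a correlated measurement to predict the modeled
# parameter at the current step. Series are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 500)
sensor = np.sin(t) + 0.01 * rng.normal(size=t.size)  # correlated measurement
target = np.sin(t - 0.1)                             # parameter being modeled

w = 8  # samples per input time window
X = np.stack([sensor[i - w:i] for i in range(w, t.size)])
y = target[w:]

# Linear read-out in place of the backpropagation-trained feedforward net
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

In the study, windows drawn from two nominal firings would form the training set, with windows from four further firings held out for validation.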

  20. EARLINET validation of CATS L2 product

    NASA Astrophysics Data System (ADS)

    Proestakis, Emmanouil; Amiridis, Vassilis; Kottas, Michael; Marinou, Eleni; Binietoglou, Ioannis; Ansmann, Albert; Wandinger, Ulla; Yorks, John; Nowottnick, Edward; Makhmudov, Abduvosit; Papayannis, Alexandros; Pietruczuk, Aleksander; Gialitaki, Anna; Apituley, Arnoud; Muñoz-Porcar, Constantino; Bortoli, Daniele; Dionisi, Davide; Althausen, Dietrich; Mamali, Dimitra; Balis, Dimitris; Nicolae, Doina; Tetoni, Eleni; Luigi Liberti, Gian; Baars, Holger; Stachlewska, Iwona S.; Voudouri, Kalliopi-Artemis; Mona, Lucia; Mylonaki, Maria; Rita Perrone, Maria; João Costa, Maria; Sicard, Michael; Papagiannopoulos, Nikolaos; Siomos, Nikolaos; Burlizzi, Pasquale; Engelmann, Ronny; Abdullaev, Sabur F.; Hofer, Julian; Pappalardo, Gelsomina

    2018-04-01

    The Cloud-Aerosol Transport System (CATS), onboard the International Space Station (ISS), is a lidar system that has provided vertically resolved aerosol and cloud profiles since February 2015. In this study, the CATS aerosol product is validated against the aerosol profiles provided by the European Aerosol Research Lidar Network (EARLINET). This validation activity is based on collocated CATS-EARLINET measurements and the comparison of the particle backscatter coefficient at 1064 nm.

  1. Implementation of WirelessHART in the NS-2 Simulator and Validation of Its Correctness

    PubMed Central

    Zand, Pouria; Mathews, Emi; Havinga, Paul; Stojanovski, Spase; Sisinni, Emiliano; Ferrari, Paolo

    2014-01-01

    One of the first standards in the wireless sensor networks domain, WirelessHART (HART (Highway Addressable Remote Transducer)), was introduced to address industrial process automation and control requirements. This standard can be used as a reference point to evaluate other wireless protocols in the domain of industrial monitoring and control. This makes it worthwhile to set up a reliable WirelessHART simulator in order to achieve that reference point in a relatively easy manner. Moreover, it offers an alternative to expensive testbeds for testing and evaluating the performance of WirelessHART. This paper explains our implementation of WirelessHART in the NS-2 network simulator. To our knowledge, this is the first implementation that supports the WirelessHART network manager, as well as the whole stack (all OSI (Open Systems Interconnection model) layers) of the WirelessHART standard. It also explains our effort to validate the correctness of our implementation, namely through the validation of the implementation of the WirelessHART stack protocol and of the network manager. We use sniffed traffic from a real WirelessHART testbed installed in the Idrolab plant for these validations. This confirms the validity of our simulator. Empirical analysis shows that the simulated results are closely comparable to the results obtained from real networks. We also demonstrate the versatility and usability of our implementation by providing further evaluation results in diverse scenarios. For example, we evaluate the performance of the WirelessHART network by applying incremental interference in a multi-hop network. PMID:24841245

  2. The Correlation Fractal Dimension of Complex Networks

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Liu, Zhenzhen; Wang, Mogei

    2013-05-01

    The fractality of complex networks is studied by estimating the correlation dimensions of the networks. Compared with previous algorithms for estimating the box dimension, our algorithm achieves a significant reduction in time complexity. For four benchmark cases, namely the Escherichia coli (E. coli) metabolic network, the Homo sapiens protein interaction network (H. sapiens PIN), the Saccharomyces cerevisiae protein interaction network (S. cerevisiae PIN) and the World Wide Web (WWW), experiments are provided to demonstrate the validity of our algorithm.
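
One simple way to estimate a correlation dimension of a network (not the authors' algorithm) is to count node pairs within shortest-path radius r and take the log-log slope of that correlation sum. On a 2-D lattice the slope should approach 2 for large lattices, though finite-size boundary effects bias this toy estimate low:

```python
# Toy correlation-dimension estimate for a network: the correlation sum C(r)
# counts ordered node pairs within shortest-path distance r, and the dimension
# is the slope of log C(r) vs log r. A small 2-D grid graph is used; boundary
# effects on this small lattice pull the slope below the asymptotic value of 2.
from collections import deque
import math

n = 21  # 21 x 21 grid graph

def neighbours(v):
    i, j = v
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < n and 0 <= j + dj < n:
            yield (i + di, j + dj)

def bfs_dists(src):
    """Shortest-path distances from src by breadth-first search."""
    dist, q = {src: 0}, deque([src])
    while q:
        v = q.popleft()
        for u in neighbours(v):
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist

radii = (2, 8)
counts = dict.fromkeys(radii, 0)
for i in range(n):
    for j in range(n):
        for d in bfs_dists((i, j)).values():
            for r in radii:
                if 0 < d <= r:
                    counts[r] += 1

slope = (math.log(counts[8]) - math.log(counts[2])) / (math.log(8) - math.log(2))
```

For real complex networks, the same correlation sum is computed over the graph's shortest-path metric; the efficiency gains reported above come from avoiding the box-covering step entirely.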

  3. Design and validation of a real-time spiking-neural-network decoder for brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Dethier, Julie; Nuyujukian, Paul; Ryu, Stephen I.; Shenoy, Krishna V.; Boahen, Kwabena

    2013-06-01

    Objective. Cortically-controlled motor prostheses aim to restore functions lost to neurological disease and injury. Several proof of concept demonstrations have shown encouraging results, but barriers to clinical translation still remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage cortex. Approach. One possible solution is to use ultra-low power neuromorphic chips to decode neural signals for these intracortical implants. The first step is to explore in simulation the feasibility of translating decoding algorithms for brain-machine interface (BMI) applications into spiking neural networks (SNNs). Main results. Here we demonstrate the validity of the approach by implementing an existing Kalman-filter-based decoder in a simulated SNN using the Neural Engineering Framework (NEF), a general method for mapping control algorithms onto SNNs. To measure this system’s robustness and generalization, we tested it online in closed-loop BMI experiments with two rhesus monkeys. Across both monkeys, a Kalman filter implemented using a 2000-neuron SNN has comparable performance to that of a Kalman filter implemented using standard floating point techniques. Significance. These results demonstrate the tractability of SNN implementations of statistical signal processing algorithms on different monkeys and for several tasks, suggesting that a SNN decoder, implemented on a neuromorphic chip, may be a feasible computational platform for low-power fully-implanted prostheses. The validation of this closed-loop decoder system and the demonstration of its robustness and generalization hold promise for SNN implementations on an ultra-low power neuromorphic chip using the NEF.

  4. Development and Validation of the Air Force Cyber Intruder Alert Testbed (CIAT)

    DTIC Science & Technology

    2016-07-27

    Development and Validation of the Air Force Cyber Intruder Alert Testbed (CIAT). Gregory Funke, Gregory Dye, Brett Borghetti. Contract FA8650-16-C-6722. A new cyber synthetic task environment (STE) focused on network analysts, called the Air Force Cyber Intruder Alert Testbed (CIAT), was developed.

  5. Security-Enhanced Autonomous Network Management

    NASA Technical Reports Server (NTRS)

    Zeng, Hui

    2015-01-01

    Ensuring reliable communication in next-generation space networks requires a novel network management system to support greater levels of autonomy and greater awareness of the environment and assets. Intelligent Automation, Inc., has developed a security-enhanced autonomous network management (SEANM) approach for space networks through cross-layer negotiation and network monitoring, analysis, and adaptation. The underlying technology is bundle-based delay/disruption-tolerant networking (DTN). The SEANM scheme allows a system to adaptively reconfigure its network elements based on awareness of network conditions, policies, and mission requirements. Although SEANM is generically applicable to any radio network, for validation purposes it has been prototyped and evaluated on two specific networks: a commercial off-the-shelf hardware test-bed using Institute of Electrical and Electronics Engineers (IEEE) 802.11 Wi-Fi devices and a military hardware test-bed using AN/PRC-154 Rifleman Radio platforms. Testing has demonstrated that SEANM provides autonomous network management resulting in reliable communications in delay/disruption-prone environments.

  6. Application of Petri net theory for modelling and validation of the sucrose breakdown pathway in the potato tuber.

    PubMed

    Koch, Ina; Junker, Björn H; Heiner, Monika

    2005-04-01

    Because of the complexity of metabolic networks and their regulation, formal modelling is a useful method for improving the understanding of these systems. An essential step in network modelling is to validate the network model. Petri net theory provides algorithms and methods that can be applied directly to metabolic network modelling and analysis in order to validate the model. The metabolism between sucrose and starch in the potato tuber is of great research interest; although it is one of the best-studied metabolisms in sink organs, it is not yet fully understood. We provide an approach for model validation of metabolic networks using Petri net theory, which we demonstrate for the sucrose breakdown pathway in the potato tuber. We start with hierarchical modelling of the metabolic network as a Petri net and continue with the analysis of qualitative properties of the network. The results characterize the net structure and give insights into the complex net behaviour.
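
The Petri net formalism used here can be sketched with a toy marking/firing rule: places hold tokens (metabolite availability), and a transition (reaction) fires only when all of its input places are marked. The two reactions below are illustrative, not the validated pathway model:

```python
# Toy Petri net: places are metabolites, transitions are reactions.
# pre/post map each transition to its input and output places.
# The reactions are illustrative examples of sucrose breakdown steps.
pre  = {"invertase": ["sucrose"], "hexokinase": ["glucose", "atp"]}
post = {"invertase": ["glucose", "fructose"], "hexokinase": ["g6p", "adp"]}

def enabled(marking, t):
    """A transition is enabled when every input place holds a token."""
    return all(marking.get(p, 0) >= 1 for p in pre[t])

def fire(marking, t):
    """Consume one token from each input place, produce one on each output."""
    assert enabled(marking, t)
    m = dict(marking)
    for p in pre[t]:
        m[p] -= 1
    for p in post[t]:
        m[p] = m.get(p, 0) + 1
    return m

m0 = {"sucrose": 1, "atp": 1}
m1 = fire(m0, "invertase")       # yields glucose + fructose
m2 = fire(m1, "hexokinase")      # now enabled: glucose and ATP are present
```

Qualitative validation in the Petri net sense then asks structural questions about this net, e.g. which transitions can ever fire and which token invariants hold, rather than simulating kinetics.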

  7. Network Compression as a Quality Measure for Protein Interaction Networks

    PubMed Central

    Royer, Loic; Reimann, Matthias; Stewart, A. Francis; Schroeder, Michael

    2012-01-01

    With the advent of large-scale protein interaction studies, there is much debate about data quality. Can different noise levels in the measurements be assessed by analyzing network structure? Because proteomic regulation is co-operative, modular and redundant, it is inherently compressible when represented as a network. Here we propose that network compression can be used to compare false positive and false negative noise levels in protein interaction networks. We validate this hypothesis by first confirming the detrimental effect of false positives and false negatives. Second, we show that gold standard networks are more compressible. Third, we show that compressibility correlates with co-expression, co-localization, and shared function. Fourth, we also observe correlation with better protein tagging methods, physiological expression in contrast to over-expression of tagged proteins, and smart pooling approaches for yeast two-hybrid screens. Overall, this new measure is a proxy for both sensitivity and specificity and gives complementary information to standard measures such as average degree and clustering coefficients. PMID:22719828
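
The central idea, that modular, redundant networks are more compressible, can be illustrated with a generic byte-level compressor standing in for the paper's method; the block-structured and uniform random adjacency matrices below are toy examples:

```python
# Toy compressibility comparison: a block-modular adjacency matrix compresses
# better than a uniformly random one of the same overall density. zlib is a
# generic stand-in for the paper's compression measure; symmetry is ignored
# for brevity.
import random, zlib

random.seed(0)
n = 200

def adjacency_bytes(edge_prob):
    """Adjacency matrix packed row-by-row as a byte string of 0/1 values."""
    rows = []
    for i in range(n):
        rows.append(bytes(1 if random.random() < edge_prob(i, j) else 0
                          for j in range(n)))
    return b"".join(rows)

# Modular network: dense within 4 blocks of 50 nodes, sparse between blocks
modular = adjacency_bytes(lambda i, j: 0.6 if i // 50 == j // 50 else 0.01)
# Uniform random network with roughly the same overall edge density
density = 0.6 * 0.25 + 0.01 * 0.75
uniform = adjacency_bytes(lambda i, j: density)

ratio_mod = len(zlib.compress(modular)) / len(modular)
ratio_uni = len(zlib.compress(uniform)) / len(uniform)
```

Adding false-positive edges pushes a modular matrix toward the uniform case, so its compression ratio degrades, which is the noise signal the paper exploits.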

  8. Toward multidomain integrated network management for ATM and SDH networks

    NASA Astrophysics Data System (ADS)

    Galis, Alex; Gantenbein, Dieter; Covaci, Stefan; Bianza, Carlo; Karayannis, Fotis; Mykoniatis, George

    1996-12-01

    ACTS Project AC080 MISA has embarked upon the task of realizing and validating, via European field trials, integrated end-to-end management of hybrid SDH and ATM networks in the framework of open network provision. This paper reflects the initial work of the project and gives an overview of the proposed MISA system architecture and initial design. We describe our understanding of the underlying enterprise model in the network management context, including the concept of the MISA Global Broadband Connectivity Management service. It supports Integrated Broadband Communication by defining an end-to-end broadband connection service in a multi-domain business environment. Its implementation by the MISA consortium within trials across Europe aims at efficient management of the network resources of the SDH and ATM infrastructure, considering optimum end-to-end quality of service and the needs of a number of telecommunication actors: customers, value-added service providers, and network providers.

  9. Microbial Community Metabolic Modeling: A Community Data-Driven Network Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henry, Christopher S.; Bernstein, Hans C.; Weisenhorn, Pamela

    Metabolic network modeling of microbial communities provides an in-depth understanding of community-wide metabolic and regulatory processes. Compared to single-organism analyses, community metabolic network modeling is more complex because it needs to account for interspecies interactions. To date, most approaches focus on the reconstruction of high-quality individual networks so that, when combined, they can predict community behaviors resulting from interspecies interactions. However, this conventional method becomes ineffective for communities whose members are not well characterized and cannot be experimentally interrogated in isolation. Here, we tested a new approach that uses community-level data as a critical input for the network reconstruction process. This method focuses on directly predicting interspecies metabolic interactions in a community when axenic information is insufficient. We validated our method through the case study of a bacterial photoautotroph-heterotroph consortium that was used to provide data needed for a community-level metabolic network reconstruction. The resulting simulations provided experimentally validated predictions of how a photoautotrophic cyanobacterium supports the growth of an obligate heterotrophic species by providing organic carbon and nitrogen sources.

  10. Dynamic Trust Management for Mobile Networks and Its Applications

    ERIC Educational Resources Information Center

    Bao, Fenye

    2013-01-01

    Trust management in mobile networks is challenging due to dynamically changing network environments and the lack of a centralized trusted authority. In this dissertation research, we "design" and "validate" a class of dynamic trust management protocols for mobile networks, and demonstrate the utility of dynamic trust management…

  11. Brazilian Center for the Validation of Alternative Methods (BraCVAM) and the process of validation in Brazil.

    PubMed

    Presgrave, Octavio; Moura, Wlamir; Caldeira, Cristiane; Pereira, Elisabete; Bôas, Maria H Villas; Eskes, Chantra

    2016-03-01

    The need for the creation of a Brazilian centre for the validation of alternative methods was recognised in 2008, and members of academia, industry and existing international validation centres immediately engaged with the idea. In 2012, co-operation between the Oswaldo Cruz Foundation (FIOCRUZ) and the Brazilian Health Surveillance Agency (ANVISA) instigated the establishment of the Brazilian Center for the Validation of Alternative Methods (BraCVAM), which was officially launched in 2013. The Brazilian validation process follows OECD Guidance Document No. 34, where BraCVAM functions as the focal point to identify and/or receive requests from parties interested in submitting tests for validation. BraCVAM then informs the Brazilian National Network on Alternative Methods (RENaMA) of promising assays, which helps with prioritisation and contributes to the validation studies of selected assays. A Validation Management Group supervises the validation study, and the results obtained are peer-reviewed by an ad hoc Scientific Review Committee, organised under the auspices of BraCVAM. Based on the peer-review outcome, BraCVAM will prepare recommendations on the validated test method, which will be sent to the National Council for the Control of Animal Experimentation (CONCEA). CONCEA is in charge of the regulatory adoption of all validated test methods in Brazil, following an open public consultation. 2016 FRAME.

  12. Ionospheric Modeling: Development, Verification and Validation

    DTIC Science & Technology

    2005-09-01

    facilitate the automated processing of a large network of GPS receiver data. 4. CALIBRATION AND VALIDATION OF IONOSPHERIC SENSORS: We have been... NOFS Workshop, Estes Park, CO, January 2005. W. Rideout, A. Coster, P. Doherty, MIT Haystack, Automated Processing of GPS Data to Produce Worldwide TEC

  13. On Applicability of Network Coding Technique for 6LoWPAN-based Sensor Networks.

    PubMed

    Amanowicz, Marek; Krygier, Jaroslaw

    2018-05-26

    In this paper, the applicability of the network coding technique in 6LoWPAN-based multihop sensor networks is examined. The 6LoWPAN is one of the standards proposed for the Internet of Things architecture; we can therefore expect significant growth of traffic in such networks, which can lead to overload and a decrease in sensor network lifetime. The authors propose an inter-session network coding mechanism that can be implemented in resource-limited sensor motes. The solution reduces the overall traffic in the network and, in consequence, decreases energy consumption. The procedures used take into account the deep header compression of native 6LoWPAN packets and the hop-by-hop changes of the header structure. The applied simplifications reduce the signaling traffic that typically occurs in network coding deployments, keeping the solution useful for wireless sensor networks with limited resources. The authors validate the proposed procedures in terms of end-to-end packet delay, packet loss ratio, traffic in the air, total energy consumption, and network lifetime. The solution has been tested in a real wireless sensor network. The results confirm the efficiency of the proposed technique, mostly in delay-tolerant sensor networks.
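    The two-transmissions-into-one saving that inter-session network coding exploits can be shown with a minimal XOR sketch (purely illustrative; the paper's actual 6LoWPAN header-compression and signaling procedures are not modeled here):

```python
def xor_encode(pkt_a: bytes, pkt_b: bytes) -> bytes:
    """Relay combines two equal-length packets into one coded transmission."""
    return bytes(x ^ y for x, y in zip(pkt_a, pkt_b))

def xor_decode(coded: bytes, known: bytes) -> bytes:
    """A destination that already holds one native packet recovers the other."""
    return xor_encode(coded, known)

# Two sessions cross at a relay: A -> B carries pkt_a, B -> A carries pkt_b.
pkt_a = b"temp=21.5C"
pkt_b = b"hum=40.20%"
coded = xor_encode(pkt_a, pkt_b)

# Node B knows pkt_b (it sent it) and decodes pkt_a from the single coded
# broadcast; node A does the converse. Two relay transmissions become one.
assert xor_decode(coded, pkt_b) == pkt_a
assert xor_decode(coded, pkt_a) == pkt_b
```

    Each destination already holds one of the two native packets, so a single coded broadcast from the relay replaces two separate forwards, which is the source of the energy savings.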

  14. Lagged correlation networks

    NASA Astrophysics Data System (ADS)

    Curme, Chester

    Technological advances have provided scientists with large high-dimensional datasets that describe the behaviors of complex systems: from the statistics of energy levels in complex quantum systems, to the time-dependent transcription of genes, to price fluctuations among assets in a financial market. In this environment, where it may be difficult to infer the joint distribution of the data, network science has flourished as a way to gain insight into the structure and organization of such systems by focusing on pairwise interactions. This work focuses on a particular setting, in which a system is described by multivariate time series data. We consider time-lagged correlations among elements in this system, in such a way that the measured interactions among elements are asymmetric. Finally, we allow these interactions to be characteristically weak, so that statistical uncertainties may be important to consider when inferring the structure of the system. We introduce a methodology for constructing statistically validated networks to describe such a system, extend the methodology to accommodate interactions with a periodic component, and show how consideration of bipartite community structures in these networks can aid in the construction of robust statistical models. An example of such a system is a financial market, in which high frequency returns data may be used to describe contagion, or the spreading of shocks in price among assets. These data provide the experimental testing ground for our methodology. We study NYSE data from both the present day and one decade ago, examine the time scales over which the validated lagged correlation networks exist, and relate differences in the topological properties of the networks to an increasing economic efficiency. We uncover daily periodicities in the validated interactions, and relate our findings to explanations of the Epps Effect, an empirical phenomenon of financial time series. We also study bipartite community
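    The elementary quantity behind such networks, the Pearson correlation between one series and a time-shifted copy of another, can be sketched in a few lines of plain Python (an illustration only; the shuffling-based statistical validation of weak links described above is not reproduced):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

def lagged_corr(x, y, lag):
    """Correlation of x(t) with y(t + lag); positive lag means x leads y."""
    if lag == 0:
        return pearson(x, y)
    return pearson(x[:-lag], y[lag:])

# y is a one-step-delayed copy of x, so the lag-1 correlation is perfect.
# The asymmetry (x leads y, not the reverse) is what makes the inferred
# network directed.
x = [0.1, 0.5, -0.3, 0.8, -0.2, 0.4, -0.6, 0.7]
y = [0.0] + x[:-1]
assert abs(lagged_corr(x, y, 1) - 1.0) < 1e-12
```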

  15. Validation of the Information/Communications Technology Literacy Test

    DTIC Science & Technology

    2016-10-01

    nested set. Table 11 presents the results of incremental validity analyses for job knowledge/performance criteria by MOS. Figure 7 presents much... Systems Operator-Analyst (25B) and Nodal Network Systems Operator-Maintainer (25N) MOS. This report documents technical procedures and results of the... research effort. Results suggest that the ICTL test has potential as a valid and highly efficient predictor of valued outcomes in Signal school MOS. Not

  16. Social Networks and Mourning: A Comparative Approach.

    ERIC Educational Resources Information Center

    Rubin, Nissan

    1990-01-01

    Suggests using social network theory to explain varieties of mourning behavior in different societies. Compares participation in funeral ceremonies of members of different social circles in American society and Israeli kibbutz. Concludes that results demonstrated validity of concepts deriving from social network analysis in study of bereavement,…

  17. Validating the Chinese Version of the Inventory of School Motivation

    ERIC Educational Resources Information Center

    King, Ronnel B.; Watkins, David A.

    2013-01-01

    The aim of this study is to assess the cross-cultural applicability of the Chinese version of the Inventory of School Motivation (ISM; McInerney & Sinclair, 1991) in the Hong Kong context using both within-network and between-network approaches to construct validation. The ISM measures four types of achievement goals: mastery, performance,…

  18. Functional and nonfunctional testing of ATM networks

    NASA Astrophysics Data System (ADS)

    Ricardo, Manuel; Ferreira, M. E. P.; Guimaraes, Francisco E.; Mamede, J.; Henriques, M.; da Silva, Jorge A.; Carrapatoso, E.

    1995-02-01

    ATM networks will support new multimedia services that will require new protocols; these services and protocols will need different test strategies and tools. In this paper, the concepts of functional and non-functional testers of ATM networks are discussed, a multimedia service and its requirements are presented, and finally a summary description is given of an ATM network and of the test tool that will be used to validate it.

  19. MACHETE: Environment for Space Networking Evaluation

    NASA Technical Reports Server (NTRS)

    Jennings, Esther H.; Segui, John S.; Woo, Simon

    2010-01-01

    Space exploration missions require the design and implementation of space networking that differs from terrestrial networks. In a space networking architecture, interplanetary communication protocols need to be designed, validated and evaluated carefully to support different mission requirements. As actual systems are expensive to build, it is essential to have a low-cost method to validate and verify mission/system designs and operations. This can be accomplished through simulation. Simulation can aid design decisions where alternative solutions are being considered, support trade studies, and enable fast study of what-if scenarios. It can be used to identify risks, verify system performance against requirements, and serve as an initial test environment as one moves towards emulation and actual hardware implementation of the systems. We describe the development of the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) and its use cases in supporting architecture trade studies and protocol performance evaluation, as well as its role in hybrid simulation/emulation. The MACHETE environment contains various tools and interfaces such that users may select the set of tools tailored for the specific simulation end goal. The use cases illustrate tool combinations for simulating space networking in different mission scenarios. This simulation environment is useful in supporting space networking design for planned and future missions, as well as in evaluating the performance of existing networks where non-determinism exists in data traffic and/or link conditions.

  20. The Koukopoulos Mixed Depression Rating Scale (KMDRS): An International Mood Network (IMN) validation study of a new mixed mood rating scale.

    PubMed

    Sani, Gabriele; Vöhringer, Paul A; Barroilhet, Sergio A; Koukopoulos, Alexia E; Ghaemi, S Nassir

    2018-05-01

    It has been proposed that the broad major depressive disorder (MDD) construct is heterogeneous. Koukopoulos provided diagnostic criteria for an important subtype within that construct, "mixed depression" (MxD), which encompasses clinical pictures characterized by marked psychomotor or inner excitation and rage/anger, along with severe depression. This study provides psychometric validation for the first rating scale specifically designed to assess MxD symptoms cross-sectionally, the Koukopoulos Mixed Depression Rating Scale (KMDRS). A total of 350 patients from the International Mood Network (IMN) completed three rating scales: the KMDRS, the Montgomery-Asberg Depression Rating Scale (MADRS), and the Young Mania Rating Scale (YMRS). The KMDRS psychometric properties assessed included Cronbach's alpha, inter-rater reliability, factor analysis, predictive validity, and receiver operating characteristic analysis. Internal consistency (Cronbach's alpha = 0.76; 95% CI 0.57, 0.94) and inter-rater reliability (kappa = 0.73) were adequate. Confirmatory factor analysis identified two components: anger and psychomotor excitation (80% of total variance). Good predictive validity was seen (C-statistic = 0.82; 95% CI 0.68, 0.93). The severity cut-off scores identified were as follows: none (0-4), possible (5-9), mild (10-15), moderate (16-20) and severe (≥21) MxD. The non-DSM-based diagnosis of MxD may pose some difficulties in the initial use and interpretation of the scoring of the scale. Moreover, the cross-sectional nature of the evaluation does not verify the long-term stability of the scale. The KMDRS was a reliable and valid instrument to assess MxD symptoms. Copyright © 2018 Elsevier B.V. All rights reserved.
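    The internal-consistency statistic reported above, Cronbach's alpha, is a simple function of the item variances and the total-score variance; a sketch with invented item scores (illustrative data only, not the KMDRS responses):

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one inner list of respondent scores per scale item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # total score per respondent
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three hypothetical items rated by five respondents; the items move
# together, so internal consistency is high.
items = [
    [2, 3, 3, 4, 1],
    [2, 4, 3, 4, 2],
    [1, 3, 4, 4, 2],
]
alpha = cronbach_alpha(items)
assert 0.0 < alpha <= 1.0
```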

  1. Randomizing Genome-Scale Metabolic Networks

    PubMed Central

    Samal, Areejit; Martin, Olivier C.

    2011-01-01

    Networks coming from protein-protein interactions, transcriptional regulation, signaling, or metabolism may appear to have “unusual” properties. To quantify this, it is appropriate to randomize the network and test the hypothesis that the network is not statistically different from expected in a motivated ensemble. However, when dealing with metabolic networks, the randomization of the network using edge exchange generates fictitious reactions that are biochemically meaningless. Here we provide several natural ensembles of randomized metabolic networks. A first constraint is to use valid biochemical reactions. Further constraints correspond to imposing appropriate functional constraints. We explain how to perform these randomizations with the help of Markov Chain Monte Carlo (MCMC) and show that they allow one to approach the properties of biological metabolic networks. The implication of the present work is that the observed global structural properties of real metabolic networks are likely to be the consequence of simple biochemical and functional constraints. PMID:21779409
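    The edge-exchange randomization that the authors argue yields biochemically meaningless reactions for metabolic networks, but which remains the standard degree-preserving null model for generic graphs, looks roughly like this (a generic sketch; the paper's constrained MCMC ensembles add biochemical and functional validity checks on top):

```python
import random

def double_edge_swap(edges, n_swaps, seed=0):
    """Degree-preserving randomization: rewire (a,b),(c,d) -> (a,d),(c,b),
    rejecting swaps that would create self-loops or duplicate edges."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    done, attempts = 0, 0
    while done < n_swaps and attempts < 100 * n_swaps:
        attempts += 1
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue  # shared endpoint: swap would create a self-loop
        new1, new2 = frozenset((a, d)), frozenset((c, b))
        if new1 in edge_set or new2 in edge_set:
            continue  # swap would create a duplicate edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {new1, new2}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges

def degree_sequence(edges):
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

g = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2), (1, 3)]
randomized = double_edge_swap(g, 20)
assert degree_sequence(randomized) == degree_sequence(g)
```

    Every node keeps its degree while the wiring is shuffled; the paper's point is that for metabolic networks this move must additionally be restricted to swaps producing valid biochemical reactions.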

  2. Optimal percolation on multiplex networks.

    PubMed

    Osat, Saeed; Faqeeh, Ali; Radicchi, Filippo

    2017-11-16

    Optimal percolation is the problem of finding the minimal set of nodes whose removal from a network fragments the system into non-extensive disconnected clusters. The solution to this problem is important for strategies of immunization in disease spreading, and influence maximization in opinion dynamics. Optimal percolation has received considerable attention in the context of isolated networks. However, its generalization to multiplex networks has not yet been considered. Here we show that approximating the solution of the optimal percolation problem on a multiplex network with solutions valid for single-layer networks extracted from the multiplex may have serious consequences in the characterization of the true robustness of the system. We reach this conclusion by extending many of the methods for finding approximate solutions of the optimal percolation problem from single-layer to multiplex networks, and performing a systematic analysis on synthetic and real-world multiplex networks.
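    A simple single-layer baseline for this problem, adaptive removal of the highest-degree node while monitoring the largest cluster, can be sketched as follows (one of the naive heuristics such studies compare against, not the authors' multiplex algorithms):

```python
def giant_component_size(nodes, edges):
    """Size of the largest connected cluster, via depth-first search."""
    adj = {v: set() for v in nodes}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    seen, best = set(), 0
    for v in nodes:
        if v in seen:
            continue
        comp, stack = 0, [v]
        seen.add(v)
        while stack:
            u = stack.pop()
            comp += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, comp)
    return best

def greedy_dismantle(nodes, edges, target):
    """Adaptively remove the current highest-degree node until the giant
    component is no larger than `target`; return the removed nodes."""
    nodes, removed = set(nodes), []
    while giant_component_size(nodes, edges) > target:
        deg = {v: 0 for v in nodes}
        for a, b in edges:
            if a in nodes and b in nodes:
                deg[a] += 1
                deg[b] += 1
        hub = max(deg, key=deg.get)
        nodes.discard(hub)
        removed.append(hub)
    return removed

# A star (hub 0) plus a short path: removing the hub shatters the network.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (4, 5), (5, 6)]
removed = greedy_dismantle(range(7), edges, target=3)
assert removed == [0]  # the hub goes first, and alone suffices here
```

    The multiplex subtlety discussed above is that running such single-layer heuristics on each layer separately can badly misestimate the true minimal removal set.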

  3. Validation of the Gratitude Questionnaire in Filipino Secondary School Students.

    PubMed

    Valdez, Jana Patricia M; Yang, Weipeng; Datu, Jesus Alfonso D

    2017-10-11

    Most studies have assessed the psychometric properties of the Gratitude Questionnaire - Six-Item Form (GQ-6) in Western contexts, while little research has explored the applicability of this scale in non-Western settings. To address this gap, the aim of this study was to examine the factorial validity and gender invariance of the Gratitude Questionnaire in the Philippines through a construct validation approach. A total of 383 Filipino high school students participated in the research. In terms of within-network construct validity, results of confirmatory factor analyses revealed that the five-item version of the questionnaire (GQ-5) had better fit than the original six-item version. The scores from the GQ-5 also exhibited invariance across gender. Between-network construct validation showed that gratitude was associated with higher levels of academic achievement (β = .46, p < .001), autonomous motivation (β = .73, p < .001), and controlled motivation (β = .28, p < .01). Conversely, gratitude was linked to a lower degree of amotivation (β = -.51, p < .001). Theoretical and practical implications are discussed.

  4. Enhancing robustness and immunization in geographical networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang Liang; Department of Physics, Lanzhou University, Lanzhou 730000; Yang Kongqing

    2007-03-15

    We find that different geographical structures of networks lead to varied percolation thresholds, although these networks may have similar abstract topological structures. Thus, strategies for enhancing robustness and immunization of a geographical network are proposed. Using the generating function formalism, we obtain an explicit form of the percolation threshold q_c for networks containing arbitrary-order cycles. For three-cycles, the dependence of q_c on the clustering coefficients is ascertained. The analysis substantiates the validity of the strategies with analytical evidence.
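    In the tree-like (cycle-free) limit, the generating-function formalism reduces to the familiar Molloy-Reed result q_c = 1/(κ - 1) with κ = ⟨k²⟩/⟨k⟩; a quick sketch of that baseline computation (the cycle corrections studied in the paper modify this value):

```python
def percolation_threshold(degree_counts):
    """q_c = 1 / (kappa - 1) for an uncorrelated, tree-like network,
    where kappa = <k^2>/<k>; degree_counts maps degree -> number of nodes."""
    n = sum(degree_counts.values())
    k1 = sum(k * c for k, c in degree_counts.items()) / n       # <k>
    k2 = sum(k * k * c for k, c in degree_counts.items()) / n   # <k^2>
    kappa = k2 / k1
    return 1.0 / (kappa - 1.0)

# A random 3-regular network: kappa = 3, so the threshold is 1/2.
assert percolation_threshold({3: 1000}) == 0.5

# A broader degree distribution raises kappa and lowers the threshold.
assert percolation_threshold({1: 500, 3: 400, 10: 100}) < 0.5
```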

  5. Advanced Networks in Motion Mobile Sensorweb

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Stewart, David H.

    2011-01-01

    Advanced mobile networking technology applicable to mobile sensor platforms was developed, deployed and demonstrated. A two-tier sensorweb design was developed. The first tier utilized mobile network technology to provide mobility. The second tier, which sits above the first tier, utilizes 6LowPAN (Internet Protocol version 6 Low Power Wireless Personal Area Networks) sensors. The entire network was IPv6 enabled. Successful mobile sensorweb system field tests took place in late August and early September of 2009. The entire network utilized IPv6 and was monitored and controlled using a remote Web browser via IPv6 technology. This paper describes the mobile networking and 6LowPAN sensorweb design, implementation, deployment and testing as well as wireless systems and network monitoring software developed to support testing and validation.

  6. Multistability in bidirectional associative memory neural networks

    NASA Astrophysics Data System (ADS)

    Huang, Gan; Cao, Jinde

    2008-04-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, and 2^n of them are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons in the two layers, respectively. Finally, two numerical examples are presented to illustrate the validity of our results.
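    Multistability of this kind can be observed numerically even in the smallest case (one neuron per layer, a 2-dimensional network) with a saturating activation; a toy sketch under assumed dynamics x' = -x + w·sat(y), y' = -y + w·sat(x), not the Letter's exact system:

```python
def sat(u):
    """Piecewise-linear saturating activation, clipped to [-1, 1]."""
    return max(-1.0, min(1.0, u))

def run_bam(x0, y0, w=2.0, steps=400, dt=0.05):
    """Euler-integrate a minimal 2D BAM: x' = -x + w*sat(y), y' = -y + w*sat(x)."""
    x, y = x0, y0
    for _ in range(steps):
        x, y = x + dt * (-x + w * sat(y)), y + dt * (-y + w * sat(x))
    return x, y

# With gain w = 2 the system has stable equilibria at (2, 2) and (-2, -2)
# (plus an unstable one at the origin): initial conditions on either side
# of the origin converge to the matching attractor.
assert all(abs(v - 2.0) < 1e-3 for v in run_bam(0.3, 0.1))
assert all(abs(v + 2.0) < 1e-3 for v in run_bam(-0.3, -0.1))
```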

  7. THE VALIDITY OF HUMAN AND COMPUTERIZED WRITING ASSESSMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring

    2005-09-01

    This paper summarizes an experiment designed to assess the validity of essay grading between holistic and analytic human graders and a computerized grader based on latent semantic analysis. The validity of a grade was gauged by the extent to which the student's knowledge of the topic correlated with the grader's expert knowledge. To assess knowledge, Pathfinder networks were generated by the student essay writers, the holistic and analytic graders, and the computerized grader. It was found that the computer-generated grades more closely matched the definition of valid grading than did the human-generated grades.

  8. Validating the BERMS in situ soil moisture network with a large scale temporary network

    USDA-ARS?s Scientific Manuscript database

    Calibration and validation of soil moisture satellite products requires data records of large spatial and temporal extent, but obtaining this data can be challenging. These challenges can include remote locations, and expense of equipment. One location with a long record of soil moisture data is th...

  9. Application of validity theory and methodology to patient-reported outcome measures (PROMs): building an argument for validity.

    PubMed

    Hawkins, Melanie; Elsworth, Gerald R; Osborne, Richard H

    2018-07-01

    Data from subjective patient-reported outcome measures (PROMs) are now being used in the health sector to make or support decisions about individuals, groups and populations. Contemporary validity theorists define validity not as a statistical property of the test but as the extent to which empirical evidence supports the interpretation of test scores for an intended use. However, validity testing theory and methodology are rarely evident in the PROM validation literature. Application of this theory and methodology would provide structure for comprehensive validation planning to support improved PROM development and sound arguments for the validity of PROM score interpretation and use in each new context. This paper proposes the application of contemporary validity theory and methodology to PROM validity testing. The validity testing principles will be applied to a hypothetical case study with a focus on the interpretation and use of scores from a translated PROM that measures health literacy (the Health Literacy Questionnaire or HLQ). Although robust psychometric properties of a PROM are a pre-condition to its use, a PROM's validity lies in the sound argument that a network of empirical evidence supports the intended interpretation and use of PROM scores for decision making in a particular context. The health sector is yet to apply contemporary theory and methodology to PROM development and validation. The theoretical and methodological processes in this paper are offered as an advancement of the theory and practice of PROM validity testing in the health sector.

  10. Validation and quantification of uncertainty in coupled climate models using network analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bracco, Annalisa

    We developed a fast, robust and scalable methodology to examine, quantify, and visualize climate patterns and their relationships. It is based on a set of notions, algorithms and metrics used in the study of graphs, referred to as complex network analysis. This approach can be applied to explain known climate phenomena in terms of an underlying network structure and to uncover regional and global linkages in the climate system, while comparing general circulation model outputs with observations. The proposed method is based on a two-layer network representation, and is substantially new within the available network methodologies developed for climate studies. At the first layer, gridded climate data are used to identify ‘‘areas’’, i.e., geographical regions that are highly homogeneous in terms of the given climate variable. At the second layer, the identified areas are interconnected with links of varying strength, forming a global climate network. The robustness of the method (i.e., the ability to separate topologically distinct fields while correctly identifying similarities) has been extensively tested. It has been shown to provide a reliable, fast framework for comparing and ranking the ability of climate models to reproduce observed climate patterns and their connectivity. We further developed the methodology to account for lags in the connectivity between climate patterns and refined our area identification algorithm to account for autocorrelation in the data. The new methodology based on complex network analysis has been applied to state-of-the-art climate model simulations that participated in the last IPCC (Intergovernmental Panel on Climate Change) assessment to verify their performance, quantify uncertainties, and uncover changes in global linkages between past and future projections. Network properties of modeled sea surface temperature and rainfall over 1956–2005 have been constrained towards observations or reanalysis

  11. The genomic applications in practice and prevention network.

    PubMed

    Khoury, Muin J; Feero, W Gregory; Reyes, Michele; Citrin, Toby; Freedman, Andrew; Leonard, Debra; Burke, Wylie; Coates, Ralph; Croyle, Robert T; Edwards, Karen; Kardia, Sharon; McBride, Colleen; Manolio, Teri; Randhawa, Gurvaneet; Rasooly, Rebekah; St Pierre, Jeannette; Terry, Sharon

    2009-07-01

    The authors describe the rationale and initial development of a new collaborative initiative, the Genomic Applications in Practice and Prevention Network. The network convened by the Centers for Disease Control and Prevention and the National Institutes of Health includes multiple stakeholders from academia, government, health care, public health, industry and consumers. The premise of Genomic Applications in Practice and Prevention Network is that there is an unaddressed chasm between gene discoveries and demonstration of their clinical validity and utility. This chasm is due to the lack of readily accessible information about the utility of most genomic applications and the lack of necessary knowledge by consumers and providers to implement what is known. The mission of Genomic Applications in Practice and Prevention Network is to accelerate and streamline the effective integration of validated genomic knowledge into the practice of medicine and public health, by empowering and sponsoring research, evaluating research findings, and disseminating high quality information on candidate genomic applications in practice and prevention. Genomic Applications in Practice and Prevention Network will develop a process that links ongoing collection of information on candidate genomic applications to four crucial domains: (1) knowledge synthesis and dissemination for new and existing technologies, and the identification of knowledge gaps, (2) a robust evidence-based recommendation development process, (3) translation research to evaluate validity, utility and impact in the real world and how to disseminate and implement recommended genomic applications, and (4) programs to enhance practice, education, and surveillance.

  12. Grand canonical validation of the bipartite international trade network.

    PubMed

    Straka, Mika J; Caldarelli, Guido; Saracco, Fabio

    2017-08-01

    Devising strategies for economic development in a globally competitive landscape requires a solid and unbiased understanding of countries' technological advancements and similarities among export products. Both can be addressed through the bipartite representation of the International Trade Network. In this paper, we apply the recently proposed grand canonical projection algorithm to uncover country and product communities. Contrary to past endeavors, our methodology, based on information theory, creates monopartite projections in an unbiased and analytically tractable way. Single links between countries or products represent statistically significant signals, which are not accounted for by null models such as the bipartite configuration model. We find stable country communities reflecting the socioeconomic distinction in developed, newly industrialized, and developing countries. Furthermore, we observe product clusters based on the aforementioned country groups. Our analysis reveals the existence of a complicated structure in the bipartite International Trade Network: apart from the diversification of export baskets from the most basic to the most exclusive products, we observe a statistically significant signal of an export specialization mechanism towards more sophisticated products.
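    The flavor of statistically validating single links in a monopartite projection can be conveyed with a much simpler null than the bipartite configuration model used above: a fixed-degree hypergeometric test on the number of products two countries share (illustrative only; the BiCM null constrains degrees on average and yields different p-values):

```python
from math import comb

def cooccurrence_pvalue(n_products, deg_a, deg_b, shared):
    """P(X >= shared) for X ~ Hypergeometric(n_products, deg_a, deg_b):
    the chance two countries would share at least `shared` export products
    if country B's basket were drawn uniformly at random."""
    total = comb(n_products, deg_b)
    p = 0.0
    for x in range(shared, min(deg_a, deg_b) + 1):
        p += comb(deg_a, x) * comb(n_products - deg_a, deg_b - x) / total
    return p

# Two countries exporting 5 of 20 products each: sharing all 5 is a
# statistically significant signal, while sharing just 1 is expected.
assert cooccurrence_pvalue(20, 5, 5, 5) < 1e-3
assert cooccurrence_pvalue(20, 5, 5, 1) > 0.5
```

    Only country pairs whose p-value survives a multiple-testing correction would receive a link in the validated projection; the grand canonical approach above plays this role in an analytically tractable, unbiased way.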

  13. Grand canonical validation of the bipartite international trade network

    NASA Astrophysics Data System (ADS)

    Straka, Mika J.; Caldarelli, Guido; Saracco, Fabio

    2017-08-01

    Devising strategies for economic development in a globally competitive landscape requires a solid and unbiased understanding of countries' technological advancements and similarities among export products. Both can be addressed through the bipartite representation of the International Trade Network. In this paper, we apply the recently proposed grand canonical projection algorithm to uncover country and product communities. Contrary to past endeavors, our methodology, based on information theory, creates monopartite projections in an unbiased and analytically tractable way. Single links between countries or products represent statistically significant signals, which are not accounted for by null models such as the bipartite configuration model. We find stable country communities reflecting the socioeconomic distinction in developed, newly industrialized, and developing countries. Furthermore, we observe product clusters based on the aforementioned country groups. Our analysis reveals the existence of a complicated structure in the bipartite International Trade Network: apart from the diversification of export baskets from the most basic to the most exclusive products, we observe a statistically significant signal of an export specialization mechanism towards more sophisticated products.

  14. Social Network Map: Some Further Refinements on Administration.

    ERIC Educational Resources Information Center

    Tracy, Elizabeth M.; Abell, Neil

    1994-01-01

    Notes that social network mapping techniques have been advanced as means of assessing social and environmental resources. Addresses issue of convergent construct validity, correlations among dimensions of perceived social support as measured by social network data with other standardized social support instruments. Findings confirm that structural…

  15. Validation of The Health Improvement Network (THIN) Database for Epidemiologic Studies of Chronic Kidney Disease

    PubMed Central

    Denburg, Michelle R.; Haynes, Kevin; Shults, Justine; Lewis, James D.; Leonard, Mary B.

    2011-01-01

    Purpose Chronic kidney disease (CKD) is a prevalent and important outcome and covariate in pharmacoepidemiology. The Health Improvement Network (THIN) in the U.K. represents a unique resource for population-based studies of CKD. We compiled a valid list of Read codes to identify subjects with moderate to advanced CKD. Methods A cross-sectional validation study was performed to identify codes that best define CKD stages 3–5. All subjects with at least one non-zero measure of serum creatinine after 1-Jan-2002 were included. Estimated glomerular filtration rate (eGFR) was calculated according to the Schwartz formula for subjects <18 years and the Modification of Diet in Renal Disease formula for subjects ≥18 years of age. CKD was defined as an eGFR <60 ml/min/1.73m2 on at least two occasions, more than 90 days apart. Results The laboratory definition identified 230,426 subjects with CKD, for a period prevalence in 2008 of 4.56% (95% CI: 4.54, 4.58). A list of 45 Read codes was compiled which yielded a positive predictive value of 88.9% (95% CI: 88.7, 89.1), sensitivity of 48.8%, negative predictive value of 86.5%, and specificity of 98.2%. Of the 11.1% of subjects with a code who did not meet the laboratory definition, 83.6% had at least one eGFR <60. The most commonly used code was for CKD stage 3. Conclusions The proposed list of codes can be used to accurately identify CKD when serum creatinine data are limited. The most sensitive approach for the detection of CKD is to use this list to supplement creatinine measures. PMID:22020900

  16. Validation of The Health Improvement Network (THIN) database for epidemiologic studies of chronic kidney disease.

    PubMed

    Denburg, Michelle R; Haynes, Kevin; Shults, Justine; Lewis, James D; Leonard, Mary B

    2011-11-01

    Chronic kidney disease (CKD) is a prevalent and important outcome and covariate in pharmacoepidemiology. The Health Improvement Network (THIN) in the UK represents a unique resource for population-based studies of CKD. We compiled a valid list of Read codes to identify subjects with moderate to advanced CKD. A cross-sectional validation study was performed to identify codes that best define CKD Stages 3-5. All subjects with at least one non-zero measure of serum creatinine after 1 January 2002 were included. Estimated glomerular filtration rate (eGFR) was calculated according to the Schwartz formula for subjects aged < 18 years and the Modification of Diet in Renal Disease formula for subjects aged ≥ 18 years. CKD was defined as an eGFR <60 mL/minute/1.73 m² on at least two occasions, more than 90 days apart. The laboratory definition identified 230,426 subjects with CKD, for a period prevalence in 2008 of 4.56% (95%CI, 4.54-4.58). A list of 45 Read codes was compiled, which yielded a positive predictive value of 88.9% (95%CI, 88.7-89.1), sensitivity of 48.8%, negative predictive value of 86.5%, and specificity of 98.2%. Of the 11.1% of subjects with a code who did not meet the laboratory definition, 83.6% had at least one eGFR <60. The most commonly used code was for CKD Stage 3. The proposed list of codes can be used to accurately identify CKD when serum creatinine data are limited. The most sensitive approach for the detection of CKD is to use this list to supplement creatinine measures. Copyright © 2011 John Wiley & Sons, Ltd.
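    The laboratory definition above rests on the MDRD estimate of GFR for adults; a hedged sketch of the 4-variable form (IDMS-traceable coefficient 175, as commonly published; verify against the study's exact formula before reuse) and the stage 3-5 cut-off, with the 90-days-apart requirement omitted:

```python
def egfr_mdrd(scr_mg_dl, age, female=False, black=False):
    """4-variable MDRD study equation (IDMS-traceable, coefficient 175).
    scr_mg_dl: serum creatinine in mg/dL; returns mL/min/1.73 m^2."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def meets_ckd_lab_definition(egfr_values):
    """CKD Stages 3-5 per the study: eGFR < 60 on at least two occasions
    (the more-than-90-days-apart requirement is omitted in this sketch)."""
    return sum(1 for v in egfr_values if v < 60.0) >= 2

# A 65-year-old woman with serum creatinine 1.4 mg/dL falls below 60,
# while a 30-year-old with creatinine 0.9 mg/dL does not.
assert egfr_mdrd(1.4, 65, female=True) < 60.0
assert egfr_mdrd(0.9, 30) > 60.0
assert meets_ckd_lab_definition([55.0, 58.0])
assert not meets_ckd_lab_definition([55.0, 65.0])
```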

  17. Validity of Social, Moral and Emotional Facets of Self-Description Questionnaire II

    ERIC Educational Resources Information Center

    Leung, Kim Chau; Marsh, Herbert W.; Yeung, Alexander Seeshing; Abduljabbar, Adel S.

    2015-01-01

    Studies adopting a construct validity approach can be categorized into within- and between-network studies. Few studies have applied the between-network approach and tested the correlations of the social (same-sex relations, opposite-sex relations, parent relations), moral (honesty-trustworthiness), and emotional (emotional stability) facets of the…

  18. Validation of Networks Derived from Snowball Sampling of Municipal Science Education Actors

    ERIC Educational Resources Information Center

    von der Fehr, Ane; Sølberg, Jan; Bruun, Jesper

    2018-01-01

    Social network analysis (SNA) has been used in many educational studies in the past decade, but what these studies have in common is that the populations in question in most cases are defined and known to the researchers studying the networks. Snowball sampling is an SNA methodology most often used to study hidden populations, for example, groups…

  19. Functional Module Analysis for Gene Coexpression Networks with Network Integration.

    PubMed

    Zhang, Shuqin; Zhao, Hongyu; Ng, Michael K

    2015-01-01

    Networks have become a general tool for studying the complex interactions between genes, proteins, and other small molecules. Modules, as a fundamental property of many biological networks, have been widely studied, and many computational methods have been proposed to identify the modules in an individual network. However, in many cases a single network is insufficient for module analysis because of noise in the data or the tuning of parameters when building the biological network. The availability of a large number of biological networks makes network-integration studies possible. By integrating such networks, more informative modules for a specific disease can be derived from networks constructed from different tissues, and consistent factors for different diseases can be inferred. In this paper, we develop an effective method for module identification from multiple networks under different conditions. The problem is formulated as an optimization model that combines module identification in each individual network with alignment of the modules across networks. An approximation algorithm based on eigenvector computation is proposed. Our method outperforms existing methods in simulation studies, especially when the underlying modules in the multiple networks differ. We also applied our method to two groups of human gene coexpression networks: one for three different cancers, and one for three tissues from morbidly obese patients. We identified 13 modules with three complete subgraphs and 11 modules with two complete subgraphs, respectively. The modules were validated through Gene Ontology enrichment and KEGG pathway enrichment analysis. We also show that the main functions of most modules for the corresponding disease have been addressed by other researchers, which may provide a theoretical basis for studying the modules experimentally.

  20. Emergent spectral properties of river network topology: an optimal channel network approach.

    PubMed

    Abed-Elmdoust, Armaghan; Singh, Arvind; Yang, Zong-Liang

    2017-09-13

    Characterization of river drainage networks has been a subject of research for many years. However, most previous studies have been limited to quantities which are loosely connected to the topological properties of these networks. In this work, through a graph-theoretic formulation of drainage river networks, we investigate the eigenvalue spectra of their adjacency matrix. First, we introduce a graph theory model for river networks and explore the properties of the network through its adjacency matrix. Next, we show that the eigenvalue spectra of such complex networks follow distinct patterns and exhibit striking features including a spectral gap in which no eigenvalue exists as well as a finite number of zero eigenvalues. We show that such spectral features are closely related to the branching topology of the associated river networks. In this regard, we find an empirical relation for the spectral gap and nullity in terms of the energy dissipation exponent of the drainage networks. In addition, the eigenvalue distribution is found to follow a finite-width probability density function with certain skewness which is related to the drainage pattern. Our results are based on optimal channel network simulations and validated through examples obtained from physical experiments on landscape evolution. These results suggest the potential of the spectral graph techniques in characterizing and modeling river networks.
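
    The adjacency-spectrum quantities described above (spectral gap, nullity as the count of zero eigenvalues) can be illustrated on a toy branching network. The graph below is a made-up star-shaped example, not one of the paper's optimal channel networks, and the spectral gap is taken here as the width of the eigenvalue-free interval around zero.

```python
import numpy as np

# Toy undirected branching (tree-like) network: node 0 is the
# outlet, nodes 1-3 drain into it (a star graph on 4 nodes).
edges = [(0, 1), (0, 2), (0, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Eigenvalue spectrum of the symmetric adjacency matrix.
eigvals = np.sort(np.linalg.eigvalsh(A))

# Nullity: number of (numerically) zero eigenvalues.
nullity = int(np.sum(np.abs(eigvals) < 1e-9))

# Spectral gap: width of the interval around zero that contains
# no nonzero eigenvalue.
nonzero = eigvals[np.abs(eigvals) >= 1e-9]
gap = nonzero[nonzero > 0].min() - nonzero[nonzero < 0].max()

print(eigvals, nullity, gap)  # eigenvalues are -sqrt(3), 0, 0, sqrt(3)
```

    For this star graph the spectrum is ±√3 plus a zero eigenvalue of multiplicity 2, so the nullity is 2 and the gap is 2√3; in the paper these quantities are related empirically to the branching topology and energy dissipation exponent of real drainage networks.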

  1. Algebraic Approach for Recovering Topology in Distributed Camera Networks

    DTIC Science & Technology

    2009-01-14

    not valid for camera networks. Spatial sampling of plenoptic function [2] from a network of cameras is rarely i.i.d. (independent and identically...coverage can be used to track and compare paths in a wireless camera network without any metric calibration information. In particular, these results can...edition edition, 2000. [14] A. Rahimi, B. Dunagan, and T. Darrell. Simultaneous calibration and tracking with a network of non-overlapping sensors. In

  2. Validation and sensitivity of the FINE Bayesian network for forecasting aquatic exposure to nano-silver.

    PubMed

    Money, Eric S; Barton, Lauren E; Dawson, Joseph; Reckhow, Kenneth H; Wiesner, Mark R

    2014-03-01

    The adaptive nature of the Forecasting the Impacts of Nanomaterials in the Environment (FINE) Bayesian network is explored. We create an updated FINE model (FINEAgNP-2) for predicting aquatic exposure concentrations of silver nanoparticles (AgNP) by combining the expert-based parameters from the baseline model established in previous work with literature data related to particle behavior, exposure, and nano-ecotoxicology via parameter learning. We validate the AgNP forecast from the updated model using mesocosm-scale field data and determine the sensitivity of several key variables to changes in environmental conditions, particle characteristics, and particle fate. Results show that the prediction accuracy of the FINEAgNP-2 model increased approximately 70% over the baseline model, with an error rate of only 20%, suggesting that FINE is a reliable tool to predict aquatic concentrations of nano-silver. Sensitivity analysis suggests that fractal dimension, particle diameter, conductivity, time, and particle fate have the most influence on aquatic exposure given the current knowledge; however, numerous knowledge gaps can be identified to suggest further research efforts that will reduce the uncertainty in subsequent exposure and risk forecasts. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Implementation of quantum key distribution network simulation module in the network simulator NS-3

    NASA Astrophysics Data System (ADS)

    Mehic, Miralem; Maurhart, Oliver; Rass, Stefan; Voznak, Miroslav

    2017-10-01

    As research in quantum key distribution (QKD) technology grows larger and more complex, highly accurate and scalable simulation technologies become important for assessing practical feasibility and foreseeing difficulties in the practical implementation of theoretical achievements. Because a QKD link requires both optical and Internet connections between the network nodes, deploying a complete testbed containing multiple network hosts and links to validate and verify a certain network algorithm or protocol would be very costly. Network simulators in these circumstances save vast amounts of money and time in accomplishing such a task. The simulation environment offers the creation of complex network topologies, a high degree of control, and repeatable experiments, which in turn allows researchers to conduct experiments and confirm their results. In this paper, we describe the design of the QKD network simulation module developed for the network simulator version 3 (NS-3). The module supports simulation of a QKD network in an overlay mode or in a single TCP/IP mode, and it can therefore also be used to simulate other network technologies independently of QKD.

  4. Network model of bilateral power markets based on complex networks

    NASA Astrophysics Data System (ADS)

    Wu, Yang; Liu, Junyong; Li, Furong; Yan, Zhanxin; Zhang, Li

    2014-06-01

    The bilateral power transaction (BPT) mode has become a typical market organization with the restructuring of the electric power industry, and a proper model that captures its characteristics is urgently needed. However, such a model is lacking because of this market organization's complexity. As a promising approach to modeling complex systems, complex networks can provide a sound theoretical framework for developing a proper simulation model. In this paper, a complex network model of the BPT market is proposed. In this model, a price advantage mechanism is a precondition. Unlike general commodity transactions, both the financial layer and the physical layer are considered in the model. Through simulation analysis, the feasibility and validity of the model are verified. At the same time, some typical statistical features of the BPT network are identified: the degree distribution follows a power law, the clustering coefficient is low, and the average path length is relatively long. Moreover, the topological stability of the BPT network is tested. The results show that the network displays topological robustness to random market members' failures while it is fragile against deliberate attacks, and that the network can resist cascading failure to some extent. These features are helpful for decision making and risk management in BPT markets.

  5. Measuring Networking as an Outcome Variable in Undergraduate Research Experiences

    PubMed Central

    Hanauer, David I.; Hatfull, Graham

    2015-01-01

    The aim of this paper is to propose, present, and validate a simple survey instrument to measure student conversational networking. The tool consists of five items that cover personal and professional social networks, and its basic principle is the self-reporting of degrees of conversation, with a range of specific discussion partners. The networking instrument was validated in three studies. The basic psychometric characteristics of the scales were established by conducting a factor analysis and evaluating internal consistency using Cronbach’s alpha. The second study used a known-groups comparison and involved comparing outcomes for networking scales between two different undergraduate laboratory courses (one involving a specific effort to enhance networking). The final study looked at potential relationships between specific networking items and the established psychosocial variable of project ownership through a series of binary logistic regressions. Overall, the data from the three studies indicate that the networking scales have high internal consistency (α = 0.88), consist of a unitary dimension, can significantly differentiate between research experiences with low and high networking designs, and are related to project ownership scales. The ramifications of the networking instrument for student retention, the enhancement of public scientific literacy, and the differentiation of laboratory courses are discussed. PMID:26538387
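
    The internal-consistency figure reported above (Cronbach's alpha = 0.88) is computed from item and total-score variances. A minimal sketch, with made-up 5-item Likert responses standing in for the networking scale:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Made-up responses (1-5 Likert) from six students on five items.
scores = [
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 3, 2, 3],
    [1, 2, 1, 1, 2],
    [4, 4, 5, 4, 4],
]
print(round(cronbach_alpha(scores), 2))
```

    Highly correlated items, as in this synthetic matrix, push alpha toward 1; perfectly identical items give exactly 1.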

  6. Addressing Participant Validity in a Small Internet Health Survey (The Restore Study): Protocol and Recommendations for Survey Response Validation

    PubMed Central

    Dewitt, James; Capistrant, Benjamin; Kohli, Nidhi; Mitteldorf, Darryl; Merengwa, Enyinnaya; West, William

    2018-01-01

    Background: While deduplication and cross-validation protocols have been recommended for large Web-based studies, protocols for survey response validation of smaller studies have not been published. Objective: This paper reports the challenges of survey validation inherent in small Web-based health survey research. Methods: The subject population was North American gay and bisexual prostate cancer survivors, who represent an under-researched, hidden, difficult-to-recruit, minority-within-a-minority population. In 2015-2016, advertising on a large Web-based cancer survivor support network, using email and social media, yielded 478 completed surveys. Results: Our manual deduplication and cross-validation protocol identified 289 survey submissions (289/478, 60.4%) as likely spam, most stemming from advertising on social media. The basic components of this deduplication and validation protocol are detailed. An unexpected challenge was that invalid survey responses evolved across the study period, which necessitated augmenting the static detection protocol with a dynamic one. Conclusions: Five recommendations for validation of Web-based samples, especially with smaller difficult-to-recruit populations, are detailed. PMID:29691203
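
    A deduplication pass of the kind described above typically fingerprints each submission on a few identifying fields and flags repeats. The fields and rules below are illustrative assumptions, not the Restore study's actual protocol:

```python
import hashlib

def fingerprint(submission):
    """Hash a few identifying fields into a dedup key. Which fields
    to combine (IP address, email, free-text answers) is a protocol
    decision; this sketch uses IP plus case-folded email."""
    key = "|".join([submission["ip"], submission["email"].lower()])
    return hashlib.sha256(key.encode()).hexdigest()

def deduplicate(submissions):
    """Keep the first submission per fingerprint; flag the rest."""
    seen, kept, flagged = set(), [], []
    for s in submissions:
        fp = fingerprint(s)
        (flagged if fp in seen else kept).append(s)
        seen.add(fp)
    return kept, flagged

subs = [
    {"ip": "203.0.113.7", "email": "a@example.org"},
    {"ip": "203.0.113.7", "email": "A@example.org"},  # same respondent
    {"ip": "198.51.100.2", "email": "b@example.org"},
]
kept, flagged = deduplicate(subs)
print(len(kept), len(flagged))  # 2 1
```

    A static rule set like this is what the paper calls a static detection protocol; responses that evolve over the study period would additionally require updating the rules as new spam patterns appear.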

  7. EOS Terra Validation Program

    NASA Technical Reports Server (NTRS)

    Starr, David

    1999-01-01

    The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include ASTER, CERES, MISR, MODIS and MOPITT. In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high-quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities, including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low-altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. The intensive validation activities planned for the first year of the Terra mission will be described, with emphasis on derived geophysical parameters of most relevance to the atmospheric radiation community. Detailed information about the EOS Terra validation Program can be found on the EOS Validation program

  8. Cooperate to Validate: OBSERVAL-NET Experts' Report on Validation of Non-Formal and Informal Learning (VNIL) 2013

    ERIC Educational Resources Information Center

    Weber Guisan, Saskia; Voit, Janine; Lengauer, Sonja; Proinger, Eva; Duvekot, Ruud; Aagaard, Kirsten

    2014-01-01

    The present publication is one of the outcomes of the OBSERVAL-NET project (follow-up of the OBSERVAL project). The main aim of OBSERVAL-NET was to set up a stakeholder-centric network of organisations supporting the validation of non-formal and informal learning in Europe based on the formation of national working groups in the 8 participating…

  9. Cooperate to Validate. Observal-Net Experts' Report on Validation of Non-Formal and Informal Learning (VNIL) 2013

    ERIC Educational Resources Information Center

    Weber Guisan, Saskia; Voit, Janine; Lengauer, Sonja; Proinger, Eva; Duvekot, Ruud; Aagaard, Kirsten

    2014-01-01

    The present publication is one of the outcomes of the OBSERVAL-NET project (followup of the OBSERVAL project). The main aim of OBSERVAL-NET was to set up a stakeholder centric network of organisations supporting the validation of non-formal and informal learning in Europe based on the formation of national working groups in the 8 participating…

  10. Research and Simulation on Application of the Mobile IP Network

    NASA Astrophysics Data System (ADS)

    Yibing, Deng; Wei, Hu; Minghui, Li; Feng, Gao; Junyi, Shen

    The paper first analyses the mobile node, home agent, and foreign agent of a mobile IP network; key techniques, such as the basic principles of mobile IP networking, protocol operation, agent discovery, registration, and IP packet transmission, are then discussed. A network simulation model is designed, validating the characteristics of the mobile IP network and verifying some of the advantages brought by mobile networking. Finally, the conclusion is reached that a mobile IP network can meet consumers' expectation of communicating with others anywhere.

  11. Sub-Network Kernels for Measuring Similarity of Brain Connectivity Networks in Disease Diagnosis.

    PubMed

    Jie, Biao; Liu, Mingxia; Zhang, Daoqiang; Shen, Dinggang

    2018-05-01

    As a simple representation of interactions among distributed brain regions, brain networks have been widely applied to automated diagnosis of brain diseases, such as Alzheimer's disease (AD) and its early stage, i.e., mild cognitive impairment (MCI). In brain network analysis, a challenging task is how to measure the similarity between a pair of networks. Although many graph kernels (i.e., kernels defined on graphs) have been proposed for measuring the topological similarity of a pair of brain networks, most of them are defined on general graphs and thus ignore the uniqueness of each node in a brain network: each node denotes a particular brain region, which is a specific characteristic of brain networks. Accordingly, in this paper, we construct a novel sub-network kernel for measuring the similarity between a pair of brain networks and then apply it to brain disease classification. Different from current graph kernels, our proposed sub-network kernel not only takes into account this inherent characteristic of brain networks, but also captures multi-level (from local to global) topological properties of nodes in brain networks, which are essential for defining the similarity measure of brain networks. To validate the efficacy of our method, we perform extensive experiments on subjects with baseline functional magnetic resonance imaging data obtained from the Alzheimer's disease neuroimaging initiative database. Experimental results demonstrate that the proposed method outperforms several state-of-the-art graph-based methods in MCI classification.
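
    The key observation above is that brain networks share a fixed node correspondence (index i is the same region in every subject), which generic graph kernels discard. As a toy stand-in for the paper's sub-network kernel, a node-aware similarity can be as simple as the cosine similarity of the two adjacency matrices:

```python
import numpy as np

def node_aware_similarity(A1, A2):
    """Cosine similarity of two adjacency matrices over the SAME
    node set (each index = one brain region). Unlike kernels on
    general graphs, this exploits the node correspondence directly.
    """
    a, b = A1.ravel(), A2.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two toy 4-region connectivity networks differing by one edge.
A1 = np.array([[0, 1, 1, 0],
               [1, 0, 1, 0],
               [1, 1, 0, 1],
               [0, 0, 1, 0]], dtype=float)
A2 = np.array([[0, 1, 1, 0],
               [1, 0, 0, 0],
               [1, 0, 0, 1],
               [0, 0, 1, 0]], dtype=float)
print(round(node_aware_similarity(A1, A2), 3))
```

    This edge-overlap measure is only illustrative; the paper's kernel additionally compares multi-level sub-networks around each node rather than single edges.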

  12. A biologically inspired network design model.

    PubMed

    Zhang, Xiaoge; Adamatzky, Andrew; Chan, Felix T S; Deng, Yong; Yang, Hai; Yang, Xin-She; Tsompanas, Michail-Antisthenis I; Sirakoulis, Georgios Ch; Mahadevan, Sankaran

    2015-06-04

    A network design problem is to select a subset of links in a transport network that satisfy passengers or cargo transportation demands while minimizing the overall costs of the transportation. We propose a mathematical model of the foraging behaviour of slime mould P. polycephalum to solve the network design problem and construct optimal transport networks. In our algorithm, a traffic flow between any two cities is estimated using a gravity model. The flow is imitated by the model of the slime mould. The algorithm model converges to a steady state, which represents a solution of the problem. We validate our approach on examples of major transport networks in Mexico and China. By comparing networks developed in our approach with the man-made highways, networks developed by the slime mould, and a cellular automata model inspired by slime mould, we demonstrate the flexibility and efficiency of our approach.
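
    The gravity-model step in the algorithm above estimates traffic between two cities from their populations and distance. A minimal sketch with made-up cities; the distance exponent 2 and constant k are conventional choices assumed here, not values taken from the paper:

```python
import math

def gravity_flow(pop_i, pop_j, dist_ij, k=1.0, beta=2.0):
    """Gravity-model traffic estimate: T_ij = k * P_i * P_j / d_ij**beta."""
    return k * pop_i * pop_j / dist_ij ** beta

# Made-up cities: population (thousands) and (x, y) position in km.
cities = {"A": (900, (0, 0)), "B": (500, (300, 0)), "C": (200, (0, 100))}

flows = {}
names = sorted(cities)
for i, u in enumerate(names):
    for v in names[i + 1:]:
        (pu, xu), (pv, xv) = cities[u], cities[v]
        d = math.dist(xu, xv)
        flows[(u, v)] = gravity_flow(pu, pv, d)

print(flows)  # nearby pairs of large cities attract the largest flows
```

    In the slime-mould model these pairwise flows drive the protoplasmic tube dynamics, so links serving high-flow city pairs are reinforced while low-flow links decay.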

  13. A Biologically Inspired Network Design Model

    PubMed Central

    Zhang, Xiaoge; Adamatzky, Andrew; Chan, Felix T.S.; Deng, Yong; Yang, Hai; Yang, Xin-She; Tsompanas, Michail-Antisthenis I.; Sirakoulis, Georgios Ch.; Mahadevan, Sankaran

    2015-01-01

    A network design problem is to select a subset of links in a transport network that satisfy passengers or cargo transportation demands while minimizing the overall costs of the transportation. We propose a mathematical model of the foraging behaviour of slime mould P. polycephalum to solve the network design problem and construct optimal transport networks. In our algorithm, a traffic flow between any two cities is estimated using a gravity model. The flow is imitated by the model of the slime mould. The algorithm model converges to a steady state, which represents a solution of the problem. We validate our approach on examples of major transport networks in Mexico and China. By comparing networks developed in our approach with the man-made highways, networks developed by the slime mould, and a cellular automata model inspired by slime mould, we demonstrate the flexibility and efficiency of our approach. PMID:26041508

  14. Inferring causal molecular networks: empirical assessment through a community-based effort

    PubMed Central

    Hill, Steven M.; Heiser, Laura M.; Cokelaer, Thomas; Unger, Michael; Nesser, Nicole K.; Carlin, Daniel E.; Zhang, Yang; Sokolov, Artem; Paull, Evan O.; Wong, Chris K.; Graim, Kiley; Bivol, Adrian; Wang, Haizhou; Zhu, Fan; Afsari, Bahman; Danilova, Ludmila V.; Favorov, Alexander V.; Lee, Wai Shing; Taylor, Dane; Hu, Chenyue W.; Long, Byron L.; Noren, David P.; Bisberg, Alexander J.; Mills, Gordon B.; Gray, Joe W.; Kellen, Michael; Norman, Thea; Friend, Stephen; Qutub, Amina A.; Fertig, Elana J.; Guan, Yuanfang; Song, Mingzhou; Stuart, Joshua M.; Spellman, Paul T.; Koeppl, Heinz; Stolovitzky, Gustavo; Saez-Rodriguez, Julio; Mukherjee, Sach

    2016-01-01

    Inferring molecular networks is a central challenge in computational biology. However, it has remained unclear whether causal, rather than merely correlational, relationships can be effectively inferred in complex biological settings. Here we describe the HPN-DREAM network inference challenge that focused on learning causal influences in signaling networks. We used phosphoprotein data from cancer cell lines as well as in silico data from a nonlinear dynamical model. Using the phosphoprotein data, we scored more than 2,000 networks submitted by challenge participants. The networks spanned 32 biological contexts and were scored in terms of causal validity with respect to unseen interventional data. A number of approaches were effective and incorporating known biology was generally advantageous. Additional sub-challenges considered time-course prediction and visualization. Our results constitute the most comprehensive assessment of causal network inference in a mammalian setting carried out to date and suggest that learning causal relationships may be feasible in complex settings such as disease states. Furthermore, our scoring approach provides a practical way to empirically assess the causal validity of inferred molecular networks. PMID:26901648

  15. Functional Inference of Complex Anatomical Tendinous Networks at a Macroscopic Scale via Sparse Experimentation

    PubMed Central

    Saxena, Anupam; Lipson, Hod; Valero-Cuevas, Francisco J.

    2012-01-01

    In systems and computational biology, much effort is devoted to functional identification of systems and networks at the molecular or cellular scale. However, similarly important networks exist at anatomical scales such as the tendon network of human fingers: the complex array of collagen fibers that transmits and distributes muscle forces to finger joints. This network is critical to the versatility of the human hand, and its function has been debated since at least the 16th century. Here, we experimentally infer the structure (both topology and parameter values) of this network through sparse interrogation with force inputs. A population of models representing this structure co-evolves in simulation with a population of informative future force inputs via the predator-prey estimation-exploration algorithm. Model fitness depends on their ability to explain experimental data, while the fitness of future force inputs depends on causing maximal functional discrepancy among current models. We validate our approach by inferring two known synthetic Latex networks, and one anatomical tendon network harvested from a cadaver's middle finger. We find that functionally similar but structurally diverse models can exist within a narrow range of the training set and cross-validation errors. For the Latex networks, models with low training set error [<4%] and resembling the known network have the smallest cross-validation errors [∼5%]. The low training set [<4%] and cross validation [<7.2%] errors for models for the cadaveric specimen demonstrate what, to our knowledge, is the first experimental inference of the functional structure of complex anatomical networks. This work expands current bioinformatics inference approaches by demonstrating that sparse, yet informative interrogation of biological specimens holds significant computational advantages in accurate and efficient inference over random testing, or assuming model topology and only inferring parameter values. These

  16. Functional inference of complex anatomical tendinous networks at a macroscopic scale via sparse experimentation.

    PubMed

    Saxena, Anupam; Lipson, Hod; Valero-Cuevas, Francisco J

    2012-01-01

    In systems and computational biology, much effort is devoted to functional identification of systems and networks at the molecular or cellular scale. However, similarly important networks exist at anatomical scales such as the tendon network of human fingers: the complex array of collagen fibers that transmits and distributes muscle forces to finger joints. This network is critical to the versatility of the human hand, and its function has been debated since at least the 16th century. Here, we experimentally infer the structure (both topology and parameter values) of this network through sparse interrogation with force inputs. A population of models representing this structure co-evolves in simulation with a population of informative future force inputs via the predator-prey estimation-exploration algorithm. Model fitness depends on their ability to explain experimental data, while the fitness of future force inputs depends on causing maximal functional discrepancy among current models. We validate our approach by inferring two known synthetic Latex networks, and one anatomical tendon network harvested from a cadaver's middle finger. We find that functionally similar but structurally diverse models can exist within a narrow range of the training set and cross-validation errors. For the Latex networks, models with low training set error [<4%] and resembling the known network have the smallest cross-validation errors [∼5%]. The low training set [<4%] and cross validation [<7.2%] errors for models for the cadaveric specimen demonstrate what, to our knowledge, is the first experimental inference of the functional structure of complex anatomical networks. This work expands current bioinformatics inference approaches by demonstrating that sparse, yet informative interrogation of biological specimens holds significant computational advantages in accurate and efficient inference over random testing, or assuming model topology and only inferring parameter values. These

  17. Groundwater Model Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence-building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine whether a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study, assuming field data either consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with the results indicating they are appropriate measures for evaluating model realizations. The use of

  18. Routing architecture and security for airborne networks

    NASA Astrophysics Data System (ADS)

    Deng, Hongmei; Xie, Peng; Li, Jason; Xu, Roger; Levy, Renato

    2009-05-01

    Airborne networks are envisioned to provide interconnectivity for terrestrial and space networks by interconnecting highly mobile airborne platforms. A number of military applications are expected to be used by the operator, and all of these applications require proper routing security support to establish correct routes between communicating platforms in a timely manner. As airborne networks are somewhat different from traditional wired and wireless networks (e.g., Internet, LAN, WLAN, MANET, etc.), security approaches valid in those networks are not fully applicable to airborne networks, and designing an efficient security scheme to protect them is confronted with new requirements. In this paper, we first identify a candidate routing architecture, which serves as an underlying structure for our proposed security scheme. We then investigate the vulnerabilities and attack models against routing protocols in airborne networks. Based on these studies, we propose an integrated security solution to address routing security issues in airborne networks.

  19. Network Coded Cooperative Communication in a Real-Time Wireless Hospital Sensor Network.

    PubMed

    Prakash, R; Balaji Ganesh, A; Sivabalan, Somu

    2017-05-01

    The paper presents a network coded cooperative communication (NC-CC) enabled wireless hospital sensor network architecture for monitoring the health as well as the postural activities of a patient. A wearable device, referred to as a smartband, is interfaced with pulse rate and body temperature sensors and an accelerometer, along with wireless protocol services such as Bluetooth, a radio-frequency transceiver, and Wi-Fi. The energy efficiency of the wearable device is improved by embedding a linear acceleration based transmission duty cycling algorithm (NC-DRDC). A real-time demonstration is carried out in a hospital environment to evaluate performance characteristics such as power spectral density, energy consumption, signal-to-noise ratio, packet delivery ratio, and transmission offset. The resource sharing and energy efficiency features of the network coding technique are improved by proposing an algorithm referred to as network coding based dynamic retransmit/rebroadcast decision control (LA-TDC). From the experimental results, it is observed that the proposed LA-TDC algorithm reduces network traffic and end-to-end delay by an average of 27.8% and 21.6%, respectively, compared with traditional network coded wireless transmission. The wireless architecture is deployed in a hospital environment and the results are then successfully validated.

  20. Detecting the Influence of Spreading in Social Networks with Excitable Sensor Networks

    PubMed Central

    Pei, Sen; Tang, Shaoting; Zheng, Zhiming

    2015-01-01

    Detecting spreading outbreaks in social networks with sensors is of great significance in applications. Inspired by the formation mechanism of humans’ physical sensations to external stimuli, we propose a new method to detect the influence of spreading by constructing excitable sensor networks. Exploiting the amplifying effect of excitable sensor networks, our method can better detect small-scale spreading processes. At the same time, it can also distinguish large-scale diffusion instances due to the self-inhibition effect of excitable elements. Through simulations of diverse spreading dynamics on typical real-world social networks (Facebook, coauthor, and email social networks), we find that the excitable sensor networks are capable of detecting and ranking spreading processes in a much wider range of influence than other commonly used sensor placement methods, such as random, targeted, acquaintance and distance strategies. In addition, we validate the efficacy of our method with diffusion data from a real-world online social system, Twitter. We find that our method can detect more spreading topics in practice. Our approach provides a new direction in spreading detection and should be useful for designing effective detection methods. PMID:25950181

  1. Detecting trends in academic research from a citation network using network representation learning

    PubMed Central

    Mori, Junichiro; Ochi, Masanao; Sakata, Ichiro

    2018-01-01

    Several network features and information retrieval methods have been proposed to elucidate the structure of citation networks and to detect important nodes. However, it is difficult to retrieve information related to trends in an academic field and to detect cutting-edge areas from a citation network. In this paper, we propose a novel framework that detects a trend as the growth direction of a citation network using network representation learning (NRL). We presume that the linear growth of a citation network in the latent space obtained by NRL is the result of the iterative edge-addition process of the citation network. On APS datasets and papers from several domains of the Web of Science, we confirm the existence of trends by observing that an academic field grows linearly in a specific direction in latent space. Next, we calculate each node’s degree of trend-following as an indicator called the intrinsic publication year (IPY). As a result, there is a correlation between this indicator and the number of future citations. Furthermore, a word frequently used in the abstracts of cutting-edge papers (high-IPY papers) is likely to be used often in future publications. These results confirm the validity of the detected trend for predicting citation-network growth. PMID:29782521
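
    The trend-detection idea summarized above (a growth direction in embedding space, plus an intrinsic publication year per node) can be sketched as follows. The embeddings, years, and the nearest-centroid mapping below are illustrative assumptions, not the paper's actual NRL pipeline:

```python
def centroid(vecs):
    """Component-wise mean of a list of equal-length vectors."""
    d = len(vecs[0])
    return [sum(v[k] for v in vecs) / len(vecs) for k in range(d)]

def trend_direction(year_to_vecs):
    """Unit vector from the earliest to the latest yearly centroid."""
    years = sorted(year_to_vecs)
    c0 = centroid(year_to_vecs[years[0]])
    c1 = centroid(year_to_vecs[years[-1]])
    diff = [b - a for a, b in zip(c0, c1)]
    norm = sum(x * x for x in diff) ** 0.5
    return [x / norm for x in diff]

def intrinsic_publication_year(vec, year_to_vecs):
    """Year whose centroid projects onto the trend axis nearest to vec."""
    direction = trend_direction(year_to_vecs)
    proj = lambda v: sum(a * b for a, b in zip(v, direction))
    p = proj(vec)
    return min(year_to_vecs,
               key=lambda y: abs(proj(centroid(year_to_vecs[y])) - p))

# toy embeddings drifting along the x-axis over time (invented data)
emb = {2010: [[0.0, 0.0], [0.0, 2.0]],
       2012: [[2.0, 0.0], [2.0, 2.0]],
       2014: [[4.0, 0.0], [4.0, 2.0]]}
```

    A paper whose embedding sits far along the trend axis receives a late IPY, matching the abstract's notion of trend-following nodes.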

  2. Recovery of infrastructure networks after localised attacks.

    PubMed

    Hu, Fuyu; Yeung, Chi Ho; Yang, Saini; Wang, Weiping; Zeng, An

    2016-04-14

    The stability of infrastructure networks is a critical issue studied by researchers in different fields. Many works have been devoted to revealing the robustness of infrastructure networks against random and malicious attacks. However, real attack scenarios such as earthquakes and typhoons are instead localised attacks, which have been investigated only recently. Unlike previous studies, we examine in this paper the resilience of infrastructure networks by focusing on the recovery process from localised attacks. We introduce various preferential repair strategies and find that they facilitate and improve network recovery compared with random repairs, especially when the population size is uneven at different locations. Moreover, our strategic repair methods show effectiveness similar to that of greedy repair. The validations are conducted on simulated networks and on real networks with real disasters. Our method is meaningful in practice, as it can largely enhance network resilience and contribute to network risk reduction.
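
    As an illustration of the comparison described above, the sketch below repairs a localised hole in a toy lattice either in random order or preferentially by a hypothetical population weight, tracking the giant component after each repair. The grid, attack footprint, and population model are all invented for the example:

```python
import random
from itertools import product

def grid_graph(n):
    """n x n lattice: node -> set of 4-neighbours."""
    adj = {(i, j): set() for i, j in product(range(n), repeat=2)}
    for i, j in adj:
        for di, dj in ((1, 0), (0, 1)):
            if (i + di, j + dj) in adj:
                adj[(i, j)].add((i + di, j + dj))
                adj[(i + di, j + dj)].add((i, j))
    return adj

def giant_component(adj, alive):
    """Size of the largest connected component among `alive` nodes."""
    best, seen = 0, set()
    for s in alive:
        if s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best

def recovery_curve(adj, destroyed, order):
    """Giant-component size after each repair, restoring nodes in `order`."""
    alive = set(adj) - set(destroyed)
    sizes = []
    for u in order:
        alive.add(u)
        sizes.append(giant_component(adj, alive))
    return sizes

n = 10
adj = grid_graph(n)
# localised attack: destroy a 5x5 block in the centre
destroyed = [(i, j) for i in range(3, 8) for j in range(3, 8)]
# hypothetical population weight: heavier towards the grid centre
pop = {u: -(abs(u[0] - n // 2) + abs(u[1] - n // 2)) for u in adj}
pref = sorted(destroyed, key=lambda u: pop[u], reverse=True)
rng = random.Random(0)
rand = destroyed[:]
rng.shuffle(rand)
curve_pref = recovery_curve(adj, destroyed, pref)
curve_rand = recovery_curve(adj, destroyed, rand)
```

    Comparing the two curves step by step shows how a population-weighted repair order can restore large-scale connectivity earlier than a random one.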

  3. Validation of POLDER/ADEOS data using a ground-based lidar network: Preliminary results for semi-transparent and cirrus clouds

    NASA Technical Reports Server (NTRS)

    Chepfer, H.; Sauvage, L.; Flamant, P. H.; Pelon, J.; Goloub, P.; Brogniez, G.; Spinhirne, J.; Lavorato, M.; Sugimoto, N.

    1998-01-01

    At mid and tropical latitudes, cirrus clouds are present more than 50% of the time in satellite observations. Due to their large spatial and temporal coverage and associated low temperatures, cirrus clouds have a major influence on the Earth-Ocean-Atmosphere energy balance through their effects on the incoming solar radiation and outgoing infrared radiation. At present, the impact of cirrus clouds on climate is well recognized but remains to be quantified more precisely, for their optical and radiative properties are not very well known. In order to understand the effects of cirrus clouds on climate, the optical and radiative characteristics of these clouds need to be determined accurately at different scales and at different locations, i.e., latitudes. Lidars are well suited to observing cirrus clouds: they can detect very thin and semi-transparent layers and retrieve the clouds' geometrical properties (altitude, multilayer structure) as well as radiative properties (optical depth, backscattering phase functions of ice crystals). Moreover, the linear depolarization ratio can give information on ice crystal shape. In addition, data collected with an airborne version of the POLDER (POLarization and Directionality of Earth Reflectances) instrument have shown that bidirectional polarized measurements can provide information on cirrus cloud microphysical properties (crystal shapes, preferred orientation in space). The spaceborne version, POLDER-1, flew on the ADEOS-1 platform for 8 months (October 1996 to June 1997), and the next instrument, POLDER-2, will be launched in 2000 on ADEOS-2. The POLDER-1 cloud inversion algorithms are currently under validation. For cirrus clouds, a validation based on comparisons between cloud properties retrieved from POLDER-1 data and cloud properties inferred from a ground-based lidar network is currently under consideration. We present the first results of this validation.

  4. Prioritizing chronic obstructive pulmonary disease (COPD) candidate genes in COPD-related networks

    PubMed Central

    Zhang, Yihua; Li, Wan; Feng, Yuyan; Guo, Shanshan; Zhao, Xilei; Wang, Yahui; He, Yuehan; He, Weiming; Chen, Lina

    2017-01-01

    Chronic obstructive pulmonary disease (COPD) is a multi-factor disease that can be caused by many factors, including disturbances of metabolism and protein-protein interactions (PPIs). In this paper, a weighted COPD-related metabolic network and a weighted COPD-related PPI network were constructed based on COPD disease genes and functional information. Candidate genes in each of these weighted COPD-related networks were prioritized using a gene prioritization method. A literature review and functional enrichment analysis of the top 100 genes in these two networks suggested the correlation of COPD with these genes. The performance of our gene prioritization method was superior to that of ToppGene and ToppNet for genes from the COPD-related metabolic network or the COPD-related PPI network, as assessed by leave-one-out cross-validation, literature validation, and functional enrichment analysis. The top-ranked genes prioritized from the COPD-related metabolic and PPI networks could promote a better understanding of the molecular mechanism of this disease from different perspectives. The top 100 genes in the COPD-related metabolic network and the COPD-related PPI network might be potential markers for the diagnosis and treatment of COPD. PMID:29262568

  5. Prioritizing chronic obstructive pulmonary disease (COPD) candidate genes in COPD-related networks.

    PubMed

    Zhang, Yihua; Li, Wan; Feng, Yuyan; Guo, Shanshan; Zhao, Xilei; Wang, Yahui; He, Yuehan; He, Weiming; Chen, Lina

    2017-11-28

    Chronic obstructive pulmonary disease (COPD) is a multi-factor disease that can be caused by many factors, including disturbances of metabolism and protein-protein interactions (PPIs). In this paper, a weighted COPD-related metabolic network and a weighted COPD-related PPI network were constructed based on COPD disease genes and functional information. Candidate genes in each of these weighted COPD-related networks were prioritized using a gene prioritization method. A literature review and functional enrichment analysis of the top 100 genes in these two networks suggested the correlation of COPD with these genes. The performance of our gene prioritization method was superior to that of ToppGene and ToppNet for genes from the COPD-related metabolic network or the COPD-related PPI network, as assessed by leave-one-out cross-validation, literature validation, and functional enrichment analysis. The top-ranked genes prioritized from the COPD-related metabolic and PPI networks could promote a better understanding of the molecular mechanism of this disease from different perspectives. The top 100 genes in the COPD-related metabolic network and the COPD-related PPI network might be potential markers for the diagnosis and treatment of COPD.

  6. Connectivity Restoration in Wireless Sensor Networks via Space Network Coding.

    PubMed

    Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing

    2017-04-20

    The problem of finding the number and optimal positions of relay nodes for restoring network connectivity in partitioned Wireless Sensor Networks (WSNs) is Non-deterministic Polynomial-time hard (NP-hard), and thus heuristic methods are preferred to solve it. This paper proposes a novel polynomial-time heuristic algorithm, namely Relay Placement using Space Network Coding (RPSNC), to solve this problem, where Space Network Coding, also called Space Information Flow (SIF), is a new research paradigm that studies network coding in Euclidean space, in which extra relay nodes can be introduced to reduce the cost of communication. Unlike contemporary schemes that are often based on the Minimum Spanning Tree (MST), the Euclidean Steiner Minimal Tree (ESMT), or a combination of the two, RPSNC is a new min-cost multicast space network coding approach that combines Delaunay triangulation and non-uniform partitioning techniques to generate a number of candidate relay nodes; linear programming is then applied to choose the optimal relay nodes and compute their connection links with terminals. Subsequently, an equilibrium method is used to refine the locations of the optimal relay nodes by moving them to balanced positions. RPSNC can adapt to any density distribution of relay nodes and terminals. The performance and complexity of RPSNC are analyzed, and its performance is validated through simulation experiments.
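
    RPSNC itself combines Delaunay triangulation with linear programming, but the underlying idea of placing a relay to shorten total link length can be illustrated with the classic single-Steiner-point case: the geometric median of the terminals, computed here with Weiszfeld's iteration. The terminal coordinates are made up for the example:

```python
import math

def weiszfeld(points, iters=200):
    """Geometric median (single relay/Steiner point) of 2-D terminals."""
    # start at the centroid of the terminals
    x = [sum(p[0] for p in points) / len(points),
         sum(p[1] for p in points) / len(points)]
    for _ in range(iters):
        num, den = [0.0, 0.0], 0.0
        for px, py in points:
            d = math.hypot(x[0] - px, x[1] - py)
            if d < 1e-12:            # iterate landed on a terminal: done
                return (px, py)
            num[0] += px / d
            num[1] += py / d
            den += 1.0 / d
        x = [num[0] / den, num[1] / den]   # distance-weighted average
    return tuple(x)

# three partitioned segments, approximated by one terminal each
terminals = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.732)]  # near-equilateral triangle
relay = weiszfeld(terminals)
```

    For a near-equilateral triangle the relay settles at the Fermat point (here essentially the centroid), which minimizes the summed link length to all three terminals; RPSNC generalizes this cost-minimization to many candidate relays and coded multicast flows.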

  7. Calibration Testing of Network Tap Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popovsky, Barbara; Chee, Brian; Frincke, Deborah A.

    2007-11-14

    Abstract: Understanding the behavior of network forensic devices is important to support prosecutions of malicious conduct on computer networks as well as legal remedies for false accusations of network management negligence. Individuals who seek to establish the credibility of network forensic data must speak competently about how the data was gathered and the potential for data loss. Unfortunately, manufacturers rarely provide information about the performance of low-layer network devices at a level that will survive legal challenges. This paper proposes a first step toward an independent calibration standard by establishing a validation testing methodology for evaluating forensic taps against manufacturer specifications. The methodology and the theoretical analysis that led to its development are offered as a conceptual framework for developing a standard and to "operationalize" network forensic readiness. This paper also provides details of an exemplar test, testing environment, procedures, and results.

  8. NC truck network model development research.

    DOT National Transportation Integrated Search

    2008-09-01

    This research develops a validated prototype truck traffic network model for North Carolina. The model : includes all counties and metropolitan areas of North Carolina and major economic areas throughout the : U.S. Geographic boundaries, population a...

  9. ENFIN--A European network for integrative systems biology.

    PubMed

    Kahlem, Pascal; Clegg, Andrew; Reisinger, Florian; Xenarios, Ioannis; Hermjakob, Henning; Orengo, Christine; Birney, Ewan

    2009-11-01

    Integration of biological data of various types and the development of adapted bioinformatics tools represent critical objectives to enable research at the systems level. The European Network of Excellence ENFIN is engaged in developing an adapted infrastructure to connect databases and platforms, enabling both the generation of new bioinformatics tools and the experimental validation of computational predictions. With the aim of bridging the gap between standard wet laboratories and bioinformatics, the ENFIN Network runs integrative research projects to bring the latest computational techniques to bear directly on questions dedicated to systems biology in the wet laboratory environment. The Network maintains close internal collaboration between experimental and computational research, enabling permanent cycling between experimental validation and improvement of computational prediction methods. The computational work includes the development of a database infrastructure (EnCORE), bioinformatics analysis methods, and a novel platform for protein function analysis, FuncNet.

  10. Addressing Participant Validity in a Small Internet Health Survey (The Restore Study): Protocol and Recommendations for Survey Response Validation.

    PubMed

    Dewitt, James; Capistrant, Benjamin; Kohli, Nidhi; Rosser, B R Simon; Mitteldorf, Darryl; Merengwa, Enyinnaya; West, William

    2018-04-24

    While deduplication and cross-validation protocols have been recommended for large Web-based studies, protocols for survey response validation in smaller studies have not been published. This paper reports the challenges of survey validation inherent in small Web-based health survey research. The subject population was North American gay and bisexual prostate cancer survivors, who represent an under-researched, hidden, difficult-to-recruit, minority-within-a-minority population. In 2015-2016, advertising on a large Web-based cancer survivor support network, using email and social media, yielded 478 completed surveys. Our manual deduplication and cross-validation protocol identified 289 survey submissions (289/478, 60.4%) as likely spam, most stemming from advertising on social media. The basic components of this deduplication and validation protocol are detailed. An unexpected challenge encountered was invalid survey responses evolving across the study period, which necessitated augmenting the static detection protocol with a dynamic one. Five recommendations for validation of Web-based samples, especially with smaller difficult-to-recruit populations, are detailed. ©James Dewitt, Benjamin Capistrant, Nidhi Kohli, B R Simon Rosser, Darryl Mitteldorf, Enyinnaya Merengwa, William West. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 24.04.2018.

  11. Variability in personality expression across contexts: a social network approach.

    PubMed

    Clifton, Allan

    2014-04-01

    The current research investigated how the contextual expression of personality differs across interpersonal relationships. Two related studies were conducted with college samples (Study 1: N = 52, 38 female; Study 2: N = 111, 72 female). Participants in each study completed a five-factor measure of personality and constructed a social network detailing their 30 most important relationships. Participants used a brief Five-Factor Model scale to rate their personality as they experience it when with each person in their social network. Multiple informants selected from each social network then rated the target participant's personality (Study 1: N = 227, Study 2: N = 777). Contextual personality ratings demonstrated incremental validity beyond standard global self-report in predicting specific informants' perceptions. Variability in these contextualized personality ratings was predicted by the position of the other individuals within the social network. Across both studies, participants reported being more extraverted and neurotic, and less conscientious, with more central members of their social networks. Dyadic social network-based assessments of personality provide incremental validity in understanding personality, revealing dynamic patterns of personality variability unobservable with standard assessment techniques. © 2013 Wiley Periodicals, Inc.

  12. High fidelity wireless network evaluation for heterogeneous cognitive radio networks

    NASA Astrophysics Data System (ADS)

    Ding, Lei; Sagduyu, Yalin; Yackoski, Justin; Azimi-Sadjadi, Babak; Li, Jason; Levy, Renato; Melodia, Tommaso

    2012-06-01

    We present a high fidelity cognitive radio (CR) network emulation platform for wireless system tests, measurements, and validation. This versatile platform provides the configurable functionalities to control and repeat realistic physical channel effects in integrated space, air, and ground networks. We combine the advantages of scalable simulation environment with reliable hardware performance for high fidelity and repeatable evaluation of heterogeneous CR networks. This approach extends CR design only at device (software-defined-radio) or lower-level protocol (dynamic spectrum access) level to end-to-end cognitive networking, and facilitates low-cost deployment, development, and experimentation of new wireless network protocols and applications on frequency-agile programmable radios. Going beyond the channel emulator paradigm for point-to-point communications, we can support simultaneous transmissions by network-level emulation that allows realistic physical-layer interactions between diverse user classes, including secondary users, primary users, and adversarial jammers in CR networks. In particular, we can replay field tests in a lab environment with real radios perceiving and learning the dynamic environment thereby adapting for end-to-end goals over distributed spectrum coordination channels that replace the common control channel as a single point of failure. CR networks offer several dimensions of tunable actions including channel, power, rate, and route selection. The proposed network evaluation platform is fully programmable and can reliably evaluate the necessary cross-layer design solutions with configurable optimization space by leveraging the hardware experiments to represent the realistic effects of physical channel, topology, mobility, and jamming on spectrum agility, situational awareness, and network resiliency. We also provide the flexibility to scale up the test environment by introducing virtual radios and establishing seamless signal

  13. Validation of Leaf Area Index measurements based on the Wireless Sensor Network platform

    NASA Astrophysics Data System (ADS)

    Song, Q.; Li, X.; Liu, Q.

    2017-12-01

    The leaf area index (LAI) is one of the important parameters for estimating plant canopy function and is significant for agricultural analyses such as crop yield estimation and disease evaluation. Quick and accurate acquisition of crop LAI is therefore particularly vital. In this study, the LAI of corn crops is measured by three kinds of methods: the leaf length and width method (LAILLW), the indirect instrument measurement method (LAII), and the leaf area index sensor method (LAIS). Among them, the LAI value obtained from LAILLW can be regarded as an approximate true value. LAI-2200, a widespread LAI canopy analyzer, is used in LAII. LAIS, based on a wireless sensor network, can realize the automatic acquisition of crop images, simplifying data collection, while the other two methods require a person to carry out field measurements. Through the comparison of LAIS with the other two methods, the validity and reliability of the LAIS observation system is verified. It is found that LAI trends are similar across the three methods, and the rate of change of LAI increases with time in the first two months of corn growth, when LAIS costs less manpower, energy, and time. LAI derived from LAIS is more accurate than that from LAII in the early growth stage, owing to the small leaves, especially under strong light. Besides, LAI processed from a false-color image with near-infrared information is much closer to the true value than that from a true-color picture after the corn growth period reaches one and a half months.
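
    The leaf length and width method (LAILLW) mentioned above is conventionally computed as the sum of length x width x shape coefficient over all leaves, scaled by plant density. A minimal sketch, assuming a shape coefficient of 0.75 (a value commonly used for maize) and hypothetical measurements:

```python
def lai_length_width(leaves_cm, plants_per_m2, k=0.75):
    """Leaf length-and-width LAI estimate.

    leaves_cm: per-plant list of (length_cm, width_cm) pairs.
    k: leaf shape coefficient (~0.75 is commonly used for maize).
    Returns one-sided leaf area per unit ground area (dimensionless).
    """
    per_plant_cm2 = sum(k * length * width for length, width in leaves_cm)
    return per_plant_cm2 * 1e-4 * plants_per_m2  # cm^2 -> m^2
```

    For example, a plant with a single 60 cm x 8 cm leaf at a density of 8 plants per square metre yields an LAI of about 0.29; real corn plants contribute many leaves each, summed in the same way.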

  14. Using a soil moisture and precipitation network for satellite validation

    USDA-ARS?s Scientific Manuscript database

    A long term in situ network for the study of soil moisture and precipitation was deployed in north central Iowa, in cooperation between USDA and NASA. A total of 20 dual precipitation gages were established across a watershed landscape with an area of approximately 600 km2. In addition, four soil mo...

  15. Recovery of infrastructure networks after localised attacks

    PubMed Central

    Hu, Fuyu; Yeung, Chi Ho; Yang, Saini; Wang, Weiping; Zeng, An

    2016-01-01

    The stability of infrastructure networks is a critical issue studied by researchers in different fields. Many works have been devoted to revealing the robustness of infrastructure networks against random and malicious attacks. However, real attack scenarios such as earthquakes and typhoons are instead localised attacks, which have been investigated only recently. Unlike previous studies, we examine in this paper the resilience of infrastructure networks by focusing on the recovery process from localised attacks. We introduce various preferential repair strategies and find that they facilitate and improve network recovery compared with random repairs, especially when the population size is uneven at different locations. Moreover, our strategic repair methods show effectiveness similar to that of greedy repair. The validations are conducted on simulated networks and on real networks with real disasters. Our method is meaningful in practice, as it can largely enhance network resilience and contribute to network risk reduction. PMID:27075559

  16. Network sampling coverage II: The effect of non-random missing data on network measurement

    PubMed Central

    Smith, Jeffrey A.; Moody, James; Morgan, Jonathan

    2016-01-01

    Missing data is an important, but often ignored, aspect of a network study. Measurement validity is affected by missing data, but the level of bias can be difficult to gauge. Here, we describe the effect of missing data on network measurement across widely different circumstances. In Part I of this study (Smith and Moody, 2013), we explored the effect of measurement bias due to randomly missing nodes. Here, we drop the assumption that data are missing at random: what happens to estimates of key network statistics when central nodes are more/less likely to be missing? We answer this question using a wide range of empirical networks and network measures. We find that bias is worse when more central nodes are missing. With respect to network measures, Bonacich centrality is highly sensitive to the loss of central nodes, while closeness centrality is not; distance and bicomponent size are more affected than triad summary measures, and behavioral homophily is more robust than degree-homophily. With respect to types of networks, larger, directed networks tend to be more robust, but the relation is weak. We end the paper with a practical application, showing how researchers can use our results (translated into a publicly available Java application) to gauge the bias in their own data. PMID:27867254

  17. Network sampling coverage II: The effect of non-random missing data on network measurement.

    PubMed

    Smith, Jeffrey A; Moody, James; Morgan, Jonathan

    2017-01-01

    Missing data is an important, but often ignored, aspect of a network study. Measurement validity is affected by missing data, but the level of bias can be difficult to gauge. Here, we describe the effect of missing data on network measurement across widely different circumstances. In Part I of this study (Smith and Moody, 2013), we explored the effect of measurement bias due to randomly missing nodes. Here, we drop the assumption that data are missing at random: what happens to estimates of key network statistics when central nodes are more/less likely to be missing? We answer this question using a wide range of empirical networks and network measures. We find that bias is worse when more central nodes are missing. With respect to network measures, Bonacich centrality is highly sensitive to the loss of central nodes, while closeness centrality is not; distance and bicomponent size are more affected than triad summary measures, and behavioral homophily is more robust than degree-homophily. With respect to types of networks, larger, directed networks tend to be more robust, but the relation is weak. We end the paper with a practical application, showing how researchers can use our results (translated into a publicly available Java application) to gauge the bias in their own data.
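
    The core finding, that estimates degrade more when central nodes are missing, can be reproduced on a toy hub-and-ring graph: dropping the hub distorts the mean shortest-path length far more than dropping a peripheral node. The graph and the measure below are illustrative, not the authors' empirical networks:

```python
from collections import deque

def bfs_dist(adj, src, nodes):
    """Shortest-path distances from src within the induced subgraph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in nodes and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def mean_distance(adj, nodes):
    """Average shortest-path length over reachable ordered pairs."""
    total, pairs = 0, 0
    for s in nodes:
        for t, d in bfs_dist(adj, s, nodes).items():
            if t != s:
                total += d
                pairs += 1
    return total / pairs

# toy network: hub 0 attached to every node of a 6-node ring 1..6
adj = {0: {1, 2, 3, 4, 5, 6}}
ring = list(range(1, 7))
for i, u in enumerate(ring):
    adj.setdefault(u, set()).update({0, ring[i - 1], ring[(i + 1) % 6]})

full = set(adj)
true_md = mean_distance(adj, full)
md_no_hub = mean_distance(adj, full - {0})   # central node missing
md_no_leaf = mean_distance(adj, full - {1})  # peripheral node missing
```

    Losing the hub inflates the estimate (1.8 vs. a true value of about 1.43), while losing a ring node barely moves it, a small-scale analogue of the non-random missingness bias the paper quantifies.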

  18. Robust network data envelopment analysis approach to evaluate the efficiency of regional electricity power networks under uncertainty.

    PubMed

    Fathollah Bayati, Mohsen; Sadjadi, Seyed Jafar

    2017-01-01

    In this paper, new Network Data Envelopment Analysis (NDEA) models are developed to evaluate the efficiency of regional electricity power networks. The primary objective of this paper is to consider perturbation in data and develop new NDEA models based on the adaptation of robust optimization methodology. Furthermore, in this paper, the efficiency of entire electricity power networks, involving generation, transmission, and distribution stages, is measured. While DEA has been widely used to evaluate the efficiency of the components of electricity power networks during the past two decades, there has been no study evaluating the efficiency of electricity power networks as a whole. The proposed models are applied to evaluate the efficiency of 16 regional electricity power networks in Iran, and the effect of data uncertainty is also investigated. The results are compared with the traditional network DEA and parametric SFA methods. Validity and verification of the proposed models are also investigated. The preliminary results indicate that the proposed models are more reliable than the traditional network DEA model.

  19. Robust network data envelopment analysis approach to evaluate the efficiency of regional electricity power networks under uncertainty

    PubMed Central

    Sadjadi, Seyed Jafar

    2017-01-01

    In this paper, new Network Data Envelopment Analysis (NDEA) models are developed to evaluate the efficiency of regional electricity power networks. The primary objective of this paper is to consider perturbation in data and develop new NDEA models based on the adaptation of robust optimization methodology. Furthermore, in this paper, the efficiency of entire electricity power networks, involving generation, transmission, and distribution stages, is measured. While DEA has been widely used to evaluate the efficiency of the components of electricity power networks during the past two decades, there has been no study evaluating the efficiency of electricity power networks as a whole. The proposed models are applied to evaluate the efficiency of 16 regional electricity power networks in Iran, and the effect of data uncertainty is also investigated. The results are compared with the traditional network DEA and parametric SFA methods. Validity and verification of the proposed models are also investigated. The preliminary results indicate that the proposed models are more reliable than the traditional network DEA model. PMID:28953900

  20. Principle and verification of novel optical virtual private networks over multiprotocol label switching/optical packet switching networks

    NASA Astrophysics Data System (ADS)

    Zhang, Chongfu; Wang, Zhengsuan; Jin, Wei; Qiu, Kun

    2012-11-01

    A novel realization method for optical virtual private networks (OVPN) over multiprotocol label switching/optical packet switching (MPLS/OPS) networks is proposed. In this scheme, the introduction of an MPLS control plane makes OVPN over OPS networks more reliable and easier to deploy; the OVPN exploits the highly reconfigurable light-paths offered by MPLS to set up secure tunnels of high bandwidth across intelligent OPS networks. Through resource management, the signaling mechanism, connection control, and the architecture for the creation and maintenance of the OVPN are efficiently realized. We also present an OVPN architecture with two traffic priorities, which is used to analyze the capacity, throughput, delay time, and packet loss rate performance of the OVPN over MPLS/OPS networks based on a full-mesh topology. The results validate the applicability of such reliable connectivity to high-quality services in OVPN over MPLS/OPS networks. Along with these results, the feasibility of the approach as a basis for next-generation networks is demonstrated and discussed.

  1. Percolation in real multiplex networks

    NASA Astrophysics Data System (ADS)

    Bianconi, Ginestra; Radicchi, Filippo

    2016-12-01

    We present an exact mathematical framework able to describe site-percolation transitions in real multiplex networks. Specifically, we consider the average percolation diagram valid over an infinite number of random configurations where nodes are present in the system with given probability. The approach relies on the locally treelike ansatz, so that it is expected to accurately reproduce the true percolation diagram of sparse multiplex networks with a negligible number of short loops. The performance of our theory is tested in social, biological, and transportation multiplex graphs. When compared against previously introduced methods, we observe improvements in the prediction of the percolation diagrams in all networks analyzed. Results from our method confirm previous claims about the robustness of real multiplex networks, in the sense that the average connectedness of the system does not exhibit any significant abrupt change as its individual components are randomly destroyed.

  2. Classification of images acquired with colposcopy using artificial neural networks.

    PubMed

    Simões, Priscyla W; Izumi, Narjara B; Casagrande, Ramon S; Venson, Ramon; Veronezi, Carlos D; Moretti, Gustavo P; da Rocha, Edroaldo L; Cechinel, Cristian; Ceretta, Luciane B; Comunello, Eros; Martins, Paulo J; Casagrande, Rogério A; Snoeyer, Maria L; Manenti, Sandra A

    2014-01-01

    To explore the advantages of using artificial neural networks (ANNs) to recognize patterns and classify images in colposcopy. A cross-sectional, descriptive, and analytical study with a quantitative approach and an emphasis on diagnosis. The training, test, and validation sets were composed of images collected from patients who underwent colposcopy. These images were provided by a gynecology clinic located in the city of Criciúma (Brazil). The image database (n = 170) was divided as follows: 48 images were used for training, 58 images for testing, and 64 images for validation. A hybrid neural network based on Kohonen self-organizing maps and multilayer perceptron (MLP) networks was used. After 126 cycles, the validation was performed. The best results reached an accuracy of 72.15%, a sensitivity of 69.78%, and a specificity of 68%. Although the preliminary results still exhibit only average efficiency, the present approach is an innovative and promising technique that should be explored in depth in the context of the present study.

  3. Complete characterization of the stability of cluster synchronization in complex dynamical networks.

    PubMed

    Sorrentino, Francesco; Pecora, Louis M; Hagerstrom, Aaron M; Murphy, Thomas E; Roy, Rajarshi

    2016-04-01

    Synchronization is an important and prevalent phenomenon in natural and engineered systems. In many dynamical networks, the coupling is balanced or adjusted to admit global synchronization, a condition called Laplacian coupling. Many networks exhibit incomplete synchronization, where two or more clusters of synchronization persist, and computational group theory has recently proved to be valuable in discovering these cluster states based on the topology of the network. In the important case of Laplacian coupling, additional synchronization patterns can exist that would not be predicted from the group theory analysis alone. Understanding how and when clusters form, merge, and persist is essential for understanding collective dynamics, synchronization, and failure mechanisms of complex networks such as electric power grids, distributed control networks, and autonomous swarming vehicles. We describe a method to find and analyze all of the possible cluster synchronization patterns in a Laplacian-coupled network, by applying methods of computational group theory to dynamically equivalent networks. We present a general technique to evaluate the stability of each of the dynamically valid cluster synchronization patterns. Our results are validated in an optoelectronic experiment on a five-node network that confirms the synchronization patterns predicted by the theory.

  4. Controlling Contagion Processes in Activity Driven Networks

    NASA Astrophysics Data System (ADS)

    Liu, Suyu; Perra, Nicola; Karsai, Márton; Vespignani, Alessandro

    2014-03-01

    The vast majority of strategies aimed at controlling contagion processes on networks consider the connectivity pattern of the system either quenched or annealed. However, in the real world, many networks are highly dynamical and evolve, in time, concurrently with the contagion process. Here, we derive an analytical framework for the study of control strategies specifically devised for a class of time-varying networks, namely activity-driven networks. We develop a block variable mean-field approach that allows the derivation of the equations describing the coevolution of the contagion process and the network dynamic. We derive the critical immunization threshold and assess the effectiveness of three different control strategies. Finally, we validate the theoretical picture by simulating numerically the spreading process and control strategies in both synthetic networks and a large-scale, real-world, mobile telephone call data set.
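A minimal sketch of the class of models the abstract analyzes: an activity-driven temporal network, where each node fires with its own activity rate and wires a few short-lived links per step, with a simple SIS contagion running over the instantaneous links. The activity distribution, seeding, and all parameters here are assumptions for illustration, not the paper's calibrated setup.

```python
import random

def activity_driven_sis(n, m, beta, mu, steps, seed=0):
    # activity-driven temporal network: at every step each node activates
    # with its own probability and creates m temporary links; an SIS
    # contagion spreads over the links of that step only
    rng = random.Random(seed)
    activity = [rng.uniform(0.05, 0.5) for _ in range(n)]   # assumed distribution
    infected = set(rng.sample(range(n), n // 20))           # 5% seed
    prevalence = []
    for _ in range(steps):
        links = []
        for i in range(n):
            if rng.random() < activity[i]:
                links.extend((i, rng.randrange(n)) for _ in range(m))
        newly = set()
        for a, b in links:
            for u, v in ((a, b), (b, a)):  # links transmit both ways
                if u in infected and v not in infected and rng.random() < beta:
                    newly.add(v)
        infected |= newly
        infected = {v for v in infected if rng.random() > mu}  # recoveries
        prevalence.append(len(infected) / n)
    return prevalence

prev = activity_driven_sis(n=500, m=3, beta=0.1, mu=0.02, steps=50)
```

A control strategy such as immunization can be dropped into this loop by removing the chosen nodes from the transmission step, which is the kind of intervention the analytical framework above evaluates.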

  5. Neural networks for vertical microcode compaction

    NASA Astrophysics Data System (ADS)

    Chu, Pong P.

    1992-09-01

    Neural networks provide an alternative way to solve complex optimization problems. Instead of performing a program of instructions sequentially as in a traditional computer, a neural network model explores many competing hypotheses simultaneously using its massively parallel net. The paper shows how to use the neural network approach to perform vertical microcode compaction for a micro-programmed control unit. The compaction procedure includes two basic steps: the first determines the compatibility classes, and the second selects a minimal subset to cover the control signals. Since the selection process is an NP-complete problem, finding an optimal solution is impractical. In this study, we employ a customized neural network to obtain the minimal subset. We first formalize this problem, then define an 'energy function' and map it to a two-layer fully connected neural network. The modified network has two types of neurons and can always obtain a valid solution.
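The covering step can be illustrated with the classic greedy set-cover heuristic, a conventional stand-in baseline rather than the paper's customized neural network; the signal and compatibility-class names below are invented for the example.

```python
def greedy_cover(universe, subsets):
    # classic greedy approximation to minimum set cover: repeatedly pick
    # the subset covering the most still-uncovered elements
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            raise ValueError("universe not coverable by the given subsets")
        chosen.append(best)
        uncovered -= best
    return chosen

# hypothetical control signals and compatibility classes
signals = {"s1", "s2", "s3", "s4", "s5"}
classes = [{"s1", "s2"}, {"s2", "s3", "s4"}, {"s4", "s5"}, {"s1", "s5"}]
cover = greedy_cover(signals, classes)
```

Greedy gives a logarithmic approximation guarantee; the energy-function network described in the paper instead searches the solution space in parallel and is designed to always land on a valid cover.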

  6. Forecasting short-term data center network traffic load with convolutional neural networks.

    PubMed

    Mozo, Alberto; Ordozgoiti, Bruno; Gómez-Canaval, Sandra

    2018-01-01

    Efficient resource management in data centers is of central importance to content service providers, as 90 percent of the network traffic is expected to go through them in the coming years. In this context we propose the use of convolutional neural networks (CNNs) to forecast short-term changes in the amount of traffic crossing a data center network. This value is an indicator of virtual machine activity and can be utilized to shape the data center infrastructure accordingly. The behaviour of network traffic at the seconds scale is highly chaotic, and therefore traditional time-series-analysis approaches such as ARIMA fail to obtain accurate forecasts. We show that our convolutional neural network approach can exploit the non-linear regularities of network traffic, providing significant improvements with respect to the mean absolute and standard deviation of the data, and outperforming ARIMA by an increasingly significant margin as the forecasting granularity rises above the 16-second resolution. In order to increase the accuracy of the forecasting model, we exploit the architecture of the CNNs using multiresolution input distributed among separate channels of the first convolutional layer. We validate our approach with an extensive set of experiments using a data set collected at the core network of an Internet Service Provider over a period of 5 months, totalling 70 days of traffic at the one-second resolution.
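The multiresolution input mentioned above can be sketched as the same traffic series aggregated at several temporal resolutions, one list per prospective channel of the first convolutional layer. The resolutions and the stand-in data are illustrative assumptions; the paper's actual preprocessing may differ.

```python
def multiresolution_channels(series, resolutions):
    # aggregate the same series at several temporal resolutions by
    # averaging non-overlapping windows of r consecutive samples
    channels = []
    for r in resolutions:
        agg = [sum(series[i:i + r]) / r
               for i in range(0, len(series) - r + 1, r)]
        channels.append(agg)
    return channels

traffic = [float(i % 10) for i in range(64)]  # stand-in for 1-second samples
chans = multiresolution_channels(traffic, resolutions=[1, 4, 16])
```

Feeding coarse channels alongside the raw series lets a CNN see both second-scale fluctuations and slower trends in one input tensor.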

  8. Controlling extreme events on complex networks

    NASA Astrophysics Data System (ADS)

    Chen, Yu-Zhong; Huang, Zi-Gang; Lai, Ying-Cheng

    2014-08-01

    Extreme events, a type of collective behavior in complex networked dynamical systems, often can have catastrophic consequences. To develop effective strategies to control extreme events is of fundamental importance and practical interest. Utilizing transportation dynamics on complex networks as a prototypical setting, we find that making the network ``mobile'' can effectively suppress extreme events. A striking, resonance-like phenomenon is uncovered, where an optimal degree of mobility exists for which the probability of extreme events is minimized. We derive an analytic theory to understand the mechanism of control at a detailed and quantitative level, and validate the theory numerically. Implications of our finding to current areas such as cybersecurity are discussed.

  9. Description of the CERES Ocean Validation Experiment (COVE), A Dedicated EOS Validation Test Site

    NASA Astrophysics Data System (ADS)

    Rutledge, K.; Charlock, T.; Smith, B.; Jin, Z.; Rose, F.; Denn, F.; Rutan, D.; Haeffelin, M.; Su, W.; Xhang, T.; Jay, M.

    2001-12-01

    A unique test site located in mid-Atlantic coastal marine waters has been used by several EOS projects for validation measurements. A common theme across these projects is the need for a stable measurement site within the marine environment for long-term, high-quality radiation measurements. The site was initiated by NASA's Clouds and the Earth's Radiant Energy System (CERES) project. One of CERES's challenging goals is to provide upwelling and downwelling shortwave fluxes at several pressure altitudes within the atmosphere and at the surface. Operationally, the radiative-transfer model of Fu and Liou (1996, 1998), CERES instrument-measured radiances, and various other EOS platform data are being used to accomplish this goal. We present here a component of the CERES/EOS validation effort focused on verifying and optimizing the prediction algorithms for radiation parameters associated with the marine coastal and oceanic surface types of the planet. For this validation work, the CERES Ocean Validation Experiment (COVE) was developed to provide detailed high-frequency and long-duration measurements of radiation and its associated dependent variables. The CERES validations also include analytical efforts which will not be described here (but see Charlock et al., Su et al., and Smith et al., Fall 2001 AGU Meeting). The COVE activity is based on a rigid ocean platform located approximately twenty kilometers off the coast of Virginia Beach, Virginia. The once-manned US Coast Guard facility rises 35 meters from the ocean surface, allowing the radiation instruments to be well above the splash zone. The depth of the sea is eleven meters at the site. A power and communications system has been installed for present and future requirements. Scientific measurements at the site have primarily been developed within the framework of established national and international monitoring programs. These include the Baseline Surface Radiation Network of the World

  10. Validation of artificial neural network models for predicting biochemical markers associated with male infertility.

    PubMed

    Vickram, A S; Kamini, A Rao; Das, Raja; Pathy, M Ramesh; Parameswari, R; Archana, K; Sridharan, T B

    2016-08-01

    Seminal fluid is the secretion from many glands, comprising several organic and inorganic compounds including free amino acids, proteins, fructose, glucosidase, zinc, and other scavenging elements like Mg(2+), Ca(2+), K(+), and Na(+). Therefore, in view of the development of novel approaches and proper diagnosis of male infertility, an overall understanding of the biochemical and molecular composition and its role in regulation of sperm quality is highly desirable. Perhaps this can be achieved through artificial intelligence. This study was aimed to elucidate and predict various biochemical markers present in human seminal plasma with three different neural network models. A total of 177 semen samples were collected for this research (both fertile and infertile samples) and immediately processed to prepare a semen analysis report, based on the protocol of the World Health Organization (WHO [2010]). The semen samples were then categorized into oligoasthenospermia (n=35), asthenospermia (n=35), azoospermia (n=22), normospermia (n=34), oligospermia (n=34), and control (n=17). The major biochemical parameters like total protein content, fructose, glucosidase, and zinc content were elucidated by standard protocols. All the biochemical markers were predicted by using three different artificial neural network (ANN) models with semen parameters as inputs. Of the three models, the back propagation neural network model (BPNN) yielded the best results, with mean absolute error of 0.025, -0.080, 0.166, and -0.057 for protein, fructose, glucosidase, and zinc, respectively. This suggests that BPNN can be used to predict biochemical parameters for the proper diagnosis of male infertility in assisted reproductive technology (ART) centres. AAS: absorption spectroscopy; AI: artificial intelligence; ANN: artificial neural networks; ART: assisted reproductive technology; BPNN: back propagation neural network model; DT: decision trees; MLP: multilayer perceptron; PESA: percutaneous

  11. Quantum key distribution network for multiple applications

    NASA Astrophysics Data System (ADS)

    Tajima, A.; Kondoh, T.; Ochi, T.; Fujiwara, M.; Yoshino, K.; Iizuka, H.; Sakamoto, T.; Tomita, A.; Shimamura, E.; Asami, S.; Sasaki, M.

    2017-09-01

    The fundamental architecture and functions of secure key management in a quantum key distribution (QKD) network with enhanced universal interfaces for smooth key sharing between arbitrary two nodes and enabling multiple secure communication applications are proposed. The proposed architecture consists of three layers: a quantum layer, key management layer and key supply layer. We explain the functions of each layer, the key formats in each layer and the key lifecycle for enabling a practical QKD network. A quantum key distribution-advanced encryption standard (QKD-AES) hybrid system and an encrypted smartphone system were developed as secure communication applications on our QKD network. The validity and usefulness of these systems were demonstrated on the Tokyo QKD Network testbed.

  12. Mutually cooperative epidemics on power-law networks

    NASA Astrophysics Data System (ADS)

    Cui, Peng-Bi; Colaiori, Francesca; Castellano, Claudio

    2017-08-01

    The spread of an infectious disease can, in some cases, promote the propagation of other pathogens favoring violent outbreaks, which cause a discontinuous transition to an endemic state. The topology of the contact network plays a crucial role in these cooperative dynamics. We consider a susceptible-infected-removed-type model with two mutually cooperative pathogens: An individual already infected with one disease has an increased probability of getting infected by the other. We present a heterogeneous mean-field theoretical approach to the coinfection dynamics on generic uncorrelated power-law degree-distributed networks and validate its results by means of numerical simulations. We show that, when the second moment of the degree distribution is finite, the epidemic transition is continuous for low cooperativity, while it is discontinuous when cooperativity is sufficiently high. For scale-free networks, i.e., topologies with diverging second moment, the transition is instead always continuous. In this way we clarify the effect of heterogeneity and system size on the nature of the transition, and we validate the physical interpretation about the origin of the discontinuity.
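The cooperative mechanism described above can be sketched with a toy simulation: two SIR pathogens on a random graph, where carrying one disease multiplies the per-step probability of catching the other. The graph model, update rules, and every parameter below are illustrative assumptions, far simpler than the heterogeneous mean-field theory in the abstract.

```python
import random

def cooperative_sir(n, k, beta, boost, steps=100, seed=0):
    # two mutually cooperative SIR pathogens: being infected (or recovered)
    # with one disease multiplies susceptibility to the other by `boost`
    rng = random.Random(seed)
    neigh = [set() for _ in range(n)]
    while sum(len(s) for s in neigh) < n * k:  # random graph, ~k links/node
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            neigh[a].add(b)
            neigh[b].add(a)
    state = [[0, 0] for _ in range(n)]  # per disease: 0=S, 1=I, 2=R
    for d in (0, 1):
        state[rng.randrange(n)][d] = 1  # one seed per disease
    for _ in range(steps):
        for v in range(n):           # asynchronous update; one infection
            for d in (0, 1):         # trial per node and disease, however
                if state[v][d] != 0: # many neighbours are infectious
                    continue
                p = beta * (boost if state[v][1 - d] != 0 else 1.0)
                if any(state[u][d] == 1 for u in neigh[v]) and rng.random() < p:
                    state[v][d] = 1
        for v in range(n):
            for d in (0, 1):
                if state[v][d] == 1 and rng.random() < 0.3:
                    state[v][d] = 2  # recover, immune thereafter
    # fraction of nodes reached by at least one disease
    return sum(1 for v in range(n) if 2 in state[v]) / n

attack_rate = cooperative_sir(n=200, k=4, beta=0.2, boost=3.0)
```

Sweeping `beta` for a large `boost` and watching whether the final attack rate jumps or grows smoothly mirrors the continuous-versus-discontinuous transition question studied in the paper.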

  13. A Predictive Approach to Network Reverse-Engineering

    NASA Astrophysics Data System (ADS)

    Wiggins, Chris

    2005-03-01

    A central challenge of systems biology is the ``reverse engineering'' of transcriptional networks: inferring which genes exert regulatory control over which other genes. Attempting such inference at the genomic scale has only recently become feasible, via data-intensive biological innovations such as DNA microarrays (``DNA chips'') and the sequencing of whole genomes. In this talk we present a predictive approach to network reverse-engineering, in which we integrate DNA chip data and sequence data to build a model of the transcriptional network of the yeast S. cerevisiae capable of predicting the response of genes in unseen experiments. The technique can also be used to extract ``motifs,'' sequence elements which act as binding sites for regulatory proteins. We validate by a number of approaches and present comparisons of theoretical predictions vs. experimental data, along with biological interpretations of the resulting model. En route, we will illustrate some basic notions in statistical learning theory (fitting vs. over-fitting; cross-validation; assessing statistical significance), highlighting ways in which physicists can make a unique contribution to data-driven approaches to reverse engineering.

  14. Social Network Mapping: A New Tool For The Leadership Toolbox

    DTIC Science & Technology

    2002-04-01

    SOCIAL NETWORK MAPPING: A NEW TOOL FOR THE LEADERSHIP TOOLBOX. By Elisabeth J. Strines, Colonel, USAF, 8037 Washington Road, Alexandria... Report date: April 2002. The report describes the concept of social network mapping and demonstrates how it can be used by squadron commanders and leaders at all levels to provide subtle

  15. Automatic Network Fingerprinting through Single-Node Motifs

    PubMed Central

    Echtermeyer, Christoph; da Fontoura Costa, Luciano; Rodrigues, Francisco A.; Kaiser, Marcus

    2011-01-01

    Complex networks have been characterised by their specific connectivity patterns (network motifs), but their building blocks can also be identified and described by node-motifs—a combination of local network features. One technique to identify single node-motifs has been presented by Costa et al. (L. D. F. Costa, F. A. Rodrigues, C. C. Hilgetag, and M. Kaiser, Europhys. Lett., 87, 1, 2009). Here, we first suggest improvements to the method including how its parameters can be determined automatically. Such automatic routines make high-throughput studies of many networks feasible. Second, the new routines are validated in different network-series. Third, we provide an example of how the method can be used to analyse network time-series. In conclusion, we provide a robust method for systematically discovering and classifying characteristic nodes of a network. In contrast to classical motif analysis, our approach can identify individual components (here: nodes) that are specific to a network. Such special nodes, as hubs before, might be found to play critical roles in real-world networks. PMID:21297963

  16. Active distribution network planning considering linearized system loss

    NASA Astrophysics Data System (ADS)

    Li, Xiao; Wang, Mingqiang; Xu, Hao

    2018-02-01

    In this paper, various distribution network planning techniques with DGs are reviewed, and a new distribution network planning method is proposed. It assumes that the locations of DGs and the topology of the network are fixed. The proposed model optimizes the capacities of the DGs and the optimal distribution line capacities simultaneously by a cost/benefit analysis, where the benefit is quantified by the reduction of the expected interruption cost. In addition, the network loss is explicitly analyzed. For simplicity, the network loss is approximated as a quadratic function of the difference of voltage phase angles and is then piecewise linearized; a piecewise linearization technique with different segment lengths is proposed. To validate its effectiveness and superiority, the proposed distribution network planning model with the elaborated linearization technique is tested on the IEEE 33-bus distribution network system.
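The linearization step can be sketched as secant segments over chosen breakpoints, which naturally allows unequal segment lengths. The quadratic loss stand-in and the breakpoints below are illustrative assumptions, not the paper's calibrated model.

```python
def piecewise_linearize(f, lo, hi, breakpoints):
    # approximate f on [lo, hi] by secant segments over the given
    # interior breakpoints; segment lengths need not be equal
    xs = [lo] + sorted(breakpoints) + [hi]
    segments = []
    for x0, x1 in zip(xs, xs[1:]):
        slope = (f(x1) - f(x0)) / (x1 - x0)
        segments.append((x0, x1, slope, f(x0) - slope * x0))
    return segments

def evaluate(segments, x):
    # value of the piecewise-linear approximation at x
    for x0, x1, m, c in segments:
        if x0 <= x <= x1:
            return m * x + c
    raise ValueError("x outside the linearized range")

loss = lambda d: d * d  # quadratic loss in the voltage-angle difference d
segs = piecewise_linearize(loss, 0.0, 1.0, breakpoints=[0.2, 0.5])
```

Because the loss is convex, each secant lies on or above the true curve, so shorter segments where the curve bends most (here, a denser grid near 1.0) buy accuracy where it matters while keeping the optimization model linear.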

  17. Pinning impulsive control algorithms for complex network

    NASA Astrophysics Data System (ADS)

    Sun, Wen; Lü, Jinhu; Chen, Shihua; Yu, Xinghuo

    2014-03-01

    In this paper, we further investigate the synchronization of complex dynamical networks via pinning control, in which a selection of nodes is controlled at discrete times. Different from most existing work, the pinning control algorithms utilize only impulsive signals at discrete time instants, which may greatly improve communication channel efficiency and reduce control cost. Two classes of algorithms are designed, one for strongly connected complex networks and another for non-strongly connected complex networks. It is suggested that in a strongly connected network with suitable coupling strength, a single controller at any one of the network's nodes can always pin the network to its homogeneous solution. In the non-strongly connected case, the location and minimum number of nodes needed to pin the network are determined by the Frobenius normal form of the coupling matrix. In addition, the coupling matrix is not necessarily symmetric or irreducible. Illustrative examples are then given to validate the proposed pinning impulsive control algorithms.

  18. Complex Dynamics of Delay-Coupled Neural Networks

    NASA Astrophysics Data System (ADS)

    Mao, Xiaochen

    2016-09-01

    This paper reveals the complicated dynamics of a delay-coupled system that consists of a pair of sub-networks and multiple bidirectional couplings. Time delays are introduced into the internal connections and network-couplings, respectively. The stability and instability of the coupled network are discussed. The sufficient conditions for the existence of oscillations are given. Case studies of numerical simulations are given to validate the analytical results. Interesting and complicated neuronal activities are observed numerically, such as rest states, periodic oscillations, multiple switches of rest states and oscillations, and the coexistence of different types of oscillations.

  19. Rapid Sampling of Hydrogen Bond Networks for Computational Protein Design.

    PubMed

    Maguire, Jack B; Boyken, Scott E; Baker, David; Kuhlman, Brian

    2018-05-08

    Hydrogen bond networks play a critical role in determining the stability and specificity of biomolecular complexes, and the ability to design such networks is important for engineering novel structures, interactions, and enzymes. One key feature of hydrogen bond networks that makes them difficult to rationally engineer is that they are highly cooperative and are not energetically favorable until the hydrogen bonding potential has been satisfied for all buried polar groups in the network. Existing computational methods for protein design are ill-equipped for creating these highly cooperative networks because they rely on energy functions and sampling strategies that are focused on pairwise interactions. To enable the design of complex hydrogen bond networks, we have developed a new sampling protocol in the molecular modeling program Rosetta that explicitly searches for sets of amino acid mutations that can form self-contained hydrogen bond networks. For a given set of designable residues, the protocol often identifies many alternative sets of mutations/networks, and we show that it can readily be applied to large sets of residues at protein-protein interfaces or in the interior of proteins. The protocol builds on a recently developed method in Rosetta for designing hydrogen bond networks that has been experimentally validated for small symmetric systems but was not extensible to many larger protein structures and complexes. The sampling protocol we describe here not only recapitulates previously validated designs with performance improvements but also yields viable hydrogen bond networks for cases where the previous method fails, such as the design of large, asymmetric interfaces relevant to engineering protein-based therapeutics.

  20. Integrative network alignment reveals large regions of global network similarity in yeast and human.

    PubMed

    Kuchaiev, Oleksii; Przulj, Natasa

    2011-05-15

    High-throughput methods for detecting molecular interactions have produced large sets of biological network data, with much more yet to come. Analogous to sequence alignment, efficient and reliable network alignment methods are expected to improve our understanding of biological systems. Unlike sequence alignment, network alignment is computationally intractable. Hence, devising efficient network alignment heuristics is currently a foremost challenge in computational biology. We introduce a novel network alignment algorithm, called Matching-based Integrative GRAph ALigner (MI-GRAAL), which can integrate any number and type of similarity measures between network nodes (e.g. proteins), including, but not limited to, any topological network similarity measure, sequence similarity, functional similarity and structural similarity. Hence, we resolve ties in similarity measures and find a combination of similarity measures yielding the largest contiguous (i.e. connected) and biologically sound alignments. MI-GRAAL exposes the largest functional, connected regions of protein-protein interaction (PPI) network similarity to date: surprisingly, it reveals that 77.7% of proteins in the baker's yeast high-confidence PPI network participate in such a subnetwork that is fully contained in the human high-confidence PPI network. This is the first demonstration that species as diverse as yeast and human contain such large, contiguous regions of global network similarity. We apply MI-GRAAL's alignments to predict functions of un-annotated proteins in yeast, human and bacteria, validating our predictions against the literature. Furthermore, using network alignment scores for PPI networks of different herpes viruses, we reconstruct their phylogenetic relationship. This is the first time that phylogeny has been exactly reconstructed from purely topological alignments of PPI networks. Supplementary files and MI-GRAAL executables: http://bio-nets.doc.ic.ac.uk/MI-GRAAL/.

  1. In Silico Enhancing M. tuberculosis Protein Interaction Networks in STRING To Predict Drug-Resistance Pathways and Pharmacological Risks.

    PubMed

    Mei, Suyu

    2018-05-04

    Bacterial protein-protein interaction (PPI) networks are significant to reveal the machinery of signal transduction and drug resistance within bacterial cells. The database STRING has collected a large number of bacterial pathogen PPI networks, but most of the data are of low quality without being experimentally or computationally validated, thus restricting its further biomedical applications. We exploit the experimental data via four solutions to enhance the quality of M. tuberculosis H37Rv (MTB) PPI networks in STRING. Computational results show that the experimental data derived jointly by two-hybrid and copurification approaches are the most reliable to train an L2-regularized logistic regression model for MTB PPI network validation. On the basis of the validated MTB PPI networks, we further study three problems via a breadth-first graph-search algorithm: (1) discovery of MTB drug-resistance pathways through searching for the paths between known drug-target genes and drug-resistance genes; (2) choosing potential cotarget genes via searching for the critical genes located on multiple pathways; and (3) choosing essential drug-target genes via analysis of network degree distribution. In addition, we further combine the validated MTB PPI networks with human PPI networks to analyze the potential pharmacological risks of known and candidate drug-target genes from the point of view of systems pharmacology. The evidence from protein structure alignment demonstrates that drugs that act on MTB target genes could also adversely act on human signaling pathways.
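The pathway-search step can be sketched with a plain breadth-first search between a drug-target gene and a drug-resistance gene. The toy adjacency below borrows real MTB gene names purely as illustrative labels; the links are invented and do not represent actual interactions.

```python
from collections import deque

def shortest_path(adj, source, target):
    # breadth-first search; returns one shortest path from source to
    # target as a list of nodes, or None if no path exists
    prev = {source: None}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        if v == target:
            path = []
            while v is not None:
                path.append(v)
                v = prev[v]
            return path[::-1]
        for u in adj.get(v, ()):
            if u not in prev:
                prev[u] = v
                queue.append(u)
    return None

# hypothetical toy network; gene names are illustrative labels only
adj = {"katG": {"inhA"}, "inhA": {"rpoB", "ndh"}, "rpoB": {"embB"}, "ndh": set()}
path = shortest_path(adj, "katG", "embB")
```

Running this between every (drug-target, drug-resistance) gene pair and counting how often each intermediate node appears is one way to surface the multi-pathway "critical genes" the abstract mentions.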

  2. Structural reducibility of multilayer networks

    NASA Astrophysics Data System (ADS)

    de Domenico, Manlio; Nicosia, Vincenzo; Arenas, Alexandre; Latora, Vito

    2015-04-01

    Many complex systems can be represented as networks consisting of distinct types of interactions, which can be categorized as links belonging to different layers. For example, a good description of the full protein-protein interactome requires, for some organisms, up to seven distinct network layers, accounting for different genetic and physical interactions, each containing thousands of protein-protein relationships. A fundamental open question is then how many layers are indeed necessary to accurately represent the structure of a multilayered complex system. Here we introduce a method based on quantum theory to reduce the number of layers to a minimum while maximizing the distinguishability between the multilayer network and the corresponding aggregated graph. We validate our approach on synthetic benchmarks and we show that the number of informative layers in some real multilayer networks of protein-genetic interactions, social, economic, and transportation systems can be reduced by up to 75%.

  3. The GPM Ground Validation Program: Pre to Post-Launch

    NASA Astrophysics Data System (ADS)

    Petersen, W. A.

    2014-12-01

    NASA GPM Ground Validation (GV) activities have transitioned from the pre- to the post-launch era. Prior to launch, direct validation networks and associated partner institutions were identified worldwide, covering a plethora of precipitation regimes. In the U.S., direct GV efforts focused on use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch era, MRMS products including precipitation rate, types, and data quality are being routinely generated to facilitate statistical GV of instantaneous and merged GPM products. To assess precipitation column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM microwave imager data is performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer level-II data. When combined, MRMS and VN datasets enable more comprehensive interpretation of ground-satellite estimation uncertainties. To support physical validation efforts, eight field campaigns were conducted in the pre-launch era and one in the post-launch era. The campaigns span regimes from northern-latitude cold-season snow to warm tropical rain. Most recently, the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, processes, and hydrology in the orographic and oceanic domains of western Washington State.
Finally, continuous direct and physical validation measurements are also being conducted at the NASA Wallops Flight Facility multi

  4. Exploration of the integration of care for persons with a traumatic brain injury using social network analysis methodology.

    PubMed

    Lamontagne, Marie-Eve

    2013-01-01

    Integration is a popular strategy to increase the quality of care within systems of care. However, there is no common language, approach, or tool allowing for a valid description, comparison, and evaluation of integrated care. Social network analysis could be a viable methodology to provide an objective picture of integrated networks. To illustrate the use of social network analysis in the context of systems of care for traumatic brain injury, we surveyed members of a network using a validated questionnaire to determine the links between them. We determined the density, centrality, multiplexity, and quality of the reported links. The network was described as moderately dense (0.6), the most prevalent link was knowledge, and four member organisations of a consortium were central to the network. Social network analysis allowed us to create a graphic representation of the network. Social network analysis is a useful methodology to objectively characterise integrated networks.
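Two of the measures reported above have direct closed forms; a minimal sketch for the density and degree centrality of an undirected network, with a toy edge list assumed for illustration:

```python
def density(n_nodes, edges):
    # density: observed links divided by the n*(n-1)/2 possible links
    possible = n_nodes * (n_nodes - 1) / 2
    return len(edges) / possible

def degree_centrality(n_nodes, edges):
    # normalized degree: links per node divided by the n-1 possible partners
    deg = [0] * n_nodes
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return [d / (n_nodes - 1) for d in deg]

# toy 4-organisation network (node 0 plays the central role)
edges = [(0, 1), (0, 2), (0, 3), (1, 2)]
d = density(4, edges)
c = degree_centrality(4, edges)
```

On this toy graph the density comes out to about 0.67, in the same "moderately dense" range the study reports, and node 0's centrality of 1.0 marks it as the kind of central member organisation the survey identified.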

  5. Three-dimensional evidence network plot system: covariate imbalances and effects in network meta-analysis explored using a new software tool.

    PubMed

    Batson, Sarah; Score, Robert; Sutton, Alex J

    2017-06-01

    The aim of the study was to develop the three-dimensional (3D) evidence network plot system, a novel web-based interactive 3D tool to facilitate the visualization and exploration of covariate distributions and imbalances across evidence networks for network meta-analysis (NMA). We developed the system within an AngularJS environment, using a third-party JavaScript library (Three.js) to create the 3D element of the application. Data used to create the 3D element for a particular topic are input via a Microsoft Excel template spreadsheet specifically formatted to hold these data. We display and discuss the findings of applying the tool to two NMA examples considering multiple covariates. These two examples have previously been identified as having potentially important covariate effects and allow us to document the various features of the tool while illustrating how it can be used. The 3D evidence network plot system provides an immediate, intuitive, and accessible way to assess the similarity and differences between the values of covariates for individual studies within and between each treatment contrast in an evidence network. In this way, differences between studies that may invalidate the usual assumptions of an NMA can be identified for further scrutiny. Hence, the tool facilitates NMA feasibility/validity assessments and aids in the interpretation of NMA results. The 3D evidence network plot system is the first tool designed specifically to visualize covariate distributions and imbalances across evidence networks in 3D. It will be of primary interest to systematic review and meta-analysis researchers and, more generally, those assessing the validity and robustness of an NMA to inform reimbursement decisions.

  6. Macrostructure from Microstructure: Generating Whole Systems from Ego Networks

    PubMed Central

    Smith, Jeffrey A.

    2014-01-01

    This paper presents a new simulation method for making global network inferences from sampled data. The proposed method takes sampled ego network data and uses Exponential Random Graph Models (ERGMs) to reconstruct the features of the true, unknown network. After describing the method, the paper presents two validity checks of the approach: the first uses the 20 largest Add Health networks, while the second uses the Sociology Coauthorship network in the 1990s. For each test, I take random ego network samples from the known networks and use my method to make global network inferences. I find that my method successfully reproduces the properties of the networks, such as distance and main component size. The results also suggest that simpler, baseline models provide considerably worse estimates for most network properties. I end the paper by discussing the bounds/limitations of ego network sampling. I also discuss possible extensions of the proposed approach. PMID:25339783

  7. A cross-validation package driving Netica with python

    USGS Publications Warehouse

    Fienen, Michael N.; Plant, Nathaniel G.

    2014-01-01

    Bayesian networks (BNs) are powerful tools for probabilistically simulating natural systems and emulating process models. Cross-validation is a technique to avoid the overfitting that results from overly complex BNs; overfitting reduces predictive skill. Cross-validation for BNs is known but rarely implemented, due partly to a lack of software tools designed to work with available BN packages. CVNetica is open-source, written in Python, and extends the Netica software package to perform cross-validation and to read, rebuild, and learn BNs from data. Insights gained from cross-validation, and its implications for prediction versus description, are illustrated with two examples: a data-driven oceanographic application and a model-emulation application. These examples show that overfitting occurs when BNs become more complex than the supporting data allow, and that overfitting incurs computational costs in addition to reducing prediction skill. CVNetica evaluates overfitting using several complexity metrics (we used level of discretization) and its impact on performance metrics (we used skill).
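CVNetica itself drives the Netica API, which is not reproduced here. As a minimal sketch of the underlying idea, substituting plain polynomial regression for a Bayesian network (all data synthetic), k-fold cross-validation exposes the loss of held-out skill that comes with excess model complexity:

```python
import numpy as np

# Synthetic noisy data from a smooth underlying signal.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 30))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)

# Fixed 5-fold split of the sample indices.
folds = np.array_split(rng.permutation(x.size), 5)

def cv_mse(degree):
    # Average held-out mean squared error over the five folds.
    errs = []
    for test in folds:
        train = np.setdiff1d(np.arange(x.size), test)
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[test]) - y[test]) ** 2))
    return float(np.mean(errs))

# The high-degree model tracks training noise and loses skill on held-out folds.
simple_err, complex_err = cv_mse(3), cv_mse(15)
```

The complexity metric here is polynomial degree; in CVNetica's setting it would be, e.g., the level of discretization of the BN's nodes, but the diagnostic pattern is the same.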

  8. On the stochastic dissemination of faults in an admissible network

    NASA Technical Reports Server (NTRS)

    Kyrala, A.

    1987-01-01

    The dynamic distribution of faults in a network of general type is discussed. The starting point is a uniquely branched network in which each pair of nodes is connected by a single branch. Mathematical expressions for the uniquely branched network transition matrix are derived to show that sufficient stationarity exists to ensure the validity of using the Markov chain model to analyze networks. In addition, the conditions for the use of semi-Markov models are discussed. General mathematical expressions are derived in an examination of branch redundancy techniques commonly used to increase reliability.
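The Markov-chain treatment of fault dissemination can be illustrated with a toy transition matrix (the numbers below are hypothetical, not derived from the paper): under stationarity, repeated application of the matrix drives any initial fault distribution to the same long-run distribution.

```python
import numpy as np

# Toy fault-propagation chain: states are the number of faulty branches
# (0, 1, or 2); P[i, j] is the one-step probability of moving from i to j.
P = np.array([
    [0.90, 0.10, 0.00],
    [0.05, 0.85, 0.10],
    [0.00, 0.20, 0.80],
])
assert np.allclose(P.sum(axis=1), 1.0)  # each row is a probability distribution

# Start with a fault-free network and iterate the chain to its stationary
# distribution (here the exact fixed point is [0.25, 0.5, 0.25]).
dist = np.array([1.0, 0.0, 0.0])
for _ in range(500):
    dist = dist @ P
```

The stationary vector solves pi @ P = pi; iterating from any starting distribution converges to it, which is the property that licenses the Markov-chain analysis.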

  9. Network switching strategy for energy conservation in heterogeneous networks.

    PubMed

    Song, Yujae; Choi, Wooyeol; Baek, Seungjae

    2017-01-01

    In heterogeneous networks (HetNets), the large-scale deployment of small base stations (BSs) together with traditional macro BSs is an economical and efficient solution employed to address the exponential growth in mobile data traffic. In dense HetNets, network switching, i.e., handover, plays a critical role in connecting a mobile terminal (MT) to the best of all accessible networks. In the existing literature, a handover decision is made using various handover metrics such as the signal-to-noise ratio, data rate, and movement speed. However, there are few studies on handovers that focus on energy efficiency in HetNets. In this paper, we propose a handover strategy that helps to minimize energy consumption at BSs in HetNets without compromising the quality of service (QoS) of each MT. The proposed strategy captures the stochastic behavior of handover parameters and the expected energy consumption of handover execution when making a handover decision. To establish its validity, we formulate the handover problem as a constrained Markov decision process (CMDP) in which these stochastic effects and the consequent handover energy consumption are accurately reflected. In the CMDP, the aim is to minimize the energy consumed to serve an MT over the lifetime of its connection, and the constraint is to guarantee the QoS requirements of the MT, given in terms of the transmission delay and call-dropping probability. We find an optimal policy for the CMDP using a combination of the Lagrangian method and value iteration. Simulation results verify the validity of the proposed handover strategy.
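The paper solves its CMDP with the Lagrangian method plus value iteration; as a much-reduced sketch, the unconstrained core of value iteration on a hypothetical two-network handover MDP (every cost and probability below is invented for illustration) looks like this:

```python
# Toy handover MDP (all numbers hypothetical): the MT is served by either a
# macro or a small BS; the small BS is cheaper per step, switching costs a
# one-off energy penalty, and a switch to the small BS only "sticks" with
# some probability (coverage). Value iteration finds the minimum-cost policy.
states = ["macro", "small"]
actions = ["stay", "switch"]
energy = {"macro": 2.0, "small": 1.0}   # per-step serving energy
switch_cost = 1.5                        # one-off handover energy
p_small_ok = 0.7                         # prob. a move to the small BS sticks

def step_cost(s, a):
    return energy[s] + (switch_cost if a == "switch" else 0.0)

def transitions(s, a):
    # List of (probability, next_state) pairs.
    target = s if a == "stay" else ("small" if s == "macro" else "macro")
    if target == "small":
        return [(p_small_ok, "small"), (1 - p_small_ok, "macro")]
    return [(1.0, "macro")]

def q(s, a, V, gamma=0.9):
    # Expected discounted cost of taking action a in state s.
    return step_cost(s, a) + gamma * sum(p * V[t] for p, t in transitions(s, a))

V = {s: 0.0 for s in states}
for _ in range(500):                     # iterate the Bellman operator
    V = {s: min(q(s, a, V) for a in actions) for s in states}

policy = {s: min(actions, key=lambda a, s=s: q(s, a, V)) for s in states}
```

The paper's constrained version additionally folds the QoS constraints into the cost via a Lagrange multiplier before running the same iteration, which is omitted here.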

  10. Systematic network assessment of the carcinogenic activities of cadmium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Peizhan; Duan, Xiaohua; Li, Mian

    Cadmium has been defined as a type I carcinogen for humans, but the underlying mechanisms of its carcinogenic activity and its influence on protein-protein interactions in cells are not fully elucidated. The aim of the current study was to evaluate, systematically, the carcinogenic activity of cadmium with systems biology approaches. From a literature search of 209 studies performed with cellular models, 208 proteins influenced by cadmium exposure were identified. All of these had been assessed by Western blotting and were recognized as key nodes in network analyses. The protein-protein functional interaction networks were constructed with NetBox software and visualized with Cytoscape software. These cadmium-rewired genes were used to construct a scale-free, highly connected biological protein interaction network with 850 nodes and 8770 edges. Within the network, nine key modules were identified and 60 key signaling pathways, including the estrogen, RAS, PI3K-Akt, NF-κB, HIF-1α, Jak-STAT, and TGF-β signaling pathways, were significantly enriched. With breast, colorectal, and prostate cancer cellular models, we validated by Western blotting the key node genes in the network that had been previously reported or were inferred from the network, including STAT3, JNK, p38, SMAD2/3, P65, AKT1, and HIF-1α. These results suggest that the established network is robust and provides a systematic view of the carcinogenic activities of cadmium in humans. Highlights: • A cadmium-influenced network with 850 nodes and 8770 edges was established. • The cadmium-rewired gene network was scale-free and highly connected. • Nine modules were identified, and 60 key signaling pathways related to cadmium-induced carcinogenesis were found. • Key mediators in the network were validated in multiple cellular models.

  11. Modeling and control of magnetorheological fluid dampers using neural networks

    NASA Astrophysics Data System (ADS)

    Wang, D. H.; Liao, W. H.

    2005-02-01

    Due to the inherently nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects of utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, direct identification and inverse dynamic modeling for MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper online, on the basis of the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the developed neural network models are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can predict the damping force accurately, and that the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in a semi-active mode.

  12. GPM Ground Validation: Pre to Post-Launch Era

    NASA Astrophysics Data System (ADS)

    Petersen, Walt; Skofronick-Jackson, Gail; Huffman, George

    2015-04-01

    NASA GPM Ground Validation (GV) activities have transitioned from the pre- to the post-launch era. Prior to launch, direct validation networks and associated partner institutions were identified worldwide, covering a plethora of precipitation regimes. In the U.S., direct GV efforts focused on the use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch era, MRMS products including precipitation rate, accumulation, types, and data quality are being routinely generated to facilitate statistical GV of instantaneous (e.g., Level II orbit) and merged (e.g., IMERG) GPM products. Toward assessing precipitation-column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM Microwave Imager data is performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer level-II data. When combined, MRMS and VN datasets enable more comprehensive interpretation of both ground- and satellite-based estimation uncertainties. To support physical validation efforts, eight field campaigns were conducted in the pre-launch era and one in the post-launch era. The campaigns span regimes from northern-latitude cold-season snow to warm tropical rain. Most recently, the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, processes, and hydrology in the orographic and oceanic domains of western Washington State. Finally, continuous direct and physical validation

  13. A closer look at cross-validation for assessing the accuracy of gene regulatory networks and models.

    PubMed

    Tabe-Bordbar, Shayan; Emad, Amin; Zhao, Sihai Dave; Sinha, Saurabh

    2018-04-26

    Cross-validation (CV) is a technique to assess the generalizability of a model to unseen data. The technique relies on assumptions that may not be satisfied when studying genomics datasets. For example, random CV (RCV) assumes that a randomly selected set of samples, the test set, represents unseen data well. This assumption does not hold when samples are obtained from different experimental conditions and the goal is to learn regulatory relationships among the genes that generalize beyond the observed conditions. In this study, we investigated how the CV procedure affects the assessment of supervised learning methods used to learn gene regulatory networks (and in other applications). We compared the performance of a regression-based method for gene expression prediction estimated using RCV with that estimated using a clustering-based CV (CCV) procedure. Our analysis illustrates that RCV can produce over-optimistic estimates of a model's generalizability compared to CCV. Next, we defined the 'distinctness' of the test set from the training set and showed that this measure is predictive of the regression method's performance. Finally, we introduced a simulated-annealing method to construct partitions with gradually increasing distinctness and showed that the performance of different gene expression prediction methods can be better evaluated using this method.
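The contrast between random and clustering-based CV can be sketched on synthetic data (every value below is invented): when samples cluster by experimental condition, random folds let the model see every condition during training, while condition-wise folds do not, so random CV reports an over-optimistic error.

```python
import numpy as np

# Synthetic data: six "experimental conditions", each occupying its own
# region of feature space and carrying its own additive offset.
rng = np.random.default_rng(1)
n_cond, per = 6, 20
centers = rng.uniform(-5, 5, n_cond)
offsets = rng.normal(0, 2, n_cond)
x = np.concatenate([c + rng.normal(0, 0.2, per) for c in centers])
y = np.concatenate([x[i * per:(i + 1) * per] + offsets[i]
                    + rng.normal(0, 0.1, per) for i in range(n_cond)])
groups = np.repeat(np.arange(n_cond), per)

def nn_mse(train, test):
    # 1-nearest-neighbour prediction of y from x; mean squared error.
    preds = [y[train[np.argmin(np.abs(x[train] - x[t]))]] for t in test]
    return float(np.mean((np.asarray(preds) - y[test]) ** 2))

# Random 5-fold CV: test points almost always have same-condition neighbours
# in training, so the condition offsets are effectively memorised.
idx = rng.permutation(x.size)
random_cv = np.mean([nn_mse(np.setdiff1d(np.arange(x.size), f), f)
                     for f in np.array_split(idx, 5)])

# Leave-one-condition-out CV: the held-out condition is truly unseen.
grouped_cv = np.mean([nn_mse(np.flatnonzero(groups != g),
                             np.flatnonzero(groups == g))
                      for g in range(n_cond)])
```

The paper's CCV generalises this by clustering samples to define the folds rather than using known condition labels.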

  14. Validation of Organ Procurement and Transplant Network (OPTN)/United Network for Organ Sharing (UNOS) criteria for imaging diagnosis of hepatocellular carcinoma.

    PubMed

    Fowler, Kathryn J; Karimova, E Jane; Arauz, Anthony R; Saad, Nael E; Brunt, Elizabeth M; Chapman, William C; Heiken, Jay P

    2013-06-27

    Imaging diagnosis of hepatocellular carcinoma (HCC) provides an important pathway to transplant exception points and priority for cirrhotic patients. The purpose of this retrospective study was to evaluate the validity of the new Organ Procurement and Transplant Network (OPTN) classification system in patients undergoing transplantation for HCC. One hundred twenty-nine patients underwent transplantation for HCC from April 14, 2006 to April 18, 2011; a total of 263 lesions were reported as suspicious for HCC on pretransplantation magnetic resonance imaging. Magnetic resonance imaging examinations were reviewed independently by two experienced radiologists blinded to final pathology. The reviewers identified major imaging features, and an OPTN class was assigned to each lesion. Final proof of diagnosis was pathology on explant, or necrosis along with imaging findings of ablation after transarterial chemoembolization. Application of the OPTN imaging criteria in our population resulted in high specificity for the diagnosis of HCC. Sensitivity in the diagnosis of small lesions (≥1 and <2 cm) was low (range, 26%-34%). Use of the OPTN system would have resulted in different management in 17% of our population, who had received automatic exception points for HCC based on preoperative imaging but would not have met criteria under the new system. Eleven percent of the patients not meeting OPTN criteria were found to have T2-stage tumor burden on pathology. The OPTN imaging policy introduces a high level of specificity for HCC but may decrease sensitivity for small lesions. Management may be affected in a number of patients, potentially requiring longer surveillance periods or biopsy to confirm diagnosis.

  15. Multi-agent-based bio-network for systems biology: protein-protein interaction network as an example.

    PubMed

    Ren, Li-Hong; Ding, Yong-Sheng; Shen, Yi-Zhen; Zhang, Xiang-Feng

    2008-10-01

    Recently, a collective effort from multiple research areas has been made to understand biological systems at the system level. This research requires the ability to simulate particular biological systems such as cells, organs, organisms, and communities. In this paper, a novel bio-network simulation platform is proposed for systems biology studies by combining agent approaches. We consider a biological system as a set of active computational components interacting with each other and with an external environment. We then propose a bio-network platform for simulating the behaviors of biological systems, modelling them in terms of bio-entities and society-entities. As a demonstration, we discuss how a protein-protein interaction (PPI) network can be seen as a society of autonomous interactive components. From interactions among small PPI networks, a large PPI network can emerge that has a remarkable ability to accomplish a complex function or task. We also simulate the evolution of PPI networks by using the bio-operators of the bio-entities. Based on the proposed approach, various simulators with different functions can be embedded in the simulation platform, and further research can be carried out from design to development, including complexity validation of the biological system.

  16. Space evolution model and empirical analysis of an urban public transport network

    NASA Astrophysics Data System (ADS)

    Sui, Yi; Shao, Feng-jing; Sun, Ren-cheng; Li, Shu-jing

    2012-07-01

    This study explores the spatial evolution of an urban public transport network, using empirical evidence and a simulation model validated on those data. Public transport patterns primarily depend on the spatial distribution of traffic, the demands of passengers, and the expected utility of investors. Evolution is an iterative process of satisfying the needs of passengers and investors given a traffic spatial distribution. The temporal change of an urban public transport network is evaluated using both topological and spatial measures. The simulation model is validated using empirical data from nine big cities in China. Statistical analyses of topological and spatial attributes suggest that an evolved network whose traffic demands follow power-law values distributed in a pattern of concentric circles tallies well with these nine cities.

  17. Discovering disease-associated genes in weighted protein-protein interaction networks

    NASA Astrophysics Data System (ADS)

    Cui, Ying; Cai, Meng; Stanley, H. Eugene

    2018-04-01

    Although there have been many network-based attempts to discover disease-associated genes, most have not taken edge weight, which quantifies the relative strength of interactions, into consideration. We use connection weights in a protein-protein interaction (PPI) network to locate disease-related genes. We analyze the topological properties of both weighted and unweighted PPI networks and design an improved random forest classifier to distinguish disease genes from non-disease genes. A cross-validation test confirms that weighted networks are better able to discover disease-associated genes than unweighted networks, which indicates that including link weight in the analysis of network properties provides a better model of complex genotype-phenotype associations.
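A minimal sketch of why edge weight matters (toy graph with hypothetical confidence weights; the paper's random forest classifier is not reproduced): two proteins that are indistinguishable by unweighted degree separate cleanly once weighted degree (strength) is used as a feature.

```python
# Toy weighted PPI graph: node A has three high-confidence interactions,
# node E three low-confidence ones (weights are hypothetical).
edges = {
    ("A", "B"): 0.9, ("A", "C"): 0.8, ("A", "D"): 0.7,
    ("E", "F"): 0.1, ("E", "G"): 0.2, ("E", "H"): 0.1,
}

def degree(node):
    # Unweighted degree: number of incident edges.
    return sum(1 for e in edges if node in e)

def strength(node):
    # Weighted degree ("strength"): sum of incident edge weights.
    return sum(w for e, w in edges.items() if node in e)

# An unweighted topological feature cannot tell A and E apart; the
# weighted counterpart can.
same_degree = degree("A") == degree("E")
separable = strength("A") > strength("E")
```

In the paper's pipeline, features like strength (alongside other weighted topological properties) feed the classifier, which is what lets the weighted network outperform the unweighted one.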

  18. Integrating legacy medical data sensors in a wireless network infrastructure.

    PubMed

    Dembeyiotis, S; Konnis, G; Koutsouris, D

    2005-01-01

    In the process of developing a wireless networking solution to provide effective field-deployable communications and telemetry support for rescuers during major natural disasters, we are faced with the task of interfacing a multitude of medical and other legacy data-collection sensors to the network grid. In this paper, we detail a number of solutions, with particular attention given to the issue of data security. The chosen implementation allows for sensor control and management from remote network locations, while the sensors can wirelessly transmit their data to nearby network nodes securely, utilizing the latest commercially available cryptography solutions. Initial testing validates the design choices, while the network-enabled sensors are being integrated into the overall wireless network security framework.

  19. Exploration of the integration of care for persons with a traumatic brain injury using social network analysis methodology

    PubMed Central

    Lamontagne, Marie-Eve

    2013-01-01

    Introduction: Integration is a popular strategy to increase the quality of care within systems of care. However, there is no common language, approach or tool allowing for a valid description, comparison and evaluation of integrated care. Social network analysis could be a viable methodology to provide an objective picture of integrated networks. Goal of the article: To illustrate social network analysis use in the context of systems of care for traumatic brain injury. Method: We surveyed members of a network using a validated questionnaire to determine the links between them. We determined the density, centrality, multiplexity, and quality of the links reported. Results: The network was described as moderately dense (0.6), the most prevalent link was knowledge, and four organisation members of a consortium were central to the network. Social network analysis allowed us to create a graphic representation of the network. Conclusion: Social network analysis is a useful methodology to objectively characterise integrated networks. PMID:24250281

  20. Networking Technologies Enable Advances in Earth Science

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory; Freeman, Kenneth; Gilstrap, Raymond; Beck, Richard

    2004-01-01

    This paper describes an experiment to prototype a new way of conducting science by applying networking and distributed computing technologies to an Earth Science application. A combination of satellite, wireless, and terrestrial networking provided geologists at a remote field site with interactive access to supercomputer facilities at two NASA centers, thus enabling them to validate and calibrate remotely sensed geological data in near-real time. This represents a fundamental shift in the way that Earth scientists analyze remotely sensed data. In this paper we describe the experiment and the network infrastructure that enabled it, analyze the data flow during the experiment, and discuss the scientific impact of the results.

  1. NDSC Lidar Intercomparisons and Validation: OPAL and MLO3 Campaigns in 1995

    NASA Technical Reports Server (NTRS)

    McDermid, Stuart; McGee, Thomas J.; Stuart, Daan P. J.

    1996-01-01

    The Network for the Detection of Stratospheric Change (NDSC) has developed and adopted a Validation Policy in order to ensure that the results submitted and stored in its archives are of a known, high quality. As a part of this validation policy, blind instrument intercomparisons are considered an essential element in the certification of NDSC instruments and a specific format for these campaigns has been recommended by the NDSC-Steering Committee.

  2. Neural Network Prediction of New Aircraft Design Coefficients

    NASA Technical Reports Server (NTRS)

    Norgaard, Magnus; Jorgensen, Charles C.; Ross, James C.

    1997-01-01

    This paper discusses a neural network tool for more effective aircraft design evaluations during wind tunnel tests. Using a hybrid neural network optimization method, we have produced fast and reliable predictions of aerodynamic coefficients and found optimal flap settings and flap schedules. For validation, the tool was tested on a 55% scale model of the USAF/NASA Subsonic High Alpha Research Concept aircraft (SHARC). Four different networks were trained to predict the coefficients of lift, drag, and pitching moment, and the lift-to-drag ratio (C(sub L), C(sub D), C(sub M), and L/D) from angle of attack and flap settings. The latter network was then used to determine an overall optimal flap setting and to find optimal flap schedules.

  3. The QKD network: model and routing scheme

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Zhang, Hongqi; Su, Jinhai

    2017-11-01

    Quantum key distribution (QKD) technology can establish unconditionally secure keys between two communicating parties. Although this technology has some inherent constraints, such as distance and point-to-point mode limits, building a QKD network with multiple point-to-point QKD devices can overcome these constraints. Considering the development level of current technology, the trusted-relay QKD network is the first choice for building a practical QKD network. However, previous research did not address a routing method for the trusted-relay QKD network in detail. This paper focuses on the routing issues, builds a model of the trusted-relay QKD network to ease analysis and understanding of this network, and proposes a dynamic routing scheme for it. As in the design of a dynamic routing scheme for a classical network, the proposed scheme consists of three components: a Hello protocol that helps share network topology information, a routing algorithm that selects a set of suitable paths and establishes the routing table, and a link-state update mechanism that keeps the routing table up to date. Experiments and evaluation demonstrate the validity and effectiveness of the proposed routing scheme.
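The abstract does not spell out the routing algorithm itself; as a generic sketch of the path-selection step, assuming a hypothetical trusted-relay topology where each link's cost is the inverse of its available key rate (so routes with more spare key material are preferred), Dijkstra's algorithm applies directly:

```python
import heapq

# Hypothetical trusted-relay topology: key_rate gives the secret-key rate
# (e.g. kbit/s) available on each point-to-point QKD link. Edge cost is the
# inverse rate, so the cheapest path has the most spare key material.
key_rate = {
    ("A", "B"): 10, ("B", "C"): 8, ("A", "D"): 2, ("D", "C"): 9,
}
graph = {}
for (u, v), r in key_rate.items():
    graph.setdefault(u, []).append((v, 1 / r))
    graph.setdefault(v, []).append((u, 1 / r))

def route(src, dst):
    # Standard Dijkstra shortest path over the inverse-key-rate costs.
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

In a full scheme the Hello protocol would populate `graph` and the link-state updates would refresh `key_rate` as key material is consumed.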

  4. A Security Assessment Mechanism for Software-Defined Networking-Based Mobile Networks.

    PubMed

    Luo, Shibo; Dong, Mianxiong; Ota, Kaoru; Wu, Jun; Li, Jianhua

    2015-12-17

    Software-Defined Networking-based Mobile Networks (SDN-MNs) are considered the future of 5G mobile network architecture. With the evolving cyber-attack threat, security assessments need to be performed as part of network management. Due to the distinctive features of SDN-MNs, such as their dynamic nature and complexity, traditional network security assessment methodologies cannot be applied directly, and a novel security assessment methodology is needed. In this paper, an effective security assessment mechanism based on attack graphs and the Analytic Hierarchy Process (AHP) is proposed for SDN-MNs. First, this paper discusses the security assessment problem of SDN-MNs and proposes a methodology using attack graphs and AHP. Second, to address the diversity and complexity of SDN-MNs, a novel attack graph definition and attack graph generation algorithm are proposed. To quantify security levels, the Node Minimal Effort (NME) is defined to quantify attack cost, and system security levels are derived from the NME. Third, to calculate the NME of an attack graph in a way that takes the dynamic factors of SDN-MNs into consideration, we use AHP integrated with the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS). Finally, we offer a case study to validate the proposed methodology. The case study and evaluation show the advantages of the proposed security assessment mechanism.
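The AHP step can be sketched independently of the attack-graph machinery: criteria weights come from the principal eigenvector of a reciprocal pairwise-comparison matrix (the matrix below is hypothetical, not taken from the paper), approximated here by power iteration.

```python
import numpy as np

# Hypothetical reciprocal pairwise-comparison matrix over three criteria:
# A[i, j] states how much more important criterion i is than criterion j,
# and A[j, i] = 1 / A[i, j] as AHP requires.
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

# Power iteration toward the principal eigenvector, renormalised each step
# so the entries sum to 1; the result is the AHP priority (weight) vector.
w = np.ones(3)
for _ in range(100):
    w = A @ w
    w /= w.sum()
```

In the paper these weights feed TOPSIS to rank the dynamic factors that enter the NME calculation; that stage is not reproduced here.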

  5. A Security Assessment Mechanism for Software-Defined Networking-Based Mobile Networks

    PubMed Central

    Luo, Shibo; Dong, Mianxiong; Ota, Kaoru; Wu, Jun; Li, Jianhua

    2015-01-01

    Software-Defined Networking-based Mobile Networks (SDN-MNs) are considered the future of 5G mobile network architecture. With the evolving cyber-attack threat, security assessments need to be performed as part of network management. Due to the distinctive features of SDN-MNs, such as their dynamic nature and complexity, traditional network security assessment methodologies cannot be applied directly, and a novel security assessment methodology is needed. In this paper, an effective security assessment mechanism based on attack graphs and the Analytic Hierarchy Process (AHP) is proposed for SDN-MNs. First, this paper discusses the security assessment problem of SDN-MNs and proposes a methodology using attack graphs and AHP. Second, to address the diversity and complexity of SDN-MNs, a novel attack graph definition and attack graph generation algorithm are proposed. To quantify security levels, the Node Minimal Effort (NME) is defined to quantify attack cost, and system security levels are derived from the NME. Third, to calculate the NME of an attack graph in a way that takes the dynamic factors of SDN-MNs into consideration, we use AHP integrated with the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS). Finally, we offer a case study to validate the proposed methodology. The case study and evaluation show the advantages of the proposed security assessment mechanism. PMID:26694409

  6. A last updating evolution model for online social networks

    NASA Astrophysics Data System (ADS)

    Bu, Zhan; Xia, Zhengyou; Wang, Jiandong; Zhang, Chengcui

    2013-05-01

    As information technology has advanced, people are turning to electronic media more frequently for communication, and social relationships are increasingly found on online channels. However, there is very limited knowledge about the actual evolution of online social networks. In this paper, we propose and study a novel evolution network model built on the new concept of “last updating time”, which exists in many real-life online social networks. The last-updating evolution network model can maintain the robustness of scale-free networks and can improve network resilience against intentional attacks. Moreover, we found that it exhibits the “small-world effect”, an inherent property of most social networks. Simulation experiments based on this model show that its results are consistent with real-life data, which indicates that our model is valid.

  7. Fire detection from hyperspectral data using neural network approach

    NASA Astrophysics Data System (ADS)

    Piscini, Alessandro; Amici, Stefania

    2015-10-01

    This study describes an application of artificial neural networks to the recognition of flaming areas using hyperspectral remotely sensed data. Satellite remote sensing is considered an effective and safe way to monitor active fires for safeguarding the environment and people. Neural networks are an effective and consolidated technique for the classification of satellite images; moreover, once well trained, they prove to be very fast in the application stage, enabling a rapid response. At flaming temperatures, thanks to its low excitation energy (about 4.34 eV), potassium (K) ionizes with a unique doublet emission feature. This emission feature can be detected remotely, providing a detection map of active fire that in principle allows flaming areas of vegetation to be separated from smouldering areas even in the presence of smoke. For this study a normalised Advanced K Band Difference (AKBD) was applied to an airborne hyperspectral sensor covering a range of 400-970 nm with a resolution of 2.9 nm. A back-propagation neural network was used for the recognition of active fires in the hyperspectral image. The network was trained using all channels of the sensor as inputs and the corresponding AKBD indexes as target output. In order to evaluate its generalization capabilities, the neural network was validated on two independent data sets of hyperspectral images not used during the training phase. The validation results for the independent data sets had an overall accuracy of around 100% for both images and few commission errors (0.1%), demonstrating the feasibility of estimating the presence of active fires using a neural network approach. Although the validation of the neural network classifier had few commission errors, the producer accuracies were lower due to the presence of omission errors. Image analysis revealed that those false negatives lie in the "smoky" portions of fire fronts, due to the low intensity of the signal. The proposed method can be considered
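The abstract does not give the AKBD formula, so the following is only a generic normalised band-difference sketch with hypothetical band values: the K-emission band brightens over flaming pixels, so a normalised difference against a nearby reference band flags them.

```python
import numpy as np

# Hypothetical per-pixel radiances for a band centred on the K doublet and
# a nearby reference band; only the third pixel is flaming. (The actual
# AKBD definition is not stated in the abstract; this is a generic
# normalised band difference.)
k_band = np.array([0.30, 0.31, 0.80])
ref_band = np.array([0.30, 0.30, 0.32])

akbd = (k_band - ref_band) / (k_band + ref_band)  # normalised difference
flaming = akbd > 0.2                              # illustrative threshold
```

In the study, an index of this kind supplies the target output for training the network, whose inputs are the full set of sensor channels.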

  8. Mashup Model and Verification Using Mashup Processing Network

    NASA Astrophysics Data System (ADS)

    Zahoor, Ehtesham; Perrin, Olivier; Godart, Claude

    Mashups are lightweight Web applications that aggregate data from different Web services, built through ad hoc composition and not concerned with long-term stability and robustness. In this paper we present a pattern-based approach, called Mashup Processing Network (MPN). The idea is based on Event Processing Networks and is intended to facilitate the creation, modeling, and verification of mashups. MPN provides a view of how different actors interact in mashup development, namely the producer, the consumer, the mashup processing agent, and the communication channels. It also supports modeling transformations and validations of data, and offers validation of both functional and non-functional requirements, such as reliable messaging and security, which are key issues in the enterprise context. We have enriched the model with a set of processing operations, categorized into data composition, transformation, and validation. These processing operations can be seen as a set of patterns that facilitate the mashup development process. MPN also paves the way toward a Mashup Oriented Architecture, in which mashups along with services are used as building blocks for application development.

  9. A new method for constructing networks from binary data

    NASA Astrophysics Data System (ADS)

    van Borkulo, Claudia D.; Borsboom, Denny; Epskamp, Sacha; Blanken, Tessa F.; Boschloo, Lynn; Schoevers, Robert A.; Waldorp, Lourens J.

    2014-08-01

    Network analysis is entering fields where network structures are unknown, such as psychology and the educational sciences. A crucial step in the application of network models lies in the assessment of network structure. Current methods either have serious drawbacks or are only suitable for Gaussian data. In the present paper, we present a method for assessing network structures from binary data. Although models for binary data are infamous for their computational intractability, we present a computationally efficient model for estimating network structures. The approach, which is based on Ising models as used in physics, combines logistic regression with model selection based on a Goodness-of-Fit measure to identify relevant relationships between variables that define connections in a network. A validation study shows that this method succeeds in revealing the most relevant features of a network for realistic sample sizes. We apply our proposed method to estimate the network of depression and anxiety symptoms from symptom scores of 1108 subjects. Possible extensions of the model are discussed.
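
    The node-wise regression idea behind this kind of Ising-model estimation can be sketched as follows: regress each binary variable on all the others and read edge weights off the coefficients. This minimal illustration uses plain (unpenalized) logistic regression fit by gradient ascent on synthetic data, not the paper's penalized procedure with goodness-of-fit model selection; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary data: variables 0 and 1 are coupled, variable 2 is independent.
n = 2000
x0 = rng.integers(0, 2, n)
x1 = np.where(rng.random(n) < 0.8, x0, 1 - x0)   # x1 mostly copies x0
x2 = rng.integers(0, 2, n)
X = np.column_stack([x0, x1, x2]).astype(float)

def node_logistic(X, j, lr=0.1, steps=2000):
    """Logistic regression of variable j on the remaining variables,
    fit by gradient ascent on the mean log-likelihood."""
    y = X[:, j]
    Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])  # intercept
    w = np.zeros(Z.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Z @ w))
        w += lr * Z.T @ (y - p) / len(y)
    return w[1:]   # drop intercept; these are candidate edge weights

w0 = node_logistic(X, 0)   # coefficients for [x1, x2] predicting x0
```

    In a full estimator the coefficients from each node's regression are penalized, selected, and symmetrized (e.g. by an AND-rule) to define the network's edges; here the coupled pair simply receives a much larger weight than the independent one.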

  10. Functional network inference of the suprachiasmatic nucleus

    PubMed Central

    Abel, John H.; Meeker, Kirsten; Granados-Fuentes, Daniel; St. John, Peter C.; Wang, Thomas J.; Bales, Benjamin B.; Doyle, Francis J.; Herzog, Erik D.; Petzold, Linda R.

    2016-01-01

    In the mammalian suprachiasmatic nucleus (SCN), noisy cellular oscillators communicate within a neuronal network to generate precise system-wide circadian rhythms. Although the intracellular genetic oscillator and intercellular biochemical coupling mechanisms have been examined previously, the network topology driving synchronization of the SCN has not been elucidated. This network has been particularly challenging to probe, due to its oscillatory components and slow coupling timescale. In this work, we investigated the SCN network at a single-cell resolution through a chemically induced desynchronization. We then inferred functional connections in the SCN by applying the maximal information coefficient statistic to bioluminescence reporter data from individual neurons while they resynchronized their circadian cycling. Our results demonstrate that the functional network of circadian cells associated with resynchronization has small-world characteristics, with a node degree distribution that is exponential. We show that hubs of this small-world network are preferentially located in the central SCN, with sparsely connected shells surrounding these cores. Finally, we used two computational models of circadian neurons to validate our predictions of network structure. PMID:27044085
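
    The inference step described above — scoring pairwise association between neurons' reporter time series and keeping strong pairs as functional edges — can be sketched with a histogram-based mutual information standing in for the maximal information coefficient (which requires a dedicated library). The synthetic "cells" and the binning choice below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_info(x, y, bins=8):
    """Histogram-based mutual information (in nats) between two series.
    A simple stand-in for the maximal information coefficient statistic."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Three synthetic "cells": 0 and 1 share a common oscillation, 2 is pure noise.
t = np.linspace(0, 10 * np.pi, 500)
c0 = np.sin(t) + 0.2 * rng.standard_normal(t.size)
c1 = np.sin(t + 0.3) + 0.2 * rng.standard_normal(t.size)
c2 = rng.standard_normal(t.size)

mi_coupled = mutual_info(c0, c1)   # high: shared circadian-like oscillation
mi_indep = mutual_info(c0, c2)     # low: no functional connection
```

    Thresholding such pairwise scores yields an adjacency matrix whose degree distribution and hub locations can then be analyzed as in the study.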

  11. Functional network inference of the suprachiasmatic nucleus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abel, John H.; Meeker, Kirsten; Granados-Fuentes, Daniel

    2016-04-04

    In the mammalian suprachiasmatic nucleus (SCN), noisy cellular oscillators communicate within a neuronal network to generate precise system-wide circadian rhythms. Although the intracellular genetic oscillator and intercellular biochemical coupling mechanisms have been examined previously, the network topology driving synchronization of the SCN has not been elucidated. This network has been particularly challenging to probe, due to its oscillatory components and slow coupling timescale. In this work, we investigated the SCN network at a single-cell resolution through a chemically induced desynchronization. We then inferred functional connections in the SCN by applying the maximal information coefficient statistic to bioluminescence reporter data from individual neurons while they resynchronized their circadian cycling. Our results demonstrate that the functional network of circadian cells associated with resynchronization has small-world characteristics, with a node degree distribution that is exponential. We show that hubs of this small-world network are preferentially located in the central SCN, with sparsely connected shells surrounding these cores. Finally, we used two computational models of circadian neurons to validate our predictions of network structure.

  12. Fly's Eye GLM Simulator Preliminary Validation Analysis

    NASA Astrophysics Data System (ADS)

    Quick, M. G.; Christian, H. J., Jr.; Blakeslee, R. J.; Stewart, M. F.; Corredor, D.; Podgorny, S.

    2017-12-01

    As part of the validation effort for the Geostationary Lightning Mapper (GLM), an airborne radiometer array has been fabricated to observe lightning optical emission through the cloud top. The Fly's Eye GLM Simulator (FEGS) is a multi-spectral, photo-electric radiometer array with a nominal spatial resolution of 2 x 2 km and a spatial footprint of 10 x 10 km at cloud top. A main 25-pixel array observes the 777.4 nm oxygen emission triplet using an optical passband filter with a 10 nm FWHM, a sampling rate of 100 kHz, and 16-bit resolution. From March to May of 2017, FEGS was flown on the NASA ER-2 high-altitude aircraft during the GOES-R Validation Flight Campaign. Optical signatures of lightning were observed during a variety of thunderstorm scenarios while coincident measurements were obtained by GLM and ground-based antenna networks. This presentation will describe the preliminary analysis of the FEGS dataset in the context of GLM validation.

  13. Network-based stochastic competitive learning approach to disambiguation in collaborative networks.

    PubMed

    Christiano Silva, Thiago; Raphael Amancio, Diego

    2013-03-01

    Many patterns have been uncovered in complex systems through the application of concepts and methodologies of complex networks. Unfortunately, the validity and accuracy of the unveiled patterns depend strongly on the amount of unavoidable noise pervading the data, such as the presence of homonymous individuals in social networks. In the current paper, we investigate the problem of name disambiguation in collaborative networks, a task that plays a fundamental role in a myriad of scientific contexts. In particular, we use an unsupervised technique that relies on a particle competition mechanism in a networked environment to detect the clusters. It has been shown that, in this kind of environment, the learning process can be improved because the network representation of data can capture topological features of the input data set. Specifically, in the proposed disambiguation model, a set of particles is randomly spawned into the nodes constituting the network. As time progresses, the particles employ a movement strategy composed of a probabilistic convex mixture of random and preferential walking policies. In the former, the walking rule depends exclusively on the topology of the network and is responsible for the exploratory behavior of the particles. In the latter, the walking rule depends both on the topology and on the domination levels that the particles impose on the neighboring nodes. This type of behavior compels the particles to perform a defensive strategy, because it forces them to revisit nodes they already dominate rather than explore rival territories. Computer simulations conducted on networks extracted from the arXiv repository of preprint papers and from other databases reveal the effectiveness of the model, which turned out to be more accurate than traditional clustering methods.
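
    The movement rule described above — a probabilistic convex mixture of a random walk (pure topology) and a preferential walk (weighted by a particle's domination levels) — can be sketched as below. The mixing parameter, the graph, and the domination values are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(2)

def next_node(adj, current, domination, lam=0.6):
    """Pick the next node as a convex mixture of random and preferential walks.

    adj: symmetric 0/1 adjacency matrix; domination: this particle's
    domination level on each node; lam: weight of the preferential term.
    """
    neighbors = np.flatnonzero(adj[current])
    p_rand = np.ones(neighbors.size) / neighbors.size      # topology only
    p_pref = domination[neighbors] / domination[neighbors].sum()
    p = (1 - lam) * p_rand + lam * p_pref                  # convex mixture
    return rng.choice(neighbors, p=p)

# Tiny triangle graph; the particle strongly dominates node 2.
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
domination = np.array([0.1, 0.1, 0.9])
visits = np.bincount([next_node(adj, 0, domination) for _ in range(2000)],
                     minlength=3)
```

    The preferential term makes the particle revisit its own territory (node 2) far more often than the contested node 1, which is the defensive behavior the abstract describes.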

  14. Network-based stochastic competitive learning approach to disambiguation in collaborative networks

    NASA Astrophysics Data System (ADS)

    Christiano Silva, Thiago; Raphael Amancio, Diego

    2013-03-01

    Many patterns have been uncovered in complex systems through the application of concepts and methodologies of complex networks. Unfortunately, the validity and accuracy of the unveiled patterns depend strongly on the amount of unavoidable noise pervading the data, such as the presence of homonymous individuals in social networks. In the current paper, we investigate the problem of name disambiguation in collaborative networks, a task that plays a fundamental role in a myriad of scientific contexts. In particular, we use an unsupervised technique that relies on a particle competition mechanism in a networked environment to detect the clusters. It has been shown that, in this kind of environment, the learning process can be improved because the network representation of data can capture topological features of the input data set. Specifically, in the proposed disambiguation model, a set of particles is randomly spawned into the nodes constituting the network. As time progresses, the particles employ a movement strategy composed of a probabilistic convex mixture of random and preferential walking policies. In the former, the walking rule depends exclusively on the topology of the network and is responsible for the exploratory behavior of the particles. In the latter, the walking rule depends both on the topology and on the domination levels that the particles impose on the neighboring nodes. This type of behavior compels the particles to perform a defensive strategy, because it forces them to revisit nodes they already dominate rather than explore rival territories. Computer simulations conducted on networks extracted from the arXiv repository of preprint papers and from other databases reveal the effectiveness of the model, which turned out to be more accurate than traditional clustering methods.

  15. Remote sensing of an agricultural soil moisture network in Walnut Creek, Iowa

    USDA-ARS?s Scientific Manuscript database

    The calibration and validation of soil moisture remote sensing products is complicated by the logistics of installing a soil moisture network for a long term period in an active landscape. Usually soil moisture sensors are added to existing precipitation networks which have as a singular requiremen...

  16. EOS Terra Validation Program

    NASA Technical Reports Server (NTRS)

    Starr, David

    2000-01-01

    The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Clouds and the Earth's Radiant Energy System (CERES), Multi-Angle Imaging Spectroradiometer (MISR), Moderate Resolution Imaging Spectroradiometer (MODIS), and Measurements of Pollution in the Troposphere (MOPITT). In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud, and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include instrument calibration validation (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high-quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for field experiment and underflight activities, including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low-altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation.
The intensive validation activities planned for the first year of the Terra

  17. Measuring Networking as an Outcome Variable in Undergraduate Research Experiences.

    PubMed

    Hanauer, David I; Hatfull, Graham

    2015-01-01

    The aim of this paper is to propose, present, and validate a simple survey instrument to measure student conversational networking. The tool consists of five items that cover personal and professional social networks, and its basic principle is the self-reporting of degrees of conversation, with a range of specific discussion partners. The networking instrument was validated in three studies. The basic psychometric characteristics of the scales were established by conducting a factor analysis and evaluating internal consistency using Cronbach's alpha. The second study used a known-groups comparison and involved comparing outcomes for networking scales between two different undergraduate laboratory courses (one involving a specific effort to enhance networking). The final study looked at potential relationships between specific networking items and the established psychosocial variable of project ownership through a series of binary logistic regressions. Overall, the data from the three studies indicate that the networking scales have high internal consistency (α = 0.88), consist of a unitary dimension, can significantly differentiate between research experiences with low and high networking designs, and are related to project ownership scales. The ramifications of the networking instrument for student retention, the enhancement of public scientific literacy, and the differentiation of laboratory courses are discussed. © 2015 D. I. Hanauer and G. Hatfull. CBE—Life Sciences Education © 2015 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
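
    The internal-consistency check reported above (Cronbach's alpha) can be computed directly from an items matrix: alpha = k/(k-1) * (1 - sum of item variances / variance of total score). The synthetic five-item responses below are illustrative, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a k-item scale.

    items: (n_respondents, k_items) array of item scores.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total score
    return k / (k - 1) * (1 - item_var / total_var)

# Synthetic 5-item scale tapping a single latent trait (high consistency).
rng = np.random.default_rng(3)
trait = rng.normal(size=(200, 1))
responses = trait + 0.5 * rng.normal(size=(200, 5))
alpha = cronbach_alpha(responses)   # close to the alpha = 0.88 regime reported
```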

  18. A neural network application to classification of health status of HIV/AIDS patients.

    PubMed

    Kwak, N K; Lee, C

    1997-04-01

    This paper presents an application of neural networks to classify and to predict the health status of HIV/AIDS patients. A neural network model in classifying both the well and not-well health status of HIV/AIDS patients is developed and evaluated in terms of validity and reliability of the test. Several different neural network topologies are applied to AIDS Cost and Utilization Survey (ACSUS) datasets in order to demonstrate the neural network's capability.

  19. The "Majority Illusion" in Social Networks

    PubMed Central

    Lerman, Kristina; Yan, Xiaoran; Wu, Xin-Zeng

    2016-01-01

    Individual’s decisions, from what product to buy to whether to engage in risky behavior, often depend on the choices, behaviors, or states of other people. People, however, rarely have global knowledge of the states of others, but must estimate them from the local observations of their social contacts. Network structure can significantly distort individual’s local observations. Under some conditions, a state that is globally rare in a network may be dramatically over-represented in the local neighborhoods of many individuals. This effect, which we call the “majority illusion,” leads individuals to systematically overestimate the prevalence of that state, which may accelerate the spread of social contagions. We develop a statistical model that quantifies this effect and validate it with measurements in synthetic and real-world networks. We show that the illusion is exacerbated in networks with a heterogeneous degree distribution and disassortative structure. PMID:26886112
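
    The effect can be demonstrated on the simplest heterogeneous-degree network, a star: the hub holds a globally rare state, yet every leaf sees that state in all of its neighbors. The measurement function below is an illustrative sketch, not the paper's statistical model.

```python
import numpy as np

def fraction_seeing_majority(adj, active):
    """Fraction of nodes for which more than half of their neighbors
    are in the 'active' state."""
    n = adj.shape[0]
    count = 0
    for i in range(n):
        nbrs = np.flatnonzero(adj[i])
        if nbrs.size and active[nbrs].mean() > 0.5:
            count += 1
    return count / n

n = 11
adj = np.zeros((n, n), dtype=int)
adj[0, 1:] = adj[1:, 0] = 1           # star graph: node 0 is the hub
active = np.zeros(n, dtype=bool)
active[0] = True                       # globally rare state: 1 node of 11
frac = fraction_seeing_majority(adj, active)   # 10 of 11 nodes see a "majority"
```

    Here under 10% of nodes are active, yet over 90% of nodes observe an active majority among their contacts — the majority illusion in its most extreme form.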

  20. Validation of WBMOD in the Southeast Asian region

    NASA Astrophysics Data System (ADS)

    Cervera, M. A.; Thomas, R. M.; Groves, K. M.; Ramli, A. G.; Effendy

    2001-01-01

    The scintillation modeling code WBMOD, developed at North West Research, provides a global description of scintillation occurrence. However, the model has had limited calibration globally, so its performance in localized regions such as Australia-Southeast Asia needs to be evaluated. The Defence Science and Technology Organisation, Australia, in conjunction with the Indonesian National Institute of Aeronautics and Space (LAPAN), the Defence Science and Technology Centre, Malaysia, the Air Force Research Laboratory, United States, and IPS Radio and Space Services of Australia, has commissioned a network of GPS receivers to measure scintillation from sites in the region. One of the objectives of this deployment is to carry out a validation of WBMOD in the region. This paper describes the network of GPS receivers used to record the scintillation data. The details of the procedure used to validate WBMOD are given, and results of the validation are presented for data collected during 1998 and 1999 from two sites, one situated in the southern anomaly region and the other near the geomagnetic equator. We found good overall agreement between WBMOD and the observations for low sunspot numbers at both sites, although some differences were noted, the major one being that the scintillation activity predicted by WBMOD tended to cut off too early in the night. At higher levels of sunspot activity, while WBMOD agreed with the observations in the southern anomaly region, we found that it significantly underestimated the level of scintillation activity at the geomagnetic equator.

  1. Understanding and predicting binding between human leukocyte antigens (HLAs) and peptides by network analysis.

    PubMed

    Luo, Heng; Ye, Hao; Ng, Hui; Shi, Leming; Tong, Weida; Mattes, William; Mendrick, Donna; Hong, Huixiao

    2015-01-01

    As the major histocompatibility complex (MHC), human leukocyte antigens (HLAs) are among the most polymorphic genes in humans. Patients carrying certain HLA alleles may develop adverse drug reactions (ADRs) after taking specific drugs. Peptides play an important role in HLA-related ADRs as they are the necessary co-binders of HLAs with drugs. Many experimental data have been generated for understanding HLA-peptide binding. However, efficiently utilizing the data for understanding and accurately predicting HLA-peptide binding is challenging. Therefore, we developed a network-analysis-based method to understand and predict HLA-peptide binding. Qualitative Class I HLA-peptide binding data were harvested and prepared from four major databases. An HLA-peptide binding network was constructed from this dataset and modules were identified by the fast greedy modularity optimization algorithm. To examine the significance of signals in the yielded models, the modularity was compared with the modularity values generated from 1,000 random networks. The peptides and HLAs in the modules were characterized by similarity analysis. The neighbor-edges based and unbiased leverage algorithm (Nebula) was developed for predicting HLA-peptide binding. Leave-one-out (LOO) validations and two-fold cross-validations were conducted to evaluate the performance of Nebula using the constructed HLA-peptide binding network. Nine modules were identified from analyzing the HLA-peptide binding network, with a modularity higher than that of any of the random networks. Peptide length and functional side chains of amino acids at certain positions of the peptides differed among the modules. HLA sequences were module dependent to some extent. Nebula achieved an overall prediction accuracy of 0.816 in the LOO validations and an average accuracy of 0.795 in the two-fold cross-validations, and outperformed the method reported in the literature. Network analysis is a useful approach for analyzing large and

  2. Understanding and predicting binding between human leukocyte antigens (HLAs) and peptides by network analysis

    PubMed Central

    2015-01-01

    Background As the major histocompatibility complex (MHC), human leukocyte antigens (HLAs) are among the most polymorphic genes in humans. Patients carrying certain HLA alleles may develop adverse drug reactions (ADRs) after taking specific drugs. Peptides play an important role in HLA-related ADRs as they are the necessary co-binders of HLAs with drugs. Many experimental data have been generated for understanding HLA-peptide binding. However, efficiently utilizing the data for understanding and accurately predicting HLA-peptide binding is challenging. Therefore, we developed a network-analysis-based method to understand and predict HLA-peptide binding. Methods Qualitative Class I HLA-peptide binding data were harvested and prepared from four major databases. An HLA-peptide binding network was constructed from this dataset and modules were identified by the fast greedy modularity optimization algorithm. To examine the significance of signals in the yielded models, the modularity was compared with the modularity values generated from 1,000 random networks. The peptides and HLAs in the modules were characterized by similarity analysis. The neighbor-edges based and unbiased leverage algorithm (Nebula) was developed for predicting HLA-peptide binding. Leave-one-out (LOO) validations and two-fold cross-validations were conducted to evaluate the performance of Nebula using the constructed HLA-peptide binding network. Results Nine modules were identified from analyzing the HLA-peptide binding network, with a modularity higher than that of any of the random networks. Peptide length and functional side chains of amino acids at certain positions of the peptides differed among the modules. HLA sequences were module dependent to some extent. Nebula achieved an overall prediction accuracy of 0.816 in the LOO validations and an average accuracy of 0.795 in the two-fold cross-validations, and outperformed the method reported in the literature. Conclusions Network analysis is a

  3. Validation of Single-Item Screening Measures for Provider Burnout in a Rural Health Care Network.

    PubMed

    Waddimba, Anthony C; Scribani, Melissa; Nieves, Melinda A; Krupa, Nicole; May, John J; Jenkins, Paul

    2016-06-01

    We validated three single-item measures for emotional exhaustion (EE) and depersonalization (DP) among rural physician/nonphysician practitioners. We linked cross-sectional survey data (on provider demographics, satisfaction, resilience, and burnout) with administrative information from an integrated health care network (1 academic medical center, 6 community hospitals, 31 clinics, and 19 school-based health centers) in an eight-county underserved area of upstate New York. In total, 308 physicians and advanced-practice clinicians completed a self-administered, multi-instrument questionnaire (65.1% response rate). Significant proportions of respondents reported high EE (36.1%) and DP (9.9%). In multivariable linear mixed models, scores on EE/DP subscales of the Maslach Burnout Inventory were regressed on each single-item measure. The Physician Work-Life Study's single-item measure (classifying 32.8% of respondents as burning out/completely burned out) was correlated with EE and DP (Spearman's ρ = .72 and .41, p < .0001; Kruskal-Wallis χ(2) = 149.9 and 56.5, p < .0001, respectively). In multivariable models, it predicted high EE (but neither low EE nor low/high DP). EE/DP single items were correlated with parent subscales (Spearman's ρ = .89 and .81, p < .0001; Kruskal-Wallis χ(2) = 230.98 and 197.84, p < .0001, respectively). In multivariable models, the EE item predicted high/low EE, whereas the DP item predicted only low DP. Therefore, the three single-item measures tested varied in effectiveness as screeners for EE/DP dimensions of burnout. © The Author(s) 2015.

  4. Flory-Stockmayer analysis on reprocessable polymer networks

    NASA Astrophysics Data System (ADS)

    Li, Lingqiao; Chen, Xi; Jin, Kailong; Torkelson, John

    Reprocessable polymer networks can undergo structural rearrangement through dynamic chemistries under the proper conditions, making them a promising candidate for recyclable crosslinked materials, e.g. tires. Research in this field has focused on various chemistries; however, an essential physical theory explaining the relationship between the abundance of dynamic linkages and reprocessability has been lacking. Based on the classical Flory-Stockmayer analysis of network gelation, we developed a similar analysis of reprocessable polymer networks to quantitatively predict the critical condition for reprocessability. Our theory indicates that not all bonds need to be dynamic to make the resulting network reprocessable: as long as there is no percolated permanent network in the system, the material can fully rearrange. To experimentally validate our theory, we used a thiol-epoxy network model system with various dynamic-linkage compositions. The stress relaxation behavior of the resulting materials supports our theoretical prediction: only 50% of the linkages between crosslinks need to be dynamic for a tri-arm network to be reprocessable. This analysis therefore provides the first fundamental theoretical platform for designing and evaluating reprocessable polymer networks. We thank the McCormick Research Catalyst Award Fund and an ISEN cluster fellowship (L. L.) for funding support.
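
    The quantitative criterion is compact enough to state directly. In the classical Flory-Stockmayer picture, a network of f-functional junctions gels (percolates) when the fraction of reacted, here permanent, linkages exceeds p_c = 1/(f - 1); reading the paper's result through this lens is a sketch on my part, but for a tri-arm network (f = 3) it reproduces the 50% figure quoted above.

```python
def critical_permanent_fraction(f):
    """Flory-Stockmayer percolation threshold for the fraction of permanent
    linkages in a network of f-functional junctions: p_c = 1/(f - 1).
    Below p_c no permanent network percolates, so the material can rearrange."""
    return 1.0 / (f - 1)

p_c = critical_permanent_fraction(3)       # tri-arm network: p_c = 0.5
min_dynamic_fraction = 1.0 - p_c           # at least 50% of linkages dynamic
```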

  5. Network effects on scientific collaborations.

    PubMed

    Uddin, Shahadat; Hossain, Liaquat; Rasmussen, Kim

    2013-01-01

    The analysis of co-authorship networks aims at exploring the impact of network structure on the outcome of scientific collaborations and research publications. However, little is known about which network properties are associated with authors who have an increased number of joint publications and are highly cited. Measures of social network analysis, for example network centrality and tie strength, have been utilized extensively in the current co-authorship literature to explore different behavioural patterns of co-authorship networks. Using three SNA measures (i.e., degree centrality, closeness centrality and betweenness centrality), we explore scientific collaboration networks to understand factors influencing performance (i.e., citation count) and formation (tie strength between authors) of such networks. A citation count is the number of times an article is cited by other articles. We use a co-authorship dataset of the research field of 'steel structure' for the years 2005 to 2009. To measure the strength of scientific collaboration between two authors, we consider the number of articles co-authored by them. In this study, we examine how the citation count of a scientific publication is influenced by different centrality measures of its co-author(s) in a co-authorship network. We further analyze the impact of the network positions of authors on the strength of their scientific collaborations. We use both correlation and regression methods for data analysis leading to statistical validation. We identify that the citation count of a research article is positively correlated with the degree centrality and betweenness centrality values of its co-author(s). Also, we reveal that the degree centrality and betweenness centrality values of authors in a co-authorship network are positively correlated with the strength of their scientific collaborations.
Authors' network positions in co-authorship networks influence the performance (i.e., citation count) and formation (i.e., tie strength

  6. Network Effects on Scientific Collaborations

    PubMed Central

    Uddin, Shahadat; Hossain, Liaquat; Rasmussen, Kim

    2013-01-01

    Background The analysis of co-authorship networks aims at exploring the impact of network structure on the outcome of scientific collaborations and research publications. However, little is known about which network properties are associated with authors who have an increased number of joint publications and are highly cited. Methodology/Principal Findings Measures of social network analysis, for example network centrality and tie strength, have been utilized extensively in the current co-authorship literature to explore different behavioural patterns of co-authorship networks. Using three SNA measures (i.e., degree centrality, closeness centrality and betweenness centrality), we explore scientific collaboration networks to understand factors influencing performance (i.e., citation count) and formation (tie strength between authors) of such networks. A citation count is the number of times an article is cited by other articles. We use a co-authorship dataset of the research field of ‘steel structure’ for the years 2005 to 2009. To measure the strength of scientific collaboration between two authors, we consider the number of articles co-authored by them. In this study, we examine how the citation count of a scientific publication is influenced by different centrality measures of its co-author(s) in a co-authorship network. We further analyze the impact of the network positions of authors on the strength of their scientific collaborations. We use both correlation and regression methods for data analysis leading to statistical validation. We identify that the citation count of a research article is positively correlated with the degree centrality and betweenness centrality values of its co-author(s). Also, we reveal that the degree centrality and betweenness centrality values of authors in a co-authorship network are positively correlated with the strength of their scientific collaborations. Conclusions/Significance Authors’ network positions in co-authorship networks influence

  7. Evaluation of Treatment- and Disease-Related Symptoms in Advanced Head and Neck Cancer: Validation of the National Comprehensive Cancer Network-Functional Assessment of Cancer Therapy-Head and Neck Cancer Symptom Index-22 (NFHNSI-22)

    PubMed Central

    Pearman, Timothy P.; Beaumont, Jennifer L.; Paul, Diane; Abernethy, Amy P.; Jacobsen, Paul B.; Syrjala, Karen L.; Von Roenn, Jamie; Cella, David

    2018-01-01

    Context The Functional Assessment of Cancer Therapy-Head and Neck is a well-validated assessment of quality of life used with patients diagnosed with head and neck cancers (HCNs). The present study is an attempt to evaluate and modify this instrument as necessary in light of the recent regulatory guidelines from the Food and Drug Administration on the use of patient-reported outcomes in clinical trials. Objectives Overall, the goal was to identify patients’ highest priority cancer symptoms, compare these symptoms with those suggested by oncology experts, and construct a brief symptom index to assess these symptoms and categorize them as treatment-related, disease-related, or related to general function and well-being. Methods Patients (N = 49) with advanced (Stages III and IV) HCNs were recruited from participating National Comprehensive Cancer Network institutions and community cancer support organizations in the Chicago area. Patients completed open-ended interviews and symptom checklists. Participating oncology physician experts also rated symptoms. Content validity was obtained by evaluating results alongside items in the Functional Assessment of Chronic Illness Therapy system. Eleven oncologists categorized symptoms in terms of importance and also whether the symptoms were primarily related to disease, treatment, or functional well-being. Results HCN-related symptoms endorsed as high priority by both patients and oncology experts were selected for the new National Comprehensive Cancer Network-Functional Assessment of Cancer Therapy-Head and Neck Cancer Symptom Index-22. The final version includes 22 items, which are broken down into disease-related symptoms, treatment side effects, or general function and well-being. The new scale has acceptable internal consistency (Cronbach’s coefficient alpha = 0.86), content validity for use in chemotherapy trials of patients with advanced disease, and concurrent validity as demonstrated by moderate

  8. Soft matter: rubber and networks

    NASA Astrophysics Data System (ADS)

    McKenna, Gregory B.

    2018-06-01

    Rubber networks are important and form the basis for materials with properties ranging from rubber tires to super absorbents and contact lenses. The development of the entropy ideas of rubber deformation thermodynamics provides a powerful framework from which to understand and to use these materials. In addition, swelling of the rubber in the presence of small molecule liquids or solvents leads to materials that are very soft and ‘gel’ like in nature. The review covers the thermodynamics of polymer networks and gels from the perspective of the thermodynamics and mechanics of the strain energy density function. Important relationships are presented and experimental results show that the continuum ideas contained in the phenomenological thermodynamics are valid, but that the molecular bases for some of them remain to be fully elucidated. This is particularly so in the case of the entropic gels or swollen networks. The review is concluded with some perspectives on other networks, ranging from entropic polymer networks such as thermoplastic elastomers to physical gels in which cross-link points are formed by glassy or crystalline domains. A discussion is provided for other physical gels in which the network forms a spinodal-like decomposition, both in thermoplastic polymers that form a glassy network upon phase separation and for colloidal gels that seem to have a similar behavior.
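    The strain energy density framework discussed above can be made concrete with its simplest entropic form, the neo-Hookean model. The sketch below is illustrative only (the review covers far more general functions), and the modulus value is an assumed typical order of magnitude, not a figure from the text:

```python
def neo_hookean_uniaxial_stress(stretch, mu):
    """Cauchy stress for incompressible uniaxial extension under the entropic
    neo-Hookean strain energy density W = (mu/2) * (I1 - 3), which gives the
    classic stress-stretch law sigma = mu * (stretch**2 - 1/stretch)."""
    return mu * (stretch ** 2 - 1.0 / stretch)

# mu ~ 0.5 MPa is a typical order of magnitude for a soft rubber (assumed value).
sigma = neo_hookean_uniaxial_stress(2.0, mu=0.5e6)  # Pa, at 100% extension
```

    The entropic origin of `mu` (proportional to cross-link density and temperature) is exactly the thermodynamic content the review traces back to the classical theory.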

  9. Combined Use of Tissue Morphology, Neural Network Analysis of Chromatin Texture and Clinical Variables to Predict Prostate Cancer Aggressiveness from Biopsy Water

    DTIC Science & Technology

    2000-10-01

    Purpose: To combine clinical, serum, pathologic and computer derived information into an artificial neural network to develop/validate a model to...Development of an artificial neural network (year 02). Prospective validation of this model (projected year 03). All models will be tested and

  10. Synchronization in complex oscillator networks and smart grids.

    PubMed

    Dörfler, Florian; Chertkov, Michael; Bullo, Francesco

    2013-02-05

    The emergence of synchronization in a network of coupled oscillators is a fascinating topic in various scientific disciplines. A widely adopted model of a coupled oscillator network is characterized by a population of heterogeneous phase oscillators, a graph describing the interaction among them, and diffusive and sinusoidal coupling. It is known that a strongly coupled and sufficiently homogeneous network synchronizes, but the exact threshold from incoherence to synchrony is unknown. Here, we present a unique, concise, and closed-form condition for synchronization of the fully nonlinear, nonequilibrium, and dynamic network. Our synchronization condition can be stated elegantly in terms of the network topology and parameters or equivalently in terms of an intuitive, linear, and static auxiliary system. Our results significantly improve upon the existing conditions advocated thus far: they are provably exact for various interesting network topologies and parameters; they are statistically correct for almost all networks; and they can be applied equally to synchronization phenomena arising in physics and biology as well as in engineered oscillator networks, such as electrical power networks. We illustrate the validity, the accuracy, and the practical applicability of our results in complex network scenarios and in smart grid applications.
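    The coupled phase-oscillator model the abstract describes (heterogeneous natural frequencies, an interaction graph, sinusoidal coupling) can be simulated directly. This is an illustrative numerical sketch, not the paper's closed-form condition; the graph, frequencies, and coupling strength are invented values chosen to sit in the strongly coupled, nearly homogeneous regime where synchronization is expected:

```python
import numpy as np

def kuramoto_step(theta, omega, A, K, dt):
    """One Euler step of dtheta_i/dt = omega_i + K * sum_j A_ij * sin(theta_j - theta_i)."""
    diff = theta[None, :] - theta[:, None]          # diff[i, j] = theta_j - theta_i
    return theta + dt * (omega + K * (A * np.sin(diff)).sum(axis=1))

def order_parameter(theta):
    """|mean phasor|: 1.0 means perfect phase synchrony, ~0 means incoherence."""
    return abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
n = 10
A = np.ones((n, n)) - np.eye(n)        # complete coupling graph (illustrative)
omega = rng.normal(0.0, 0.1, n)        # nearly homogeneous natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, n)

for _ in range(5000):                  # strong coupling: phases should lock
    theta = kuramoto_step(theta, omega, A, K=1.0, dt=0.01)

r = order_parameter(theta)             # close to 1 when synchronized
```

    Sweeping `K` downward in such a simulation exhibits the incoherence-to-synchrony threshold whose exact characterization is the subject of the paper.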

  11. A Multilayer Network Approach for Guiding Drug Repositioning in Neglected Diseases

    PubMed Central

    Chernomoretz, Ariel; Agüero, Fernán

    2016-01-01

    Drug development for neglected diseases has been historically hampered due to lack of market incentives. The advent of public domain resources containing chemical information from high throughput screenings is changing the landscape of drug discovery for these diseases. In this work we took advantage of data from extensively studied organisms like human, mouse, E. coli and yeast, among others, to develop a novel integrative network model to prioritize and identify candidate drug targets in neglected pathogen proteomes, and bioactive drug-like molecules. We modeled genomic (proteins) and chemical (bioactive compounds) data as a multilayer weighted network graph that takes advantage of bioactivity data across 221 species, chemical similarities between 1.7 × 10^5 compounds and several functional relations among 1.67 × 10^5 proteins. These relations comprised orthology, sharing of protein domains, and shared participation in defined biochemical pathways. We showcase the application of this network graph to the problem of prioritization of new candidate targets, based on the information available in the graph for known compound-target associations. We validated this strategy by performing a cross validation procedure for known mouse and Trypanosoma cruzi targets and showed that our approach outperforms classic alignment-based approaches. Moreover, our model provides additional flexibility as two different network definitions could be considered, finding in both cases qualitatively different but sensible candidate targets. We also showcase the application of the network to suggest targets for orphan compounds that are active against Plasmodium falciparum in high-throughput screens. In this case our approach provided a reduced prioritization list of target proteins for the query molecules and showed the ability to propose new testable hypotheses for each compound. Moreover, we found that some predictions highlighted by our network model were supported by independent
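    The core idea — diffusing known compound-target associations through both a chemical-similarity layer and a protein-relation layer — can be sketched with toy matrices. Everything below is invented for illustration (the real model spans 221 species and on the order of 10^5 compounds and proteins, with weighted relations from orthology, domains, and pathways):

```python
import numpy as np

# A[i, j] = 1 if compound i has a known bioactivity association with protein j.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

C = np.array([[1.0, 0.8],       # chemical similarity between the two compounds
              [0.8, 1.0]])
P = np.array([[1.0, 0.1, 0.7],  # protein-protein relations (orthology, shared
              [0.1, 1.0, 0.2],  # domains, shared pathways collapsed into one weight)
              [0.7, 0.2, 1.0]])

# Diffuse known associations through both layers: a compound scores high on a
# protein if a similar compound is known to hit a related protein.
scores = C @ A @ P

# Prioritized candidate (previously unlinked) targets for compound 0:
candidates = sorted(((scores[0, j], j) for j in range(3) if A[0, j] == 0.0),
                    reverse=True)
best_score, best_target = candidates[0]
```

    The cross-validation the authors report amounts to hiding known entries of `A` and checking where this kind of propagated score ranks them.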

  12. A Multilayer Network Approach for Guiding Drug Repositioning in Neglected Diseases.

    PubMed

    Berenstein, Ariel José; Magariños, María Paula; Chernomoretz, Ariel; Agüero, Fernán

    2016-01-01

    Drug development for neglected diseases has been historically hampered due to lack of market incentives. The advent of public domain resources containing chemical information from high throughput screenings is changing the landscape of drug discovery for these diseases. In this work we took advantage of data from extensively studied organisms like human, mouse, E. coli and yeast, among others, to develop a novel integrative network model to prioritize and identify candidate drug targets in neglected pathogen proteomes, and bioactive drug-like molecules. We modeled genomic (proteins) and chemical (bioactive compounds) data as a multilayer weighted network graph that takes advantage of bioactivity data across 221 species, chemical similarities between 1.7 × 10^5 compounds and several functional relations among 1.67 × 10^5 proteins. These relations comprised orthology, sharing of protein domains, and shared participation in defined biochemical pathways. We showcase the application of this network graph to the problem of prioritization of new candidate targets, based on the information available in the graph for known compound-target associations. We validated this strategy by performing a cross validation procedure for known mouse and Trypanosoma cruzi targets and showed that our approach outperforms classic alignment-based approaches. Moreover, our model provides additional flexibility as two different network definitions could be considered, finding in both cases qualitatively different but sensible candidate targets. We also showcase the application of the network to suggest targets for orphan compounds that are active against Plasmodium falciparum in high-throughput screens. In this case our approach provided a reduced prioritization list of target proteins for the query molecules and showed the ability to propose new testable hypotheses for each compound. Moreover, we found that some predictions highlighted by our network model were supported by independent

  13. Quantized Synchronization of Chaotic Neural Networks With Scheduled Output Feedback Control.

    PubMed

    Wan, Ying; Cao, Jinde; Wen, Guanghui

    In this paper, the synchronization problem of master-slave chaotic neural networks with remote sensors, quantization processes, and communication time delays is investigated. The information communication channel between the master and slave chaotic neural networks consists of several remote sensors, each able to access only partial knowledge of the output information of the master neural network. At each sampling instant, each sensor updates its own measurement, and only one sensor is scheduled to transmit its latest information to the controller's side in order to update the control inputs for the slave neural network. Such a communication process and control strategy is therefore much more energy-saving than the traditional point-to-point scheme. Sufficient conditions for the output feedback control gain matrix, the allowable length of sampling intervals, and the upper bound of network-induced delays are derived to ensure the quantized synchronization of the master-slave chaotic neural networks. Lastly, Chua's circuit system and a 4-D Hopfield neural network are simulated to validate the effectiveness of the main results.

  14. Collocation mismatch uncertainties in satellite aerosol retrieval validation

    NASA Astrophysics Data System (ADS)

    Virtanen, Timo H.; Kolmonen, Pekka; Sogacheva, Larisa; Rodríguez, Edith; Saponaro, Giulia; de Leeuw, Gerrit

    2018-02-01

    Satellite-based aerosol products are routinely validated against ground-based reference data, usually obtained from sun photometer networks such as AERONET (AEROsol RObotic NETwork). In a typical validation exercise a spatial sample of the instantaneous satellite data is compared against a temporal sample of the point-like ground-based data. The observations do not correspond to exactly the same column of the atmosphere at the same time, and the representativeness of the reference data depends on the spatiotemporal variability of the aerosol properties in the samples. The associated uncertainty is known as the collocation mismatch uncertainty (CMU). The validation results depend on the sampling parameters. While small samples involve less variability, they are more sensitive to the inevitable noise in the measurement data. In this paper we systematically study the effect of the sampling parameters in the validation of the AATSR (Advanced Along-Track Scanning Radiometer) aerosol optical depth (AOD) product against AERONET data and the associated collocation mismatch uncertainty. To this end, we study the spatial AOD variability in the satellite data, compare it against the corresponding values obtained from densely located AERONET sites, and assess the possible reasons for observed differences. We find that the spatial AOD variability in the satellite data is approximately twice as large as in the ground-based data, and the spatial variability correlates only weakly with that of AERONET for short distances. We interpret this to mean that only half of the variability in the satellite data is due to the natural variability in the AOD, and the rest is noise due to retrieval errors. However, for larger distances (~0.5°) the correlation improves as the noise is averaged out, and the day-to-day changes in regional AOD variability are well captured. Furthermore, we assess the usefulness of the spatial variability of the satellite AOD data as an estimate of CMU by comparing the
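    The sampling setup behind such a collocation exercise — a spatial sample of satellite pixels compared against a temporal sample of point measurements — can be sketched with synthetic numbers. All values here are invented for illustration; the inflated spread of the satellite sample mimics the roughly factor-of-two excess variability the abstract attributes partly to retrieval noise:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical samples: instantaneous satellite AOD pixels within a sampling
# radius of a site, and ground sun-photometer AOD within a time window
# around the satellite overpass.
sat_aod = rng.normal(0.20, 0.04, 60)     # spatial sample; retrieval noise inflates spread
ground_aod = rng.normal(0.20, 0.02, 12)  # temporal sample at a single point

bias = sat_aod.mean() - ground_aod.mean()
# The spatial standard deviation of the satellite sample serves as a
# (noise-inflated) proxy for the collocation mismatch uncertainty.
cmu_proxy = sat_aod.std(ddof=1)
```

    Varying the sampling radius and time window in such a comparison is exactly the sensitivity study the paper carries out.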

  15. Dominating Scale-Free Networks Using Generalized Probabilistic Methods

    PubMed Central

    Molnár, F.; Derzsy, N.; Czabarka, É.; Székely, L.; Szymanski, B. K.; Korniss, G.

    2014-01-01

    We study ensemble-based graph-theoretical methods aiming to approximate the size of the minimum dominating set (MDS) in scale-free networks. We analyze both analytical upper bounds of dominating sets and numerical realizations for applications. We propose two novel probabilistic dominating set selection strategies that are applicable to heterogeneous networks. One of them obtains the smallest probabilistic dominating set and also outperforms the deterministic degree-ranked method. We show that a degree-dependent probabilistic selection method becomes optimal in its deterministic limit. In addition, we also find the precise limit where selecting high-degree nodes exclusively becomes inefficient for network domination. We validate our results on several real-world networks, and provide highly accurate analytical estimates for our methods. PMID:25200937
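    The flavor of a probabilistic dominating-set selection can be sketched as a two-stage construction: include each node with a degree-dependent probability, then repair any node left undominated. Both the probability rule and the graph below are assumed stand-ins for illustration, not the authors' exact strategies:

```python
import math
import random

def probabilistic_dominating_set(adj, rng):
    """Two-stage probabilistic construction: degree-dependent random
    inclusion, followed by a repair pass that guarantees domination."""
    S = set()
    for v in adj:
        d = len(adj[v])
        p = min(1.0, math.log(d + 2) / (d + 1))  # assumed degree-dependent rule
        if rng.random() < p:
            S.add(v)
    for v in adj:                                # repair: cover leftover nodes
        if v not in S and not any(u in S for u in adj[v]):
            S.add(v)
    return S

def is_dominating(adj, S):
    return all(v in S or any(u in S for u in adj[v]) for v in adj)

# Star graph: hub 0 with leaves 1..9; the minimum dominating set is just {0}.
adj = {0: list(range(1, 10)), **{i: [0] for i in range(1, 10)}}
S = probabilistic_dominating_set(adj, random.Random(1))
```

    Averaging `len(S)` over many random draws is how such ensemble-based estimates of dominating-set size are obtained in practice.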

  16. A multi-scale automatic observatory of soil moisture and temperature served for satellite product validation in Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Tang, S.; Dong, L.; Lu, P.; Zhou, K.; Wang, F.; Han, S.; Min, M.; Chen, L.; Xu, N.; Chen, J.; Zhao, P.; Li, B.; Wang, Y.

    2016-12-01

    Due to the lack of observing data that match the satellite pixel size, the inversion accuracy of satellite products over the Tibetan Plateau (TP) is difficult to evaluate. Hence, in situ observations are necessary to support calibration and validation activities. With the support of the Third Tibetan Plateau Atmospheric Scientific Experiment (TIPEX-III) project, a multi-scale automatic observatory of soil moisture and temperature serving satellite product validation (TIPEX-III-SMTN) was established on the Tibetan Plateau. The observatory consists of two regional-scale networks: the Naqu network and the Geji network. The Naqu network is located in the north of the TP and is characterized by alpine grasslands. The Geji network is located in the west of the TP and is characterized by marshes. The Naqu network includes 33 stations, deployed in a 75 km × 75 km region according to a pre-designed pattern. At each station, soil moisture and temperature are measured by five sensors at five soil depths. One sensor is vertically inserted into the 0-2 cm depth to measure the averaged near-surface soil moisture and temperature; the other four sensors are horizontally inserted at 5, 10, 20, and 30 cm depths, respectively. The data are recorded every 10 minutes. A wireless transmission system transmits the data in real time, and a dual power supply system keeps the observation continuous. Construction of the Naqu network was completed in August 2015, and the Geji network will be established before October 2016. Observations acquired from TIPEX-III-SMTN can be used to validate satellite products with different spatial resolutions, and TIPEX-III-SMTN can also serve as a complement to existing similar networks in this area, such as CTP-SMTMN (the multiscale Soil Moisture and Temperature Monitoring Network on the central TP). Keywords: multi-scale, soil moisture, soil temperature, Tibetan Plateau. Acknowledgments: This work was jointly

  17. Validating crash locations for quantitative spatial analysis: a GIS-based approach.

    PubMed

    Loo, Becky P Y

    2006-09-01

    In this paper, the spatial variables of the crash database in Hong Kong from 1993 to 2004 are validated. The proposed spatial data validation system makes use of three databases (the crash, road network and district board databases) and relies on GIS to carry out most of the validation steps so that the human resource required for manually checking the accuracy of the spatial data can be enormously reduced. With the GIS-based spatial data validation system, it was found that about 65-80% of the police crash records from 1993 to 2004 had correct road names and district board information. In 2004, the police crash database contained about 12.7% mistakes for road names and 9.7% mistakes for district boards. The situation was broadly comparable to the United Kingdom. However, the results also suggest that safety researchers should carefully validate spatial data in the crash database before scientific analysis.

  18. Reliability Modeling of Microelectromechanical Systems Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Perera, J. Sebastian

    2000-01-01

    Microelectromechanical systems (MEMS) are a broad and rapidly expanding field that is currently receiving a great deal of attention because of the potential to significantly improve the ability to sense, analyze, and control a variety of processes, such as heating and ventilation systems, automobiles, medicine, aeronautical flight, military surveillance, weather forecasting, and space exploration. MEMS are very small and are a blend of electrical and mechanical components, with electrical and mechanical systems on one chip. This research establishes reliability estimation and prediction for MEMS devices at the conceptual design phase using neural networks. At the conceptual design phase, before devices are built and tested, traditional methods of quantifying reliability are inadequate because the device is not in existence and cannot be tested to establish the reliability distributions. A novel approach using neural networks is created to predict the overall reliability of a MEMS device based on its components and each component's attributes. The methodology begins with collecting attribute data (fabrication process, physical specifications, operating environment, property characteristics, packaging, etc.) and reliability data for many types of microengines. The data are partitioned into training data (the majority) and validation data (the remainder). A neural network is applied to the training data (both attribute and reliability); the attributes become the system inputs and the reliability data (cycles to failure), the system output. After the neural network is trained with sufficient data, the validation data are used to verify that the neural network provides accurate reliability estimates. The reliability of a new proposed MEMS device can then be estimated by using the appropriate trained neural networks developed in this work.

  19. Validating the Why/How Contrast for Functional MRI Studies of Theory of Mind

    PubMed Central

    Spunt, Robert P.; Adolphs, Ralph

    2014-01-01

    The ability to impute mental states to others, or Theory of Mind (ToM), has been the subject of hundreds of neuroimaging studies. Although reviews and meta-analyses of these studies have concluded that ToM recruits a coherent brain network, mounting evidence suggests that this network is an abstraction based on pooling data from numerous studies, most of which use different behavioral tasks to investigate ToM. Problematically, this means that no single behavioral task can be used to reliably measure ToM Network function as currently conceived. To make ToM Network function scientifically tractable, we need standardized tasks capable of reliably measuring specific aspects of its functioning. Here, our goal is to validate the Why/How Task for this purpose. Several prior studies have found that when compared to answering how-questions about another person's behavior, answering why-questions about that same behavior activates a network that is anatomically consistent with meta-analytic definitions of the ToM Network. In the version of the Why/How Task presented here, participants answer yes/no Why (e.g., Is the person helping someone?) and How (e.g., Is the person lifting something?) questions about pretested photographs of naturalistic human behaviors. Across three fMRI studies, we show that the task elicits reliable performance measurements and modulates a left-lateralized network that is consistently localized across studies. While this network is convergent with meta-analyses of ToM studies, it is largely distinct from the network identified by the widely used False-Belief Localizer, the most common ToM task. Our new task is publicly available, and can be used as an efficient functional localizer to provide reliable identification of single-subject responses in most regions of the network. 
Our results validate the Why/How Task, both as a standardized protocol capable of producing maximally comparable data across studies, and as a flexible foundation for programmatic

  20. Reef-fish larval dispersal patterns validate no-take marine reserve network connectivity that links human communities

    NASA Astrophysics Data System (ADS)

    Abesamis, Rene A.; Saenz-Agudelo, Pablo; Berumen, Michael L.; Bode, Michael; Jadloc, Claro Renato L.; Solera, Leilani A.; Villanoy, Cesar L.; Bernardo, Lawrence Patrick C.; Alcala, Angel C.; Russ, Garry R.

    2017-09-01

    Networks of no-take marine reserves (NTMRs) are a widely advocated strategy for managing coral reefs. However, uncertainty about the strength of population connectivity between individual reefs and NTMRs through larval dispersal remains a major obstacle to effective network design. In this study, larval dispersal among NTMRs and fishing grounds in the Philippines was inferred by conducting genetic parentage analysis on a coral-reef fish (Chaetodon vagabundus). Adult and juvenile fish were sampled intensively in an area encompassing approximately 90 km of coastline. Thirty-seven true parent-offspring pairs were accepted after screening 1978 juveniles against 1387 adults. The data showed all types of dispersal connections that may occur in NTMR networks, with assignments suggesting connectivity among NTMRs and fishing grounds (n = 35) far outnumbering those indicating self-recruitment (n = 2). Critically, half (51%) of the inferred occurrences of larval dispersal linked reefs managed by separate, independent municipalities and constituent villages, emphasising the need for nested collaborative management arrangements across management units to sustain NTMR networks. Larval dispersal appeared to be influenced by wind-driven seasonal reversals in the direction of surface currents. The best-fit larval dispersal kernel estimated from the parentage data predicted that 50% of larvae originating from a population would attempt to settle within 33 km, and 95% within 83 km. Mean larval dispersal distance was estimated to be 36.5 km. These results suggest that creating a network of closely spaced (less than a few tens of km apart) NTMRs can enhance recruitment for protected and fished populations throughout the NTMR network.
The findings underscore major challenges for regional coral-reef management initiatives that must be addressed with priority: (1) strengthening management of NTMR networks across political or customary boundaries; and (2) achieving adequate population
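    The kernel quantiles quoted above (50% of settlement within one distance, 95% within another) come from integrating a fitted dispersal kernel over distance, which is easy to sketch numerically. The negative-exponential kernel below is an assumed stand-in pinned only to the reported 36.5 km mean; the study's actual best-fit kernel differs, so these quantiles will not reproduce the 33 km / 83 km figures exactly:

```python
import numpy as np

def quantile_distance(pdf, q, d_max=300.0, n=30000):
    """Distance within which a fraction q of larvae settle, by numerically
    integrating a 1-D dispersal kernel over distance on a fine grid."""
    d = np.linspace(0.0, d_max, n)
    cdf = np.cumsum(pdf(d))
    cdf /= cdf[-1]                     # normalize the discrete CDF
    return d[np.searchsorted(cdf, q)]

# Assumed stand-in kernel: negative exponential with a 36.5 km mean distance.
mean_dist = 36.5
pdf = lambda d: np.exp(-d / mean_dist) / mean_dist

d50 = quantile_distance(pdf, 0.50)     # median dispersal distance (km)
d95 = quantile_distance(pdf, 0.95)     # 95% settlement distance (km)
```

    The gap between such quantiles and proposed reserve spacing is the quantity that matters for the "closely spaced NTMRs" recommendation.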

  1. Why the Item "23 + 1" Is Not in a Depression Questionnaire: Validity from a Network Perspective

    ERIC Educational Resources Information Center

    Cramer, Angelique O. J.

    2012-01-01

    What is validity? A simple question but apparently one with many answers, as Paul Newton highlights in his review of the history of validity. The current definition of validity, as entertained in the 1999 "Standards for Educational and Psychological Testing" is indeed a consensus, one between the classical notion of attributes, and measures…

  2. Using turbidity for designing water networks.

    PubMed

    Castaño, J A; Higuita, J C

    2016-05-01

    Some methods to design water networks with minimum fresh water consumption are based on the selection of a key contaminant. In most of these "single contaminant methods", a maximum allowable concentration of contaminants must be established in water demands and water sources. Turbidity is not a contaminant concentration but a property that represents the "sum" of other contaminants, with the advantage that it can be measured more cheaply and easily than biological oxygen demand, chemical oxygen demand, suspended solids, or dissolved solids, among others. The objective of this paper is to demonstrate that turbidity can be used directly in the design of water networks just like any other contaminant concentration. A mathematical demonstration is presented, and to validate the mathematical results, the design of a water network for a guava fudge production process is performed. The material recovery pinch diagram and the nearest neighbors algorithm were used for the design of the water network. Nevertheless, this water network could be designed using other single contaminant methodologies. The maximum error between the expected and the real turbidity values in the water network was 3.3%. These results corroborate the usefulness of turbidity in the design of water networks. Copyright © 2016 Elsevier Ltd. All rights reserved.
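    The nearest-neighbors step used in such pinch-based designs can be sketched for a single demand: mix the cleanest source at or below the demand's inlet limit with the dirtiest source above it so the limit is met exactly, using turbidity directly as the "contaminant" axis, as the paper argues one may. This is a simplified single-demand sketch with hypothetical numbers; source-availability bookkeeping across multiple demands is omitted:

```python
def nearest_neighbors_allocation(demand_flow, demand_limit, sources):
    """Mix the two sources whose turbidities bracket the demand limit.
    `sources` is a list of (turbidity_NTU, available_flow) pairs."""
    below = [s for s in sources if s[0] <= demand_limit]
    above = [s for s in sources if s[0] > demand_limit]
    c_lo = max(below)[0]   # nearest cleaner neighbour
    c_hi = min(above)[0]   # nearest dirtier neighbour
    # Mass balance: f_lo + f_hi = F  and  f_lo*c_lo + f_hi*c_hi = F*limit
    f_hi = demand_flow * (demand_limit - c_lo) / (c_hi - c_lo)
    f_lo = demand_flow - f_hi
    return {c_lo: f_lo, c_hi: f_hi}

# 100 m3/h demand with a 10 NTU inlet limit; fresh water (0 NTU) and a
# 25 NTU reuse stream available (hypothetical values).
alloc = nearest_neighbors_allocation(100.0, 10.0, [(0.0, 500.0), (25.0, 80.0)])
```

    Reusing 40 m3/h of the dirty stream here saves exactly that much fresh water, which is the point of the targeting exercise.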

  3. Statistical mechanics of the international trade network.

    PubMed

    Fronczak, Agata; Fronczak, Piotr

    2012-05-01

    Analyzing real data on international trade covering the time interval 1950-2000, we show that in each year over the analyzed period the network is a typical representative of the ensemble of maximally random weighted networks, whose directed connections (bilateral trade volumes) are only characterized by the product of the trading countries' GDPs. It means that time evolution of this network may be considered as a continuous sequence of equilibrium states, i.e., a quasistatic process. This, in turn, allows one to apply the linear response theory to make (and also verify) simple predictions about the network. In particular, we show that bilateral trade fulfills a fluctuation-response theorem, which states that the average relative change in imports (exports) between two countries is a sum of the relative changes in their GDPs. Yearly changes in trade volumes prove that the theorem is valid.
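    The fluctuation-response statement in the abstract is easy to check arithmetically: if bilateral trade scales as the product of the two countries' GDPs, then to first order the relative change in trade is the sum of the two relative GDP changes. The GDP figures below are hypothetical:

```python
def trade(gi, gj, a=1e-6):
    """Assumed gravity-like form: bilateral trade proportional to the
    product of the trading countries' GDPs, as in the ensemble above."""
    return a * gi * gj

g1, g2 = 2.0e12, 5.0e11          # hypothetical GDPs
dg1, dg2 = 0.03, 0.02            # 3% and 2% relative GDP changes

w0 = trade(g1, g2)
w1 = trade(g1 * (1 + dg1), g2 * (1 + dg2))
rel_change = w1 / w0 - 1         # (1+dg1)(1+dg2) - 1 ≈ dg1 + dg2 to first order
```

    The second-order term `dg1 * dg2` is the only deviation from the theorem's linear prediction, and it vanishes for small yearly changes.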

  4. Statistical mechanics of the international trade network

    NASA Astrophysics Data System (ADS)

    Fronczak, Agata; Fronczak, Piotr

    2012-05-01

    Analyzing real data on international trade covering the time interval 1950-2000, we show that in each year over the analyzed period the network is a typical representative of the ensemble of maximally random weighted networks, whose directed connections (bilateral trade volumes) are only characterized by the product of the trading countries' GDPs. It means that time evolution of this network may be considered as a continuous sequence of equilibrium states, i.e., a quasistatic process. This, in turn, allows one to apply the linear response theory to make (and also verify) simple predictions about the network. In particular, we show that bilateral trade fulfills a fluctuation-response theorem, which states that the average relative change in imports (exports) between two countries is a sum of the relative changes in their GDPs. Yearly changes in trade volumes prove that the theorem is valid.

  5. Defense of Tests Prevents Objective Consideration of Validity and Fairness

    ERIC Educational Resources Information Center

    Helms, Janet E.

    2009-01-01

    In defending tests of cognitive abilities, knowledge, or skills (CAKS) from the skepticism of their "family members, friends, and neighbors" and aiding psychologists forced to defend tests from "myth and hearsay" in their own skeptical social networks (p. 215), Sackett, Borneman, and Connelly focused on evaluating validity coefficients, racial or…

  6. A simplified method of performance indicators development for epidemiological surveillance networks--application to the RESAPATH surveillance network.

    PubMed

    Sorbe, A; Chazel, M; Gay, E; Haenni, M; Madec, J-Y; Hendrikx, P

    2011-06-01

    Developing and calculating performance indicators makes it possible to continuously follow the operation of an epidemiological surveillance network. This is an internal evaluation method, implemented by the coordinators in collaboration with all the actors of the network. Its purpose is to detect weak points in order to optimize management. A method for the development of performance indicators for epidemiological surveillance networks was developed in 2004 and has been applied to several networks. Its implementation requires a thorough description of the network environment and all its activities in order to define priority indicators. Since this method is considered complex, our objective was to develop a simplified approach and apply it to an epidemiological surveillance network. We applied the initial method to a theoretical network model to obtain a list of generic indicators that can be adapted to any surveillance network. We obtained a list of 25 generic performance indicators, intended to be reformulated and described according to the specificities of each network. It was used to develop performance indicators for RESAPATH, an epidemiological surveillance network for antimicrobial resistance in pathogenic bacteria of animal origin in France. This application allowed us to validate the simplified method, its value in terms of practical implementation, and its level of user acceptance. Its ease of use and speed of application compared to the initial method argue in favor of its use on a broader scale. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  7. A logic-based method to build signaling networks and propose experimental plans.

    PubMed

    Rougny, Adrien; Gloaguen, Pauline; Langonné, Nathalie; Reiter, Eric; Crépieux, Pascale; Poupon, Anne; Froidevaux, Christine

    2018-05-18

    With the dramatic increase of the diversity and the sheer quantity of biological data generated, the construction of comprehensive signaling networks that include precise mechanisms cannot be carried out manually anymore. In this context, we propose a logic-based method that allows building large signaling networks automatically. Our method is based on a set of expert rules that make explicit the reasoning made by biologists when interpreting experimental results coming from a wide variety of experiment types. These rules allow formulating all the conclusions that can be inferred from a set of experimental results, and thus building all the possible networks that explain these results. Moreover, given a hypothesis, our system proposes experimental plans to carry out in order to validate or invalidate it. To evaluate the performance of our method, we applied our framework to the reconstruction of the FSHR-induced and the EGFR-induced signaling networks. The FSHR is known to induce the transactivation of the EGFR, but very little is known about the resulting FSH- and EGF-dependent network. We built a single network using data underlying both networks. This leads to a new hypothesis on the activation of MEK by p38MAPK, which we validate experimentally. These preliminary results represent a first step in the demonstration of a cross-talk between these two major MAP kinase pathways.
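    The expert-rule idea — turning interpreted experimental observations into network facts until nothing new can be inferred — is a forward-chaining fixed point, which can be sketched in a few lines. The facts and rules below are purely illustrative (loosely echoing the p38MAPK/MEK example), not the paper's actual rule base or syntax:

```python
def forward_chain(facts, rules):
    """Apply rules (premise set -> conclusion) to the fact base until a
    fixed point is reached, i.e. no rule can add a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative rule base only.
rules = [
    ({"phospho(MEK) increases after stim(FSH)"},
     "activates(FSHR, MEK)"),
    ({"activates(FSHR, MEK)", "inhibiting p38MAPK blocks phospho(MEK)"},
     "intermediate(p38MAPK, FSHR, MEK)"),
]
derived = forward_chain({"phospho(MEK) increases after stim(FSH)",
                         "inhibiting p38MAPK blocks phospho(MEK)"}, rules)
```

    Proposing an experimental plan then amounts to finding which missing fact, if established, would let a rule fire and decide a pending hypothesis.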

  8. Artificial Neural Network with Hardware Training and Hardware Refresh

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor)

    2003-01-01

    A neural network circuit is provided having a plurality of circuits capable of charge storage. Also provided is a plurality of circuits each coupled to at least one of the plurality of charge storage circuits and constructed to generate an output in accordance with a neuron transfer function. Each of a plurality of circuits is coupled to one of the plurality of neuron transfer function circuits and constructed to generate a derivative of the output. A weight update circuit updates the charge storage circuits based upon output from the plurality of transfer function circuits and output from the plurality of derivative circuits. In preferred embodiments, separate training and validation networks share the same set of charge storage circuits and may operate concurrently. The validation network has separate transfer function circuits, each coupled to the charge storage circuits so as to replicate the training network's coupling of the plurality of charge storage circuits to the plurality of transfer function circuits. The plurality of transfer function circuits may be constructed each having a transconductance amplifier providing differential currents combined to provide an output in accordance with a transfer function. The derivative circuits may have a circuit constructed to generate biased differential currents combined so as to provide the derivative of the transfer function.

  9. Verification and Validation of Adaptive and Intelligent Systems with Flight Test Results

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Larson, Richard R.

    2009-01-01

    F-15 IFCS project goals are: a) Demonstrate control approaches that can efficiently optimize aircraft performance in both normal and failure conditions ([A] and [B] failures); b) Advance neural network-based flight control technology for new aerospace system designs with a pilot in the loop. Gen II objectives include: a) Implement and fly a direct adaptive neural network-based flight controller; b) Demonstrate the ability of the system to adapt to simulated system failures: 1) suppress transients associated with the failure; 2) re-establish sufficient control and handling of the vehicle for safe recovery; c) Provide flight experience for development of verification and validation processes for flight-critical neural network software.

  10. Z-Score-Based Modularity for Community Detection in Networks

    PubMed Central

    Miyauchi, Atsushi; Kawase, Yasushi

    2016-01-01

    Identifying community structure in networks is an issue of particular interest in network science. The modularity introduced by Newman and Girvan is the most popular quality function for community detection in networks. In this study, we identify a problem in the concept of modularity and suggest a solution to overcome this problem. Specifically, we obtain a new quality function for community detection. We refer to the function as Z-modularity because it measures the Z-score of a given partition with respect to the fraction of the number of edges within communities. Our theoretical analysis shows that Z-modularity mitigates the resolution limit of the original modularity in certain cases. Computational experiments using both artificial networks and well-known real-world networks demonstrate the validity and reliability of the proposed quality function. PMID:26808270
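    The abstract describes Z-modularity as the Z-score of a partition with respect to the fraction of within-community edges. A minimal pure-Python sketch of that idea, under the assumption that the within-community edge count is modeled as Binomial(m, p) with p the configuration-model expectation (the paper's exact normalization may differ):

```python
from collections import defaultdict
from math import sqrt

def z_modularity(edges, community):
    """Newman-Girvan modularity Q and a Z-score variant of it.

    edges: list of (u, v) pairs; community: dict mapping node -> community id.
    The Z-score treats the number of within-community edges as Binomial(m, p),
    where p is the null-model expectation -- one plausible reading of the abstract.
    """
    m = len(edges)
    within = defaultdict(int)   # edges fully inside each community
    degsum = defaultdict(int)   # total degree per community
    for u, v in edges:
        degsum[community[u]] += 1
        degsum[community[v]] += 1
        if community[u] == community[v]:
            within[community[u]] += 1
    frac_within = sum(within.values()) / m
    p = sum((d / (2 * m)) ** 2 for d in degsum.values())
    q = frac_within - p                              # classic modularity
    z = (frac_within - p) / sqrt(p * (1 - p) / m)    # z-score of the fraction
    return q, z
```

On two triangles joined by one bridge edge, the natural two-community split gives Q = 5/14 and a positive Z-score, as expected for genuine community structure.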

  11. Construction and Initial Validation of the Multiracial Experiences Measure (MEM)

    PubMed Central

    Yoo, Hyung Chol; Jackson, Kelly; Guevarra, Rudy P.; Miller, Matthew J.; Harrington, Blair

    2015-01-01

    This article describes the development and validation of the Multiracial Experiences Measure (MEM): a new measure that assesses uniquely racialized risks and resiliencies experienced by individuals of mixed racial heritage. Across two studies, there was evidence for the validation of the 25-item MEM with 5 subscales including Shifting Expressions, Perceived Racial Ambiguity, Creating Third Space, Multicultural Engagement, and Multiracial Discrimination. The 5-subscale structure of the MEM was supported by a combination of exploratory and confirmatory factor analyses. Evidence of criterion-related validity was partially supported with MEM subscales correlating with measures of racial diversity in one’s social network, color-blind racial attitude, psychological distress, and identity conflict. Evidence of discriminant validity was supported with MEM subscales not correlating with impression management. Implications for future research and suggestions for utilization of the MEM in clinical practice with multiracial adults are discussed. PMID:26460977

  12. Construction and initial validation of the Multiracial Experiences Measure (MEM).

    PubMed

    Yoo, Hyung Chol; Jackson, Kelly F; Guevarra, Rudy P; Miller, Matthew J; Harrington, Blair

    2016-03-01

    This article describes the development and validation of the Multiracial Experiences Measure (MEM): a new measure that assesses uniquely racialized risks and resiliencies experienced by individuals of mixed racial heritage. Across 2 studies, there was evidence for the validation of the 25-item MEM with 5 subscales including Shifting Expressions, Perceived Racial Ambiguity, Creating Third Space, Multicultural Engagement, and Multiracial Discrimination. The 5-subscale structure of the MEM was supported by a combination of exploratory and confirmatory factor analyses. Evidence of criterion-related validity was partially supported with MEM subscales correlating with measures of racial diversity in one's social network, color-blind racial attitude, psychological distress, and identity conflict. Evidence of discriminant validity was supported with MEM subscales not correlating with impression management. Implications for future research and suggestions for utilization of the MEM in clinical practice with multiracial adults are discussed. (c) 2016 APA, all rights reserved.

  13. A Cross Cultural Validation of Perceptions and Use of Social Network Service: An Exploratory Study

    ERIC Educational Resources Information Center

    Guo, Chengqi

    2009-01-01

    The rapid development of Social Network Services (SNS) has offered opportunities to revisit many seminal theoretical assumptions about technology usage within socio-technical environments. Online social networking is a rapidly growing field that poses new questions for the existing IS research paradigm. It is argued that information systems research must…

  14. Incidents Prediction in Road Junctions Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Hajji, Tarik; Alami Hassani, Aicha; Ouazzani Jamil, Mohammed

    2018-05-01

    The implementation of an incident detection system (IDS) is an indispensable operation in the analysis of road traffic. However, an IDS may in no case replace the classical monitoring system controlled by the human eye. The aim of this work is to increase the probability of detecting and predicting incidents in camera-monitored areas, given that these areas are monitored by multiple cameras but few supervisors. Our solution is to use Artificial Neural Networks (ANN) to analyze the trajectories of moving objects in captured images. We first propose a model of the trajectories and their characteristics, then build a learning database of valid and invalid trajectories, and finally carry out a comparative study to find the artificial neural network architecture that maximizes the recognition rate of valid and invalid trajectories.

  15. On the Reliability of Individual Brain Activity Networks.

    PubMed

    Cassidy, Ben; Bowman, F DuBois; Rae, Caroline; Solo, Victor

    2018-02-01

    There is intense interest in fMRI research on whole-brain functional connectivity; however, two fundamental issues are still unresolved: the impact of spatiotemporal data resolution (spatial parcellation and temporal sampling) and the impact of the network construction method on the reliability of functional brain networks. In particular, the impact of spatiotemporal data resolution on the resulting connectivity findings has not been sufficiently investigated. In fact, a number of studies have already observed that functional networks often lead to different conclusions across different parcellation scales. If the interpretations drawn from functional networks are inconsistent across spatiotemporal scales, then the validity of the whole functional network paradigm is called into question. This paper investigates the consistency of resting-state network structure under different temporal sampling, spatial parcellation, and network construction methods. To pursue this, we develop a novel network comparison framework based on persistent homology from topological data analysis. We use the new network comparison tools to characterize the spatial and temporal scales at which consistent functional networks can be constructed. The methods are illustrated on Human Connectome Project data, showing that the DISCOH 2 network construction method outperforms other approaches at most spatiotemporal data resolutions.

  16. Risk prediction model: Statistical and artificial neural network approach

    NASA Astrophysics Data System (ADS)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly popular and have been used in numerous areas of study to complement clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand suitable approaches and the development and validation process of such models. A qualitative review of the aims, methods, and significant main outcomes of nineteen published articles that developed risk prediction models across numerous fields was carried out. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. Currently, however, only limited published literature discusses which approach is more accurate for risk prediction model development.

  17. Morphine Regulated Synaptic Networks Revealed by Integrated Proteomics and Network Analysis*

    PubMed Central

    Stockton, Steven D.; Gomes, Ivone; Liu, Tong; Moraje, Chandrakala; Hipólito, Lucia; Jones, Matthew R.; Ma'ayan, Avi; Morón, Jose A.; Li, Hong; Devi, Lakshmi A.

    2015-01-01

    Despite its efficacy, the use of morphine for the treatment of chronic pain remains limited because of the rapid development of tolerance, dependence, and ultimately addiction. These undesired effects are thought to result from alterations in synaptic transmission and neuroplasticity within the reward circuitry, including the striatum. In this study we used subcellular fractionation and quantitative proteomics combined with computational approaches to investigate morphine-induced protein profile changes at the striatal postsynaptic density. Over 2,600 proteins were identified by mass spectrometry analysis of subcellular fractions enriched in postsynaptic density associated proteins from saline- or morphine-treated striata. Among these, the levels of 34 proteins were differentially altered in response to morphine. These include proteins involved in G-protein coupled receptor signaling, regulation of transcription and translation, chaperones, and protein degradation pathways. The altered expression levels of several of these proteins were validated by Western blotting analysis. Using the Genes2Fans software suite we connected the differentially expressed proteins with proteins identified within the known background protein-protein interaction network. This led to the generation of a network consisting of 116 proteins with 40 significant intermediates. To validate this, we confirmed the presence of three proteins predicted to be significant intermediates: caspase-3, receptor-interacting serine/threonine protein kinase 3, and NEDD4 (an E3-ubiquitin ligase identified as a neural precursor cell expressed developmentally down-regulated protein 4). Because this morphine-regulated network predicted alterations in proteasomal degradation, we examined the global ubiquitination state of postsynaptic density proteins and found it to be substantially altered. Together, these findings suggest a role for protein degradation and for the ubiquitin/proteasomal system in the etiology of

  18. Signal processing and neural network toolbox and its application to failure diagnosis and prognosis

    NASA Astrophysics Data System (ADS)

    Tu, Fang; Wen, Fang; Willett, Peter K.; Pattipati, Krishna R.; Jordan, Eric H.

    2001-07-01

    Many systems are composed of components equipped with self-testing capability; however, if the system is complex, involves feedback, and the self-testing itself may occasionally be faulty, tracing faults to a single or multiple causes is difficult. Moreover, many sensors are incapable of reliable decision-making on their own. In such cases, a signal processing front-end that can match inference needs is very helpful. This work is concerned with providing an object-oriented simulation environment for signal processing and neural network-based fault diagnosis and prognosis. In the toolbox, we implemented a wide range of spectral and statistical manipulation methods, such as filters, harmonic analyzers, transient detectors, and multi-resolution decomposition, to extract features of failure events from data collected by sensors. We then evaluated multiple learning paradigms for general classification, diagnosis, and prognosis. The network models evaluated include the Restricted Coulomb Energy (RCE) neural network, Learning Vector Quantization (LVQ), Decision Trees (C4.5), Fuzzy Adaptive Resonance Theory (FuzzyArtmap), the Linear Discriminant Rule (LDR), the Quadratic Discriminant Rule (QDR), Radial Basis Functions (RBF), Multiple Layer Perceptrons (MLP) and Single Layer Perceptrons (SLP). Validation techniques, such as N-fold cross-validation and bootstrap techniques, are employed to evaluate the robustness of the network models. The trained networks are evaluated on test data in terms of percent error rates obtained via cross-validation, time efficiency, and generalization ability to unseen faults. Finally, the use of neural networks for predicting the residual life of turbine blades with thermal barrier coatings is described and the results are shown. The neural network toolbox has also been applied to fault diagnosis in mixed-signal circuits.
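    Of the validation techniques the abstract names, N-fold cross-validation is easy to make concrete. The helper below is an illustrative sketch of the index-splitting step, not the toolbox's actual API:

```python
import random

def k_fold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k shuffled, near-equal folds and yield
    (train, test) index lists -- the N-fold cross-validation scheme used
    to estimate error rates on held-out data."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]   # round-robin split of shuffled indices
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test
```

Each sample appears in exactly one test fold, so averaging the per-fold error rates gives the cross-validated estimate the abstract refers to.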

  19. Region stability analysis and tracking control of memristive recurrent neural network.

    PubMed

    Bao, Gang; Zeng, Zhigang; Shen, Yanjun

    2018-02-01

    The memristor was first postulated by Leon Chua and realized by the Hewlett-Packard (HP) laboratory. Research results show that memristors can be used to simulate the synapses of neurons. This paper presents a class of recurrent neural networks with HP memristors. First, simulations show that a memristive recurrent neural network has more complex dynamics than the traditional recurrent neural network. It is then derived that an n-dimensional memristive recurrent neural network is composed of [Formula: see text] sub-neural networks which do not share a common equilibrium point. By designing a tracking controller, the memristive neural network can be made to converge to the desired sub-neural network. Finally, two numerical examples are given to verify the validity of our result. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. A constraint optimization based virtual network mapping method

    NASA Astrophysics Data System (ADS)

    Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen

    2013-03-01

    The virtual network mapping problem, which maps different virtual networks onto a substrate network, is extremely challenging. This paper proposes a constraint-optimization-based mapping method for solving it. The method divides the problem into two phases, node mapping and link mapping, both of which are NP-hard, and proposes an algorithm for each. The node mapping algorithm adopts a greedy strategy and mainly considers two factors: the resources available at the nodes and the distance between nodes. The link mapping algorithm builds on the result of the node mapping phase and adopts a distributed constraint optimization method, which guarantees an optimal mapping with minimum network cost. Finally, simulation experiments validate the method, and the results show that it performs very well.
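    The greedy node-mapping phase can be sketched as follows. The abstract does not give the exact scoring rule, so ranking feasible substrate nodes by spare resources with distance as a tie-breaker, and mapping distinct virtual nodes to distinct substrate nodes, are illustrative assumptions:

```python
def greedy_node_mapping(virtual_demand, substrate_cap, dist):
    """Greedily place each virtual node on a substrate node.

    virtual_demand: dict vnode -> required resources
    substrate_cap:  dict snode -> available resources (consumed as we map)
    dist:           dict snode -> distance to a reference node (tie-breaker)
    Virtual nodes are handled in decreasing order of demand; each goes to the
    feasible substrate node with the most spare capacity, preferring closer
    nodes on ties. Returns the mapping, or None if the placement is infeasible.
    """
    cap = dict(substrate_cap)
    mapping = {}
    for v in sorted(virtual_demand, key=virtual_demand.get, reverse=True):
        candidates = [s for s in cap
                      if s not in mapping.values() and cap[s] >= virtual_demand[v]]
        if not candidates:
            return None                    # no substrate node can host this vnode
        best = max(candidates, key=lambda s: (cap[s], -dist[s]))
        mapping[v] = best
        cap[best] -= virtual_demand[v]     # consume the allocated resources
    return mapping
```

The link mapping phase would then route each virtual link over substrate paths between the chosen nodes, which is where the paper's distributed constraint optimization comes in.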

  1. Protein interaction network topology uncovers melanogenesis regulatory network components within functional genomics datasets.

    PubMed

    Ho, Hsiang; Milenković, Tijana; Memisević, Vesna; Aruri, Jayavani; Przulj, Natasa; Ganesan, Anand K

    2010-06-15

    RNA-mediated interference (RNAi)-based functional genomics is a systems-level approach to identify novel genes that control biological phenotypes. Existing computational approaches can identify individual genes from RNAi datasets that regulate a given biological process. However, currently available methods cannot identify which RNAi screen "hits" are novel components of well-characterized biological pathways known to regulate the interrogated phenotype. In this study, we describe a method to identify genes from RNAi datasets that are novel components of known biological pathways. We experimentally validate our approach in the context of a recently completed RNAi screen to identify novel regulators of melanogenesis. In this study, we utilize a PPI network topology-based approach to identify targets within our RNAi dataset that may be components of known melanogenesis regulatory pathways. Our computational approach identifies a set of screen targets that cluster topologically in a human PPI network with the known pigment regulator Endothelin receptor type B (EDNRB). Validation studies reveal that these genes impact pigment production and EDNRB signaling in pigmented melanoma cells (MNT-1) and normal melanocytes. We present an approach that identifies novel components of well-characterized biological pathways from functional genomics datasets that could not have been identified by existing statistical and computational approaches.

  2. Protein interaction network topology uncovers melanogenesis regulatory network components within functional genomics datasets

    PubMed Central

    2010-01-01

    Background RNA-mediated interference (RNAi)-based functional genomics is a systems-level approach to identify novel genes that control biological phenotypes. Existing computational approaches can identify individual genes from RNAi datasets that regulate a given biological process. However, currently available methods cannot identify which RNAi screen "hits" are novel components of well-characterized biological pathways known to regulate the interrogated phenotype. In this study, we describe a method to identify genes from RNAi datasets that are novel components of known biological pathways. We experimentally validate our approach in the context of a recently completed RNAi screen to identify novel regulators of melanogenesis. Results In this study, we utilize a PPI network topology-based approach to identify targets within our RNAi dataset that may be components of known melanogenesis regulatory pathways. Our computational approach identifies a set of screen targets that cluster topologically in a human PPI network with the known pigment regulator Endothelin receptor type B (EDNRB). Validation studies reveal that these genes impact pigment production and EDNRB signaling in pigmented melanoma cells (MNT-1) and normal melanocytes. Conclusions We present an approach that identifies novel components of well-characterized biological pathways from functional genomics datasets that could not have been identified by existing statistical and computational approaches. PMID:20550706

  3. Topological Vulnerability Evaluation Model Based on Fractal Dimension of Complex Networks.

    PubMed

    Gou, Li; Wei, Bo; Sadiq, Rehan; Sadiq, Yong; Deng, Yong

    2016-01-01

    With an increasing emphasis on network security, much more attention has been paid to the vulnerability of complex networks. In this paper, the fractal dimension, which reflects the space-filling capacity of networks, is redefined as the origin moment of the edge betweenness to obtain a more reasonable evaluation of vulnerability. The proposed model, combining multiple evaluation indexes, not only overcomes the failure of average edge betweenness to evaluate the vulnerability of some special networks, but also characterizes the topological structure and highlights the space-filling capacity of networks. Applications to six US airline networks illustrate the practicality and effectiveness of the proposed method, and comparisons with three other commonly used methods further validate its superiority.

  4. Artificial neural network prediction of aircraft aeroelastic behavior

    NASA Astrophysics Data System (ADS)

    Pesonen, Urpo Juhani

    An Artificial Neural Network that predicts aeroelastic behavior of aircraft is presented. The neural net was designed to predict the shape of a flexible wing in static flight conditions using results from a structural analysis and an aerodynamic analysis performed with traditional computational tools. To generate reliable training and testing data for the network, an aeroelastic analysis code using these tools as components was designed and validated. To demonstrate the advantages and reliability of Artificial Neural Networks, a network was also designed and trained to predict airfoil maximum lift at low Reynolds numbers where wind tunnel data was used for the training. Finally, a neural net was designed and trained to predict the static aeroelastic behavior of a wing without the need to iterate between the structural and aerodynamic solvers.

  5. Disease gene prioritization by integrating tissue-specific molecular networks using a robust multi-network model.

    PubMed

    Ni, Jingchao; Koyuturk, Mehmet; Tong, Hanghang; Haines, Jonathan; Xu, Rong; Zhang, Xiang

    2016-11-10

    recover true associations more accurately than other methods in terms of AUC values, and the performance differences are significant (paired t-test p-values less than 0.05). This validates the importance of integrating tissue-specific molecular networks for studying disease gene prioritization and shows the superiority of our network models and ranking algorithms for this purpose. The source code and datasets are available at http://nijingchao.github.io/CRstar/ .

  6. Revealing degree distribution of bursting neuron networks.

    PubMed

    Shen, Yu; Hou, Zhonghuai; Xin, Houwen

    2010-03-01

    We present a method to infer the degree distribution of a bursting neuron network from its dynamics. Burst synchronization (BS) of coupled Morris-Lecar neurons has been studied under the weak coupling condition. In the BS state, all the neurons start and end bursting almost simultaneously, while the spikes inside the burst are incoherent among the neurons. Interestingly, we find that the spike amplitude of a given neuron shows an excellent linear relationship with its degree, which makes it possible to estimate the degree distribution of the network by simple statistics of the spike amplitudes. We demonstrate the validity of this scheme on scale-free as well as small-world networks. The underlying mechanism of such a method is also briefly discussed.
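    The scheme rests on the reported linear amplitude-degree relationship: calibrate a line on neurons of known degree, then invert it for the rest. A sketch of that idea in pure Python (the explicit calibration set is an illustrative assumption; the paper infers the distribution from amplitude statistics):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def estimate_degrees(amplitudes, ref_degrees, ref_amplitudes):
    """Calibrate the amplitude-degree line on reference neurons with known
    degree, then invert it to estimate the degree behind each observed
    spike amplitude (rounded to the nearest integer)."""
    a, b = fit_line(ref_degrees, ref_amplitudes)  # amplitude = a*degree + b
    return [round((amp - b) / a) for amp in amplitudes]
```

A histogram of the estimated degrees then approximates the network's degree distribution, which is the quantity the method recovers.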

  7. Self-perceived Coparenting of Nonresident Fathers: Scale Development and Validation.

    PubMed

    Dyer, W Justin; Fagan, Jay; Kaufman, Rebecca; Pearson, Jessica; Cabrera, Natasha

    2017-11-16

    This study reports on the development and validation of the Fatherhood Research and Practice Network coparenting perceptions scale for nonresident fathers. Although other measures of coparenting have been developed, this is the first measure developed specifically for low-income, nonresident fathers. Focus groups were conducted to determine various aspects of coparenting. Based on this, a scale was created and administered to 542 nonresident fathers. Participants also responded to items used to examine convergent and predictive validity (i.e., parental responsibility, contact with the mother, father self-efficacy and satisfaction, child behavior problems, and contact and engagement with the child). Factor analyses and reliability tests revealed three distinct and reliable perceived coparenting factors: undermining, alliance, and gatekeeping. Validity tests suggest substantial overlap between the undermining and alliance factors, though undermining was uniquely related to child behavior problems. The alliance and gatekeeping factors showed strong convergent validity and evidence for predictive validity. Taken together, results suggest this relatively short measure (11 items) taps into three coparenting dimensions significantly predictive of aspects of individual and family life. © 2017 Family Process Institute.

  8. A permutation testing framework to compare groups of brain networks.

    PubMed

    Simpson, Sean L; Lyday, Robert G; Hayasaka, Satoru; Marsh, Anthony P; Laurienti, Paul J

    2013-01-01

    Brain network analyses have moved to the forefront of neuroimaging research over the last decade. However, methods for statistically comparing groups of networks have lagged behind. These comparisons have great appeal for researchers interested in gaining further insight into complex brain function and how it changes across different mental states and disease conditions. Current comparison approaches generally either rely on a summary metric or on mass-univariate nodal or edge-based comparisons that ignore the inherent topological properties of the network, yielding little power and failing to make network level comparisons. Gleaning deeper insights into normal and abnormal changes in complex brain function demands methods that take advantage of the wealth of data present in an entire brain network. Here we propose a permutation testing framework that allows comparing groups of networks while incorporating topological features inherent in each individual network. We validate our approach using simulated data with known group differences. We then apply the method to functional brain networks derived from fMRI data.
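    The core label-permutation machinery can be sketched on a scalar per-network metric; note that the authors' framework additionally incorporates topological features of each network, which this minimal version deliberately omits:

```python
import random

def permutation_test(group_a, group_b, n_perm=999, seed=42):
    """Two-sample permutation test on a per-network summary metric.

    Shuffles group labels to build the null distribution of the absolute
    difference in group means; returns the permutation p-value with the
    standard +1 correction.
    """
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    rng = random.Random(seed)
    na = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)   # random relabeling of networks into two groups
        diff = abs(sum(pooled[:na]) / na
                   - sum(pooled[na:]) / (len(pooled) - na))
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

For two clearly separated groups of, say, global-efficiency values, the p-value comes out small; for overlapping groups it does not, which is the behavior the simulated-data validation checks.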

  9. Reference method for detection of Pgp mediated multidrug resistance in human hematological malignancies: a method validated by the laboratories of the French Drug Resistance Network.

    PubMed

    Huet, S; Marie, J P; Gualde, N; Robert, J

    1998-12-15

    Multidrug resistance (MDR) associated with overexpression of the MDR1 gene and of its product, P-glycoprotein (Pgp), plays an important role in limiting cancer treatment efficacy. Many studies have investigated Pgp expression in clinical samples of hematological malignancies but failed to reach a definitive conclusion on its usefulness. One convenient method for fluorescent detection of Pgp in malignant cells is flow cytometry, which however gives variable results from one laboratory to another, partly due to the lack of a rigorously tested reference method. The purpose of this technical note is to describe each step of a reference flow cytometric method. Guidelines for sample handling, staining, and analysis have been established both for Pgp detection with monoclonal antibodies directed against extracellular epitopes (MRK16, UIC2 and 4E3), and for Pgp functional activity measurement with Rhodamine 123 as a fluorescent probe. Both methods have been validated on cultured cell lines and clinical samples by 12 laboratories of the French Drug Resistance Network. This cross-validated multicentric study points out steps that are crucial for the accuracy and reproducibility of the results, such as cell viability, data analysis, and data expression.

  10. Validation of Aura Data: Needs and Implementation

    NASA Astrophysics Data System (ADS)

    Froidevaux, L.; Douglass, A. R.; Schoeberl, M. R.; Hilsenrath, E.; Kinnison, D. E.; Kroon, M.; Sander, S. P.

    2003-12-01

    We describe the needs for validation of the Aura scientific data products expected in 2004 and for several years thereafter, as well as the implementation plan to fulfill these needs. Many profiles of stratospheric and tropospheric composition are expected from the combination of four instruments aboard Aura, along with column abundances and aerosol and cloud information. The Aura validation working group and the Aura Project have been developing programs and collaborations that are expected to lead to a significant number of validation activities after the Aura launch (in early 2004). Spatial and temporal variability in the lower stratosphere and troposphere present challenges to validation of Aura measurements even where cloud contamination effects can be minimized. Data from ground-based networks, balloons, and other satellites will contribute in a major way to Aura data validation. In addition, plans are in place to obtain correlative data for special conditions, such as profiles of O3 and NO2 in polluted areas. Several aircraft campaigns planned for the 2004-2007 time period will provide additional tropospheric and lower stratospheric validation opportunities for Aura; some atmospheric science goals will be addressed by the eventual combination of these data sets. A team of "Aura liaisons" will assist in the dissemination of information about the various correlative measurements expected in this timeframe, along with any needed protocols and agreements on data exchange and file formats. A data center is being established at the Goddard Space Flight Center to collect and distribute the various data files to be used in the validation of the Aura data.

  11. Intelligent QoS routing algorithm based on improved AODV protocol for Ad Hoc networks

    NASA Astrophysics Data System (ADS)

    Huibin, Liu; Jun, Zhang

    2016-04-01

    Mobile Ad Hoc Networks play an increasingly important part in disaster relief, military battlefields, and scientific exploration. However, routing difficulties are increasingly prominent due to their inherent structure. This paper proposes an improved cuckoo-search-based Ad hoc On-Demand Distance Vector routing protocol (CSAODV). It elaborates the calculation method of the optimal routing algorithm used by the protocol and the transmission mechanism for communication packets. In the route calculation by the cuckoo search algorithm, adding a QoS constraint ensures that the route found conforms to the specified bandwidth and delay requirements, and a balance can be obtained among computational cost, bandwidth, and delay. NS2 simulations test the protocol's performance in three scenarios and validate the feasibility and effectiveness of the CSAODV protocol. The results show that the CSAODV routing protocol adapts better to changes in network topology than the AODV protocol: it effectively improves the packet delivery fraction, reduces the transmission delay of the network, reduces the extra burden placed on the network by control information, and improves routing efficiency.

  12. A new model of the spinal locomotor networks of a salamander and its properties.

    PubMed

    Liu, Qiang; Yang, Huizhen; Zhang, Jinxue; Wang, Jingzhuo

    2018-05-22

    A salamander is an ideal animal for studying the spinal locomotor network mechanism of vertebrates from an evolutionary perspective since it represents the transition from an aquatic to a terrestrial animal. However, little is known about the spinal locomotor network of a salamander. A spinal locomotor network model is a useful tool for exploring the working mechanism of the spinal networks of salamanders. A new spinal locomotor network model for a salamander is built for a three-dimensional (3D) biomechanical model of the salamander using a novel locomotion-controlled neural network model. Based on recent experimental data on the spinal circuitry and observational results of gaits of vertebrates, we assume that different interneuron sets recruited for mediating the frequency of spinal circuits are also related to the generation of different gaits. The spinal locomotor networks of salamanders are divided into low-frequency networks for walking and high-frequency networks for swimming. Additionally, a new topological structure between the body networks and limb networks is built, which only uses the body networks to coordinate the motion of limbs. There are no direct synaptic connections among limb networks. These techniques differ from existing salamander spinal locomotor network models. A simulation is performed and analyzed to validate the properties of the new spinal locomotor networks of salamanders. The simulation results show that the new spinal locomotor networks can generate a forward walking gait, a backward walking gait, a swimming gait, and a turning gait during swimming and walking. These gaits can be switched smoothly by changing external inputs from the brainstem. These properties are consistent with those of a real salamander. However, it is still difficult for the new spinal locomotor networks to generate highly efficient turning during walking, 3D swimming, nonrhythmic movements, and so on. 
New experimental data are required for further validation.
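
    The body-network idea above can be illustrated with a toy chain of coupled phase oscillators, the standard abstraction behind classic salamander CPG models. This sketch is not the network proposed in the record; the coupling form, gains, and phase bias are illustrative assumptions. A nonzero inter-segment phase bias produces a travelling body wave (swimming-like), while zero bias produces a synchronous standing pattern.

```python
import math

def run_chain(n=10, bias=0.5, steps=20000, dt=0.001, k=5.0):
    """Chain of phase oscillators with nearest-neighbour coupling and a phase bias."""
    omega = 2 * math.pi                              # intrinsic frequency (1 Hz)
    phase = [0.1 * i * i % 1.0 for i in range(n)]    # arbitrary initial phases
    for _ in range(steps):
        new = []
        for i in range(n):
            dp = omega
            if i > 0:
                dp += k * math.sin(phase[i - 1] - phase[i] - bias)
            if i < n - 1:
                dp += k * math.sin(phase[i + 1] - phase[i] + bias)
            new.append(phase[i] + dp * dt)
        phase = new
    return [phase[i + 1] - phase[i] for i in range(n - 1)]  # locked phase lags

travelling = run_chain(bias=0.5)    # constant lag: swimming-like travelling wave
synchronous = run_chain(bias=0.0)   # zero lag: synchronous standing pattern
print([round(d, 3) for d in travelling])
print([round(d, 3) for d in synchronous])
```

    The chain locks to a uniform neighbour-to-neighbour lag equal to the coupling bias, so a single descending parameter switches the pattern, loosely mirroring gait switching by brainstem input.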

  13. Enhanced Handover Decision Algorithm in Heterogeneous Wireless Network

    PubMed Central

    Abdullah, Radhwan Mohamed; Zukarnain, Zuriati Ahmad

    2017-01-01

    Transferring large amounts of data between network locations depends on the traffic capacity and data rate of the network links. Traditionally, vertical handover decisions for a moving mobile device consider only one criterion, the received signal strength (RSS). Using a single criterion may cause service interruption, an unbalanced network load and inefficient vertical handover. In this paper, we propose an enhanced vertical handover decision algorithm based on multiple criteria in a heterogeneous wireless network. The algorithm spans three technology interfaces: Long-Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX) and Wireless Local Area Network (WLAN). It also employs three types of vertical handover decision algorithms: equal priority, mobile priority and network priority. The simulation results illustrate that all three decision algorithms outperform the traditional network decision algorithm in terms of handover number probability and handover failure probability. In addition, the network priority handover decision algorithm produces better results than the equal priority and mobile priority algorithms. Finally, the simulation results are validated by an analytical model. PMID:28708067
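
    The multi-criteria idea can be sketched as weighted scoring of candidate networks. The candidate values, criteria, and weight profiles below are entirely hypothetical; they only illustrate how equal-, mobile-, and network-priority weightings can select different handover targets, not the record's actual algorithm.

```python
# Hypothetical candidate networks scored on normalised criteria in [0, 1].
candidates = {
    "LTE":   {"rss": 0.7, "data_rate": 0.9, "load": 0.4},
    "WiMAX": {"rss": 0.6, "data_rate": 0.7, "load": 0.3},
    "WLAN":  {"rss": 0.9, "data_rate": 0.5, "load": 0.8},
}

# Three hypothetical weight profiles, loosely mirroring the equal-, mobile-,
# and network-priority variants named in the abstract.
profiles = {
    "equal":   {"rss": 1 / 3, "data_rate": 1 / 3, "load": 1 / 3},
    "mobile":  {"rss": 0.7, "data_rate": 0.2, "load": 0.1},   # signal first
    "network": {"rss": 0.1, "data_rate": 0.2, "load": 0.7},   # load first
}

def score(net, w):
    # Higher RSS and data rate are better; higher load is worse.
    return (w["rss"] * net["rss"] + w["data_rate"] * net["data_rate"]
            - w["load"] * net["load"])

decisions = {}
for name, w in profiles.items():
    decisions[name] = max(candidates, key=lambda c: score(candidates[c], w))
    print(f"{name:7s} priority -> hand over to {decisions[name]}")
```

    With these illustrative numbers, each weighting picks a different target, which is the motivation for going beyond RSS alone.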

  14. Benford's Law Applies to Online Social Networks.

    PubMed

    Golbeck, Jennifer

    2015-01-01

    Benford's Law states that, in naturally occurring systems, the frequency of numbers' first digits is not evenly distributed. Numbers beginning with a 1 occur roughly 30% of the time and are six times more common than numbers beginning with a 9. We show that Benford's Law applies to social and behavioral features of users in online social networks. Using social data from five major social networks (Facebook, Twitter, Google Plus, Pinterest, and LiveJournal), we show that the distributions of first significant digits of friend and follower counts for users in these systems follow Benford's Law. The same is true for the number of posts users make. We extend this to egocentric networks, showing that friend counts among the people in an individual's social network also follow the expected distribution. We discuss how this can be used to detect suspicious or fraudulent activity online and to validate datasets.
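
    A quick check of the first-digit law is easy to sketch. The synthetic "follower counts" below are log-uniform, a distribution known to satisfy Benford's Law; the helper names are our own, not from the record.

```python
import math
import random
from collections import Counter

def benford_expected(d):
    """Benford's Law: expected frequency of leading digit d."""
    return math.log10(1 + 1 / d)

def first_digit_distribution(values):
    """Observed first-digit frequencies for positive integers."""
    digits = Counter(int(str(v)[0]) for v in values if v > 0)
    total = sum(digits.values())
    return {d: digits.get(d, 0) / total for d in range(1, 10)}

# Synthetic "follower counts": log-uniform over six orders of magnitude.
random.seed(42)
followers = [int(10 ** random.uniform(0, 6)) for _ in range(50000)]
observed = first_digit_distribution(followers)

for d in (1, 9):
    print(f"digit {d}: observed {observed[d]:.3f}  expected {benford_expected(d):.3f}")
```

    Comparing observed frequencies against `benford_expected` is the core of the fraud-detection use the record describes: real friend or follower counts should track the law, while fabricated counts typically do not.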

  15. Modeling integrated cellular machinery using hybrid Petri-Boolean networks.

    PubMed

    Berestovsky, Natalie; Zhou, Wanding; Nagrath, Deepak; Nakhleh, Luay

    2013-01-01

    The behavior and phenotypic changes of cells are governed by a cellular circuitry that represents a set of biochemical reactions. Based on biological function, this circuitry is divided into three types of networks, each encoding a major biological process: signal transduction, transcription regulation, and metabolism. This division has generally made it possible to tame the computational complexity of dealing with the entire system, allowed the use of modeling techniques specific to each component, and achieved separation of the different time scales at which reactions in each of the three networks occur. Nonetheless, with this division comes a loss of the information and power needed to elucidate certain cellular phenomena. Within the cell, these three types of networks work in tandem, and each produces signals and/or substances that are used by the others to process information and operate normally. Therefore, computational techniques for modeling integrated cellular machinery are needed. In this work, we propose an integrated hybrid model (IHM) that combines Petri nets and Boolean networks to model integrated cellular networks. Coupled with a stochastic simulation mechanism, the model simulates the dynamics of the integrated network and can be perturbed to generate testable hypotheses. Our model is qualitative, is mostly built upon knowledge from the literature, and requires fine-tuning of very few parameters. We validated our model on two systems: the transcriptional regulation of glucose metabolism in human cells, and cellular osmoregulation in S. cerevisiae. The model produced results that are in very good agreement with experimental data, and produces valid hypotheses. The abstract nature of our model and the ease of its construction make it a very good candidate for modeling integrated networks from qualitative data. The results it produces can guide the practitioner to zoom into components and interconnections and investigate them in finer detail.

  16. Adaptive Optimization of Aircraft Engine Performance Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Long, Theresa W.

    1995-01-01

    Preliminary results are presented on the development of an adaptive neural-network-based control algorithm to enhance aircraft engine performance. This work builds upon a previous National Aeronautics and Space Administration (NASA) effort known as Performance Seeking Control (PSC). PSC is an adaptive control algorithm containing a model of the aircraft's propulsion system which is updated on-line to match the operation of the actual propulsion system. Information from the on-line model is used to adapt the control system during flight, allowing optimal operation of the propulsion system (inlet, engine, and nozzle) and improved engine performance without compromising reliability or operability. Performance Seeking Control has been shown to yield reductions in fuel flow, increases in thrust, and reductions in engine fan turbine inlet temperature. The neural-network-based adaptive control, like PSC, will contain a model of the propulsion system which will be used to calculate optimal control commands on-line, and it is hoped to provide benefits beyond those of PSC. The PSC algorithm is computationally intensive, is valid only at near-steady-state flight conditions, and has no way to adapt or learn on-line. These issues are being addressed in the development of the optimal neural controller: specialized neural-network processing hardware is being developed to run the software, the algorithm will be valid at steady-state and transient conditions, and it will take advantage of the on-line learning capability of neural networks. Future plans include testing the neural network software and hardware prototype against an aircraft engine simulation. In this paper, the proposed neural network software and hardware are described and preliminary neural network training results are presented.

  17. Modeling Integrated Cellular Machinery Using Hybrid Petri-Boolean Networks

    PubMed Central

    Berestovsky, Natalie; Zhou, Wanding; Nagrath, Deepak; Nakhleh, Luay

    2013-01-01

    The behavior and phenotypic changes of cells are governed by a cellular circuitry that represents a set of biochemical reactions. Based on biological function, this circuitry is divided into three types of networks, each encoding a major biological process: signal transduction, transcription regulation, and metabolism. This division has generally made it possible to tame the computational complexity of dealing with the entire system, allowed the use of modeling techniques specific to each component, and achieved separation of the different time scales at which reactions in each of the three networks occur. Nonetheless, with this division comes a loss of the information and power needed to elucidate certain cellular phenomena. Within the cell, these three types of networks work in tandem, and each produces signals and/or substances that are used by the others to process information and operate normally. Therefore, computational techniques for modeling integrated cellular machinery are needed. In this work, we propose an integrated hybrid model (IHM) that combines Petri nets and Boolean networks to model integrated cellular networks. Coupled with a stochastic simulation mechanism, the model simulates the dynamics of the integrated network and can be perturbed to generate testable hypotheses. Our model is qualitative, is mostly built upon knowledge from the literature, and requires fine-tuning of very few parameters. We validated our model on two systems: the transcriptional regulation of glucose metabolism in human cells, and cellular osmoregulation in S. cerevisiae. The model produced results that are in very good agreement with experimental data, and produces valid hypotheses. The abstract nature of our model and the ease of its construction make it a very good candidate for modeling integrated networks from qualitative data. The results it produces can guide the practitioner to zoom into components and interconnections and investigate them in finer detail.

  18. Reaction Time and Self-Report Psychopathological Assessment: Convergent and Discriminant Validity.

    ERIC Educational Resources Information Center

    Holden, Ronald R.; Fekken, G. Cynthia

    The processing of incoming psychological information along the network, or schemata, of self-knowledge was studied to determine the convergent and discriminant validity of the patterns of schemata-specific response latencies. Fifty-three female and 52 male university students completed the Basic Personality Inventory (BPI). BPI scales assess…

  19. Adaptive neural network motion control of manipulators with experimental evaluations.

    PubMed

    Puga-Guzmán, S; Moreno-Valenzuela, J; Santibáñez, V

    2014-01-01

    A nonlinear proportional-derivative controller plus adaptive neural network compensation is proposed. With the aim of estimating the desired torque, a two-layer neural network is used, and adaptation laws for the neural network weights are derived. Asymptotic convergence of the position and velocity tracking errors is proven, while the neural network weights are shown to be uniformly bounded. The proposed scheme has been experimentally validated in real time. These experimental evaluations were carried out on two different mechanical systems: a horizontal two-degree-of-freedom robot and a vertical one-degree-of-freedom arm that is affected by gravity. On each of the two experimental set-ups, the proposed scheme was implemented both without and with adaptive neural network compensation. Experimental results confirmed the tracking accuracy of the proposed adaptive neural-network-based controller.
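
    The control structure described above, a PD term plus an adaptive compensation term, can be sketched on a one-degree-of-freedom arm under gravity. A single adaptive gain stands in for the two-layer neural network, and all plant parameters and gains are illustrative assumptions, not values from the record.

```python
import math

# One-DOF arm: q'' = tau - g*sin(q) - b*q'  (unit mass and length, illustrative)
def simulate(adaptive, T=20.0, dt=0.002):
    g, b = 9.81, 0.5
    kp, kd, gamma = 20.0, 5.0, 10.0      # PD gains and adaptation gain
    qd = 1.0                             # desired joint angle (rad)
    q = dq = theta = 0.0                 # theta: adaptive gravity estimate
    for _ in range(int(T / dt)):
        e = q - qd
        tau = -kp * e - kd * dq + theta * math.sin(q)
        if adaptive:
            theta += -gamma * math.sin(q) * e * dt   # gradient adaptation law
        ddq = tau - g * math.sin(q) - b * dq
        dq += ddq * dt
        q += dq * dt
    return abs(q - qd)

err_pd = simulate(adaptive=False)   # PD alone: steady-state gravity droop
err_ad = simulate(adaptive=True)    # adaptation cancels the gravity term
print(f"|error| PD only: {err_pd:.3f}   PD + adaptation: {err_ad:.4f}")
```

    The comparison mirrors the record's with/without-compensation experiments: PD alone leaves a steady-state error under gravity, which the adaptive term drives toward zero.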

  20. Adaptive Neural Network Motion Control of Manipulators with Experimental Evaluations

    PubMed Central

    Puga-Guzmán, S.; Moreno-Valenzuela, J.; Santibáñez, V.

    2014-01-01

    A nonlinear proportional-derivative controller plus adaptive neural network compensation is proposed. With the aim of estimating the desired torque, a two-layer neural network is used, and adaptation laws for the neural network weights are derived. Asymptotic convergence of the position and velocity tracking errors is proven, while the neural network weights are shown to be uniformly bounded. The proposed scheme has been experimentally validated in real time. These experimental evaluations were carried out on two different mechanical systems: a horizontal two-degree-of-freedom robot and a vertical one-degree-of-freedom arm that is affected by gravity. On each of the two experimental set-ups, the proposed scheme was implemented both without and with adaptive neural network compensation. Experimental results confirmed the tracking accuracy of the proposed adaptive neural-network-based controller. PMID:24574910

  1. Estimation of Global Network Statistics from Incomplete Data

    PubMed Central

    Bliss, Catherine A.; Danforth, Christopher M.; Dodds, Peter Sheridan

    2014-01-01

    Complex networks underlie an enormous variety of social, biological, physical, and virtual systems. A profound complication for the science of complex networks is that in most cases, observing all nodes and all network interactions is impossible. Previous work addressing the impacts of partial network data is surprisingly limited, focuses primarily on missing nodes, and suggests that network statistics derived from subsampled data are not suitable estimators for the same network statistics describing the overall network topology. We develop scaling methods to predict true network statistics, including the degree distribution, from only partial knowledge of nodes, links, or weights. Our methods are transparent and do not assume a known generating process for the network, thus enabling prediction of network statistics for a wide variety of applications. We validate analytical results on four simulated network classes and empirical data sets of various sizes. We perform subsampling experiments by varying proportions of sampled data and demonstrate that our scaling methods can provide very good estimates of true network statistics while acknowledging their limits. Lastly, we apply our techniques to a set of rich and evolving large-scale social networks, Twitter reply networks. Based on 100 million tweets, we use our scaling techniques to propose a statistical characterization of the Twitter Interactome from September 2008 to November 2008. Our treatment allows us to find support for Dunbar's hypothesis in detecting an upper threshold for the number of active social contacts that individuals maintain over the course of one week. PMID:25338183
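
    A minimal illustration of the subsampling problem, and of a scaling correction, is easy to set up: under node sampling with probability q, the induced subgraph's mean degree is biased low by roughly a factor of q, and dividing by q recovers the truth. This simple 1/q rescaling is an illustrative stand-in, not the record's estimators.

```python
import random
random.seed(0)

# Erdos-Renyi-style random graph: n nodes, each pair linked with prob p_edge.
n, p_edge = 1000, 0.01
edges = {(i, j) for i in range(n) for j in range(i + 1, n)
         if random.random() < p_edge}

def mean_degree(node_set, edge_set):
    """Mean degree of the subgraph induced by node_set."""
    ends = 2 * sum(1 for (i, j) in edge_set if i in node_set and j in node_set)
    return ends / len(node_set)

true_mean = mean_degree(set(range(n)), edges)

# Observe only an induced subgraph: keep each node with probability q.
q = 0.3
sampled = {i for i in range(n) if random.random() < q}
naive = mean_degree(sampled, edges)   # biased low: a neighbour survives w.p. q
rescaled = naive / q                  # simple scaling correction

print(f"true {true_mean:.2f}  naive {naive:.2f}  rescaled {rescaled:.2f}")
```

    The naive subgraph statistic badly underestimates the true mean degree, which is the failure mode the record describes; the rescaled value lands close to the truth.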

  2. Composability-Centered Convolutional Neural Network Pruning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Xipeng; Guan, Hui; Lim, Seung-Hwan

    This work studies the composability of the building blocks of structural CNN models (e.g., GoogLeNet and Residual Networks) in the context of network pruning. We empirically validate that a network composed of pre-trained building blocks (e.g., residual blocks and Inception modules) not only gives a better initial setting for training, but also allows the training process to converge at a significantly higher accuracy in much less time. Based on that insight, we propose a composability-centered design for CNN network pruning. Experiments show that this new scheme shortens the configuration process in CNN network pruning by up to 186.8X for ResNet-50 and up to 30.2X for Inception-V3; meanwhile, the models it finds that meet the accuracy requirement are significantly more compact than those found by default schemes.

  3. State feedback control design for Boolean networks.

    PubMed

    Liu, Rongjie; Qian, Chunjiang; Liu, Shuqian; Jin, Yu-Fang

    2016-08-26

    Driving Boolean networks to desired states is of paramount significance toward our ultimate goal of controlling the progression of biological pathways and regulatory networks. Despite recent computational developments on the controllability of general complex networks and the structural controllability of Boolean networks, the mathematical conditions for controllability have yet to be connected to the actual Boolean operations in a network. Further, no real-time control strategy has been proposed to drive a Boolean network. In this study, we applied the semi-tensor product to represent Boolean functions in a network and explored the controllability of a Boolean network based on the transition matrix and the time transition diagram. We determined the necessary and sufficient condition for a controllable Boolean network and mapped this requirement on the transition matrix to real Boolean functions and structural properties of the network. An efficient tool is offered to assess the controllability of an arbitrary Boolean network and to determine all reachable and non-reachable states. We found the six simplest forms of controllable 2-node Boolean networks and explored the consistency of transition matrices while extending these six forms to controllable networks with more nodes. Importantly, we proposed the first state feedback control strategy to drive the network based on the status of all nodes in the network. Finally, we applied our reachability condition to the major switch of the P53 pathway to predict the progression of the pathway and validated the prediction against published experimental results. This strategy allowed us to apply real-time control to drive Boolean networks, which could not be achieved by previous control strategies for Boolean networks. Our results enable a more comprehensive understanding of the evolution of Boolean networks and might be extended to output feedback control design.
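
    Reachability under control, which underlies the controllability condition described above, can be sketched for a small Boolean network. The two update rules below are hypothetical, and the check enumerates the state-transition graph directly rather than using the semi-tensor product machinery.

```python
from itertools import product

# Hypothetical 2-node Boolean network with one binary control input u:
#   x1(t+1) = u XOR x2(t)
#   x2(t+1) = x1(t)
def step(state, u):
    x1, x2 = state
    return (u ^ x2, x1)

states = list(product([0, 1], repeat=2))

def reachable_from(start):
    """All states reachable from `start` under some control sequence."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for u in (0, 1):
            t = step(s, u)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Controllable: every state can be driven to every other state.
controllable = all(reachable_from(s) == set(states) for s in states)
print("controllable:", controllable)
```

    For larger networks this enumeration grows as 2^n, which is why compact algebraic representations such as the semi-tensor product are attractive.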

  4. Validation of spatial variability in downscaling results from the VALUE perfect predictor experiment

    NASA Astrophysics Data System (ADS)

    Widmann, Martin; Bedia, Joaquin; Gutiérrez, Jose Manuel; Maraun, Douglas; Huth, Radan; Fischer, Andreas; Keller, Denise; Hertig, Elke; Vrac, Mathieu; Wibig, Joanna; Pagé, Christian; Cardoso, Rita M.; Soares, Pedro MM; Bosshard, Thomas; Casado, Maria Jesus; Ramos, Petra

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. Within VALUE, a systematic validation framework has been developed to enable the assessment and comparison of both dynamical and statistical downscaling methods. In the first validation experiment, the downscaling methods are validated in a setup with perfect predictors taken from the ERA-Interim reanalysis for the period 1997-2008. This allows the isolated skill of downscaling methods to be investigated without further error contributions from the large-scale predictors. One aspect of the validation is the representation of spatial variability. As part of the VALUE validation, we have compared various properties of the spatial variability of downscaled daily temperature and precipitation with the corresponding properties in observations. We have used two validation datasets: one Europe-wide set of 86 stations, and one higher-density network of 50 stations in Germany. Here we present results based on three approaches, namely the analysis of i) correlation matrices, ii) pairwise joint threshold exceedances, and iii) regions of similar variability. We summarise the information contained in correlation matrices by calculating the dependence of the correlations on distance and deriving decorrelation lengths, as well as by determining the independent degrees of freedom. Probabilities of joint threshold exceedances and (where appropriate) non-exceedances are calculated for various user-relevant thresholds, related for instance to extreme precipitation or frost and heat days. The dependence of these probabilities on distance is again characterised by calculating typical length scales that separate dependent from independent exceedances. Regionalisation is based on rotated principal component analysis. The results indicate which downscaling methods are preferable if the dependency of variability at different locations is relevant for the user.
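
    The decorrelation-length diagnostic can be sketched on synthetic station data: stations whose series are moving averages of shared white noise have correlations that decay with separation, and the decorrelation length is the smallest separation at which the mean correlation drops below 1/e. The construction and the 1/e threshold convention are illustrative assumptions, not the VALUE implementation.

```python
import math
import random
random.seed(3)

T, n_sta, w = 2000, 20, 10          # time steps, stations, smoothing window
noise = [[random.gauss(0.0, 1.0) for _ in range(n_sta + w)] for _ in range(T)]
# Station i's series is a moving average of shared noise, so correlation
# decays (linearly here) with station separation.
series = [[sum(noise[t][i:i + w]) for t in range(T)] for i in range(n_sta)]

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Mean correlation as a function of station separation d.
mean_corr = {d: sum(corr(series[i], series[i + d]) for i in range(n_sta - d))
                / (n_sta - d)
             for d in range(1, n_sta)}

# Decorrelation length: smallest separation with mean correlation below 1/e.
decorr = min(d for d in mean_corr if mean_corr[d] < 1 / math.e)
print("estimated decorrelation length:", decorr)
```

    Applying the same estimator to observed and downscaled fields, as VALUE does, reveals whether a method reproduces the spatial dependence structure.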

  5. Neural network cloud top pressure and height for MODIS

    NASA Astrophysics Data System (ADS)

    Håkansson, Nina; Adok, Claudia; Thoss, Anke; Scheirer, Ronald; Hörnquist, Sara

    2018-06-01

    to give the most useful information on the spread of the errors. For all descriptive statistics presented (MAE (mean absolute error), IQR (interquartile range), RMSE (root mean square error), SD (standard deviation), mode, median, bias, and the percentage of absolute errors above 0.25, 0.5, 1 and 2 km), the neural network performs better than the reference algorithms when validated with both CALIOP and CPR (CloudSat). The neural networks using the brightness temperatures at 11 and 12 µm show at least 32 % (or 623 m) lower MAE than the two operational reference algorithms when validated against CALIOP height. Validation against CPR (CloudSat) height gives at least a 25 % (or 430 m) reduction in MAE.

  6. Retinal Connectomics: Towards Complete, Accurate Networks

    PubMed Central

    Marc, Robert E.; Jones, Bryan W.; Watt, Carl B.; Anderson, James R.; Sigulinsky, Crystal; Lauritzen, Scott

    2013-01-01

    Connectomics is a strategy for mapping complex neural networks based on high-speed automated electron optical imaging, computational assembly of neural data volumes, web-based navigational tools to explore 10^12–10^15 byte (terabyte to petabyte) image volumes, and annotation and markup tools to convert images into rich networks with cellular metadata. These collections of network data and associated metadata, analyzed using tools from graph theory and classification theory, can be merged with classical systems theory, giving a more completely parameterized view of how biologic information processing systems are implemented in retina and brain. Networks have two separable features: topology and connection attributes. The first findings from connectomics strongly validate the idea that the topologies of complete retinal networks are far more complex than the simple schematics that emerged from classical anatomy. In particular, connectomics has permitted an aggressive refactoring of the retinal inner plexiform layer, demonstrating that network function cannot be simply inferred from stratification; exposing the complex geometric rules for inserting different cells into a shared network; revealing unexpected bidirectional signaling pathways between mammalian rod and cone systems; and documenting selective feedforward systems, novel candidate signaling architectures, new coupling motifs, and the highly complex architecture of the mammalian AII amacrine cell. This is but the beginning, as the underlying principles of connectomics are readily transferable to non-neural cell complexes and provide new contexts for assessing intercellular communication. PMID:24016532

  7. Mechanisms of complex network growth: Synthesis of the preferential attachment and fitness models

    NASA Astrophysics Data System (ADS)

    Golosovsky, Michael

    2018-06-01

    We analyze growth mechanisms of complex networks and focus on their validation by measurements. To this end we consider the equation ΔK = A(t)(K + K0)Δt, where K is the node's degree, ΔK is its increment, A(t) is the aging constant, and K0 is the initial attractivity. This equation has been commonly used to validate the preferential attachment mechanism. We show that this equation is undiscriminating and holds for the fitness model [Caldarelli et al., Phys. Rev. Lett. 89, 258702 (2002), 10.1103/PhysRevLett.89.258702] as well. In other words, the accepted method of validating the microscopic mechanism of network growth does not discriminate between "rich-gets-richer" and "good-gets-richer" scenarios. This means that the growth mechanism of many natural complex networks can be based on the fitness model rather than on preferential attachment, as was previously believed. The fitness model yields the long-sought explanation for the initial attractivity K0, an elusive parameter left unexplained within the framework of the preferential attachment model. We show that the initial attractivity is determined by the width of the fitness distribution. We also present a network growth model based on recursive search with memory and show that this model contains both the preferential attachment and the fitness models as extreme cases.
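
    The ΔK = A(t)(K + K0)Δt relation is easy to reproduce in a toy preferential-attachment simulation; increments grow linearly in (K + K0), but the same signature would also arise from a fitness mechanism, which is exactly the record's point. All parameters are illustrative.

```python
import random
random.seed(1)

K0 = 1.0           # initial attractivity in dK = A(t) (K + K0) dt
deg = [1, 1]       # two seed nodes joined by an edge

def attach():
    """Add a node linking to an existing node chosen with prob proportional to (K + K0)."""
    i = random.choices(range(len(deg)), weights=[k + K0 for k in deg])[0]
    deg[i] += 1
    deg.append(1)

for _ in range(2000):
    attach()
snapshot = list(deg)                      # degrees K at the snapshot time
for _ in range(2000):
    attach()
increment = [deg[i] - snapshot[i] for i in range(len(snapshot))]

# Mean increment over the window grows with (K + K0) at the snapshot.
lo = [increment[i] for i, k in enumerate(snapshot) if k <= 2]
hi = [increment[i] for i, k in enumerate(snapshot) if k >= 10]
print(f"mean increment for K<=2 nodes:  {sum(lo) / len(lo):.2f}")
print(f"mean increment for K>=10 nodes: {sum(hi) / len(hi):.2f}")
```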

  8. The graph neural network model.

    PubMed

    Scarselli, Franco; Gori, Marco; Tsoi, Ah Chung; Hagenbuchner, Markus; Monfardini, Gabriele

    2009-01-01

    Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function τ(G, n) ∈ R^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.
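
    The fixed-point flavour of τ(G, n) can be sketched with a scalar message-passing iteration whose update is a contraction, followed by a simple readout. The update rule, coupling constant, and readout below are fixed illustrative values, not a trained GNN.

```python
# Contractive message passing: each node state is repeatedly updated from its
# neighbours (alpha times max degree < 1 guarantees a unique fixed point),
# then a readout maps the converged state of node n into R^m.
def tau(adj, n, m=2, alpha=0.3, iters=100):
    """Map node n of graph `adj` (node -> list of neighbours) to an m-vector."""
    x = {v: 0.0 for v in adj}
    for _ in range(iters):
        x = {v: alpha * sum(x[u] for u in adj[v]) + 1.0 for v in adj}
    return [x[n] ** (k + 1) for k in range(m)]   # simple polynomial readout

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}

print(tau(triangle, 0))   # symmetric nodes get identical embeddings
print(tau(path, 1))       # the centre of a path differs from its endpoints
```

    In the actual GNN model both the state transition and the readout are learned networks; the contraction property is what makes the node embedding well defined.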

  9. Network-based stochastic semisupervised learning.

    PubMed

    Silva, Thiago Christiano; Zhao, Liang

    2012-03-01

    Semisupervised learning is a machine learning approach that is able to employ both labeled and unlabeled samples in the training process. In this paper, we propose a semisupervised data classification model based on a combined random-preferential walk of particles in a network (graph) constructed from the input dataset. The particles of the same class cooperate among themselves, while the particles of different classes compete with each other to propagate class labels to the whole network. A rigorous model definition is provided via a nonlinear stochastic dynamical system and a mathematical analysis of its behavior is carried out. A numerical validation presented in this paper confirms the theoretical predictions. An interesting feature brought by the competitive-cooperative mechanism is that the proposed model can achieve good classification rates while exhibiting low computational complexity order in comparison to other network-based semisupervised algorithms. Computer simulations conducted on synthetic and real-world datasets reveal the effectiveness of the model.
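
    As a much simpler relative of the particle competition model, plain majority-vote label propagation on a graph conveys the core idea of labels spreading from a few seeds through network structure. The graph and seeds below are hypothetical.

```python
from collections import Counter

def propagate(adj, seeds, iters=50):
    """Majority-vote label propagation; `seeds` maps labelled nodes to classes."""
    current = dict(seeds)
    for _ in range(iters):
        nxt = dict(current)
        for v in adj:
            if v in seeds:                      # seed labels stay fixed
                continue
            votes = Counter(current[u] for u in adj[v] if u in current)
            if votes:
                nxt[v] = votes.most_common(1)[0][0]
        current = nxt
    return current

# Two 4-cliques joined by a single bridge edge (3-4); one seed per community.
adj = {
    0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
    4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6],
}
result = propagate(adj, {0: "A", 7: "B"})
print(result)
```

    The competitive-cooperative particle mechanism of the record plays the same role as the majority vote here, but with stochastic walk dynamics and lower complexity claims.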

  10. “Guilt by Association” Is the Exception Rather Than the Rule in Gene Networks

    PubMed Central

    Gillis, Jesse; Pavlidis, Paul

    2012-01-01

    Gene networks are commonly interpreted as encoding functional information in their connections. An extensively validated principle called guilt by association states that genes which are associated or interacting are more likely to share function. Guilt by association provides the central top-down principle for analyzing gene networks in functional terms or assessing their quality in encoding functional information. In this work, we show that functional information within gene networks is typically concentrated in only a very few interactions whose properties cannot be reliably related to the rest of the network. In effect, the apparent encoding of function within networks has been largely driven by outliers whose behaviour cannot even be generalized to individual genes, let alone to the network at large. While experimentalist-driven analysis of interactions may use prior expert knowledge to focus on the small fraction of critically important data, large-scale computational analyses have typically assumed that high-performance cross-validation in a network is due to a generalizable encoding of function. Because we find that gene function is not systemically encoded in networks, but dependent on specific and critical interactions, we conclude it is necessary to focus on the details of how networks encode function and what information computational analyses use to extract functional meaning. We explore a number of consequences of this and find that network structure itself provides clues as to which connections are critical and that systemic properties, such as scale-free-like behaviour, do not map onto the functional connectivity within networks. PMID:22479173

  11. Wireless Networks under a Backoff Attack: A Game Theoretical Perspective

    PubMed Central

    Parras, Juan; Zazo, Santiago

    2018-01-01

    We study a wireless sensor network using CSMA/CA in the MAC layer under a backoff attack: some of the sensors of the network are malicious and deviate from the defined contention mechanism. We use Bianchi’s network model to study the impact of the malicious sensors on the total network throughput, showing that it causes the throughput to be unfairly distributed among sensors. We model this conflict using game theory tools, where each sensor is a player. We obtain analytical solutions and propose an algorithm, based on Regret Matching, to learn the equilibrium of the game with an arbitrary number of players. Our approach is validated via simulations, showing that our theoretical predictions adjust to reality. PMID:29385752

  12. Wireless Networks under a Backoff Attack: A Game Theoretical Perspective.

    PubMed

    Parras, Juan; Zazo, Santiago

    2018-01-30

    We study a wireless sensor network using CSMA/CA in the MAC layer under a backoff attack: some of the sensors of the network are malicious and deviate from the defined contention mechanism. We use Bianchi's network model to study the impact of the malicious sensors on the total network throughput, showing that it causes the throughput to be unfairly distributed among sensors. We model this conflict using game theory tools, where each sensor is a player. We obtain analytical solutions and propose an algorithm, based on Regret Matching, to learn the equilibrium of the game with an arbitrary number of players. Our approach is validated via simulations, showing that our theoretical predictions adjust to reality.
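
    The Regret Matching rule mentioned above can be sketched on a toy two-sensor "backoff game". The prisoner's-dilemma-style payoffs are an illustrative stand-in for Bianchi-model throughputs: deviating from the contention mechanism is individually tempting, but mutual deviation collapses throughput.

```python
import random
random.seed(7)

# payoff[my_action][opponent_action]; action 0 = obey contention, 1 = deviate.
payoff = [[3, 0], [5, 1]]

def play(regret):
    """Regret matching: mix over actions in proportion to positive regret."""
    pos = [max(r, 0.0) for r in regret]
    s = sum(pos)
    if s == 0:
        return random.randrange(2)
    return 0 if random.random() < pos[0] / s else 1

regrets = [[0.0, 0.0], [0.0, 0.0]]
history = []
for _ in range(3000):
    a = [play(regrets[0]), play(regrets[1])]
    history.append(tuple(a))
    for p in (0, 1):
        me, opp = a[p], a[1 - p]
        for alt in (0, 1):
            # Regret for action alt: what alt would have earned minus what I got.
            regrets[p][alt] += payoff[alt][opp] - payoff[me][opp]

tail = history[1500:]
freq_deviate = sum(1 for pair in tail if pair == (1, 1)) / len(tail)
print(f"late-play frequency of mutual deviation: {freq_deviate:.2f}")
```

    With these payoffs, play converges to the unique equilibrium of mutual deviation, illustrating why an unchecked backoff attack is individually rational even though it degrades the network.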

  13. Is Ecosystem-Atmosphere Observation in Long-Term Networks actually Science?

    NASA Astrophysics Data System (ADS)

    Schmid, H. P. E.

    2015-12-01

    Science uses observations to build knowledge through testable explanations and predictions. The "scientific method" requires controlled systematic observation to examine questions, hypotheses and predictions; thus, enquiry along the scientific method responds to questions of the type "what if …?" In contrast, long-term observation programs follow a different strategy: we commonly take great care to minimize our influence on the environment of our measurements, with the aim of maximizing their external validity. We observe what we think are key variables for ecosystem-atmosphere exchange and ask questions such as "what happens next?" or "how did this happen?" This apparent deviation from the scientific method raises the question of whether the explanations we arrive at for the phenomena we observe actually contribute to testable knowledge, or whether their value remains purely anecdotal. Here, we present examples to argue that, under certain conditions, data from long-term observations and observation networks can have equivalent or even higher scientific validity than controlled experiments. Internal validity is particularly enhanced if observations are combined with modeling. Long-term observations of ecosystem-atmosphere fluxes identify trends and temporal scales of variability; observation networks reveal spatial patterns and variations; and long-term observation networks combine both aspects. A necessary condition for such observations to gain validity beyond the anecdotal is that the data are comparable: a comparison of two measured values, separated in time or space, must inform us objectively whether (e.g.) one value is larger than the other. In turn, a necessary condition for the comparability of data is the compatibility of the sensors and procedures used to generate them. Compatibility ensures that we compare "apples to apples": that measurements conducted in identical conditions give the same values (within suitable uncertainty intervals).

  14. Dynamic social networks based on movement

    USGS Publications Warehouse

    Scharf, Henry; Hooten, Mevin B.; Fosdick, Bailey K.; Johnson, Devin S.; London, Joshua M.; Durban, John W.

    2016-01-01

    Network modeling techniques provide a means for quantifying social structure in populations of individuals. Data used to define social connectivity are often expensive to collect and based on case-specific, ad hoc criteria. Moreover, in applications involving animal social networks, collection of these data is often opportunistic and can be invasive. Frequently, the social network of interest for a given population is closely related to the way individuals move. Thus, telemetry data, which are minimally invasive and relatively inexpensive to collect, present an alternative source of information. We develop a framework for using telemetry data to infer social relationships among animals. To achieve this, we propose a Bayesian hierarchical model with an underlying dynamic social network controlling movement of individuals via two mechanisms: an attractive effect and an aligning effect. We demonstrate the model and its ability to accurately identify complex social behavior in simulation, and apply our model to telemetry data arising from killer whales. Using auxiliary information about the study population, we investigate model validity and find the inferred dynamic social network is consistent with killer whale ecology and expert knowledge.

  15. Multilevel Hierarchical Kernel Spectral Clustering for Real-Life Large Scale Complex Networks

    PubMed Central

    Mall, Raghvendra; Langone, Rocco; Suykens, Johan A. K.

    2014-01-01

    Kernel spectral clustering (KSC) corresponds to a weighted kernel principal component analysis problem in a constrained optimization framework. The primal formulation leads to an eigen-decomposition of a centered Laplacian matrix at the dual level. The dual formulation makes it possible to build a model on a representative subgraph of the large-scale network in the training phase, with the model parameters estimated in the validation stage. The KSC model has a powerful out-of-sample extension property which allows cluster affiliation to be determined for unseen nodes of the full network. In this paper we exploit the structure of the projections in the eigenspace during the validation stage to automatically determine a set of increasing distance thresholds. We use these distance thresholds in the test phase to obtain multiple levels of hierarchy for the large-scale network. The hierarchical structure in the network is determined in a bottom-up fashion. We empirically showcase that real-world networks have a multilevel hierarchical organization which cannot be detected efficiently by several state-of-the-art large-scale hierarchical community detection techniques such as the Louvain, OSLOM and Infomap methods. We show that a major advantage of our proposed approach is the ability to locate good-quality clusters at both the finer and coarser levels of hierarchy using internal cluster quality metrics on 7 real-life networks. PMID:24949877
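    The core idea of sweeping increasing distance thresholds over points in an eigenspace to expose a cluster hierarchy can be sketched as follows. The single-linkage grouping and the toy 2-D points stand in for the KSC eigenspace projections; this is an illustration of the thresholding idea, not the published algorithm.

```python
import numpy as np

# Merge points whose eigenspace distance falls below a threshold t;
# sweeping increasing t yields coarser and coarser cluster levels.
def clusters_at_threshold(X, t):
    """Number of single-linkage groups among rows of X at threshold t."""
    n = len(X)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(X[i] - X[j]) < t:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0], [5.0, 5.0]])
for t in (0.2, 2.0, 10.0):
    print(t, clusters_at_threshold(X, t))  # 3, then 2, then 1 clusters
```

The nesting is automatic: any group formed at a small threshold is contained in a group formed at a larger one, which is what makes the bottom-up hierarchy well defined.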

  16. Inferring personal economic status from social network location

    NASA Astrophysics Data System (ADS)

    Luo, Shaojun; Morone, Flaviano; Sarraute, Carlos; Travizano, Matías; Makse, Hernán A.

    2017-05-01

    It is commonly believed that patterns of social ties affect individuals' economic status. Here we translate this concept into an operational definition at the network level, which allows us to infer the economic well-being of individuals through a measure of their location and influence in the social network. We analyse two large-scale sources: telecommunications and financial data of a whole country's population. Our results show that an individual's location, measured as the optimal collective influence to the structural integrity of the social network, is highly correlated with personal economic status. The observed social network patterns of influence mimic the patterns of economic inequality. For pragmatic use and validation, we carry out a marketing campaign that shows a threefold increase in response rate by targeting individuals identified by our social network metrics as compared to random targeting. Our strategy can also be useful in maximizing the effects of large-scale economic stimulus policies.
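    The "collective influence" measure of a node's location referenced above has a standard operational form from the optimal-percolation literature: CI_L(i) = (k_i − 1) × Σ over nodes j on the frontier of the radius-L ball around i of (k_j − 1). The small graph and radius below are illustrative, not the paper's data.

```python
from collections import deque

# Sketch of collective influence: degree-minus-one of the node times the
# summed degree-minus-one over the ball frontier at distance L.
def collective_influence(adj, i, L):
    k = {u: len(vs) for u, vs in adj.items()}
    dist = {i: 0}
    q = deque([i])
    frontier = []
    while q:
        u = q.popleft()
        if dist[u] == L:          # reached the ball frontier: collect, don't expand
            frontier.append(u)
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return (k[i] - 1) * sum(k[j] - 1 for j in frontier)

# Star with a pendant path: 0-1, 0-2, 0-3, 3-4
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}
print(collective_influence(adj, 0, 1))  # (3-1) * ((1-1)+(1-1)+(2-1)) = 2
```

Ranking customers by such a score, rather than by raw degree, is what the abstract's targeting experiment compares against random selection.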

  17. Inferring personal economic status from social network location.

    PubMed

    Luo, Shaojun; Morone, Flaviano; Sarraute, Carlos; Travizano, Matías; Makse, Hernán A

    2017-05-16

    It is commonly believed that patterns of social ties affect individuals' economic status. Here we translate this concept into an operational definition at the network level, which allows us to infer the economic well-being of individuals through a measure of their location and influence in the social network. We analyse two large-scale sources: telecommunications and financial data of a whole country's population. Our results show that an individual's location, measured as the optimal collective influence to the structural integrity of the social network, is highly correlated with personal economic status. The observed social network patterns of influence mimic the patterns of economic inequality. For pragmatic use and validation, we carry out a marketing campaign that shows a threefold increase in response rate by targeting individuals identified by our social network metrics as compared to random targeting. Our strategy can also be useful in maximizing the effects of large-scale economic stimulus policies.

  18. Reconstruction of stochastic temporal networks through diffusive arrival times

    NASA Astrophysics Data System (ADS)

    Li, Xun; Li, Xiang

    2017-06-01

    Temporal networks have opened a new dimension in the definition and quantification of complex interacting systems. Our ability to identify and reproduce time-resolved interaction patterns is, however, limited by the restricted access to empirical individual-level data. Here we propose an inverse modelling method based on first-arrival observations of the diffusion process taking place on temporal networks. We describe an efficient coordinate-ascent implementation for inferring stochastic temporal networks that builds in particular but not exclusively on the null model assumption of mutually independent interaction sequences at the dyadic level. The results of benchmark tests applied on both synthesized and empirical network data sets confirm the validity of our algorithm, showing the feasibility of statistically accurate inference of temporal networks only from moderate-sized samples of diffusion cascades. Our approach provides an effective and flexible scheme for the temporally augmented inverse problems of network reconstruction and has potential in a broad variety of applications.

  19. Super-Joule heating in graphene and silver nanowire network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maize, Kerry; Das, Suprem R.; Sadeque, Sajia

    Transistors, sensors, and transparent conductors based on randomly assembled nanowire networks rely on multi-component percolation for unique and distinctive applications in flexible electronics, biochemical sensing, and solar cells. While conduction models for 1-D and 1-D/2-D networks have been developed, typically assuming linear electronic transport and self-heating, these models have not been validated by direct high-resolution characterization of coupled electronic pathways and thermal response. In this letter, we show the occurrence of nonlinear “super-Joule” self-heating at the transport bottlenecks in networks of silver nanowires and silver nanowire/single-layer graphene hybrids using high-resolution thermoreflectance (TR) imaging. TR images of the microscopic self-heating hotspots within nanowire network and nanowire/graphene hybrid network devices, with submicron spatial resolution, are used to infer electrical current pathways. The results encourage a fundamental reevaluation of transport models for network-based percolating conductors.

  20. On investigating social dynamics in tactical opportunistic mobile networks

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Li, Yong

    2014-06-01

    The efficiency of military mobile network operations at the tactical edge is challenged by the practical Disconnected, Intermittent, and Limited (DIL) environments encountered there, which make it hard to maintain persistent end-to-end wireless network connectivity. Opportunistic mobile networks are hence devised to model such tactical networking scenarios. Social relations among warfighters in tactical opportunistic mobile networks are implicitly represented by their opportunistic contacts via short-range radios, but were inappropriately treated as stationary over time by conventional wisdom. In this paper, we develop analytical models to probabilistically investigate the temporal dynamics of these social relationships, which are critical to efficient mobile communication in the battlespace. We formulate such dynamics by developing various sociological metrics, including centrality and community, with respect to opportunistic mobile network contexts. These metrics investigate social dynamics based on the experimentally validated skewness of users' transient contact distributions over time.

  1. Reconstruction of stochastic temporal networks through diffusive arrival times

    PubMed Central

    Li, Xun; Li, Xiang

    2017-01-01

    Temporal networks have opened a new dimension in the definition and quantification of complex interacting systems. Our ability to identify and reproduce time-resolved interaction patterns is, however, limited by the restricted access to empirical individual-level data. Here we propose an inverse modelling method based on first-arrival observations of the diffusion process taking place on temporal networks. We describe an efficient coordinate-ascent implementation for inferring stochastic temporal networks that builds in particular but not exclusively on the null model assumption of mutually independent interaction sequences at the dyadic level. The results of benchmark tests applied on both synthesized and empirical network data sets confirm the validity of our algorithm, showing the feasibility of statistically accurate inference of temporal networks only from moderate-sized samples of diffusion cascades. Our approach provides an effective and flexible scheme for the temporally augmented inverse problems of network reconstruction and has potential in a broad variety of applications. PMID:28604687

  2. Supervised local error estimation for nonlinear image registration using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Eppenhof, Koen A. J.; Pluim, Josien P. W.

    2017-02-01

    Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.
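    The label-free construction of training targets described above, applying a known random deformation and regressing on its per-pixel norm, can be sketched in a few lines. Image size and displacement-field statistics are illustrative assumptions.

```python
import numpy as np

# Sketch of target generation without manual labels: deform an image with
# a known random displacement field, then use the per-pixel Euclidean norm
# of that field as the regression target for the error-estimating network.
rng = np.random.default_rng(42)
h, w = 32, 32

# Known synthetic displacement field (dy, dx) in pixels.
field = rng.normal(0, 2.0, size=(2, h, w))

# Per-pixel "registration error" target: norm of the residual deformation.
target = np.linalg.norm(field, axis=0)
print(target.shape)  # (32, 32)
```

A patch-based CNN would then be trained to map image patches around each pixel to the corresponding `target` value; that model is omitted here to keep the sketch dependency-free.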

  3. Sequential defense against random and intentional attacks in complex networks.

    PubMed

    Chen, Pin-Yu; Cheng, Shin-Ming

    2015-02-01

    Network robustness against attacks is one of the most fundamental research topics in network science, as it is closely associated with the reliability and functionality of various networking paradigms. However, despite studies of intrinsic topological vulnerability to node removals, little is known about network robustness when network defense mechanisms are implemented, especially for networked engineering systems equipped with detection capabilities. In this paper, a sequential defense mechanism is first proposed in complex networks for attack inference and vulnerability assessment, where a data fusion center sequentially infers the presence of an attack based on the binary attack status reported from the nodes in the network. The network robustness is evaluated in terms of the ability to identify the attack prior to network disruption under two major attack schemes, i.e., random and intentional attacks. We provide a parametric plug-in model for performance evaluation of the proposed mechanism and validate its effectiveness and reliability via canonical complex network models and a real-world large-scale network topology. The results show that the sequential defense mechanism greatly improves network robustness and mitigates the possibility of network disruption by acquiring limited attack status information from a small subset of nodes in the network.
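    Sequential inference from binary attack reports at a fusion center is commonly formalized as Wald's sequential probability ratio test (SPRT). The sketch below assumes illustrative per-node detection and false-alarm rates; it conveys the flavor of sequential defense, not the paper's exact plug-in model.

```python
import math

# SPRT on a stream of binary reports: p1 = per-node detection rate when an
# attack is present, p0 = false-alarm rate when it is absent. The fusion
# center accumulates a log-likelihood ratio and stops at either boundary.
def sprt(reports, p0=0.1, p1=0.8, alpha=0.01, beta=0.01):
    upper = math.log((1 - beta) / alpha)   # declare "attack" boundary
    lower = math.log(beta / (1 - alpha))   # declare "no attack" boundary
    llr = 0.0
    for n, r in enumerate(reports, 1):
        llr += math.log(p1 / p0) if r else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "attack", n
        if llr <= lower:
            return "no attack", n
    return "undecided", len(reports)

print(sprt([1, 1, 1, 1]))  # → ('attack', 3): decided before the stream ends
```

The appeal for network defense is that a decision is typically reached after hearing from only a small subset of nodes, matching the abstract's claim about limited attack status information.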

  4. Using Cognitive Control in Software Defined Networking for Port Scan Detection

    DTIC Science & Technology

    2017-07-01

    ARL-TR-8059 ● July 2017 ● US Army Research Laboratory. Using Cognitive Control in Software-Defined Networking for Port Scan Detection, by Vinod K Mishra, Computational and Information Sciences Directorate, ARL.

  5. High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks.

    PubMed

    Rajkomar, Alvin; Lingam, Sneha; Taylor, Andrew G; Blum, Michael; Mongan, John

    2017-02-01

    The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100 % (95 % CI 99.73-100 %) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images, and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.

  6. Node fingerprinting: an efficient heuristic for aligning biological networks.

    PubMed

    Radu, Alex; Charleston, Michael

    2014-10-01

    With the continuing increase in availability of biological data and improvements to biological models, biological network analysis has become a promising area of research. An emerging technique for the analysis of biological networks is through network alignment. Network alignment has been used to calculate genetic distance, similarities between regulatory structures, and the effect of external forces on gene expression, and to depict conditional activity of expression modules in cancer. Network alignment is algorithmically complex, and therefore we must rely on heuristics, ideally as efficient and accurate as possible. The majority of current techniques for network alignment rely on precomputed information, such as with protein sequence alignment, or on tunable network alignment parameters, which may introduce an increased computational overhead. Our presented algorithm, which we call Node Fingerprinting (NF), is appropriate for performing global pairwise network alignment without precomputation or tuning, can be fully parallelized, and is able to quickly compute an accurate alignment between two biological networks. It has performed as well as or better than existing algorithms on biological and simulated data, and with fewer computational resources. The algorithmic validation performed demonstrates the low computational resource requirements of NF.
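    A fingerprint-style alignment can be illustrated with purely local degree signatures: each node gets a vector summarizing its neighborhood, and nodes are matched across networks by fingerprint similarity. This toy matcher conveys the flavor of alignment without precomputation or tuning, but it is not the published NF algorithm.

```python
# Fingerprint = sorted degrees of a node's neighbors; greedy matching of
# nodes across two networks by identical-fingerprint preference.
def fingerprint(adj, u):
    deg = {v: len(ws) for v, ws in adj.items()}
    return tuple(sorted(deg[v] for v in adj[u]))

def align(adj_a, adj_b):
    fb = {v: fingerprint(adj_b, v) for v in adj_b}
    mapping, used = {}, set()
    for u in adj_a:
        fa = fingerprint(adj_a, u)
        # Prefer an unused node with the same fingerprint; tie-break by name.
        best = min((v for v in adj_b if v not in used),
                   key=lambda v: (fb[v] != fa, v))
        mapping[u] = best
        used.add(best)
    return mapping

a = {0: [1, 2], 1: [0], 2: [0]}                 # tiny star: 0 is the hub
b = {"x": ["y", "z"], "y": ["x"], "z": ["x"]}   # isomorphic star: "x" is the hub
print(align(a, b))  # hub 0 maps to hub "x"
```

A real aligner would iterate, refining fingerprints with information propagated from already-matched neighbors; the greedy single pass here is the simplest possible stand-in.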

  7. Constructing Robust Cooperative Networks using a Multi-Objective Evolutionary Algorithm

    PubMed Central

    Wang, Shuai; Liu, Jing

    2017-01-01

    The design and construction of network structures oriented towards different applications has attracted much attention recently. The existing studies indicated that structural heterogeneity plays different roles in promoting cooperation and robustness. Compared with rewiring a predefined network, it is more flexible and practical to construct new networks that satisfy the desired properties. Therefore, in this paper, we study a method for constructing robust cooperative networks where the only constraint is that the number of nodes and links is predefined. We model this network construction problem as a multi-objective optimization problem and propose a multi-objective evolutionary algorithm, named MOEA-Netrc, to generate the desired networks from arbitrary initializations. The performance of MOEA-Netrc is validated on several synthetic and real-world networks. The results show that MOEA-Netrc can construct balanced candidates and is insensitive to the initializations. MOEA-Netrc can find the Pareto fronts for networks with different levels of cooperation and robustness. In addition, further investigation of the robustness of the constructed networks revealed the impact on other aspects of robustness during the construction process. PMID:28134314

  8. A two-stage flow-based intrusion detection model for next-generation networks.

    PubMed

    Umer, Muhammad Fahad; Sher, Muhammad; Bi, Yaxin

    2018-01-01

    The next-generation network provides state-of-the-art access-independent services over converged mobile and fixed networks. Security in the converged network environment is a major challenge. Traditional packet and protocol-based intrusion detection techniques cannot be used in next-generation networks due to slow throughput, low accuracy and their inability to inspect encrypted payload. An alternative solution for protection of next-generation networks is to use network flow records for detection of malicious activity in the network traffic. The network flow records are independent of access networks and user applications. In this paper, we propose a two-stage flow-based intrusion detection system for next-generation networks. The first stage uses an enhanced unsupervised one-class support vector machine which separates malicious flows from normal network traffic. The second stage uses a self-organizing map which automatically groups malicious flows into different alert clusters. We validated the proposed approach on two flow-based datasets and obtained promising results.
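    The two-stage pipeline can be caricatured with simple stand-ins: a distance-to-centroid anomaly score in place of the one-class SVM, and nearest-prototype assignment in place of the self-organizing map. All flows, thresholds, and prototypes below are simulated illustrations, not the paper's datasets or models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 2-D flow features: benign traffic near the origin, two attack
# populations far away (e.g. scan vs. flood behavior).
normal = rng.normal(0, 1, (200, 2))
attacks = np.vstack([rng.normal(6, 0.3, (20, 2)),
                     rng.normal(-6, 0.3, (20, 2))])
flows = np.vstack([normal, attacks])

# Stage 1 (one-class stand-in): flag flows far from the benign centroid.
center = normal.mean(axis=0)
radius = np.quantile(np.linalg.norm(normal - center, axis=1), 0.99)
flagged = flows[np.linalg.norm(flows - center, axis=1) > radius]

# Stage 2 (SOM stand-in): group flagged flows by nearest prototype,
# yielding one alert cluster per attack type.
protos = np.array([[6.0, 6.0], [-6.0, -6.0]])
labels = np.argmin(np.linalg.norm(flagged[:, None] - protos[None], axis=2),
                   axis=1)
print(len(flagged), sorted(set(labels.tolist())))
```

The design point survives the simplification: stage 1 only has to separate "malicious" from "normal", so it can be trained on benign traffic alone, while stage 2 organizes the alerts into interpretable clusters.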

  9. A two-stage flow-based intrusion detection model for next-generation networks

    PubMed Central

    2018-01-01

    The next-generation network provides state-of-the-art access-independent services over converged mobile and fixed networks. Security in the converged network environment is a major challenge. Traditional packet and protocol-based intrusion detection techniques cannot be used in next-generation networks due to slow throughput, low accuracy and their inability to inspect encrypted payload. An alternative solution for protection of next-generation networks is to use network flow records for detection of malicious activity in the network traffic. The network flow records are independent of access networks and user applications. In this paper, we propose a two-stage flow-based intrusion detection system for next-generation networks. The first stage uses an enhanced unsupervised one-class support vector machine which separates malicious flows from normal network traffic. The second stage uses a self-organizing map which automatically groups malicious flows into different alert clusters. We validated the proposed approach on two flow-based datasets and obtained promising results. PMID:29329294

  10. Holographic neural networks versus conventional neural networks: a comparative evaluation for the classification of landmine targets in ground-penetrating radar images

    NASA Astrophysics Data System (ADS)

    Mudigonda, Naga R.; Kacelenga, Ray; Edwards, Mark

    2004-09-01

    This paper evaluates the performance of a holographic neural network in comparison with a conventional feedforward backpropagation neural network for the classification of landmine targets in ground penetrating radar images. The data used in the study were acquired from four different test sites using the landmine detection system developed by General Dynamics Canada Ltd., in collaboration with the Defense Research and Development Canada, Suffield. A set of seven features extracted for each detected alarm is used as stimulus inputs for the networks. The recall responses of the networks are then evaluated against the ground truth to declare true or false detections. The area computed under the receiver operating characteristic curve is used for comparative purposes. With a large dataset comprising data from multiple sites, both the holographic and conventional networks showed comparable trends in recall accuracies with area values of 0.88 and 0.87, respectively. By using independent validation datasets, the holographic network's generalization performance was observed to be better (mean area = 0.86) as compared to the conventional network (mean area = 0.82). Despite the widely publicized theoretical advantages of the holographic technology, use of more than the required number of cortical memory elements resulted in an over-fitting phenomenon of the holographic network.

  11. Functional abilities and cognitive decline in adult and aging intellectual disabilities. Psychometric validation of an Italian version of the Alzheimer's Functional Assessment Tool (AFAST): analysis of its clinical significance with linear statistics and artificial neural networks.

    PubMed

    De Vreese, L P; Gomiero, T; Uberti, M; De Bastiani, E; Weger, E; Mantesso, U; Marangoni, A

    2015-04-01

    (a) A psychometric validation of an Italian version of the Alzheimer's Functional Assessment Tool scale (AFAST-I), designed for informant-based assessment of the degree of impairment and of assistance required in seven basic daily activities in adult/elderly people with intellectual disabilities (ID) and (suspected) dementia; (b) a pilot analysis of its clinical significance with traditional statistical procedures and with an artificial neural network. AFAST-I was administered to the professional caregivers of 61 adults/seniors with ID with a mean age (± SD) of 53.4 (± 7.7) years (36% with Down syndrome). Internal consistency (Cronbach's α coefficient), inter/intra-rater reliabilities (intra-class coefficients, ICC) and concurrent, convergent and discriminant validity (Pearson's r coefficients) were computed. Clinical significance was probed by analysing the relationships among AFAST-I scores and the Sum of Cognitive Scores (SCS) and the Sum of Social Scores (SOS) of the Dementia Questionnaire for Persons with Intellectual Disabilities (DMR-I) after standardisation of their raw scores in equivalent scores (ES). An adaptive artificial system (AutoContractive Maps, AutoCM) was applied to all the variables recorded in the study sample, aimed at uncovering which variable occupies a central position and supports the entire network made up of the remaining variables interconnected among themselves with different weights. AFAST-I shows a high level of internal homogeneity with a Cronbach's α coefficient of 0.92. Inter-rater and intra-rater reliabilities were also excellent with ICC correlations of 0.96 and 0.93, respectively. The results of the analyses of the different AFAST-I validities all go in the expected direction: concurrent validity (r=-0.87 with ADL); convergent validity (r=0.63 with SCS; r=0.61 with SOS); discriminant validity (r=0.21 with the frequency of occurrence of dementia-related Behavioral Excesses of the Assessment for Adults with Developmental

  12. Synchronization Control of Neural Networks With State-Dependent Coefficient Matrices.

    PubMed

    Zhang, Junfeng; Zhao, Xudong; Huang, Jun

    2016-11-01

    This brief is concerned with synchronization control of a class of neural networks with state-dependent coefficient matrices. In contrast to the existing drive-response neural networks in the literature, a novel model of drive-response neural networks is established. The concepts of uniformly ultimately bounded (UUB) synchronization and convex hull Lyapunov function are introduced. Then, by using the convex hull Lyapunov function approach, the UUB synchronization design of the drive-response neural networks is proposed, and a delay-independent control law guaranteeing the bounded synchronization of the neural networks is constructed. All present conditions are formulated in terms of bilinear matrix inequalities. By comparison, it is shown that the neural networks obtained in this brief are less conservative than those in the literature, and the bounded synchronization is suitable for the novel drive-response neural networks. Finally, an illustrative example is given to verify the validity of the obtained results.

  13. Coarse Scale In Situ Albedo Observations over Heterogeneous Land Surfaces and Validation Strategy

    NASA Astrophysics Data System (ADS)

    Xiao, Q.; Wu, X.; Wen, J.; BAI, J., Sr.

    2017-12-01

    To evaluate and improve the quality of coarse-pixel land surface albedo products, validation against ground measurements of albedo is crucial over spatially and temporally heterogeneous land surfaces. The performance of albedo validation depends on the quality of ground-based albedo measurements at the corresponding coarse-pixel scale, which can be conceptualized as the "truth" value of albedo at that scale. Wireless sensor network (WSN) technology enables continuous observation at the large-pixel scale. Taking albedo products as an example, this paper is dedicated to the validation of coarse-scale albedo products over heterogeneous surfaces based on WSN-observed data, aiming to narrow down the uncertainty of results caused by the spatial scaling mismatch between satellite and ground measurements over heterogeneous surfaces. The reference value of albedo at the coarse-pixel scale can be obtained through an upscaling transform function based on all of the observations for that pixel. In the future we will work to further improve this approach and to develop new methods that better account for the spatio-temporal characteristics of surface albedo. Additionally, how to use the widely distributed single-site measurements over heterogeneous surfaces is also a question to be answered. Keywords: Remote sensing; Albedo; Validation; Wireless sensor network (WSN); Upscaling; Heterogeneous land surface; Albedo truth at coarse-pixel scale
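    A minimal sketch of the upscaling transform, assuming area-weighted averaging of WSN site measurements within one coarse pixel. The albedo values and cover fractions are invented for illustration; real upscaling functions may also weight by sensor footprint or geostatistical structure.

```python
import numpy as np

# Coarse-pixel "truth" albedo as an area-weighted average of WSN point
# measurements, one per land-cover class inside the pixel.
site_albedo = np.array([0.15, 0.22, 0.35])   # e.g. forest, cropland, sand
area_fraction = np.array([0.5, 0.3, 0.2])    # cover fractions (sum to 1)
coarse_truth = float(site_albedo @ area_fraction)
print(round(coarse_truth, 3))  # 0.211

# Validation then compares the satellite retrieval against this reference.
satellite_albedo = 0.20
bias = satellite_albedo - coarse_truth
print(round(bias, 3))  # -0.011
```

Comparing the retrieval to `coarse_truth` rather than to any single site avoids the scaling mismatch the abstract describes: no single point measurement represents a heterogeneous 1-km pixel.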

  14. Development of a UNIX network compatible reactivity computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez, R.F.; Edwards, R.M.

    1996-12-31

    A state-of-the-art UNIX network compatible controller and a UNIX host workstation with MATLAB/SIMULINK software were used to develop, implement, and validate a digital reactivity calculation. An objective of the development was to determine why the reactivity output of a Macintosh-based reactivity computer drifted intolerably.

  15. Is My Network Module Preserved and Reproducible?

    PubMed Central

    Langfelder, Peter; Luo, Rui; Oldham, Michael C.; Horvath, Steve

    2011-01-01

    In many applications, one is interested in determining which of the properties of a network module change across conditions. For example, to validate the existence of a module, it is desirable to show that it is reproducible (or preserved) in an independent test network. Here we study several types of network preservation statistics that do not require a module assignment in the test network. We distinguish network preservation statistics by the type of the underlying network. Some preservation statistics are defined for a general network (defined by an adjacency matrix) while others are only defined for a correlation network (constructed on the basis of pairwise correlations between numeric variables). Our applications show that the correlation structure facilitates the definition of particularly powerful module preservation statistics. We illustrate that evaluating module preservation is in general different from evaluating cluster preservation. We find that it is advantageous to aggregate multiple preservation statistics into summary preservation statistics. We illustrate the use of these methods in six gene co-expression network applications including 1) preservation of cholesterol biosynthesis pathway in mouse tissues, 2) comparison of human and chimpanzee brain networks, 3) preservation of selected KEGG pathways between human and chimpanzee brain networks, 4) sex differences in human cortical networks, 5) sex differences in mouse liver networks. While we find no evidence for sex specific modules in human cortical networks, we find that several human cortical modules are less preserved in chimpanzees. In particular, apoptosis genes are differentially co-expressed between humans and chimpanzees. Our simulation studies and applications show that module preservation statistics are useful for studying differences between the modular structure of networks. 
Data, R software and accompanying tutorials can be downloaded from the following webpage: http://www.genetics.ucla.edu/labs/horvath/CoexpressionNetwork
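    One simple preservation statistic of the kind described, sketched on simulated data: the correlation of intramodular connectivity between a reference and a test correlation network. This illustrates the idea only; it is not the WGCNA package's implementation, and the simulated module is an assumption.

```python
import numpy as np

# Simulate one co-expressed module: 8 variables driven by a shared latent
# factor, observed in 100 samples, then split into reference/test halves.
rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 1))
module = latent @ rng.normal(size=(1, 8)) + 0.3 * rng.normal(size=(100, 8))
ref, test = module[:50], module[50:]

def kim(x):
    """Intramodular connectivity: row sums of |correlation|, self excluded."""
    c = np.abs(np.corrcoef(x, rowvar=False))
    np.fill_diagonal(c, 0.0)
    return c.sum(axis=0)

# Preservation statistic: does the connectivity pattern reproduce in the
# independent test network?
preservation = float(np.corrcoef(kim(ref), kim(test))[0, 1])
print(round(preservation, 2))
```

Because the statistic only needs the two correlation networks, no module assignment is required in the test network, which is the property the abstract emphasizes.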

  16. Validating Network Security Policies via Static Analysis of Router ACL Configuration

    DTIC Science & Technology

    2006-12-01

    this research effort. A. SOFTWARE IMPLEMENTATION: The system software was created with Java, using the NetBeans IDE 5.0, a free, open-source development environment (http://www.netbeans.org). References cited in this excerpt: P. Gupta and N. McKeown (2001), Algorithms for Packet Classification, IEEE Network, vol. 15, issue 2, pp. 24-32; IANA.org (2006), Port Numbers, http://www.iana.org/assignments/port

  17. Distinctive Behaviors of Druggable Proteins in Cellular Networks

    PubMed Central

    Workman, Paul; Al-Lazikani, Bissan

    2015-01-01

    The interaction environment of a protein in a cellular network is important in defining the role that the protein plays in the system as a whole, and thus its potential suitability as a drug target. Despite the importance of the network environment, it is neglected during target selection for drug discovery. Here, we present the first systematic, comprehensive computational analysis of topological, community and graphical network parameters of the human interactome and identify discriminatory network patterns that strongly distinguish drug targets from the interactome as a whole. Importantly, we identify striking differences in the network behavior of targets of cancer drugs versus targets from other therapeutic areas and explore how they may relate to successful drug combinations to overcome acquired resistance to cancer drugs. We develop, computationally validate and provide the first public domain predictive algorithm for identifying druggable neighborhoods based on network parameters. We also make available full predictions for 13,345 proteins to aid target selection for drug discovery. All target predictions are available through canSAR.icr.ac.uk. Underlying data and tools are available at https://cansar.icr.ac.uk/cansar/publications/druggable_network_neighbourhoods/. PMID:26699810

  18. Multilayer Statistical Intrusion Detection in Wireless Networks

    NASA Astrophysics Data System (ADS)

    Hamdi, Mohamed; Meddeb-Makhlouf, Amel; Boudriga, Noureddine

    2008-12-01

The rapid proliferation of mobile applications and services has introduced new vulnerabilities that do not exist in fixed wired networks. Traditional security mechanisms, such as access control and encryption, turn out to be inefficient in modern wireless networks. Given the shortcomings of these protection mechanisms, an important line of research focuses on intrusion detection systems (IDSs). This paper proposes a multilayer statistical intrusion detection framework for wireless networks. The architecture is well suited to wireless networks because the underlying detection models rely on radio parameters and traffic models. Accurate correlation between radio and traffic anomalies enhances the efficiency of the IDS. A radio signal fingerprinting technique based on the maximal overlap discrete wavelet transform (MODWT) is developed. Moreover, a geometric clustering algorithm is presented. Depending on the characteristics of the fingerprinting technique, the clustering algorithm allows the false-positive and false-negative rates to be controlled. Finally, simulation experiments have been carried out to validate the proposed IDS.
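The clustering step described in this record can be illustrated with a minimal sketch. This is not the paper's MODWT pipeline: the nearest-centroid rule, the 2-D fingerprint vectors, and the `radius` threshold are assumptions chosen only to show how a single distance threshold trades false positives against false negatives.

```python
import numpy as np

def cluster_fingerprints(fps, centroids, radius):
    """Assign each fingerprint vector to the nearest known-device centroid;
    vectors farther than `radius` from every centroid are flagged anomalous.
    A smaller radius raises false positives, a larger one false negatives."""
    flags = []
    for fp in fps:
        d = np.linalg.norm(centroids - fp, axis=1)
        flags.append(bool(d.min() > radius))
    return flags

# Toy example: two legitimate device centroids and one spoofed transmitter.
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
obs = np.array([[0.1, -0.2], [5.2, 4.9], [10.0, 10.0]])
print(cluster_fingerprints(obs, centroids, radius=1.0))  # [False, False, True]
```

Tightening `radius` toward 0 would flag the two legitimate readings as well, which is the false-positive/false-negative control knob the abstract refers to.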

  19. Food for Thought ... Mechanistic Validation

    PubMed Central

    Hartung, Thomas; Hoffmann, Sebastian; Stephens, Martin

    2013-01-01

Summary Validation of new approaches in regulatory toxicology is commonly defined as the independent assessment of the reproducibility and relevance (the scientific basis and predictive capacity) of a test for a particular purpose. In large ring trials, the emphasis to date has been mainly on reproducibility and predictive capacity (comparison to the traditional test), with less attention given to the scientific or mechanistic basis. Assessing predictive capacity is difficult for novel approaches (which are based on mechanism), such as pathways of toxicity or the complex networks within the organism (systems toxicology). This is highly relevant for implementing Toxicology for the 21st Century, either by high-throughput testing in the ToxCast/Tox21 project or omics-based testing in the Human Toxome Project. This article explores the mostly neglected assessment of a test's scientific basis, which moves mechanism and causality to the foreground when validating/qualifying tests. Such mechanistic validation faces the problem of establishing causality in complex systems. However, pragmatic adaptations of the Bradford Hill criteria, as well as bioinformatic tools, are emerging. As critical infrastructures of the organism are perturbed by a toxic mechanism, we argue that by focusing on the target of toxicity and its vulnerability, in addition to the way it is perturbed, we can anchor the identification of the mechanism and its verification. PMID:23665802

  20. GFD-Net: A novel semantic similarity methodology for the analysis of gene networks.

    PubMed

    Díaz-Montaña, Juan J; Díaz-Díaz, Norberto; Gómez-Vela, Francisco

    2017-04-01

Since the popularization of biological network inference methods, it has become crucial to create methods to validate the resulting models. Here we present GFD-Net, the first methodology that applies the concept of semantic similarity to gene network analysis. GFD-Net combines the concept of semantic similarity with the use of gene network topology to analyze the functional dissimilarity of gene networks based on Gene Ontology (GO). The main innovation of GFD-Net lies in the way that semantic similarity is used to analyze gene networks taking into account the network topology. GFD-Net selects a functionality for each gene (specified by a GO term), weights each edge according to the dissimilarity between the nodes at its ends, and calculates a quantitative measure of the network functional dissimilarity, i.e., a quantitative value of the degree of dissimilarity between the connected genes. The robustness of GFD-Net as a gene network validation tool was demonstrated by performing a ROC analysis on several network repositories. Furthermore, a well-known network was analyzed, showing that GFD-Net can also be used to infer knowledge. The relevance of GFD-Net becomes more evident in the section "GFD-Net applied to the study of human diseases", where an example of its application is presented. GFD-Net is available as an open-source Cytoscape app which offers a user-friendly interface to configure and execute the algorithm as well as the ability to visualize and interact with the results (http://apps.cytoscape.org/apps/gfdnet). Copyright © 2017 Elsevier Inc. All rights reserved.
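The edge-weighting idea in this record can be sketched in a few lines. This is only an illustration of the shape of the computation: the gene names, GO terms, and the dissimilarity table are invented, whereas a real implementation would derive dissimilarities from a GO-based semantic similarity measure.

```python
# Toy sketch of the GFD-Net idea: pick one functional annotation per gene,
# weight each edge by the dissimilarity of its endpoints' annotations, and
# report the mean edge weight as the network's functional dissimilarity.

go_term = {"geneA": "GO:1", "geneB": "GO:1", "geneC": "GO:2"}
dissim = {("GO:1", "GO:1"): 0.0, ("GO:1", "GO:2"): 0.8, ("GO:2", "GO:2"): 0.0}

def edge_weight(u, v):
    a, b = sorted((go_term[u], go_term[v]))
    return dissim[(a, b)]

def network_dissimilarity(edges):
    weights = [edge_weight(u, v) for u, v in edges]
    return sum(weights) / len(weights)

edges = [("geneA", "geneB"), ("geneB", "geneC")]
print(network_dissimilarity(edges))  # 0.4
```

A lower score indicates that connected genes share function, which is the property the ROC analysis in the abstract evaluates.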

  1. Multi-mode clustering model for hierarchical wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Hu, Xiangdong; Li, Yongfu; Xu, Huifen

    2017-03-01

The topology management, i.e., cluster maintenance, of wireless sensor networks (WSNs) is still a challenge due to their numerous nodes, diverse application scenarios and limited resources, as well as complex dynamics. To address this issue, a multi-mode clustering model (M2CM) is proposed in this study to maintain the clusters of hierarchical WSNs. In particular, unlike the traditional time-triggered model based on whole-network, periodic maintenance, the M2CM is built on local, event-triggered operations. In addition, an adaptive local maintenance algorithm is designed for broken clusters in the WSN according to spatial-temporal demand changes. Numerical experiments are performed using the NS2 network simulation platform. Results validate the effectiveness of the proposed model with respect to network maintenance costs, node energy consumption and transmitted data, as well as network lifetime.

  2. A method of network topology optimization design considering application process characteristic

    NASA Astrophysics Data System (ADS)

    Wang, Chunlin; Huang, Ning; Bai, Yanan; Zhang, Shuo

    2018-03-01

Communication networks are designed to meet the usage requirements of users for various network applications. Current studies of network topology optimization design have mainly considered network traffic, which is the result of network application operation rather than a design element of communication networks. A network application is a procedure in which users consume services under certain performance requirements, and it has an obvious process characteristic. In this paper, we propose a method to optimize communication network topology design that takes the application process characteristic into account. Taking minimum network delay as the objective, and the cost of network design and network connective reliability as constraints, an optimization model of network topology design is formulated, and the optimal solution is searched by a Genetic Algorithm (GA). Furthermore, we investigate the influence of network topology parameters on network delay under multiple process-oriented applications, which can guide the generation of the initial population and thereby improve the efficiency of the GA. Numerical simulations show the effectiveness and validity of our proposed method. Network topology optimization design that considers applications can improve the reliability of applications and provide guidance for network builders in the early stage of network design, which is of great significance in engineering practice.
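The optimization set-up in this record (minimize delay subject to cost and connectivity constraints, searched by a GA) can be sketched as follows. Everything here is an assumption for illustration: the node count, link cost, budget, mean hop count as the delay proxy, and simple connectivity as the stand-in for the reliability constraint.

```python
import random

N = 5           # nodes (assumed)
LINK_COST = 1.0
MAX_COST = 7.0  # design budget (assumed)

def decode(bits):
    """Chromosome = upper-triangle adjacency bits of an undirected graph."""
    pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
    return [p for p, b in zip(pairs, bits) if b]

def avg_delay(edges):
    """Mean shortest-path hop count (delay proxy); inf if disconnected."""
    adj = {i: set() for i in range(N)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    total = 0
    for s in range(N):
        dist = {s: 0}; frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1; nxt.append(w)
            frontier = nxt
        if len(dist) < N:          # disconnected: reliability constraint violated
            return float("inf")
        total += sum(dist.values())
    return total / (N * (N - 1))

def fitness(bits):
    edges = decode(bits)
    d = avg_delay(edges)
    return d + (1e6 if LINK_COST * len(edges) > MAX_COST else 0)  # cost penalty

random.seed(0)
L = N * (N - 1) // 2
pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(30)]
for _ in range(50):  # evolve: truncation selection, one-point crossover, mutation
    pop.sort(key=fitness)
    parents = pop[:10]
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, L)
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:
            k = random.randrange(L); child[k] ^= 1
        children.append(child)
    pop = parents + children
best = min(pop, key=fitness)
print(len(decode(best)), fitness(best))
```

The penalty term turns the cost constraint into part of the objective, which is one common way to handle constraints in a GA; the paper's model also folds in connective reliability, which this sketch reduces to plain connectivity.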

  3. Validation of the Soil Moisture Active Passive mission using USDA-ARS experimental watersheds

    USDA-ARS?s Scientific Manuscript database

    The calibration and validation program of the Soil Moisture Active Passive mission (SMAP) relies upon an international cooperative of in situ networks to provide ground truth references across a variety of landscapes. The USDA Agricultural Research Service operates several experimental watersheds wh...

  4. A mobile sensing system for structural health monitoring: design and validation

    NASA Astrophysics Data System (ADS)

    Zhu, Dapeng; Yi, Xiaohua; Wang, Yang; Lee, Kok-Meng; Guo, Jiajie

    2010-05-01

    This paper describes a new approach using mobile sensor networks for structural health monitoring. Compared with static sensors, mobile sensor networks offer flexible system architectures with adaptive spatial resolutions. The paper first describes the design of a mobile sensing node that is capable of maneuvering on structures built with ferromagnetic materials. The mobile sensing node can also attach/detach an accelerometer onto/from the structural surface. The performance of the prototype mobile sensor network has been validated through laboratory experiments. Two mobile sensing nodes are adopted for navigating on a steel portal frame and providing dense acceleration measurements. Transmissibility function analysis is conducted to identify structural damage using data collected by the mobile sensing nodes. This preliminary work is expected to spawn transformative changes in the use of mobile sensors for future structural health monitoring.
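The transmissibility-function analysis mentioned in this record can be estimated from two acceleration channels as the ratio of cross- to auto-spectral density. A minimal sketch using SciPy; the sampling rate, segment length, and the synthetic gain of 2 between channels are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import csd, welch

# Estimate the transmissibility T(f) = S_xy(f) / S_xx(f) between two
# acceleration channels; a shift in |T| between a healthy baseline and the
# current state is a common damage indicator.
fs = 1000.0
rng = np.random.default_rng(0)
x = rng.standard_normal(8192)   # reference channel
y = 2.0 * x                     # second channel (perfectly scaled here)

f, Sxy = csd(x, y, fs=fs, nperseg=512)
_, Sxx = welch(x, fs=fs, nperseg=512)
T = Sxy / Sxx
print(np.allclose(np.abs(T), 2.0))  # True: |T| recovers the channel gain
```

In practice, damage indices are built from the deviation of |T(f)| (or its phase) from the baseline over selected frequency bands, rather than from a single scalar.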

  5. Remote Sensing Product Verification and Validation at the NASA Stennis Space Center

    NASA Technical Reports Server (NTRS)

    Stanley, Thomas M.

    2005-01-01

    Remote sensing data product verification and validation (V&V) is critical to successful science research and applications development. People who use remote sensing products to make policy, economic, or scientific decisions require confidence in and an understanding of the products' characteristics to make informed decisions about the products' use. NASA data products of coarse to moderate spatial resolution are validated by NASA science teams. NASA's Stennis Space Center (SSC) serves as the science validation team lead for validating commercial data products of moderate to high spatial resolution. At SSC, the Applications Research Toolbox simulates sensors and targets, and the Instrument Validation Laboratory validates critical sensors. The SSC V&V Site consists of radiometric tarps, a network of ground control points, a water surface temperature sensor, an atmospheric measurement system, painted concrete radial target and edge targets, and other instrumentation. NASA's Applied Sciences Directorate participates in the Joint Agency Commercial Imagery Evaluation (JACIE) team formed by NASA, the U.S. Geological Survey, and the National Geospatial-Intelligence Agency to characterize commercial systems and imagery.

  6. TARGET's role in knowledge acquisition, engineering, validation, and documentation

    NASA Technical Reports Server (NTRS)

    Levi, Keith R.

    1994-01-01

We investigate the use of the TARGET task analysis tool in the development of rule-based expert systems. We found TARGET to be very helpful in the knowledge acquisition process. It enabled us to perform knowledge acquisition with one knowledge engineer rather than two. In addition, it improved communication between the domain expert and the knowledge engineer. We also found it to be useful for both the rule development and refinement phases of the knowledge engineering process. Using the network in these phases required us to develop guidelines that enabled us to easily translate the network into production rules. A significant requirement for TARGET to remain useful throughout the knowledge engineering process was the need to carefully maintain consistency between the network and the rule representations. Maintaining consistency not only benefited the knowledge engineering process, but also has significant payoffs in the areas of expert system validation and documentation of the knowledge in the system.

  7. The use of artificial neural networks in experimental data acquisition and aerodynamic design

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J., Jr.

    1991-01-01

It is proposed that an artificial neural network be used to construct an intelligent data acquisition system. The artificial neural network (ANN) model has potential for replacing traditional procedures as well as for use in computational fluid dynamics validation. Potential advantages of the ANN model are listed. As a proof of concept, the author modeled a NACA 0012 airfoil at specific conditions, using the neural network simulator NETS, developed by James Baffes of the NASA Johnson Space Center. The neural network predictions were compared to the actual data. It is concluded that artificial neural networks can provide an elegant and valuable class of mathematical tools for data analysis.

  8. Polarity related influence maximization in signed social networks.

    PubMed

    Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng

    2014-01-01

Influence maximization in social networks has been widely studied, motivated by applications such as the spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem, which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to signed social networks and propose a Polarity-related Independent Cascade (IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods.
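The greedy strategy with the 1-1/e guarantee can be sketched for the standard (unsigned) Independent Cascade model; the polarity extension IC-P is omitted here, and the edge probability, Monte-Carlo run count, and toy graph are all assumptions for illustration.

```python
import random

def ic_spread(graph, seeds, p=0.3, runs=200, rng=None):
    """Monte-Carlo estimate of expected spread under the Independent Cascade
    model: each newly activated node gets one chance to activate each
    inactive out-neighbour with probability p."""
    rng = rng or random.Random(0)
    total = 0
    for _ in range(runs):
        active = set(seeds); frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and rng.random() < p:
                        active.add(v); nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / runs

def greedy_seeds(graph, k):
    """Greedy selection; because the spread function is monotone and
    submodular, this is within (1 - 1/e) of the optimal seed set."""
    seeds = []
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for _ in range(k):
        best = max(nodes - set(seeds),
                   key=lambda n: ic_spread(graph, seeds + [n]))
        seeds.append(best)
    return seeds

graph = {0: [1, 2], 1: [3], 2: [3], 3: [4], 5: [0]}
print(greedy_seeds(graph, 2))
```

Each greedy step picks the node with the largest marginal gain in estimated spread; the IC-P variant additionally tracks the polarity of each activation.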

  9. Polarity Related Influence Maximization in Signed Social Networks

    PubMed Central

    Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng

    2014-01-01

Influence maximization in social networks has been widely studied, motivated by applications such as the spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem, which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to signed social networks and propose a Polarity-related Independent Cascade (IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods. PMID:25061986

  10. Iowa Hydrologic and Environmental Validation Site: A Proposal to the Community

    NASA Astrophysics Data System (ADS)

    Bradley, A. A.; Ciach, G. J.; Eichinger, W. N.; Hornbuckle, K. C.; Illman, W.; Krajewski, W. F.; Kruger, A.; Patel, V. C.; Weirich, F. H.; Zhang, Y.

    2002-05-01

We present a proposal to the hydrologic research community to establish a validation site in eastern Iowa. Many hydrological and meteorological variables observed using remote sensing techniques or predicted using numerical simulation models require validation. Validation, understood as quantification of the uncertainty, is difficult and often even impossible using operationally available in-situ observations. Specialized high-density networks of sensors with well-established error characteristics are required to serve as a reference. We propose to establish a well-instrumented site for validation of several hydrometeorological and environmental variables near Iowa City, Iowa. We foresee this site as a national resource of detailed information collected in partnership with federal, state, and local agencies but independent of their routine mission-oriented operations. The data would be distributed in real time via the Internet to the research community nationwide to support model validation and development studies. In the presentation we justify the need for such sites, make the case for siting a prototype in Iowa, and present preliminary considerations for the site's design and the data distribution system.

  11. VALUE - A Framework to Validate Downscaling Approaches for Climate Change Studies

    NASA Astrophysics Data System (ADS)

Maraun, Douglas; Widmann, Martin; Gutiérrez, José M.; Kotlarski, Sven; Chandler, Richard E.; Hertig, Elke; Wibig, Joanna; Huth, Radan; Wilcke, Renate A. I.

    2015-04-01

VALUE is an open European network to validate and compare downscaling methods for climate change research. VALUE aims to foster collaboration and knowledge exchange between climatologists, impact modellers, statisticians, and stakeholders to establish an interdisciplinary downscaling community. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. Here, we present the key ingredients of this framework. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur: What is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Do methods fail in representing regional climate change? How good is the overall representation of regional climate, including errors inherited from global climate models? The framework will be the basis for a comprehensive community-open downscaling intercomparison study, but is also intended to provide general guidance for other validation studies.

  12. VALUE: A framework to validate downscaling approaches for climate change studies

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Widmann, Martin; Gutiérrez, José M.; Kotlarski, Sven; Chandler, Richard E.; Hertig, Elke; Wibig, Joanna; Huth, Radan; Wilcke, Renate A. I.

    2015-01-01

VALUE is an open European network to validate and compare downscaling methods for climate change research. VALUE aims to foster collaboration and knowledge exchange between climatologists, impact modellers, statisticians, and stakeholders to establish an interdisciplinary downscaling community. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. In this paper, we present the key ingredients of this framework. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur: What is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Do methods fail in representing regional climate change? How good is the overall representation of regional climate, including errors inherited from global climate models? The framework will be the basis for a comprehensive community-open downscaling intercomparison study, but is also intended to provide general guidance for other validation studies.

  13. A Mobile Sensor Network System for Monitoring of Unfriendly Environments.

    PubMed

    Song, Guangming; Zhou, Yaoxin; Ding, Fei; Song, Aiguo

    2008-11-14

    Observing microclimate changes is one of the most popular applications of wireless sensor networks. However, some target environments are often too dangerous or inaccessible to humans or large robots and there are many challenges for deploying and maintaining wireless sensor networks in those unfriendly environments. This paper presents a mobile sensor network system for solving this problem. The system architecture, the mobile node design, the basic behaviors and advanced network capabilities have been investigated respectively. A wheel-based robotic node architecture is proposed here that can add controlled mobility to wireless sensor networks. A testbed including some prototype nodes has also been created for validating the basic functions of the proposed mobile sensor network system. Motion performance tests have been done to get the positioning errors and power consumption model of the mobile nodes. Results of the autonomous deployment experiment show that the mobile nodes can be distributed evenly into the previously unknown environments. It provides powerful support for network deployment and maintenance and can ensure that the sensor network will work properly in unfriendly environments.

  14. Validation of Proposed "DSM-5" Criteria for Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Frazier, Thomas W.; Youngstrom, Eric A.; Speer, Leslie; Embacher, Rebecca; Law, Paul; Constantino, John; Findling, Robert L.; Hardan, Antonio Y.; Eng, Charis

    2012-01-01

    Objective: The primary aim of the present study was to evaluate the validity of proposed "DSM-5" criteria for autism spectrum disorder (ASD). Method: We analyzed symptoms from 14,744 siblings (8,911 ASD and 5,863 non-ASD) included in a national registry, the Interactive Autism Network. Youth 2 through 18 years of age were included if at least one…

  15. Robustness and Vulnerability of Networks with Dynamical Dependency Groups.

    PubMed

    Bai, Ya-Nan; Huang, Ning; Wang, Lei; Wu, Zhi-Xi

    2016-11-28

    The dependency property and self-recovery of failure nodes both have great effects on the robustness of networks during the cascading process. Existing investigations focused mainly on the failure mechanism of static dependency groups without considering the time-dependency of interdependent nodes and the recovery mechanism in reality. In this study, we present an evolving network model consisting of failure mechanisms and a recovery mechanism to explore network robustness, where the dependency relations among nodes vary over time. Based on generating function techniques, we provide an analytical framework for random networks with arbitrary degree distribution. In particular, we theoretically find that an abrupt percolation transition exists corresponding to the dynamical dependency groups for a wide range of topologies after initial random removal. Moreover, when the abrupt transition point is above the failure threshold of dependency groups, the evolving network with the larger dependency groups is more vulnerable; when below it, the larger dependency groups make the network more robust. Numerical simulations employing the Erdős-Rényi network and Barabási-Albert scale free network are performed to validate our theoretical results.
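The cascade mechanism in this record can be illustrated with a small simulation. This sketch is a simplification under stated assumptions: dependency groups are static rather than time-varying as in the paper, no recovery mechanism is modeled, and the network size, group size, failure threshold, and removal fraction are all invented.

```python
import random

def er_graph(n, p, rng):
    """Erdős–Rényi graph as an adjacency dict."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j); adj[j].add(i)
    return adj

def giant_component(adj, alive):
    """Largest connected component among surviving nodes."""
    best, seen = set(), set()
    for s in alive:
        if s in seen:
            continue
        comp, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v in alive and v not in comp:
                    comp.add(v); stack.append(v)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best

def cascade(n=300, p=0.02, group_size=3, fail_frac=0.5, removal=0.3, seed=1):
    rng = random.Random(seed)
    adj = er_graph(n, p, rng)
    groups = [list(range(i, min(i + group_size, n)))
              for i in range(0, n, group_size)]
    alive = set(range(n)) - set(rng.sample(range(n), int(removal * n)))
    while True:
        alive &= giant_component(adj, alive)   # detached nodes fail
        dead_now = set()
        for g in groups:                       # a group collapses once too many
            lost = sum(1 for u in g if u not in alive)
            if 0 < lost and lost / len(g) >= fail_frac:
                dead_now |= {u for u in g if u in alive}
        if not dead_now:
            return len(alive) / n              # surviving fraction
        alive -= dead_now

print(cascade())
```

Iterating percolation failures and group failures to a fixed point is the discrete analogue of the generating-function fixed-point equations used in the analytical treatment.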

  16. Joint estimation of preferential attachment and node fitness in growing complex networks

    NASA Astrophysics Data System (ADS)

    Pham, Thong; Sheridan, Paul; Shimodaira, Hidetoshi

    2016-09-01

Complex network growth across diverse fields of science is hypothesized to be driven in the main by a combination of preferential attachment and node fitness processes. For measuring the respective influences of these processes, previous approaches make strong and untested assumptions on the functional forms of either the preferential attachment function or the fitness function, or both. We introduce a Bayesian statistical method called PAFit that estimates preferential attachment and node fitness without imposing such functional constraints; it works by maximizing a log-likelihood function with suitably added regularization terms. We use PAFit to investigate the interplay between preferential attachment and node fitness processes in a Facebook wall-post network. While we uncover evidence for both preferential attachment and node fitness, thus validating the hypothesis that these processes together drive complex network evolution, we also find that node fitness plays the bigger role in determining the degree of a node. This is the first validation of its kind on real-world network data. Surprisingly, however, the rate of preferential attachment is found to deviate from the conventional log-linear form when node fitness is taken into account. The proposed method is implemented in the R package PAFit.

  17. Joint estimation of preferential attachment and node fitness in growing complex networks

    PubMed Central

    Pham, Thong; Sheridan, Paul; Shimodaira, Hidetoshi

    2016-01-01

Complex network growth across diverse fields of science is hypothesized to be driven in the main by a combination of preferential attachment and node fitness processes. For measuring the respective influences of these processes, previous approaches make strong and untested assumptions on the functional forms of either the preferential attachment function or the fitness function, or both. We introduce a Bayesian statistical method called PAFit that estimates preferential attachment and node fitness without imposing such functional constraints; it works by maximizing a log-likelihood function with suitably added regularization terms. We use PAFit to investigate the interplay between preferential attachment and node fitness processes in a Facebook wall-post network. While we uncover evidence for both preferential attachment and node fitness, thus validating the hypothesis that these processes together drive complex network evolution, we also find that node fitness plays the bigger role in determining the degree of a node. This is the first validation of its kind on real-world network data. Surprisingly, however, the rate of preferential attachment is found to deviate from the conventional log-linear form when node fitness is taken into account. The proposed method is implemented in the R package PAFit. PMID:27601314
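The idea of estimating the attachment kernel A_k without a parametric form can be shown on synthetic data. This is not PAFit (which solves a regularized maximum-likelihood problem jointly with per-node fitness); it is a simple counting estimator on a network grown by pure preferential attachment, where A_k should come out roughly proportional to k.

```python
import random

rng = random.Random(42)
ends = [0, 1]                 # flat list of edge endpoints: a uniform draw
degree = {0: 1, 1: 1}         # from it picks a node with probability ∝ degree
chosen, exposure = {}, {}     # attachments received / availability, per degree
for new in range(2, 2002):
    counts = {}
    for k in degree.values():             # how many nodes of each degree
        counts[k] = counts.get(k, 0) + 1  # are available this step
    for k, c in counts.items():
        exposure[k] = exposure.get(k, 0) + c
    t = rng.choice(ends)                  # preferential attachment step
    chosen[degree[t]] = chosen.get(degree[t], 0) + 1
    degree[t] += 1
    degree[new] = 1
    ends += [t, new]

# Nonparametric kernel estimate: attachments per unit of availability.
A = {k: chosen[k] / exposure[k] for k in (1, 2, 3)}
print(round(A[2] / A[1], 2), round(A[3] / A[1], 2))
```

Under pure preferential attachment the ratios A_2/A_1 and A_3/A_1 should concentrate near 2 and 3; deviations from such linear growth are exactly what the record reports once node fitness is accounted for.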

  18. A neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine

    NASA Astrophysics Data System (ADS)

    Guo, T. H.; Musgrave, J.

    1992-11-01

    In order to properly utilize the available fuel and oxidizer of a liquid propellant rocket engine, the mixture ratio is closed loop controlled during main stage (65 percent - 109 percent power) operation. However, because of the lack of flight-capable instrumentation for measuring mixture ratio, the value of mixture ratio in the control loop is estimated using available sensor measurements such as the combustion chamber pressure and the volumetric flow, and the temperature and pressure at the exit duct on the low pressure fuel pump. This estimation scheme has two limitations. First, the estimation formula is based on an empirical curve fitting which is accurate only within a narrow operating range. Second, the mixture ratio estimate relies on a few sensor measurements and loss of any of these measurements will make the estimate invalid. In this paper, we propose a neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine. The estimator is an extension of a previously developed neural network based sensor failure detection and recovery algorithm (sensor validation). This neural network uses an auto associative structure which utilizes the redundant information of dissimilar sensors to detect inconsistent measurements. Two approaches have been identified for synthesizing mixture ratio from measurement data using a neural network. The first approach uses an auto associative neural network for sensor validation which is modified to include the mixture ratio as an additional output. The second uses a new network for the mixture ratio estimation in addition to the sensor validation network. Although mixture ratio is not directly measured in flight, it is generally available in simulation and in test bed firing data from facility measurements of fuel and oxidizer volumetric flows. The pros and cons of these two approaches will be discussed in terms of robustness to sensor failures and accuracy of the estimate during typical transients using

  19. A neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Guo, T. H.; Musgrave, J.

    1992-01-01

    In order to properly utilize the available fuel and oxidizer of a liquid propellant rocket engine, the mixture ratio is closed loop controlled during main stage (65 percent - 109 percent power) operation. However, because of the lack of flight-capable instrumentation for measuring mixture ratio, the value of mixture ratio in the control loop is estimated using available sensor measurements such as the combustion chamber pressure and the volumetric flow, and the temperature and pressure at the exit duct on the low pressure fuel pump. This estimation scheme has two limitations. First, the estimation formula is based on an empirical curve fitting which is accurate only within a narrow operating range. Second, the mixture ratio estimate relies on a few sensor measurements and loss of any of these measurements will make the estimate invalid. In this paper, we propose a neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine. The estimator is an extension of a previously developed neural network based sensor failure detection and recovery algorithm (sensor validation). This neural network uses an auto associative structure which utilizes the redundant information of dissimilar sensors to detect inconsistent measurements. Two approaches have been identified for synthesizing mixture ratio from measurement data using a neural network. The first approach uses an auto associative neural network for sensor validation which is modified to include the mixture ratio as an additional output. The second uses a new network for the mixture ratio estimation in addition to the sensor validation network. Although mixture ratio is not directly measured in flight, it is generally available in simulation and in test bed firing data from facility measurements of fuel and oxidizer volumetric flows. The pros and cons of these two approaches will be discussed in terms of robustness to sensor failures and accuracy of the estimate during typical transients using
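The auto-associative principle behind the sensor-validation network in the two records above can be illustrated with a linear stand-in: learn a low-rank reconstruction of redundant sensor channels from healthy data, then flag the channel with the largest reconstruction residual. The paper uses an auto-associative neural network; PCA is its linear analogue, and the mixing matrix, channel count, and injected fault here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.standard_normal((500, 2))            # two true physical modes
mix = np.array([[1.,  1., 1.,  1., 1.,  1.],      # six redundant sensors
                [1., -1., 1., -1., 1., -1.]])
healthy = latent @ mix + 0.01 * rng.standard_normal((500, 6))

# Fit a rank-2 reconstruction of the healthy sensor readings.
mean = healthy.mean(axis=0)
_, _, Vt = np.linalg.svd(healthy - mean, full_matrices=False)
P = Vt[:2].T @ Vt[:2]                             # rank-2 projector

def residuals(sample):
    """Per-channel reconstruction error; a failed sensor is inconsistent
    with the redundancy learned from healthy data."""
    centered = sample - mean
    return np.abs(centered - centered @ P)

sample = latent[0] @ mix                          # a fresh healthy reading
sample[3] += 5.0                                  # inject a fault on sensor 3
print(int(np.argmax(residuals(sample))))          # prints 3: the faulted channel
```

The auto-associative network generalizes this to nonlinear sensor redundancy, and the mixture-ratio estimator in the records extends the same structure with an additional output.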

  20. An Evolutionarily Conserved Innate Immunity Protein Interaction Network*

    PubMed Central

    De Arras, Lesly; Seng, Amara; Lackford, Brad; Keikhaee, Mohammad R.; Bowerman, Bruce; Freedman, Jonathan H.; Schwartz, David A.; Alper, Scott

    2013-01-01

    The innate immune response plays a critical role in fighting infection; however, innate immunity also can affect the pathogenesis of a variety of diseases, including sepsis, asthma, cancer, and atherosclerosis. To identify novel regulators of innate immunity, we performed comparative genomics RNA interference screens in the nematode Caenorhabditis elegans and mouse macrophages. These screens have uncovered many candidate regulators of the response to lipopolysaccharide (LPS), several of which interact physically in multiple species to form an innate immunity protein interaction network. This protein interaction network contains several proteins in the canonical LPS-responsive TLR4 pathway as well as many novel interacting proteins. Using RNAi and overexpression studies, we show that almost every gene in this network can modulate the innate immune response in mouse cell lines. We validate the importance of this network in innate immunity regulation in vivo using available mutants in C. elegans and mice. PMID:23209288

  1. The EOS land validation core sites: background information and current status

    USGS Publications Warehouse

    Morisette, J.; Privette, J.L.; Justice, C.; Olson, D.; Dwyer, John L.; Davis, P.; Starr, D.; Wickland, D.

    1999-01-01

    The EOS Land Validation Core Sites will provide the user community with timely ground, aircraft, and satellite data for EOS science and validation investigations. The sites, currently 24 distributed worldwide, represent a consensus among the instrument teams and validation investigators and span a range of global biome types (see Figure 1 and Table 1; Privette et al., 1999; Justice et al., 1998). The sites typically have a history of in situ and remote observations and can expect continued monitoring and land cover research activities. In many cases, a Core Site will have a tower equipped with above-canopy instrumentation for near-continuous sampling of landscape radiometric properties, energy and CO2 fluxes, meteorological variables, and atmospheric aerosol and water vapor. These will be complemented by intensive field measurement campaigns. The data collected at these sites will provide an important resource for the broader science community. These sites can also provide a foundation for a validation network supported and used by all international space agencies.

  2. Network coding based joint signaling and dynamic bandwidth allocation scheme for inter optical network unit communication in passive optical networks

    NASA Astrophysics Data System (ADS)

    Wei, Pei; Gu, Rentao; Ji, Yuefeng

    2014-06-01

    As an innovative and promising technology, network coding has been introduced to passive optical networks (PON) in recent years to support inter optical network unit (ONU) communication, yet the signaling process and dynamic bandwidth allocation (DBA) in PON with network coding (NC-PON) still need further study. Thus, we propose a joint signaling and DBA scheme for efficiently supporting differentiated services of inter ONU communication in NC-PON. In the proposed joint scheme, the signaling process lays the foundation to fulfill network coding in PON, and it can not only avoid the potential threat to downstream security in previous schemes but also be suitable for the proposed hybrid dynamic bandwidth allocation (HDBA) scheme. In HDBA, a DBA cycle is divided into two sub-cycles for applying different coding, scheduling and bandwidth allocation strategies to differentiated classes of services. Besides, as network traffic load varies, the entire upstream transmission window for all REPORT messages slides accordingly, leaving the transmission time of one or two sub-cycles to overlap with the bandwidth allocation calculation time at the optical line terminal (OLT), so that the upstream idle time can be efficiently eliminated. Performance evaluation results validate that, compared with the existing two DBA algorithms deployed in NC-PON, HDBA demonstrates the best quality of service (QoS) support in terms of delay for all classes of services, and in particular guarantees the end-to-end delay bound of high-class services. Specifically, HDBA can eliminate queuing delay and scheduling delay of high-class services, reduce those of lower-class services by at least 20%, and reduce the average end-to-end delay of all services by over 50%. Moreover, HDBA also achieves the maximum delay fairness between coded and uncoded lower-class services, and medium delay fairness for high-class services.

  3. Validation of MODIS Aerosol Retrieval Over Ocean

    NASA Technical Reports Server (NTRS)

    Remer, Lorraine A.; Tanre, Didier; Kaufman, Yoram J.; Ichoku, Charles; Mattoo, Shana; Levy, Robert; Chu, D. Allen; Holben, Brent N.; Dubovik, Oleg; Ahmad, Ziauddin

    2001-01-01

    The MODerate resolution Imaging Spectroradiometer (MODIS) algorithm for determining aerosol characteristics over ocean is performing with remarkable accuracy. A two-month data set of MODIS retrievals co-located with observations from the AErosol RObotic NETwork (AERONET) ground-based sunphotometer network provides the necessary validation. Spectral radiation measured by MODIS (in the range 550 - 2100 nm) is used to retrieve the aerosol optical thickness, effective particle radius and ratio between the submicron and micron size particles. MODIS-retrieved aerosol optical thicknesses at 660 nm and 870 nm fall within the expected uncertainty, with the ensemble average at 660 nm differing by only 2% from the AERONET observations and having virtually no offset. MODIS retrievals of aerosol effective radius agree with AERONET retrievals to within +/- 0.10 micrometers, while MODIS-derived ratios between large and small mode aerosol show definite correlation with ratios derived from AERONET data.
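
As a concrete illustration of "falling within the expected uncertainty": the expected-error envelope commonly quoted for MODIS aerosol optical thickness over ocean is Δτ = ±(0.03 + 0.05τ). A co-located MODIS/AERONET pair can be scored against that envelope as below; the numeric pairs are invented for the example:

```python
def within_envelope(tau_modis, tau_aeronet):
    """True if a MODIS retrieval falls inside the commonly quoted
    over-ocean expected-error envelope +/-(0.03 + 0.05 * tau)."""
    return abs(tau_modis - tau_aeronet) <= 0.03 + 0.05 * tau_aeronet

# co-located (MODIS, AERONET) optical-thickness pairs -- invented values
pairs = [(0.12, 0.10), (0.55, 0.50), (0.90, 0.70)]
fraction_ok = sum(within_envelope(m, a) for m, a in pairs) / len(pairs)
```

Validation exercises of this kind report the fraction of co-locations inside the envelope, alongside the regression slope and offset quoted in the abstract.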

  4. Improved classification of drainage networks using junction angles and secondary tributary lengths

    NASA Astrophysics Data System (ADS)

    Jung, Kichul; Marpu, Prashanth R.; Ouarda, Taha B. M. J.

    2015-06-01

    River networks in different regions have distinct characteristics generated by geological processes. These differences enable classification of drainage networks using several measures with many features of the networks. In this study, we propose a new approach that only uses the junction angles with secondary tributary lengths to directly classify different network types. This methodology is based on observations of 50 predefined channel networks. The cumulative distributions of secondary tributary lengths for different ranges of junction angles are used to obtain the descriptive values that are defined using a power-law representation. The averages of the values for the known networks are used to represent the classes, and any unclassified network can be classified based on the similarity of the representative values to those of the known classes. The methodology is applied to 10 networks in the United Arab Emirates and Oman and five networks in the USA, and the results are validated using the classification obtained with other methods.
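
The two ingredients described above — a power-law descriptor fitted to tributary-length distributions per angle bin, and assignment to the class with the closest representative value — can be sketched as follows. The maximum-likelihood form and the class names are illustrative assumptions, not the authors' exact procedure:

```python
import math

def powerlaw_exponent(lengths):
    """Hill-type maximum-likelihood exponent for a power-law tail:
    alpha = 1 + n / sum(ln(x / xmin))."""
    xmin = min(lengths)
    return 1.0 + len(lengths) / sum(math.log(x / xmin) for x in lengths)

def classify(value, class_representatives):
    """Assign the class whose representative descriptor is closest."""
    return min(class_representatives,
               key=lambda c: abs(class_representatives[c] - value))

# secondary tributary lengths in one junction-angle bin (toy numbers),
# and hypothetical class representatives averaged from known networks
alpha = powerlaw_exponent([1.0, 2.0, 4.0, 8.0])
label = classify(alpha, {"dendritic": 2.0, "trellis": 3.1})
```

In the paper's scheme the descriptor is computed per angle range and the comparison uses the full set of values; the nearest-representative step is the same idea in one dimension.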

  5. Identifying the starting point of a spreading process in complex networks.

    PubMed

    Comin, Cesar Henrique; Costa, Luciano da Fontoura

    2011-11-01

    When dealing with the dissemination of epidemics, one important question that can be asked is the location where the contamination began. In this paper, we analyze three spreading schemes and propose and validate an effective methodology for the identification of the source nodes. The method is based on the calculation of the centrality of the nodes on the sampled network, expressed here by degree, betweenness, closeness, and eigenvector centrality. We show that the source node tends to have the highest measurement values. The potential of the methodology is illustrated with respect to three theoretical complex network models as well as a real-world network, the email network of the University Rovira i Virgili.
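
The claim that the source tends to maximize centrality on the sampled (infected) subnetwork can be demonstrated on a toy graph. Below, a spreading process on a small perfect binary tree is sampled and the node with the highest closeness centrality (one of the four measures listed above) is taken as the source estimate; the graph and parameters are illustrative:

```python
from collections import deque

# perfect binary tree on 15 nodes: node i has children 2i+1 and 2i+2
n = 15
adj = {i: set() for i in range(n)}
for i in range(n):
    for c in (2 * i + 1, 2 * i + 2):
        if c < n:
            adj[i].add(c)
            adj[c].add(i)

def spread(source, steps):
    """Simple susceptible-infected spreading: every neighbour of an
    infected node becomes infected at each step."""
    infected, frontier = {source}, {source}
    for _ in range(steps):
        frontier = {w for u in frontier for w in adj[u]} - infected
        infected |= frontier
    return infected

def closeness(node, sub):
    """Closeness centrality of `node` within the sampled subgraph `sub`."""
    dist = {node: 0}
    q = deque([node])
    while q:
        u = q.popleft()
        for w in adj[u] & sub:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

source = 0
sub = spread(source, 3)                      # infects the whole tree
estimate = max(sub, key=lambda u: closeness(u, sub))
```

On this symmetric example the root is the exact closeness maximizer; on real networks the paper's point is statistical — the source *tends* to rank highest.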

  6. Disintegration of Sensorimotor Brain Networks in Schizophrenia.

    PubMed

    Kaufmann, Tobias; Skåtun, Kristina C; Alnæs, Dag; Doan, Nhat Trung; Duff, Eugene P; Tønnesen, Siren; Roussos, Evangelos; Ueland, Torill; Aminoff, Sofie R; Lagerberg, Trine V; Agartz, Ingrid; Melle, Ingrid S; Smith, Stephen M; Andreassen, Ole A; Westlye, Lars T

    2015-11-01

    Schizophrenia is a severe mental disorder associated with derogated function across various domains, including perception, language, motor, emotional, and social behavior. Due to its complex symptomatology, schizophrenia is often regarded a disorder of cognitive processes. Yet due to the frequent involvement of sensory and perceptual symptoms, it has been hypothesized that functional disintegration between sensory and cognitive processes mediates the heterogeneous and comprehensive schizophrenia symptomatology. Here, using resting-state functional magnetic resonance imaging in 71 patients and 196 healthy controls, we characterized the standard deviation in BOLD (blood-oxygen-level-dependent) signal amplitude and the functional connectivity across a range of functional brain networks. We investigated connectivity on the edge and node level using network modeling based on independent component analysis and utilized the brain network features in cross-validated classification procedures. Both amplitude and connectivity were significantly altered in patients, largely involving sensory networks. Reduced standard deviation in amplitude was observed in a range of visual, sensorimotor, and auditory nodes in patients. The strongest differences in connectivity implicated within-sensorimotor and sensorimotor-thalamic connections. Furthermore, sensory nodes displayed widespread alterations in the connectivity with higher-order nodes. We demonstrated robustness of effects across subjects by significantly classifying diagnostic group on the individual level based on cross-validated multivariate connectivity features. Taken together, the findings support the hypothesis of disintegrated sensory and cognitive processes in schizophrenia, and the foci of effects emphasize that targeting the sensory and perceptual domains may be key to enhance our understanding of schizophrenia pathophysiology.

  7. Validation and Inter-comparison Against Observations of GODAE Ocean View Ocean Prediction Systems

    NASA Astrophysics Data System (ADS)

    Xu, J.; Davidson, F. J. M.; Smith, G. C.; Lu, Y.; Hernandez, F.; Regnier, C.; Drevillon, M.; Ryan, A.; Martin, M.; Spindler, T. D.; Brassington, G. B.; Oke, P. R.

    2016-02-01

    For weather forecasts, validation of forecast performance is done at the end user level as well as by the meteorological forecast centers. In the development of ocean prediction capacity, the same level of care for ocean forecast performance and validation is needed. Herein we present results from a validation against observations of six global ocean forecast systems under the GODAE OceanView International Collaboration Network. These systems include the Global Ocean Ice Forecast System (GIOPS) developed by the Government of Canada, the two systems PSY3 and PSY4 from the French Mercator-Ocean forecasting group, the FOAM system from the UK Met Office, HYCOM-RTOFS from NOAA/NCEP/NWA of the USA, and the Australian Bluelink-OceanMAPS system from the CSIRO, the Australian Bureau of Meteorology and the Australian Navy. The observation data used in the comparison are sea surface temperature, sub-surface temperature, sub-surface salinity, sea level anomaly, and sea ice total concentration data. Results of the inter-comparison demonstrate forecast performance limits, strengths and weaknesses of each of the six systems. This work establishes validation protocols and routines by which all new prediction systems developed under the CONCEPTS Collaborative Network will be benchmarked prior to approval for operations. This includes the anticipated delivery of CONCEPTS regional prediction systems over the next two years, including a pan-Canadian 1/12th degree resolution ice-ocean prediction system and limited-area 1/36th degree resolution prediction systems. The validation approach of comparing forecasts to observations at the time and location of the observation is called Class 4 metrics. It has been adopted by major international ocean prediction centers, and will be recommended to JCOMM-WMO as a routine validation approach for operational oceanography worldwide.
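
Class 4 metrics, as described, score each forecast in observation space: the model field is brought to the time and location of every observation before any statistics are computed. A schematic sketch of that bookkeeping, where the locations, values, and the interpolation stand-in are all invented:

```python
import math

# (lat, lon, observed value), e.g. SST in degrees C -- invented numbers
observations = [
    (44.0, -63.0, 12.1),
    (45.0, -60.0, 11.4),
    (43.5, -62.0, 12.8),
]

def forecast_at(lat, lon):
    """Stand-in for interpolating a gridded model forecast field to the
    time and place of an observation."""
    return 12.0 + 0.2 * (lat - 44.0)

# forecast-minus-observation errors at the observation points
errors = [forecast_at(lat, lon) - obs for lat, lon, obs in observations]
bias = sum(errors) / len(errors)
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
```

Real Class 4 processing additionally stratifies such bias/RMSE statistics by region, depth, lead time and variable, but the forecast-to-observation matchup shown here is the defining step.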

  8. Optical simulations for experimental networks: lessons from MONET

    NASA Astrophysics Data System (ADS)

    Richards, Dwight H.; Jackel, Janet L.; Goodman, Matthew S.; Roudas, Ioannis; Wagner, Richard E.; Antoniades, Neophytos

    1999-08-01

    We have used optical simulations as a means of setting component requirements, assessing component compatibility, and designing experiments in the MONET (Multiwavelength Optical Networking) Project. This paper reviews the simulation method, gives some examples of the types of simulations that have been performed, and discusses the validation of the simulations.

  9. Distributed control for energy-efficient and fast consensus in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Manfredi, Sabato; Tucci, Edmondo Di

    2017-05-01

    The paper proposes a distributed control of node transmission radii in energy-harvesting wireless sensor networks for simultaneously coping with energy consumption and consensus responsiveness requirements. The stability of the closed-loop network under the proposed control law is proved. Simulation results show the effectiveness of the proposed approach in the nominal scenario as well as in the presence of uncertain node power requirements and harvesting system supply.

  10. VALUE - Validating and Integrating Downscaling Methods for Climate Change Research

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Widmann, Martin; Benestad, Rasmus; Kotlarski, Sven; Huth, Radan; Hertig, Elke; Wibig, Joanna; Gutierrez, Jose

    2013-04-01

    Our understanding of global climate change is mainly based on General Circulation Models (GCMs) with a relatively coarse resolution. Since climate change impacts are mainly experienced on regional scales, high-resolution climate change scenarios need to be derived from GCM simulations by downscaling. Several projects have been carried out over the last years to validate the performance of statistical and dynamical downscaling, yet several aspects have not been systematically addressed: variability on sub-daily, decadal and longer time-scales, extreme events, spatial variability and inter-variable relationships. Different downscaling approaches such as dynamical downscaling, statistical downscaling and bias correction approaches have not been systematically compared. Furthermore, collaboration between different communities, in particular regional climate modellers, statistical downscalers and statisticians, has been limited. To address these gaps, the EU Cooperation in Science and Technology (COST) action VALUE (www.value-cost.eu) has been brought to life. VALUE is a research network with participants from currently 23 European countries, running from 2012 to 2015. Its main aim is to systematically validate and develop downscaling methods for climate change research in order to improve regional climate change scenarios for use in climate impact studies. Inspired by the co-design idea of the international research initiative "future earth", stakeholders of climate change information have been involved in the definition of research questions to be addressed and are actively participating in the network. The key idea of VALUE is to identify the relevant weather and climate characteristics required as input for a wide range of impact models and to define an open framework to systematically validate these characteristics. Based on a range of benchmark data sets, in principle every downscaling method can be validated and compared with competing methods.

  11. Nonparametric Simulation of Signal Transduction Networks with Semi-Synchronized Update

    PubMed Central

    Nassiri, Isar; Masoudi-Nejad, Ali; Jalili, Mahdi; Moeini, Ali

    2012-01-01

    Simulating signal transduction in cellular signaling networks provides predictions of network dynamics by quantifying the changes in concentration and activity-level of the individual proteins. Since numerical values of kinetic parameters might be difficult to obtain, it is imperative to develop non-parametric approaches that combine the connectivity of a network with the response of individual proteins to signals which travel through the network. The activity levels of signaling proteins computed through existing non-parametric modeling tools do not show significant correlations with the observed values in experimental results. In this work we developed a non-parametric computational framework to describe the profile of the evolving process and the time course of the proportion of active form of molecules in the signal transduction networks. The model is also capable of incorporating perturbations. The model was validated on four signaling networks showing that it can effectively uncover the activity levels and trends of response during signal transduction process. PMID:22737250

  12. Designing communication and remote controlling of virtual instrument network system

    NASA Astrophysics Data System (ADS)

    Lei, Lin; Wang, Houjun; Zhou, Xue; Zhou, Wenjian

    2005-01-01

    In this paper, a virtual instrument network over a LAN, and ultimately remote control of virtual instruments, is realized based on virtual instrument technology and the LabWindows/CVI software platform. The virtual instrument network system is made up of three subsystems: a server subsystem, a telnet client subsystem, and a local instrument control subsystem. This paper introduces the structure of the LabWindows-based virtual instrument network in detail. Essential techniques are described, including the design of the network communication application, the client/server programming model, communication between a remote PC and the server, the hand-off of workstation control, and the server program. The virtual instrument network may also be connected to the wider Internet. The above technology has been applied in an existing electronic-measurement virtual instrument network, which has verified its practical value. Experiments and applications validate that this design is effective.

  13. Optimal social-networking strategy is a function of socioeconomic conditions.

    PubMed

    Oishi, Shigehiro; Kesebir, Selin

    2012-12-01

    In the two studies reported here, we examined the relation among residential mobility, economic conditions, and optimal social-networking strategy. In study 1, a computer simulation showed that regardless of economic conditions, having a broad social network with weak friendship ties is advantageous when friends are likely to move away. By contrast, having a small social network with deep friendship ties is advantageous when the economy is unstable but friends are not likely to move away. In study 2, we examined the validity of the computer simulation using a sample of American adults. Results were consistent with the simulation: American adults living in a zip code where people are residentially stable but economically challenged were happier if they had a narrow but deep social network, whereas in other socioeconomic conditions, people were generally happier if they had a broad but shallow networking strategy. Together, our studies demonstrate that the optimal social-networking strategy varies as a function of socioeconomic conditions.

  14. Formal Models of the Network Co-occurrence Underlying Mental Operations.

    PubMed

    Bzdok, Danilo; Varoquaux, Gaël; Grisel, Olivier; Eickenberg, Michael; Poupon, Cyril; Thirion, Bertrand

    2016-06-01

    Systems neuroscience has identified a set of canonical large-scale networks in humans. These have predominantly been characterized by resting-state analyses of the task-unconstrained, mind-wandering brain. Their explicit relationship to defined task performance is largely unknown and challenging to characterize. The present work contributes a multivariate statistical learning approach that can extract the major brain networks and quantify their configuration during various psychological tasks. The method is validated in two extensive datasets (n = 500 and n = 81) by model-based generation of synthetic activity maps from recombination of shared network topographies. To study a use case, we formally revisited the poorly understood difference between neural activity underlying idling versus goal-directed behavior. We demonstrate that task-specific neural activity patterns can be explained by plausible combinations of resting-state networks. The possibility of decomposing a mental task into the relative contributions of major brain networks, the "network co-occurrence architecture" of a given task, opens an alternative access to the neural substrates of human cognition.

  15. Independent verification and validation report of Washington state ferries' wireless high speed data project

    DOT National Transportation Integrated Search

    2008-06-30

    The following Independent Verification and Validation (IV&V) report documents and presents the results of a study of the Washington State Ferries Prototype Wireless High Speed Data Network. The purpose of the study was to evaluate and determine if re...

  16. Vehicle-network defensive aids suite

    NASA Astrophysics Data System (ADS)

    Rapanotti, John

    2005-05-01

    Defensive Aids Suites (DAS) developed for vehicles can be extended to the vehicle network level. The vehicle network, typically comprising four platoon vehicles, will benefit from improved communications and automation based on low latency response to threats from a flexible, dynamic, self-healing network environment. Improved DAS performance and reliability relies on four complementary sensor technologies: acoustics; visible and infrared optics; laser detection; and radar. Long-range passive threat detection and avoidance is based on dual-purpose optics, primarily designed for manoeuvring, targeting and surveillance, combined with dazzling, obscuration and countermanoeuvres. Short-range active armour is based on search and track radar and intercepting grenades to defeat the threat. Acoustic threat detection increases the overall robustness of the DAS and extends the detection range to include small calibers. Finally, detection of active targeting systems is carried out with laser and radar warning receivers. Synthetic scene generation will provide the integrated environment needed to investigate, develop and validate these new capabilities. Computer generated imagery, based on validated models and an acceptable set of benchmark vignettes, can be used to investigate and develop fieldable sensors driven by real-time algorithms and countermeasure strategies. The synthetic scene environment will be suitable for sensor and countermeasure development in hardware-in-the-loop simulation. The research effort focuses on two key technical areas: a) computing aspects of the synthetic scene generation and b) development of adapted models and databases. OneSAF is being developed for research and development, in addition to the original requirement of Simulation and Modelling for Acquisition, Rehearsal, Requirements and Training (SMARRT), and is becoming useful as a means for transferring technology to other users, researchers and contractors.

  17. Learning a Markov Logic network for supervised gene regulatory network inference

    PubMed Central

    2013-01-01

    Background Gene regulatory network inference remains a challenging problem in systems biology despite the numerous approaches that have been proposed. When substantial knowledge on a gene regulatory network is already available, supervised network inference is appropriate. Such a method builds a binary classifier able to assign a class (Regulation/No regulation) to an ordered pair of genes. Once learnt, the pairwise classifier can be used to predict new regulations. In this work, we explore the framework of Markov Logic Networks (MLN) that combine features of probabilistic graphical models with the expressivity of first-order logic rules. Results We propose to learn a Markov Logic network, i.e., a set of weighted rules that conclude on the predicate “regulates”, starting from a known gene regulatory network involved in the switch proliferation/differentiation of keratinocyte cells, a set of experimental transcriptomic data and various descriptions of genes all encoded into first-order logic. As training data are unbalanced, we use asymmetric bagging to learn a set of MLNs. The prediction of a new regulation can then be obtained by averaging predictions of individual MLNs. As a side contribution, we propose three in silico tests to assess the performance of any pairwise classifier in various network inference tasks on real datasets. A first test consists of measuring the average performance on a balanced edge prediction problem; a second one deals with the ability of the classifier, once enhanced by asymmetric bagging, to update a given network. Finally our main result concerns a third test that measures the ability of the method to predict regulations with a new set of genes. As expected, MLN, when provided with only numerical discretized gene expression data, does not perform as well as a pairwise SVM in terms of AUPR. However, when a more complete description of gene properties is provided by heterogeneous sources, MLN achieves the same performance as a black-box model.

  18. Learning a Markov Logic network for supervised gene regulatory network inference.

    PubMed

    Brouard, Céline; Vrain, Christel; Dubois, Julie; Castel, David; Debily, Marie-Anne; d'Alché-Buc, Florence

    2013-09-12

    Gene regulatory network inference remains a challenging problem in systems biology despite the numerous approaches that have been proposed. When substantial knowledge on a gene regulatory network is already available, supervised network inference is appropriate. Such a method builds a binary classifier able to assign a class (Regulation/No regulation) to an ordered pair of genes. Once learnt, the pairwise classifier can be used to predict new regulations. In this work, we explore the framework of Markov Logic Networks (MLN) that combine features of probabilistic graphical models with the expressivity of first-order logic rules. We propose to learn a Markov Logic network, i.e., a set of weighted rules that conclude on the predicate "regulates", starting from a known gene regulatory network involved in the switch proliferation/differentiation of keratinocyte cells, a set of experimental transcriptomic data and various descriptions of genes all encoded into first-order logic. As training data are unbalanced, we use asymmetric bagging to learn a set of MLNs. The prediction of a new regulation can then be obtained by averaging predictions of individual MLNs. As a side contribution, we propose three in silico tests to assess the performance of any pairwise classifier in various network inference tasks on real datasets. A first test consists of measuring the average performance on a balanced edge prediction problem; a second one deals with the ability of the classifier, once enhanced by asymmetric bagging, to update a given network. Finally our main result concerns a third test that measures the ability of the method to predict regulations with a new set of genes. As expected, MLN, when provided with only numerical discretized gene expression data, does not perform as well as a pairwise SVM in terms of AUPR. However, when a more complete description of gene properties is provided by heterogeneous sources, MLN achieves the same performance as a black-box model such as a pairwise SVM.
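
Asymmetric bagging, as used above for the unbalanced training set, keeps every minority-class (positive) example in each bag and resamples only the majority class, then averages the ensemble's predictions. A minimal sketch with a stand-in centroid classifier (not an MLN) on invented 1-D data:

```python
import random

random.seed(0)

# toy unbalanced data: 40 negatives near 0, only 5 positives near 3
neg = [(random.gauss(0.0, 0.5), 0) for _ in range(40)]
pos = [(random.gauss(3.0, 0.5), 1) for _ in range(5)]

def centroid_classifier(train):
    """Stand-in base learner: assign the class of the nearer centroid."""
    c0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
    c1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
    return lambda x: 1 if abs(x - c1) < abs(x - c0) else 0

# asymmetric bagging: every bag keeps all positives plus a fresh
# undersample of the negatives of the same size
models = []
for _ in range(25):
    bag = pos + random.sample(neg, len(pos))
    models.append(centroid_classifier(bag))

def predict(x):
    """Average the ensemble's votes, as the paper averages MLN predictions."""
    return sum(m(x) for m in models) / len(models)
```

Each base learner sees a balanced bag, so the ensemble is not dominated by the "No regulation" majority, while averaging recovers a graded score.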

  19. Divisibility patterns of natural numbers on a complex network.

    PubMed

    Shekatkar, Snehal M; Bhagwat, Chandrasheel; Ambika, G

    2015-09-16

    Investigation of divisibility properties of natural numbers is one of the most important themes in the theory of numbers. Various tools have been developed over the centuries to discover and study the various patterns in the sequence of natural numbers in the context of divisibility. In the present paper, we study the divisibility of natural numbers using the framework of a growing complex network. In particular, using tools from the field of statistical inference, we show that the network is scale-free but has a non-stationary degree distribution. Along with this, we report a new kind of similarity pattern for the local clustering, which we call "stretching similarity", in this network. We also show that the various characteristics like average degree, global clustering coefficient and assortativity coefficient of the network vary smoothly with the size of the network. Using analytical arguments we estimate the asymptotic behavior of global clustering and average degree which is validated using numerical analysis.
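
The network in question connects two natural numbers whenever one divides the other. A direct construction up to n = 100 (starting from 2, which leaves out the trivially all-connected 1; the starting point is a choice made for this illustration) shows the hub-dominated degree pattern behind the scale-free claim:

```python
def divisibility_network(n):
    """Graph on {2..n} with an edge whenever one number divides the other."""
    adj = {i: set() for i in range(2, n + 1)}
    for i in range(2, n + 1):
        for j in range(2 * i, n + 1, i):   # j runs over proper multiples of i
            adj[i].add(j)
            adj[j].add(i)
    return adj

adj = divisibility_network(100)
degrees = {u: len(nb) for u, nb in adj.items()}
avg_degree = sum(degrees.values()) / len(degrees)
hub = max(degrees, key=degrees.get)        # small divisors act as hubs
```

Small numbers such as 2 and 3 accumulate edges to all of their multiples, while large primes stay nearly isolated, giving the heavy-tailed, non-stationary degree distribution studied in the paper.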

  20. Popularity versus similarity in growing networks.

    PubMed

    Papadopoulos, Fragkiskos; Kitsak, Maksim; Serrano, M Ángeles; Boguñá, Marián; Krioukov, Dmitri

    2012-09-27

    The principle that 'popularity is attractive' underlies preferential attachment, which is a common explanation for the emergence of scaling in growing networks. If new connections are made preferentially to more popular nodes, then the resulting distribution of the number of connections possessed by nodes follows power laws, as observed in many real networks. Preferential attachment has been directly validated for some real networks (including the Internet), and can be a consequence of different underlying processes based on node fitness, ranking, optimization, random walks or duplication. Here we show that popularity is just one dimension of attractiveness; another dimension is similarity. We develop a framework in which new connections optimize certain trade-offs between popularity and similarity, instead of simply preferring popular nodes. The framework has a geometric interpretation in which popularity preference emerges from local optimization. As opposed to preferential attachment, our optimization framework accurately describes the large-scale evolution of technological (the Internet), social (trust relationships between people) and biological (Escherichia coli metabolic) networks, predicting the probability of new links with high precision. The framework that we have developed can thus be used for predicting new links in evolving networks, and provides a different perspective on preferential attachment as an emergent phenomenon.
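
The popularity-similarity trade-off can be made concrete with the simplest optimization rule from this line of work: each new node t picks the m existing nodes s that minimize the product of birth time (a proxy for popularity) and angular distance on a circle (similarity). A toy run, with sizes and seed chosen arbitrarily:

```python
import math
import random

random.seed(42)

m = 2                     # links added per new node
angles = {}               # node -> angular (similarity) coordinate
adj = {}

for t in range(1, 31):    # nodes appear in order of birth time t
    theta = random.uniform(0.0, 2.0 * math.pi)
    adj[t] = set()
    if t > 1:
        def attractiveness(s):
            d = abs(theta - angles[s])
            d = min(d, 2.0 * math.pi - d)   # distance on the circle
            return s * d                    # birth time x angular distance
        # connect to the m existing nodes optimizing the trade-off
        for s in sorted(angles, key=attractiveness)[:m]:
            adj[t].add(s)
            adj[s].add(t)
    angles[t] = theta
```

Older (smaller-t) nodes win ties over distant newcomers, so preferential-attachment-like degree growth emerges from the local optimization, while similar nodes still link across ages.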

  1. Bayesian network modelling of upper gastrointestinal bleeding

    NASA Astrophysics Data System (ADS)

    Aisha, Nazziwa; Shohaimi, Shamarina; Adam, Mohd Bakri

    2013-09-01

    Bayesian networks are graphical probabilistic models that represent causal and other relationships between domain variables. In the context of medical decision making, these models have been explored to help in medical diagnosis and prognosis. In this paper, we discuss the Bayesian network formalism in building medical support systems and we learn a tree augmented naive Bayes network (TAN) from gastrointestinal bleeding data. The accuracy of the TAN in classifying the source of gastrointestinal bleeding into upper or lower source is obtained. The TAN achieves a high classification accuracy of 86% and an area under the curve of 92%. A sensitivity analysis of the model shows relatively high levels of entropy reduction for color of the stool, history of gastrointestinal bleeding, consistency and the ratio of blood urea nitrogen to creatinine. The TAN facilitates the identification of the source of gastrointestinal bleeding and requires further validation.
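
A TAN classifier augments naive Bayes with a tree of feature-to-feature dependencies; its naive Bayes core, on invented categorical records loosely mimicking the stool-color and bleeding-history predictors mentioned above, looks like this (tree augmentation omitted for brevity; the data are fabricated):

```python
from collections import Counter, defaultdict

# invented records: (stool color, prior bleeding history, bleeding source)
data = [
    ("black", "yes", "upper"), ("black", "no", "upper"),
    ("black", "yes", "upper"), ("red", "no", "lower"),
    ("red", "yes", "lower"), ("red", "no", "lower"),
]

def fit(rows):
    """Count class priors and per-feature conditional frequencies."""
    prior = Counter(c for *_, c in rows)
    cond = defaultdict(Counter)
    for *feats, c in rows:
        for i, f in enumerate(feats):
            cond[(i, c)][f] += 1
    return prior, cond

def predict(model, feats):
    """Pick the class maximizing prior x smoothed feature likelihoods."""
    prior, cond = model
    def score(c):
        p = prior[c] / sum(prior.values())
        for i, f in enumerate(feats):
            p *= (cond[(i, c)][f] + 1) / (prior[c] + 2)   # Laplace smoothing
        return p
    return max(prior, key=score)

model = fit(data)
```

The TAN learned in the paper additionally links each feature to one parent feature (forming a tree), which relaxes the independence assumption made here.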

  2. Deciphering microbial interactions and detecting keystone species with co-occurrence networks

    PubMed Central

    Berry, David; Widder, Stefanie

    2014-01-01

    Co-occurrence networks produced from microbial survey sequencing data are frequently used to identify interactions between community members. While this approach has potential to reveal ecological processes, it has been insufficiently validated due to the technical limitations inherent in studying complex microbial ecosystems. Here, we simulate multi-species microbial communities with known interaction patterns using generalized Lotka-Volterra dynamics. We then construct co-occurrence networks and evaluate how well networks reveal the underlying interactions and how experimental and ecological parameters can affect network inference and interpretation. We find that co-occurrence networks can recapitulate interaction networks under certain conditions, but that they lose interpretability when the effects of habitat filtering become significant. We demonstrate that networks suffer from local hot spots of spurious correlation in the neighborhood of hub species that engage in many interactions. We also identify topological features associated with keystone species in co-occurrence networks. This study provides a substantiated framework to guide environmental microbiologists in the construction and interpretation of co-occurrence networks from microbial survey datasets. PMID:24904535
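
The basic construction being evaluated — correlate abundance profiles across samples and keep strong associations as edges — can be sketched as follows. Pearson correlation and the 0.8 cutoff are arbitrary choices for this example; real pipelines often prefer rank correlations with significance filtering, and the abundance table is invented:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length abundance profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# toy abundance table: rows = taxa, columns = samples (invented values)
taxa = {
    "A": [5, 6, 7, 8, 9],
    "B": [10, 12, 14, 16, 18],   # tracks A -> positive co-occurrence
    "C": [9, 7, 5, 3, 1],        # mirrors A -> negative association
    "D": [4, 9, 2, 7, 5],        # uncorrelated noise
}

threshold = 0.8
edges = set()
names = sorted(taxa)
for i, u in enumerate(names):
    for v in names[i + 1:]:
        if abs(pearson(taxa[u], taxa[v])) >= threshold:
            edges.add((u, v))
```

Note that A, B and C end up mutually connected even though only some of those associations need reflect direct interaction — exactly the kind of transitive, habitat-driven correlation the paper warns can masquerade as ecological interaction.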

  4. Stability and stabilisation of a class of networked dynamic systems

    NASA Astrophysics Data System (ADS)

    Liu, H. B.; Wang, D. Q.

    2018-04-01

    We investigate the stability and stabilisation of a linear time-invariant networked heterogeneous system with arbitrarily connected subsystems. A new necessary and sufficient condition for stability, based on linear matrix inequalities, is derived, from which the stabilisation result follows. The obtained conditions efficiently exploit the block-diagonal structure of the system parameter matrices and the sparseness of the subsystem connection matrix. Moreover, a sufficient condition that depends only on each individual subsystem is also presented for the stabilisation of large-scale networked systems. Numerical simulations show that these conditions are computationally effective in the analysis and synthesis of a large-scale networked system.
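    For intuition, here is a stability check on a toy networked system with block-diagonal subsystem dynamics and a sparse interconnection; all matrices are invented for illustration, and the paper certifies stability via LMIs rather than by computing eigenvalues directly:

```python
import numpy as np

# Three stable heterogeneous subsystems on the block diagonal; couplings form
# a sparse subsystem connection matrix. All matrices are illustrative.
blocks = [
    np.array([[-2.0, 1.0], [0.0, -3.0]]),
    np.array([[-1.0, 0.5], [-0.5, -1.0]]),
    np.array([[-4.0, 2.0], [1.0, -4.0]]),
]
A = np.zeros((6, 6))
for i, blk in enumerate(blocks):
    A[2 * i:2 * i + 2, 2 * i:2 * i + 2] = blk

L = np.zeros((6, 6))
L[0, 2] = L[2, 4] = L[4, 0] = 0.5  # sparse ring of subsystem couplings

A_net = A + L  # networked dynamics dx/dt = A_net x

# Hurwitz stability: all eigenvalues must have negative real part.
print(np.max(np.linalg.eigvals(A_net).real))  # negative -> stable
```

    Exploiting the block-diagonal and sparse structure, as the paper's conditions do, is what makes such checks scale to large networks.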

  5. Analysis and logical modeling of biological signaling transduction networks

    NASA Astrophysics Data System (ADS)

    Sun, Zhongyao

    The study of network theory and its applications spans a multitude of seemingly disparate fields of science and technology: computer science, biology, social science, linguistics, etc. It is the intrinsic similarities embedded in the entities of these systems, and in the way they interact with one another, that link them together. In this dissertation, I present three projects, spanning both theoretical analysis and application, which primarily focus on signal transduction networks in biology. In these projects, I assembled a network model through an extensive survey of the literature, performed model-based simulations and validation, analyzed network topology, and proposed a novel network measure. The application of network modeling to the system of stomatal opening in plants revealed a fundamental question about the process that had been left unanswered for decades. The novel measure of the redundancy of signal transduction networks with Boolean dynamics, computed from the maximum node-independent elementary signaling mode set, accurately predicts the effect of single-node knockouts in such signaling processes. The three projects as an organic whole advance the understanding of a real system as well as the behavior of such network models, giving me an opportunity to glimpse the dazzling facets of the immense world of network science.

  6. Diversity Performance Analysis on Multiple HAP Networks.

    PubMed

    Dong, Feihong; Li, Min; Gong, Xiangwu; Li, Hongjun; Gao, Fengyue

    2015-06-30

    One of the main design challenges in wireless sensor networks (WSNs) is achieving a high-data-rate transmission for individual sensor devices. The high altitude platform (HAP) is an important communication relay platform for WSNs and next-generation wireless networks. Multiple-input multiple-output (MIMO) techniques provide the diversity and multiplexing gain, which can improve the network performance effectively. In this paper, a virtual MIMO (V-MIMO) model is proposed by networking multiple HAPs with the concept of multiple assets in view (MAV). In a shadowed Rician fading channel, the diversity performance is investigated. The probability density function (PDF) and cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) are derived. In addition, the average symbol error rate (ASER) with BPSK and QPSK is given for the V-MIMO model. The system capacity is studied for both perfect channel state information (CSI) and unknown CSI individually. The ergodic capacity with various SNR and Rician factors for different network configurations is also analyzed. The simulation results validate the effectiveness of the performance analysis. It is shown that the performance of the HAPs network in WSNs can be significantly improved by utilizing the MAV to achieve overlapping coverage, with the help of the V-MIMO techniques.
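    A Monte Carlo sketch of the BPSK symbol error rate over a (non-shadowed) Rician fading channel, one ingredient of the analysis above; the shadowing component is omitted here and all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def rician_fading(K, size):
    """Samples of a Rician fading amplitude with Rician factor K (linear):
    LOS power K/(K+1), scattered power 1/(K+1), unit mean power overall."""
    s = np.sqrt(K / (K + 1))
    sigma = np.sqrt(1 / (2 * (K + 1)))
    h = s + sigma * (rng.normal(size=size) + 1j * rng.normal(size=size))
    return np.abs(h)

def bpsk_ser(snr_db, K, n=200_000):
    """Monte Carlo symbol error rate of coherent BPSK over a Rician channel."""
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, n)
    symbols = 2 * bits - 1                     # map {0,1} -> {-1,+1}
    h = rician_fading(K, n)
    noise = rng.normal(size=n) / np.sqrt(2 * snr)
    received = h * symbols + noise
    return np.mean((received > 0).astype(int) != bits)

for snr_db in (0, 5, 10):
    print(snr_db, "dB:", bpsk_ser(snr_db, K=5))
```

    Diversity combining across multiple HAPs, as in the paper's V-MIMO model, would average over several independent fading draws per symbol and push these error rates down further.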

  8. EOS-Aura's Ozone Monitoring Instrument (OMI): Validation Requirements

    NASA Technical Reports Server (NTRS)

    Brinksma, E. J.; McPeters, R.; deHaan, J. F.; Levelt, P. F.; Hilsenrath, E.; Bhartia, P. K.

    2003-01-01

    OMI is an advanced hyperspectral instrument that measures backscattered radiation in the UV and visible. It will be flown as part of the EOS Aura mission and provide data on atmospheric chemistry that are highly synergistic with the other Aura instruments HIRDLS, MLS, and TES. OMI is designed to measure total ozone, aerosols, cloud information, and UV irradiances, continuing the TOMS series of global mapped products but with higher spatial resolution. In addition, its hyperspectral capability enables measurements of trace gases such as SO2, NO2, HCHO, BrO, and OClO. A plan for validation of the various OMI products is now being formulated. Validation of the total column and UVB products will rely heavily on existing networks of instruments, such as the NDSC. NASA and its European partners are planning aircraft missions for the validation of the Aura instruments. New instruments and techniques (DOAS systems, for example) will need to be developed, both ground- and aircraft-based. Lidar systems are needed for validation of the vertical distributions of ozone, aerosols, NO2, and possibly SO2. The validation emphasis will be on the retrieval of these products under polluted conditions. This is challenging because the retrievals often depend on the tropospheric profiles of the product in question, and because of large spatial variations in the troposphere. Most existing ground stations are located in, and equipped for, pristine environments. This is also true for almost all NDSC stations. OMI validation will therefore need ground-based sites in polluted environments and specially developed instruments, complementing the existing instrumentation.

  9. Finite-time synchronization of uncertain coupled switched neural networks under asynchronous switching.

    PubMed

    Wu, Yuanyuan; Cao, Jinde; Li, Qingbo; Alsaedi, Ahmed; Alsaadi, Fuad E

    2017-01-01

    This paper deals with the finite-time synchronization problem for a class of uncertain coupled switched neural networks under asynchronous switching. By constructing appropriate Lyapunov-like functionals and using the average dwell time technique, some sufficient criteria are derived to guarantee the finite-time synchronization of the considered uncertain coupled switched neural networks. Meanwhile, an asynchronous switching feedback controller is designed to synchronize the concerned networks in finite time. Finally, two numerical examples are introduced to show the validity of the main results.

  10. Minimizing communication cost among distributed controllers in software defined networks

    NASA Astrophysics Data System (ADS)

    Arlimatti, Shivaleela; Elbreiki, Walid; Hassan, Suhaidi; Habbal, Adib; Elshaikh, Mohamed

    2016-08-01

    Software Defined Networking (SDN) is a new paradigm that increases the flexibility of today's networks by making them programmable. The fundamental idea behind this architecture is to simplify network complexity by decoupling the control plane from the data plane of the network devices and centralizing the control plane. Recently, controllers have been distributed to remove the single point of failure and to increase scalability and flexibility during workload distribution. Even though distributed controllers are flexible and scalable enough to accommodate a growing number of network switches, the intercommunication cost between distributed controllers remains a challenging issue in Software Defined Network environments. This paper aims to fill that gap by proposing a new mechanism that minimizes intercommunication cost via graph partitioning, an NP-hard problem. The proposed methodology swaps network elements between controller domains to minimize communication cost by calculating the communication gain. The swapping of elements minimizes inter- and intra-domain communication cost among network domains. We validate our work in the OMNeT++ simulation environment. Simulation results show that the proposed mechanism reduces the inter-domain communication cost among controllers compared to traditional distributed controllers.
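    The element-swapping idea can be sketched as a greedy local search over controller-domain assignments; the traffic graph, weights, and initial assignment below are invented, and a production partitioner would use stronger heuristics (e.g., Kernighan-Lin) for this NP-hard problem:

```python
# Weighted "switch traffic" graph: edge weight = communication volume between
# network elements. Topology, weights, and domains are invented.
edges = {
    ("a", "b"): 5, ("a", "c"): 1, ("b", "c"): 2,
    ("c", "d"): 6, ("d", "e"): 4, ("b", "e"): 1,
}

def inter_cost(assign):
    """Total weight of edges crossing controller domains."""
    return sum(w for (u, v), w in edges.items() if assign[u] != assign[v])

def greedy_partition(assign):
    """First-improvement local search: repeatedly reassign any node whose move
    to another domain reduces the inter-domain communication cost."""
    improved = True
    while improved:
        improved = False
        base = inter_cost(assign)
        for node in list(assign):
            for dom in set(assign.values()):
                if dom == assign[node]:
                    continue
                trial = dict(assign, **{node: dom})
                cost = inter_cost(trial)
                if cost < base:
                    assign, base, improved = trial, cost, True
    return assign

start = {"a": 0, "b": 0, "c": 1, "d": 1, "e": 0}
end = greedy_partition(start)
print(end, "cost:", inter_cost(end))
```

    Here the move with positive communication gain (reassigning `e`) drops the inter-domain cost from 7 to 4; a real deployment would also balance controller load, which this sketch ignores.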

  11. The Validity of Quasi-Steady-State Approximations in Discrete Stochastic Simulations

    PubMed Central

    Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R.

    2014-01-01

    In biochemical networks, reactions often occur on disparate timescales and can be characterized as either fast or slow. The quasi-steady-state approximation (QSSA) utilizes timescale separation to project models of biochemical networks onto lower-dimensional slow manifolds. As a result, fast elementary reactions are not modeled explicitly, and their effect is captured by nonelementary reaction-rate functions (e.g., Hill functions). The accuracy of the QSSA applied to deterministic systems depends on how well timescales are separated. Recently, it has been proposed to use the nonelementary rate functions obtained via the deterministic QSSA to define propensity functions in stochastic simulations of biochemical networks. In this approach, termed the stochastic QSSA, fast reactions that are part of nonelementary reactions are not simulated, greatly reducing computation time. However, it is unclear when the stochastic QSSA provides an accurate approximation of the original stochastic simulation. We show that, unlike the deterministic QSSA, the validity of the stochastic QSSA does not follow from timescale separation alone, but also depends on the sensitivity of the nonelementary reaction rate functions to changes in the slow species. The stochastic QSSA becomes more accurate when this sensitivity is small. Different types of QSSAs result in nonelementary functions with different sensitivities, and the total QSSA results in less sensitive functions than the standard or the prefactor QSSA. We prove that, as a result, the stochastic QSSA becomes more accurate when nonelementary reaction functions are obtained using the total QSSA. Our work provides an apparently novel condition for the validity of the QSSA in stochastic simulations of biochemical reaction networks with disparate timescales. PMID:25099817
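    A minimal Gillespie simulation using a nonelementary Hill-type propensity, in the spirit of the stochastic QSSA described above: the fast promoter-binding reactions are folded into the Hill function rather than simulated explicitly. All rate constants are illustrative:

```python
import random

random.seed(42)

# Protein P is produced with a repressive Hill-type propensity (the
# nonelementary rate left after the QSSA) and degraded linearly.
def hill_propensity(p, vmax=20.0, K=10.0, n_coeff=2):
    return vmax * K**n_coeff / (K**n_coeff + p**n_coeff)

def gillespie(t_end=200.0, gamma=0.1):
    t, p = 0.0, 0
    while t < t_end:
        a_prod = hill_propensity(p)
        a_deg = gamma * p
        a_total = a_prod + a_deg
        t += random.expovariate(a_total)        # time to the next reaction
        if random.random() < a_prod / a_total:  # choose which reaction fires
            p += 1
        else:
            p -= 1
    return p

samples = [gillespie() for _ in range(50)]
print(sum(samples) / len(samples))  # near the deterministic steady state (~26)
```

    The paper's point is that whether such a simulation matches the full model depends on which QSSA produced the Hill function and how sensitive it is to the slow species, not on timescale separation alone.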

  12. Inferring Gene Regulatory Networks by Singular Value Decomposition and Gravitation Field Algorithm

    PubMed Central

    Zheng, Ming; Wu, Jia-nan; Huang, Yan-xin; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang

    2012-01-01

    Reconstruction of gene regulatory networks (GRNs) is of utmost interest and has become a challenging computational problem in systems biology. However, every existing inference algorithm based on gene expression profiles has its own advantages and disadvantages; in particular, the effectiveness and efficiency of previous algorithms remain limited. In this work, we propose a novel inference algorithm for gene expression data based on a differential equation model. The algorithm combines two methods for inferring GRNs. Before reconstructing GRNs, singular value decomposition is used to decompose the gene expression data, determine the algorithm's solution space, and obtain all candidate solutions of GRNs. Within this generated family of candidate solutions, a modified gravitation field algorithm is used to infer GRNs by optimizing the criteria of the differential equation model and searching for the best network structure. The proposed algorithm is validated on both a simulated scale-free network and a real benchmark gene regulatory network from a networks database. The Bayesian method and the traditional differential equation model were also used to infer GRNs, and their results were compared with those of the proposed algorithm; a genetic algorithm and simulated annealing were likewise used to evaluate the gravitation field algorithm. The cross-validation results confirmed the effectiveness of our algorithm, which significantly outperforms previous algorithms. PMID:23226565
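    A sketch of the SVD step on synthetic data for a linear differential-equation model dx/dt = Jx: with fewer samples than genes the data underdetermine J, and the SVD exposes the unconstrained directions that a search method (the paper's gravitation field algorithm, not reproduced here) would then explore. All dimensions and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
genes, samples = 6, 4  # fewer samples than genes: underdetermined problem

# Synthetic ground-truth network and noise-free data for dx/dt = J x.
J_true = rng.normal(0, 1, (genes, genes)) * (rng.random((genes, genes)) < 0.3)
X = rng.normal(0, 1, (genes, samples))  # expression snapshots
Xdot = J_true @ X                        # corresponding time derivatives

# SVD of the data matrix characterizes the candidate-solution space: each row
# of J may be shifted by any vector in the left null space of X and still fit
# the data exactly.
U, s, Vt = np.linalg.svd(X, full_matrices=True)
rank = int(np.sum(s > 1e-10))
null_basis = U[:, rank:]  # directions unconstrained by the data

J_min = Xdot @ np.linalg.pinv(X)  # minimum-norm candidate solution
print("rank:", rank, "free directions per row:", null_basis.shape[1])
```

    Any candidate `J_min + C @ null_basis.T` (for an arbitrary coefficient matrix `C`) fits the data equally well, which is exactly the family a heuristic search then ranks by the model criteria.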

  13. Initial validation of the Soil Moisture Active Passive mission using USDA-ARS watersheds

    USDA-ARS?s Scientific Manuscript database

    The Soil Moisture Active Passive (SMAP) Mission was launched in January 2015 to measure global surface soil moisture. The calibration and validation program of SMAP relies upon an international cooperative of in situ networks to provide ground truth references across a variety of landscapes. The U...

  14. Modeling and optimization of Quality of Service routing in Mobile Ad hoc Networks

    NASA Astrophysics Data System (ADS)

    Rafsanjani, Marjan Kuchaki; Fatemidokht, Hamideh; Balas, Valentina Emilia

    2016-01-01

    Mobile ad hoc networks (MANETs) are groups of mobile nodes that are connected without a fixed infrastructure. In these networks, nodes communicate with each other by forming single-hop or multi-hop paths. To design effective mobile ad hoc networks, it is important to evaluate the performance of multi-hop paths. In this paper, we present a mathematical model of a routing protocol in terms of the energy consumption and packet delivery ratio of multi-hop paths. In this model, we use geometric random graphs rather than random graphs. Our proposed model finds effective paths that minimize the energy consumption and maximize the packet delivery ratio of the network. The mathematical model is validated through simulation.
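    A geometric-random-graph sketch of the setting: nodes placed uniformly in the unit square connect whenever they are within a transmission radius, and each multi-hop path gets toy energy and delivery-ratio metrics. The node count, radius, and per-hop models are invented, not the paper's:

```python
import math
import random

random.seed(7)

# Geometric random graph: nodes uniform in the unit square, linked when
# within transmission radius. Parameters are illustrative.
N, RADIUS = 60, 0.2
nodes = [(random.random(), random.random()) for _ in range(N)]

def neighbors(i):
    xi, yi = nodes[i]
    return [j for j, (xj, yj) in enumerate(nodes)
            if j != i and math.hypot(xi - xj, yi - yj) <= RADIUS]

def shortest_hop_path(src, dst):
    """Breadth-first search for a minimum-hop multi-hop path."""
    frontier, seen = [[src]], {src}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == dst:
            return path
        for j in neighbors(path[-1]):
            if j not in seen:
                seen.add(j)
                frontier.append(path + [j])
    return None  # src and dst lie in different components

def path_metrics(path, p_link=0.95):
    """Toy models: per-hop energy ~ distance^2, delivery ratio ~ p_link^hops."""
    energy = sum(math.dist(nodes[a], nodes[b]) ** 2
                 for a, b in zip(path, path[1:]))
    return energy, p_link ** (len(path) - 1)

path = shortest_hop_path(0, 1)
if path is not None:
    print("hops:", len(path) - 1, "energy/delivery:", path_metrics(path))
```

    The paper's optimization then trades these two metrics off over candidate paths rather than simply taking the minimum-hop route.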

  15. Inferring causal molecular networks: empirical assessment through a community-based effort.

    PubMed

    Hill, Steven M; Heiser, Laura M; Cokelaer, Thomas; Unger, Michael; Nesser, Nicole K; Carlin, Daniel E; Zhang, Yang; Sokolov, Artem; Paull, Evan O; Wong, Chris K; Graim, Kiley; Bivol, Adrian; Wang, Haizhou; Zhu, Fan; Afsari, Bahman; Danilova, Ludmila V; Favorov, Alexander V; Lee, Wai Shing; Taylor, Dane; Hu, Chenyue W; Long, Byron L; Noren, David P; Bisberg, Alexander J; Mills, Gordon B; Gray, Joe W; Kellen, Michael; Norman, Thea; Friend, Stephen; Qutub, Amina A; Fertig, Elana J; Guan, Yuanfang; Song, Mingzhou; Stuart, Joshua M; Spellman, Paul T; Koeppl, Heinz; Stolovitzky, Gustavo; Saez-Rodriguez, Julio; Mukherjee, Sach

    2016-04-01

    It remains unclear whether causal, rather than merely correlational, relationships in molecular networks can be inferred in complex biological settings. Here we describe the HPN-DREAM network inference challenge, which focused on learning causal influences in signaling networks. We used phosphoprotein data from cancer cell lines as well as in silico data from a nonlinear dynamical model. Using the phosphoprotein data, we scored more than 2,000 networks submitted by challenge participants. The networks spanned 32 biological contexts and were scored in terms of causal validity with respect to unseen interventional data. A number of approaches were effective, and incorporating known biology was generally advantageous. Additional sub-challenges considered time-course prediction and visualization. Our results suggest that learning causal relationships may be feasible in complex settings such as disease states. Furthermore, our scoring approach provides a practical way to empirically assess inferred molecular networks in a causal sense.

  16. On the inherent competition between valid and spurious inductive inferences in Boolean data

    NASA Astrophysics Data System (ADS)

    Andrecut, M.

    Inductive inference is the process of extracting general rules from specific observations. This problem also arises in the analysis of biological networks, such as genetic regulatory networks, where the interactions are complex and the observations are incomplete. A typical task in these problems is to extract general interaction rules, as combinations of Boolean covariates, that explain a measured response variable. The inductive inference process can then be considered as an incompletely specified Boolean function synthesis problem. This incompleteness of the problem will also generate spurious inferences, which are a serious threat to valid inductive inference rules. Using random Boolean data as a null model, here we attempt to measure the competition between valid and spurious inductive inference rules in a given data set. We formulate two greedy search algorithms, which synthesize a given Boolean response variable in a sparse disjunctive normal form and a sparse generalized algebraic normal form of the variables from the observation data, respectively, and we evaluate their performance numerically.
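    A greedy sparse-DNF synthesis in the spirit described above (not the paper's exact algorithm): pick conjunctive terms that cover many positive observations and no negative ones, so every accepted term is consistent with the data. The toy observations are generated from a known rule:

```python
from itertools import combinations, product

# Observations: tuples of Boolean covariates with a Boolean response.
# Ground truth used to generate them: y = (x0 AND x1) OR x2.
rows = [((a, b, c), (a and b) or c)
        for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def matches(term, x):
    """A term is a tuple of (index, value) literals; it matches x if all hold."""
    return all(x[i] == v for i, v in term)

def greedy_dnf(rows, max_literals=2):
    """Greedily pick conjunctive terms that cover many positive rows and no
    negative rows, until all positives are covered (a sparse DNF)."""
    positives = {x for x, y in rows if y}
    negatives = [x for x, y in rows if not y]
    n = len(rows[0][0])
    terms = []
    while positives:
        best, best_cov = None, set()
        for k in range(1, max_literals + 1):
            for idxs in combinations(range(n), k):
                for vals in product((0, 1), repeat=k):
                    term = tuple(zip(idxs, vals))
                    if any(matches(term, x) for x in negatives):
                        continue  # would be a spurious rule: fires on negatives
                    cov = {x for x in positives if matches(term, x)}
                    if len(cov) > len(best_cov):
                        best, best_cov = term, cov
        if best is None:
            break  # remaining positives cannot be explained at this sparsity
        terms.append(best)
        positives -= best_cov
    return terms

print(greedy_dnf(rows))  # recovers x2 OR (x0 AND x1)
```

    With incomplete (rather than exhaustive) observations, terms that merely happen to avoid the observed negatives slip through, which is precisely the valid-versus-spurious competition the paper quantifies on random Boolean data.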

  17. Implementing partnership-driven clinical federated electronic health record data sharing networks.

    PubMed

    Stephens, Kari A; Anderson, Nicholas; Lin, Ching-Ping; Estiri, Hossein

    2016-09-01

    Building federated data sharing architectures requires supporting a range of data owners, effective and validated semantic alignment between data resources, and a consistent focus on end-users. Establishing these resources requires development methodologies that support internal validation of data extraction and translation processes, sustain meaningful partnerships, and deliver clear and measurable system utility. We describe findings from two federated data sharing case examples that detail critical factors, shared outcomes, and production environment results. Two federated data sharing pilot architectures, developed to support network-based research associated with the University of Washington's Institute of Translational Health Sciences, provided the basis for the findings. A spiral model for implementation and evaluation was used to structure iterations of development and to support knowledge sharing between the two network development teams, which collaborated to support and manage common stages. We found that using a spiral model of software development with multiple cycles of iteration was effective in achieving early network design goals. Both networks required time- and resource-intensive efforts to establish a trusted environment in which to create the data sharing architectures. Both networks were challenged by the need for adaptive use cases to define and test utility. An iterative, cyclical model of development provided a process for developing trust with data partners and refining the design, and supported measurable success in the development of new federated data sharing architectures.

  18. Auxetic metamaterials from disordered networks

    NASA Astrophysics Data System (ADS)

    Reid, Daniel R.; Pashine, Nidhi; Wozniak, Justin M.; Jaeger, Heinrich M.; Liu, Andrea J.; Nagel, Sidney R.; de Pablo, Juan J.

    2018-02-01

    Recent theoretical work suggests that systematic pruning of disordered networks consisting of nodes connected by springs can lead to materials that exhibit a host of unusual mechanical properties. In particular, global properties such as Poisson’s ratio or local responses related to deformation can be precisely altered. Tunable mechanical responses would be useful in areas ranging from impact mitigation to robotics and, more generally, for creation of metamaterials with engineered properties. However, experimental attempts to create auxetic materials based on pruning-based theoretical ideas have not been successful. Here we introduce a more realistic model of the networks, which incorporates angle-bending forces and the appropriate experimental boundary conditions. A sequential pruning strategy of select bonds in this model is then devised and implemented that enables engineering of specific mechanical behaviors upon deformation, both in the linear and in the nonlinear regimes. In particular, it is shown that Poisson’s ratio can be tuned to arbitrary values. The model and concepts discussed here are validated by preparing physical realizations of the networks designed in this manner, which are produced by laser cutting 2D sheets and are found to behave as predicted. Furthermore, by relying on optimization algorithms, we exploit the networks’ susceptibility to tuning to design networks that possess a distribution of stiffer and more compliant bonds and whose auxetic behavior is even greater than that of homogeneous networks. Taken together, the findings reported here serve to establish that pruned networks represent a promising platform for the creation of unique mechanical metamaterials.

  19. Inferring monopartite projections of bipartite networks: an entropy-based approach

    NASA Astrophysics Data System (ADS)

    Saracco, Fabio; Straka, Mika J.; Di Clemente, Riccardo; Gabrielli, Andrea; Caldarelli, Guido; Squartini, Tiziano

    2017-05-01

    Bipartite networks are currently regarded as providing a major insight into the organization of many real-world systems, unveiling the mechanisms driving the interactions occurring between distinct groups of nodes. One of the most important issues encountered when modeling bipartite networks is devising a way to obtain a (monopartite) projection on the layer of interest, which preserves as much as possible the information encoded into the original bipartite structure. In the present paper we propose an algorithm to obtain statistically-validated projections of bipartite networks, according to which any two nodes sharing a statistically-significant number of neighbors are linked. Since assessing the statistical significance of nodes similarity requires a proper statistical benchmark, here we consider a set of four null models, defined within the exponential random graph framework. Our algorithm outputs a matrix of link-specific p-values, from which a validated projection is straightforwardly obtainable, upon running a multiple hypothesis testing procedure. Finally, we test our method on an economic network (i.e. the countries-products World Trade Web representation) and a social network (i.e. MovieLens, collecting the users’ ratings of a list of movies). In both cases non-trivial communities are detected: while projecting the World Trade Web on the countries layer reveals modules of similarly-industrialized nations, projecting it on the products layer allows communities characterized by an increasing level of complexity to be detected; in the second case, projecting MovieLens on the films layer allows clusters of movies whose affinity cannot be fully accounted for by genre similarity to be individuated.
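    To make the validation step concrete, here is the p-value of two nodes' neighbor overlap under a simple hypergeometric null (a cruder benchmark than the paper's exponential random graph null models); all numbers are invented:

```python
from math import comb

def shared_neighbor_pvalue(k, d_u, d_v, n_opp):
    """P(X >= k) for X ~ Hypergeometric: the chance that two nodes with degrees
    d_u and d_v on an opposite layer of size n_opp share at least k neighbors
    under random, degree-preserving placement."""
    return sum(comb(d_u, i) * comb(n_opp - d_u, d_v - i)
               for i in range(k, min(d_u, d_v) + 1)) / comb(n_opp, d_v)

# Two countries exporting 30 and 40 of 100 products, sharing 25 products:
p = shared_neighbor_pvalue(25, 30, 40, 100)
print(p)  # a tiny p-value: the overlap is unlikely under the null
```

    A full projection computes such a p-value for every node pair and then applies a multiple-hypothesis-testing correction (as the paper does) before keeping a link in the monopartite projection.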

  20. Study of co-authorship network of papers in the Journal of Research in Medical Sciences using social network analysis

    PubMed Central

    Zare-Farashbandi, Firoozeh; Geraei, Ehsan; Siamaki, Saba

    2014-01-01

    Background: Co-authorship is one of the most tangible forms of research collaboration. A co-authorship network is a social network in which authors are linked to each other, directly or indirectly, through participation in one or more publications. The present research used social network analysis to study the co-authorship network of 681 articles published in the Journal of Research in Medical Sciences (JRMS) during 2008-2012. Materials and Methods: The study was carried out with a scientometric approach using co-authorship network analysis. The topology of the co-authorship network of the 681 articles published in JRMS between 2008 and 2012 was analyzed using macro-level network metrics such as density, clustering coefficient, components, and mean distance. In addition, in order to evaluate the performance of individual authors and countries in the network, micro-level indicators such as degree centrality, closeness centrality, and betweenness centrality, as well as a productivity index, were used. The UCINET and NetDraw software packages were used to draw and analyze the co-authorship network of the papers. Results: Assessment of author productivity in this journal showed that the top ranks belonged to just five authors. Furthermore, analysis of the co-authorship network demonstrated that, on the betweenness centrality index, three of these authors held strong positions in the network; they can be considered network leaders, able to control the flow of information along the shortest paths compared with the other members. On the other hand, according to the productivity and centrality indexes, the key roles in the network belonged to Iran, Malaysia, and the United States of America. Conclusion: The co-authorship network of JRMS has the characteristics of a small-world network. In addition, the theory of six degrees of separation holds in this network. PMID:24672564

  1. Robustness and fragility in coupled oscillator networks under targeted attacks.

    PubMed

    Yuan, Tianyu; Aihara, Kazuyuki; Tanaka, Gouhei

    2017-01-01

    The dynamical tolerance of coupled oscillator networks against local failures is studied. As the fraction of failed oscillator nodes gradually increases, the mean oscillation amplitude in the entire network decreases and then suddenly vanishes at a critical fraction as a phase transition. This critical fraction, widely used as a measure of the network robustness, was analytically derived for random failures but not for targeted attacks so far. Here we derive the general formula for the critical fraction, which can be applied to both random failures and targeted attacks. We consider the effects of targeting oscillator nodes based on their degrees. First we deal with coupled identical oscillators with homogeneous edge weights. Then our theory is applied to networks with heterogeneous edge weights and to those with nonidentical oscillators. The analytical results are validated by numerical experiments. Our results reveal the key factors governing the robustness and fragility of oscillator networks.

  2. Passenger flow analysis of Beijing urban rail transit network using fractal approach

    NASA Astrophysics Data System (ADS)

    Li, Xiaohong; Chen, Peiwen; Chen, Feng; Wang, Zijia

    2018-04-01

    To quantify the spatiotemporal distribution of passenger flow and the characteristics of an urban rail transit network, we introduce four radius fractal dimensions and two branch fractal dimensions by combining a fractal approach with a passenger flow assignment model. These fractal dimensions can numerically describe the complexity of passenger flow in the urban rail transit network and its change characteristics. On this basis, we establish a fractal quantification method to measure the fractal characteristics of passenger flow in the rail transit network. Finally, we validate the reasonableness of our proposed method using actual data from the Beijing subway network. It is shown that our proposed method can effectively measure the scale-free range of the urban rail transit network, network development, and the fractal characteristics of time-varying passenger flow, which further provides a reference for network planning and passenger flow analysis.
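    A radius-dimension sketch: count stations within growing radii of a center and fit the log-log slope in N(r) ~ r^D. The coordinates are synthetic (a Gaussian cloud, so D should come out near 2), not Beijing data, and a flow-weighted version would weight each station by its assigned passenger volume:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic station coordinates: dense core, sparser outskirts (illustrative
# stand-in for a radial metro network, not real data).
points = rng.normal(0, 1, (5000, 2))

def radius_dimension(points, center, radii):
    """Fit D in N(r) ~ r^D, where N(r) counts stations within radius r."""
    dists = np.linalg.norm(points - center, axis=1)
    counts = np.array([(dists <= r).sum() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    return slope

radii = np.linspace(0.1, 1.0, 10)
print(radius_dimension(points, np.array([0.0, 0.0]), radii))
```

    The scale-free range the paper measures corresponds to the span of radii over which this log-log relation stays close to a straight line.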

  3. Predicting the evolution of spreading on complex networks

    PubMed Central

    Chen, Duan-Bing; Xiao, Rui; Zeng, An

    2014-01-01

    Due to the wide applications, spreading processes on complex networks have been intensively studied. However, one of the most fundamental problems has not yet been well addressed: predicting the evolution of spreading based on a given snapshot of the propagation on networks. With this problem solved, one can accelerate or slow down the spreading in advance if the predicted propagation result is narrower or wider than expected. In this paper, we propose an iterative algorithm to estimate the infection probability of the spreading process and then apply it to a mean-field approach to predict the spreading coverage. The validation of the method is performed in both artificial and real networks. The results show that our method is accurate in both infection probability estimation and spreading coverage prediction. PMID:25130862
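    The mean-field prediction step can be sketched as a discrete-time susceptible-infected iteration from a snapshot: each node's infection probability is updated from its neighbors' probabilities. The graph and the per-contact infection probability `beta` below are invented; the paper additionally estimates `beta` itself from the snapshot:

```python
# Toy undirected contact graph as an adjacency list (illustrative).
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}

def predict_coverage(infected, beta, steps):
    """Mean-field SI iteration: p_i <- p_i + (1 - p_i) * (1 - prod_j (1 - beta p_j))
    over neighbors j; returns the expected number of infected nodes."""
    p = {i: (1.0 if i in infected else 0.0) for i in adj}
    for _ in range(steps):
        new_p = {}
        for i in adj:
            escape = 1.0  # probability i is infected by none of its neighbors
            for j in adj[i]:
                escape *= 1.0 - beta * p[j]
            new_p[i] = p[i] + (1.0 - p[i]) * (1.0 - escape)
        p = new_p
    return sum(p.values())

print(predict_coverage({0}, beta=0.3, steps=5))
```

    Comparing this predicted coverage against the desired spreading extent is what would let one decide, ahead of time, whether to accelerate or slow the propagation.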

  4. Analysis of wireless sensor network topology and estimation of optimal network deployment by deterministic radio channel characterization.

    PubMed

    Aguirre, Erik; Lopez-Iturri, Peio; Azpilicueta, Leire; Astrain, José Javier; Villadangos, Jesús; Falcone, Francisco

    2015-02-05

    One of the main challenges in the implementation and design of context-aware scenarios is the adequate deployment strategy for Wireless Sensor Networks (WSNs), mainly due to the strong dependence of the radiofrequency physical layer with the surrounding media, which can lead to non-optimal network designs. In this work, radioplanning analysis for WSN deployment is proposed by employing a deterministic 3D ray launching technique in order to provide insight into complex wireless channel behavior in context-aware indoor scenarios. The proposed radioplanning procedure is validated with a testbed implemented with a Mobile Ad Hoc Network WSN following a chain configuration, enabling the analysis and assessment of a rich variety of parameters, such as received signal level, signal quality and estimation of power consumption. The adoption of deterministic radio channel techniques allows the design and further deployment of WSNs in heterogeneous wireless scenarios with optimized behavior in terms of coverage, capacity, quality of service and energy consumption.
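    For a flavor of the radioplanning numbers involved, here is a log-distance path-loss sketch, a far cruder model than the paper's deterministic 3D ray launching; every parameter below is illustrative:

```python
import math

def received_power_dbm(tx_dbm, d, d0=1.0, n=2.8, pl0_db=40.0):
    """Log-distance path-loss model: received power at distance d (meters),
    given transmit power, reference distance d0, path-loss exponent n, and
    reference loss pl0_db. All default values are invented for illustration."""
    return tx_dbm - (pl0_db + 10 * n * math.log10(d / d0))

# Chain-configured WSN: estimated received signal level at each 5 m hop.
for hop in range(1, 6):
    print(hop, round(received_power_dbm(0.0, hop * 5.0), 1), "dBm")
```

    Deterministic ray launching replaces the single exponent `n` with per-ray interactions against the 3D scenario geometry, which is why it captures indoor multipath effects this formula cannot.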

  5. Benford’s Law Applies to Online Social Networks

    PubMed Central

    Golbeck, Jennifer

    2015-01-01

    Benford’s Law states that, in naturally occurring systems, the frequency of numbers’ first digits is not evenly distributed. Numbers beginning with a 1 occur roughly 30% of the time, and are six times more common than numbers beginning with a 9. We show that Benford’s Law applies to social and behavioral features of users in online social networks. Using social data from five major social networks (Facebook, Twitter, Google Plus, Pinterest, and LiveJournal), we show that the distribution of first significant digits of friend and follower counts for users in these systems follow Benford’s Law. The same is true for the number of posts users make. We extend this to egocentric networks, showing that friend counts among the people in an individual’s social network also follows the expected distribution. We discuss how this can be used to detect suspicious or fraudulent activity online and to validate datasets. PMID:26308716
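The first-digit test at the heart of this result is easy to reproduce. The sketch below compares an empirical first-digit distribution against the expected Benford frequencies log10(1 + 1/d); Fibonacci numbers are used as stand-in data (a classic Benford-conforming sequence), since the social-network counts themselves are not available here.

```python
import math
from collections import Counter

def benford_expected(d):
    """Expected frequency of leading digit d under Benford's Law."""
    return math.log10(1 + 1 / d)

def first_digit_dist(values):
    """Empirical distribution of first significant digits over nonzero values."""
    digits = [int(str(abs(v))[0]) for v in values if v != 0]
    counts = Counter(digits)
    return {d: counts.get(d, 0) / len(digits) for d in range(1, 10)}

# Fibonacci numbers conform to Benford's Law; use them as stand-in data
fib = [1, 1]
for _ in range(500):
    fib.append(fib[-1] + fib[-2])
dist = first_digit_dist(fib)
```

Fraud detection then amounts to flagging accounts whose digit distribution deviates significantly (e.g. by a chi-squared statistic) from `benford_expected`.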

  6. Analyzing psychotherapy process as intersubjective sensemaking: an approach based on discourse analysis and neural networks.

    PubMed

    Nitti, Mariangela; Ciavolino, Enrico; Salvatore, Sergio; Gennaro, Alessandro

    2010-09-01

    The authors propose a method for analyzing the psychotherapy process: discourse flow analysis (DFA). DFA is a technique representing the verbal interaction between therapist and patient as a discourse network, aimed at measuring the ability of the therapist-patient discourse to generate new meanings over time. DFA assumes that the main function of psychotherapy is to produce semiotic novelty. DFA is applied to the verbatim transcript of the psychotherapy. It defines the main meanings active within the therapeutic discourse by means of the combined use of text analysis and statistical techniques. Subsequently, it represents the dynamic interconnections among these meanings in terms of a "discursive network." The dynamic and structural indexes of the discursive network have been shown to provide a valid representation of the patient-therapist communicative flow as well as an estimation of its clinical quality. Finally, a neural network is designed specifically to identify patterns of functioning of the discursive network and to verify the clinical validity of these patterns in terms of their association with specific phases of the psychotherapy process. An application of the DFA to a case of psychotherapy is provided to illustrate the method and the kinds of results it produces.

  7. Multi-criteria anomaly detection in urban noise sensor networks.

    PubMed

    Dauwe, Samuel; Oldoni, Damiano; De Baets, Bernard; Van Renterghem, Timothy; Botteldooren, Dick; Dhoedt, Bart

    2014-01-01

    The growing concern of citizens about the quality of their living environment and the emergence of low-cost microphones and data acquisition systems triggered the deployment of numerous noise monitoring networks spread over large geographical areas. Due to the local character of noise pollution in an urban environment, a dense measurement network is needed in order to accurately assess the spatial and temporal variations. The use of consumer grade microphones in this context appears to be very cost-efficient compared to the use of measurement microphones. However, the lower reliability of these sensing units requires a strong quality control of the measured data. To automatically validate sensor (microphone) data, prior to their use in further processing, a multi-criteria measurement quality assessment model for detecting anomalies such as microphone breakdowns, drifts and critical outliers was developed. Each of the criteria results in a quality score between 0 and 1. An ordered weighted average (OWA) operator combines these individual scores into a global quality score. The model is validated on datasets acquired from a real-world, extensive noise monitoring network consisting of more than 50 microphones. Over a period of more than a year, the proposed approach successfully detected several microphone faults and anomalies.
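The score-fusion step described above can be sketched with a plain ordered weighted average: unlike a simple weighted mean, OWA weights apply to rank positions after sorting, not to fixed criteria. The weights and scores below are illustrative, not the calibrated values from the paper.

```python
def owa(scores, weights):
    """Ordered weighted average: weights attach to sorted rank positions."""
    assert abs(sum(weights) - 1.0) < 1e-9 and len(scores) == len(weights)
    return sum(w * s for w, s in zip(weights, sorted(scores, reverse=True)))

# three per-criterion quality scores in [0, 1]; putting more weight on the
# lowest-ranked score makes the global quality score pessimistic, so one
# failing criterion (e.g. a drift detector) drags the sensor's score down
q = owa([0.9, 0.2, 0.8], [0.2, 0.3, 0.5])
```

Choosing the weight vector tunes the operator continuously between the maximum (all weight on rank 1) and the minimum (all weight on the last rank) of the criteria.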

  8. Assessment of the External Validity of the National Comprehensive Cancer Network and European Society for Medical Oncology Guidelines for Non-Small-Cell Lung Cancer in a Population of Patients Aged 80 Years and Older.

    PubMed

    Battisti, Nicolò Matteo Luca; Sehovic, Marina; Extermann, Martine

    2017-09-01

    Non-small-cell lung cancer (NSCLC) is a disease of the elderly, who are under-represented in clinical trials. This challenges the external validity of the evidence base for its management and of current guidelines, which we evaluated in a population of older patients. We retrieved randomized clinical trials (RCTs) supporting the guidelines and identified 18 relevant topics. We matched a cohort of NSCLC patients aged older than 80 years from the Moffitt Cancer Center database with the studies' eligibility criteria to check their qualification for at least 2 studies. Eligibility > 60% was rated full validity, 30% to 60% partial validity, and < 30% limited validity. We obtained data from 760 elderly patients in stage-adjusted groups and collected 244 RCTs from the National Comprehensive Cancer Network (NCCN) and 148 from the European Society for Medical Oncology (ESMO) guidelines. External validity was deemed insufficient for neoadjuvant chemotherapy in stage III disease (27.37% and 25.26% of patients eligible for NCCN and ESMO guidelines, respectively) and use of bevacizumab (13.86% and 16.27% of patients eligible). For ESMO guidelines, it was inadequate regarding double-agent chemotherapy (25.90% of patients eligible), its duration (24.10%) and therapy for Eastern Cooperative Oncology Group performance status 2 patients (17.74%). For NCCN guidelines external validity was lacking for neoadjuvant chemoradiotherapy in stage IIIA disease (25.86% of patients eligible). Our analysis highlighted the effect of RCT eligibility criteria on guidelines' external validity in elderly patients. Eligibility criteria should be carefully considered in trial design and more studies that do not exclude elderly patients should be included in guidelines. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. A neural networks application for the study of the influence of transport conditions on the working performance

    NASA Astrophysics Data System (ADS)

    Anghel, D.-C.; Ene, A.; Ştirbu, C.; Sicoe, G.

    2017-10-01

    This paper presents a study of the factors that influence the working performance of workers in the automotive industry. These factors mainly concern transportation conditions, given that a large number of workers live far away from the enterprise. The quantitative data obtained from the study are generalized by a software-simulated neural network, which can estimate the performance of workers even for combinations of input factors that were not recorded by the study. The experimental data are divided into two sets. The first set, containing approximately 80% of the data, is used by the Java software to train the neural network; the weights resulting from the training process are saved in a text file. The second set, containing the remaining 20% of the experimental data, is used to validate the neural network. Training and validation of the network are performed in Java software (the TrainAndValidate Java class). We designed another Java class, Test.java, to be used with new input data for new situations. The outputs of the work are the experimental data collected from the study, the software simulating the neural network, and the software estimating working performance when new situations are met. This application is useful for the human resources department of an enterprise. The output results are not quantitative but qualitative (from low performance to high performance, divided into five classes).
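The 80/20 train/validation split described above can be sketched as follows. Python is used here for brevity although the study's implementation is in Java, and the record format is an illustrative assumption.

```python
import random

def train_validate_split(data, train_frac=0.8, seed=42):
    """Shuffle the records once, then cut them into training and validation sets."""
    rng = random.Random(seed)          # fixed seed for a reproducible split
    idx = list(range(len(data)))
    rng.shuffle(idx)
    cut = int(train_frac * len(data))
    train = [data[i] for i in idx[:cut]]
    valid = [data[i] for i in idx[cut:]]
    return train, valid

# hypothetical (transport-condition features, performance class) records
records = [(i, i % 5) for i in range(100)]
train, valid = train_validate_split(records)
```

The validation set never influences the learned weights; it only measures how well the trained network generalizes to unseen input combinations.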

  10. Validation and reconstruction of FY-3B/MWRI soil moisture using an artificial neural network based on reconstructed MODIS optical products over the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Cui, Yaokui; Long, Di; Hong, Yang; Zeng, Chao; Zhou, Jie; Han, Zhongying; Liu, Ronghua; Wan, Wei

    2016-12-01

    Soil moisture is a key variable in the exchange of water and energy between the land surface and the atmosphere, especially over the Tibetan Plateau (TP), which is climatically and hydrologically sensitive as the Earth's 'third pole'. Large-scale spatially consistent and temporally continuous soil moisture datasets are of great importance to meteorological and hydrological applications, such as weather forecasting and drought monitoring. The Fengyun-3B Microwave Radiation Imager (FY-3B/MWRI) soil moisture product is a relatively new passive microwave product, with the satellite being launched on November 5, 2010. This study validates and reconstructs FY-3B/MWRI soil moisture across the TP. First, the validation is performed using in situ measurements within two in situ soil moisture measurement networks (1° × 1° and 0.25° × 0.25°), and the product is also compared with the Essential Climate Variable (ECV) soil moisture product, which merges multiple active and passive satellite soil moisture products. Results show that the ascending FY-3B/MWRI product outperforms the descending product. The ascending FY-3B/MWRI product has almost the same correlation as the ECV product with the in situ measurements. The ascending FY-3B/MWRI product has better performance than the ECV product in the frozen season and under low-NDVI conditions. When the NDVI is higher in the unfrozen season, uncertainty in the ascending FY-3B/MWRI product increases with increasing NDVI, but it could still capture the variability in soil moisture. Second, the FY-3B/MWRI soil moisture product is subsequently reconstructed using the back-propagation neural network (BP-NN) based on reconstructed MODIS products, i.e., LST, NDVI, and albedo. The reconstruction method of generating the soil moisture product not only considers the relationship between the soil moisture and NDVI, LST, and albedo, but also the relationship between the soil moisture and four-dimensional variations using the

  11. Performance Modeling of Network-Attached Storage Device Based Hierarchical Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Menasce, Daniel A.; Pentakalos, Odysseas I.

    1995-01-01

    Network attached storage devices improve I/O performance by separating control and data paths and eliminating host intervention during the data transfer phase. Devices are attached to both a high speed network for data transfer and to a slower network for control messages. Hierarchical mass storage systems use disks to cache the most recently used files and a combination of robotic and manually mounted tapes to store the bulk of the files in the file system. This paper shows how queuing network models can be used to assess the performance of hierarchical mass storage systems that use network attached storage devices as opposed to host attached storage devices. Simulation was used to validate the model. The analytic model presented here can be used, among other things, to evaluate the protocols involved in I/O over network attached devices.
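A minimal building block of the queuing network models mentioned above is the single M/M/1 queue (Poisson arrivals, exponential service, one server), sketched below. The arrival and service rates are illustrative assumptions, not figures from the paper, which composes many such service centers into a full queuing network.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Classic M/M/1 steady-state formulas for one storage service center."""
    rho = arrival_rate / service_rate            # server utilization
    assert rho < 1, "queue is unstable (arrivals exceed service capacity)"
    avg_n = rho / (1 - rho)                      # mean number of requests in system
    resp = 1.0 / (service_rate - arrival_rate)   # mean response time (Little's law)
    return rho, avg_n, resp

# e.g. a tape/disk server receiving 30 requests/s and serving 40 requests/s
rho, n, r = mm1_metrics(30.0, 40.0)
```

The sharp growth of response time as utilization approaches 1 is exactly what makes such models useful for sizing the data and control networks separately.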

  12. Generative models for network neuroscience: prospects and promise

    PubMed Central

    Betzel, Richard F.

    2017-01-01

    Network neuroscience is the emerging discipline concerned with investigating the complex patterns of interconnections found in neural systems, and identifying principles with which to understand them. Within this discipline, one particularly powerful approach is network generative modelling, in which wiring rules are algorithmically implemented to produce synthetic network architectures with the same properties as observed in empirical network data. Successful models can highlight the principles by which a network is organized and potentially uncover the mechanisms by which it grows and develops. Here, we review the prospects and promise of generative models for network neuroscience. We begin with a primer on network generative models, with a discussion of compressibility and predictability, and utility in intuiting mechanisms, followed by a short history on their use in network science, broadly. We then discuss generative models in practice and application, paying particular attention to the critical need for cross-validation. Next, we review generative models of biological neural networks, both at the cellular and large-scale level, and across a variety of species including Caenorhabditis elegans, Drosophila, mouse, rat, cat, macaque and human. We offer a careful treatment of a few relevant distinctions, including differences between generative models and null models, sufficiency and redundancy, inferring and claiming mechanism, and functional and structural connectivity. We close with a discussion of future directions, outlining exciting frontiers both in empirical data collection efforts as well as in method and theory development that, together, further the utility of the generative network modelling approach for network neuroscience. PMID:29187640

  13. DGs for Service Restoration to Critical Loads in a Secondary Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Yin; Liu, Chen-Ching; Wang, Zhiwen

    During a major outage in a secondary network distribution system, distributed generators (DGs) connected to the primary feeders as well as the secondary network can be used to serve critical loads. This paper proposes a resilience-oriented method to determine restoration strategies for secondary network distribution systems after a major disaster. Technical issues associated with the restoration process are analyzed, including the operation of network protectors, inrush currents caused by the energization of network transformers, synchronization of DGs to the network, and circulating currents among DGs. A look-ahead load restoration framework is proposed, incorporating technical issues associated with secondary networks, limits on DG capacity and generation resources, dynamic constraints, and operational limits. The entire outage duration is divided into a sequence of periods. Restoration strategies can be adjusted at the beginning of each period using the latest information. Finally, numerical simulation of the modified IEEE 342-node low voltage networked test system is performed to validate the effectiveness of the proposed method.

  14. DGs for Service Restoration to Critical Loads in a Secondary Network

    DOE PAGES

    Xu, Yin; Liu, Chen-Ching; Wang, Zhiwen; ...

    2017-08-25

    During a major outage in a secondary network distribution system, distributed generators (DGs) connected to the primary feeders as well as the secondary network can be used to serve critical loads. This paper proposes a resilience-oriented method to determine restoration strategies for secondary network distribution systems after a major disaster. Technical issues associated with the restoration process are analyzed, including the operation of network protectors, inrush currents caused by the energization of network transformers, synchronization of DGs to the network, and circulating currents among DGs. A look-ahead load restoration framework is proposed, incorporating technical issues associated with secondary networks, limits on DG capacity and generation resources, dynamic constraints, and operational limits. The entire outage duration is divided into a sequence of periods. Restoration strategies can be adjusted at the beginning of each period using the latest information. Finally, numerical simulation of the modified IEEE 342-node low voltage networked test system is performed to validate the effectiveness of the proposed method.

  15. In Situ Validation of the Soil Moisture Active Passive (SMAP) Satellite Mission

    NASA Technical Reports Server (NTRS)

    Jackson, T.; Cosh, M.; Crow, W.; Colliander, A.; Walker, J.

    2011-01-01

    SMAP is a new NASA mission proposed for 2014 that would provide a number of soil moisture and freeze/thaw products. The soil moisture products span spatial resolutions from 3 to 40 km. In situ soil moisture observations will be one of the key elements of the validation program for SMAP. Data from the currently available set of soil moisture observing sites and networks need improvement if they are to be useful. Problems include a lack of standardization of instrumentation and installation and the disparity in spatial scale between the point-scale in situ data (a few centimeters) and the coarser satellite products. SMAP has initiated activities to resolve these issues for some of the existing resources. The other challenge to soil moisture validation is the need to expand the number of sites and their geographic distribution. SMAP is attempting to increase the number of sites and their value in validation through collaboration. The issues and solutions involving in situ validation being investigated will be described along with recent results from SMAP validation projects.

  16. Control of cancer-related signal transduction networks

    NASA Astrophysics Data System (ADS)

    Albert, Reka

    2013-03-01

    Intra-cellular signaling networks are crucial to the maintenance of cellular homeostasis and for cell behavior (growth, survival, apoptosis, movement). Mutations or alterations in the expression of elements of cellular signaling networks can lead to incorrect behavioral decisions that could result in tumor development and/or the promotion of cell migration and metastasis. Thus, mitigation of the cascading effects of such dysregulations is an important control objective. My group at Penn State is collaborating with wet-bench biologists to develop and validate predictive models of various biological systems. Over the years we found that discrete dynamic modeling is very useful in molding qualitative interaction information into a predictive model. We recently demonstrated the effectiveness of network-based targeted manipulations on mitigating the disease T cell large granular lymphocyte (T-LGL) leukemia. The root of this disease is the abnormal survival of T cells which, after successfully fighting an infection, should undergo programmed cell death. We synthesized the relevant network of within-T-cell interactions from the literature, integrated it with qualitative knowledge of the dysregulated (abnormal) states of several network components, and formulated a Boolean dynamic model. The model indicated that the system possesses a steady state corresponding to the normal cell death state and a T-LGL steady state corresponding to the abnormal survival state. For each node, we evaluated the restorative manipulation consisting of maintaining the node in the state that is the opposite of its T-LGL state, e.g. knocking it out if it is overexpressed in the T-LGL state. We found that such control of any of 15 nodes led to the disappearance of the T-LGL steady state, leaving cell death as the only potential outcome from any initial condition. 
In four additional cases the probability of reaching the T-LGL state decreased dramatically, thus these nodes are also possible control
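The control idea above — pin a node to the opposite of its disease state and check which attractors survive — can be illustrated on a toy Boolean network. The three-node rules below are invented for illustration and are not the published T-LGL model.

```python
from itertools import product

# Toy 3-node synchronous Boolean network (hypothetical rules):
# A = survival signal, B = downstream effector, C = death signal.
def update(state):
    a, b, c = state
    return (a or b,       # A sustains itself or is re-activated by B
            a and not c,  # B needs A and is inhibited by C
            not a)        # C (death) switches on only when A is off

def steady_states(pin=None):
    """Fixed points of the synchronous dynamics; pin = (node index, forced value)."""
    fixed = []
    for s in product([False, True], repeat=3):
        if pin is not None and s[pin[0]] != pin[1]:
            continue  # only consider states consistent with the intervention
        t = list(update(s))
        if pin is not None:
            t[pin[0]] = pin[1]  # the pinned node cannot change
        if tuple(t) == s:
            fixed.append(s)
    return fixed

# unperturbed network: a 'death' fixed point and a 'survival' fixed point;
# knocking out node A leaves only the death state
survivors = steady_states(pin=(0, False))
```

Here the unperturbed system has two fixed points, (False, False, True) and (True, True, False); pinning A off removes the abnormal survival attractor, mirroring the node-knockout analysis in the abstract.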

  17. Rule extraction from minimal neural networks for credit card screening.

    PubMed

    Setiono, Rudy; Baesens, Bart; Mues, Christophe

    2011-08-01

    While feedforward neural networks have been widely accepted as effective tools for solving classification problems, the issue of finding the best network architecture remains unresolved, particularly so in real-world problem settings. We address this issue in the context of credit card screening, where it is important to not only find a neural network with good predictive performance but also one that facilitates a clear explanation of how it produces its predictions. We show that minimal neural networks with as few as one hidden unit provide good predictive accuracy, while having the added advantage of making it easier to generate concise and comprehensible classification rules for the user. To further reduce model size, a novel approach is suggested in which network connections from the input units to this hidden unit are removed by a very straightforward pruning procedure. In terms of predictive accuracy, both the minimized neural networks and the rule sets generated from them are shown to compare favorably with other neural network based classifiers. The rules generated from the minimized neural networks are concise and thus easier to validate in a real-life setting.

  18. Workplace status: The development and validation of a scale.

    PubMed

    Djurdjevic, Emilija; Stoverink, Adam C; Klotz, Anthony C; Koopman, Joel; da Motta Veiga, Serge P; Yam, Kai Chi; Chiang, Jack Ting-Ju

    2017-07-01

    Research suggests that employee status, and various status proxies, relate to a number of meaningful outcomes in the workplace. The advancement of the study of status in organizational settings has, however, been stymied by the lack of a validated workplace status measure. The purpose of this manuscript, therefore, is to develop and validate a measure of workplace status based on a theoretically grounded definition of status in organizations. Subject-matter experts were used to examine the content validity of the measure. Then, 2 separate samples were employed to assess the psychometric properties (i.e., factor structure, reliability, convergent and discriminant validity) and nomological network of a 5-item, self-report Workplace Status Scale (WSS). To allow for methodological flexibility, an additional 3 samples were used to extend the WSS to coworker reports of a focal employee's status, provide additional evidence for the validity and reliability of the WSS, and to demonstrate consensus among coworker ratings. Together, these studies provide evidence of the psychometric soundness of the WSS for assessing employee status using either self-reports or other-source reports. The implications of the development of the WSS for the study of status in organizations are discussed, and suggestions for future research using the new measure are offered. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Validation of the H-SAF precipitation product H03 over Greece using rain gauge data

    NASA Astrophysics Data System (ADS)

    Feidas, H.; Porcu, F.; Puca, S.; Rinollo, A.; Lagouvardos, C.; Kotroni, V.

    2018-01-01

    This paper presents an extensive validation of the combined infrared/microwave H-SAF (EUMETSAT Satellite Application Facility on Support to Operational Hydrology and Water Management) precipitation product H03, for a 1-year period, using gauge observations from a relatively dense network of 233 stations over Greece. First, the quality of the interpolated data used to validate the precipitation product is assessed and a quality index is constructed based on parameters such as the density of the station network and the orography. Then, a validation analysis is conducted based on comparisons of satellite (H03) with interpolated rain gauge data to produce continuous and multi-categorical statistics at monthly and annual timescales by taking into account the different geophysical characteristics of the terrain (land, coast, sea, elevation). Finally, the impact of the quality of interpolated data on the validation statistics is examined in terms of different configurations of the interpolation model and the rain gauge network characteristics used in the interpolation. The possibility of using a quality index of the interpolated data as a filter in the validation procedure is also investigated. The continuous validation statistics show yearly root mean squared error (RMSE) and mean absolute error (MAE) corresponding to 225% and 105% of the mean rain rate, respectively. Mean error (ME) indicates a slight overall tendency to underestimate the rain gauge rates, with larger underestimation at high rain rates. In general, the H03 algorithm does not retrieve light (< 1 mm/h) or convective-type (> 10 mm/h) precipitation very well. The poor correlation between satellite and gauge data points to algorithm problems in co-locating precipitation patterns. Seasonal comparison shows that retrieval errors are lower in the cold months than in the summer months.
The multi-categorical statistics indicate that the H03 algorithm is able to discriminate efficiently
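The continuous statistics used in the comparison above (ME, MAE, RMSE) can be computed directly from matched satellite-gauge pairs; the toy rain-rate values below are illustrative, not H03 data.

```python
import math

def validation_stats(sat, gauge):
    """Continuous validation statistics for matched satellite/gauge rain rates."""
    diffs = [s - g for s, g in zip(sat, gauge)]
    n = len(diffs)
    me = sum(diffs) / n                            # mean error: negative => underestimation
    mae = sum(abs(d) for d in diffs) / n           # mean absolute error
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    return me, mae, rmse

# hypothetical co-located rain rates (mm/h): satellite vs interpolated gauge
me, mae, rmse = validation_stats([1.2, 0.0, 3.5], [1.0, 0.4, 4.0])
```

Expressing RMSE and MAE as percentages of the mean gauge rain rate, as the paper does, makes these scores comparable across wet and dry validation regions.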

  20. Using machine learning to produce near surface soil moisture estimates from deeper in situ records at U.S. Climate Reference Network (USCRN) locations: Analysis and applications to AMSR-E satellite validation

    NASA Astrophysics Data System (ADS)

    Coopersmith, Evan J.; Cosh, Michael H.; Bell, Jesse E.; Boyles, Ryan

    2016-12-01

    Surface soil moisture is a critical parameter for understanding the energy flux at the land atmosphere boundary. Weather modeling, climate prediction, and remote sensing validation are some of the applications for surface soil moisture information. The most common in situ measurements for these purposes come from sensors installed at a depth of approximately 5 cm. There are, however, sensor technologies and network designs that do not provide an estimate at this depth. If soil moisture estimates at greater depths could be extrapolated to the near surface, the value of in situ networks providing estimates at other depths would be enhanced. Soil moisture sensors from the U.S. Climate Reference Network (USCRN) were used to generate models of 5 cm soil moisture, with 10 cm soil moisture measurements and antecedent precipitation as inputs, via machine learning techniques. Validation was conducted with the available in situ 5 cm measurements. It was shown that a 5 cm estimate extrapolated from a 10 cm sensor and antecedent local precipitation produced a root-mean-squared error (RMSE) of 0.0215 m3/m3. Next, these machine-learning-generated 5 cm estimates were also compared to AMSR-E estimates at these locations, and the results were compared with the performance of the actual in situ readings against the AMSR-E data. The machine learning estimates at 5 cm produced an RMSE of approximately 0.03 m3/m3 when an optimized gain and offset were applied. This is necessary considering the performance of AMSR-E in locations characterized by high vegetation water contents, which are present across North Carolina. Lastly, this extrapolation technique is applied to the ECONet in North Carolina, which provides a 10 cm depth measurement as its shallowest soil moisture estimate. A raw RMSE of 0.028 m3/m3 was achieved, and with a linear gain and offset applied at each ECONet site, an RMSE of 0.013 m3/m3 was possible.
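The "linear gain and offset" calibration mentioned above amounts to a least-squares fit of reference values against the raw estimates before scoring RMSE. The sketch below uses toy soil-moisture values, not USCRN or ECONet data.

```python
import numpy as np

def calibrated_rmse(est, ref):
    """Fit a linear gain and offset by least squares, then score the calibrated RMSE."""
    gain, offset = np.polyfit(est, ref, 1)   # highest-degree coefficient first
    resid = gain * est + offset - ref
    return gain, offset, float(np.sqrt(np.mean(resid ** 2)))

# hypothetical volumetric soil moisture (m3/m3): raw estimates vs reference
est = np.array([0.10, 0.15, 0.20, 0.30])
ref = np.array([0.12, 0.18, 0.25, 0.33])
g, b, rmse_cal = calibrated_rmse(est, ref)
```

Because the least-squares fit minimizes the squared residual over all linear maps, the calibrated RMSE can never exceed the raw RMSE (the identity map, gain 1 and offset 0, is one candidate), which is why the ECONet error drops from 0.028 to 0.013 m3/m3 after calibration.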

  1. Pattern Storage, Bifurcations, and Groupwise Correlation Structure of an Exactly Solvable Asymmetric Neural Network Model.

    PubMed

    Fasoli, Diego; Cattani, Anna; Panzeri, Stefano

    2018-05-01

    Despite their biological plausibility, neural network models with asymmetric weights are rarely solved analytically, and closed-form solutions are available only in some limiting cases or in some mean-field approximations. We found exact analytical solutions of an asymmetric spin model of neural networks with arbitrary size without resorting to any approximation, and we comprehensively studied its dynamical and statistical properties. The network had discrete time evolution equations and binary firing rates, and it could be driven by noise with any distribution. We found analytical expressions of the conditional and stationary joint probability distributions of the membrane potentials and the firing rates. By manipulating the conditional probability distribution of the firing rates, we extended to stochastic networks the associative learning rule previously introduced by Personnaz and coworkers. The new learning rule allowed the safe storage, in the presence of noise, of point and cyclic attractors, with useful implications for content-addressable memories. Furthermore, we studied the bifurcation structure of the network dynamics in the zero-noise limit. We analytically derived examples of the codimension 1 and codimension 2 bifurcation diagrams of the network, which describe how the neuronal dynamics changes with the external stimuli. This showed that the network may undergo transitions among multistable regimes, oscillatory behavior elicited by asymmetric synaptic connections, and various forms of spontaneous symmetry breaking. We also calculated analytically groupwise correlations of neural activity in the network in the stationary regime. This revealed neuronal regimes where, statistically, the membrane potentials and the firing rates are either synchronous or asynchronous. Our results are valid for networks with any number of neurons, although our equations can be realistically solved only for small networks. For completeness, we also derived the network

  2. Mapping the temporary and perennial character of whole river networks

    NASA Astrophysics Data System (ADS)

    González-Ferreras, A. M.; Barquín, J.

    2017-08-01

    Knowledge of the spatial distribution of temporary and perennial river channels in a whole catchment is important for effective integrated basin management and river biodiversity conservation. However, this information is usually not available or is incomplete. In this study, we present a statistically based methodology to classify river segments from a whole river network (Deva-Cares catchment, Northern Spain) as temporary or perennial. This method is based on an a priori classification of a subset of river segments as temporary or perennial, using field surveys and aerial images, and then running Random Forest models to predict classification membership for the rest of the river network. The independent variables and the river network were derived following a computer-based geospatial simulation of riverine landscapes. The model results show high values of overall accuracy, sensitivity, and specificity for the evaluation of the fitted model to the training and testing data set (≥0.9). The most important independent variables were catchment area, area occupied by broadleaf forest, minimum monthly precipitation in August, and average catchment elevation. The final map shows 7525 temporary river segments (1012.5 km) and 3731 perennial river segments (662.5 km). A subsequent validation of the mapping results using River Habitat Survey data and expert knowledge supported the validity of the proposed maps. We conclude that the proposed methodology is a valid method for mapping the limits of flow permanence that could substantially increase our understanding of the spatial links between terrestrial and aquatic interfaces, improving the research, management, and conservation of river biodiversity and functioning.
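The classification step described above can be sketched with a bare-bones bootstrap ensemble of decision stumps, written without external libraries as a drastic simplification of Random Forests. The single "catchment area" feature and the toy temporary/perennial labels are illustrative assumptions, not the Deva-Cares data.

```python
import random

def stump_fit(X, y):
    """Best single-feature threshold split by misclassification count."""
    best = None
    for f in range(len(X[0])):
        for t in {x[f] for x in X}:
            for pol in (0, 1):  # polarity: label assigned above the threshold
                pred = [pol if x[f] > t else 1 - pol for x in X]
                err = sum(p != yi for p, yi in zip(pred, y))
                if best is None or err < best[0]:
                    best = (err, f, t, pol)
    return best[1:]

def stump_predict(model, x):
    f, t, pol = model
    return pol if x[f] > t else 1 - pol

def forest_fit(X, y, n_trees=15, seed=0):
    """Each stump is trained on a bootstrap resample of the segments."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        models.append(stump_fit([X[i] for i in idx], [y[i] for i in idx]))
    return models

def forest_predict(models, x):
    votes = sum(stump_predict(m, x) for m in models)
    return int(2 * votes >= len(models))

# toy rule: larger catchment area -> perennial (1), smaller -> temporary (0)
X = [[1.0], [2.0], [3.0], [10.0], [12.0], [15.0]]
y = [0, 0, 0, 1, 1, 1]
forest = forest_fit(X, y)
```

A real Random Forest additionally grows deep trees and samples features at each split; the study also reports sensitivity and specificity on held-out segments rather than training accuracy.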

  3. Probabilistic track coverage in cooperative sensor networks.

    PubMed

    Ferrari, Silvia; Zhang, Guoxian; Wettergren, Thomas A

    2010-12-01

    The quality of service of a network performing cooperative track detection is represented by the probability of obtaining multiple elementary detections over time along a target track. Recently, two different lines of research, namely, distributed-search theory and geometric transversals, have been used in the literature for deriving the probability of track detection as a function of random and deterministic sensors' positions, respectively. In this paper, we prove that these two approaches are equivalent under the same problem formulation. Also, we present a new performance function that is derived by extending the geometric-transversal approach to the case of random sensors' positions using Poisson flats. As a result, a unified approach for addressing track detection in both deterministic and probabilistic sensor networks is obtained. The new performance function is validated through numerical simulations and is shown to bring about considerable computational savings for both deterministic and probabilistic sensor networks.

  4. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks.

    PubMed

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-02-03

One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently, given the energy limitation of sensor nodes. Network coding can increase the network throughput of a WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this error-propagation property, most error-correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. Moreover, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on the theory of social networks, the informative relay nodes are selected and marked with a high trust value. The two methods, L1 optimization and the use of social characteristics, coordinate with each other and can correct propagated errors even when the fraction of polluted links is exactly 100% in a WSN where network coding is performed. The effectiveness of the error-correction scheme is validated through simulation experiments.

  5. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks

    PubMed Central

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-01-01

One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently, given the energy limitation of sensor nodes. Network coding can increase the network throughput of a WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this error-propagation property, most error-correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. Moreover, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on the theory of social networks, the informative relay nodes are selected and marked with a high trust value. The two methods, L1 optimization and the use of social characteristics, coordinate with each other and can correct propagated errors even when the fraction of polluted links is exactly 100% in a WSN where network coding is performed. The effectiveness of the error-correction scheme is validated through simulation experiments. PMID:29401668

  6. Formal Specification and Design Techniques for Wireless Sensor and Actuator Networks

    PubMed Central

    Martínez, Diego; González, Apolinar; Blanes, Francisco; Aquino, Raúl; Simo, José; Crespo, Alfons

    2011-01-01

A current trend in the development and implementation of industrial applications is to use wireless networks to interconnect the system nodes, mainly to increase application flexibility, reliability and portability, as well as to reduce the implementation cost. However, the nondeterministic and concurrent behavior of distributed systems makes their analysis and design complex, often resulting in less than satisfactory performance in simulation and test-bed scenarios; this is caused by using imprecise models to analyze, validate and design these systems. Moreover, some simulation platforms do not support these models. This paper presents a design and validation method for Wireless Sensor and Actuator Networks (WSAN) which is based on a minimal set of wireless components represented in Colored Petri Nets (CPN). In summary, the model presented allows users to verify the design properties and structural behavior of the system. PMID:22344203

  7. Inference of Gene Regulatory Networks Using Time-Series Data: A Survey

    PubMed Central

    Sima, Chao; Hua, Jianping; Jung, Sungwon

    2009-01-01

The advent of high-throughput technologies such as microarrays has provided a platform for studying how different cellular components work together, thus creating enormous interest in mathematically modeling biological networks, particularly gene regulatory networks (GRNs). Of particular interest are modeling and inference on time-series data, which capture a more thorough picture of the system than non-temporal data do. We give an extensive review of methodologies that have been used on time-series data. Recognizing that validation is an integral part of the inference paradigm, we also present a discussion of the principles and challenges in the performance evaluation of different methods. This survey gives a panoramic view of these topics, in the anticipation that readers will be inspired to improve and/or expand the GRN inference and validation tool repository. PMID:20190956

  8. Formal specification and design techniques for wireless sensor and actuator networks.

    PubMed

    Martínez, Diego; González, Apolinar; Blanes, Francisco; Aquino, Raúl; Simo, José; Crespo, Alfons

    2011-01-01

A current trend in the development and implementation of industrial applications is to use wireless networks to interconnect the system nodes, mainly to increase application flexibility, reliability and portability, as well as to reduce the implementation cost. However, the nondeterministic and concurrent behavior of distributed systems makes their analysis and design complex, often resulting in less than satisfactory performance in simulation and test-bed scenarios; this is caused by using imprecise models to analyze, validate and design these systems. Moreover, some simulation platforms do not support these models. This paper presents a design and validation method for Wireless Sensor and Actuator Networks (WSAN) which is based on a minimal set of wireless components represented in Colored Petri Nets (CPN). In summary, the model presented allows users to verify the design properties and structural behavior of the system.

  9. IndeCut evaluates performance of network motif discovery algorithms.

    PubMed

    Ansariola, Mitra; Megraw, Molly; Koslicki, David

    2018-05-01

Genomic networks represent a complex map of molecular interactions which are descriptive of the biological processes occurring in living cells. Identifying the small over-represented circuitry patterns in these networks helps generate hypotheses about the functional basis of such complex processes. Network motif discovery is a systematic way of achieving this goal. However, a reliable network motif discovery outcome requires generating random background networks which are the result of a uniform and independent graph sampling method. To date, there has been no method to numerically evaluate whether any network motif discovery algorithm performs as intended on realistically sized datasets; thus it was not possible to assess the validity of resulting network motifs. In this work, we present IndeCut, the first method to date that characterizes network motif finding algorithm performance in terms of uniform sampling on realistically sized networks. We demonstrate that it is critical to use IndeCut prior to running any network motif finder for two reasons. First, IndeCut indicates the number of samples needed for a tool to produce an outcome that is both reproducible and accurate. Second, IndeCut allows users to choose the tool that generates samples in the most independent fashion for their network of interest among many available options. The open-source software package is available at https://github.com/megrawlab/IndeCut. Contact: megrawm@science.oregonstate.edu or david.koslicki@math.oregonstate.edu. Supplementary data are available at Bioinformatics online.

  10. A hybrid linear/nonlinear training algorithm for feedforward neural networks.

    PubMed

    McLoone, S; Brown, M D; Irwin, G; Lightbody, A

    1998-01-01

    This paper presents a new hybrid optimization strategy for training feedforward neural networks. The algorithm combines gradient-based optimization of nonlinear weights with singular value decomposition (SVD) computation of linear weights in one integrated routine. It is described for the multilayer perceptron (MLP) and radial basis function (RBF) networks and then extended to the local model network (LMN), a new feedforward structure in which a global nonlinear model is constructed from a set of locally valid submodels. Simulation results are presented demonstrating the superiority of the new hybrid training scheme compared to second-order gradient methods. It is particularly effective for the LMN architecture where the linear to nonlinear parameter ratio is large.
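The core idea of the hybrid scheme, solving for the linear output weights exactly via SVD while the nonlinear parameters are handled by gradient methods, can be illustrated for an RBF network in a few lines of numpy. This is a hedged sketch under assumed centres, widths, and target function, not the paper's algorithm: here the nonlinear parameters are simply held fixed, whereas the integrated routine would alternate gradient updates of them with linear solves like this one.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3.0, 3.0, 200)[:, None]
y = np.sin(x).ravel()  # illustrative target function

# Fixed "nonlinear" parameters (RBF centres and width); in the full hybrid
# scheme these would be refined by gradient steps between linear solves.
centres = np.linspace(-3.0, 3.0, 10)[None, :]
width = 0.8
Phi = np.exp(-((x - centres) ** 2) / (2.0 * width ** 2))  # hidden-layer outputs

# Linear output weights via SVD-based least squares (np.linalg.lstsq uses
# an SVD internally), instead of gradient descent:
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
rel_err = np.linalg.norm(Phi @ w - y) / np.linalg.norm(y)
```

Because the output layer is linear in its weights, this one-shot solve replaces many gradient iterations, which is exactly why the linear-to-nonlinear parameter ratio matters for the speedup.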

  11. Identifying critical transitions and their leading biomolecular networks in complex diseases.

    PubMed

    Liu, Rui; Li, Meiyi; Liu, Zhi-Ping; Wu, Jiarui; Chen, Luonan; Aihara, Kazuyuki

    2012-01-01

    Identifying a critical transition and its leading biomolecular network during the initiation and progression of a complex disease is a challenging task, but holds the key to early diagnosis and further elucidation of the essential mechanisms of disease deterioration at the network level. In this study, we developed a novel computational method for identifying early-warning signals of the critical transition and its leading network during a disease progression, based on high-throughput data using a small number of samples. The leading network makes the first move from the normal state toward the disease state during a transition, and thus is causally related with disease-driving genes or networks. Specifically, we first define a state-transition-based local network entropy (SNE), and prove that SNE can serve as a general early-warning indicator of any imminent transitions, regardless of specific differences among systems. The effectiveness of this method was validated by functional analysis and experimental data.

  12. An Algorithm for Critical Nodes Problem in Social Networks Based on Owen Value

    PubMed Central

    Wang, Xue-Guang

    2014-01-01

Discovering critical nodes in social networks has many important applications. To find the critical nodes while accounting for the widespread community structure in social networks, we obtain each node's marginal contribution by the Owen value, and from this derive a method for solving the critical-node problem. We validate the feasibility and effectiveness of our method on two synthetic datasets and six real datasets. Moreover, the result obtained by applying our method to a terrorist network is in line with the actual situation. PMID:25006592

  13. A pilot study on diagnostic sensor networks for structure health monitoring.

    DOT National Transportation Integrated Search

    2013-08-01

The proposal was submitted in an effort to obtain some preliminary results on using sensor networks for real-time structure health monitoring. The proposed work is twofold: to develop and validate an effective algorithm for the diagnosis of coupled...

  14. A program to compute the soft Robinson-Foulds distance between phylogenetic networks.

    PubMed

    Lu, Bingxin; Zhang, Louxin; Leong, Hon Wai

    2017-03-14

Over the past two decades, phylogenetic networks have been studied to model reticulate evolutionary events. The relationships among phylogenetic networks, phylogenetic trees and clusters serve as the basis for the reconstruction and comparison of phylogenetic networks. To understand these relationships, two problems are raised: the tree containment problem, which asks whether a phylogenetic tree is displayed in a phylogenetic network, and the cluster containment problem, which asks whether a cluster is represented at a node in a phylogenetic network. Both problems are NP-complete. A fast exponential-time algorithm for the cluster containment problem on arbitrary networks is developed and implemented in C. The resulting program is further extended into a computer program for fast computation of the soft Robinson-Foulds distance between phylogenetic networks. Two computer programs are developed for facilitating the reconstruction and validation of phylogenetic network models in evolutionary and comparative genomics. Our simulation tests indicated that they are fast enough for use in practice. Additionally, our simulation data show that the distribution of the soft Robinson-Foulds distance between phylogenetic networks is unlikely to be normal.
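For intuition, the classical (hard) Robinson-Foulds distance on trees is just the size of the symmetric difference of the two cluster (clade) sets. The toy below shows only this tree special case of the cluster-based comparison; the program's "soft" distance on networks generalizes it via the cluster containment problem, which this sketch does not implement.

```python
# Each tree is encoded by its set of non-trivial clusters (clades),
# with each cluster a frozenset of leaf labels.
def rf_distance(clusters_a, clusters_b):
    """Robinson-Foulds distance between two trees given as cluster sets:
    the number of clusters present in one tree but not the other."""
    return len(clusters_a ^ clusters_b)  # symmetric difference

# Two four-leaf trees that agree on {A,B} but disagree on the next cluster up.
tree_a = {frozenset({"A", "B"}), frozenset({"A", "B", "C"})}
tree_b = {frozenset({"A", "B"}), frozenset({"A", "B", "D"})}
d = rf_distance(tree_a, tree_b)  # -> 2
```

For networks, a cluster may be represented "softly" at some node rather than displayed by a fixed tree, which is why the network version requires the (NP-complete) cluster containment machinery rather than a plain set comparison.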

  15. Time series analysis of temporal networks

    NASA Astrophysics Data System (ADS)

    Sikdar, Sandipan; Ganguly, Niloy; Mukherjee, Animesh

    2016-01-01

A common but important feature of all real-world networks is that they are temporal in nature, i.e., the network structure changes over time. Due to this dynamic nature, it becomes difficult to propose suitable growth models that can explain the various important characteristic properties of these networks. In fact, in many application-oriented studies, only knowing these properties is sufficient. For instance, if one wishes to launch a targeted attack on a network, this can be done even without knowledge of the full network structure; rather, an estimate of some of the properties is sufficient to launch the attack. In this paper, we show that even if the network structure at a future time point is not available, one can still manage to estimate its properties. We propose a novel method to map a temporal network to a set of time-series instances, analyze them, and, using a standard time-series forecast model, predict the properties of the temporal network at a later time instance. To this aim, we consider eight properties, such as the number of active nodes, average degree, and clustering coefficient, and apply our prediction framework to them. We mainly focus on the temporal network of human face-to-face contacts and observe that it represents a stochastic process with memory that can be modeled as an Auto-Regressive Integrated Moving Average (ARIMA) process. We use cross-validation techniques to find the percentage accuracy of our predictions. An important observation is that the frequency-domain properties of the time series, obtained from spectrogram analysis, could be used to refine the prediction framework by identifying beforehand the cases where the prediction error is likely to be high. This leads to an improvement of 7.96% (for error level ≤20%) in prediction accuracy on average across all datasets. As an application, we show how such a prediction scheme can be used to launch targeted attacks on temporal networks.
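The mapping described above, extract a network property per snapshot and forecast the resulting series, can be sketched in plain numpy. The paper fits full ARIMA models; the hedged stand-in below fits a simple AR(2) by least squares on a synthetic "active nodes per snapshot" series (all values invented), just to show the series-to-forecast pipeline without extra dependencies.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(120)
# Synthetic "number of active nodes" per snapshot: a daily-like cycle + noise.
series = 50.0 + 10.0 * np.sin(2.0 * np.pi * t / 24.0) + rng.normal(0.0, 1.0, t.size)

p = 2                 # AR order (a sinusoid is exactly an AR(2) process)
Y = series[p:]
# Lag matrix: row t holds [series[t-1], series[t-2]].
X = np.column_stack([series[p - 1 - i : len(series) - 1 - i] for i in range(p)])
A = np.column_stack([np.ones(len(Y)), X])   # intercept + lags
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)

pred = A @ coef       # in-sample one-step-ahead predictions
rel_mae = np.mean(np.abs(pred - Y)) / np.mean(np.abs(Y))
```

In practice one would use a proper ARIMA implementation (e.g. statsmodels) with cross-validated order selection, as the paper does; the point here is only that a temporal-network property becomes an ordinary forecasting problem once mapped to a series.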

  16. Social support network typologies and health outcomes of older people in low and middle income countries--a 10/66 Dementia Research Group population-based study.

    PubMed

    Thiyagarajan, Jotheeswaran A; Prince, Martin; Webber, Martin

    2014-08-01

    This study aims to assess the construct validity of the Wenger social support network typology in low and middle income countries. We hypothesize that, in comparison with the integrated network type, the non-integrated network type is associated with loneliness, depression, poor quality of life (less happiness), poor self-reported health, increased disability and higher care needs. Cross-sectional one-phase surveys were conducted of all residents aged 65 and over in catchment areas in eight low and middle income countries (India, China, Cuba, Dominican Republic, Venezuela, Mexico, Peru and Puerto Rico). Wenger's Practitioner Assessment of Network Type (PANT) was used to measure social network type. Family dependent, local self-contained, wider community-focused and private restricted network types were considered non-integrated, in comparison to the locally integrated network type. Overall, 17,031 participants were interviewed. Family dependent and locally integrated network types were the most prevalent. Adjusted pooled estimates across sites showed that loneliness, depression, less happiness, poor health, disability, and need for care were significantly associated with non-integrated network type. The findings of this study support the construct validity of Wenger's network typology in low and middle income countries. However, further research is required to test the criterion validity of Wenger typology using longitudinal data. Identifying older people who are vulnerable could inform the development of social care interventions to support older people and their families in the context of deteriorating health.

  17. Practical Aspects of Designing and Conducting Validation Studies Involving Multi-study Trials.

    PubMed

    Coecke, Sandra; Bernasconi, Camilla; Bowe, Gerard; Bostroem, Ann-Charlotte; Burton, Julien; Cole, Thomas; Fortaner, Salvador; Gouliarmou, Varvara; Gray, Andrew; Griesinger, Claudius; Louhimies, Susanna; Gyves, Emilio Mendoza-de; Joossens, Elisabeth; Prinz, Maurits-Jan; Milcamps, Anne; Parissis, Nicholaos; Wilk-Zasadna, Iwona; Barroso, João; Desprez, Bertrand; Langezaal, Ingrid; Liska, Roman; Morath, Siegfried; Reina, Vittorio; Zorzoli, Chiara; Zuang, Valérie

This chapter focuses on practical aspects of conducting prospective in vitro validation studies, in particular by laboratories that are members of the European Union Network of Laboratories for the Validation of Alternative Methods (EU-NETVAL), which is coordinated by the EU Reference Laboratory for Alternatives to Animal Testing (EURL ECVAM). Prospective validation studies involving EU-NETVAL, comprising a multi-study trial involving several laboratories or "test facilities", typically consist of two main steps: (1) the design of the validation study by EURL ECVAM and (2) the execution of the multi-study trial by a number of qualified laboratories within EU-NETVAL, coordinated and supported by EURL ECVAM. The approach adopted in the conduct of these validation studies adheres to the principles described in the OECD Guidance Document on the Validation and International Acceptance of new or updated test methods for Hazard Assessment No. 34 (OECD 2005). The context and scope of conducting prospective in vitro validation studies are dealt with in Chap. 4. Here we focus mainly on the processes followed to carry out a prospective validation of in vitro methods involving different laboratories, with the ultimate aim of generating a dataset that can support a decision in relation to the possible development of an international test guideline (e.g. by the OECD) or the establishment of performance standards.

  18. Predicting non-melanoma skin cancer via a multi-parameterized artificial neural network.

    PubMed

    Roffman, David; Hart, Gregory; Girardi, Michael; Ko, Christine J; Deng, Jun

    2018-01-26

Ultraviolet radiation (UVR) exposure and family history are major associated risk factors for the development of non-melanoma skin cancer (NMSC). The objective of this study was to develop and validate a multi-parameterized artificial neural network based on available personal health information for early detection of NMSC with high sensitivity and specificity, even in the absence of known UVR exposure and family history. The 1997-2015 NHIS adult survey data used to train and validate our neural network (NN) comprised 2,056 NMSC and 460,574 non-cancer cases. We extracted 13 parameters for our NN: gender, age, BMI, diabetic status, smoking status, emphysema, asthma, race, Hispanic ethnicity, hypertension, heart diseases, vigorous exercise habits, and history of stroke. This study yielded an area under the ROC curve of 0.81 for both training and validation. Our results (training sensitivity 88.5% and specificity 62.2%; validation sensitivity 86.2% and specificity 62.7%) were comparable to a previous study of basal and squamous cell carcinoma prediction that also included UVR exposure and family history information. These results indicate that our NN is robust enough to make predictions, suggesting that we have identified novel associations and potential predictive parameters of NMSC.

  19. Performance evaluation of power control algorithms in wireless cellular networks

    NASA Astrophysics Data System (ADS)

    Temaneh-Nyah, C.; Iita, V.

    2014-10-01

Power control in a mobile communication network aims to control the transmission power levels in such a way that the required quality of service (QoS) for the users is guaranteed with the lowest possible transmission powers. Most studies of power control algorithms in the literature are based on simplifying assumptions of some kind, which compromises the validity of the results when applied in a real environment. In this paper, a CDMA network was simulated. The real environment was accounted for by defining the analysis area, defining the network base stations and mobile stations by their geographical coordinates, and accounting for the mobility of the mobile stations. The simulation also allowed a number of network parameters, including the network traffic and the wireless channel models, to be modified. Finally, we present the simulation results of a convergence-speed-based comparative analysis of three uplink power control algorithms.
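The best-known family of uplink power-control iterations that such convergence-speed comparisons cover is the distributed Foschini-Miljanic update, in which each link scales its power by the ratio of its target SINR to its measured SINR. The sketch below uses an invented three-link gain matrix and target; the paper's three specific algorithms are not named in the abstract, so this illustrates the general mechanism only.

```python
import numpy as np

G = np.array([[1.0, 0.1, 0.1],
              [0.2, 1.0, 0.1],
              [0.1, 0.1, 1.0]])   # illustrative link-gain matrix G[i, j]
noise = 1e-3                      # receiver noise power
gamma = 2.0                       # common target SINR (feasible for this G)
p = np.full(3, 0.1)               # initial transmit powers

def sinr(p):
    # SINR_i = G_ii * p_i / (sum_{j != i} G_ij * p_j + noise)
    interference = G @ p - np.diag(G) * p + noise
    return np.diag(G) * p / interference

for _ in range(200):
    p = (gamma / sinr(p)) * p     # fully distributed: each link uses only its own SINR

final_sinr = sinr(p)              # converges to the target when it is feasible
```

Convergence speed, the comparison axis in the paper, is governed here by the spectral radius of the normalized interference matrix scaled by the target SINR; infeasible targets make the powers diverge instead.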

  20. A generalized optimization principle for asymmetric branching in fluidic networks

    PubMed Central

    Stephenson, David

    2016-01-01

    When applied to a branching network, Murray’s law states that the optimal branching of vascular networks is achieved when the cube of the parent channel radius is equal to the sum of the cubes of the daughter channel radii. It is considered integral to understanding biological networks and for the biomimetic design of artificial fluidic systems. However, despite its ubiquity, we demonstrate that Murray’s law is only optimal (i.e. maximizes flow conductance per unit volume) for symmetric branching, where the local optimization of each individual channel corresponds to the global optimum of the network as a whole. In this paper, we present a generalized law that is valid for asymmetric branching, for any cross-sectional shape, and for a range of fluidic models. We verify our analytical solutions with the numerical optimization of a bifurcating fluidic network for the examples of laminar, turbulent and non-Newtonian fluid flows. PMID:27493583
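The cube law can be checked numerically from the classical power-minimization argument that motivates Murray's law: Poiseuille pumping cost scales as Q²/r⁴ and wall-maintenance cost as r², so each channel's optimal radius satisfies r ∝ Q^(1/3), and flow conservation then gives the cube relation. The constants below are illustrative; note this simple per-channel cost decouples across branches, which is precisely the local optimization whose global validity the paper shows breaks down for asymmetric branching under the conductance-per-unit-volume objective (the generalized law itself is not reproduced here).

```python
def optimal_radius(Q, k_pump=1.0, k_wall=1.0):
    # Per unit length: cost(r) = k_pump * Q**2 / r**4   (Poiseuille pumping)
    #                          + k_wall * r**2          (maintenance ~ volume)
    # dC/dr = -4*k_pump*Q**2/r**5 + 2*k_wall*r = 0
    #   =>  r**6 = 2*k_pump*Q**2/k_wall   =>  r ~ Q**(1/3)
    return (2.0 * k_pump * Q * Q / k_wall) ** (1.0 / 6.0)

Q1, Q2 = 3.0, 5.0                   # daughter flows (asymmetric on purpose)
r_parent = optimal_radius(Q1 + Q2)  # parent carries the combined flow
r_d1, r_d2 = optimal_radius(Q1), optimal_radius(Q2)
cube_gap = abs(r_parent**3 - (r_d1**3 + r_d2**3))  # Murray: should be ~0
```

Under this cost model r³ = √2·Q exactly, so the cube sum closes to floating-point precision even for unequal daughter flows.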

  1. Enabling parallel simulation of large-scale HPC network systems

    DOE PAGES

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; ...

    2016-04-07

Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems, in particular networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.

  2. Enabling parallel simulation of large-scale HPC network systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.

Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems, in particular networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.

3. Discovery and validation of gene classifiers for endocrine-disrupting chemicals in zebrafish (Danio rerio)

    PubMed Central

    2012-01-01

Background: Development and application of transcriptomics-based gene classifiers for ecotoxicological applications lag far behind those of biomedical sciences. Many such classifiers discovered thus far lack rigorous statistical and experimental validation. A combination of genetic algorithm/support vector machines and genetic algorithm/K-nearest neighbors was used in this study to search for classifiers of endocrine-disrupting chemicals (EDCs) in zebrafish. Searches were conducted on both tissue-specific and tissue-combined datasets, either across the entire transcriptome or within individual transcription factor (TF) networks previously linked to EDC effects. Candidate classifiers were evaluated by gene set enrichment analysis (GSEA) on both the original training data and a dedicated validation dataset. Results: The multi-tissue dataset yielded no classifiers. Among the 19 chemical-tissue conditions evaluated, the transcriptome-wide searches yielded classifiers for six of them, each having approximately 20 to 30 gene features unique to a condition. Searches within individual TF networks produced classifiers for 15 chemical-tissue conditions, each containing 100 or fewer top-ranked gene features pooled from those of multiple TF networks and also unique to each condition. For the training dataset, 10 out of 11 classifiers successfully identified the gene expression profiles (GEPs) of their targeted chemical-tissue conditions by GSEA. For the validation dataset, classifiers for prochloraz-ovary and flutamide-ovary also correctly identified the GEPs of the corresponding conditions, while no classifier could predict the GEP from prochloraz-brain. Conclusions: The discrepancies in the performance of these classifiers were attributed in part to varying data complexity among the conditions, as measured to some degree by Fisher’s discriminant ratio statistic. This variation in data complexity could likely be compensated by adjusting sample size for individual chemical

  4. Development and validation of a general approach to predict and quantify the synergism of anti-cancer drugs using experimental design and artificial neural networks.

    PubMed

    Pivetta, Tiziana; Isaia, Francesco; Trudu, Federica; Pani, Alessandra; Manca, Matteo; Perra, Daniela; Amato, Filippo; Havel, Josef

    2013-10-15

    The combination of two or more drugs using multidrug mixtures is a trend in the treatment of cancer. The goal is to search for a synergistic effect and thereby reduce the required dose and inhibit the development of resistance. An advanced model-free approach for data exploration and analysis, based on artificial neural networks (ANN) and experimental design is proposed to predict and quantify the synergism of drugs. The proposed method non-linearly correlates the concentrations of drugs with the cytotoxicity of the mixture, providing the possibility of choosing the optimal drug combination that gives the maximum synergism. The use of ANN allows for the prediction of the cytotoxicity of each combination of drugs in the chosen concentration interval. The method was validated by preparing and experimentally testing the combinations with the predicted highest synergistic effect. In all cases, the data predicted by the network were experimentally confirmed. The method was applied to several binary mixtures of cisplatin and [Cu(1,10-orthophenanthroline)2(H2O)](ClO4)2, Cu(1,10-orthophenanthroline)(H2O)2(ClO4)2 or [Cu(1,10-orthophenanthroline)2(imidazolidine-2-thione)](ClO4)2. The cytotoxicity of the two drugs, alone and in combination, was determined against human acute T-lymphoblastic leukemia cells (CCRF-CEM). For all systems, a synergistic effect was found for selected combinations. © 2013 Elsevier B.V. All rights reserved.

  5. Formal Models of the Network Co-occurrence Underlying Mental Operations

    PubMed Central

    Bzdok, Danilo; Varoquaux, Gaël; Grisel, Olivier; Eickenberg, Michael; Poupon, Cyril; Thirion, Bertrand

    2016-01-01

    Systems neuroscience has identified a set of canonical large-scale networks in humans. These have predominantly been characterized by resting-state analyses of the task-unconstrained, mind-wandering brain. Their explicit relationship to defined task performance is largely unknown and remains challenging. The present work contributes a multivariate statistical learning approach that can extract the major brain networks and quantify their configuration during various psychological tasks. The method is validated in two extensive datasets (n = 500 and n = 81) by model-based generation of synthetic activity maps from recombination of shared network topographies. To study a use case, we formally revisited the poorly understood difference between neural activity underlying idling versus goal-directed behavior. We demonstrate that task-specific neural activity patterns can be explained by plausible combinations of resting-state networks. The possibility of decomposing a mental task into the relative contributions of major brain networks, the "network co-occurrence architecture" of a given task, opens an alternative access to the neural substrates of human cognition. PMID:27310288

  6. Multiscale GPS tomography during COPS: validation and applications

    NASA Astrophysics Data System (ADS)

    Champollion, Cédric; Flamant, Cyrille; Masson, Frédéric; Gégout, Pascal; Boniface, Karen; Richard, Evelyne

    2010-05-01

    Accurate 3D description of the water vapour field is of interest for process studies such as convection initiation. None of the current techniques (LIDAR, satellite, radio soundings, GPS) can provide an all-weather, continuous 3D field of moisture. The combination of GPS tomography with radio soundings (and/or LIDAR) has been used for such process studies, exploiting both the vertical resolution of soundings and the high temporal density of GPS measurements. GPS tomography has been used at short scale (10 km horizontal resolution but in a 50 km² area) for process studies such as the ESCOMPTE experiment (Bastin et al., 2005) and at larger scale (50 km horizontal resolution) during IHOP_2002. But no extensive statistical validation has been done so far. The overarching goal of the COPS field experiment is to advance the quality of forecasts of orographically induced convective precipitation by four-dimensional observations and modeling of its life cycle, identifying the physical and chemical processes responsible for deficiencies in QPF over low-mountain regions. During the COPS field experiment, a GPS network of about 100 GPS stations was continuously operating for three months in an area of 500 km² in the East of France (Vosges Mountains) and West of Germany (Black Forest). While the mean spacing between the GPS stations is about 50 km, an East-West GPS profile with a density of about 10 km is dedicated to high-resolution tomography. One major goal of the GPS COPS experiment is to validate the GPS tomography with different spatial resolutions. Validation is based on additional radio soundings and airborne / ground-based LIDAR measurements. The number and the high quality of vertically resolved water vapor observations give a unique data set for GPS tomography validation. Numerous tests have been done on real data to show the types of water vapor structures that can be imaged by GPS tomography depending on the assimilation of additional data (radio soundings), the

  7. A hybrid network-based method for the detection of disease-related genes

    NASA Astrophysics Data System (ADS)

    Cui, Ying; Cai, Meng; Dai, Yang; Stanley, H. Eugene

    2018-02-01

    Detecting disease-related genes is crucial in disease diagnosis and drug design. The accepted view is that neighbors of a disease-causing gene in a molecular network tend to cause the same or similar diseases, and network-based methods have recently been developed to identify novel hereditary disease genes in available biomedical networks. Despite the steady increase in the discovery of disease-associated genes, a large fraction of disease genes still remains undiscovered. In this paper we exploit the topological properties of the protein-protein interaction (PPI) network to detect disease-related genes. We compute, analyze, and compare the topological properties of disease genes with non-disease genes in PPI networks. We also design an improved random forest classifier based on these network topological features, and a cross-validation test confirms that our method performs better than previous similar studies.
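The kind of topological features fed to such a classifier can be sketched directly. This is a hedged, stdlib-only illustration on a made-up four-protein graph: node degree and the local clustering coefficient, two of the standard PPI network features (the classifier itself is omitted).

```python
# Topological features from a toy PPI graph stored as an adjacency dict.

def degree(adj, node):
    return len(adj[node])

def clustering(adj, node):
    """Fraction of a node's neighbor pairs that are themselves connected."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

# Made-up PPI graph: triangle A-B-C with a pendant protein D attached to C.
ppi = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
features = {n: (degree(ppi, n), clustering(ppi, n)) for n in ppi}
```

Feature vectors like these, computed per gene, are what a random forest would then separate into disease versus non-disease classes under cross-validation.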

  8. Ubiquitousness of link-density and link-pattern communities in real-world networks

    NASA Astrophysics Data System (ADS)

    Šubelj, L.; Bajec, M.

    2012-01-01

    Community structure appears to be an intrinsic property of many complex real-world networks. However, recent work shows that real-world networks reveal even more sophisticated modules than classical cohesive (link-density) communities. In particular, networks can also be naturally partitioned according to similar patterns of connectedness among the nodes, revealing link-pattern communities. We here propose a propagation-based algorithm that can extract both link-density and link-pattern communities, without any prior knowledge of the true structure. The algorithm was first validated on different classes of synthetic benchmark networks with community structure, and also on random networks. We have further applied the algorithm to different social, information, technological and biological networks, where it indeed reveals meaningful (composites of) link-density and link-pattern communities. The results thus seem to imply that, similarly to their link-density counterparts, link-pattern communities appear ubiquitous in nature and design.
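The propagation idea can be sketched with classic label propagation for link-density communities (the paper's algorithm generalizes this to link-pattern communities). A hedged toy version: ties break deterministically toward the largest label so the sketch is reproducible; the graph is two triangles joined by one bridge edge.

```python
# Minimal label propagation: each node repeatedly adopts the most common
# label among its neighbors until no label changes.

from collections import Counter

def label_propagation(adj, rounds=10):
    labels = {n: n for n in adj}          # start with unique labels
    for _ in range(rounds):
        changed = False
        for node in sorted(adj):
            counts = Counter(labels[nbr] for nbr in adj[node])
            top = max(counts.values())
            new = max(l for l, c in counts.items() if c == top)  # tie-break
            if new != labels[node]:
                labels[node], changed = new, True
        if not changed:
            break
    return labels

# Two triangles {0,1,2} and {3,4,5} joined by the bridge edge 2-3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
labels = label_propagation(adj)
```

The propagation settles with one label per triangle, recovering the two planted communities without any prior knowledge of the structure.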

  9. Predicting the survival of diabetes using neural network

    NASA Astrophysics Data System (ADS)

    Mamuda, Mamman; Sathasivam, Saratha

    2017-08-01

    Data mining techniques are at present used for predicting diseases in the health care industry, and neural networks are among the prevailing data mining methods for this purpose. This paper presents a study on predicting the survival of diabetes using different supervised learning algorithms for neural networks. Three learning algorithms are considered in this study: (i) the Levenberg-Marquardt learning algorithm, (ii) the Bayesian regularization learning algorithm, and (iii) the scaled conjugate gradient learning algorithm. The network is trained using the Pima Indian Diabetes Dataset with the help of MATLAB R2014(a) software. The performance of each algorithm is further discussed through regression analysis. The prediction accuracy of the best algorithm is further computed to validate its predictions.
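The study trains its networks with Levenberg-Marquardt, Bayesian regularization and scaled conjugate gradient in MATLAB; those solvers are out of scope for a short sketch. As a hedged stand-in, the following trains a single logistic neuron by plain gradient descent on made-up "diabetes-like" rows (two features, 0/1 outcome), just to show the train/evaluate loop such comparisons rest on.

```python
# Hedged stand-in: one sigmoid neuron trained by stochastic gradient descent.

import math

def train(data, lr=0.5, epochs=200):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                       # gradient of log-loss w.r.t. z
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

def log_loss(data, w, b):
    total = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total / len(data)

# Made-up rows standing in for Pima features scaled to [0, 1].
toy = [([0.2, 0.1], 0), ([0.3, 0.2], 0), ([0.8, 0.9], 1), ([0.9, 0.7], 1)]
initial = log_loss(toy, [0.0, 0.0], 0.0)   # untrained loss = ln 2
w, b = train(toy)
final = log_loss(toy, w, b)
```

Comparing `final` loss across different optimizers on held-out data is, in miniature, the algorithm comparison the abstract describes.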

  10. Video transmission on ATM networks. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung

    1993-01-01

    The broadband integrated services digital network (B-ISDN) is expected to provide high-speed and flexible multimedia applications. Multimedia includes data, graphics, image, voice, and video. Asynchronous transfer mode (ATM) is the adopted transport technique for B-ISDN and has the potential to provide a more efficient and integrated environment for multimedia. It is believed that most broadband applications will make heavy use of visual information. The prospect of widespread use of image and video communication has led to interest in coding algorithms for reducing bandwidth requirements and improving image quality. The major results of a study on the bridging of network transmission performance and video coding are: Using two representative video sequences, several video source models are developed. The fitness of these models is validated through the use of statistical tests and network queuing performance. A dual leaky bucket algorithm is proposed as an effective network policing function. The concept of the dual leaky bucket algorithm can be applied to a prioritized coding approach to achieve transmission efficiency. A mapping of the performance/control parameters at the network level into equivalent parameters at the video coding level is developed. Based on that, a complete set of principles for the design of video codecs for network transmission is proposed.
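The dual leaky bucket policing idea can be sketched concretely: one bucket polices the sustained cell rate, a second the peak rate, and an arriving ATM cell conforms only if both buckets accept it. The parameters below are illustrative, not taken from the thesis.

```python
# Hedged sketch of dual leaky bucket policing for ATM cells.

class LeakyBucket:
    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth   # drain rate (cells/s), capacity
        self.level, self.last = 0.0, 0.0

    def drained(self, t):
        """Bucket level at time t after continuous draining."""
        return max(0.0, self.level - (t - self.last) * self.rate)

def police(cell_times, sustained, peak):
    """Per-cell conformance: a cell must fit in BOTH buckets to conform."""
    verdicts = []
    for t in cell_times:
        ok = all(b.drained(t) + 1.0 <= b.depth for b in (sustained, peak))
        for b in (sustained, peak):
            b.level = b.drained(t) + (1.0 if ok else 0.0)  # add only if conforming
            b.last = t
        verdicts.append(ok)
    return verdicts

# Sustained rate 1 cell/s with burst depth 2; a generous peak-rate bucket.
result = police([0.0, 0.1, 0.2],
                LeakyBucket(rate=1.0, depth=2.0),
                LeakyBucket(rate=10.0, depth=1.0))
```

Here the third back-to-back cell exceeds the sustained-rate bucket and is marked non-conforming, which is exactly the policing decision the network would act on.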

  11. Flood quantile estimation at ungauged sites by Bayesian networks

    NASA Astrophysics Data System (ADS)

    Mediero, L.; Santillán, D.; Garrote, L.

    2012-04-01

    A stochastic generator of synthetic data was developed. Synthetic basin characteristics were randomised, keeping the statistical properties of observed physical and climatic variables in the homogeneous region. The synthetic flood quantiles were stochastically generated using the regression equation as a basis. The learnt Bayesian network was validated by the reliability diagram, the Brier score and the ROC diagram, which are common measures used in the validation of probabilistic forecasts. Summarising, flood quantile estimation through Bayesian networks supplies information about the prediction uncertainty, since a probability distribution function of discharges is given as the result. Therefore, the Bayesian network model has application as a decision support tool for water resources planning and management.
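One of the validation measures named above is simple enough to state inline: the Brier score is the mean squared difference between forecast probabilities and binary outcomes (lower is better, 0 is perfect). The forecast values below are hypothetical.

```python
# Brier score for probabilistic forecasts of a binary event.

def brier_score(forecasts, outcomes):
    n = len(forecasts)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / n

# Made-up exceedance forecasts for a flood quantile vs. observed occurrences.
score = brier_score([0.9, 0.8, 0.1], [1, 1, 0])
```

A score of 0.02 on these three forecasts reflects well-calibrated, confident predictions; the reliability and ROC diagrams complement this single-number summary.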

  12. Manifold absolute pressure estimation using neural network with hybrid training algorithm

    PubMed Central

    Selamat, Hazlina; Alimin, Ahmad Jais; Haniff, Mohamad Fadzli

    2017-01-01

    In a modern small gasoline engine fuel injection system, the load of the engine is estimated based on the measurement of the manifold absolute pressure (MAP) sensor, which is located in the intake manifold. This paper presents a more economical approach to estimating the MAP using only the measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network combining the Levenberg-Marquardt (LM) algorithm, the Bayesian Regularization (BR) algorithm and the Particle Swarm Optimization (PSO) algorithm. Based on the results found in 20 runs, the second variant of the hybrid algorithm yields better network performance than the first variant, LM, LM with BR, and PSO, estimating the MAP closer to the simulated MAP values. Using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions, showing closer MAP estimation to the actual values. PMID:29190779
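Of the three algorithms combined above, particle swarm optimization is the easiest to sketch in isolation. This hedged toy PSO minimizes a 1-D quadratic rather than training a network; the swarm parameters and objective are illustrative only.

```python
# Minimal particle swarm optimization on a 1-D objective.

import random

def pso(f, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]  # positions
    vs = [0.0] * n_particles                                # velocities
    pbest = xs[:]                        # each particle's best position
    gbest = min(xs, key=f)               # swarm's best position so far
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

best = pso(lambda x: (x - 3.0) ** 2, -10.0, 10.0)   # minimum at x = 3
```

In the hybrid schemes above, an outer loop like this would propose network weights, with LM or BR refining them; here the quadratic stands in for the network's error surface.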

  13. Validation of a SysML based design for wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Berrachedi, Amel; Rahim, Messaoud; Ioualalen, Malika; Hammad, Ahmed

    2017-07-01

    When developing complex systems, the requirement for the verification of the systems' design is one of the main challenges. Wireless Sensor Networks (WSNs) are examples of such systems. We address the problem of how WSNs must be designed to fulfil the system requirements. Using the SysML language, we propose a Model Based System Engineering (MBSE) specification and verification methodology for designing WSNs. This methodology uses SysML to describe the WSNs' requirements, structure and behaviour. Then, it translates the SysML elements to an analytic model, specifically a Deterministic and Stochastic Petri Net. The proposed approach makes it possible to design WSNs and study their behaviour and energy performance.

  14. Contributions of the SDR Task Network tool to Calibration and Validation of the NPOESS Preparatory Project instruments

    NASA Astrophysics Data System (ADS)

    Feeley, J.; Zajic, J.; Metcalf, A.; Baucom, T.

    2009-12-01

    The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP) Calibration and Validation (Cal/Val) team is planning post-launch activities to calibrate the NPP sensors and validate Sensor Data Records (SDRs). The IPO has developed a web-based data collection and visualization tool in order to effectively collect, coordinate, and manage the calibration and validation tasks for the OMPS, ATMS, CrIS, and VIIRS instruments. This tool is accessible to the multi-institutional Cal/Val teams consisting of the Prime Contractor and Government Cal/Val leads along with the NASA NPP Mission team, and is used for mission planning and identification/resolution of conflicts between sensor activities. Visualization techniques aid in displaying task dependencies, including prerequisites and exit criteria, allowing for the identification of a critical path. This presentation will highlight how the information is collected, displayed, and used to coordinate the diverse instrument calibration/validation teams.

  15. Network-based expression analyses and experimental validations revealed high co-expression between Yap1 and stem cell markers compared to differentiated cells.

    PubMed

    Dehghanian, Fariba; Hojati, Zohreh; Esmaeili, Fariba; Masoudi-Nejad, Ali

    2018-05-21

    The Hippo signaling pathway is identified as a potential regulatory pathway which plays critical roles in differentiation and stem cell self-renewal. Yap1 is a primary transcriptional effector of this pathway. The importance of Yap1 in embryonic stem cells (ESCs) and the differentiation procedure remains a challenging question, since two different observations have been reported. To answer this question we used co-expression network and differential co-expression analyses followed by experimental validations. Our results indicate that Yap1 is highly co-expressed with stem cell markers in ESCs but not in differentiated cells (DCs). Significant Yap1 down-regulation, as well as translocation of Yap1 into the cytoplasm during P19 differentiation, was also detected. Moreover, our results suggest the E2f7, Lin28a and Dppa4 genes as possible regulatory nuclear factors of the Hippo pathway in stem cells. The present findings are consistent with studies that suggested Yap1 as an essential factor for stem cell self-renewal. Copyright © 2018 Elsevier Inc. All rights reserved.
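The co-expression measure underlying such networks is typically the Pearson correlation between two genes' expression profiles across samples. A hedged stdlib sketch (the gene names and expression values below are made up for illustration):

```python
# Pearson correlation as a co-expression score between two genes.

import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

yap1   = [1.0, 2.0, 3.0, 4.0]   # hypothetical expression across 4 samples
marker = [2.1, 3.9, 6.0, 8.0]   # a stem-cell marker tracking Yap1
r = pearson(yap1, marker)
```

Thresholding a matrix of such pairwise scores is what yields the co-expression network edges, and comparing the matrices between ESCs and DCs is the differential co-expression analysis the abstract describes.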

  16. Continuous time Bayesian networks identify Prdm1 as a negative regulator of TH17 cell differentiation in humans

    PubMed Central

    Acerbi, Enzo; Viganò, Elena; Poidinger, Michael; Mortellaro, Alessandra; Zelante, Teresa; Stella, Fabio

    2016-01-01

    T helper 17 (TH17) cells represent a pivotal adaptive cell subset involved in multiple immune disorders in mammalian species. Deciphering the molecular interactions regulating TH17 cell differentiation is particularly critical for novel drug target discovery designed to control maladaptive inflammatory conditions. Using continuous time Bayesian networks over a time-course gene expression dataset, we inferred the global regulatory network controlling TH17 differentiation. From the network, we identified the Prdm1 gene encoding the B lymphocyte-induced maturation protein 1 as a crucial negative regulator of human TH17 cell differentiation. The results have been validated by perturbing Prdm1 expression on freshly isolated CD4+ naïve T cells: reduction of Prdm1 expression leads to augmentation of IL-17 release. These data unravel a possible novel target to control TH17 polarization in inflammatory disorders. Furthermore, this study represents the first in vitro validation of continuous time Bayesian networks as a gene network reconstruction method and as a hypothesis generation tool for wet-lab biological experiments. PMID:26976045

  17. Delay and Disruption Tolerant Networking MACHETE Model

    NASA Technical Reports Server (NTRS)

    Segui, John S.; Jennings, Esther H.; Gao, Jay L.

    2011-01-01

    To verify satisfaction of communication requirements imposed by unique missions, as early as 2000, the Communications Networking Group at the Jet Propulsion Laboratory (JPL) saw the need for an environment to support interplanetary communication protocol design, validation, and characterization. JPL's Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE), described in Simulator of Space Communication Networks (NPO-41373) NASA Tech Briefs, Vol. 29, No. 8 (August 2005), p. 44, combines various commercial, non-commercial, and in-house custom tools for simulation and performance analysis of space networks. The MACHETE environment supports orbital analysis, link budget analysis, communications network simulations, and hardware-in-the-loop testing. As NASA is expanding its Space Communications and Navigation (SCaN) capabilities to support planned and future missions, building infrastructure to maintain services and developing enabling technologies, an important and broader role is seen for MACHETE in design-phase evaluation of future SCaN architectures. To support evaluation of the developing Delay Tolerant Networking (DTN) field and its applicability for space networks, JPL developed MACHETE models for the DTN Bundle Protocol (BP) and Licklider Transmission Protocol (LTP). DTN is an Internet Research Task Force (IRTF) architecture providing communication in and/or through highly stressed networking environments such as space exploration and battlefield networks. Stressed networking environments include those with intermittent (predictable and unknown) connectivity, large and/or variable delays, and high bit error rates. To provide its services over existing domain-specific protocols, the DTN protocols reside at the application layer of the TCP/IP stack, forming a store-and-forward overlay network. The key capabilities of the Bundle Protocol include custody-based reliability, the ability to cope with intermittent connectivity

  18. Comparison of Event Detection Methods for Centralized Sensor Networks

    NASA Technical Reports Server (NTRS)

    Sauvageon, Julien; Agogino, Alice M.; Farhang, Ali; Tumer, Irem Y.

    2006-01-01

    The development of an Integrated Vehicle Health Management (IVHM) system for space vehicles has become a great concern. Smart sensor networks are one of the promising technologies attracting considerable attention. In this paper, we propose a qualitative comparison of several local event (hot spot) detection algorithms in centralized redundant sensor networks. The algorithms are compared regarding their ability to locate and evaluate an event under noise and sensor failures. The purpose of this study is to check whether the performance-to-computational-power ratio of the Mote Fuzzy Validation and Fusion algorithm is favorable compared to simpler methods.

  19. Early Detection Research Network (EDRN) | Division of Cancer Prevention

    Cancer.gov

    http://edrn.nci.nih.gov/

    EDRN is a collaborative network that maintains comprehensive infrastructure and resources critical to the discovery, development and validation of biomarkers for cancer risk and early detection. The program comprises a public/private sector consortium to accelerate the development of biomarkers that will change medical practice, ensure data

  20. Validation of PROMIS ® Physical Function computerized adaptive tests for orthopaedic foot and ankle outcome research.

    PubMed

    Hung, Man; Baumhauer, Judith F; Latt, L Daniel; Saltzman, Charles L; SooHoo, Nelson F; Hunt, Kenneth J

    2013-11-01

    In 2012, the American Orthopaedic Foot & Ankle Society® established a national network for collecting and sharing data on treatment outcomes and improving patient care. One of the network's initiatives is to explore the use of computerized adaptive tests (CATs) for patient-level outcome reporting. We determined whether the CAT from the NIH Patient Reported Outcome Measurement Information System® (PROMIS®) Physical Function (PF) item bank provides efficient, reliable, valid, precise, and adequately covered point estimates of patients' physical function. After informed consent, 288 patients with a mean age of 51 years (range, 18-81 years) undergoing surgery for common foot and ankle problems completed a web-based questionnaire. Efficiency was determined by time for test administration. Reliability was assessed with person and item reliability estimates. Validity evaluation included content validity from expert review and construct validity measured against the PROMIS® Pain CAT and patient responses based on tradeoff perceptions. Precision was assessed by standard error of measurement (SEM) across patients' physical function levels. Instrument coverage was based on a person-item map. Average time of test administration was 47 seconds. Reliability was 0.96 for person and 0.99 for item. Construct validity against the Pain CAT had an r value of -0.657 (p < 0.001). Precision had an SEM of less than 3.3 (equivalent to a Cronbach's alpha of ≥ 0.90) across a broad range of function. Concerning coverage, the ceiling effect was 0.32% and there was no floor effect. The PROMIS® PF CAT appears to be an excellent method for measuring outcomes in patients undergoing foot and ankle surgery. Further validation of the PROMIS® item banks may ultimately provide a valid and reliable tool for measuring patient-reported outcomes after injuries and treatment.
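The abstract's equivalence between an SEM below 3.3 and a Cronbach's alpha of at least 0.90 follows from the standard relation SEM = SD × sqrt(1 − reliability), assuming the PROMIS T-score metric where the reference SD is 10. A quick check:

```python
# SEM from reliability under the classical test theory relation.

import math

def sem(sd, reliability):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

# PROMIS T-scores have SD = 10; reliability (alpha) of 0.90 gives SEM ~ 3.16.
threshold_sem = sem(10.0, 0.90)
```

Since 10 × sqrt(0.10) ≈ 3.16 < 3.3, the reported precision threshold indeed corresponds to a reliability of about 0.90 or better.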