Sample records for sampling network design

  1. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments.

    PubMed

    Gopalakrishnan, V; Subramanian, V; Baskaran, R; Venkatraman, B

    2015-07-01

    A wireless-based, custom-built aerosol sampling network was designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems were used in a field measurement campaign in which sodium aerosol dispersion experiments were conducted as part of environmental impact studies related to a sodium-cooled fast reactor. The sampling network contains 40 aerosol sampling units, each comprising a custom-built sampling head and wireless control networking designed with a Programmable System on Chip (PSoC™) and XBee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is designed so that it can be used for any other environmental sampling system deployed over a wide area and uneven terrain where manual operation is difficult because simultaneous operation and status logging are required.

  2. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.

    2015-07-15

    A wireless-based, custom-built aerosol sampling network was designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems were used in a field measurement campaign in which sodium aerosol dispersion experiments were conducted as part of environmental impact studies related to a sodium-cooled fast reactor. The sampling network contains 40 aerosol sampling units, each comprising a custom-built sampling head and wireless control networking designed with a Programmable System on Chip (PSoC™) and XBee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is designed so that it can be used for any other environmental sampling system deployed over a wide area and uneven terrain where manual operation is difficult because simultaneous operation and status logging are required.

  3. Network Sampling with Memory: A proposal for more efficient sampling from social networks.

    PubMed

    Mouw, Ted; Verdery, Ashton M

    2012-08-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)-the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a "List" mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a "Search" mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS.
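
    As a hedged illustration of the design-effect comparison described in this abstract (not the authors' NSM implementation), the Python sketch below estimates DE empirically on a synthetic network: the variance of a random-walk (RDS-like) sample mean divided by the variance of a simple-random-sample mean of the same size.

    ```python
    # Illustrative only: empirical design effect (DE) of a random-walk sample
    # versus simple random sampling (SRS) on a synthetic two-community network.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    G = nx.planted_partition_graph(2, 200, 0.05, 0.005, seed=1)   # 400 nodes
    # Outcome correlated with community membership, plus noise.
    y = np.array([1.0 if v < 200 else 0.0 for v in G.nodes()]) + rng.normal(0, 0.2, 400)

    def random_walk_mean(G, y, n, rng):
        """Mean outcome along a simple random walk of length n."""
        node = rng.integers(len(G))
        vals = []
        for _ in range(n):
            vals.append(y[node])
            nbrs = list(G.neighbors(node))
            node = nbrs[rng.integers(len(nbrs))] if nbrs else rng.integers(len(G))
        return np.mean(vals)

    n, reps = 50, 2000
    rw_means = [random_walk_mean(G, y, n, rng) for _ in range(reps)]
    srs_means = [np.mean(y[rng.choice(len(y), n, replace=False)]) for _ in range(reps)]

    # DE = sampling variance of the design / sampling variance of SRS at the same n.
    print(f"empirical design effect: {np.var(rw_means) / np.var(srs_means):.2f}")
    ```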

  4. Network Sampling with Memory: A proposal for more efficient sampling from social networks

    PubMed Central

    Mouw, Ted; Verdery, Ashton M.

    2013-01-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)—the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a “List” mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a “Search” mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS. PMID:24159246

  5. Adaptive web sampling.

    PubMed

    Thompson, Steven K

    2006-12-01

    A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and of the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
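
    The selection step can be sketched as a mixture between link-tracing from an active set and a conventional draw. The code below is a simplified, hypothetical rendering of that idea (it omits the Markov chain resampling inference described in the abstract); the mixture weight d and the graph are invented.

    ```python
    # Simplified sketch of the adaptive web sampling selection step: with
    # probability d, follow a link out of the current active set; otherwise
    # draw uniformly from the unsampled units. Illustrative only.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    G = nx.erdos_renyi_graph(300, 0.02, seed=1)

    def adaptive_web_sample(G, n, d=0.7, rng=rng):
        nodes = list(G.nodes())
        sample = [rng.choice(nodes)]          # seed unit
        while len(sample) < n:
            remaining = [v for v in nodes if v not in sample]
            # links from the active set (here: the whole current sample) outward
            frontier = {v for u in sample for v in G.neighbors(u)} - set(sample)
            if frontier and rng.random() < d:
                sample.append(rng.choice(sorted(frontier)))   # adaptive, link-traced
            else:
                sample.append(rng.choice(remaining))          # conventional draw
        return sample

    print(adaptive_web_sample(G, 20)[:10])
    ```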

  6. Rapid Sampling of Hydrogen Bond Networks for Computational Protein Design.

    PubMed

    Maguire, Jack B; Boyken, Scott E; Baker, David; Kuhlman, Brian

    2018-05-08

    Hydrogen bond networks play a critical role in determining the stability and specificity of biomolecular complexes, and the ability to design such networks is important for engineering novel structures, interactions, and enzymes. One key feature of hydrogen bond networks that makes them difficult to rationally engineer is that they are highly cooperative and are not energetically favorable until the hydrogen bonding potential has been satisfied for all buried polar groups in the network. Existing computational methods for protein design are ill-equipped for creating these highly cooperative networks because they rely on energy functions and sampling strategies that are focused on pairwise interactions. To enable the design of complex hydrogen bond networks, we have developed a new sampling protocol in the molecular modeling program Rosetta that explicitly searches for sets of amino acid mutations that can form self-contained hydrogen bond networks. For a given set of designable residues, the protocol often identifies many alternative sets of mutations/networks, and we show that it can readily be applied to large sets of residues at protein-protein interfaces or in the interior of proteins. The protocol builds on a recently developed method in Rosetta for designing hydrogen bond networks that has been experimentally validated for small symmetric systems but was not extensible to many larger protein structures and complexes. The sampling protocol we describe here not only recapitulates previously validated designs with performance improvements but also yields viable hydrogen bond networks for cases where the previous method fails, such as the design of large, asymmetric interfaces relevant to engineering protein-based therapeutics.

  7. Networking for Teacher Learning: Toward a Theory of Effective Design.

    ERIC Educational Resources Information Center

    McDonald, Joseph P.; Klein, Emily J.

    2003-01-01

    Examines how teacher networks design for teacher learning, describing several dynamic tensions inherent in the designs of a sample of teacher networks and assessing the relationships of these tensions to teacher learning. The paper illustrates these design concepts with reference to the work of seven networks that aim to revamp teachers' knowledge…

  8. A proposal of optimal sampling design using a modularity strategy

    NASA Astrophysics Data System (ADS)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it has traditionally been addressed with reference to model calibration. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, detect anomalies and bursts, guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system based mainly on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
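
    The sketch below is only loosely related to the paper: it uses generic modularity-based community detection (not the sampling-oriented modularity index developed by the authors) to segment a stand-in network and nominate one candidate pressure-meter location per module.

    ```python
    # Generic illustration (assumption: plain modularity communities, not the
    # paper's sampling-oriented index): segment a stand-in pipe network into
    # modules, then place one candidate pressure meter per module at the
    # highest-degree node of that module.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.random_geometric_graph(100, 0.18, seed=2)   # stand-in for a WDN layout
    modules = greedy_modularity_communities(G)

    meters = [max(m, key=G.degree) for m in modules]
    print(f"{len(modules)} modules -> candidate meter nodes: {meters}")
    ```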

  9. Local synchronization of chaotic neural networks with sampled-data and saturating actuators.

    PubMed

    Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian

    2014-12-01

    This paper investigates the problem of local synchronization of chaotic neural networks with sampled-data and actuator saturation. A new time-dependent Lyapunov functional is proposed for the synchronization error systems. The advantage of the constructed Lyapunov functional lies in the fact that it is positive definite at sampling times but not necessarily between sampling times, and makes full use of the available information about the actual sampling pattern. A local stability condition of the synchronization error systems is derived, based on which a sampled-data controller with respect to the actuator saturation is designed to ensure that the master neural networks and slave neural networks are locally asymptotically synchronous. Two optimization problems are provided to compute the desired sampled-data controller with the aim of enlarging the set of admissible initial conditions or the admissible sampling upper bound ensuring the local synchronization of the considered chaotic neural networks. A numerical example is used to demonstrate the effectiveness of the proposed design technique.

  10. Statistical approaches used to assess and redesign surface water-quality-monitoring networks.

    PubMed

    Khalil, B; Ouarda, T B M J

    2009-11-01

    An up-to-date review of the statistical approaches utilized for the assessment and redesign of surface water quality monitoring (WQM) networks is presented. The main technical aspects of network design are covered in four sections, addressing monitoring objectives, water quality variables, sampling frequency and spatial distribution of sampling locations. This paper discusses various monitoring objectives and related procedures used for the assessment and redesign of long-term surface WQM networks. The appropriateness of each approach for the design, contraction or expansion of monitoring networks is also discussed. For each statistical approach, its advantages and disadvantages are examined from a network design perspective. Possible methods to overcome disadvantages and deficiencies in the statistical approaches that are currently in use are recommended.

  11. Nonlinear inversion of electrical resistivity imaging using pruning Bayesian neural networks

    NASA Astrophysics Data System (ADS)

    Jiang, Fei-Bo; Dai, Qian-Wei; Dong, Li

    2016-06-01

    Conventional artificial neural networks used to solve the electrical resistivity imaging (ERI) inversion problem suffer from overfitting and local minima. To solve these problems, we propose a pruning Bayesian neural network (PBNN) nonlinear inversion method and a sample design method based on the K-medoids clustering algorithm. In the sample design method, the training samples of the neural network are designed according to the prior information provided by the K-medoids clustering results; thus, the training process of the neural network is well guided. The proposed PBNN, based on Bayesian regularization, is used to select the hidden layer structure by assessing the effect of each hidden neuron on the inversion results. Then, the hyperparameter αk, which is based on the generalized mean, is chosen to guide the pruning process according to the prior distribution of the training samples under the small-sample condition. The proposed algorithm is more efficient than other common adaptive regularization methods in geophysics. The inversion of synthetic data and field data suggests that the proposed method suppresses the noise in the neural network training stage and enhances the generalization. The inversion results with the proposed method are better than those of the BPNN, RBFNN, and RRBFNN inversion methods as well as the conventional least squares inversion.
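
    A schematic of the K-medoids-guided sample design idea (not the authors' code) is shown below: cluster candidate model vectors and keep one medoid per cluster as a representative training sample. The data and the plain k-medoids routine are placeholders.

    ```python
    # Schematic sketch: pick medoids of candidate model vectors with a simple
    # k-medoids pass, then use one representative sample per cluster to guide
    # the training-set design. Data and cluster count are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))          # stand-in for candidate model vectors

    def k_medoids(X, k, n_iter=20, rng=rng):
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
        medoids = rng.choice(len(X), k, replace=False)
        for _ in range(n_iter):
            labels = np.argmin(D[:, medoids], axis=1)                # nearest-medoid assignment
            new = np.array([
                np.where(labels == j)[0][
                    np.argmin(D[np.ix_(labels == j, labels == j)].sum(axis=1))]
                for j in range(k)
            ])
            if np.array_equal(new, medoids):
                break
            medoids = new
        return medoids, labels

    medoids, labels = k_medoids(X, k=10)
    training_design = X[medoids]           # one representative per cluster
    print("selected training-sample indices:", medoids)
    ```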

  12. Design for mosquito abundance, diversity, and phenology sampling within the National Ecological Observatory Network

    USGS Publications Warehouse

    Hoekman, D.; Springer, Yuri P.; Barker, C.M.; Barrera, R.; Blackmore, M.S.; Bradshaw, W.E.; Foley, D. H.; Ginsberg, Howard; Hayden, M. H.; Holzapfel, C. M.; Juliano, S. A.; Kramer, L. D.; LaDeau, S. L.; Livdahl, T. P.; Moore, C. G.; Nasci, R.S.; Reisen, W.K.; Savage, H. M.

    2016-01-01

    The National Ecological Observatory Network (NEON) intends to monitor mosquito populations across its broad geographical range of sites because of their prevalence in food webs, sensitivity to abiotic factors, and relevance for human health. We describe the design of mosquito population sampling in the context of NEON's long-term, continental-scale monitoring program, emphasizing the sampling design schedule, priorities, and collection methods. Freely available NEON data and associated field and laboratory samples will increase our understanding of how mosquito abundance, demography, diversity, and phenology are responding to land use and climate change.

  13. Design of Neural Networks for Fast Convergence and Accuracy

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Sparks, Dean W., Jr.

    1998-01-01

    A novel procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed to provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component spacecraft design changes and measures of its performance. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence a new network is trained to minimize the error of the previous network. The design algorithm attempts to avoid the local minima phenomenon that hampers the traditional network training. A numerical example is performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.
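
    The sequential training idea, in which each new feedforward network is fit to the error left by the previous one, can be sketched as residual fitting. The example below uses scikit-learn's MLPRegressor on invented data and omits the statistical-sampling accuracy guarantee described in the abstract.

    ```python
    # Hedged sketch: each new single-hidden-layer network is fit to the residual
    # error of the networks trained so far; the ensemble prediction is their sum.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(400, 3))            # stand-in design parameters
    y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]          # stand-in performance measure

    nets, residual = [], y.copy()
    for stage in range(3):
        net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=stage)
        net.fit(X, residual)
        nets.append(net)
        residual = residual - net.predict(X)         # next net targets what is left
        print(f"stage {stage}: RMS residual = {np.sqrt(np.mean(residual**2)):.4f}")

    def predict(X_new):
        return sum(net.predict(X_new) for net in nets)
    ```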

  14. Training Valence, Instrumentality, and Expectancy Scale (T-VIES-it): Factor Structure and Nomological Network in an Italian Sample

    ERIC Educational Resources Information Center

    Zaniboni, Sara; Fraccaroli, Franco; Truxillo, Donald M.; Bertolino, Marilena; Bauer, Talya N.

    2011-01-01

    Purpose: The purpose of this study is to validate, in an Italian sample, a multidimensional training motivation measure (T-VIES-it) based on expectancy (VIE) theory, and to examine the nomological network surrounding the construct. Design/methodology/approach: Using a cross-sectional design study, 258 public sector employees in Northeast Italy…

  15. Quality-control design for surface-water sampling in the National Water-Quality Network

    USGS Publications Warehouse

    Riskin, Melissa L.; Reutter, David C.; Martin, Jeffrey D.; Mueller, David K.

    2018-04-10

    The data-quality objectives for samples collected at surface-water sites in the National Water-Quality Network include estimating the extent to which contamination, matrix effects, and measurement variability affect interpretation of environmental conditions. Quality-control samples provide insight into how well the samples collected at surface-water sites represent the true environmental conditions. Quality-control samples used in this program include field blanks, replicates, and field matrix spikes. This report describes the design for collection of these quality-control samples and the data management needed to properly identify these samples in the U.S. Geological Survey’s national database.

  16. Network Model-Assisted Inference from Respondent-Driven Sampling Data

    PubMed Central

    Gile, Krista J.; Handcock, Mark S.

    2015-01-01

    Respondent-Driven Sampling is a widely-used method for sampling hard-to-reach human populations by link-tracing over their social networks. Inference from such data requires specialized techniques because the sampling process is both partially beyond the control of the researcher, and partially implicitly defined. Therefore, it is not generally possible to directly compute the sampling weights for traditional design-based inference, and likelihood inference requires modeling the complex sampling process. As an alternative, we introduce a model-assisted approach, resulting in a design-based estimator leveraging a working network model. We derive a new class of estimators for population means and a corresponding bootstrap standard error estimator. We demonstrate improved performance compared to existing estimators, including adjustment for an initial convenience sample. We also apply the method and an extension to the estimation of HIV prevalence in a high-risk population. PMID:26640328

  17. Network Model-Assisted Inference from Respondent-Driven Sampling Data.

    PubMed

    Gile, Krista J; Handcock, Mark S

    2015-06-01

    Respondent-Driven Sampling is a widely-used method for sampling hard-to-reach human populations by link-tracing over their social networks. Inference from such data requires specialized techniques because the sampling process is both partially beyond the control of the researcher, and partially implicitly defined. Therefore, it is not generally possible to directly compute the sampling weights for traditional design-based inference, and likelihood inference requires modeling the complex sampling process. As an alternative, we introduce a model-assisted approach, resulting in a design-based estimator leveraging a working network model. We derive a new class of estimators for population means and a corresponding bootstrap standard error estimator. We demonstrate improved performance compared to existing estimators, including adjustment for an initial convenience sample. We also apply the method and an extension to the estimation of HIV prevalence in a high-risk population.
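
    For orientation only, the snippet below computes a generic inverse-degree-weighted (Hajek-type) mean, the kind of design-based estimator that the model-assisted approach above refines; it is not the estimator derived in the paper, and the numbers are invented.

    ```python
    # Generic inverse-degree-weighted mean for link-traced samples (illustrative;
    # NOT the model-assisted estimator of the paper above).
    import numpy as np

    degrees = np.array([5, 12, 3, 8, 20, 7])     # self-reported network sizes
    y       = np.array([1, 0, 1, 0, 0, 1])       # outcome (e.g., infection status)

    w = 1.0 / degrees                            # working weights ~ 1/degree
    estimate = np.sum(w * y) / np.sum(w)
    print(f"weighted prevalence estimate: {estimate:.3f}")
    ```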

  18. Sampling of temporal networks: Methods and biases

    NASA Astrophysics Data System (ADS)

    Rocha, Luis E. C.; Masuda, Naoki; Holme, Petter

    2017-11-01

    Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling methods be problem oriented to minimize the potential biases for the specific research questions on hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
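
    One of the strategies discussed (uniform sampling of nodes) can be illustrated on a toy temporal edge list; the sketch below shows how a simple statistic such as total link activity is distorted by the subsampling. The data are synthetic, not from the paper.

    ```python
    # Minimal sketch: uniform node subsampling of a temporal edge list and the
    # resulting bias in total link activity.
    import numpy as np

    rng = np.random.default_rng(0)
    # temporal edge list: (u, v, t) triples for a toy contact network
    events = [(rng.integers(100), rng.integers(100), t) for t in range(5000)]

    def node_subsample(events, keep_fraction, rng):
        nodes = {u for u, v, _ in events} | {v for u, v, _ in events}
        kept = set(rng.choice(sorted(nodes), int(keep_fraction * len(nodes)), replace=False))
        return [e for e in events if e[0] in kept and e[1] in kept]

    sub = node_subsample(events, 0.5, rng)
    # Keeping 50% of nodes retains only ~25% of events: a systematic bias that
    # analyses of link activity on the subsample must account for.
    print(f"fraction of events retained: {len(sub) / len(events):.2f}")
    ```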

  19. Design of Neural Networks for Fast Convergence and Accuracy: Dynamics and Control

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Sparks, Dean W., Jr.

    1997-01-01

    A procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed, such that once properly trained, they provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component/spacecraft design changes and measures of its performance or nonlinear dynamics of the system/components. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence a new network is trained to minimize the error of previous network. The proposed method should work for applications wherein an arbitrary large source of training data can be generated. Two numerical examples are performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.

  20. Design of neural networks for fast convergence and accuracy: dynamics and control.

    PubMed

    Maghami, P G; Sparks, D R

    2000-01-01

    A procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed, such that once properly trained, they provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component/spacecraft design changes and measures of its performance or nonlinear dynamics of the system/components. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence a new network is trained to minimize the error of previous network. The proposed method should work for applications wherein an arbitrary large source of training data can be generated. Two numerical examples are performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.

  1. Nanophotonic particle simulation and inverse design using artificial neural networks.

    PubMed

    Peurifoy, John; Shen, Yichen; Jing, Li; Yang, Yi; Cano-Renteria, Fidel; DeLacy, Brendan G; Joannopoulos, John D; Tegmark, Max; Soljačić, Marin

    2018-06-01

    We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical.
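
    The inverse-design step, taking gradients of a trained surrogate with respect to its inputs, can be sketched with automatic differentiation. The toy PyTorch example below assumes the surrogate has already been trained; the network, target, and dimensions are placeholders, not the authors' model.

    ```python
    # Hedged sketch of surrogate-based inverse design: backpropagate through a
    # (pretend-trained) network to optimize the *inputs* toward a target response.
    import torch

    torch.manual_seed(0)
    surrogate = torch.nn.Sequential(
        torch.nn.Linear(4, 64), torch.nn.ReLU(), torch.nn.Linear(64, 16))
    # (Training of the surrogate on simulated scattering data is assumed done.)

    target = torch.randn(16)                       # desired response (placeholder)
    x = torch.zeros(4, requires_grad=True)         # design parameters to optimize
    opt = torch.optim.Adam([x], lr=0.05)

    for step in range(200):
        opt.zero_grad()
        loss = torch.mean((surrogate(x) - target) ** 2)
        loss.backward()                            # analytical gradient w.r.t. x
        opt.step()

    print("optimized design parameters:", x.detach().numpy())
    ```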

  2. Simulating and assessing boson sampling experiments with phase-space representations

    NASA Astrophysics Data System (ADS)

    Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.

    2018-04-01

    The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and potential for obtaining a quantum computer to solve problems believed classically impossible. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply this to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for assessment of boson sampling devices.

  3. State criminal justice telecommunications (STACOM). Volume 4: Network design software user's guide

    NASA Technical Reports Server (NTRS)

    Lee, J. J.

    1977-01-01

    A user's guide to the network design program is presented. The program, written in FORTRAN V and implemented on a UNIVAC 1108 computer under the EXEC-8 operating system, enables the user to construct least-cost network topologies for criminal justice digital telecommunications networks. A complete description of program features, inputs, processing logic, and outputs is presented, and a sample run and a program listing are included.

  4. Social network recruitment for Yo Puedo: an innovative sexual health intervention in an underserved urban neighborhood—sample and design implications.

    PubMed

    Minnis, Alexandra M; vanDommelen-Gonzalez, Evan; Luecke, Ellen; Cheng, Helen; Dow, William; Bautista-Arredondo, Sergio; Padian, Nancy S

    2015-02-01

    Most existing evidence-based sexual health interventions focus on individual-level behavior, even though there is substantial evidence that highlights the influential role of social environments in shaping adolescents' behaviors and reproductive health outcomes. We developed Yo Puedo, a combined conditional cash transfer and life skills intervention for youth to promote educational attainment, job training, and reproductive health wellness that we then evaluated for feasibility among 162 youth aged 16-21 years in a predominantly Latino community in San Francisco, CA. The intervention targeted youth's social networks and involved recruitment and randomization of small social network clusters. In this paper we describe the design of the feasibility study and report participants' baseline characteristics. Furthermore, we examined the sample and design implications of recruiting social network clusters as the unit of randomization. Baseline data provide evidence that we successfully enrolled high risk youth using a social network recruitment approach in community and school-based settings. Nearly all participants (95%) were high risk for adverse educational and reproductive health outcomes based on multiple measures of low socioeconomic status (81%) and/or reported high risk behaviors (e.g., gang affiliation, past pregnancy, recent unprotected sex, frequent substance use; 62%). We achieved variability in the study sample through heterogeneity in recruitment of the index participants, whereas the individuals within the small social networks of close friends demonstrated substantial homogeneity across sociodemographic and risk profile characteristics. Social networks recruitment was feasible and yielded a sample of high risk youth willing to enroll in a randomized study to evaluate a novel sexual health intervention.

  5. Multi-Objective Design Of Optimal Greenhouse Gas Observation Networks

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Bergmann, D. J.; Cameron-Smith, P. J.; Gard, E.; Guilderson, T. P.; Rotman, D.; Stolaroff, J. K.

    2010-12-01

    One of the primary scientific functions of a Greenhouse Gas Information System (GHGIS) is to infer GHG source emission rates and their uncertainties by combining measurements from an observational network with atmospheric transport modeling. Certain features of the observational networks that serve as inputs to a GHGIS --for example, sampling location and frequency-- can greatly impact the accuracy of the retrieved GHG emissions. Observation System Simulation Experiments (OSSEs) provide a framework to characterize emission uncertainties associated with a given network configuration. By minimizing these uncertainties, OSSEs can be used to determine optimal sampling strategies. Designing a real-world GHGIS observing network, however, will involve multiple, conflicting objectives; there will be trade-offs between sampling density, coverage and measurement costs. To address these issues, we have added multi-objective optimization capabilities to OSSEs. We demonstrate these capabilities by quantifying the trade-offs between retrieval error and measurement costs for a prototype GHGIS, and deriving GHG observing networks that are Pareto optimal. [LLNL-ABS-452333: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.]
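
    Independently of the OSSE machinery, the Pareto-optimality screen over candidate designs can be sketched as follows; the cost and retrieval-error values are invented placeholders.

    ```python
    # Illustrative sketch: given candidate network designs scored on two
    # conflicting objectives -- measurement cost and retrieval error -- keep
    # only the Pareto-optimal (non-dominated) designs.
    import numpy as np

    rng = np.random.default_rng(0)
    cost  = rng.uniform(1, 10, 50)                 # hypothetical per-design cost
    error = 5.0 / cost + rng.normal(0, 0.2, 50)    # denser networks -> lower error

    def pareto_optimal(cost, error):
        keep = []
        for i in range(len(cost)):
            dominated = np.any((cost <= cost[i]) & (error <= error[i]) &
                               ((cost < cost[i]) | (error < error[i])))
            if not dominated:
                keep.append(i)
        return keep

    front = pareto_optimal(cost, error)
    print("Pareto-optimal design indices:", front)
    ```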

  6. QUALITY ASSURANCE PROGRAM FOR WET DEPOSITION SAMPLING AND CHEMICAL ANALYSES FOR THE NATIONAL TRENDS NETWORK.

    USGS Publications Warehouse

    Schroder, LeRoy J.; Malo, Bernard A.

    1985-01-01

    The purpose of the National Trends Network is to delineate the major inorganic constituents in the wet deposition in the United States. The approach chosen to monitor the Nation's wet deposition is to install approximately 150 automatic sampling devices with at least one collector in each state. Samples are collected at one week intervals, removed from collectors, and transported to an analytical laboratory for chemical analysis. The quality assurance program has divided wet deposition monitoring into 5 parts: (1) Sampling site selection, (2) sampling device, (3) sample container, (4) sample handling, and (5) laboratory analysis. Each of these five components is being examined using existing designs or new designs. Each existing or proposed sampling site is visited and a criteria audit is performed.

  7. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  8. Nanophotonic particle simulation and inverse design using artificial neural networks

    PubMed Central

    Peurifoy, John; Shen, Yichen; Jing, Li; Cano-Renteria, Fidel; DeLacy, Brendan G.; Joannopoulos, John D.; Tegmark, Max

    2018-01-01

    We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical. PMID:29868640

  9. Geometry-driven distributed compression of the plenoptic function: performance bounds and constructive algorithms.

    PubMed

    Gehrig, Nicolas; Dragotti, Pier Luigi

    2009-03-01

    In this paper, we study the sampling and the distributed compression of the data acquired by a camera sensor network. The effective design of these sampling and compression schemes requires, however, the understanding of the structure of the acquired data. To this end, we show that the a priori knowledge of the configuration of the camera sensor network can lead to an effective estimation of such structure and to the design of effective distributed compression algorithms. For idealized scenarios, we derive the fundamental performance bounds of a camera sensor network and clarify the connection between sampling and distributed compression. We then present a distributed compression algorithm that takes advantage of the structure of the data and that outperforms independent compression algorithms on real multiview images.

  10. On-board processing satellite network architectures for broadband ISDN

    NASA Technical Reports Server (NTRS)

    Inukai, Thomas; Faris, Faris; Shyy, Dong-Jye

    1992-01-01

    Onboard baseband processing architectures for future satellite broadband integrated services digital networks (B-ISDN's) are addressed. To assess the feasibility of implementing satellite B-ISDN services, critical design issues, such as B-ISDN traffic characteristics, transmission link design, and a trade-off between onboard circuit and fast packet switching, are analyzed. Examples of the two types of switching mechanisms and potential onboard network control functions are presented. A sample network architecture is also included to illustrate a potential onboard processing system.

  11. Social network recruitment for Yo Puedo - an innovative sexual health intervention in an underserved urban neighborhood: sample and design implications

    PubMed Central

    Minnis, Alexandra M.; vanDommelen-Gonzalez, Evan; Luecke, Ellen; Cheng, Helen; Dow, William; Bautista-Arredondo, Sergio; Padian, Nancy S.

    2016-01-01

    Most existing evidence-based sexual health interventions focus on individual-level behavior, even though there is substantial evidence that highlights the influential role of social environments in shaping adolescents’ behaviors and reproductive health outcomes. We developed Yo Puedo, a combined conditional cash transfer (CCT) and life skills intervention for youth to promote educational attainment, job training, and reproductive health wellness that we then evaluated for feasibility among 162 youth aged 16–21 years in a predominantly Latino community in San Francisco, CA. The intervention targeted youth’s social networks and involved recruitment and randomization of small social network clusters. In this paper we describe the design of the feasibility study and report participants’ baseline characteristics. Furthermore, we examined the sample and design implications of recruiting social network clusters as the unit of randomization. Baseline data provide evidence that we successfully enrolled high risk youth using a social network recruitment approach in community and school-based settings. Nearly all participants (95%) were high risk for adverse educational and reproductive health outcomes based on multiple measures of low socioeconomic status (81%) and/or reported high risk behaviors (e.g., gang affiliation, past pregnancy, recent unprotected sex, frequent substance use) (62%). We achieved variability in the study sample through heterogeneity in recruitment of the index participants, whereas the individuals within the small social networks of close friends demonstrated substantial homogeneity across sociodemographic and risk profile characteristics. Social networks recruitment was feasible and yielded a sample of high risk youth willing to enroll in a randomized study to evaluate a novel sexual health intervention. PMID:25358834

  12. Lognormal kriging for the assessment of reliability in groundwater quality control observation networks

    USGS Publications Warehouse

    Candela, L.; Olea, R.A.; Custodio, E.

    1988-01-01

    Groundwater quality observation networks are examples of discontinuous sampling on variables presenting spatial continuity and highly skewed frequency distributions. Anywhere in the aquifer, lognormal kriging provides estimates of the variable being sampled and a standard error of the estimate. The average and the maximum standard error within the network can be used to dynamically improve the network sampling efficiency or find a design able to assure a given reliability level. The approach does not require the formulation of any physical model for the aquifer or any actual sampling of hypothetical configurations. A case study is presented using the network monitoring salty water intrusion into the Llobregat delta confined aquifer, Barcelona, Spain. The variable chloride concentration used to trace the intrusion exhibits sudden changes within short distances which make the standard error fairly invariable to changes in sampling pattern and to substantial fluctuations in the number of wells. © 1988.

  13. Saltwater intrusion monitoring in Florida

    USGS Publications Warehouse

    Prinos, Scott T.

    2016-01-01

    Florida's communities are largely dependent on freshwater from groundwater aquifers. Existing saltwater in the aquifers, or seawater that intrudes parts of the aquifers that were fresh, can make the water unusable without additional processing. The quality of Florida's saltwater intrusion monitoring networks varies. In Miami-Dade and Broward Counties, for example, there is a well-designed network with recently constructed short open-interval monitoring wells that bracket the saltwater interface in the Biscayne aquifer. Geochemical analyses of water samples from the network help scientists evaluate pathways of saltwater intrusion and movement of the saltwater interface. Geophysical measurements, collected in these counties, aid the mapping of the saltwater interface and the design of monitoring networks. In comparison, deficiencies in the Collier County monitoring network include the positioning of monitoring wells, reliance on wells with long open intervals that when sampled might provide questionable results, and the inability of existing analyses to differentiate between multiple pathways of saltwater intrusion. A state-wide saltwater intrusion monitoring network is being planned; the planned network could improve saltwater intrusion monitoring by adopting the applicable strategies of the networks of Miami-Dade and Broward Counties, and by addressing deficiencies such as those described for the Collier County network.

  14. LMI design method for networked-based PID control

    NASA Astrophysics Data System (ADS)

    Souza, Fernando de Oliveira; Mozelli, Leonardo Amaral; de Oliveira, Maurício Carvalho; Palhares, Reinaldo Martinez

    2016-10-01

    In this paper, we propose a methodology for the design of networked PID controllers for second-order delayed processes using linear matrix inequalities. The proposed procedure takes into account time-varying delay on the plant, time-varying delays induced by the network, and packet dropouts. The design is carried out entirely using a continuous-time model of the closed-loop system, where time-varying delays are used to represent the sampling and holding occurring in a discrete-time digital PID controller.

  15. Results of a prototype surface water network design for pesticides developed for the San Joaquin River Basin, California

    USGS Publications Warehouse

    Domagalski, Joseph L.

    1997-01-01

    A nested surface-water monitoring network was designed and tested to measure variability in pesticide concentrations in the San Joaquin River and selected tributaries during the irrigation season. The network design and sampling frequency necessary for determining the variability and distribution of pesticide concentrations were tested in a prototype study. The San Joaquin River Basin, California, was sampled from April to August 1992, a period during the irrigation season when there was no rainfall. Orestimba Creek, which drains a part of the western San Joaquin Valley, was sampled three times per week for 6 weeks, followed by once-per-week sampling for 6 weeks, and then three-times-per-week sampling for another 6 weeks. A site on the San Joaquin River near the mouth of the basin, and an irrigation drain of the eastern San Joaquin Valley, were sampled weekly during the entire sampling period. Pesticides were most often detected in samples collected from Orestimba Creek. This suggests that the western valley was the principal source of pesticides to the San Joaquin River during the irrigation season. Irrigation drainage water was the source of pesticides to Orestimba Creek. Pesticide concentrations in Orestimba Creek showed greater temporal variability when sampled three times per week than when sampled once a week, due to variations in field management and irrigation. The implication for the San Joaquin River Basin (an irrigation-dominated agricultural setting) is that frequent sampling of tributary sites is necessary to describe the variability in pesticides transported to the San Joaquin River.

  16. Social Network Type and Subjective Well-Being in a National Sample of Older Americans

    ERIC Educational Resources Information Center

    Litwin, Howard; Shiovitz-Ezra, Sharon

    2011-01-01

    Purpose: The study considers the social networks of older Americans, a population for whom there have been few studies of social network type. It also examines associations between network types and well-being indicators: loneliness, anxiety, and happiness. Design and Methods: A subsample of persons aged 65 years and older from the first wave of…

  17. Frequency and Types of Foods Advertised on Saturday Morning and Weekday Afternoon English- and Spanish-Language American Television Programs

    ERIC Educational Resources Information Center

    Bell, Robert A.; Cassady, Diana; Culp, Jennifer; Alcalay, Rina

    2009-01-01

    Objective: To describe food advertised on networks serving children and youth, and to compare ads on English-language networks with ads on Spanish networks. Design: Analysis of television food advertisements appearing on Saturday morning and weekday afternoons in 2005-2006. A random sample of 1,130 advertisements appearing on 12 networks catering…

  18. On-board processing architectures for satellite B-ISDN services

    NASA Technical Reports Server (NTRS)

    Inukai, Thomas; Shyy, Dong-Jye; Faris, Faris

    1991-01-01

    Onboard baseband processing architectures for future satellite broadband integrated services digital networks (B-ISDN's) are addressed. To assess the feasibility of implementing satellite B-ISDN services, critical design issues, such as B-ISDN traffic characteristics, transmission link design, and a trade-off between onboard circuit and fast packet switching, are analyzed. Examples of the two types of switching mechanisms and potential onboard network control functions are presented. A sample network architecture is also included to illustrate a potential onboard processing system.

  19. Tick-, mosquito-, and rodent-borne parasite sampling designs for the National Ecological Observatory Network

    USGS Publications Warehouse

    Springer, Yuri P.; Hoekman, David; Johnson, Pieter T. J.; Duffy, Paul A.; Hufft, Rebecca A.; Barnett, David T.; Allan, Brian F.; Amman, Brian R.; Barker, Christopher M.; Barrera, Roberto; Beard, Charles B.; Beati, Lorenza; Begon, Mike; Blackmore, Mark S.; Bradshaw, William E.; Brisson, Dustin; Calisher, Charles H.; Childs, James E.; Diuk-Wasser, Maria A.; Douglass, Richard J.; Eisen, Rebecca J.; Foley, Desmond H.; Foley, Janet E.; Gaff, Holly D.; Gardner, Scott L.; Ginsberg, Howard; Glass, Gregory E.; Hamer, Sarah A.; Hayden, Mary H.; Hjelle, Brian; Holzapfel, Christina M.; Juliano, Steven A.; Kramer, Laura D.; Kuenzi, Amy J.; LaDeau, Shannon L.; Livdahl, Todd P.; Mills, James N.; Moore, Chester G.; Morand, Serge; Nasci, Roger S.; Ogden, Nicholas H.; Ostfeld, Richard S.; Parmenter, Robert R.; Piesman, Joseph; Reisen, William K.; Savage, Harry M.; Sonenshine, Daniel E.; Swei, Andrea; Yabsley, Michael J.

    2016-01-01

    Parasites and pathogens are increasingly recognized as significant drivers of ecological and evolutionary change in natural ecosystems. Concurrently, transmission of infectious agents among human, livestock, and wildlife populations represents a growing threat to veterinary and human health. In light of these trends and the scarcity of long-term time series data on infection rates among vectors and reservoirs, the National Ecological Observatory Network (NEON) will collect measurements and samples of a suite of tick-, mosquito-, and rodent-borne parasites through a continental-scale surveillance program. Here, we describe the sampling designs for these efforts, highlighting sampling priorities, field and analytical methods, and the data as well as archived samples to be made available to the research community. Insights generated by this sampling will advance current understanding of and ability to predict changes in infection and disease dynamics in novel, interdisciplinary, and collaborative ways.

  20. Detection of eardrum abnormalities using ensemble deep learning approaches

    NASA Astrophysics Data System (ADS)

    Senaras, Caglar; Moberly, Aaron C.; Teknos, Theodoros; Essig, Garth; Elmaraghy, Charles; Taj-Schaal, Nazhat; Yua, Lianbo; Gurcan, Metin N.

    2018-02-01

    In this study, we proposed an approach to report the condition of the eardrum as "normal" or "abnormal" by ensembling two different deep learning architectures. In the first network (Network 1), we applied transfer learning to the Inception V3 network by using 409 labeled samples. As a second network (Network 2), we designed a convolutional neural network to take advantage of auto-encoders by using an additional 673 unlabeled eardrum samples. The individual classification accuracies of Network 1 and Network 2 were calculated as 84.4% (+/- 12.1%) and 82.6% (+/- 11.3%), respectively. Only 32% of the errors of the two networks were the same, making it possible to combine the two approaches to achieve better classification accuracy. The proposed ensemble method allows us to achieve robust classification because it has high accuracy (84.4%) with the lowest standard deviation (+/- 10.3%).
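
    A minimal sketch of the ensembling idea, assuming the two networks output per-image "abnormal" probabilities that are simply averaged; the study's actual combination rule and the numbers below are placeholders.

    ```python
    # Minimal sketch: average the class probabilities from two independently
    # trained classifiers and threshold the result.
    import numpy as np

    # hypothetical per-image "abnormal" probabilities from the two networks
    p_network1 = np.array([0.92, 0.35, 0.60, 0.10])
    p_network2 = np.array([0.85, 0.55, 0.40, 0.20])

    p_ensemble = (p_network1 + p_network2) / 2.0
    labels = np.where(p_ensemble >= 0.5, "abnormal", "normal")
    print(labels)
    ```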

  1. A Mobile Satellite Experiment (MSAT-X) network definition

    NASA Technical Reports Server (NTRS)

    Wang, Charles C.; Yan, Tsun-Yee

    1990-01-01

    The network architecture development of the Mobile Satellite Experiment (MSAT-X) project for the past few years is described. The results and findings of the network research activities carried out under the MSAT-X project are summarized. A framework is presented upon which the Mobile Satellite Systems (MSSs) operator can design a commercial network. A sample network configuration and its capability are also included under the projected scenario. The Communication Interconnection aspect of the MSAT-X network is discussed. In the MSAT-X network structure two basic protocols are presented: the channel access protocol, and the link connection protocol. The error-control techniques used in the MSAT-X project and the packet structure are also discussed. A description of two testbeds developed for experimentally simulating the channel access protocol and link control protocol, respectively, is presented. A sample network configuration and some future network activities of the MSAT-X project are also presented.

  2. Sample selection via angular distance in the space of the arguments of an artificial neural network

    NASA Astrophysics Data System (ADS)

    Fernández Jaramillo, J. M.; Mayerle, R.

    2018-05-01

    In the construction of an artificial neural network (ANN), a proper split of the available samples into training, testing, and validation subsets plays a major role in the training process and affects the generalization ability of the neural network. The number of samples also has an impact on the time required for the design and training of the ANN. This paper introduces an efficient and simple method for reducing the set of samples used to train a neural network. The method reduces the time required to calculate the network coefficients while preserving diversity and avoiding overtraining of the ANN due to the presence of similar samples. The proposed method is based on the calculation of the angle between two vectors, each one representing one input of the neural network. When the angle formed between samples is smaller than a defined threshold, only one input is accepted for training. The accepted inputs are scattered throughout the sample space. Tidal records are used to demonstrate the proposed method. The results of a cross-validation show that with few inputs the outputs are not accurate and depend on the selection of the first sample, but as the number of inputs increases the accuracy improves and the differences among scenarios with different starting samples are substantially reduced. A comparison with the K-means clustering algorithm shows that, for this application, the proposed method produces a more accurate network with a smaller number of samples.
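
    The selection rule described here is straightforward to sketch: a candidate input is accepted only if the angle it forms with every already-accepted input exceeds a threshold. The threshold and data below are illustrative, not taken from the paper.

    ```python
    # Sketch of angular-distance sample selection: accept a candidate training
    # sample only if its angle to every already-accepted sample exceeds a threshold.
    import numpy as np

    def angle(u, v):
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    def select_by_angle(X, threshold_deg):
        accepted = [X[0]]                      # result depends on the first sample
        for x in X[1:]:
            if all(angle(x, a) >= threshold_deg for a in accepted):
                accepted.append(x)
        return np.array(accepted)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 6))             # stand-in for ANN input vectors
    subset = select_by_angle(X, threshold_deg=45.0)
    print(f"kept {len(subset)} of {len(X)} samples for training")
    ```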

  3. Generalizing the Network Scale-Up Method: A New Estimator for the Size of Hidden Populations*

    PubMed Central

    Feehan, Dennis M.; Salganik, Matthew J.

    2018-01-01

    The network scale-up method enables researchers to estimate the size of hidden populations, such as drug injectors and sex workers, using sampled social network data. The basic scale-up estimator offers advantages over other size estimation techniques, but it depends on problematic modeling assumptions. We propose a new generalized scale-up estimator that can be used in settings with non-random social mixing and imperfect awareness about membership in the hidden population. Further, the new estimator can be used when data are collected via complex sample designs and from incomplete sampling frames. However, the generalized scale-up estimator also requires data from two samples: one from the frame population and one from the hidden population. In some situations these data from the hidden population can be collected by adding a small number of questions to already planned studies. For other situations, we develop interpretable adjustment factors that can be applied to the basic scale-up estimator. We conclude with practical recommendations for the design and analysis of future studies. PMID:29375167
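
    For context, the basic scale-up estimator that the generalized estimator builds on can be written in a few lines; the frame-population size and survey responses below are hypothetical.

    ```python
    # The basic scale-up estimator referenced above (the paper's generalized
    # estimator adds adjustments for non-random mixing and imperfect awareness):
    #   N_hidden ~ N_frame * (sum of reported ties to the hidden population)
    #                      / (sum of respondents' personal network sizes)
    import numpy as np

    N_frame = 1_000_000                                             # frame population size
    ties_to_hidden = np.array([0, 1, 0, 2, 0, 0, 1])                # y_i per respondent
    network_size   = np.array([150, 300, 90, 250, 180, 120, 200])   # d_i per respondent

    N_hidden = N_frame * ties_to_hidden.sum() / network_size.sum()
    print(f"basic scale-up estimate: {N_hidden:,.0f}")
    ```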

  4. Criteria for Choosing the Best Neural Network: Part 1

    DTIC Science & Technology

    1991-07-24

    … determining a parsimonious neural network for use in prediction/generalization based on a given fixed learning sample. Both the classification and … statistical settings, algorithms for selecting the number of hidden layer nodes in a three-layer, feedforward neural network are presented. The selection …

  5. Static-dynamic hybrid communication scheduling and control co-design for networked control systems.

    PubMed

    Wen, Shixi; Guo, Ge

    2017-11-01

    In this paper, static-dynamic hybrid communication scheduling and control co-design is proposed for networked control systems (NCSs) to address the capacity limitation of the wireless communication network. Analytical most regular binary sequences (MRBSs) are used as the communication scheduling function for the NCSs. When communication conflicts arise in the binary sequences (MRBSs), a dynamic scheduling strategy is proposed to reallocate the medium-access status for each plant online. Under such a static-dynamic hybrid scheduling policy, plants in the NCSs are described as non-uniformly sampled control systems, whose controllers have a group of gains and switch according to the sampling interval yielded by the binary sequence. A useful communication scheduling and control co-design framework is proposed for the NCSs to simultaneously decide the controller gains and the parameters used to generate the communication sequences (MRBSs). A numerical example and a realistic example are given to demonstrate the effectiveness of the proposed co-design method.

  6. Evaluation of water-quality characteristics and sampling design for streams in North Dakota, 1970–2008

    USGS Publications Warehouse

    Galloway, Joel M.; Vecchia, Aldo V.; Vining, Kevin C.; Densmore, Brenda K.; Lundgren, Robert F.

    2012-01-01

    In response to the need to examine the large amount of historic water-quality data comprehensively across North Dakota and evaluate the efficiency of the State-wide sampling programs, a study was done by the U.S. Geological Survey in cooperation with the North Dakota State Water Commission and the North Dakota Department of Health to describe the water-quality data collected for the various programs and determine an efficient State-wide sampling design for monitoring future water-quality conditions. Although data collected for the North Dakota State Water Commission High-Low Sampling Program, the North Dakota Department of Health Ambient Water-Quality Network, and other projects and programs provide valuable information on the quality of water in streams in North Dakota, the objectives vary among the programs, some of the programs overlap spatially and temporally, and the various sampling designs may not be the most efficient or relevant to the objectives of the individual programs as they have changed through time. One objective of a State-wide sampling program was to evaluate ways to describe the spatial variability of water-quality conditions across the State in the most efficient manner. Weighted least-squares regression analysis was used to relate the average absolute difference between paired downstream and upstream concentrations, expressed as a percent of the average downstream concentration, to the average absolute difference in daily flow between the downstream and upstream pairs, expressed as a percent of the average downstream flow. The analysis showed that a reasonable spatial network would consist of including the most downstream sites in large basins first, followed by the next upstream site(s) that roughly bisect the downstream flows at the first sites, followed by the next upstream site(s) that roughly bisect flows for the second sites. Sampling sites to be included in a potential State-wide network were prioritized into 3 design levels: level 1 (highest priority), level 2 (second priority), and level 3 (third priority). Given the spatial distribution and priority designation (levels 1–3) of sites in the potential spatial network, the next consideration was to determine the appropriate temporal sampling frequency to use for monitoring future water-quality conditions. The time-series model used to detect concentration trends for this report also was used to evaluate sampling designs to monitor future water-quality trends. Sampling designs were evaluated with regard to their sensitivity to detect seasonal trends that occurred during three 4-month seasons—March through June, July through October, and November through February. For the 34 level-1 sites, samples would be collected for major ions, trace metals, nutrients, bacteria, and sediment eight times per year, with samples in January, April (2 samples), May, June, July, August, and October. For the 21 level-2 sites, samples would be collected for major ions, trace metals, and nutrients six times per year (January, April, May, June, August, and October), and for the 26 level-3 sites, samples would be collected for these constituents four times per year (April, June, August, and October).

  7. Evaluating data worth for ground-water management under uncertainty

    USGS Publications Warehouse

    Wagner, B.J.

    1999-01-01

    A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models - a chance-constrained ground-water management model and an integer-programming sampling network design model - to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) The optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information - i.e., the projected reduction in management costs - with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with greatest net economic benefit for ground-water management.

  8. Identifying causal networks linking cancer processes and anti-tumor immunity using Bayesian network inference and metagene constructs.

    PubMed

    Kaiser, Jacob L; Bland, Cassidy L; Klinke, David J

    2016-03-01

    Cancer arises from a deregulation of both intracellular and intercellular networks that maintain system homeostasis. Identifying the architecture of these networks and how they are changed in cancer is a pre-requisite for designing drugs to restore homeostasis. Since intercellular networks only appear in intact systems, it is difficult to identify how these networks become altered in human cancer using many of the common experimental models. To overcome this, we used the diversity in normal and malignant human tissue samples from the Cancer Genome Atlas (TCGA) database of human breast cancer to identify the topology associated with intercellular networks in vivo. To improve the underlying biological signals, we constructed Bayesian networks using metagene constructs, which represented groups of genes that are concomitantly associated with different immune and cancer states. We also used bootstrap resampling to establish the significance associated with the inferred networks. In short, we found opposing relationships between cell proliferation and epithelial-to-mesenchymal transformation (EMT) with regards to macrophage polarization. These results were consistent across multiple carcinomas in that proliferation was associated with a type 1 cell-mediated anti-tumor immune response and EMT was associated with a pro-tumor anti-inflammatory response. To address the identifiability of these networks from other datasets, we could identify the relationship between EMT and macrophage polarization with fewer samples when the Bayesian network was generated from malignant samples alone. However, the relationship between proliferation and macrophage polarization was identified with fewer samples when the samples were taken from a combination of the normal and malignant samples. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:470-479, 2016. © 2016 American Institute of Chemical Engineers.

  9. An evaluation of potential sampling locations in a reservoir with emphasis on conserved spatial correlation structure.

    PubMed

    Yenilmez, Firdes; Düzgün, Sebnem; Aksoy, Aysegül

    2015-01-01

    In this study, kernel density estimation (KDE) was coupled with ordinary two-dimensional kriging (OK) to reduce the number of sampling locations in measurement and kriging of dissolved oxygen (DO) concentrations in Porsuk Dam Reservoir (PDR). Conservation of the spatial correlation structure in the DO distribution was a target. KDE was used as a tool to aid in identification of the sampling locations that would be removed from the sampling network in order to decrease the total number of samples. Accordingly, several networks were generated in which sampling locations were reduced from 65 to 10 in increments of 4 or 5 points at a time based on kernel density maps. DO variograms were constructed, and DO values in PDR were kriged. Performance of the networks in DO estimation was evaluated through various error metrics, standard error maps (SEM), and whether the spatial correlation structure was conserved or not. Results indicated that a smaller number of sampling points resulted in a loss of information about the spatial correlation structure of DO. The minimum number of representative sampling points for PDR was 35. The efficacy of the sampling location selection method was tested against networks generated by experts. It was shown that the evaluation approach proposed in this study provided a better sampling network design in which the spatial correlation structure of DO was sustained for kriging.
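
    A rough sketch of the underlying idea, ranking sampling locations by a kernel density estimate and dropping the most redundant ones, is given below using scipy; the coordinates, bandwidth, and removal rule are assumptions for illustration and do not reproduce the study's coupled KDE-kriging procedure.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Hypothetical x, y coordinates of 65 sampling locations in a reservoir
coords = rng.uniform(0, 5000, size=(65, 2))

# Kernel density estimate over the sampling locations (gaussian_kde expects
# an array of shape (n_dims, n_points))
kde = gaussian_kde(coords.T)
density = kde(coords.T)

# Drop the 5 locations sitting in the densest clusters, i.e. those most
# likely to be redundant for capturing the spatial correlation structure
n_remove = 5
keep = np.argsort(density)[:-n_remove]
reduced_network = coords[keep]
print(reduced_network.shape)  # (60, 2)
```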

  10. Statistical assessment on a combined analysis of GRYN-ROMN-UCBN upland vegetation vital signs

    USGS Publications Warehouse

    Irvine, Kathryn M.; Rodhouse, Thomas J.

    2014-01-01

    As of 2013, Rocky Mountain and Upper Columbia Basin Inventory and Monitoring Networks have multiple years of vegetation data, Greater Yellowstone Network has three years of vegetation data, and monitoring is ongoing in all three networks. Our primary objective is to assess whether a combined analysis of these data aimed at exploring correlations with climate and weather data is feasible. We summarize the core survey design elements across protocols and point out the major statistical challenges for a combined analysis at present. The dissimilarity in response designs between ROMN and UCBN-GRYN network protocols presents a statistical challenge that has not been resolved yet. However, the UCBN and GRYN data are compatible as they implement a similar response design; therefore, a combined analysis is feasible and will be pursued in the future. When data collected by different networks are combined, the survey design describing the merged dataset is (likely) a complex survey design. A complex survey design is the result of combining datasets from different sampling designs. A complex survey design is characterized by unequal probability sampling, varying stratification, and clustering (see Lohr 2010, Chapter 7, for a general overview). Statistical analysis of complex survey data requires modifications to standard methods, one of which is to include survey design weights within a statistical model. We focus on this issue for a combined analysis of upland vegetation from these networks, leaving other topics for future research. We conduct a simulation study on the possible effects of equal versus unequal probability selection of points on parameter estimates of temporal trend using available packages within the R statistical computing environment. We find that, as written, using lmer or lm for trend detection in a continuous response and clm and clmm for visually estimated cover classes with “raw” GRTS design weights specified for the weight argument leads to substantially different results and/or computational instability. However, when only fixed effects are of interest, the survey package (svyglm and svyolr) may be suitable for a model-assisted analysis for trend. We provide possible directions for future research into combined analysis for ordinal and continuous vital sign indicators.

  11. Optimal design of monitoring networks for multiple groundwater quality parameters using a Kalman filter: application to the Irapuato-Valle aquifer.

    PubMed

    Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J

    2016-01-01

    A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimate error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions and a higher weight to priority zones. The objective for the monitoring network in the specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and for the selection of the optimal monitoring sites, it was sought to obtain a low-uncertainty estimate of these parameters for the entire aquifer and with more certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter and a heuristic optimization method. Results show that when monitoring the 69 locations with higher priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90% of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help to reduce monitoring costs by avoiding redundancy in data acquisition.
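
    The core selection step, repeatedly choosing the well whose measurement most reduces the estimation error variance, can be illustrated with a simple sequential covariance update; the code below is a generic greedy sketch under an assumed exponential prior covariance and measurement noise, not the paper's Kalman filter and heuristic optimization implementation.

```python
import numpy as np

def greedy_network(prior_cov, noise_var, n_select):
    """Sequentially pick the measurement location whose (Kalman-type) update
    most reduces the total estimation error variance (trace of the covariance)."""
    P = prior_cov.copy()
    chosen = []
    for _ in range(n_select):
        # trace reduction from measuring location i: ||P[:, i]||^2 / (P_ii + r)
        gains = np.array([
            np.dot(P[:, i], P[:, i]) / (P[i, i] + noise_var)
            if i not in chosen else -np.inf
            for i in range(P.shape[0])
        ])
        best = int(np.argmax(gains))
        k = P[:, best] / (P[best, best] + noise_var)   # Kalman gain column
        P = P - np.outer(k, P[best, :])                # covariance update
        chosen.append(best)
    return chosen, P

# Toy prior covariance built from an exponential covariance model over
# 140 hypothetical well locations (mirroring the pilot-well count above)
rng = np.random.default_rng(2)
xy = rng.uniform(0, 10_000, size=(140, 2))
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
prior = np.exp(-dist / 3000.0)
wells, posterior = greedy_network(prior, noise_var=0.1, n_select=69)
print(len(wells), np.trace(posterior))
```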

  12. A Bayesian maximum entropy-based methodology for optimal spatiotemporal design of groundwater monitoring networks.

    PubMed

    Hosseini, Marjan; Kerachian, Reza

    2017-09-01

    This paper presents a new methodology for analyzing the spatiotemporal variability of water table levels and redesigning a groundwater level monitoring network (GLMN) using the Bayesian Maximum Entropy (BME) technique and a multi-criteria decision-making approach based on ordered weighted averaging (OWA). The spatial sampling is determined using a hexagonal gridding pattern and a new method, which is proposed to assign a removal priority number to each pre-existing station. To design the temporal sampling, a new approach is also applied to account for uncertainty caused by a lack of information. In this approach, different time lag values are tested against another source of information, namely the simulation results of a numerical groundwater flow model. Furthermore, to incorporate the existing uncertainties in available monitoring data, the flexibility of the BME interpolation technique is taken into account in applying soft data and improving the accuracy of the calculations. To examine the methodology, it is applied to the Dehgolan plain in northwestern Iran. Based on the results, a configuration of 33 monitoring stations for a regular hexagonal grid of side length 3600 m is proposed, in which the time lag between samples is equal to 5 weeks. Since the variance estimation errors of the BME method are almost identical for the redesigned and existing networks, the redesigned monitoring network is more cost-effective and efficient than the existing monitoring network with 52 stations and a monthly sampling frequency.
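
    As a small illustration of the spatial component of such a design, the snippet below generates candidate station locations on a regular hexagonal (triangular-lattice) grid of a given side length over a rectangular study area; the extent and side length are placeholders, and the BME interpolation and temporal-lag analysis are not reproduced.

```python
import numpy as np

def hex_grid(xmin, xmax, ymin, ymax, side):
    """Generate points of a triangular (hexagonal-packing) lattice with the
    given side length inside a rectangular study area."""
    dy = side * np.sqrt(3) / 2.0          # vertical spacing between rows
    points = []
    row = 0
    y = ymin
    while y <= ymax:
        offset = (side / 2.0) if (row % 2) else 0.0
        x = xmin + offset
        while x <= xmax:
            points.append((x, y))
            x += side
        y += dy
        row += 1
    return np.array(points)

# Example: candidate monitoring stations on a 3600 m hexagonal grid
stations = hex_grid(0, 30_000, 0, 20_000, side=3600)
print(stations.shape)
```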

  13. Using Anticipative Malware Analysis to Support Decision Making

    DTIC Science & Technology

    2010-11-01

    specifically, we have designed and implemented a network sandbox, i.e. a sandbox that allows us to study malware behaviour from the network perspective. We...plan to use this sandbox to generate malware-sample profiles that can be used by decision making algorithms to help network administrators and security...also allows the user to specify the network topology to be used. 1 INTRODUCTION Once the presence of a malicious software (malware) threat has been

  14. Statistical analysis of stream water-quality data and sampling network design near Oklahoma City, central Oklahoma, 1977-1999

    USGS Publications Warehouse

    Brigham, Mark E.; Payne, Gregory A.; Andrews, William J.; Abbott, Marvin M.

    2002-01-01

    The sampling network was evaluated with respect to areal coverage, sampling frequency, and analytical schedules. Areal coverage could be expanded to include one additional watershed that is not part of the current network. A new sampling site on the North Canadian River might be useful because of expanding urbanization west of the city, but sampling at some other sites could be discontinued or reduced based on comparisons of data between the sites. Additional real-time or periodic monitoring for dissolved oxygen may be useful to prevent anoxic conditions in pools behind new low-water dams. The sampling schedules, both monthly and quarterly, are adequate to evaluate trends, but additional sampling during flow extremes may be needed to quantify loads and evaluate water quality during flow extremes. Emerging water-quality issues may require sampling for volatile organic compounds, sulfide, total phosphorus, chlorophyll-a, Escherichia coli, and enterococci, as well as use of more sensitive laboratory analytical methods for determination of cadmium, mercury, lead, and silver.

  15. Representativeness-based sampling network design for the State of Alaska

    Treesearch

    Forrest M. Hoffman; Jitendra Kumar; Richard T. Mills; William W. Hargrove

    2013-01-01

    Resource and logistical constraints limit the frequency and extent of environmental observations, particularly in the Arctic, necessitating the development of a systematic sampling strategy to maximize coverage and objectively represent environmental variability at desired scales. A quantitative methodology for stratifying sampling domains, informing site selection,...

  16. A Structure-Adaptive Hybrid RBF-BP Classifier with an Optimized Learning Strategy

    PubMed Central

    Wen, Hui; Xie, Weixin; Pei, Jihong

    2016-01-01

    This paper presents a structure-adaptive hybrid RBF-BP (SAHRBF-BP) classifier with an optimized learning strategy. SAHRBF-BP is composed of a structure-adaptive RBF network and a BP network in cascade, where the number of RBF hidden nodes is adjusted adaptively according to the distribution of the sample space; the adaptive RBF network is used for nonlinear kernel mapping and the BP network is used for nonlinear classification. The optimized learning strategy is as follows: first, a potential function is introduced into the training sample space to adaptively determine the number of initial RBF hidden nodes and their parameters, and a repulsive force between heterogeneous samples is designed to further optimize the parameters of each generated RBF hidden node; the optimized structure-adaptive RBF network is then used to adaptively perform the nonlinear mapping of the sample space. Next, according to the number of adaptively generated RBF hidden nodes, the number of subsequent BP input nodes is determined and the overall SAHRBF-BP classifier is built up; finally, different training sample sets are used to train the BP network parameters in SAHRBF-BP. Compared with other algorithms applied to different data sets, experiments show the superiority of SAHRBF-BP. Especially on most low-dimensional and large data sets, the classification performance of SAHRBF-BP outperforms that of other algorithms for training SLFNs. PMID:27792737

  17. Performance evaluation of an importance sampling technique in a Jackson network

    NASA Astrophysics Data System (ADS)

    Mahdipour, Ebrahim; Rahmani, Amir Masoud; Setayeshi, Saeed

    2014-03-01

    Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network settings. Estimating probabilities associated with rare events has been a topic of great importance in queueing theory, and in applied probability at large. In this article, we analyse the performance of an importance sampling estimator for a rare event probability in a Jackson network. This article applies strict deadlines to a two-node Jackson network with feedback whose arrival and service rates are modulated by an exogenous finite-state Markov process. We have estimated the probability of network blocking for various sets of parameters, and also the probability of missing the deadline of customers for different loads and deadlines. We have finally shown that the probability of total population overflow may be affected by various deadline values, service rates and arrival rates.
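
    As a generic illustration of why importance sampling helps with rare events (not the Jackson-network estimator analysed in the article), the snippet below estimates a small Gaussian tail probability by sampling from a shifted distribution and reweighting with the likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
threshold = 4.0          # P(Z > 4) for Z ~ N(0, 1) is about 3.2e-5

# Naive Monte Carlo: almost no samples land in the rare region
naive = (rng.standard_normal(n) > threshold).mean()

# Importance sampling: draw from N(threshold, 1) and reweight each sample by
# the likelihood ratio phi(x) / phi(x - threshold) = exp(-x*mu + mu^2/2)
x = rng.standard_normal(n) + threshold
weights = np.exp(-x * threshold + threshold**2 / 2.0)
is_estimate = np.mean((x > threshold) * weights)

print(f"naive MC: {naive:.2e}   importance sampling: {is_estimate:.2e}")
```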

  18. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons

    PubMed Central

    Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang

    2016-01-01

    Networks of neurons in the brain apply—unlike processors in our current generation of computer hardware—an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, networks of spiking neurons carry out for the Traveling Salesman Problem a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785

  19. Sampling design for groundwater solute transport: Tests of methods and analysis of Cape Cod tracer test data

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.; Garabedian, Stephen P.

    1991-01-01

    Tests of a one-dimensional sampling design methodology on measurements of bromide concentration collected during the natural gradient tracer test conducted by the U.S. Geological Survey on Cape Cod, Massachusetts, demonstrate its efficacy for field studies of solute transport in groundwater and the utility of one-dimensional analysis. The methodology was applied to design of sparse two-dimensional networks of fully screened wells typical of those often used in engineering practice. In one-dimensional analysis, designs consist of the downstream distances to rows of wells oriented perpendicular to the groundwater flow direction and the timing of sampling to be carried out on each row. The power of a sampling design is measured by its effectiveness in simultaneously meeting objectives of model discrimination, parameter estimation, and cost minimization. One-dimensional models of solute transport, differing in processes affecting the solute and assumptions about the structure of the flow field, were considered for description of tracer cloud migration. When fitting each model using nonlinear regression, additive and multiplicative error forms were allowed for the residuals which consist of both random and model errors. The one-dimensional single-layer model of a nonreactive solute with multiplicative error was judged to be the best of those tested. Results show the efficacy of the methodology in designing sparse but powerful sampling networks. Designs that sample five rows of wells at five or fewer times in any given row performed as well for model discrimination as the full set of samples taken up to eight times in a given row from as many as 89 rows. Also, designs for parameter estimation judged to be good by the methodology were as effective in reducing the variance of parameter estimates as arbitrary designs with many more samples. Results further showed that estimates of velocity and longitudinal dispersivity in one-dimensional models based on data from only five rows of fully screened wells each sampled five or fewer times were practically equivalent to values determined from moments analysis of the complete three-dimensional set of 29,285 samples taken during 16 sampling times.

  20. Designing and implementing sample and data collection for an international genetics study: the Type 1 Diabetes Genetics Consortium (T1DGC).

    PubMed

    Hilner, Joan E; Perdue, Letitia H; Sides, Elizabeth G; Pierce, June J; Wägner, Ana M; Aldrich, Alan; Loth, Amanda; Albret, Lotte; Wagenknecht, Lynne E; Nierras, Concepcion; Akolkar, Beena

    2010-01-01

    The Type 1 Diabetes Genetics Consortium (T1DGC) is an international project whose primary aims are to: (a) discover genes that modify type 1 diabetes risk; and (b) expand upon the existing genetic resources for type 1 diabetes research. The initial goal was to collect 2500 affected sibling pair (ASP) families worldwide. T1DGC was organized into four regional networks (Asia-Pacific, Europe, North America, and the United Kingdom) and a Coordinating Center. A Steering Committee, with representatives from each network, the Coordinating Center, and the funding organizations, was responsible for T1DGC operations. The Coordinating Center, with regional network representatives, developed study documents and data systems. Each network established laboratories for: DNA extraction and cell line production; human leukocyte antigen genotyping; and autoantibody measurement. Samples were tracked from the point of collection, processed at network laboratories and stored for deposit at National Institute for Diabetes and Digestive and Kidney Diseases (NIDDK) Central Repositories. Phenotypic data were collected and entered into the study database maintained by the Coordinating Center. T1DGC achieved its original ASP recruitment goal. In response to research design changes, the T1DGC infrastructure also recruited trios, cases, and controls. Results of genetic analyses have identified many novel regions that affect susceptibility to type 1 diabetes. T1DGC created a resource of data and samples that is accessible to the research community. Participation in T1DGC was declined by some countries due to study requirements for the processing of samples at network laboratories and/or final deposition of samples in NIDDK Central Repositories. Re-contact of participants was not included in informed consent templates, preventing collection of additional samples for functional studies. T1DGC implemented a distributed, regional network structure to reach ASP recruitment targets. The infrastructure proved robust and flexible enough to accommodate additional recruitment. T1DGC has established significant resources that provide a basis for future discovery in the study of type 1 diabetes genetics.

  1. Networking Behaviour, Graduate Employability: A Social Capital Perspective

    ERIC Educational Resources Information Center

    Batistic, Saša; Tymon, Alex

    2017-01-01

    Purpose: Drawing on the overarching framework of social capital theory, the purpose of this paper is to develop and empirically examine networking behaviour and employability within the higher education context. Design/methodology/approach: In a sample of 376 full-time business students the authors measured perceived employability, networking…

  2. Method for collecting thermocouple data via secured shell over a wireless local area network in real time

    NASA Astrophysics Data System (ADS)

    Arnold, F.; DeMallie, I.; Florence, L.; Kashinski, D. O.

    2015-03-01

    This manuscript addresses the design, hardware details, construction, and programming of an apparatus allowing an experimenter to monitor and record high-temperature thermocouple measurements of dynamic systems in real time. The apparatus uses wireless network technology to bridge the gap between a dynamic (moving) sample frame and the static laboratory frame. Our design is a custom solution applied to samples that rotate through large angular displacements where hard-wired and typical slip-ring solutions are not practical because of noise considerations. The apparatus consists of a Raspberry PI mini-Linux computer, an Arduino micro-controller, an Ocean Controls thermocouple multiplexer shield, and k-type thermocouples.

  3. Method for collecting thermocouple data via secured shell over a wireless local area network in real time.

    PubMed

    Arnold, F; DeMallie, I; Florence, L; Kashinski, D O

    2015-03-01

    This manuscript addresses the design, hardware details, construction, and programming of an apparatus allowing an experimenter to monitor and record high-temperature thermocouple measurements of dynamic systems in real time. The apparatus uses wireless network technology to bridge the gap between a dynamic (moving) sample frame and the static laboratory frame. Our design is a custom solution applied to samples that rotate through large angular displacements where hard-wired and typical slip-ring solutions are not practical because of noise considerations. The apparatus consists of a Raspberry PI mini-Linux computer, an Arduino micro-controller, an Ocean Controls thermocouple multiplexer shield, and k-type thermocouples.

  4. Application of Student Book Based On Integrated Learning Model Of Networked Type With Heart Electrical Activity Theme For Junior High School

    NASA Astrophysics Data System (ADS)

    Gusnedi, G.; Ratnawulan, R.; Triana, L.

    2018-04-01

    The purpose of this study is to determine the effect of using integrated science (IPA) student books based on the networked learning model on knowledge competence, as reflected in improved learning outcomes. The experimental design used is a one-group pretest-posttest design to compare results before and after treatment. The sample consists of one class, divided into two categories of initial ability in order to observe the improvement in knowledge competence. The sample was taken from grade VIII students of SMPN 2 Sawahlunto, Indonesia. The results of this study indicate that most students increased their knowledge competence.

  5. Individual-Based Ant-Plant Networks: Diurnal-Nocturnal Structure and Species-Area Relationship

    PubMed Central

    Dáttilo, Wesley; Fagundes, Roberth; Gurka, Carlos A. Q.; Silva, Mara S. A.; Vieira, Marisa C. L.; Izzo, Thiago J.; Díaz-Castelazo, Cecília; Del-Claro, Kleber; Rico-Gray, Victor

    2014-01-01

    Despite the importance and increasing knowledge of ecological networks, sampling effort and intrapopulation variation have been widely overlooked. Using continuous daily sampling of ants visiting three plant species in the Brazilian Neotropical savanna, we evaluated for the first time the topological structure over 24 h and species-area relationships (based on the number of extrafloral nectaries available) in individual-based ant-plant networks. We observed that diurnal and nocturnal ant-plant networks exhibited the same pattern of interactions: a nested and non-modular pattern and an average level of network specialization. Despite the high similarity in the ants’ composition between the two collection periods, ant species found in the central core of highly interacting species totally changed between diurnal and nocturnal sampling for all plant species. In other words, this “night-turnover” suggests that the ecological dynamics of these ant-plant interactions can be temporally partitioned (day and night) at a small spatial scale. Thus, it is possible that in some cases processes shaping mutualistic networks formed by protective ants and plants may be underestimated by diurnal sampling alone. Moreover, we did not observe any effect of the number of extrafloral nectaries on ant richness and their foraging on such plants in any of the studied ant-plant networks. We hypothesize that competitively superior ants could monopolize individual plants and allow the coexistence of only a few other ant species; however, other alternative hypotheses are also discussed. Thus, sampling period and species-area relationship produce basic information that increases our confidence in how individual-based ant-plant networks are structured, and the need to consider nocturnal records in ant-plant network sampling design so as to decrease inappropriate inferences. PMID:24918750

  6. Learning Bayesian Networks from Correlated Data

    NASA Astrophysics Data System (ADS)

    Bae, Harold; Monti, Stefano; Montano, Monty; Steinberg, Martin H.; Perls, Thomas T.; Sebastiani, Paola

    2016-05-01

    Bayesian networks are probabilistic models that represent complex distributions in a modular way and have become very popular in many fields. There are many methods to build Bayesian networks from a random sample of independent and identically distributed observations. However, many observational studies are designed using some form of clustered sampling that introduces correlations between observations within the same cluster and ignoring this correlation typically inflates the rate of false positive associations. We describe a novel parameterization of Bayesian networks that uses random effects to model the correlation within sample units and can be used for structure and parameter learning from correlated data without inflating the Type I error rate. We compare different learning metrics using simulations and illustrate the method in two real examples: an analysis of genetic and non-genetic factors associated with human longevity from a family-based study, and an example of risk factors for complications of sickle cell anemia from a longitudinal study with repeated measures.

  7. Classification of urine sediment based on convolution neural network

    NASA Astrophysics Data System (ADS)

    Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian

    2018-04-01

    By designing a new convolutional neural network framework, this paper removes the constraints of the original framework, which requires large training sets and samples of the same size. The input images are shifted and cropped to generate sub-images of the same size. Dropout is then applied to the generated sub-images, increasing the diversity of samples and preventing overfitting. Proper subsets of the sub-image set are randomly selected so that each subset has the same number of elements but no two subsets are identical. These subsets are used as inputs to the convolutional neural network. Through the convolution layers, pooling, the fully connected layer, and the output layer, the classification loss rates of the test and training sets are obtained. In a classification experiment with red blood cells, white blood cells, and calcium oxalate crystals, the classification accuracy reached 97% or more.
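
    The move-and-crop step that generates equally sized sub-images, followed by random dropping of sub-images to diversify the training subsets, can be sketched as below; the image size, crop size, stride, and drop rate are illustrative assumptions, and the CNN itself is not reproduced.

```python
import numpy as np

def crop_subimages(image, crop_size, stride):
    """Slide a fixed-size window over the image and collect same-sized
    sub-images, so inputs of different original sizes can feed one CNN."""
    h, w = image.shape[:2]
    crops = []
    for top in range(0, h - crop_size + 1, stride):
        for left in range(0, w - crop_size + 1, stride):
            crops.append(image[top:top + crop_size, left:left + crop_size])
    return np.stack(crops)

rng = np.random.default_rng(4)
cell_image = rng.random((120, 150))              # hypothetical sediment image
subs = crop_subimages(cell_image, crop_size=64, stride=16)

# Randomly drop a fraction of the sub-images ("dropout" on the sample set)
# to diversify the training subsets fed to the network
keep = rng.random(len(subs)) > 0.3
training_subset = subs[keep]
print(subs.shape, training_subset.shape)
```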

  8. Multiple contexts and adolescent body mass index: Schools, neighborhoods, and social networks.

    PubMed

    Evans, Clare R; Onnela, Jukka-Pekka; Williams, David R; Subramanian, S V

    2016-08-01

    Adolescent health and behaviors are influenced by multiple contexts, including schools, neighborhoods, and social networks, yet these contexts are rarely considered simultaneously. In this study we combine social network community detection analysis and cross-classified multilevel modeling in order to compare the contributions of each of these three contexts to the total variation in adolescent body mass index (BMI). Wave 1 of the National Longitudinal Study of Adolescent to Adult Health is used, and for robustness we conduct the analysis in both the core sample (122 schools; N = 14,144) and a sub-set of the sample (16 schools; N = 3335), known as the saturated sample due to its completeness of neighborhood data. After adjusting for relevant covariates, we find that the school-level and neighborhood-level contributions to the variance are modest compared with the network community-level (σ²school = 0.069, σ²neighborhood = 0.144, σ²network = 0.463). These results are robust to two alternative algorithms for specifying network communities, and to analysis in the saturated sample. While this study does not determine whether network effects are attributable to social influence or selection, it does highlight the salience of adolescent social networks and indicates that they may be a promising context to address in the design of health promotion programs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Adaptive sampling in research on risk-related behaviors.

    PubMed

    Thompson, Steven K; Collins, Linda M

    2002-11-01

    This article introduces adaptive sampling designs to substance use researchers. Adaptive sampling is particularly useful when the population of interest is rare, unevenly distributed, hidden, or hard to reach. Examples of such populations are injection drug users, individuals at high risk for HIV/AIDS, and young adolescents who are nicotine dependent. In conventional sampling, the sampling design is based entirely on a priori information, and is fixed before the study begins. By contrast, in adaptive sampling, the sampling design adapts based on observations made during the survey; for example, drug users may be asked to refer other drug users to the researcher. In the present article several adaptive sampling designs are discussed. Link-tracing designs such as snowball sampling, random walk methods, and network sampling are described, along with adaptive allocation and adaptive cluster sampling. It is stressed that special estimation procedures taking the sampling design into account are needed when adaptive sampling has been used. These procedures yield estimates that are considerably better than conventional estimates. For rare and clustered populations adaptive designs can give substantial gains in efficiency over conventional designs, and for hidden populations link-tracing and other adaptive procedures may provide the only practical way to obtain a sample large enough for the study objectives.
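
    A minimal sketch of one link-tracing design mentioned above (snowball sampling on a contact network) is shown below using networkx; the graph, number of waves, and referral limit are assumptions for illustration, and the special design-based estimation procedures the article stresses are not included.

```python
import random
import networkx as nx

def snowball_sample(G, seeds, waves, referrals_per_node=3, seed=0):
    """Link-tracing (snowball) sample: start from seed respondents and, for a
    fixed number of waves, ask each newly recruited node to refer up to k
    of its not-yet-sampled network neighbours."""
    rng = random.Random(seed)
    sampled = set(seeds)
    frontier = list(seeds)
    for _ in range(waves):
        next_frontier = []
        for node in frontier:
            neighbours = [v for v in G.neighbors(node) if v not in sampled]
            rng.shuffle(neighbours)
            for v in neighbours[:referrals_per_node]:
                sampled.add(v)
                next_frontier.append(v)
        frontier = next_frontier
    return sampled

# Toy hidden-population contact network
G = nx.erdos_renyi_graph(n=2000, p=0.005, seed=1)
sample = snowball_sample(G, seeds=[0, 1, 2], waves=4)
print(len(sample), "respondents recruited")
```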

  10. Random sampling of elementary flux modes in large-scale metabolic networks.

    PubMed

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.

  11. A reverse engineering approach to optimize experiments for the construction of biological regulatory networks.

    PubMed

    Zhang, Xiaomeng; Shao, Bin; Wu, Yangle; Qi, Ouyang

    2013-01-01

    One of the major objectives in systems biology is to understand the relation between the topological structures and the dynamics of biological regulatory networks. In this context, various mathematical tools have been developed to deduce structures of regulatory networks from microarray expression data. In general, from a single data set, one cannot deduce the whole network structure; additional expression data are usually needed. Thus how to design a microarray expression experiment in order to get the most information is a practical problem in systems biology. Here we propose three methods, namely, the maximum distance method, the trajectory entropy method, and the sampling method, to derive the optimal initial conditions for experiments. The performance of these methods is tested and evaluated in three well-known regulatory networks (budding yeast cell cycle, fission yeast cell cycle, and E. coli SOS network). Based on the evaluation, we propose an efficient strategy for the design of microarray expression experiments.

  12. NEON, Establishing a Standardized Network for Groundwater Observations

    NASA Astrophysics Data System (ADS)

    Fitzgerald, M.; Schroeter, N.; Goodman, K. J.; Roehm, C. L.

    2013-12-01

    The National Ecological Observatory Network (NEON) is establishing a standardized set of data collection systems comprised of in-situ sensors and observational sampling to obtain data fundamental to the analysis of environmental change at a continental scale. NEON will be collecting aquatic, terrestrial, and atmospheric data using Observatory-wide standardized designs and methods via a systems engineering approach. This approach ensures a wealth of high quality data, data algorithms, and models that will be freely accessible to all communities such as academic researchers, policy makers, and the general public. The project is established to provide 30 years of data which will enable prediction and forecasting of drivers and responses of ecological change at scales ranging from localized responses through regional gradients and up to the continental scale. The Observatory is a distributed system of sites spread across the United States, including Alaska, Hawaii, and Puerto Rico, which is subdivided into 20 statistically unique domains, based on a set of 18 ecologically important parameters. Each domain contains at least one core aquatic and terrestrial site which are located in unmanaged lands, and up to 2 additional sites selected to study domain specific questions such as nitrogen deposition gradients and responses of land use change activities on the ecosystem. Here, we present the development of NEON's groundwater observation well network design and the timing strategy for sampling groundwater chemistry. Shallow well networks, up to 100 feet in depth, will be installed at NEON aquatic sites and will allow for observation of localized ecohydrologic site conditions by providing basic spatio-temporal near-real time data on groundwater parameters (level, temperature, conductivity) collected from in situ high-resolution instrumentation positioned in each well; and biannual sampling of geochemical and nutrient (N and P) concentrations in a subset of wells for each site. These data will be used to calculate several higher level data products such as hydrologic gradients which drive nutrient fluxes and their change over time. When coupled with other NEON data products, these data will allow for examining surface water/groundwater interactions as well as additional terrestrial and aquatic linkages, such as riparian vegetation response to changing ecohydrologic conditions (i.e. groundwater withdrawal for irrigation, land use change) and natural sources (i.e. drought and changing precipitation patterns). This work will present the well network arrays designed for the different types of aquatic sites (1st/2nd order streams, larger rivers, and lakes) including variations on the well network designs for sites where physical constraints hinder a consistent design due to topographic (steep topography, wetlands) or physical constraints (such as permafrost). A generalized sampling strategy for each type of environment will also be detailed indicating the time of year, largely governed by hydrologic conditions, when sampling should take place to provide consistent groundwater chemistry data to allow for analyzing geochemical trends spatially across the network and through time.

  13. Is First-Order Vector Autoregressive Model Optimal for fMRI Data?

    PubMed

    Ting, Chee-Ming; Seghouane, Abd-Krim; Khalid, Muhammad Usman; Salleh, Sh-Hussain

    2015-09-01

    We consider the problem of selecting the optimal orders of vector autoregressive (VAR) models for fMRI data. Many previous studies used a model order of one and ignored that it may vary considerably across data sets depending on different data dimensions, subjects, tasks, and experimental designs. In addition, the classical information criteria (IC) used (e.g., the Akaike IC (AIC)) are biased and inappropriate for high-dimensional fMRI data, which typically have a small sample size. We examine the mixed results on the optimal VAR orders for fMRI, especially the validity of the order-one hypothesis, by a comprehensive evaluation using different model selection criteria over three typical data types--a resting state, an event-related design, and a block design data set--with varying time series dimensions obtained from distinct functional brain networks. We use a more balanced criterion, Kullback's IC (KIC), based on Kullback's symmetric divergence, which combines two directed divergences. We also consider the bias-corrected versions (AICc and KICc) to improve VAR model selection in small samples. Simulation results show better small-sample selection performance of the proposed criteria over the classical ones. Both bias-corrected ICs provide more accurate and consistent model order choices than their biased counterparts, which suffer from overfitting, with KICc performing the best. Results on real data show that orders greater than one were selected by all criteria across all data sets for the small to moderate dimensions, particularly from small, specific networks such as the resting-state default mode network and the task-related motor networks, whereas low orders close to one but not necessarily one were chosen for the large dimensions of full-brain networks.
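
    A small sketch of order selection for a simulated VAR process is given below using statsmodels; it reports only the standard criteria (AIC, BIC, FPE, HQIC) available in that package, since KIC and the bias-corrected KICc proposed in the paper are not implemented there, and the simulated data are purely illustrative.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# Simulate a 4-dimensional VAR(2) process so that the true order is known
rng = np.random.default_rng(5)
T, d = 200, 4
A1 = 0.4 * np.eye(d)
A2 = -0.3 * np.eye(d)
y = np.zeros((T, d))
for t in range(2, T):
    y[t] = A1 @ y[t - 1] + A2 @ y[t - 2] + rng.standard_normal(d) * 0.5

# Compare the information criteria across candidate lag orders
model = VAR(y)
selection = model.select_order(maxlags=8)
print(selection.summary())
print("order chosen by AIC:", selection.aic, " by BIC:", selection.bic)
```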

  14. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged surrounding the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented. A comparison between the two models was then carried out. The results revealed that the proposed approach was practicable for optimizing the soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration to study the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provided an effective means as well as a theoretical basis for determining the sampling configuration and displaying the spatial distribution of soil organic matter with low cost and high efficiency.
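
    The simulated annealing step, selecting a subset of road-accessible candidate points so that the retained configuration covers the area well, can be sketched generically as below; the coverage objective, cooling schedule, and synthetic coordinates are assumptions and do not reproduce the study's soil-specific criterion.

```python
import numpy as np

def objective(selected, candidates, grid):
    """Mean distance from each grid location to its nearest selected sample
    (smaller = better spatial coverage)."""
    pts = candidates[selected]
    d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def anneal(candidates, grid, n_select, iters=2000, t0=1.0, cooling=0.999, seed=0):
    rng = np.random.default_rng(seed)
    selected = list(rng.choice(len(candidates), n_select, replace=False))
    best = cur = objective(selected, candidates, grid)
    best_sel = selected.copy()
    temp = t0
    for _ in range(iters):
        # Propose swapping one selected site for one unselected candidate
        out_i = rng.integers(n_select)
        in_c = rng.integers(len(candidates))
        if in_c in selected:
            continue
        trial = selected.copy()
        trial[out_i] = in_c
        val = objective(trial, candidates, grid)
        # Accept improvements always, worse moves with a temperature-dependent probability
        if val < cur or rng.random() < np.exp((cur - val) / temp):
            selected, cur = trial, val
            if cur < best:
                best, best_sel = cur, selected.copy()
        temp *= cooling
    return best_sel, best

rng = np.random.default_rng(6)
candidates = rng.uniform(0, 10_000, size=(200, 2))   # hypothetical points along roads
grid = rng.uniform(0, 10_000, size=(500, 2))         # locations to be represented
sel, score = anneal(candidates, grid, n_select=30)
print(score)
```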

  15. Sampling design for spatially distributed hydrogeologic and environmental processes

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1992-01-01

    A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes, and on optimal spatial estimation techniques. One of the most important results of random field theory for physical sciences is its rationalization of correlations in spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range in subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of the in-situ sampling nor the solution of the large spatial estimation systems of equations is necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas. Insight is gained by applying the proposed sampling procedure to realistic examples related to sampling problems in two dimensions. © 1992.

  16. Evaluation of Residential Consumers Knowledge of Wireless Network Security and Its Correlation with Identity Theft

    ERIC Educational Resources Information Center

    Kpaduwa, Fidelis Iheanyi

    2010-01-01

    This current quantitative correlational research study evaluated the residential consumers' knowledge of wireless network security and its relationship with identity theft. Data analysis was based on a sample of 254 randomly selected students. All the study participants completed a survey questionnaire designed to measure their knowledge of…

  17. The UK DNA banking network: a "fair access" biobank.

    PubMed

    Yuille, Martin; Dixon, Katherine; Platt, Andrew; Pullum, Simon; Lewis, David; Hall, Alistair; Ollier, William

    2010-08-01

    The UK DNA Banking Network (UDBN) is a secondary biobank: it aggregates and manages resources (samples and data) originated by others. The network comprises, on the one hand, investigator groups led by clinicians each with a distinct disease specialism and, on the other hand, a research infrastructure to manage samples and data. The infrastructure addresses the problem of providing secure quality-assured accrual, storage, replenishment and distribution capacities for samples and of facilitating access to DNA aliquots and data for new peer-reviewed studies in genetic epidemiology. 'Fair access' principles and practices have been pragmatically developed that, unlike open access policies in this area, are not cumbersome but, rather, are fit for the purpose of expediting new study designs and their implementation. UDBN has so far distributed >60,000 samples for major genotyping studies yielding >10 billion genotypes. It provides a working model that can inform progress in biobanking nationally, across Europe and internationally.

  18. Integrated communication and control systems. II - Design considerations

    NASA Technical Reports Server (NTRS)

    Ray, Asok; Halevi, Yoram

    1988-01-01

    The ICCS design issues for nonperiodic and stochastic delays are addressed and the framework for alternative design procedures is outlined. The impact of network-induced delays on system stability is investigated and their physical significance is demonstrated using a simulation. The negative effects of vacant sampling and message rejection at the controller are demonstrated.

  19. Optimal Design of River Monitoring Network in Taizihe River by Matter Element Analysis

    PubMed Central

    Wang, Hui; Liu, Zhe; Sun, Lina; Luo, Qing

    2015-01-01

    The objective of this study is to optimize the river monitoring network in the Taizihe River, Northeast China. The situation of the network and the water characteristics were studied in this work. During this study, water samples were collected once a month during January 2009 - December 2010 from seventeen sites. Furthermore, the 16 monitoring indexes were analyzed in the field and laboratory. The pH values of the surface water samples were found to be in the range of 6.83 to 9.31, and the average concentrations of NH4+-N, chemical oxygen demand (COD), volatile phenol and total phosphorus (TP) were found to decrease significantly. The water quality of the river improved from 2009 to 2010. Through the calculation of the data availability and the correlation between adjacent sections, it was found that the present monitoring network was inefficient and that optimization was indispensable. In order to improve the situation, matter element analysis and gravity distance were applied to the optimization of the river monitoring network and proved to be a useful method for optimizing a river quality monitoring network. The number of monitoring sections was cut from 17 to 13, making the monitoring network more cost-effective after optimization. The results of this study could be used in developing effective management strategies to improve the environmental quality of the Taizihe River. Also, the results show that the proposed model can be effectively used for the optimal design of monitoring networks in river systems. PMID:26023785

  20. I/O impedance controller

    DOEpatents

    Ruesch, Rodney; Jenkins, Philip N.; Ma, Nan

    2004-03-09

    There is disclosed apparatus and methods for impedance control to provide for controlling the impedance of a communication circuit using an all-digital impedance control circuit wherein one or more control bits are used to tune the output impedance. In one example embodiment, the impedance control circuit is fabricated using circuit components found in a standard macro library of a computer aided design system. According to another example embodiment, there is provided a control for an output driver on an integrated circuit ("IC") device to provide for forming a resistor divider network with the output driver and a resistor off the IC device so that the divider network produces an output voltage, comparing the output voltage of the divider network with a reference voltage, and adjusting the output impedance of the output driver to attempt to match the output voltage of the divider network and the reference voltage. Also disclosed is over-sampling the divider network voltage, storing the results of the over-sampling, repeating the over-sampling and storing, averaging the results of multiple over-sampling operations, controlling the impedance with a plurality of bits forming a word, and updating the value of the word by only one least significant bit at a time.
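
    A toy simulation of the control loop described in the abstract (over-sample the divider voltage, average, compare with the reference, and move the control word by one least-significant bit) is sketched below; the driver impedance model, resistor values, and noise level are invented for illustration.

```python
import numpy as np

def simulate_impedance_control(r_ext=50.0, v_supply=1.0, v_ref=0.5,
                               bits=5, steps=200, oversample=8, noise=0.01, seed=0):
    """Toy model: a digital code word sets the driver impedance, the driver and
    an external resistor form a divider, the divider voltage is over-sampled and
    averaged, compared with a reference, and the code moves one LSB per update."""
    rng = np.random.default_rng(seed)
    code = 0                                   # control word (0 .. 2**bits - 1)
    r_step = 5.0                               # ohms per LSB (assumed driver model)
    for _ in range(steps):
        r_drv = 10.0 + code * r_step           # output impedance for this code
        v_div = v_supply * r_ext / (r_ext + r_drv)
        # over-sample the comparator input and average to reject noise
        samples = v_div + noise * rng.standard_normal(oversample)
        v_avg = samples.mean()
        if v_avg > v_ref and code < 2**bits - 1:
            code += 1                          # raise impedance by one LSB
        elif v_avg < v_ref and code > 0:
            code -= 1                          # lower impedance by one LSB
    return code, 10.0 + code * r_step

print(simulate_impedance_control())
```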

  1. Design and Benchmarking of a Network-In-the-Loop Simulation for Use in a Hardware-In-the-Loop System

    NASA Technical Reports Server (NTRS)

    Aretskin-Hariton, Eliot; Thomas, George; Culley, Dennis; Kratz, Jonathan

    2017-01-01

    Distributed engine control (DEC) systems alter aircraft engine design constraints because of fundamental differences in the input and output communication between DEC and centralized control architectures. The change in the way communication is implemented may create new optimum engine-aircraft configurations. This paper continues the exploration of digital network communication by demonstrating a Network-In-the-Loop simulation at the NASA Glenn Research Center. This simulation incorporates a real-time network protocol, the Engine Area Distributed Interconnect Network Lite (EADIN Lite), with the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) software. The objective of this study is to assess the impact of the digital control network on the control system. Performance is evaluated relative to a truth model for large transient maneuvers and a typical flight profile for commercial aircraft. Results show that a decrease in network bandwidth from 250 Kbps (sampling all sensors every time step) to 40 Kbps resulted in very small differences in control system performance.
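
    As a rough illustration of why the required data rate falls when the scheduler stops polling every sensor on every time step, the sketch below computes bus load for a full and a reduced polling scheme. The sensor count, word size, framing overhead, and update rate are hypothetical values chosen only to land near the bandwidths quoted above; they are not the EADIN Lite or C-MAPSS40k parameters.

    ```python
    # Hypothetical numbers for illustration only.
    n_sensors = 20          # assumed number of sensed values on the bus
    bits_per_value = 16     # assumed data word size, excluding framing
    overhead_bits = 9       # assumed per-message framing/CRC overhead
    rate_hz = 500           # assumed control update rate (time steps per second)

    def bus_load(values_per_step):
        """Data rate in bits/s when `values_per_step` sensor values are polled each step."""
        return values_per_step * (bits_per_value + overhead_bits) * rate_hz

    full = bus_load(n_sensors)              # poll every sensor every time step
    reduced = bus_load(n_sensors // 5)      # poll only a rotating subset each step
    print(f"full polling:    {full / 1000:.1f} kbps")
    print(f"reduced polling: {reduced / 1000:.1f} kbps")
    ```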

  2. Design and Benchmarking of a Network-In-the-Loop Simulation for Use in a Hardware-In-the-Loop System

    NASA Technical Reports Server (NTRS)

    Aretskin-Hariton, Eliot D.; Thomas, George Lindsey; Culley, Dennis E.; Kratz, Jonathan L.

    2017-01-01

    Distributed engine control (DEC) systems alter aircraft engine design constraints because of fundamental differences in the input and output communication between DEC and centralized control architectures. The change in the way communication is implemented may create new optimum engine-aircraft configurations. This paper continues the exploration of digital network communication by demonstrating a Network-In-the-Loop simulation at the NASA Glenn Research Center. This simulation incorporates a real-time network protocol, the Engine Area Distributed Interconnect Network Lite (EADIN Lite), with the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) software. The objective of this study is to assess the impact of the digital control network on the control system. Performance is evaluated relative to a truth model for large transient maneuvers and a typical flight profile for commercial aircraft. Results show that a decrease in network bandwidth from 250 Kbps (sampling all sensors every time step) to 40 Kbps resulted in very small differences in control system performance.

  3. Development of a Prediction Model Based on RBF Neural Network for Sheet Metal Fixture Locating Layout Design and Optimization.

    PubMed

    Wang, Zhongqi; Yang, Bo; Kang, Yonggang; Yang, Yuan

    2016-01-01

    Fixtures play an important role in constraining excessive sheet metal part deformation at the machining, assembly, and measuring stages of the manufacturing process. However, designing and optimizing a sheet metal fixture locating layout remains a difficult and nontrivial task, because there is no direct, explicit expression relating the locating layout to the resulting deformation. To that end, an RBF neural network prediction model is proposed in this paper to assist the design and optimization of sheet metal fixture locating layouts. The RBF neural network model is constructed from a training data set selected by uniform sampling and finite element simulation analysis. Finally, a case study is conducted to verify the proposed method.
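
    A minimal sketch of the modelling idea: uniformly sample the locating-layout space, obtain responses (a toy function below stands in for the finite element deformation results), and fit a Gaussian RBF network by linear least squares. The layout dimensionality and the toy response are assumptions for illustration, not the paper's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Uniformly sampled "locator layout" designs (4 layout parameters) and a toy
    # response standing in for FE-simulated sheet deformation.
    X_train = rng.uniform(0.0, 1.0, size=(60, 4))
    def toy_deformation(X):
        return np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 - X[:, 2] * X[:, 3]
    y_train = toy_deformation(X_train)

    class RBFNet:
        """Gaussian RBF network: centers at the training points, linear output weights."""
        def __init__(self, sigma=0.3):
            self.sigma = sigma

        def _phi(self, X):
            d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * self.sigma ** 2))

        def fit(self, X, y):
            self.centers = X
            Phi = self._phi(X)
            self.w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
            return self

        def predict(self, X):
            return self._phi(X) @ self.w

    net = RBFNet(sigma=0.3).fit(X_train, y_train)
    X_test = rng.uniform(0.0, 1.0, size=(10, 4))
    err = np.abs(net.predict(X_test) - toy_deformation(X_test))
    print("mean absolute prediction error:", err.mean())
    ```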

  4. Development of a Prediction Model Based on RBF Neural Network for Sheet Metal Fixture Locating Layout Design and Optimization

    PubMed Central

    Wang, Zhongqi; Yang, Bo; Kang, Yonggang; Yang, Yuan

    2016-01-01

    Fixtures play an important role in constraining excessive sheet metal part deformation at the machining, assembly, and measuring stages of the manufacturing process. However, designing and optimizing a sheet metal fixture locating layout remains a difficult and nontrivial task, because there is no direct, explicit expression relating the locating layout to the resulting deformation. To that end, an RBF neural network prediction model is proposed in this paper to assist the design and optimization of sheet metal fixture locating layouts. The RBF neural network model is constructed from a training data set selected by uniform sampling and finite element simulation analysis. Finally, a case study is conducted to verify the proposed method. PMID:27127499

  5. An end-to-end workflow for engineering of biological networks from high-level specifications.

    PubMed

    Beal, Jacob; Weiss, Ron; Densmore, Douglas; Adler, Aaron; Appleton, Evan; Babb, Jonathan; Bhatia, Swapnil; Davidsohn, Noah; Haddock, Traci; Loyall, Joseph; Schantz, Richard; Vasilev, Viktor; Yaman, Fusun

    2012-08-17

    We present a workflow for the design and production of biological networks from high-level program specifications. The workflow is based on a sequence of intermediate models that incrementally translate high-level specifications into DNA samples that implement them. We identify algorithms for translating between adjacent models and implement them as a set of software tools, organized into a four-stage toolchain: Specification, Compilation, Part Assignment, and Assembly. The specification stage begins with a Boolean logic computation specified in the Proto programming language. The compilation stage uses a library of network motifs and cellular platforms, also specified in Proto, to transform the program into an optimized Abstract Genetic Regulatory Network (AGRN) that implements the programmed behavior. The part assignment stage assigns DNA parts to the AGRN, drawing the parts from a database for the target cellular platform, to create a DNA sequence implementing the AGRN. Finally, the assembly stage computes an optimized assembly plan to create the DNA sequence from available part samples, yielding a protocol for producing a sample of engineered plasmids with robotics assistance. Our workflow is the first to automate the production of biological networks from a high-level program specification. Furthermore, the workflow's modular design allows the same program to be realized on different cellular platforms simply by swapping workflow configurations. We validated our workflow by specifying a small-molecule sensor-reporter program and verifying the resulting plasmids in both HEK 293 mammalian cells and in E. coli bacterial cells.

  6. Performance Analysis of Optical Mobile Fronthaul for Cloud Radio Access Networks

    NASA Astrophysics Data System (ADS)

    Zhang, Jiawei; Xiao, Yuming; Li, Hui; Ji, Yuefeng

    2017-10-01

    A cloud radio access network (C-RAN) separates the baseband units (BBUs) of conventional base stations into a centralized pool that connects to remote radio heads (RRHs) through the mobile fronthaul. The mobile fronthaul is a new network segment of C-RAN designed to transport digital sampling data between BBUs and RRHs. Optical transport networks that provide large bandwidth and low latency are a promising fronthaul solution. In this paper, we discuss several optical transport networks that are candidates for mobile fronthaul and analyze their performance, including the number of wavelengths used, round-trip latency, and wavelength utilization.

  7. Application of artificial neural networks for conformity analysis of fuel performed with an optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Possetti, Gustavo Rafael Collere; Coradin, Francelli Klemba; Côcco, Lílian Cristina; Yamamoto, Carlos Itsuo; de Arruda, Lucia Valéria Ramos; Falate, Rosane; Muller, Marcia; Fabris, José Luís

    2008-04-01

    The quality control of liquid fuels is an important issue that brings benefits to the State, to consumers, and to the environment. Conformity analysis, in particular for gasoline, demands a rigorous sampling programme among gas stations and other economic agents, followed by a series of standard physicochemical tests. Such procedures are commonly expensive and time-consuming and often require a specialist to carry out the tasks. These drawbacks make the development of alternative analysis tools an important research field. The fuel refractive index is an additional parameter that can help in conformity analysis, and prospective optical fiber sensors can act as transducers with singular properties for measuring it. When this parameter is correlated with the sample density, it becomes possible to determine conformity zones that cannot be defined analytically. This work presents an application of artificial neural networks based on radial basis functions to determine these zones. A set of 45 gasoline samples, collected from several gas stations and previously analyzed according to the rules of the Agência Nacional do Petróleo, Gás Natural e Biocombustíveis, a Brazilian regulatory agency, constituted the database used to build two neural networks. The input variables of the first network are the samples' refractive indices, measured with an Abbe refractometer, and their densities, measured with a digital densimeter. For the second network the input variables included, besides the sample densities, the wavelength response of a long-period grating to the samples' refractive indices. The grating was written in an optical fiber using the point-to-point technique, by submitting the fiber to consecutive electrical arcs from a splice machine. The output variables of both radial basis function networks represent the conformity status of each sample, according to test reports produced following the American Society for Testing and Materials and/or Brazilian Association of Technical Rules standards. A subset of 35 samples, randomly chosen from the database, was used to design and calibrate (train) both networks. The two network topologies (number of radial basis function neurons in the hidden layer and function radius) were chosen to minimize the root mean square error. The subset composed of the other 10 samples was used to validate the final network architectures. The results demonstrate that both networks achieve good predictive capability.

  8. A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data

    PubMed Central

    Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence

    2013-01-01

    Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
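
    As a point of reference for one of the comparison methods named above, a regularized Gaussian graphical model fitted to log-transformed counts can be sketched with scikit-learn's graphical lasso. The simulated count matrix, pseudo-count, and edge threshold below are illustrative assumptions, not the authors' model, penalty, or data.

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLassoCV

    rng = np.random.default_rng(1)

    # Simulated RNA-seq-like counts: 50 samples x 12 genes, overdispersed via a
    # log-normal rate (a crude stand-in for large inter-sample variability).
    n_samples, n_genes = 50, 12
    rates = np.exp(rng.normal(loc=2.0, scale=0.8, size=(n_samples, n_genes)))
    counts = rng.poisson(rates)

    # Log-transform with a pseudo-count, then fit a sparse inverse covariance.
    X = np.log(counts + 1.0)
    model = GraphicalLassoCV().fit(X)

    # Nonzero off-diagonal entries of the precision matrix suggest conditional
    # dependencies, i.e. candidate edges of the inferred gene network.
    prec = model.precision_
    edges = [(i, j) for i in range(n_genes) for j in range(i + 1, n_genes)
             if abs(prec[i, j]) > 1e-6]
    print(f"{len(edges)} candidate edges recovered")
    ```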

  9. Learners' Attitudes toward Foreign Language Practice on Social Network Sites

    ERIC Educational Resources Information Center

    Villafuerte, Jhonny; Romero, Asier

    2017-01-01

    This work aims to study learners' attitudes towards practicing the English language on Social Network Sites (SNS). The sample involved 110 students from the University Laica Eloy Alfaro de Manabi in Ecuador and the University of the Basque Country in Spain. The instrument applied was a Likert-scale questionnaire designed ad hoc by the researchers,…

  10. Design of smart sensing components for volcano monitoring

    USGS Publications Warehouse

    Xu, M.; Song, W.-Z.; Huang, R.; Peng, Y.; Shirazi, B.; LaHusen, R.; Kiely, A.; Peterson, N.; Ma, A.; Anusuya-Rangappa, L.; Miceli, M.; McBride, D.

    2009-01-01

    In a volcano monitoring application, various geophysical and geochemical sensors generate continuous high-fidelity data, and there is a compelling need for real-time raw data for volcano eruption prediction research. This requires the network to support network-synchronized sampling, online configurable sensing, and situation awareness, which poses significant challenges for sensing component design. Ideally, resource usage should be driven by the environment and node situations, and data quality should be optimized under resource constraints. In this paper, we present our smart sensing component design, including hybrid time synchronization, configurable sensing, and situation awareness. Both design details and evaluation results are presented to show their efficiency. Although the presented design is for a volcano monitoring application, its design philosophy and framework can also apply to other similar applications and platforms. © 2009 Elsevier B.V.

  11. Reviews and syntheses: guiding the evolution of the observing system for the carbon cycle through quantitative network design

    NASA Astrophysics Data System (ADS)

    Kaminski, Thomas; Rayner, Peter Julian

    2017-10-01

    Various observational data streams have been shown to provide valuable constraints on the state and evolution of the global carbon cycle. These observations have the potential to reduce uncertainties in past, current, and predicted natural and anthropogenic surface fluxes. In particular such observations provide independent information for verification of actions as requested by the Paris Agreement. It is, however, difficult to decide which variables to sample, and how, where, and when to sample them, in order to achieve an optimal use of the observational capabilities. Quantitative network design (QND) assesses the impact of a given set of existing or hypothetical observations in a modelling framework. QND has been used to optimise in situ networks and assess the benefit to be expected from planned space missions. This paper describes recent progress and highlights aspects that are not yet sufficiently addressed. It demonstrates the advantage of an integrated QND system that can simultaneously evaluate a multitude of observational data streams and assess their complementarity and redundancy.

  12. Fine-tuning gene networks using simple sequence repeats

    PubMed Central

    Egbert, Robert G.; Klavins, Eric

    2012-01-01

    The parameters in a complex synthetic gene network must be extensively tuned before the network functions as designed. Here, we introduce a simple and general approach to rapidly tune gene networks in Escherichia coli using hypermutable simple sequence repeats embedded in the spacer region of the ribosome binding site. By varying repeat length, we generated expression libraries that incrementally and predictably sample gene expression levels over a 1,000-fold range. We demonstrate the utility of the approach by creating a bistable switch library that programmatically samples the expression space to balance the two states of the switch, and we illustrate the need for tuning by showing that the switch’s behavior is sensitive to host context. Further, we show that mutation rates of the repeats are controllable in vivo for stability or for targeted mutagenesis—suggesting a new approach to optimizing gene networks via directed evolution. This tuning methodology should accelerate the process of engineering functionally complex gene networks. PMID:22927382

  13. Design of a monitoring network over France in case of a radiological accidental release

    NASA Astrophysics Data System (ADS)

    Abida, Rachid; Bocquet, Marc; Vercauteren, Nikki; Isnard, Olivier

    The Institute of Radiation Protection and Nuclear Safety (France) is planning the set-up of an automatic nuclear aerosol monitoring network over the French territory. Each of the stations will be able to automatically sample the air aerosol content and provide activity concentration measurements for several radionuclides. This should help monitor the set of French and neighbouring countries' nuclear power plants and evaluate the impact of a radiological incident occurring at one of these facilities. This paper is devoted to the spatial design of such a network. Here, any potential network is judged on its ability to extrapolate activity concentrations measured at the network stations over the whole domain. The performance of a network is quantitatively assessed through a cost function that measures the discrepancy between the extrapolation and the true concentration fields. These true fields are obtained by computing a database of dispersion accidents over one year of meteorology, originating from 20 French nuclear sites. A close-to-optimal network is then sought using simulated annealing optimisation. The results emphasise the importance of the cost function in the design of a network aimed at monitoring an accidental dispersion. Several choices of the norm used in the cost function are studied and lead to different designs. The influence of the number of stations is discussed. A comparison is made with a purely geometric approach that does not involve simulations with a chemistry-transport model.
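
    A toy version of the optimisation step: select k station locations by simulated annealing so as to minimise a cost measuring how poorly the selected stations extrapolate a concentration field over the whole domain. The synthetic field, nearest-neighbour extrapolation, cost norm, and cooling schedule below are illustrative assumptions, not the study's dispersion database or chosen norm.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Candidate station sites on a grid and a synthetic "true" concentration field.
    sites = np.array([(x, y) for x in range(10) for y in range(10)], float)
    field = np.exp(-((sites[:, 0] - 3) ** 2 + (sites[:, 1] - 6) ** 2) / 8.0)

    def cost(selected):
        """Nearest-neighbour extrapolation: each site takes the value of the closest
        selected station; cost is the RMS mismatch with the true field."""
        d = np.linalg.norm(sites[:, None, :] - sites[selected][None, :, :], axis=-1)
        estimate = field[np.array(selected)][d.argmin(axis=1)]
        return np.sqrt(np.mean((estimate - field) ** 2))

    k, n = 8, len(sites)
    current = list(rng.choice(n, size=k, replace=False))
    best, best_cost = current[:], cost(current)
    T = 1.0
    for step in range(2000):
        proposal = current[:]
        proposal[rng.integers(k)] = rng.integers(n)       # move one station
        if len(set(proposal)) < k:                        # reject duplicated sites
            continue
        dc = cost(proposal) - cost(current)
        if dc < 0 or rng.random() < np.exp(-dc / T):      # Metropolis acceptance
            current = proposal
        if cost(current) < best_cost:
            best, best_cost = current[:], cost(current)
        T *= 0.998                                        # geometric cooling
    print("best network cost:", round(best_cost, 4))
    ```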

  14. Data Centric Sensor Stream Reduction for Real-Time Applications in Wireless Sensor Networks

    PubMed Central

    Aquino, Andre Luiz Lins; Nakamura, Eduardo Freire

    2009-01-01

    This work presents a data-centric strategy to meet deadlines in soft real-time applications in wireless sensor networks. This strategy considers three main aspects: (i) the design of the real-time application to obtain the minimum deadlines; (ii) an analytic model to estimate the ideal sample size used by data-reduction algorithms; and (iii) two data-centric stream-based sampling algorithms to perform data reduction whenever necessary. Simulation results show that our data-centric strategies meet deadlines without losing data representativeness. PMID:22303145
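
    One ingredient above, choosing a sample size from a target error bound and then reducing the sensor stream by sampling, can be sketched as follows; the finite-population sample-size formula and the synthetic stream are assumed stand-ins, not the paper's analytic model or algorithms.

    ```python
    import math
    import random

    random.seed(7)

    def sample_size(n, margin=0.05, z=1.96, p=0.5):
        """Classical finite-population sample-size formula (an assumed stand-in
        for an analytic model of the 'ideal' sample size)."""
        n0 = (z ** 2) * p * (1 - p) / margin ** 2
        return math.ceil(n0 / (1 + (n0 - 1) / n))

    stream = [20.0 + random.gauss(0, 1.5) for _ in range(5000)]   # raw sensor readings
    k = sample_size(len(stream))
    reduced = random.sample(stream, k)                             # data reduction step

    full_mean = sum(stream) / len(stream)
    red_mean = sum(reduced) / len(reduced)
    print(f"kept {k} of {len(stream)} readings; "
          f"mean {full_mean:.2f} vs {red_mean:.2f}")
    ```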

  15. Metal Resistivity Measuring Device

    DOEpatents

    Renken, Jr, C. J.; Myers, R. G.

    1960-12-20

    An eddy current device is designed for detecting discontinuities in metal samples. Alternate short- and long-duration pulses are inductively applied to a metal sample via the outer coil of a probe. The long pulses give a resultant signal from the metal sample responsive to probe-to-sample spacing and discontinuities within the sample, and the short pulses give a resultant signal responsive only to probe-to-sample spacing. The inner coil of the probe detects the two resultant signals and transmits them to a separation network where the two signals are separated. The two separated signals are then transmitted to a compensation network where the detected signals due to the short pulses are used to compensate for variations due to probe-to-sample spacing contained in the detected signals from the long pulses. Thus a resultant signal is obtained that is responsive to discontinuities within the sample and independent of probe-to-sample spacing.

  16. MID-ATLANTIC COASTAL STREAMS STUDY: STATISTICAL DESIGN FOR REGIONAL ASSESSMENT AND LANDSCAPE MODEL DEVELOPMENT

    EPA Science Inventory

    A network of stream-sampling sites was developed for the Mid-Atlantic Coastal Plain (New Jersey through North Carolina) as part of a collaborative study between the U.S. Environmental Protection Agency and the U.S. Geological Survey. A stratified random sampling with unequal weighting was u...

  17. MID-ATLANTIC COASTAL STREAMS STUDY: STATISTICAL DESIGN FOR REGIONAL ASSESSMENT AND LANDSCAPE MODEL DEVELOPMENT

    EPA Science Inventory

    A network of stream-sampling sites was developed for the Mid-Atlantic Coastal Plain (New Jersey through North Carolina) as part of collaborative research between the U.S. Environmental Protection Agency and the U.S. Geological Survey. A stratified random sampling with unequal wei...

  18. Transformational principles for NEON sampling of mammalian parasites and pathogens: a response to Springer et al. (2016)

    USDA-ARS?s Scientific Manuscript database

    The National Ecological Observatory Network (NEON) has recently released a series of protocols, presented with apparently broad community support, for studies of small mammals and parasites. Sampling designs were outlined, collectively aimed at understanding how changing environmental cond...

  19. Downsizing a long-term precipitation network: Using a quantitative approach to inform difficult decisions.

    PubMed

    Green, Mark B; Campbell, John L; Yanai, Ruth D; Bailey, Scott W; Bailey, Amey S; Grant, Nicholas; Halm, Ian; Kelsey, Eric P; Rustad, Lindsey E

    2018-01-01

    The design of a precipitation monitoring network must balance the demand for accurate estimates with the resources needed to build and maintain the network. If there are changes in the objectives of the monitoring or the availability of resources, network designs should be adjusted. At the Hubbard Brook Experimental Forest in New Hampshire, USA, precipitation has been monitored with a network established in 1955 that has grown to 23 gauges distributed across nine small catchments. This high sampling intensity allowed us to simulate reduced sampling schemes and thereby evaluate the effect of decommissioning gauges on the quality of precipitation estimates. We considered all possible scenarios of sampling intensity for the catchments on the south-facing slope (2047 combinations) and the north-facing slope (4095 combinations), from the current scenario with 11 or 12 gauges to only 1 gauge remaining. Gauge scenarios differed by as much as 6.0% from the best estimate (based on all the gauges), depending on the catchment, but 95% of the scenarios gave estimates within 2% of the long-term average annual precipitation. The insensitivity of precipitation estimates and the catchment fluxes that depend on them under many reduced monitoring scenarios allowed us to base our reduction decision on other factors such as technician safety, the time required for monitoring, and co-location with other hydrometeorological measurements (snow, air temperature). At Hubbard Brook, precipitation gauges could be reduced from 23 to 10 with a change of <2% in the long-term precipitation estimates. The decision-making approach illustrated in this case study is applicable to the redesign of monitoring networks when reduction of effort seems warranted.
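
    The scenario evaluation described can be sketched by enumerating gauge subsets, re-estimating mean precipitation from each subset, and comparing against the all-gauge estimate. The synthetic gauge values below are assumptions; the 2% criterion and the 2047-scenario count for an 11-gauge slope echo the abstract.

    ```python
    import itertools
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic long-term annual precipitation (mm) for 11 gauges on one slope.
    gauges = rng.normal(loc=1400, scale=40, size=11)
    best_estimate = gauges.mean()          # estimate using every gauge

    within_2pct = total = 0
    for r in range(1, len(gauges) + 1):                      # all subset sizes
        for subset in itertools.combinations(range(len(gauges)), r):
            est = gauges[list(subset)].mean()
            total += 1
            if abs(est - best_estimate) / best_estimate <= 0.02:
                within_2pct += 1

    # 2^11 - 1 = 2047 reduced-monitoring scenarios for this slope.
    print(f"{total} scenarios evaluated "
          f"({within_2pct / total:.1%} within 2% of the all-gauge estimate)")
    ```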

  20. DEVELOPMENT, EVALUATION AND APPLICATION OF AN AUTOMATED EVENT PRECIPITATION SAMPLER FOR NETWORK OPERATION

    EPA Science Inventory

    In 1993, the University of Michigan Air Quality Laboratory (UMAQL) designed a new wet-only precipitation collection system that was utilized in the Lake Michigan Loading Study. The collection system was designed to collect discrete mercury and trace element samples on an event b...

  1. Network-scale spatial and temporal variation in Chinook salmon (Oncorhynchus tshawytscha) redd distributions: patterns inferred from spatially continuous replicate surveys

    Treesearch

    Daniel J. Isaak; Russell F. Thurow

    2006-01-01

    Spatially continuous sampling designs, when temporally replicated, provide analytical flexibility and are unmatched in their ability to provide a dynamic system view. We have compiled such a data set by georeferencing the network-scale distribution of Chinook salmon (Oncorhynchus tshawytscha) redds across a large wilderness basin (7330 km2) in...

  2. The Impact of Using Mobile Social Network Applications on Students' Social-Life

    ERIC Educational Resources Information Center

    Abdelraheem, Ahmed Yousif; Ahmed, Abdelrahman Mohammed

    2018-01-01

    The aim of the study was to investigate the impact of using Mobile Social Network Applications (MSNAs) on students' social life (social relations, family relations and social awareness). The study was designed as a survey using a five-point Likert-type scale to collect data from the students. A sample of 211 students' responses was analyzed.…

  3. Optimal Design of Multitype Groundwater Monitoring Networks Using Easily Accessible Tools.

    PubMed

    Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang

    2016-11-01

    Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types. Both location and sensor type are treated simultaneously as decision variables. Our method combines linear uncertainty quantification and a modified genetic algorithm for discrete multilocation, multitype search. The efficiency of the global optimization is enhanced by an archive of past samples and parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently and with easily accessible tools prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types. © 2016, National Ground Water Association.
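
    The discrete search ingredient can be illustrated with a toy genetic algorithm over candidate designs, where each design is a fixed-length set of (location, sensor type) pairs and the fitness is a stand-in score rather than the PEST-based data-worth computation. All names, scores, and GA settings below are assumptions for illustration.

    ```python
    import random

    random.seed(0)

    LOCATIONS = list(range(30))        # candidate monitoring locations
    TYPES = ["head", "concentration"]  # candidate sensor types
    DESIGN_SIZE = 5                    # sensors per design

    def fitness(design):
        """Toy data-worth score rewarding spatial spread and a mix of sensor types."""
        locs = sorted(l for l, _ in design)
        spread = sum(b - a for a, b in zip(locs, locs[1:]))
        type_mix = len({t for _, t in design})
        return spread + 10 * type_mix

    def random_design():
        locs = random.sample(LOCATIONS, DESIGN_SIZE)
        return [(l, random.choice(TYPES)) for l in locs]

    def mutate(design):
        """Replace one (location, type) pair with a random alternative."""
        child = design[:]
        i = random.randrange(DESIGN_SIZE)
        child[i] = (random.choice(LOCATIONS), random.choice(TYPES))
        return child

    population = [random_design() for _ in range(40)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                               # elitist selection
        population = parents + [mutate(random.choice(parents)) for _ in range(30)]

    print("best design:", max(population, key=fitness))
    ```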

  4. The UK DNA banking network: a “fair access” biobank

    PubMed Central

    Dixon, Katherine; Platt, Andrew; Pullum, Simon; Lewis, David; Hall, Alistair; Ollier, William

    2009-01-01

    The UK DNA Banking Network (UDBN) is a secondary biobank: it aggregates and manages resources (samples and data) originated by others. The network comprises, on the one hand, investigator groups led by clinicians each with a distinct disease specialism and, on the other hand, a research infrastructure to manage samples and data. The infrastructure addresses the problem of providing secure quality-assured accrual, storage, replenishment and distribution capacities for samples and of facilitating access to DNA aliquots and data for new peer-reviewed studies in genetic epidemiology. ‘Fair access’ principles and practices have been pragmatically developed that, unlike open access policies in this area, are not cumbersome but, rather, are fit for the purpose of expediting new study designs and their implementation. UDBN has so far distributed >60,000 samples for major genotyping studies yielding >10 billion genotypes. It provides a working model that can inform progress in biobanking nationally, across Europe and internationally. PMID:19672698

  5. Synchronization of hybrid coupled reaction-diffusion neural networks with time delays via generalized intermittent control with spacial sampled-data.

    PubMed

    Lu, Binglong; Jiang, Haijun; Hu, Cheng; Abdurahman, Abdujelil

    2018-05-04

    The exponential synchronization of hybrid coupled reaction-diffusion neural networks with time delays is discussed in this article. First, a generalized intermittent control with spatial sampled data is introduced, which is intermittent in time and uses sampled data in space. This type of control strategy not only unifies traditional periodic intermittent control with the aperiodic case, but also lowers the update rate of the controller in both the temporal and spatial domains. Next, based on the designed control protocol and the Lyapunov-Krasovskii functional approach, some novel and readily verified criteria are established to guarantee the exponential synchronization of the considered networks. These criteria depend on the diffusion coefficients, coupling strengths, time delays and control parameters. Finally, the effectiveness of the proposed control strategy is shown by a numerical example. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Saltwater intrusion in the surficial aquifer system of the Big Cypress Basin, southwest Florida, and a proposed plan for improved salinity monitoring

    USGS Publications Warehouse

    Prinos, Scott T.

    2013-01-01

    The installation of drainage canals, poorly cased wells, and water-supply withdrawals have led to saltwater intrusion in the primary water-use aquifers in southwest Florida. Increasing population and water use have exacerbated this problem. Installation of water-control structures, well-plugging projects, and regulation of water use have slowed saltwater intrusion, but the chloride concentration of samples from some of the monitoring wells in this area indicates that saltwater intrusion continues to occur. In addition, rising sea level could increase the rate and extent of saltwater intrusion. The existing saltwater intrusion monitoring network was examined and found to lack the necessary organization, spatial distribution, and design to properly evaluate saltwater intrusion. The most recent hydrogeologic framework of southwest Florida indicates that some wells may be open to multiple aquifers or have an incorrect aquifer designation. Some of the sampling methods being used could result in poor-quality data. Some older wells are badly corroded, obstructed, or damaged and may not yield useable samples. Saltwater in some of the canals is in close proximity to coastal well fields. In some instances, saltwater occasionally occurs upstream from coastal salinity control structures. These factors lead to an incomplete understanding of the extent and threat of saltwater intrusion in southwest Florida. A proposed plan to improve the saltwater intrusion monitoring network in the South Florida Water Management District’s Big Cypress Basin describes improvements in (1) network management, (2) quality assurance, (3) documentation, (4) training, and (5) data accessibility. The plan describes improvements to hydrostratigraphic and geospatial network coverage that can be accomplished using additional monitoring, surface geophysical surveys, and borehole geophysical logging. Sampling methods and improvements to monitoring well design are described in detail. Geochemical analyses that provide insights concerning the sources of saltwater in the aquifers are described. The requirement to abandon inactive wells is discussed.

  7. NEON terrestrial field observations: designing continental scale, standardized sampling

    Treesearch

    R. H. Kao; C.M. Gibson; R. E. Gallery; C. L. Meier; D. T. Barnett; K. M. Docherty; K. K. Blevins; P. D. Travers; E. Azuaje; Y. P. Springer; K. M. Thibault; V. J. McKenzie; M. Keller; L. F. Alves; E. L. S. Hinckley; J. Parnell; D. Schimel

    2012-01-01

    Rapid changes in climate and land use and the resulting shifts in species distributions and ecosystem functions have motivated the development of the National Ecological Observatory Network (NEON). Integrating across spatial scales from ground sampling to remote sensing, NEON will provide data for users to address ecological responses to changes in climate, land use,...

  8. 40 CFR 80.79 - Liability for violations of the prohibited activities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...; and (iii)(A) That it has conducted a quality assurance sampling and testing program, as described in... imposed by the refiner designed to prevent such action, and despite periodic sampling and testing by the... reformulated gasoline at all points in the gasoline distribution network, other than at retail outlets and...

  9. Adaptive sampling in behavioral surveys.

    PubMed

    Thompson, S K

    1997-01-01

    Studies of populations such as drug users encounter difficulties because the members of the populations are rare, hidden, or hard to reach. Conventionally designed large-scale surveys detect relatively few members of the populations so that estimates of population characteristics have high uncertainty. Ethnographic studies, on the other hand, reach suitable numbers of individuals only through the use of link-tracing, chain referral, or snowball sampling procedures that often leave the investigators unable to make inferences from their sample to the hidden population as a whole. In adaptive sampling, the procedure for selecting people or other units to be in the sample depends on variables of interest observed during the survey, so the design adapts to the population as encountered. For example, when self-reported drug use is found among members of the sample, sampling effort may be increased in nearby areas. Types of adaptive sampling designs include ordinary sequential sampling, adaptive allocation in stratified sampling, adaptive cluster sampling, and optimal model-based designs. Graph sampling refers to situations with nodes (for example, people) connected by edges (such as social links or geographic proximity). An initial sample of nodes or edges is selected and edges are subsequently followed to bring other nodes into the sample. Graph sampling designs include network sampling, snowball sampling, link-tracing, chain referral, and adaptive cluster sampling. A graph sampling design is adaptive if the decision to include linked nodes depends on variables of interest observed on nodes already in the sample. Adjustment methods for nonsampling errors such as imperfect detection of drug users in the sample apply to adaptive as well as conventional designs.
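
    A small sketch of an adaptive graph-sampling design in the spirit described: draw an initial random sample, then follow links only from sampled members who exhibit the variable of interest. The simulated network and trait assignment below are assumptions, not data from any study.

    ```python
    import random
    import networkx as nx

    random.seed(1)

    # Simulated population: a social network in which a rare trait clusters.
    G = nx.watts_strogatz_graph(n=500, k=6, p=0.1, seed=1)
    trait = {node: (node % 50 == 0) for node in G.nodes}   # seed a rare trait
    for node in list(trait):
        if trait[node]:
            for nbr in G.neighbors(node):
                trait[nbr] = True                          # let the trait cluster locally

    def adaptive_sample(G, initial_size=20):
        """Adaptive design: follow edges only from sampled nodes showing the trait."""
        sample = set(random.sample(list(G.nodes), initial_size))
        frontier = [n for n in sample if trait[n]]
        while frontier:
            node = frontier.pop()
            for nbr in G.neighbors(node):
                if nbr not in sample:
                    sample.add(nbr)
                    if trait[nbr]:
                        frontier.append(nbr)
        return sample

    sample = adaptive_sample(G)
    hits = sum(trait[n] for n in sample)
    print(f"sampled {len(sample)} people, {hits} with the trait "
          f"(population prevalence {sum(trait.values())}/{len(G)})")
    ```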

  10. Hybrid Optimal Design of the Eco-Hydrological Wireless Sensor Network in the Middle Reach of the Heihe River Basin, China

    PubMed Central

    Kang, Jian; Li, Xin; Jin, Rui; Ge, Yong; Wang, Jinfeng; Wang, Jianghao

    2014-01-01

    The eco-hydrological wireless sensor network (EHWSN) in the middle reaches of the Heihe River Basin in China is designed to capture spatial and temporal variability and to estimate the ground truth for validating remote sensing products. However, no prior information about the target variable is available. To meet both requirements, a hybrid model-based sampling method without any spatial autocorrelation assumptions was developed to optimize the distribution of EHWSN nodes based on geostatistics. This hybrid model incorporates two sub-criteria: one for variogram modeling to represent the variability, another for improving the spatial prediction used to evaluate remote sensing products. The validity of the optimized EHWSN is assessed in terms of representativeness, variogram modeling and spatial accuracy, using 15 types of simulation fields generated by unconditional geostatistical stochastic simulation. The sampling design shows good representativeness; variograms estimated from the samples have less than 3% mean error relative to the true variograms. Fields at multiple scales are then predicted; as the scale increases, the estimated fields show higher similarity to the simulation fields at block sizes exceeding 240 m. These validations show that the hybrid sampling method is effective for both objectives when the characteristics of the optimized variables are unknown. PMID:25317762

  11. Hybrid optimal design of the eco-hydrological wireless sensor network in the middle reach of the Heihe River Basin, China.

    PubMed

    Kang, Jian; Li, Xin; Jin, Rui; Ge, Yong; Wang, Jinfeng; Wang, Jianghao

    2014-10-14

    The eco-hydrological wireless sensor network (EHWSN) in the middle reaches of the Heihe River Basin in China is designed to capture spatial and temporal variability and to estimate the ground truth for validating remote sensing products. However, no prior information about the target variable is available. To meet both requirements, a hybrid model-based sampling method without any spatial autocorrelation assumptions was developed to optimize the distribution of EHWSN nodes based on geostatistics. This hybrid model incorporates two sub-criteria: one for variogram modeling to represent the variability, another for improving the spatial prediction used to evaluate remote sensing products. The validity of the optimized EHWSN is assessed in terms of representativeness, variogram modeling and spatial accuracy, using 15 types of simulation fields generated by unconditional geostatistical stochastic simulation. The sampling design shows good representativeness; variograms estimated from the samples have less than 3% mean error relative to the true variograms. Fields at multiple scales are then predicted; as the scale increases, the estimated fields show higher similarity to the simulation fields at block sizes exceeding 240 m. These validations show that the hybrid sampling method is effective for both objectives when the characteristics of the optimized variables are unknown.

  12. Network Interventions on Physical Activity in an Afterschool Program: An Agent-Based Social Network Study

    PubMed Central

    Zhang, Jun; Shoham, David A.; Tesdahl, Eric

    2015-01-01

    Objectives. We studied simulated interventions that leveraged social networks to increase physical activity in children. Methods. We studied a real-world social network of 81 children (average age = 7.96 years) who lived in low socioeconomic status neighborhoods, and attended public schools and 1 of 2 structured afterschool programs. The sample was ethnically diverse, and 44% were overweight or obese. We used social network analysis and agent-based modeling simulations to test whether implementing a network intervention would increase children’s physical activity. We tested 3 intervention strategies. Results. The intervention that targeted opinion leaders was effective in increasing the average level of physical activity across the entire network. However, the intervention that targeted the most sedentary children was the best at increasing their physical activity levels. Conclusions. Which network intervention to implement depends on whether the goal is to shift the entire distribution of physical activity or to influence those most adversely affected by low physical activity. Agent-based modeling could be an important complement to traditional project planning tools, analogous to sample size and power analyses, to help researchers design more effective interventions for increasing children’s physical activity. PMID:25689202

  13. RENEB intercomparisons applying the conventional Dicentric Chromosome Assay (DCA).

    PubMed

    Oestreicher, Ursula; Samaga, Daniel; Ainsbury, Elizabeth; Antunes, Ana Catarina; Baeyens, Ans; Barrios, Leonardo; Beinke, Christina; Beukes, Philip; Blakely, William F; Cucu, Alexandra; De Amicis, Andrea; Depuydt, Julie; De Sanctis, Stefania; Di Giorgio, Marina; Dobos, Katalin; Dominguez, Inmaculada; Duy, Pham Ngoc; Espinoza, Marco E; Flegal, Farrah N; Figel, Markus; Garcia, Omar; Monteiro Gil, Octávia; Gregoire, Eric; Guerrero-Carbajal, C; Güçlü, İnci; Hadjidekova, Valeria; Hande, Prakash; Kulka, Ulrike; Lemon, Jennifer; Lindholm, Carita; Lista, Florigio; Lumniczky, Katalin; Martinez-Lopez, Wilner; Maznyk, Nataliya; Meschini, Roberta; M'kacher, Radia; Montoro, Alegria; Moquet, Jayne; Moreno, Mercedes; Noditi, Mihaela; Pajic, Jelena; Radl, Analía; Ricoul, Michelle; Romm, Horst; Roy, Laurence; Sabatier, Laure; Sebastià, Natividad; Slabbert, Jacobus; Sommer, Sylwester; Stuck Oliveira, Monica; Subramanian, Uma; Suto, Yumiko; Que, Tran; Testa, Antonella; Terzoudi, Georgia; Vral, Anne; Wilkins, Ruth; Yanti, LusiYanti; Zafiropoulos, Demetre; Wojcik, Andrzej

    2017-01-01

    Two quality-controlled inter-laboratory exercises were organized within the EU project 'Realizing the European Network of Biodosimetry (RENEB)' to further optimize the dicentric chromosome assay (DCA) and to identify needs for training and harmonization activities within the RENEB network. The general study design included blood shipment, sample processing, analysis of chromosome aberrations and radiation dose assessment. After manual scoring of dicentric chromosomes in different cell numbers, dose estimates and corresponding 95% confidence intervals were submitted by the participants. The shipment of blood samples to the partners within the European Community (EU) was performed successfully; outside the EU, unacceptable delays occurred. The dose estimation results demonstrate a very successful classification of the blood samples into medically relevant groups. In comparison with the 1st exercise, the 2nd intercomparison showed an improvement in the accuracy of the dose estimates, especially for the high dose point. In case of a large-scale radiological incident, the pooling of resources by networks can enhance the rapid classification of individuals into medically relevant treatment groups based on the DCA. The performance of the RENEB network as a whole has clearly benefited from harmonization processes and specific training activities for the network partners.

  14. A Novel User Classification Method for Femtocell Network by Using Affinity Propagation Algorithm and Artificial Neural Network

    PubMed Central

    Ahmed, Afaz Uddin; Tariqul Islam, Mohammad; Ismail, Mahamod; Kibria, Salehin; Arshad, Haslina

    2014-01-01

    A user categorization technique based on an artificial neural network (ANN) and the affinity propagation (AP) algorithm is presented. The proposed algorithm is designed for closed-access femtocell networks. The ANN is used for the user classification process, and the AP algorithm is used to optimize the ANN training process: AP selects the best possible training samples for a faster ANN training cycle. Users are distinguished by the difference in received signal strength at a multielement femtocell device. A previously developed directive microstrip antenna is used to configure the femtocell device. Simulation results show that, for a particular house pattern, the categorization technique without the AP algorithm requires 5 indoor users and 10 outdoor users to attain error-free operation. When the AP algorithm is integrated with the ANN, the system needs 60% fewer training samples, reducing the training time by up to 50%. This procedure makes the femtocell more effective for closed-access operation. PMID:25133214
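
    The two-stage idea, using affinity propagation to pick representative training exemplars and then training a neural-network classifier on them, can be sketched with scikit-learn; the synthetic received-signal-strength features and labels below are assumptions for illustration, not the paper's femtocell measurements.

    ```python
    import numpy as np
    from sklearn.cluster import AffinityPropagation
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Synthetic received-signal-strength differences from a multi-element femtocell:
    # indoor users (label 1) tend to show larger differences than outdoor users (0).
    indoor = rng.normal(loc=12.0, scale=3.0, size=(200, 3))
    outdoor = rng.normal(loc=4.0, scale=3.0, size=(200, 3))
    X = np.vstack([indoor, outdoor])
    y = np.array([1] * 200 + [0] * 200)

    # Stage 1: affinity propagation selects exemplars to use as training samples.
    ap = AffinityPropagation(random_state=0).fit(X)
    exemplars = ap.cluster_centers_indices_

    # Stage 2: train the classifier on the exemplars only (a smaller training set).
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    clf.fit(X[exemplars], y[exemplars])

    print(f"{len(exemplars)} exemplars used; "
          f"accuracy on all users: {clf.score(X, y):.2f}")
    ```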

  15. A novel user classification method for femtocell network by using affinity propagation algorithm and artificial neural network.

    PubMed

    Ahmed, Afaz Uddin; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Arshad, Haslina

    2014-01-01

    A user categorization technique based on an artificial neural network (ANN) and the affinity propagation (AP) algorithm is presented. The proposed algorithm is designed for closed-access femtocell networks. The ANN is used for the user classification process, and the AP algorithm is used to optimize the ANN training process: AP selects the best possible training samples for a faster ANN training cycle. Users are distinguished by the difference in received signal strength at a multielement femtocell device. A previously developed directive microstrip antenna is used to configure the femtocell device. Simulation results show that, for a particular house pattern, the categorization technique without the AP algorithm requires 5 indoor users and 10 outdoor users to attain error-free operation. When the AP algorithm is integrated with the ANN, the system needs 60% fewer training samples, reducing the training time by up to 50%. This procedure makes the femtocell more effective for closed-access operation.

  16. Auxiliary Parameter MCMC for Exponential Random Graph Models

    NASA Astrophysics Data System (ADS)

    Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro

    2016-11-01

    Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the developments of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.

  17. Design of a Threat-Based Gunnery Performance Test: Issues and Procedures for Crew and Platoon Tank Gunnery

    DTIC Science & Technology

    1990-06-01

    [Fragmentary scanned-text excerpt. The recoverable content is a checklist ("Figure 22. Sample engagement") covering crew and platoon communication measures: use of visual communication, quick changes of direction/formation, transmission of timely, accurate and concise messages, maintenance of network discipline by the network control station (NCS), and use of radio/transmission security equipment and wire.]

  18. Counselling Implications of Teachers' Digital Competencies in the Use of Social Networking Sites (SNSs) in the Teaching-Learning Process in Calabar, Nigeria

    ERIC Educational Resources Information Center

    Eyo, Mfon

    2016-01-01

    The study investigated teachers' digital competencies in the use of Social Networking Sites (SNSs) in the teaching-learning process. It had five research questions and two hypotheses. Adopting a survey design, it used a sample of 250 teachers from 10 out of 16 secondary schools in Calabar Municipal Local Government. A researcher-developed…

  19. Estimating uncertainty in respondent-driven sampling using a tree bootstrap method.

    PubMed

    Baraff, Aaron J; McCormick, Tyler H; Raftery, Adrian E

    2016-12-20

    Respondent-driven sampling (RDS) is a network-based form of chain-referral sampling used to estimate attributes of populations that are difficult to access using standard survey tools. Although it has grown quickly in popularity since its introduction, the statistical properties of RDS estimates remain elusive. In particular, the sampling variability of these estimates has been shown to be much higher than previously acknowledged, and even methods designed to account for RDS result in misleadingly narrow confidence intervals. In this paper, we introduce a tree bootstrap method for estimating uncertainty in RDS estimates based on resampling recruitment trees. We use simulations from known social networks to show that the tree bootstrap method not only outperforms existing methods but also captures the high variability of RDS, even in extreme cases with high design effects. We also apply the method to data from injecting drug users in Ukraine. Unlike other methods, the tree bootstrap depends only on the structure of the sampled recruitment trees, not on the attributes being measured on the respondents, so correlations between attributes can be estimated as well as variability. Our results suggest that it is possible to accurately assess the high level of uncertainty inherent in RDS.
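
    A schematic version of the resampling step described above: resample seed respondents with replacement, then within each tree resample recruits with replacement level by level, and recompute the estimate on each bootstrap sample. The recruitment forest and the attribute values below are illustrative assumptions, not the Ukraine data.

    ```python
    import random

    random.seed(2)

    # Hypothetical recruitment forest: each respondent maps to its recruits, and
    # each respondent carries a measured attribute (e.g. infection status).
    children = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [6], 5: [], 6: [],
                7: [8, 9], 8: [], 9: [10], 10: []}
    seeds = [0, 7]
    attribute = {i: random.random() < 0.3 for i in children}

    def resample_tree(node):
        """Resample one recruitment tree: keep the node, redraw its recruits with
        replacement, and recurse into each redrawn recruit."""
        out = [node]
        kids = children[node]
        for _ in range(len(kids)):
            out.extend(resample_tree(random.choice(kids)))
        return out

    def bootstrap_estimates(n_boot=1000):
        ests = []
        for _ in range(n_boot):
            drawn_seeds = [random.choice(seeds) for _ in seeds]   # seeds w/ replacement
            sample = [n for s in drawn_seeds for n in resample_tree(s)]
            ests.append(sum(attribute[n] for n in sample) / len(sample))
        return ests

    ests = sorted(bootstrap_estimates())
    lo, hi = ests[25], ests[974]                 # 95% percentile interval
    print(f"bootstrap 95% interval for prevalence: [{lo:.2f}, {hi:.2f}]")
    ```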

  20. Impact of degree truncation on the spread of a contagious process on networks.

    PubMed

    Harling, Guy; Onnela, Jukka-Pekka

    2018-03-01

    Understanding how person-to-person contagious processes spread through a population requires accurate information on connections between population members. However, such connectivity data, when collected via interview, is often incomplete due to partial recall, respondent fatigue or study design, e.g., fixed choice designs (FCD) truncate out-degree by limiting the number of contacts each respondent can report. Past research has shown how FCD truncation affects network properties, but its implications for predicted speed and size of spreading processes remain largely unexplored. To study the impact of degree truncation on predictions of spreading process outcomes, we generated collections of synthetic networks containing specific properties (degree distribution, degree-assortativity, clustering), and also used empirical social network data from 75 villages in Karnataka, India. We simulated FCD using various truncation thresholds and ran a susceptible-infectious-recovered (SIR) process on each network. We found that spreading processes propagated on truncated networks resulted in slower and smaller epidemics, with a sudden decrease in prediction accuracy at a level of truncation that varied by network type. Our results have implications beyond FCD to truncation due to any limited sampling from a larger network. We conclude that knowledge of network structure is important for understanding the accuracy of predictions of process spread on degree truncated networks.
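
    A small experiment in the same spirit: truncate each node's reported ties at a fixed-choice threshold, then run a discrete-time SIR process on the full and truncated networks and compare final epidemic sizes. The network model, thresholds, and transmission parameters below are assumptions, not the study's Karnataka village data.

    ```python
    import random
    import networkx as nx

    random.seed(5)

    def truncate_out_degree(G, k):
        """Fixed-choice design: each node 'reports' at most k of its ties."""
        H = nx.Graph()
        H.add_nodes_from(G.nodes)
        for node in G.nodes:
            nbrs = list(G.neighbors(node))
            for nbr in random.sample(nbrs, min(k, len(nbrs))):
                H.add_edge(node, nbr)
        return H

    def sir_final_size(G, beta=0.15, gamma=0.1, seeds=5):
        """Discrete-time SIR: returns the number of ever-infected (recovered) nodes."""
        infected = set(random.sample(list(G.nodes), seeds))
        recovered = set()
        while infected:
            new_infected = set()
            for node in infected:
                for nbr in G.neighbors(node):
                    if nbr not in infected and nbr not in recovered:
                        if random.random() < beta:
                            new_infected.add(nbr)
                if random.random() < gamma:
                    recovered.add(node)
            infected = (infected | new_infected) - recovered
        return len(recovered)

    G = nx.barabasi_albert_graph(n=2000, m=4, seed=5)
    for k in (None, 5, 3):
        net = G if k is None else truncate_out_degree(G, k)
        print(f"truncation k={k}: final epidemic size {sir_final_size(net)}")
    ```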

  1. Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error

    NASA Astrophysics Data System (ADS)

    Jung, Insung; Koo, Lockjo; Wang, Gi-Nam

    2008-11-01

    The objective of this paper was to design a human bio-signal data prediction system that reduces prediction error using a two-state-mapping time series neural network BP (back-propagation) model. Neural network models trained in a supervised manner with the error back-propagation algorithm are widely applied in industry for time series prediction; however, a residual error between the real value and the prediction typically remains. We therefore designed a two-state neural network model that compensates for this residual error, which can be applied to the prevention of sudden death and metabolic syndrome diseases such as hypertension and obesity. Most of the simulation cases were handled satisfactorily by the two-state-mapping time series prediction model. In particular, for small time-series sample sizes the proposed model was more accurate than the standard MLP model.

  2. Matching technique yields optimum LNA performance. [Low Noise Amplifiers

    NASA Technical Reports Server (NTRS)

    Sifri, J. D.

    1986-01-01

    The present article is concerned with a case in which an optimum noise figure and unconditional stability have been designed into a 2.385-GHz low-noise preamplifier via an unusual method for matching the input with a suspended line. The results obtained with several conventional line-matching techniques were not satisfactory. Attention is given to the minimization of thermal noise, the design procedure, requirements for a high-impedance line, a sampling of four matching networks, the noise figure of the single-line matching network as a function of frequency, and the approaches used to achieve unconditional stability.

  3. A regional monitoring network to investigate the occurrence of agricultural chemicals in near-surface aquifers of the midcontinental USA

    USGS Publications Warehouse

    Kolpin, D.W.; Goolsby, D.A.

    1995-01-01

    Previous state and national surveys conducted in the mid-continental USA have produced a wide range of results regarding the occurrence of agricultural chemicals in groundwater. At least some of these differences can be attributed to inconsistencies between the surveys, such as different analytical reporting limits. The US Geological Survey has designed a sampling network that is geographically and hydrogeologically representative of near-surface aquifers in the corn- and soybean-producing region of the midcontinental USA. More than 800 water quality samples have been collected from the network since 1991. Six of the seven most frequently detected compounds in this study were herbicide metabolites. A direct relation was determined between tritium content and herbicide and nitrate contamination. The unconsolidated aquifers sampled were found to be more susceptible to herbicide and nitrate contamination than the bedrock aquifers. Knowledge of the regional occurrence and distribution of agricultural chemicals acquired through the study of data collected at network sites will assist policy makers and planners with decisions regarding the protection of drinking-water supplies.

  4. Stochastic Simulation of Biomolecular Networks in Dynamic Environments

    PubMed Central

    Voliotis, Margaritis; Thomas, Philipp; Grima, Ramon; Bowsher, Clive G.

    2016-01-01

    Simulation of biomolecular networks is now indispensable for studying biological systems, from small reaction networks to large ensembles of cells. Here we present a novel approach for stochastic simulation of networks embedded in the dynamic environment of the cell and its surroundings. We thus sample trajectories of the stochastic process described by the chemical master equation with time-varying propensities. A comparative analysis shows that existing approaches can either fail dramatically, or else can impose impractical computational burdens due to numerical integration of reaction propensities, especially when cell ensembles are studied. Here we introduce the Extrande method which, given a simulated time course of dynamic network inputs, provides a conditionally exact and several orders-of-magnitude faster simulation solution. The new approach makes it feasible to demonstrate—using decision-making by a large population of quorum sensing bacteria—that robustness to fluctuations from upstream signaling places strong constraints on the design of networks determining cell fate. Our approach has the potential to significantly advance both understanding of molecular systems biology and design of synthetic circuits. PMID:27248512
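
    A minimal sketch of simulating a reaction system with a time-varying propensity by thinning, the rejection idea underlying Extrande-type methods: bound the total propensity, draw candidate event times from the bound, and accept each candidate with probability equal to the actual-to-bound ratio. The birth-death system and input signal below are illustrative assumptions, not the paper's models.

    ```python
    import math
    import random

    random.seed(4)

    def input_signal(t):
        """Time-varying extrinsic input modulating the production propensity."""
        return 1.0 + 0.8 * math.sin(0.5 * t)

    def simulate(t_end=50.0, k_prod=2.0, k_deg=0.1):
        t, x, trajectory = 0.0, 0, []
        while t < t_end:
            # Upper bound on the total propensity (valid until the next event,
            # since x is constant between accepted events and the input is <= 1.8).
            bound = k_prod * 1.8 + k_deg * max(x, 1) * 2
            t += random.expovariate(bound)               # candidate event time
            a_prod = k_prod * input_signal(t)            # actual propensities at t
            a_deg = k_deg * x
            u = random.random() * bound
            if u < a_prod:                               # accept as a production event
                x += 1
            elif u < a_prod + a_deg:                     # accept as a degradation event
                x -= 1
            # otherwise: thinned (rejected) candidate, state unchanged
            trajectory.append((t, x))
        return trajectory

    traj = simulate()
    print("final molecule count:", traj[-1][1])
    ```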

  5. Nested sampling at karst springs: from basic patterns to event triggered sampling and on-line monitoring.

    NASA Astrophysics Data System (ADS)

    Stadler, Hermann; Skritek, Paul; Zerobin, Wolfgang; Klock, Erich; Farnleitner, Andreas H.

    2010-05-01

    In recent years, global changes in ecosystems, population growth, and modifications of the legal framework within the EU have increased the need for qualitative groundwater and spring water monitoring, with the goal of continuing to supply consumers with high-quality drinking water in the future. In addition, the demand for sustainable protection of drinking water resources has prompted the implementation of early warning systems and quality assurance networks in water supplies. In hydrogeological investigations, event monitoring and event sampling amount to worst-case-scenario monitoring; such tools are becoming indispensable for obtaining detailed information about aquifer parameters and vulnerability. Within water supplies, smart sampling designs combined with in-situ measurements of different parameters and on-line access can play an important role in early warning systems and quality surveillance networks. In this study, nested sampling tiers designed to cover the total system dynamics are presented. Basic monitoring sampling (BMS), high-frequency sampling (HFS) and automated event sampling (AES) were combined. BMS was organized with a monthly increment for at least two years, and HFS was performed during times of increased groundwater recharge (e.g. during snowmelt); at least one AES tier was embedded in this system. AES was enabled by cross-linking hydrological stations, so the system could run fully automated and include real-time availability of data. By networking via Low Earth Orbiting (LEO) satellites, data from the precipitation station (PS) in the catchment area are brought together with data from the spring sampling station (SSS) without the need for terrestrial infrastructure for communication and power supply. Furthermore, the whole course of input and output parameters, such as precipitation (system input) and discharge (system output), together with the status of the sampling system, is transmitted via LEO satellites to a Central Monitoring Station (CMS), which can be linked to a web server to provide unlimited real-time data access. The automatically generated event notice to a local service team of the sampling station is transmitted via internet, GSM, GPRS or LEO satellites. If a GPRS network is available at the stations, the system can also be realized via that network. However, a major problem with these terrestrial communication systems is the risk of failure when their networks are overloaded, for instance during flood events or thunderstorms. It is therefore also necessary to be able to transmit the measured values via communication satellites when terrestrial infrastructure is not available. LEO satellites are especially useful in alpine regions because they have no dead spots, only occasional latency periods. In this work we combined in-situ measurements (precipitation, electrical conductivity, discharge, water temperature, spectral absorption coefficient, turbidity) with time increments from 1 to 15 minutes with data from the different sampling tiers (environmental isotopes, and chemical, mineralogical and bacteriological data).

  6. Audio Spectrogram Representations for Processing with Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Wyse, L.

    2017-05-01

    One of the decisions that arise when designing a neural network for any application is how the data should be represented in order to be presented to, and possibly generated by, a neural network. For audio, the choice is less obvious than it seems to be for visual images, and a variety of representations have been used for different applications including the raw digitized sample stream, hand-crafted features, machine discovered features, MFCCs and variants that include deltas, and a variety of spectral representations. This paper reviews some of these representations and issues that arise, focusing particularly on spectrograms for generating audio using neural networks for style transfer.
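
    As a concrete illustration of one of the representations discussed, the short sketch below computes a log-magnitude spectrogram with SciPy before it would be fed to a network; the FFT size, hop length, and the toy 440 Hz test tone are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np
from scipy import signal

def log_spectrogram(x, fs, n_fft=1024, hop=256):
    """Log-magnitude spectrogram of a mono signal x sampled at fs Hz."""
    f, t, sxx = signal.spectrogram(x, fs=fs, nperseg=n_fft,
                                   noverlap=n_fft - hop, mode="magnitude")
    return f, t, np.log1p(sxx)          # compress dynamic range before network input

# toy usage: a 1 s, 440 Hz tone sampled at 16 kHz
fs = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
f, t, S = log_spectrogram(tone, fs)     # S has shape (frequency bins, time frames)
```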

  7. Doctors' opinion on the contribution of coordination mechanisms to improving clinical coordination between primary and outpatient secondary care in the Catalan national health system.

    PubMed

    Aller, Marta-Beatriz; Vargas, Ingrid; Coderch, Jordi; Vázquez, Maria-Luisa

    2017-12-22

    Clinical coordination is considered a health policy priority as its absence can lead to poor quality of care and inefficiency. A key challenge is to identify which strategies should be implemented to improve coordination. The aim is to analyse doctors' opinions on the contribution of mechanisms to improving clinical coordination between primary and outpatient secondary care and the factors influencing their use. A qualitative descriptive study was conducted in three healthcare networks of the Catalan national health system. A two-stage theoretical sample was designed: in the first stage, networks with different management models were selected; in the second, primary care (n = 26) and secondary care (n = 24) doctors. Data were collected using semi-structured interviews. Final sample size was reached by saturation. A thematic content analysis was conducted, segmented by network and care level. With few differences across networks, doctors identified similar mechanisms contributing to clinical coordination: 1) shared EMR facilitating clinical information transfer and uptake; 2) mechanisms enabling problem-solving communication and agreement on clinical approaches, which varied across networks (joint clinical case conferences, which also promote mutual knowledge and training of primary care doctors; virtual consultations through EMR and email); and 3) referral protocols and use of the telephone facilitating access to secondary care after referrals. Doctors identified organizational (insufficient time, incompatible timetables, design of mechanisms) and professional factors (knowing each other, attitude towards collaboration, concerns over misdiagnosis) that influence the use of mechanisms. The mechanisms that most contribute to clinical coordination are feedback mechanisms, that is, those based on mutual adjustment, which allow doctors to exchange information and communicate. Their use might be enhanced by focusing on adequate working conditions, mechanism design, and creating conditions that promote mutual knowledge and positive attitudes towards collaboration.

  8. Design of the primary pre-TRMM and TRMM ground truth site

    NASA Technical Reports Server (NTRS)

    Garstang, Michael

    1988-01-01

    The primary objectives of the Tropical Rain Measuring Mission (TRMM) were to: integrate the rain gage measurements with radar measurements of rainfall using the KSFC/Patrick digitized radar and associated rainfall network; delineate the major rain bearing systems over Florida using the Weather Service reported radar/rainfall distributions; combine the integrated measurements with the delineated rain bearing systems; use the results of the combined measurements and delineated rain bearing systems to represent patterns of rainfall which actually exist and contribute significantly to the rainfall, to test sampling strategies and, based on the results of these analyses, decide upon the ground truth network; and complete the design begun in Phase 1 of a multi-scale (space and time) surface observing precipitation network centered upon KSFC. Work accomplished and in progress is discussed.

  9. An overview of the Columbia Habitat Monitoring Program's (CHaMP) spatial-temporal design framework

    EPA Science Inventory

    We briefly review the concept of a master sample applied to stream networks in which a randomized set of stream sites is selected across a broad region to serve as a list of sites from which a subset of sites is selected to achieve multiple objectives of specific designs. The Col...

  10. UCLA High Speed, High Volume Laboratory Network for Infectious Diseases. Addendum

    DTIC Science & Technology

    2009-08-01

    s) and should not be construed as an official Department of the Army position, policy or decision unless so designated by other documentation... Design : Because of current public health and national security threats, influenza surveillance and analysis will be the initial focus. In the upcoming...throughput and automated systems will enable processing of tens of thousands of samples and provide critical laboratory capacity. Its overall design and

  11. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
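
    The MaxEnt workflow itself is not reproduced here, but the iterative "pick the most environmentally dissimilar remaining site" step can be approximated with a simple greedy max-min selection over standardized environmental variables, as in the sketch below; the function and the toy candidate grid are illustrative assumptions, not the authors' code.

```python
import numpy as np

def greedy_dissimilar_sites(env, n_sites):
    """Greedily pick n_sites rows of `env` (sites x standardized environmental
    variables) so that each new site is maximally dissimilar from the current set."""
    env = (env - env.mean(axis=0)) / env.std(axis=0)           # standardize the factors
    chosen = [int(np.argmax(np.linalg.norm(env, axis=1)))]      # seed with the most extreme site
    for _ in range(n_sites - 1):
        d = np.min(np.linalg.norm(env[:, None, :] - env[chosen][None, :, :], axis=2), axis=1)
        d[chosen] = -np.inf                                      # never re-pick a selected site
        chosen.append(int(np.argmax(d)))                         # max-min distance criterion
    return chosen

# toy usage: 500 candidate 20 km x 20 km sites described by 4 environmental factors
rng = np.random.default_rng(0)
sites = greedy_dissimilar_sites(rng.normal(size=(500, 4)), n_sites=8)
```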

  12. Observer-based output feedback control of networked control systems with non-uniform sampling and time-varying delay

    NASA Astrophysics Data System (ADS)

    Meng, Su; Chen, Jie; Sun, Jian

    2017-10-01

    This paper investigates the problem of observer-based output feedback control for networked control systems with non-uniform sampling and time-varying transmission delay. The sampling intervals are assumed to vary within a given interval. The transmission delay belongs to a known interval. A discrete-time model is first established, which contains time-varying delay and norm-bounded uncertainties coming from non-uniform sampling intervals. It is then converted to an interconnection of two subsystems in which the forward channel is delay-free. The scaled small gain theorem is used to derive the stability condition for the closed-loop system. Moreover, the observer-based output feedback controller design method is proposed by utilising a modified cone complementary linearisation algorithm. Finally, numerical examples illustrate the validity and superiority of the proposed method.

  13. Convolutional neural networks based on augmented training samples for synthetic aperture radar target recognition

    NASA Astrophysics Data System (ADS)

    Yan, Yue

    2018-03-01

    A synthetic aperture radar (SAR) automatic target recognition (ATR) method based on a convolutional neural network (CNN) trained with augmented training samples is proposed. To enhance the robustness of the CNN to various extended operating conditions (EOCs), the original training images are used to generate noisy samples at different signal-to-noise ratios (SNRs), multiresolution representations, and partially occluded images. Then, the generated images together with the original ones are used to train a designed CNN for target recognition. The augmented training samples correspondingly improve the robustness of the trained CNN to the covered EOCs, i.e., noise corruption, resolution variance, and partial occlusion. Moreover, the significantly larger training set effectively enhances the representation capability for other conditions, e.g., the standard operating condition (SOC), as well as the stability of the network. Therefore, better performance can be achieved by the proposed method for SAR ATR. For experimental evaluation, extensive experiments are conducted on the Moving and Stationary Target Acquisition and Recognition dataset under SOC and several typical EOCs.
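
    A minimal sketch of the kind of training-set augmentation described (noise at a target SNR, a coarser-resolution version, and a partial occlusion) is given below using NumPy; the block-averaging, the vertical occlusion band, and all parameter values are assumptions for illustration rather than the paper's exact procedure.

```python
import numpy as np

def augment_chip(img, snr_db=10, scale=2, occlusion_frac=0.25, rng=None):
    """Return noisy, lower-resolution, and partially occluded variants of one training chip."""
    rng = np.random.default_rng() if rng is None else rng

    # 1) additive noise at a target SNR (in dB)
    sig_power = np.mean(img ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noisy = img + rng.normal(0.0, np.sqrt(noise_power), img.shape)

    # 2) coarser-resolution representation: block-average, then repeat back up in size
    h, w = img.shape
    low = img[:h - h % scale, :w - w % scale].reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    lowres = np.repeat(np.repeat(low, scale, axis=0), scale, axis=1)

    # 3) partial occlusion: zero out a random vertical band of the chip
    occluded = img.copy()
    band = int(w * occlusion_frac)
    start = rng.integers(0, w - band)
    occluded[:, start:start + band] = 0.0

    return noisy, lowres, occluded
```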

  14. Converging Redundant Sensor Network Information for Improved Building Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dale Tiller; D. Phil; Gregor Henze

    2007-09-30

    This project investigated the development and application of sensor networks to enhance building energy management and security. Commercial, industrial and residential buildings often incorporate systems used to determine occupancy, but current sensor technology and control algorithms limit the effectiveness of these systems. For example, most of these systems rely on single monitoring points to detect occupancy, when more than one monitoring point could improve system performance. Phase I of the project focused on instrumentation and data collection. During the initial project phase, a new occupancy detection system was developed, commissioned and installed in a sample of private offices and open-plan office workstations. Data acquisition systems were developed and deployed to collect data on space occupancy profiles. Phase II of the project demonstrated that a network of several sensors provides a more accurate measure of occupancy than is possible using systems based on single monitoring points. This phase also established that analysis algorithms could be applied to the sensor network data stream to improve the accuracy of system performance in energy management and security applications. In Phase III of the project, the sensor network from Phase I was complemented by a control strategy developed based on the results from the first two project phases: this controller was implemented in a small sample of work areas, and applied to lighting control. Two additional technologies were developed in the course of completing the project. A prototype web-based display that portrays the current status of each detector in a sensor network monitoring building occupancy was designed and implemented. A new capability that enables occupancy sensors in a sensor network to dynamically set the 'time delay' interval based on ongoing occupant behavior in the space was also designed and implemented.

  15. Estuarine water quality in parks of the Northeast Coastal and Barrier Network: Development and early implementation of vital signs estuarine nutrient-enrichment monitoring, 2003-06

    USGS Publications Warehouse

    Kopp, Blaine S.; Nielsen, Martha; Glisic, Dejan; Neckles, Hilary A.

    2009-01-01

    This report documents results of pilot tests of a protocol for monitoring estuarine nutrient enrichment for the Vital Signs Monitoring Program of the National Park Service Northeast Coastal and Barrier Network. Data collected from four parks during protocol development in 2003-06 are presented: Gateway National Recreation Area, Colonial National Historic Park, Fire Island National Seashore, and Assateague Island National Seashore. The monitoring approach incorporates several spatial and temporal designs to address questions at a hierarchy of scales. Indicators of estuarine response to nutrient enrichment were sampled using a probability design within park estuaries during a late-summer index period. Monitoring variables consisted of dissolved-oxygen concentration, chlorophyll a concentration, water temperature, salinity, attenuation of downwelling photosynthetically available radiation (PAR), and turbidity. The statistical sampling design allowed the condition of unsampled locations to be inferred from the distribution of data from a set of randomly positioned "probability" stations. A subset of sampling stations was sampled repeatedly during the index period, and stations were not rerandomized in subsequent years. These "trend stations" allowed us to examine temporal variability within the index period, and to improve the sensitivity of the monitoring protocol to detecting change through time. Additionally, one index site in each park was equipped for continuous monitoring throughout the index period. Thus, the protocol includes elements of probabilistic and targeted spatial sampling, and the temporal intensity ranges from snapshot assessments to continuous monitoring.

  16. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng

    2015-01-01

    In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameters identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identifications in groundwater.
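
    The abstract's criterion (expected relative entropy between prior and posterior) can be estimated for a candidate sampling location with a standard nested Monte Carlo estimator of expected information gain. The sketch below assumes a scalar concentration prediction, Gaussian measurement noise, and a user-supplied `forward` model; all of these stand in for the surrogate-accelerated transport model of the paper.

```python
import numpy as np

def expected_information_gain(forward, prior_samples, design, noise_sd, rng=None):
    """Nested Monte Carlo estimate of the expected relative entropy (information gain)
    for one candidate design (e.g. a sampling-well location).

    forward(theta, design) -> predicted concentration for parameters theta at that design.
    """
    rng = np.random.default_rng() if rng is None else rng
    preds = np.array([forward(th, design) for th in prior_samples])    # model predictions
    y = preds + rng.normal(0.0, noise_sd, size=preds.shape)            # simulated noisy data
    # log-likelihood of each simulated datum under its own parameters ...
    log_like = -0.5 * ((y - preds) / noise_sd) ** 2 - np.log(noise_sd * np.sqrt(2 * np.pi))
    # ... and the log prior-predictive (evidence), averaged over all prior draws
    diff = y[:, None] - preds[None, :]
    log_evid = -0.5 * (diff / noise_sd) ** 2 - np.log(noise_sd * np.sqrt(2 * np.pi))
    log_evidence = np.log(np.mean(np.exp(log_evid), axis=1))
    return float(np.mean(log_like - log_evidence))                     # estimated gain for this design

# the candidate design with the largest estimated gain would be selected as the sampling location
```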

  17. 40 CFR Appendix N to Part 50 - Interpretation of the National Ambient Air Quality Standards for PM2.5

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... monitors utilize the same specific sampling and analysis method. Combined site data record is the data set... monitors are suitable monitors designated by a state or local agency in their annual network plan (and in... appendix. Seasonal sampling is the practice of collecting data at a reduced frequency during a season of...

  18. Simulation of Wind Profile Perturbations for Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    2004-01-01

    Ideally, a statistically representative sample of measured high-resolution wind profiles with wavelengths as small as tens of meters is required in design studies to establish aerodynamic load indicator dispersions and vehicle control system capability. At most potential launch sites, high-resolution wind profiles may not exist. Representative samples of Rawinsonde wind profiles to altitudes of 30 km are more likely to be available from the extensive network of measurement sites established for routine sampling in support of weather observing and forecasting activity. Such a sample, large enough to be statistically representative of relatively large wavelength perturbations, would be inadequate for launch vehicle design assessments because the Rawinsonde system accurately measures wind perturbations with wavelengths no smaller than 2000 m (1000 m altitude increment). The Kennedy Space Center (KSC) Jimsphere wind profiles (150/month and seasonal 2 and 3.5-hr pairs) are the only adequate samples of high-resolution profiles (approx. 150 to 300 m effective resolution, but over-sampled at 25 m intervals) that have been used extensively for launch vehicle design assessments. Therefore, a simulation process has been developed for enhancement of measured low-resolution Rawinsonde profiles that would be applicable in preliminary launch vehicle design studies at launch sites other than KSC.

  19. ACS sampling system: design, implementation, and performance evaluation

    NASA Astrophysics Data System (ADS)

    Di Marcantonio, Paolo; Cirami, Roberto; Chiozzi, Gianluca

    2004-09-01

    By means of ACS (ALMA Common Software) framework we designed and implemented a sampling system which allows sampling of every Characteristic Component Property with a specific, user-defined, sustained frequency limited only by the hardware. Collected data are sent to various clients (one or more Java plotting widgets, a dedicated GUI or a COTS application) using the ACS/CORBA Notification Channel. The data transport is optimized: samples are cached locally and sent in packets with a lower and user-defined frequency to keep network load under control. Simultaneous sampling of the Properties of different Components is also possible. Together with the design and implementation issues we present the performance of the sampling system evaluated on two different platforms: on a VME based system using VxWorks RTOS (currently adopted by ALMA) and on a PC/104+ embedded platform using Red Hat 9 Linux operating system. The PC/104+ solution offers, as an alternative, a low cost PC compatible hardware environment with free and open operating system.
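
    The caching behaviour described (sample at a high, user-defined rate; send cached packets to clients at a lower rate to keep network load under control) is sketched below in plain Python; the class, its callbacks, and the periods are illustrative placeholders and are not the ACS sampling API.

```python
import time

class BufferedSampler:
    """Sample a property at a high rate but publish cached batches at a lower rate."""

    def __init__(self, read_value, publish, sample_period=0.01, flush_period=1.0):
        self.read_value = read_value        # callable returning the current property value
        self.publish = publish              # callable sending one packet (list of (t, value)) to clients
        self.sample_period = sample_period
        self.flush_period = flush_period
        self._buffer = []

    def run(self, duration):
        start = last_flush = time.monotonic()
        while time.monotonic() - start < duration:
            now = time.monotonic()
            self._buffer.append((now, self.read_value()))   # cache the sample locally
            if now - last_flush >= self.flush_period:       # ship one packet, keeping network load bounded
                self.publish(self._buffer)
                self._buffer = []
                last_flush = now
            time.sleep(self.sample_period)
        if self._buffer:
            self.publish(self._buffer)                       # flush the tail

# usage with placeholder callables: BufferedSampler(sensor.read, channel.push, 0.01, 1.0).run(10.0)
```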

  20. An advanced real-time digital signal processing system for linear systems emulation, with special emphasis on network and acoustic response characterization

    NASA Astrophysics Data System (ADS)

    Gaydecki, Patrick; Fernandes, Bosco

    2003-11-01

    A fast digital signal processing (DSP) system is described that can perform real-time emulation of a wide variety of linear audio-bandwidth systems and networks, such as reverberant spaces, musical instrument bodies and very high order filter networks. The hardware design is based upon a Motorola DSP56309 operating at 110 million multiplication-accumulations per second and a dual-channel 24 bit codec with a maximum sampling frequency of 192 kHz. High level software has been developed to express complex vector frequency responses as both infinite impulse response (IIR) and finite impulse response (FIR) coefficients, in a form suitable for real-time convolution by the firmware installed in the DSP system memory. An algorithm has also been devised to express IIR filters as equivalent FIR structures, thereby obviating the potential instabilities associated with recursive equations and negating the traditional deficiencies of FIR filters with respect to equivalent analogue designs. The speed and dynamic range of the system are such that, when sampling at 48 kHz, the frequency response can be specified to a spectral precision of 22 Hz; when sampling at 10 kHz, this resolution increases to 0.9 Hz. Moreover, it is also possible to control the phase of any frequency band with a theoretical precision of 10^-5 degrees in all cases. The system has been applied in the study of analogue filter networks, real-time Hilbert transformation, phase-shift systems and musical instrument body emulation, where it is providing valuable new insights into the understanding of psychoacoustic mechanisms.
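
    The idea of expressing an IIR response as an equivalent FIR structure can be illustrated by truncating the IIR filter's impulse response and using it directly as FIR coefficients. The sketch below does this with SciPy for an arbitrary 4th-order low-pass, with the tap count loosely motivated by the arithmetic budget quoted in the abstract; none of the filter specifics come from the paper.

```python
import numpy as np
from scipy import signal

fs = 48000                                   # sampling frequency (Hz)
b, a = signal.butter(4, 1000, fs=fs)         # example 4th-order IIR low-pass at 1 kHz

# impulse response of the IIR filter, truncated once it has decayed to a negligible level
n_taps = 2048                                # roughly the tap budget implied by 110 MMAC/s at 48 kHz
impulse = np.zeros(n_taps)
impulse[0] = 1.0
fir_taps = signal.lfilter(b, a, impulse)     # the truncated impulse response *is* the FIR coefficient set

# the FIR version is unconditionally stable and can be convolved in real time
x = np.random.randn(fs)                      # 1 s of test signal
y_iir = signal.lfilter(b, a, x)
y_fir = np.convolve(x, fir_taps)[:len(x)]
print(np.max(np.abs(y_iir - y_fir)))         # small if the impulse response decays within n_taps
```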

  1. Establishing the ACORN National Practitioner Database: Strategies to Recruit Practitioners to a National Practice-Based Research Network.

    PubMed

    Adams, Jon; Steel, Amie; Moore, Craig; Amorin-Woods, Lyndon; Sibbritt, David

    2016-10-01

    The purpose of this paper is to report on the recruitment and promotion strategies employed by the Australian Chiropractic Research Network (ACORN) project aimed at helping recruit a substantial national sample of participants and to describe the features of our practice-based research network (PBRN) design that may provide key insights to others looking to establish a similar network or draw on the ACORN project to conduct sub-studies. The ACORN project followed a multifaceted recruitment and promotion strategy drawing on distinct branding, a practitioner-focused promotion campaign, and a strategically designed questionnaire and distribution/recruitment approach to attract sufficient participation from the ranks of registered chiropractors across Australia. From the 4684 chiropractors registered at the time of recruitment, the project achieved a database response rate of 36% (n = 1680), resulting in a large, nationally representative sample across age, gender, and location. This sample constitutes the largest proportional coverage of participants from any voluntary national PBRN across any single health care profession. It does appear that a number of key promotional and recruitment features of the ACORN project may have helped establish the high response rate for the PBRN, which constitutes an important sustainable resource for future national and international efforts to grow the chiropractic evidence base and research capacity. Further rigorous enquiry is needed to help evaluate the direct contribution of specific promotional and recruitment strategies in attaining high response rates from practitioner populations who may be invited to participate in future PBRNs. Copyright © 2016. Published by Elsevier Inc.

  2. Optimizing Observation Networks Combining Ships of Opportunity, Gliders, Moored Buoys and FerryBox in the Bay of Biscay and English Channel

    NASA Astrophysics Data System (ADS)

    Charria, G.; Lamouroux, J.; De Mey, P. J.; Raynaud, S.; Heyraud, C.; Craneguy, P.; Dumas, F.; Le Henaff, M.

    2016-02-01

    Designing optimal observation networks in coastal oceans remains one of the major challenges towards the implementation of future Integrated Ocean Observing Systems to monitor the coastal environment. In the Bay of Biscay and the English Channel, the diversity of involved processes requires adapting observing systems to the specific targeted environments. Also important is the requirement for those systems to sustain coastal applications. An efficient way to measure the hydrological content of the water column over the continental shelf is to consider ships of opportunity. In the French observation strategy, the RECOPESCA program, as a component of the High frequency Observation network for the environment in coastal SEAs (HOSEA), aims to collect environmental observations from sensors attached to fishing nets. In the present study, we assess that network's performance using the ArM method (Le Hénaff et al., 2009). A reference network, based on fishing vessel observations in 2008, is assessed using that method. Moreover, three scenarios are also analyzed, based on the reference network, a denser network in 2010, and a fictive network aggregated from a pluri-annual collection of profiles. Two other observational network design experiments have been implemented for the spring season in two regions: 1) the Loire River plume (northern part of the Bay of Biscay) to explore different possible glider endurance lines combined with a fixed mooring to monitor temperature and salinity and 2) the Western English Channel using a glider below FerryBox measurements. These experiments combining existing and future observing systems, as well as numerical ensemble simulations, highlight the key issue of monitoring the whole water column in and close to river plumes (e.g. using gliders), the efficiency of the surface high frequency sampling from FerryBoxes in macrotidal regions and the importance of sampling key regions instead of increasing the number of Voluntary Observing Ships.

  3. Stability and performance of propulsion control systems with distributed control architectures and failures

    NASA Astrophysics Data System (ADS)

    Belapurkar, Rohit K.

    Future aircraft engine control systems will be based on a distributed architecture, in which the sensors and actuators will be connected to the Full Authority Digital Engine Control (FADEC) through an engine area network. Distributed engine control architecture will allow the implementation of advanced, active control techniques along with achieving weight reduction, improvement in performance and lower life cycle cost. The performance of a distributed engine control system is predominantly dependent on the performance of the communication network. Due to the serial data transmission policy, network-induced time delays and sampling jitter are introduced between the sensor/actuator nodes and the distributed FADEC. Communication network faults and transient node failures may result in data dropouts, which may not only degrade the control system performance but may even destabilize the engine control system. Three different architectures for a turbine engine control system based on a distributed framework are presented. A partially distributed control system for a turbo-shaft engine is designed based on the ARINC 825 communication protocol. Stability conditions and a control design methodology are developed for the proposed partially distributed turbo-shaft engine control system to guarantee the desired performance in the presence of network-induced time delay and random data loss due to transient sensor/actuator failures. A fault tolerant control design methodology is proposed to benefit from the availability of additional system bandwidth and from the broadcast feature of the data network. It is shown that a reconfigurable fault tolerant control design can help to reduce the performance degradation in the presence of node failures. A T-700 turbo-shaft engine model is used to validate the proposed control methodology based on both single input and multiple-input multiple-output control design techniques.

  4. Feasibility of Recruiting a Diverse Sample of Men Who Have Sex with Men: Observation from Nanjing, China

    PubMed Central

    Tang, Weiming; Yang, Haitao; Mahapatra, Tanmay; Huan, Xiping; Yan, Hongjing; Li, Jianjun; Fu, Gengfeng; Zhao, Jinkou; Detels, Roger

    2013-01-01

    Background Respondent-driven sampling (RDS) has been well recognized as a method for sampling from hard-to-reach populations such as commercial sex workers, drug users and men who have sex with men. However, the feasibility of this sampling strategy for recruiting a diverse spectrum of these hidden populations is not yet well understood in developing countries. Methods In a cross-sectional study in Nanjing city of Jiangsu province, China, 430 MSM, including 9 seeds, were recruited over a 14-week study period using RDS. Information regarding socio-demographic characteristics and sexual risk behavior was collected and testing was done for HIV and syphilis. Duration, completion, participant characteristics and the equilibrium of key factors were used for assessing the feasibility of RDS. Homophily of key variables, socio-demographic distribution and social network size were used as the indicators of diversity. Results In the study sample, adjusted HIV and syphilis prevalence were 6.6% and 14.6%, respectively. The majority (96.3%) of the participants were recruited by members of their own social network. Although there was a tendency for recruitment within the same self-identified group (homosexuals recruited 60.0% homosexuals), considerable cross-group recruitment (bisexuals recruited 52.3% homosexuals) was also seen. Homophily of the self-identified sexual orientations was 0.111 for homosexuals. Upon completion of the recruitment process, participant characteristics and the equilibrium of key factors indicated that RDS was feasible for sampling MSM in Nanjing. Participants recruited by RDS were found to be diverse after assessing the homophily of key variables in successive waves of recruitment, the proportion of characteristics after reaching equilibrium and the social network size. The observed design effects were nearly the same as or even better than the theoretical design effect of 2. Conclusion RDS was found to be an efficient and feasible sampling method for recruiting a diverse sample of MSM in a reasonable time. PMID:24244280

  5. Electromagnetic Design of a Magnetically-Coupled Spatial Power Combiner

    NASA Technical Reports Server (NTRS)

    Bulcha, B.; Cataldo, G.; Stevenson, T. R.; U-Yen, K.; Moseley, S. H.; Wollack, E. J.

    2017-01-01

    The design of a two-dimensional beam-combining network employing a parallel-plate superconducting waveguide with a mono-crystalline silicon dielectric is presented. This novel beam-combining network structure employs an array of magnetically coupled antenna elements to achieve high coupling efficiency and full sampling of the intensity distribution while avoiding diffractive losses in the multi-mode region defined by the parallel-plate waveguide. These attributes enable the structure's use in realizing compact far-infrared spectrometers for astrophysical and instrumentation applications. When configured with a suitable corporate-feed power-combiner, this fully sampled array can be used to realize a low-sidelobe apodized response without incurring a reduction in coupling efficiency. To control undesired reflections over a wide range of angles in the finite-sized parallel-plate waveguide region, a wideband meta-material electromagnetic absorber structure is implemented. This adiabatic structure absorbs greater than 99% of the power over the 1.7:1 operational band at angles ranging from normal (0 degree) to near-parallel (180 degree) incidence. Design, simulations, and application of the device will be presented.

  6. Computer aided lung cancer diagnosis with deep learning algorithms

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Zheng, Bin; Qian, Wei

    2016-03-01

    Deep learning is considered a popular and powerful method in pattern recognition and classification. However, there are not many deep structured applications in the medical imaging diagnosis area, because large datasets are not always available for medical images. In this study we tested the feasibility of using deep learning algorithms for lung cancer diagnosis with cases from the Lung Image Database Consortium (LIDC) database. The nodules on each computed tomography (CT) slice were segmented according to marks provided by the radiologists. After down-sampling and rotating we acquired 174,412 samples of 52 by 52 pixels each and the corresponding truth files. Three deep learning algorithms were designed and implemented, including the Convolutional Neural Network (CNN), Deep Belief Networks (DBNs), and the Stacked Denoising Autoencoder (SDAE). To compare the performance of the deep learning algorithms with a traditional computer-aided diagnosis (CADx) system, we designed a scheme with 28 image features and a support vector machine. The accuracies of CNN, DBNs, and SDAE are 0.7976, 0.8119, and 0.7929, respectively; the accuracy of our designed traditional CADx is 0.7940, which is slightly lower than CNN and DBNs. We also noticed that the nodules mislabeled by DBNs are 4% larger than those mislabeled by the traditional CADx; this might result from the down-sampling process losing some size information about the nodules.

  7. Dynamic Network Logistic Regression: A Logistic Choice Analysis of Inter- and Intra-Group Blog Citation Dynamics in the 2004 US Presidential Election

    PubMed Central

    2013-01-01

    Methods for analysis of network dynamics have seen great progress in the past decade. This article shows how Dynamic Network Logistic Regression techniques (a special case of the Temporal Exponential Random Graph Models) can be used to implement decision theoretic models for network dynamics in a panel data context. We also provide practical heuristics for model building and assessment. We illustrate the power of these techniques by applying them to a dynamic blog network sampled during the 2004 US presidential election cycle. This is a particularly interesting case because it marks the debut of Internet-based media such as blogs and social networking web sites as institutionally recognized features of the American political landscape. Using a longitudinal sample of all Democratic National Convention/Republican National Convention–designated blog citation networks, we are able to test the influence of various strategic, institutional, and balance-theoretic mechanisms as well as exogenous factors such as seasonality and political events on the propensity of blogs to cite one another over time. Using a combination of deviance-based model selection criteria and simulation-based model adequacy tests, we identify the combination of processes that best characterizes the choice behavior of the contending blogs. PMID:24143060

  8. Quantitative design of emergency monitoring network for river chemical spills based on discrete entropy theory.

    PubMed

    Shi, Bin; Jiang, Jiping; Sivakumar, Bellie; Zheng, Yi; Wang, Peng

    2018-05-01

    Field monitoring strategy is critical for disaster preparedness and watershed emergency environmental management. However, developing such a strategy is also highly challenging. Despite the efforts and progress thus far, no definitive guidelines or solutions are available worldwide for quantitatively designing a monitoring network in response to river chemical spill incidents, except general rules based on administrative divisions or arbitrary interpolation on routine monitoring sections. To address this gap, a novel framework for spatial-temporal network design was proposed in this study. The framework combines contaminant transport modelling with discrete entropy theory and spectral analysis. The water quality model was applied to forecast the spatio-temporal distribution of contaminants after spills, and the corresponding information transfer indexes (ITIs) and Fourier approximation periodic functions were then estimated as critical measures for setting sampling locations and times. The results indicate that the framework can produce scientific emergency-monitoring preparedness plans based on scenario analysis of spill risks, as well as rapid designs when an incident occurs that had not been planned for. The framework was applied to a hypothetical spill case based on a tracer experiment and a real nitrobenzene spill incident case to demonstrate its suitability and effectiveness. The newly-designed temporal-spatial monitoring network captured major pollution information at relatively low costs. It showed obvious benefits for follow-up early-warning and treatment as well as for aftermath recovery and assessment. The underlying drivers of ITIs as well as the limitations and uncertainty of the approach were analyzed based on the case studies. Comparison with existing monitoring network design approaches, management implications, and generalized applicability were also discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.
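
    The information transfer index itself is not defined in the abstract; as a rough stand-in for an entropy-based measure of how informative a candidate station is, the sketch below computes a discrete (histogram-based) mutual information between two simulated concentration series. The binning, the lagging idea in the usage comment, and the function name are assumptions, not the paper's formulation.

```python
import numpy as np

def discrete_mutual_information(x, y, bins=10):
    """Mutual information (bits) between two series after histogram discretization."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# rank candidate stations by how much information their (lagged) concentration series
# carries about the series at the spill site, e.g.
# scores = {s: discrete_mutual_information(c_source[:-lag], c_station[s][lag:]) for s in stations}
```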

  9. An historically consistent and broadly applicable MRV system based on LiDAR sampling and Landsat time-series

    Treesearch

    W. Cohen; H. Andersen; S. Healey; G. Moisen; T. Schroeder; C. Woodall; G. Domke; Z. Yang; S. Stehman; R. Kennedy; C. Woodcock; Z. Zhu; J. Vogelmann; D. Steinwand; C. Huang

    2014-01-01

    The authors are developing a REDD+ MRV system that tests different biomass estimation frameworks and components. Design-based inference from a costly field plot network was compared to sampling with LiDAR strips and a smaller set of plots in combination with Landsat for disturbance monitoring. Biomass estimation uncertainties associated with these different data sets...

  10. Cone penetrometer testing and discrete-depth ground water sampling techniques: A cost-effective method of site characterization in a multiple-aquifer setting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemo, D.A.; Pierce, Y.G.; Gallinatti, J.D.

    Cone penetrometer testing (CPT), combined with discrete-depth ground water sampling methods, can significantly reduce the time and expense required to characterize large sites that have multiple aquifers. Results from the screening site characterization can then be used to design and install a cost-effective monitoring well network. At a site in northern California, it was necessary to characterize the stratigraphy and the distribution of volatile organic compounds (VOCs). To expedite characterization, a five-week field screening program was implemented that consisted of a shallow ground water survey, CPT soundings and pore-pressure measurements, and discrete-depth ground water sampling. Based on continuous lithologic information provided by the CPT soundings, four predominantly coarse-grained, water yielding stratigraphic packages were identified. Seventy-nine discrete-depth ground water samples were collected using either shallow ground water survey techniques, the BAT Enviroprobe, or the QED HydroPunch I, depending on subsurface conditions. Using results from these efforts, a 20-well monitoring network was designed and installed to monitor critical points within each stratigraphic package. Good correlation was found for hydraulic head and chemical results between discrete-depth screening data and monitoring well data. Understanding the vertical VOC distribution and concentrations produced substantial time and cost savings by minimizing the number of permanent monitoring wells and reducing the number of costly conductor casings that had to be installed. Additionally, significant long-term cost savings will result from reduced sampling costs, because fewer wells comprise the monitoring network. The authors estimate these savings to be 50% for site characterization costs, 65% for site characterization time, and 60% for long-term monitoring costs.

  11. Space Network Time Distribution and Synchronization Protocol Development for Mars Proximity Link

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Gao, Jay L.; Mills, David

    2010-01-01

    Time distribution and synchronization in a deep space network are challenging due to long propagation delays, spacecraft movements, and relativistic effects. Further, the Network Time Protocol (NTP) designed for terrestrial networks may not work properly in space. In this work, we consider a time distribution protocol based on time message exchanges similar to the Network Time Protocol (NTP). We present the Proximity-1 Space Link Interleaved Time Synchronization (PITS) algorithm that can work with the CCSDS Proximity-1 Space Data Link Protocol. The PITS algorithm provides faster time synchronization via two-way time transfer over proximity links, improves scalability as the number of spacecraft increases, lowers the storage space requirement for collecting time samples, and is robust against packet loss and duplication through underlying protocol mechanisms.
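
    The abstract states that PITS builds on NTP-like time message exchange; the standard offset and round-trip delay computation from the four exchange timestamps is shown below as a worked example of that underlying arithmetic. It is not the PITS protocol itself, which additionally handles interleaving over the Proximity-1 link.

```python
def offset_and_delay(t1, t2, t3, t4):
    """NTP-style two-way time transfer.

    t1: request transmit time (local clock)   t2: request receive time (remote clock)
    t3: reply transmit time   (remote clock)  t4: reply receive time  (local clock)
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated remote-minus-local clock offset
    delay = (t4 - t1) - (t3 - t2)            # round-trip propagation plus processing delay
    return offset, delay

# example: a 10 s one-way light time and a remote clock running 0.5 s ahead
print(offset_and_delay(t1=0.0, t2=10.5, t3=10.6, t4=20.1))   # -> (0.5, 20.0)
```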

  12. Analyzing Feedback Control Systems

    NASA Technical Reports Server (NTRS)

    Bauer, Frank H.; Downing, John P.

    1987-01-01

    Interactive controls analysis (INCA) program developed to provide user-friendly environment for design and analysis of linear control systems, primarily feedback control. Designed for use with both small- and large-order systems. Using interactive-graphics capability, INCA user quickly plots root locus, frequency response, or time response of either continuous-time system or sampled-data system. Configuration and parameters easily changed, allowing user to design compensation networks and perform sensitivity analyses in very convenient manner. Written in Pascal and FORTRAN.

  13. Bias in groundwater samples caused by wellbore flow

    USGS Publications Warehouse

    Reilly, Thomas E.; Franke, O. Lehn; Bennett, Gordon D.

    1989-01-01

    Proper design of physical installations and sampling procedures for groundwater monitoring networks is critical for the detection and analysis of possible contaminants. Monitoring networks associated with known contaminant sources sometimes include an array of monitoring wells with long well screens. The purpose of this paper is: (a) to report the results of a numerical experiment indicating that significant borehole flow can occur within long well screens installed in homogeneous aquifers with very small head differences in the aquifer (less than 0.01 feet between the top and bottom of the screen); (b) to demonstrate that contaminant monitoring wells with long screens may completely fail to fulfill their purpose in many groundwater environments.

  14. Dynamic design of ecological monitoring networks for non-Gaussian spatio-temporal data

    USGS Publications Warehouse

    Wikle, C.K.; Royle, J. Andrew

    2005-01-01

    Many ecological processes exhibit spatial structure that changes over time in a coherent, dynamical fashion. This dynamical component is often ignored in the design of spatial monitoring networks. Furthermore, ecological variables related to processes such as habitat are often non-Gaussian (e.g. Poisson or log-normal). We demonstrate that a simulation-based design approach can be used in settings where the data distribution is from a spatio-temporal exponential family. The key random component in the conditional mean function from this distribution is then a spatio-temporal dynamic process. Given the computational burden of estimating the expected utility of various designs in this setting, we utilize an extended Kalman filter approximation to facilitate implementation. The approach is motivated by, and demonstrated on, the problem of selecting sampling locations to estimate July brood counts in the prairie pothole region of the U.S.

  15. Usage of the back-propagation method for alphabet recognition

    NASA Astrophysics Data System (ADS)

    Shaila Sree, R. N.; Eswaran, Kumar; Sundararajan, N.

    1999-03-01

    Artificial Neural Networks play a pivotal role in the field of Artificial Intelligence. They can be trained efficiently for a variety of tasks using different methods, of which the Back Propagation method is one. The paper studies the choice of various design parameters of a neural network for the Back Propagation method. The study shows that when these parameters are properly assigned, the training task of the net is greatly simplified. The character recognition problem has been chosen as a test case for this study. A sample space of different handwritten characters of the English alphabet was gathered. A neural net is finally designed taking many of the design aspects into consideration and trained for different styles of writing. Experimental results are reported and discussed. It has been found that an appropriate choice of the design parameters of the neural net for the Back Propagation method reduces the training time and improves the performance of the net.

  16. Assessment of habitat representation across a network of marine protected areas with implications for the spatial design of monitoring.

    PubMed

    Young, Mary; Carr, Mark

    2015-01-01

    Networks of marine protected areas (MPAs) are being adopted globally to protect ecosystems and supplement fisheries management. The state of California recently implemented a coast-wide network of MPAs, a statewide seafloor mapping program, and ecological characterizations of species and ecosystems targeted for protection by the network. The main goals of this study were to use these data to evaluate how well seafloor features, as proxies for habitats, are represented and replicated across an MPA network and how well ecological surveys representatively sampled fish habitats inside MPAs and adjacent reference sites. Seafloor data were classified into broad substrate categories (rock and sediment) and finer scale geomorphic classifications standard to marine classification schemes using surface analyses (slope, ruggedness, etc.) done on the digital elevation model derived from multibeam bathymetry data. These classifications were then used to evaluate the representation and replication of seafloor structure within the MPAs and across the ecological surveys. Both the broad substrate categories and the finer scale geomorphic features were proportionately represented for many of the classes with deviations of 1-6% and 0-7%, respectively. Within MPAs, however, representation of seafloor features differed markedly from original estimates, with differences ranging up to 28%. Seafloor structure in the biological monitoring design had mismatches between sampling in the MPAs and their corresponding reference sites and some seafloor structure classes were missed entirely. The geomorphic variables derived from multibeam bathymetry data for these analyses are known determinants of the distribution and abundance of marine species and for coastal marine biodiversity. Thus, analyses like those performed in this study can be a valuable initial method of evaluating and predicting the conservation value of MPAs across a regional network.

  17. Assessment of Habitat Representation across a Network of Marine Protected Areas with Implications for the Spatial Design of Monitoring

    PubMed Central

    Young, Mary; Carr, Mark

    2015-01-01

    Networks of marine protected areas (MPAs) are being adopted globally to protect ecosystems and supplement fisheries management. The state of California recently implemented a coast-wide network of MPAs, a statewide seafloor mapping program, and ecological characterizations of species and ecosystems targeted for protection by the network. The main goals of this study were to use these data to evaluate how well seafloor features, as proxies for habitats, are represented and replicated across an MPA network and how well ecological surveys representatively sampled fish habitats inside MPAs and adjacent reference sites. Seafloor data were classified into broad substrate categories (rock and sediment) and finer scale geomorphic classifications standard to marine classification schemes using surface analyses (slope, ruggedness, etc.) done on the digital elevation model derived from multibeam bathymetry data. These classifications were then used to evaluate the representation and replication of seafloor structure within the MPAs and across the ecological surveys. Both the broad substrate categories and the finer scale geomorphic features were proportionately represented for many of the classes with deviations of 1-6% and 0-7%, respectively. Within MPAs, however, representation of seafloor features differed markedly from original estimates, with differences ranging up to 28%. Seafloor structure in the biological monitoring design had mismatches between sampling in the MPAs and their corresponding reference sites and some seafloor structure classes were missed entirely. The geomorphic variables derived from multibeam bathymetry data for these analyses are known determinants of the distribution and abundance of marine species and for coastal marine biodiversity. Thus, analyses like those performed in this study can be a valuable initial method of evaluating and predicting the conservation value of MPAs across a regional network. PMID:25760858

  18. Prediction of near-surface soil moisture at large scale by digital terrain modeling and neural networks.

    PubMed

    Lavado Contador, J F; Maneta, M; Schnabel, S

    2006-10-01

    The capability of Artificial Neural Network models to forecast near-surface soil moisture at fine spatial resolution has been tested for a 99.5 ha watershed located in SW Spain, using several easy-to-obtain digital models of topographic and land cover variables as inputs and a series of soil moisture measurements as the training data set. The study methods were designed to determine the potential of the neural network model as a tool to gain insight into soil moisture distribution factors and to optimize the data sampling scheme by finding the optimum size of the training data set. Results suggest that the methods are efficient in forecasting soil moisture and in assessing the optimum number of field samples, and highlight the importance of the variables selected in explaining the final map obtained.

  19. Watershed-based survey designs

    USGS Publications Warehouse

    Detenbeck, N.E.; Cincotta, D.; Denver, J.M.; Greenlee, S.K.; Olsen, A.R.; Pitchford, A.M.

    2005-01-01

    Watershed-based sampling design and assessment tools help serve the multiple goals for water quality monitoring required under the Clean Water Act, including assessment of regional conditions to meet Section 305(b), identification of impaired water bodies or watersheds to meet Section 303(d), and development of empirical relationships between causes or sources of impairment and biological responses. Creation of GIS databases for hydrography, hydrologically corrected digital elevation models, and hydrologic derivatives such as watershed boundaries and upstream–downstream topology of subcatchments would provide a consistent seamless nationwide framework for these designs. The elements of a watershed-based sample framework can be represented either as a continuous infinite set defined by points along a linear stream network, or as a discrete set of watershed polygons. Watershed-based designs can be developed with existing probabilistic survey methods, including the use of unequal probability weighting, stratification, and two-stage frames for sampling. Case studies for monitoring of Atlantic Coastal Plain streams, West Virginia wadeable streams, and coastal Oregon streams illustrate three different approaches for selecting sites for watershed-based survey designs.

  20. SABRE: a method for assessing the stability of gene modules in complex tissues and subject populations.

    PubMed

    Shannon, Casey P; Chen, Virginia; Takhar, Mandeep; Hollander, Zsuzsanna; Balshaw, Robert; McManus, Bruce M; Tebbutt, Scott J; Sin, Don D; Ng, Raymond T

    2016-11-14

    Gene network inference (GNI) algorithms can be used to identify sets of coordinately expressed genes, termed network modules, from whole-transcriptome gene expression data. The identification of such modules has become a popular approach to systems biology, with important applications in translational research. Although diverse computational and statistical approaches have been devised to identify such modules, their performance behavior is still not fully understood, particularly in complex human tissues. Given human heterogeneity, one important question is how sensitive the outputs of these computational methods are to the input sample set, i.e., their stability. A related question is how this sensitivity depends on the size of the sample set. We describe here the SABRE (Similarity Across Bootstrap RE-sampling) procedure for assessing the stability of gene network modules using a re-sampling strategy, introduce a novel criterion for identifying stable modules, and demonstrate the utility of this approach in a clinically-relevant cohort, using two different gene network module discovery algorithms. The stability of modules increased as sample size increased and stable modules were more likely to be replicated in larger sets of samples. Random modules derived from permuted gene expression data were consistently unstable, as assessed by SABRE, and provide a useful baseline value for our proposed stability criterion. Gene module sets identified by different algorithms varied with respect to their stability, as assessed by SABRE. Finally, stable modules were more readily annotated in various curated gene set databases. The SABRE procedure and proposed stability criterion may provide guidance when designing systems biology studies in complex human disease and tissues.
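
    A simplified sketch of the resampling idea: re-derive modules on bootstrap resamples of the subjects and score their agreement with the full-data modules. Here module discovery is reduced to hierarchical clustering of gene-gene correlations and agreement is scored with the adjusted Rand index; both are stand-ins for the GNI algorithms and the similarity measure actually used by SABRE.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import adjusted_rand_score

def modules(expr, n_modules=5):
    """Cluster genes (columns of expr: samples x genes) into modules by correlation."""
    dist = 1.0 - np.corrcoef(expr.T)                      # gene-gene correlation distance
    np.fill_diagonal(dist, 0.0)
    z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(z, t=n_modules, criterion="maxclust")

def module_stability(expr, n_boot=50, n_modules=5, rng=None):
    """Mean agreement (adjusted Rand index) between full-data and bootstrap modules.

    Assumes every gene still varies within each resample (otherwise correlations are undefined).
    """
    rng = np.random.default_rng() if rng is None else rng
    reference = modules(expr, n_modules)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, expr.shape[0], expr.shape[0])   # resample subjects with replacement
        scores.append(adjusted_rand_score(reference, modules(expr[idx], n_modules)))
    return float(np.mean(scores))
```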

  1. Measuring Low Concentrations of Liquid Water in Soil

    NASA Technical Reports Server (NTRS)

    Buehler, Martin

    2009-01-01

    An apparatus has been developed for measuring the low concentrations of liquid water and ice in relatively dry soil samples. Designed as a prototype of instruments for measuring the liquid-water and ice contents of Lunar and Martian soils, the apparatus could also be applied similarly to terrestrial desert soils and sands. The apparatus is a special-purpose impedance spectrometer: Its design is based on the fact that the electrical behavior of a typical soil sample is well approximated by a network of resistors and capacitors in which resistances decrease and capacitances increase (and, hence, the magnitude of impedance decreases) with increasing water content.
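
    The resistor-capacitor picture can be illustrated with a single parallel RC element, Z = R / (1 + j*omega*R*C): as water content rises, R falls and C rises, so the impedance magnitude drops across the spectrum. The component values in the sketch below are invented purely to show the trend and do not come from the instrument description.

```python
import numpy as np

def parallel_rc_impedance(freq_hz, R, C):
    """Complex impedance of a parallel RC element: Z = R / (1 + j*omega*R*C)."""
    omega = 2 * np.pi * freq_hz
    return R / (1 + 1j * omega * R * C)

freqs = np.logspace(1, 6, 6)                                   # 10 Hz .. 1 MHz
dry = np.abs(parallel_rc_impedance(freqs, R=1e6, C=50e-12))    # drier soil: high R, low C
wet = np.abs(parallel_rc_impedance(freqs, R=1e4, C=500e-12))   # wetter soil: lower R, higher C
print(np.all(wet < dry))                                       # |Z| decreases with water content
```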

  2. Magnetoresistive immunosensor for the detection of Escherichia coli O157:H7 including a microfluidic network.

    PubMed

    Mujika, M; Arana, S; Castaño, E; Tijero, M; Vilares, R; Ruano-López, J M; Cruz, A; Sainz, L; Berganza, J

    2009-01-01

    A hand-held device has been designed for the immunomagnetic detection and quantification of the pathogen Escherichia coli O157:H7 in food and clinical samples. In this work, a technology to manufacture a Lab on a Chip that integrates a 3D microfluidic network with a microfabricated biosensor has been developed. With this aim, the sensing film optimization, the design of the microfluidic circuitry, the development of the biological protocols involved in the measurements and, finally, the packaging needed to carry out the assays in a safe and straightforward way have been completed. The biosensor is designed to be capable of detecting and quantifying small magnetic field variations caused by the presence of superparamagnetic beads bound to the antigens previously immobilized on the sensor surface via an antibody-antigen reaction. The giant magnetoresistive multilayer structure implemented as the sensing film consists of 20[Cu(5.10 nm)/Co(2.47 nm)] with a magnetoresistance of 3.20% at 235 Oe and a sensitivity of up to 0.06 Ω/Oe between 150 Oe and 230 Oe. Silicon nitride has been selected as the optimum sensor surface coating due to its suitability for antibody immobilization. In order to guide the biological samples towards the sensing area, a microfluidic network made of SU-8 photoresist has been included. Finally, a novel packaging design has been fabricated employing 3D stereolithographic techniques. The microchannels are connected to the outside using standard tubing. Hence, this packaging allows an easy replacement of the used devices.

  3. Analog hardware implementation of neocognitron networks

    NASA Astrophysics Data System (ADS)

    Inigo, Rafael M.; Bonde, Allen, Jr.; Holcombe, Bradford

    1990-08-01

    This paper deals with the analog implementation of neocognitron-based neural networks. All of Fukushima's and related work on the neocognitron is based on digital computer simulations. To take full advantage of the power of this network paradigm, an analog electronic approach is proposed. We first implemented a 6-by-6 sensor network with discrete analog components and fixed weights. The network was given weight values to recognize the characters U, L, and F. These characters are recognized regardless of their location on the sensor and with various levels of distortion and noise. The network performance has also shown an excellent correlation with software simulation results. Next we implemented a variable-weight network which can be trained to recognize simple patterns by means of self-organization. The adaptable weights were implemented with FETs configured as voltage-controlled resistors. To implement a variable weight there must be some type of "memory" to store the weight value and hold it while the value is reinforced or incremented. Two methods were evaluated: an analog sample-hold circuit and a digital storage scheme using binary counters. The latter is preferable for VLSI implementation because it uses standard components and does not require the use of capacitors. The analog design and implementation of these small-scale networks demonstrates the feasibility of implementing more complicated ANNs in electronic hardware. The circuits developed can also be designed for VLSI implementation.

  4. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce.

    PubMed

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we propose a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
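
    The abstract's core idea (PSO choosing the initial weights and thresholds before BP training) can be sketched without the Hadoop/MapReduce layer. The following minimal, hypothetical example uses NumPy only; the 8-6-1 network size, swarm settings, and toy data are assumptions for illustration, not the authors' configuration.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy binary-classification data (a stand-in for the image features).
      X = rng.normal(size=(200, 8))
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

      def unpack(w):                       # flat particle -> (W1, b1, W2, b2) of an 8-6-1 net
          W1 = w[:48].reshape(8, 6)
          b1 = w[48:54]
          W2 = w[54:60].reshape(6, 1)
          b2 = w[60:61]
          return W1, b1, W2, b2

      def loss(w):                         # mean squared classification error of the network
          W1, b1, W2, b2 = unpack(w)
          h = np.tanh(X @ W1 + b1)
          p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2).ravel()))
          return np.mean((p - y) ** 2)

      # Particle swarm optimization over the 61 weights and thresholds.
      n_particles, dim = 30, 61
      pos = rng.normal(scale=0.5, size=(n_particles, dim))
      vel = np.zeros_like(pos)
      pbest, pbest_f = pos.copy(), np.array([loss(p) for p in pos])
      gbest = pbest[pbest_f.argmin()].copy()

      for _ in range(100):
          r1, r2 = rng.random((2, n_particles, dim))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = pos + vel
          f = np.array([loss(p) for p in pos])
          better = f < pbest_f
          pbest[better], pbest_f[better] = pos[better], f[better]
          gbest = pbest[pbest_f.argmin()].copy()

      # gbest would then seed ordinary back-propagation (gradient) training.
      print("MSE with PSO-selected initial weights:", round(loss(gbest), 4))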

  5. Design tradeoffs in long-term research for stream salamanders

    USGS Publications Warehouse

    Brand, Adrianne B,; Grant, Evan H. Campbell

    2017-01-01

    Long-term research programs can benefit from early and periodic evaluation of their ability to meet stated objectives. In particular, consideration of the spatial allocation of effort is key. We sampled 4 species of stream salamanders intensively for 2 years (2010–2011) in the Chesapeake and Ohio Canal National Historical Park, Maryland, USA to evaluate alternative distributions of sampling locations within stream networks, and then evaluated via simulation the ability of multiple survey designs to detect declines in occupancy and to estimate dynamic parameters (colonization, extinction) over 5 years for 2 species. We expected that fine-scale microhabitat variables (e.g., cobble, detritus) would be the strongest determinants of occupancy for each of the 4 species; however, we found greater support for all species for models including variables describing position within the stream network, stream size, or stream microhabitat. A monitoring design focused on headwater sections had greater power to detect changes in occupancy and the dynamic parameters in each of 3 scenarios for the dusky salamander (Desmognathus fuscus) and red salamander (Pseudotriton ruber). Results for transect length were more variable, but across all species and scenarios, 25-m transects are most suitable as a balance between maximizing detection probability and describing colonization and extinction. These results inform sampling design and provide a general framework for setting appropriate goals, effort, and duration in the initial planning stages of research programs on stream salamanders in the eastern United States.

  6. Sampling design optimization for spatial functions

    USGS Publications Warehouse

    Olea, R.A.

    1984-01-01

    A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
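
    As a rough illustration of the design indices described above, the sketch below computes the kriging standard error over a grid for a candidate sampling pattern without any field observations, then summarizes it by its average and maximum. It uses ordinary kriging with an assumed exponential variogram rather than the paper's universal kriging, so it should be read only as a simplified stand-in.

      import numpy as np

      def gamma(h, sill=1.0, range_=30.0):
          """Assumed exponential variogram model; no field data are required."""
          return sill * (1.0 - np.exp(-h / range_))

      def ok_variance(samples, x0):
          """Ordinary-kriging estimation variance at x0 for a given sample pattern."""
          n = len(samples)
          d = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=-1)
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = gamma(d)
          A[n, n] = 0.0
          b = np.ones(n + 1)
          b[:n] = gamma(np.linalg.norm(samples - x0, axis=1))
          lam = np.linalg.solve(A, b)       # kriging weights plus Lagrange multiplier
          return float(lam @ b)             # sigma^2 = sum(lambda_i * gamma_i0) + mu

      # Candidate design: a regular 5 x 5 pattern over a 100 x 100 domain.
      g = np.linspace(10, 90, 5)
      design = np.array([(x, y) for x in g for y in g])

      # Evaluate the standard error of estimation over a prediction grid.
      grid = np.array([(x, y) for x in np.linspace(0, 100, 21)
                              for y in np.linspace(0, 100, 21)])
      se = np.sqrt([max(ok_variance(design, p), 0.0) for p in grid])
      print("average SE:", round(se.mean(), 3), " maximum SE:", round(se.max(), 3))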

  7. NEON: High Frequency Monitoring Network for Watershed-Scale Processes and Aquatic Ecology

    NASA Astrophysics Data System (ADS)

    Vance, J. M.; Fitzgerald, M.; Parker, S. M.; Roehm, C. L.; Goodman, K. J.; Bohall, C.; Utz, R.

    2014-12-01

    Networked high frequency hydrologic and water quality measurements needed to investigate physical and biogeochemical processes at the watershed scale and create robust models are limited and lack standardization. Determining the drivers and mechanisms of ecological changes in aquatic systems in response to natural and anthropogenic pressures is challenging due to the large amounts of terrestrial, aquatic, atmospheric, biological, chemical, and physical data it requires at varied spatiotemporal scales. The National Ecological Observatory Network (NEON) is a continental-scale infrastructure project designed to provide data to address the impacts of climate change, land-use, and invasive species on ecosystem structure and function. Using a combination of standardized continuous in situ measurements and observational sampling, the NEON Aquatic array will produce over 200 data products across its spatially-distributed field sites for 30 years to facilitate spatiotemporal analysis of the drivers of ecosystem change. Three NEON sites in Alabama were chosen to address linkages between watershed-scale processes and ecosystem changes along an eco-hydrological gradient within the Tombigbee River Basin. The NEON Aquatic design, once deployed, will include continuous measurements of surface water physical, chemical, and biological parameters, groundwater level, temperature and conductivity and local meteorology. Observational sampling will include bathymetry, water chemistry and isotopes, and a suite of organismal sampling from microbes to macroinvertebrates to vertebrates. NEON deployed a buoy to measure the temperature profile of the Black Warrior River from July to November 2013 to determine the spatiotemporal variability across the water column from a daily to seasonal scale. In July 2014 a series of water quality profiles was performed to assess the contribution of physical and biogeochemical drivers over a diurnal cycle. Additional river transects were performed across our site reach to capture the spatial variability of surface water parameters. Our preliminary data show differing response times to precipitation events and diurnal processes, informing our infrastructure designs and sampling protocols aimed at providing data to address the eco-hydrological gradient.

  8. A new centrality measure for identifying influential nodes in social networks

    NASA Astrophysics Data System (ADS)

    Rhouma, Delel; Ben Romdhane, Lotfi

    2018-04-01

    The identification of central nodes has been a key problem in the field of social network analysis. In fact, centrality is a measure that accounts for the popularity or the visibility of an actor within a network. In order to capture this concept, various measures, either simple or more elaborate, have been developed. Nevertheless, many of the "traditional" measures are not designed to be applicable to huge data sets. This paper sets out a new node centrality index suitable for large social networks. It uses the number of neighbors of a node and the connections between them to characterize a "pivot" node in the graph. We present experimental results on real data sets which show the efficiency of our proposal.
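
    The abstract does not give the exact formula, so the sketch below implements a hypothetical local index in the same spirit: a node is scored from its number of neighbors and the connections among them, using only the 1-hop neighborhood so that no global computation is required. The function name and scoring rule are illustrative assumptions, not the paper's measure.

      import networkx as nx

      def local_pivot_centrality(G, v):
          """Hypothetical local index: neighbor count plus the density of links
          among those neighbors; only the 1-hop neighborhood is examined."""
          nbrs = set(G[v])
          k = len(nbrs)
          if k < 2:
              return float(k)
          links = sum(1 for u in nbrs for w in G[u] if w in nbrs and u < w)
          return k + links / (k * (k - 1) / 2.0)

      G = nx.karate_club_graph()
      ranked = sorted(G, key=lambda v: local_pivot_centrality(G, v), reverse=True)
      print("top-5 candidate pivot nodes:", ranked[:5])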

  9. Integrative Analysis of Many Weighted Co-Expression Networks Using Tensor Computation

    PubMed Central

    Li, Wenyuan; Liu, Chun-Chi; Zhang, Tong; Li, Haifeng; Waterman, Michael S.; Zhou, Xianghong Jasmine

    2011-01-01

    The rapid accumulation of biological networks poses new challenges and calls for powerful integrative analysis tools. Most existing methods capable of simultaneously analyzing a large number of networks were primarily designed for unweighted networks, and cannot easily be extended to weighted networks. However, it is known that transforming weighted into unweighted networks by dichotomizing the edges of weighted networks with a threshold generally leads to information loss. We have developed a novel, tensor-based computational framework for mining recurrent heavy subgraphs in a large set of massive weighted networks. Specifically, we formulate the recurrent heavy subgraph identification problem as a heavy 3D subtensor discovery problem with sparse constraints. We describe an effective approach to solving this problem by designing a multi-stage, convex relaxation protocol, and a non-uniform edge sampling technique. We applied our method to 130 co-expression networks, and identified 11,394 recurrent heavy subgraphs, grouped into 2,810 families. We demonstrated that the identified subgraphs represent meaningful biological modules by validating against a large set of compiled biological knowledge bases. We also showed that the likelihood for a heavy subgraph to be meaningful increases significantly with its recurrence in multiple networks, highlighting the importance of the integrative approach to biological network analysis. Moreover, our approach based on weighted graphs detects many patterns that would be overlooked using unweighted graphs. In addition, we identified a large number of modules that occur predominately under specific phenotypes. This analysis resulted in a genome-wide mapping of gene network modules onto the phenome. Finally, by comparing module activities across many datasets, we discovered high-order dynamic cooperativeness in protein complex networks and transcriptional regulatory networks. PMID:21698123

  10. Discontinuities in effective permeability due to fracture percolation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hyman, Jeffrey De'Haven; Karra, Satish; Carey, James William

    Motivated by a triaxial coreflood experiment with a sample of Utica shale where an abrupt jump in permeability was observed, possibly due to the creation of a percolating fracture network through the sample, we perform numerical simulations based on the experiment to characterize how the effective permeability of otherwise low-permeability porous media depends on fracture formation, connectivity, and the contrast between the fracture and matrix permeabilities. While a change in effective permeability due to fracture formation is expected, the dependence of its magnitude upon the contrast between the matrix permeability and fracture permeability and the fracture network structure is poorly characterized. We use two different high-fidelity fracture network models to characterize how effective permeability changes as percolation occurs. The first is a dynamic two-dimensional fracture propagation model designed to mimic the laboratory settings of the experiment. The second is a static three-dimensional discrete fracture network (DFN) model, whose fracture and network statistics are based on the fractured sample of Utica shale. Once the network connects the inflow and outflow boundaries, the effective permeability increases non-linearly with network density. In most networks considered, a jump in the effective permeability was observed when the embedded fracture network percolated. We characterize how the magnitude of the jump, should it occur, depends on the contrast between the fracture and matrix permeabilities. For small contrasts between the matrix and fracture permeabilities the change is insignificant. However, for larger contrasts, there is a substantial jump whose magnitude depends non-linearly on the difference between matrix and fracture permeabilities. A power-law relationship between the size of the jump and the difference between the matrix and fracture permeabilities is observed. In conclusion, the presented results underscore the importance of fracture network topology on the upscaled properties of the porous medium in which it is embedded.

  11. Discontinuities in effective permeability due to fracture percolation

    DOE PAGES

    Hyman, Jeffrey De'Haven; Karra, Satish; Carey, James William; ...

    2018-01-31

    Motivated by a triaxial coreflood experiment with a sample of Utica shale where an abrupt jump in permeability was observed, possibly due to the creation of a percolating fracture network through the sample, we perform numerical simulations based on the experiment to characterize how the effective permeability of otherwise low-permeability porous media depends on fracture formation, connectivity, and the contrast between the fracture and matrix permeabilities. While a change in effective permeability due to fracture formation is expected, the dependence of its magnitude upon the contrast between the matrix permeability and fracture permeability and the fracture network structure is poorly characterized. We use two different high-fidelity fracture network models to characterize how effective permeability changes as percolation occurs. The first is a dynamic two-dimensional fracture propagation model designed to mimic the laboratory settings of the experiment. The second is a static three-dimensional discrete fracture network (DFN) model, whose fracture and network statistics are based on the fractured sample of Utica shale. Once the network connects the inflow and outflow boundaries, the effective permeability increases non-linearly with network density. In most networks considered, a jump in the effective permeability was observed when the embedded fracture network percolated. We characterize how the magnitude of the jump, should it occur, depends on the contrast between the fracture and matrix permeabilities. For small contrasts between the matrix and fracture permeabilities the change is insignificant. However, for larger contrasts, there is a substantial jump whose magnitude depends non-linearly on the difference between matrix and fracture permeabilities. A power-law relationship between the size of the jump and the difference between the matrix and fracture permeabilities is observed. In conclusion, the presented results underscore the importance of fracture network topology on the upscaled properties of the porous medium in which it is embedded.

  12. Design of the National Trends Network for monitoring the chemistry of atmospheric precipitation

    USGS Publications Warehouse

    Robertson, J.K.; Wilson, J.W.

    1985-01-01

    Long-term monitoring (10 years minimum) of the chemistry of wet deposition will be conducted at National Trends Network (NTN) sites across the United States. Precipitation samples will be collected at sites that represent broad regional characteristics. Design of the NTN considered four basic elements during construction of a model to distribute 50, 75, 100, 125 or 150 sites. The modeling oriented design was supplemented with guidance developed during the course of the site selection process. Ultimately, a network of 151 sites was proposed. The basic elements of the design are: (1) Assurance that all areas of the country are represented in the network on the basis of regional ecological properties (96 sites); (2) Placement of additional sites east of the Rocky Mountains to better define high deposition gradients (27 sites); (3) Placement of sites to assure that potentially sensitive regions are represented (15 sites); (4) Placement of sites to allow for other considerations, such as urban area effects (5 sites), intercomparison with Canada (3 sites), and apparent disparities in regional coverage (5 sites). Site selection stressed areas away from urban centers, large point sources, or ocean influences. Local factors, such as stable land ownership, nearby small emission sources (about 10 km), and close-by roads and fireplaces (about 0.5 km) were also considered. All proposed sites will be visited as part of the second phase of the study.

  13. Design and Implementation of the International Genetics and Translational Research in Transplantation Network.

    PubMed

    2015-11-01

    Genetic association studies of transplantation outcomes have been hampered by small samples and highly complex multifactorial phenotypes, hindering investigations of the genetic architecture of a range of comorbidities which significantly impact graft and recipient life expectancy. We describe here the rationale and design of the International Genetics & Translational Research in Transplantation Network. The network comprises 22 studies to date, including 16,494 transplant recipients and 11,669 donors, of whom more than 5,000 are of non-European ancestry, all of whom have existing genomewide genotype data sets. We describe the rich genetic and phenotypic information available in this consortium comprising heart, kidney, liver, and lung transplant cohorts. We demonstrate significant power in the International Genetics & Translational Research in Transplantation Network to detect main effect association signals across regions such as the MHC region as well as genomewide for transplant outcomes that span all solid organs, such as graft survival, acute rejection, new onset of diabetes after transplantation, and for delayed graft function in kidney only. This consortium is designed and statistically powered to deliver pioneering insights into the genetic architecture of transplant-related outcomes across a range of different solid-organ transplant studies. The study design allows a spectrum of analyses to be performed including recipient-only analyses, donor-recipient HLA mismatches with focus on loss-of-function variants and nonsynonymous single nucleotide polymorphisms.

  14. Neural Network Design on the SRC-6 Reconfigurable Computer

    DTIC Science & Technology

    2006-12-01

    fingerprint identification. In this field, automatic identification methods are used to save time, especially for the purpose of fingerprint matching in...grid widths and lengths and therefore was useful in producing an accurate canvas with which to create sample training images. The added benefit of...tools available free of charge and readily accessible on the computer, it was simple to design bitmap data files visually on a canvas and then

  15. Statewide water-quality network for Massachusetts

    USGS Publications Warehouse

    Desimone, Leslie A.; Steeves, Peter A.; Zimmerman, Marc James

    2001-01-01

    A water-quality monitoring program is proposed that would provide data to meet multiple information needs of Massachusetts agencies and other users concerned with the condition of the State's water resources. The program was designed by the U.S. Geological Survey and the Massachusetts Department of Environmental Protection, Division of Watershed Management, with input from many organizations involved in water-quality monitoring in the State, and focuses on inland surface waters (streams and lakes). The proposed monitoring program consists of several components, or tiers, which are defined in terms of specific monitoring objectives, and is intended to complement the Massachusetts Watershed Initiative (MWI) basin assessments. Several components were developed using the Neponset River Basin in eastern Massachusetts as a pilot area, or otherwise make use of data from and sampling approaches used in that basin as part of a MWI pilot assessment in 1994. To guide development of the monitoring program, reviews were conducted of general principles of network design, including monitoring objectives and approaches, and of ongoing monitoring activities of Massachusetts State agencies. Network tiers described in this report are primarily (1) a statewide, basin-based assessment of existing surface-water-quality conditions, and (2) a fixed-station network for determining contaminant loads carried by major rivers. Other components, including (3) targeted programs for hot-spot monitoring and other objectives, and (4) compliance monitoring, also are discussed. Monitoring programs for the development of Total Maximum Daily Loads for specific water bodies, which would constitute another tier of the network, are being developed separately and are not described in this report. The basin-based assessment of existing conditions is designed to provide information on the status of surface waters with respect to State water-quality standards and designated uses in accordance with the reporting requirements [Section 305(b)] of the Clean Water Act (CWA). Geographic Information System (GIS)-based procedures were developed to inventory streams and lakes in a basin for these purposes. Several monitoring approaches for this tier and their associated resource requirements were investigated. Analysis of the Neponset Basin for this purpose demonstrated that the large number of sites needed in order for all the small streams in a basin to be sampled (about half of stream miles in the basin were headwater or first-order streams) poses substantial resource-based problems for a comprehensive assessment of existing conditions. The many lakes pose similar problems. Thus, a design is presented in which probabilistic monitoring of small streams is combined with deterministic or targeted monitoring of large streams and lakes to meet CWA requirements and to provide data for other information needs of Massachusetts regulatory agencies and MWI teams. The fixed-station network is designed to permit the determination of contaminant loads carried by the State's major rivers to sensitive inland and coastal receiving waters and across State boundaries. Sampling at 19 proposed sites in 17 of the 27 major basins in Massachusetts would provide information on contaminant loads from 67 percent of the total land area of the State; unsampled areas are primarily coastal areas drained by many small streams that would be impossible to sample within realistic resource limitations.
Strategies for hot-spot monitoring, a targeted monitoring program focused on identifying contaminant sources, are described with reference to an analysis of the bacteria sampling program of the 1994 Neponset Basin assessment. Finally, major discharge sites permitted under the National Pollutant Discharge Elimination System (NPDES) were evaluated as a basis for ambient water-quality monitoring. The discharge sites are well distributed geographically among basins, but are primarily on large rivers (two-thirds or more

  16. Data from selected U.S. Geological Survey national stream water-quality monitoring networks (WQN) on CD-ROM

    USGS Publications Warehouse

    Alexander, R.B.; Ludtke, A.S.; Fitzgerald, K.K.; Schertz, T.L.

    1996-01-01

    Data from two U.S. Geological Survey (USGS) national stream water-quality monitoring networks, the National Stream Quality Accounting Network (NASQAN) and the Hydrologic Benchmark Network (HBN), are now available in a two CD-ROM set. These data on CD-ROM are collectively referred to as WQN, water-quality networks. Data from these networks have been used at the national, regional, and local levels to estimate the rates of chemical flux from watersheds, quantify changes in stream water quality for periods during the past 30 years, and investigate relations between water quality and streamflow as well as the relations of water quality to pollution sources and various physical characteristics of watersheds. The networks include 679 monitoring stations in watersheds that represent diverse climatic, physiographic, and cultural characteristics. The HBN includes 63 stations in relatively small, minimally disturbed basins ranging in size from 2 to 2,000 square miles with a median drainage basin size of 57 square miles. NASQAN includes 618 stations in larger, more culturally-influenced drainage basins ranging in size from one square mile to 1.2 million square miles with a median drainage basin size of about 4,000 square miles. The CD-ROMs contain data for 63 physical, chemical, and biological properties of water (122 total constituents including analyses of dissolved and water suspended-sediment samples) collected during more than 60,000 site visits. These data approximately span the periods 1962-95 for HBN and 1973-95 for NASQAN. The data reflect sampling over a wide range of streamflow conditions and the use of relatively consistent sampling and analytical methods. The CD-ROMs provide ancillary information and data-retrieval tools to allow the national network data to be properly and efficiently used. Ancillary information includes the following: descriptions of the network objectives and history, characteristics of the network stations and water-quality data, historical records of important changes in network sample collection and laboratory analytical methods, water reference sample data for estimating laboratory measurement bias and variability for 34 dissolved constituents for the period 1985-95, discussions of statistical methods for using water reference sample data to evaluate the accuracy of network stream water-quality data, and a bibliography of scientific investigations using national network data and other publications relevant to the networks. The data structure of the CD-ROMs is designed to allow users to efficiently enter the water-quality data to user-supplied software packages including statistical analysis, modeling, or geographic information systems. On one disc, all data are stored in ASCII form accessible from any computer system with a CD-ROM driver. The data also can be accessed using DOS-based retrieval software supplied on a second disc. This software supports logical queries of the water-quality data based on constituent concentrations, sample- collection date, river name, station name, county, state, hydrologic unit number, and 1990 population and 1987 land-cover characteristics for station watersheds. User-selected data may be output in a variety of formats including dBASE, flat ASCII, delimited ASCII, or fixed-field for subsequent use in other software packages.

  17. Geostatistical Prediction of Microbial Water Quality Throughout a Stream Network Using Meteorology, Land Cover, and Spatiotemporal Autocorrelation.

    PubMed

    Holcomb, David A; Messier, Kyle P; Serre, Marc L; Rowny, Jakob G; Stewart, Jill R

    2018-06-25

    Predictive modeling is promising as an inexpensive tool to assess water quality. We developed geostatistical predictive models of microbial water quality that empirically modeled spatiotemporal autocorrelation in measured fecal coliform (FC) bacteria concentrations to improve prediction. We compared five geostatistical models featuring different autocorrelation structures, fit to 676 observations from 19 locations in North Carolina's Jordan Lake watershed using meteorological and land cover predictor variables. Though stream distance metrics (with and without flow-weighting) failed to improve prediction over the Euclidean distance metric, incorporating temporal autocorrelation substantially improved prediction over the space-only models. We predicted FC throughout the stream network daily for one year, designating locations "impaired", "unimpaired", or "unassessed" if the probability of exceeding the state standard was ≥90%, ≤10%, or >10% but <90%, respectively. We could assign impairment status to more of the stream network on days any FC were measured, suggesting frequent sample-based monitoring remains necessary, though implementing spatiotemporal predictive models may reduce the number of concurrent sampling locations required to adequately assess water quality. Together, these results suggest that prioritizing sampling at different times and conditions using geographically sparse monitoring networks is adequate to build robust and informative geostatistical models of water quality impairment.
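
    The impairment rule described above (≥90% impaired, ≤10% unimpaired, otherwise unassessed) can be written as a short classification step; the probabilities in the sketch below are placeholders rather than model output.

      import numpy as np

      def impairment_status(p_exceed, hi=0.90, lo=0.10):
          """Status from the predicted probability of exceeding the FC standard."""
          if p_exceed >= hi:
              return "impaired"
          if p_exceed <= lo:
              return "unimpaired"
          return "unassessed"

      # Placeholder daily exceedance probabilities for a few stream segments.
      probs = np.array([0.97, 0.42, 0.05, 0.88, 0.12])
      print([impairment_status(p) for p in probs])
      # -> ['impaired', 'unassessed', 'unimpaired', 'unassessed', 'unassessed']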

  18. Rocky Mountain snowpack chemistry network; history, methods, and the importance of monitoring mountain ecosystems

    USGS Publications Warehouse

    Ingersoll, George P.; Turk, John T.; Mast, M. Alisa; Clow, David W.; Campbell, Donald H.; Bailey, Zelda C.

    2002-01-01

    Because regional-scale atmospheric deposition data in the Rocky Mountains are sparse, a program was designed by the U.S. Geological Survey to more thoroughly determine the quality of precipitation and to identify sources of atmospherically deposited pollution in a network of high-elevation sites. Depth-integrated samples of seasonal snowpacks at 52 sampling sites, in a network from New Mexico to Montana, were collected and analyzed each year since 1993. The results of the first 5 years (1993–97) of the program are discussed in this report. Spatial patterns in regional data have emerged from the geographically distributed chemical concentrations of ammonium, nitrate, and sulfate that clearly indicate that concentrations of these acid precursors in less developed areas of the region are lower than concentrations in the heavily developed areas. Snowpacks in northern Colorado that lie adjacent to both the highly developed Denver metropolitan area to the east and coal-fired powerplants to the west had the highest overall concentrations of nitrate and sulfate in the network. Ammonium concentrations were highest in northwestern Wyoming and southern Montana.

  19. Design of asymptotic estimators: an approach based on neural networks and nonlinear programming.

    PubMed

    Alessandri, Angelo; Cervellera, Cristiano; Sanguineti, Marcello

    2007-01-01

    A methodology to design state estimators for a class of nonlinear continuous-time dynamic systems that is based on neural networks and nonlinear programming is proposed. The estimator has the structure of a Luenberger observer with a linear gain and a parameterized (in general, nonlinear) function, whose argument is an innovation term representing the difference between the current measurement and its prediction. The problem of the estimator design consists in finding the values of the gain and of the parameters that guarantee the asymptotic stability of the estimation error. Toward this end, if a neural network is used to take on this function, the parameters (i.e., the neural weights) are chosen, together with the gain, by constraining the derivative of a quadratic Lyapunov function for the estimation error to be negative definite on a given compact set. It is proved that it is sufficient to impose the negative definiteness of such a derivative only on a suitably dense grid of sampling points. The gain is determined by solving a Lyapunov equation. The neural weights are searched for via nonlinear programming by minimizing a cost penalizing grid-point constraints that are not satisfied. Techniques based on low-discrepancy sequences are applied to deal with a small number of sampling points, and, hence, to reduce the computational burden required to optimize the parameters. Numerical results are reported and comparisons with those obtained by the extended Kalman filter are made.
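
    A heavily simplified sketch of the grid-penalty idea follows, for a toy scalar plant with full-state measurement: the derivative of V(e) = e²/2 is evaluated on a grid of plant states and estimation errors, violations of negative definiteness are penalized, and a crude random search stands in for the nonlinear programming step. The plant, the one-parameter correction term, and the search strategy are all assumptions, not the authors' design.

      import numpy as np

      rng = np.random.default_rng(1)

      f = lambda x: -x + 0.5 * np.sin(x)            # toy plant dynamics, measurement y = x
      phi = lambda e, w: w[0] * np.tanh(w[1] * e)   # parameterized correction ("neural") term

      # Grid of plant states x and estimation errors e on a compact set.
      xs = np.linspace(-3.0, 3.0, 31)
      es = np.linspace(-3.0, 3.0, 31)
      Xg, Eg = np.meshgrid(xs, es)

      def penalty(L, w, margin=0.05):
          """Total violation of Vdot <= -margin*e^2 (with V = e^2/2) over the grid."""
          edot = f(Xg) - f(Xg - Eg) - L * Eg - phi(Eg, w)
          vdot = Eg * edot
          return np.maximum(vdot + margin * Eg ** 2, 0.0).sum()

      # Crude random search in place of the paper's nonlinear programming step.
      best_L, best_w, best_p = 0.0, np.zeros(2), np.inf
      for _ in range(2000):
          L = rng.uniform(0.0, 5.0)
          w = rng.uniform(-2.0, 2.0, size=2)
          p = penalty(L, w)
          if p < best_p:
              best_L, best_w, best_p = L, w, p

      print("gain L =", round(best_L, 3), "weights =", best_w, "residual penalty =", round(best_p, 4))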

  20. ICCE Policy Statement on Network and Multiple Machine Software.

    ERIC Educational Resources Information Center

    International Council for Computers in Education, Eugene, OR.

    Designed to provide educators with guidance for the lawful reproduction of computer software, this document contains suggested guidelines, sample forms, and several short articles concerning software copyright and license agreements. The initial policy statement calls for educators to provide software developers (or their agents) with a…

  1. A method of network topology optimization design considering application process characteristic

    NASA Astrophysics Data System (ADS)

    Wang, Chunlin; Huang, Ning; Bai, Yanan; Zhang, Shuo

    2018-03-01

    Communication networks are designed to meet the usage requirements of users for various network applications. Current studies of network topology optimization design mainly consider network traffic, which is the result of network application operation, rather than a design element of communication networks. A network application is a procedure in which users make use of services under demanded performance requirements, and it has a clear process characteristic. In this paper, we propose a method to optimize the design of communication network topology that considers the application process characteristic. Taking the minimum network delay as the objective, and the cost of network design and network connective reliability as constraints, an optimization model of network topology design is formulated, and the optimal solution is searched for by a Genetic Algorithm (GA). Furthermore, we investigate the influence of network topology parameters on network delay under multiple process-oriented applications, which can guide the generation of the initial population and thereby improve the efficiency of the GA. Numerical simulations show the effectiveness and validity of the proposed method. Network topology optimization design that considers applications can improve the reliability of applications and provide guidance for network builders in the early stage of network design, which is of great significance in engineering practice.
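
    A minimal sketch of this kind of formulation is given below: candidate topologies are encoded as bit-strings over node pairs, a genetic algorithm minimizes a delay proxy (average shortest-path length), and connectivity plus a link-count cap stand in for the reliability and cost constraints. The delay proxy, cost model, and GA settings are assumptions rather than the paper's model.

      import itertools, random
      import networkx as nx

      random.seed(0)
      N = 8                                            # nodes
      PAIRS = list(itertools.combinations(range(N), 2))
      MAX_LINKS = 12                                   # cost constraint: at most 12 links

      def decode(bits):
          G = nx.Graph()
          G.add_nodes_from(range(N))
          G.add_edges_from(p for p, b in zip(PAIRS, bits) if b)
          return G

      def fitness(bits):
          """Lower is better: average shortest-path delay, with penalties for
          violating the connectivity and link-budget constraints."""
          G = decode(bits)
          if not nx.is_connected(G):
              return 1e6
          return nx.average_shortest_path_length(G) + 100.0 * max(0, sum(bits) - MAX_LINKS)

      def crossover(a, b):
          cut = random.randrange(1, len(a))
          return a[:cut] + b[cut:]

      def mutate(bits, rate=0.05):
          return [1 - b if random.random() < rate else b for b in bits]

      pop = [[random.randint(0, 1) for _ in PAIRS] for _ in range(40)]
      for _ in range(200):
          pop.sort(key=fitness)
          elite = pop[:10]
          pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                         for _ in range(30)]

      best = min(pop, key=fitness)
      print("links used:", sum(best), " average delay:", round(fitness(best), 3))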

  2. Sampling properties of directed networks

    NASA Astrophysics Data System (ADS)

    Son, S.-W.; Christensen, C.; Bizhani, G.; Foster, D. V.; Grassberger, P.; Paczuski, M.

    2012-10-01

    For many real-world networks only a small “sampled” version of the original network may be investigated; those results are then used to draw conclusions about the actual system. Variants of breadth-first search (BFS) sampling, which are based on epidemic processes, are widely used. Although it is well established that BFS sampling fails, in most cases, to capture the IN component(s) of directed networks, a description of the effects of BFS sampling on other topological properties is all but absent from the literature. To systematically study the effects of sampling biases on directed networks, we compare BFS sampling to random sampling on complete large-scale directed networks. We present new results and a thorough analysis of the topological properties of seven complete directed networks (prior to sampling), including three versions of Wikipedia, three different sources of sampled World Wide Web data, and an Internet-based social network. We detail the differences that sampling method and coverage can make to the structural properties of sampled versions of these seven networks. Most notably, we find that sampling method and coverage affect both the bow-tie structure and the number and structure of strongly connected components in sampled networks. In addition, at a low sampling coverage (i.e., less than 40%), the values of average degree, variance of out-degree, degree autocorrelation, and link reciprocity are overestimated by 30% or more in BFS-sampled networks and only attain values within 10% of the corresponding values in the complete networks when sampling coverage is in excess of 65%. These results may cause us to rethink what we know about the structure, function, and evolution of real-world directed networks.
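
    A small, self-contained comparison in the spirit of this study is sketched below: a synthetic directed network is sampled once by BFS and once by uniform node sampling, and the average number of edges per node in each sample is compared with the complete network. The synthetic graph and the 30% coverage level are assumptions.

      import random
      from collections import deque
      import networkx as nx

      random.seed(0)
      G = nx.DiGraph(nx.scale_free_graph(5000))        # synthetic directed network

      def bfs_sample(G, n_target):
          """Breadth-first (snowball-like) sample of n_target nodes, reseeding as needed."""
          nodes = list(G)
          seen, queue = set(), deque()
          while len(seen) < n_target:
              if not queue:                            # (re)seed from an unvisited node
                  seed = random.choice([v for v in nodes if v not in seen])
                  seen.add(seed)
                  queue.append(seed)
              u = queue.popleft()
              for v in G.successors(u):
                  if v not in seen and len(seen) < n_target:
                      seen.add(v)
                      queue.append(v)
          return G.subgraph(seen)

      def random_sample(G, n_target):
          return G.subgraph(random.sample(list(G), n_target))

      n_target = int(0.3 * G.number_of_nodes())        # 30% sampling coverage
      full = G.number_of_edges() / G.number_of_nodes()
      for name, S in (("BFS", bfs_sample(G, n_target)), ("random", random_sample(G, n_target))):
          avg = S.number_of_edges() / S.number_of_nodes()
          print(f"{name:6s} sample: edges per node {avg:.2f} (complete network: {full:.2f})")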

  3. Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach

    NASA Astrophysics Data System (ADS)

    Xiao, T.

    2012-12-01

    One of the most important components in urban land cover mapping is mapping accuracy assessment. Many statistical models have been developed to help design simple schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy as well as the cost of an assessment. Understanding cost and sample size is crucial in implementing efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design and sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used in this study to test and verify the model. The relationship between the cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.
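
    The cost component can be made concrete with a short sketch: a standard accuracy-driven sample-size formula is combined with a per-sample cost model (transport, field collection, laboratory analysis). The unit costs and accuracy targets below are placeholders, not values from the study.

      import math

      def required_samples(p=0.85, margin=0.05, z=1.96):
          """Sample size to estimate overall map accuracy p to within +/- margin (95% confidence)."""
          return math.ceil(z * z * p * (1 - p) / (margin * margin))

      def total_cost(n, transport=12.0, field=8.0, lab=25.0):
          """Per-sample cost model: travel + field data collection + laboratory analysis (USD)."""
          return n * (transport + field + lab)

      for margin in (0.10, 0.05, 0.025):
          n = required_samples(margin=margin)
          print(f"margin +/-{margin:.3f}: n = {n:4d}, cost = ${total_cost(n):,.0f}")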

  4. Many-objective Groundwater Monitoring Network Design Using Bias-Aware Ensemble Kalman Filtering and Evolutionary Optimization

    NASA Astrophysics Data System (ADS)

    Kollat, J. B.; Reed, P. M.

    2009-12-01

    This study contributes the ASSIST (Adaptive Strategies for Sampling in Space and Time) framework for improving long-term groundwater monitoring decisions across space and time while accounting for the influences of systematic model errors (or predictive bias). The ASSIST framework combines contaminant flow-and-transport modeling, bias-aware ensemble Kalman filtering (EnKF) and many-objective evolutionary optimization. Our goal in this work is to provide decision makers with a fuller understanding of the information tradeoffs they must confront when performing long-term groundwater monitoring network design. Our many-objective analysis considers up to 6 design objectives simultaneously and consequently synthesizes prior monitoring network design methodologies into a single, flexible framework. This study demonstrates the ASSIST framework using a tracer study conducted within a physical aquifer transport experimental tank located at the University of Vermont. The tank tracer experiment was extensively sampled to provide high resolution estimates of tracer plume behavior. The simulation component of the ASSIST framework consists of stochastic ensemble flow-and-transport predictions using ParFlow coupled with the Lagrangian SLIM transport model. The ParFlow and SLIM ensemble predictions are conditioned with tracer observations using a bias-aware EnKF. The EnKF allows decision makers to enhance plume transport predictions in space and time in the presence of uncertain and biased model predictions by conditioning them on uncertain measurement data. In this initial demonstration, the position and frequency of sampling were optimized to: (i) minimize monitoring cost, (ii) maximize information provided to the EnKF, (iii) minimize failure to detect the tracer, (iv) maximize the detection of tracer flux, (v) minimize error in quantifying tracer mass, and (vi) minimize error in quantifying the moment of the tracer plume. The results demonstrate that the many-objective problem formulation provides a tremendous amount of information for decision makers. Specifically our many-objective analysis highlights the limitations and potentially negative design consequences of traditional single and two-objective problem formulations. These consequences become apparent through visual exploration of high-dimensional tradeoffs and the identification of regions with interesting compromise solutions. The prediction characteristics of these compromise designs are explored in detail, as well as their implications for subsequent design decisions in both space and time.

  5. Identifying influencers from sampled social networks

    NASA Astrophysics Data System (ADS)

    Tsugawa, Sho; Kimura, Kazuma

    2018-10-01

    Identifying influencers who can spread information to many other individuals from a social network is a fundamental research task in the network science research field. Several measures for identifying influencers have been proposed, and the effectiveness of these influence measures has been evaluated for the case where the complete social network structure is known. However, it is difficult in practice to obtain the complete structure of a social network because of missing data, false data, or node/link sampling from the social network. In this paper, we investigate the effects of node sampling from a social network on the effectiveness of influence measures at identifying influencers. Our experimental results show that the negative effect of biased sampling, such as sample edge count, on the identification of influencers is generally small. For social media networks, we can identify influencers whose influence is comparable with that of those identified from the complete social networks by sampling only 10%-30% of the networks. Moreover, our results also suggest the possible benefit of network sampling in the identification of influencers. Our results show that, for some networks, nodes with higher influence can be discovered from sampled social networks than from complete social networks.
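
    The following sketch illustrates the basic experiment on a synthetic network: the top-k nodes by degree are identified in the complete graph and in a 20% node sample, and the overlap between the two sets is reported. The graph model, the degree measure, and the sampling fraction are assumptions for illustration.

      import random
      import networkx as nx

      random.seed(42)
      G = nx.barabasi_albert_graph(3000, 3)            # synthetic social network

      def top_k_by_degree(H, k=30):
          return set(sorted(H, key=H.degree, reverse=True)[:k])

      full_top = top_k_by_degree(G)

      kept = random.sample(list(G), int(0.2 * G.number_of_nodes()))   # 20% node sample
      sample_top = top_k_by_degree(G.subgraph(kept))

      overlap = len(full_top & sample_top) / len(full_top)
      print(f"fraction of true top-30 influencers recovered from the sample: {overlap:.2f}")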

  6. Neural computing thermal comfort index PMV for the indoor environment intelligent control system

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Chen, Yifei

    2013-03-01

    Providing indoor thermal comfort and saving energy are two main goals of an indoor environmental control system. An intelligent comfort control system combining intelligent control and minimum-power control strategies for the indoor environment is presented in this paper. In the system, for realizing comfort control, the predicted mean vote (PMV) is designed as the control goal, and with amended formulas of PMV it is optimized to improve the indoor comfort level by considering six comfort-related variables. On the other hand, an RBF neural network based on a genetic algorithm is designed to calculate PMV, achieving better performance and handling the nonlinear nature of the PMV calculation. Formulas are given for calculating the expected output values based on the input samples, and the RBF network model is trained on the input samples and the expected output values. The simulation results show that the design of the intelligent calculation method is valid. Moreover, the method achieves high precision, fast dynamic response, and good system performance, and it can be used in practice within the required calculation error.

  7. Sampling from complex networks using distributed learning automata

    NASA Astrophysics Data System (ADS)

    Rezvanian, Alireza; Rahmati, Mohammad; Meybodi, Mohammad Reza

    2014-02-01

    A complex network provides a framework for modeling many real-world phenomena in the form of a network. In general, a complex network is modeled as a graph of real-world phenomena such as biological networks, ecological networks, technological networks, information networks and particularly social networks. Recently, major studies have been reported on the characterization of social networks, owing to a growing trend in the analysis of online social networks as dynamic, complex, large-scale graphs. Because real networks are large and access to them is limited, the network model is characterized using an appropriate part of the network obtained by sampling approaches. In this paper, a new sampling algorithm based on distributed learning automata is proposed for sampling from complex networks. In the proposed algorithm, a set of distributed learning automata cooperate with each other in order to take appropriate samples from the given network. To investigate the performance of the proposed algorithm, several simulation experiments are conducted on well-known complex networks. Experimental results are compared with several sampling methods in terms of different measures. The experimental results demonstrate the superiority of the proposed algorithm over the others.

  8. Antenna analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern shaping. The interesting thing about D-C synthesis is that the side lobes have the same amplitude. Five-element arrays were used. Again, 41 pattern samples were used for the input. Nine actual D-C patterns ranging from -10 dB to -30 dB side lobe levels were used to train the network. A comparison between simulated and actual D-C techniques for a pattern with -22 dB side lobe level is shown. The goal for this research was to evaluate the performance of neural network computing with antennas. Future applications will employ the backpropagation training algorithm to drastically reduce the computational complexity involved in performing EM compensation for surface errors in large space reflector antennas.
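
    The input/output structure described (pattern samples in, element excitations out) can be mimicked with a small multilayer perceptron; the sketch below trains on randomly generated linear-array patterns rather than Woodward-Lawson or Dolph-Chebyshev data, and the array geometry and the scikit-learn regressor are assumptions.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      N_EL, N_ANG = 20, 41
      theta = np.linspace(-np.pi / 2, np.pi / 2, N_ANG)
      phase = np.pi * np.outer(np.sin(theta), np.arange(N_EL))    # half-wavelength spacing

      def pattern(excitations):
          """|array factor| of a 20-element linear array sampled at 41 angles."""
          return np.abs(np.exp(1j * phase) @ excitations)

      # Training set: random complex excitations and their radiation patterns.
      ex = rng.normal(size=(300, N_EL)) + 1j * rng.normal(size=(300, N_EL))
      X = np.array([pattern(e) for e in ex])                      # inputs: 41 pattern samples
      Y = np.hstack([ex.real, ex.imag])                           # outputs: 40 excitation values

      net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
      net.fit(X[:250], Y[:250])

      err = np.mean(np.abs(net.predict(X[250:]) - Y[250:]))
      print("mean absolute excitation error on held-out patterns:", round(err, 3))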

  9. Antenna analysis using neural networks

    NASA Astrophysics Data System (ADS)

    Smith, William T.

    1992-09-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary).

  10. Operation of remote mobile sensors for security of drinking water distribution systems.

    PubMed

    Perelman, Lina; Ostfeld, Avi

    2013-09-01

    The deployment of fixed online water quality sensors in water distribution systems has been recognized as one of the key components of contamination warning systems for securing public health. This study proposes to explore how the inclusion of mobile sensors for inline monitoring of various water quality parameters (e.g., residual chlorine, pH) can enhance water distribution system security. Mobile sensors equipped with sampling, sensing, data acquisition, wireless transmission and power generation systems are being designed, fabricated, and tested, and prototypes are expected to be released in the very near future. This study initiates the development of a theoretical framework for modeling mobile sensor movement in water distribution systems and integrating the sensory data collected from stationary and non-stationary sensor nodes to increase system security. The methodology is applied and demonstrated on two benchmark networks. Performance of different sensor network designs are compared for fixed and combined fixed and mobile sensor networks. Results indicate that complementing online sensor networks with inline monitoring can increase detection likelihood and decrease mean time to detection. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Multiobjective sampling design for parameter estimation and model discrimination in groundwater solute transport

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1989-01-01

    Sampling design for site characterization studies of solute transport in porous media is formulated as a multiobjective problem. Optimal design of a sampling network is a sequential process in which the next phase of sampling is designed on the basis of all available physical knowledge of the system. Three objectives are considered: model discrimination, parameter estimation, and cost minimization. For the first two objectives, physically based measures of the value of information obtained from a set of observations are specified. In model discrimination, value of information of an observation point is measured in terms of the difference in solute concentration predicted by hypothesized models of transport. Points of greatest difference in predictions can contribute the most information to the discriminatory power of a sampling design. Sensitivity of solute concentration to a change in a parameter contributes information on the relative variance of a parameter estimate. Inclusion of points in a sampling design with high sensitivities to parameters tends to reduce variance in parameter estimates. Cost minimization accounts for both the capital cost of well installation and the operating costs of collection and analysis of field samples. Sensitivities, discrimination information, and well installation and sampling costs are used to form coefficients in the multiobjective problem in which the decision variables are binary (zero/one), each corresponding to the selection of an observation point in time and space. The solution to the multiobjective problem is a noninferior set of designs. To gain insight into effective design strategies, a one-dimensional solute transport problem is hypothesized. Then, an approximation of the noninferior set is found by enumerating 120 designs and evaluating objective functions for each of the designs. Trade-offs between pairs of objectives are demonstrated among the models. The value of an objective function for a given design is shown to correspond to the ability of a design to actually meet an objective.
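
    A toy version of the enumeration step is sketched below: binary designs over a handful of candidate observation points are enumerated, each scored by a placeholder cost and a placeholder sensitivity-based information measure, and the noninferior (Pareto) set is extracted. The numbers are illustrative only, not values from the study.

      import itertools
      import numpy as np

      rng = np.random.default_rng(3)
      N_PTS = 8                                        # candidate observation points
      sens = rng.uniform(0.1, 1.0, N_PTS)              # placeholder parameter sensitivities
      cost = rng.uniform(1.0, 5.0, N_PTS)              # placeholder installation/sampling costs

      designs = []
      for bits in itertools.product([0, 1], repeat=N_PTS):
          x = np.array(bits)
          if x.sum() == 0:
              continue
          # Store (cost, -information) so that both objectives are minimized.
          designs.append((float(cost @ x), float(-(sens @ x)), bits))

      def noninferior(points):
          keep = []
          for p in points:
              dominated = any(q[0] <= p[0] and q[1] <= p[1] and q[:2] != p[:2] for q in points)
              if not dominated:
                  keep.append(p)
          return keep

      front = noninferior(designs)
      print(len(front), "noninferior designs out of", len(designs), "candidates")
      for c, neg_info, bits in sorted(front)[:5]:
          print(f"cost {c:5.2f}  information {-neg_info:5.2f}  design {bits}")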

  12. Microfluidics-based integrated airborne pathogen detection systems

    NASA Astrophysics Data System (ADS)

    Northrup, M. Allen; Alleman-Sposito, Jennifer; Austin, Todd; Devitt, Amy; Fong, Donna; Lin, Phil; Nakao, Brian; Pourahmadi, Farzad; Vinas, Mary; Yuan, Bob

    2006-09-01

    Microfluidic Systems is focused on building microfluidic platforms that interface front-end mesofluidics to handle real-world sample volumes for optimal sensitivity coupled to microfluidic circuitry to process small liquid volumes for complex reagent metering, mixing, and biochemical analysis, particularly for pathogens. MFSI is the prime contractor on two programs for the US Department of Homeland Security: BAND (Bioagent Autonomous Networked Detector) and IBADS (Instantaneous Bio-Aerosol Detection System). The goal of BAND is to develop an autonomous system for monitoring the air for known biological agents. This consists of air collection, sample lysis, sample purification, detection of DNA, RNA, and toxins, and a networked interface to report the results. For IBADS, MFSI is developing the confirmatory device, which must verify the presence of a pathogen within 5 minutes of an air collector/trigger sounding an alarm. Instrument designs and biological assay results from both BAND and IBADS will be presented.

  13. Using convolutional neural networks to explore the microbiome.

    PubMed

    Reiman, Derek; Metwally, Ahmed; Yang Dai

    2017-07-01

    The microbiome has been shown to have an impact on the development of various diseases in the host. Being able to make an accurate prediction of the phenotype of a genomic sample based on its microbial taxonomic abundance profile is an important problem for personalized medicine. In this paper, we examine the potential of using a deep learning framework, a convolutional neural network (CNN), for such a prediction. To facilitate the CNN learning, we explore the structure of abundance profiles by creating the phylogenetic tree and by designing a scheme to embed the tree into a matrix that retains the spatial relationship of nodes in the tree and their quantitative characteristics. The proposed CNN framework is highly accurate, achieving 99.47% accuracy in an evaluation on a dataset of 1,967 samples spanning three phenotypes. Our results demonstrate the feasibility and promise of CNNs for the classification of sample phenotypes.
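
    A minimal sketch of the tree-to-matrix idea follows: abundances from a toy phylogenetic tree are placed in a 2D array by level order, so that related taxa stay spatially close, which is the property a CNN can exploit. The toy tree, abundances, and layout rule are assumptions, not the paper's exact embedding scheme.

      import numpy as np

      # Toy phylogenetic tree (parent -> children); leaves carry relative abundances.
      tree = {"root": ["p1", "p2"],
              "p1": ["g1", "g2"],
              "p2": ["g3"],
              "g1": ["sp1", "sp2"],
              "g2": ["sp3"],
              "g3": ["sp4", "sp5"]}
      abundance = {"sp1": 0.30, "sp2": 0.05, "sp3": 0.20, "sp4": 0.35, "sp5": 0.10}

      def node_value(n):
          """Leaf: its abundance. Internal node: the sum over its subtree."""
          if n in abundance:
              return abundance[n]
          return sum(node_value(c) for c in tree[n])

      # Level-order layout: row = depth, columns keep sibling order, so taxa that
      # are phylogenetically close also end up spatially close in the matrix.
      levels, frontier = [], ["root"]
      while frontier:
          levels.append(frontier)
          frontier = [c for n in frontier for c in tree.get(n, [])]

      M = np.zeros((len(levels), max(len(l) for l in levels)))
      for i, level in enumerate(levels):
          for j, n in enumerate(level):
              M[i, j] = node_value(n)

      print(M)      # one such matrix per sample would serve as the CNN input image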

  14. Surface-water-quality assessment of the Kentucky River Basin, Kentucky; fixed-station network and selected water-quality data, April 1987 through August 1991

    USGS Publications Warehouse

    Griffin, M.S.; Martin, G.R.; White, K.D.

    1994-01-01

    This report describes selected data-collection activities and the associated data collected during the Kentucky River Basin pilot study of the U.S. Geological Survey's National Water-Quality Assessment Program. The data are intended to provide a nationally consistent description and improved understanding of current water quality in the basin. The data were collected at seven fixed stations that represent stream cross sections where constituent transport and water-quality trends can be evaluated. The report includes descriptions of (1) the basin; (2) the design of the fixed-station network; (3) the fixed-station sites; (4) the physical and chemical measurements; (5) the methods of sample collection, processing, and analysis; and (6) the quality-assurance and quality-control procedures. Water-quality data collected at the fixed stations during routine periodic sampling and supplemental high-flow sampling from April 1987 to August 1991 are presented.

  15. Structure guided GANs

    NASA Astrophysics Data System (ADS)

    Cao, Feidao; Zhao, Huaici; Liu, Pengfei

    2017-11-01

    Generative adversarial networks (GANs) have achieved success in many fields. However, many GAN-based works generate samples whose structure is ambiguous. In this work, we propose Structure Guided GANs, which introduce structural similarity into GANs to overcome this problem. In order to achieve our goal, we introduce an encoder and a decoder into the generator to design a new generator, and we take real samples as part of the generator's input. We modify the loss function of the generator accordingly. By comparison with WGAN, experimental results show that our proposed method largely overcomes ambiguous sample structure and can generate higher-quality samples.

  16. Asynchronous sampled-data approach for event-triggered systems

    NASA Astrophysics Data System (ADS)

    Mahmoud, Magdi S.; Memon, Azhar M.

    2017-11-01

    While aperiodically triggered network control systems save a considerable amount of communication bandwidth, they also pose challenges such as coupling between control and event-condition design, optimisation of the available resources (control, communication, and computation power), and time-delays due to computation and the communication network. With this motivation, the paper presents three contributions: separate designs of the control and event-triggering mechanisms, which simplifies the overall analysis; an asynchronous linear quadratic Gaussian controller that handles delays and the aperiodic nature of transmissions; and a novel event mechanism that compares the cost of the aperiodic system against a reference periodic implementation. The proposed scheme is simulated on a linearised wind turbine model for pitch angle control, and the results show significant improvement over the periodic counterpart.
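
    The event mechanism itself is easier to see in code. The following sketch is only a rough illustration of the idea of triggering a transmission when the accumulated quadratic cost of the aperiodic (held-input) loop exceeds that of a periodic reference; the plant, feedback gain K, and threshold rho are invented for the example and are not taken from the paper.

    # Rough illustration of cost-referenced event triggering (all numbers assumed).
    import numpy as np

    A, B = np.array([[1.0, 0.1], [0.0, 0.98]]), np.array([[0.005], [0.1]])
    K = np.array([[0.8, 1.2]])                  # assumed stabilizing state feedback
    Q, R = np.eye(2), np.array([[0.1]])
    rho = 1.05                                  # tolerate 5% cost excess before sending

    def stage_cost(x, u):
        return float(x @ Q @ x + u @ R @ u)

    x_ap = x_pr = np.array([1.0, -0.5])         # aperiodic loop and periodic reference
    u_held = -(K @ x_ap).ravel()                # last transmitted control value
    J_ap = J_pr = 0.0
    events = 0
    rng = np.random.default_rng(0)

    for k in range(200):
        u_pr = -(K @ x_pr).ravel()              # reference controller updates every step
        J_ap += stage_cost(x_ap, u_held)
        J_pr += stage_cost(x_pr, u_pr)
        if J_ap > rho * J_pr:                   # event condition: cost exceeds reference
            u_held = -(K @ x_ap).ravel()        # transmit a fresh control value
            events += 1
        w = 0.01 * rng.standard_normal(2)       # small process noise, shared by both loops
        x_ap = A @ x_ap + (B @ u_held[:, None]).ravel() + w
        x_pr = A @ x_pr + (B @ u_pr[:, None]).ravel() + w

    print(f"transmissions: {events} out of 200 sampling instants")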

  17. Function approximation and documentation of sampling data using artificial neural networks.

    PubMed

    Zhang, Wenjun; Barrion, Albert

    2006-11-01

    Biodiversity studies in ecology often begin with the fitting and documentation of sampling data. This study was conducted to approximate functions of sampling data and to document the sampling information using artificial neural network algorithms, based on invertebrate data sampled in an irrigated rice field. Three types of sampling data, i.e., the curve of species richness vs. sample size, the rarefaction curve, and the curve of mean abundance of newly sampled species vs. sample size, are fitted and documented using a BP (backpropagation) network and an RBF (radial basis function) network. For comparison, the Arrhenius model, the rarefaction model, and a power function are tested for their ability to fit these data. The results show that the BP and RBF networks fit the data better than these models, with smaller errors. The BP and RBF networks can fit non-linear functions (sampling data) to a specified accuracy and do not require mathematical assumptions. In addition to interpolation, the BP network is used to extrapolate the functions, and the asymptote of the sampling data can be drawn. The BP network takes longer to train and its results are less stable than those of the RBF network. The RBF network requires more neurons to fit functions and generally may not be used to extrapolate them. The mathematical function underlying sampling data can be fitted exactly using artificial neural network algorithms by adjusting the desired accuracy and maximum iterations. The total number of functional species of invertebrates in the tropical irrigated rice field is extrapolated as 140 to 149 using the trained BP network, which is similar to the observed richness.
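
    To make the curve-fitting step concrete, here is a minimal sketch that fits a synthetic species-richness vs. sample-size curve with a small backpropagation network and extrapolates it beyond the sampled range; the data, network size, and scaling are assumptions for illustration and not the invertebrate data or settings used in the study.

    # Minimal sketch with synthetic data: fit a richness curve with a small
    # backpropagation network (scikit-learn MLPRegressor) and extrapolate.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    n = np.arange(1, 61)                              # sample sizes actually surveyed
    S_true = 145 * (1 - np.exp(-n / 20))              # assumed saturating richness curve
    S_obs = S_true + rng.normal(0, 2, n.size)         # noisy observed richness

    X = (n / 60.0).reshape(-1, 1)                     # scale inputs for conditioning
    net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                       solver="lbfgs", max_iter=5000, random_state=0)
    net.fit(X, S_obs)

    n_extra = np.arange(1, 201).reshape(-1, 1) / 60.0 # extrapolate to larger samples
    S_hat = net.predict(n_extra)
    print("suggested asymptotic richness ~", round(float(S_hat[-50:].mean()), 1))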

  18. The microfluidic bioagent autonomous networked detector (M-BAND): an update. Fully integrated, automated, and networked field identification of airborne pathogens

    NASA Astrophysics Data System (ADS)

    Sanchez, M.; Probst, L.; Blazevic, E.; Nakao, B.; Northrup, M. A.

    2011-11-01

    We describe a fully automated and autonomous airborne biothreat detection system for biosurveillance applications. The system, including the nucleic-acid-based detection assay, was designed, built and shipped by Microfluidic Systems Inc (MFSI), a new subsidiary of PositiveID Corporation (PSID). Our findings demonstrate that the system and assay unequivocally identify pathogenic strains of Bacillus anthracis, Yersinia pestis, Francisella tularensis, Burkholderia mallei, and Burkholderia pseudomallei. In order to assess the assay's ability to detect unknown samples, our team also challenged it against a series of blind samples provided by the Department of Homeland Security (DHS). These samples included naturally occurring isolated strains, near-neighbor isolates, and environmental samples. Our results indicate that the multiplex assay was specific and produced no false positives when challenged with in-house gDNA collections and DHS-provided panels. Here we present another analytical tool for the rapid identification of nine Centers for Disease Control and Prevention category A and B biothreat organisms.

  19. Inferring Broad Regulatory Biology from Time Course Data: Have We Reached an Upper Bound under Constraints Typical of In Vivo Studies?

    PubMed Central

    Craddock, Travis J. A.; Fletcher, Mary Ann; Klimas, Nancy G.

    2015-01-01

    There is a growing appreciation for the network biology that regulates the coordinated expression of molecular and cellular markers; however, questions persist regarding the identifiability of these networks. Here we explore some of the issues relevant to recovering directed regulatory networks from time course data collected under experimental constraints typical of in vivo studies. NetSim simulations of sparsely connected biological networks were used to evaluate two simple feature selection techniques used in the construction of linear Ordinary Differential Equation (ODE) models, namely truncation of terms versus latent vector projection. Performance was compared with ODE-based Time Series Network Identification (TSNI) integral, and the information-theoretic Time-Delay ARACNE (TD-ARACNE). Projection-based techniques and TSNI integral outperformed truncation-based selection and TD-ARACNE on aggregate networks with edge densities of 10-30%, i.e. transcription factor, protein-protein clique and immune signaling networks. All were more robust to noise than truncation-based feature selection. Performance was comparable on the in silico 10-node DREAM 3 network, a 5-node yeast synthetic network designed for In vivo Reverse-engineering and Modeling Assessment (IRMA) and a 9-node human HeLa cell cycle network of similar size and edge density. Performance was more sensitive to the number of time courses than to sample frequency and extrapolated better to larger networks by grouping experiments. In all cases performance declined rapidly in larger networks with lower edge density. The limited recovery and high false-positive rates obtained overall call into question our ability to generate informative time course data, rather than the design of any particular reverse-engineering algorithm. PMID:25984725

  20. Characteristics of the Healthy Brain Project Sample: Representing Diversity among Study Participants

    ERIC Educational Resources Information Center

    Bryant, Lucinda L.; Laditka, James N.; Laditka, Sarah B.; Mathews, Anna E.

    2009-01-01

    Purpose: Description of study participants and documentation of the desired diversity in the Prevention Research Centers Healthy Aging Research Network's Workgroup on Promoting Cognitive Health large multisite study designed to examine attitudes about brain health, behaviors associated with its maintenance, and information-receiving preferences…

  1. NETWORK DESIGN FACTORS FOR ASSESSING TEMPORAL VARIABILITY IN GROUND-WATER QUALITY

    EPA Science Inventory

    A 1.5-year benchmark data set was collected at biweekly frequency from two sites in shallow sand and gravel deposits in west central Illinois. One site was near a hog-processing facility and the other represented uncontaminated conditions. Consistent sampling and analytical protoco...

  2. With Love, Grandma: Letters to Grandchildren.

    ERIC Educational Resources Information Center

    Smith, Carl B.; Ritter, Naomi

    Based on years of experience with intergenerational correspondence at the "Senior Partners Network," this book is designed to help grandparents (and grandchildren) to find the right topics for correspondence, all laid out in clear steps. The book also offers sample letters, cards, and e-mail messages, and provides dozens of themes. The…

  3. An Analysis on the Use of Educational Social Networking Sites in the Course Activities of Geography Department Students: Edmodo Sample

    ERIC Educational Resources Information Center

    Teyfur, Emine; Özkan, Adem; Teyfur, Mehmet

    2017-01-01

    The aim of this study was to examine the views of the students of Geography Department on the use of ESNS Edmodo in the course activities. Sequential explanatory design in mixed methods research designs was used in the study. This study was conducted with a total of 41 second grade students who take Europe Geography class and study in the…

  4. The national stream quality accounting network: A flux-based approach to monitoring the water quality of large rivers

    USGS Publications Warehouse

    Hooper, R.P.; Aulenbach, Brent T.; Kelly, V.J.

    2001-01-01

    Estimating the annual mass flux at a network of fixed stations is one approach to characterizing water quality of large rivers. The interpretive context provided by annual flux includes identifying source and sink areas for constituents and estimating the loadings to receiving waters, such as reservoirs or the ocean. Since 1995, the US Geological Survey's National Stream Quality Accounting Network (NASQAN) has employed this approach at a network of 39 stations in four of the largest river basins of the USA: The Mississippi, the Columbia, the Colorado and the Rio Grande. In this paper, the design of NASQAN is described and its effectiveness at characterizing the water quality of these rivers is evaluated using data from the first 3 years of operation. A broad range of constituents was measured by NASQAN, including trace organic and inorganic chemicals, major ions, sediment and nutrients. Where possible, a regression model relating concentration to discharge and season was used to interpolate between chemical observations for flux estimation. For water-quality network design, the most important finding from NASQAN was the importance of having a specific objective (that is, estimating annual mass flux) and, from that, an explicitly stated data analysis strategy, namely the use of regression models to interpolate between observations. The use of such models aided in the design of sampling strategy and provided a context for data review. The regression models essentially form null hypotheses for concentration variation that can be evaluated by the observed data. The feedback between network operation and data collection established by the hypothesis tests places the water-quality network on a firm scientific footing.
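
    The interpolation step can be illustrated with a common log-linear form of a concentration-discharge regression; the exact NASQAN model terms and any coefficients below are not taken from the paper, and the data are synthetic.

    # Sketch of a concentration-discharge-season regression used to interpolate
    # between sparse water-quality observations for flux estimation:
    #   ln(C) = b0 + b1*ln(Q) + b2*sin(2*pi*t) + b3*cos(2*pi*t)
    import numpy as np

    rng = np.random.default_rng(2)
    t = rng.uniform(0, 3, 40)                       # sampling dates (decimal years)
    Q = np.exp(rng.normal(3.0, 0.6, 40))            # discharge at the sampling dates
    lnC = 1.0 - 0.4 * np.log(Q) + 0.3 * np.sin(2 * np.pi * t) + rng.normal(0, 0.1, 40)

    X = np.column_stack([np.ones_like(t), np.log(Q),
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    beta, *_ = np.linalg.lstsq(X, lnC, rcond=None)  # fit regression coefficients

    # Predict concentration for a year of daily discharges, then sum daily fluxes.
    t_day = np.arange(0, 1, 1 / 365)
    Q_day = np.exp(rng.normal(3.0, 0.6, t_day.size))
    Xd = np.column_stack([np.ones_like(t_day), np.log(Q_day),
                          np.sin(2 * np.pi * t_day), np.cos(2 * np.pi * t_day)])
    C_day = np.exp(Xd @ beta)
    annual_flux = np.sum(C_day * Q_day)             # units depend on C and Q units
    print("estimated annual flux (arbitrary units):", round(float(annual_flux), 1))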

  5. A unified framework for unraveling the functional interaction structure of a biomolecular network based on stimulus-response experimental data.

    PubMed

    Cho, Kwang-Hyun; Choo, Sang-Mok; Wellstead, Peter; Wolkenhauer, Olaf

    2005-08-15

    We propose a unified framework for the identification of functional interaction structures of biomolecular networks in a way that leads to a new experimental design procedure. In developing our approach, we have built upon previous work. Thus we begin by pointing out some of the restrictions associated with existing structure identification methods and point out how these restrictions may be eased. In particular, existing methods use specific forms of experimental algebraic equations with which to identify the functional interaction structure of a biomolecular network. In our work, we employ an extended form of these experimental algebraic equations which, while retaining their merits, also overcomes some of their disadvantages. Experimental data are required in order to estimate the coefficients of the experimental algebraic equation set associated with the structure identification task. However, experimentalists are rarely provided with guidance on which parameters to perturb and to what extent to perturb them. When a model of network dynamics is required, there is also the vexed question of sample rate and sample time selection to be resolved. Supplying some answers to these questions is the main motivation of this paper. The approach is based on stationary and/or temporal data obtained from parameter perturbations, and unifies the previous approaches of Kholodenko et al. (PNAS 99 (2002) 12841-12846) and Sontag et al. (Bioinformatics 20 (2004) 1877-1886). By way of demonstration, we apply our unified approach to a network model which cannot be properly identified by existing methods. Finally, we propose an experiment design methodology, which is not limited by the amount of parameter perturbations, and illustrate its use with an in numero example.

  6. Optimal spatio-temporal design of water quality monitoring networks for reservoirs: Application of the concept of value of information

    NASA Astrophysics Data System (ADS)

    Maymandi, Nahal; Kerachian, Reza; Nikoo, Mohammad Reza

    2018-03-01

    This paper presents a new methodology for optimizing Water Quality Monitoring (WQM) networks of reservoirs and lakes using the concept of the value of information (VOI) and utilizing results of a calibrated numerical water quality simulation model. With reference to the value of information theory, the water quality of every checkpoint, with a specific prior probability, differs in time. After analyzing water quality samples taken from potential monitoring points, the posterior probabilities are updated using Bayes' theorem, and the VOI of the samples is calculated. In the next step, the stations with maximum VOI are selected as optimal stations. This process is repeated for each sampling interval to obtain optimal monitoring network locations for each interval. The results of the proposed VOI-based methodology are compared with those obtained using an entropy-theoretic approach. As the results of the two methodologies would be partially different, in the next step the results are combined using a weighting method. Finally, the optimal sampling interval and locations of WQM stations are chosen using the Evidential Reasoning (ER) decision-making method. The efficiency and applicability of the methodology are evaluated using available water quantity and quality data of the Karkheh Reservoir in the southwestern part of Iran.
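
    The station-selection step rests on a standard value-of-information calculation: update a prior over water-quality states with Bayes' theorem for each possible sample outcome, and score a candidate point by how much the expected decision loss drops. The sketch below is only a toy illustration with invented probabilities and losses, not the reservoir model or decision structure used in the paper.

    # Toy value-of-information scoring of candidate monitoring points.
    import numpy as np

    loss = np.array([[0.0, 10.0],    # action "no treatment" vs states (good, polluted)
                     [2.0,  1.0]])   # action "treat"        vs states (good, polluted)

    def expected_loss(p):
        """Best achievable expected loss under belief p over the states."""
        return min(loss[a] @ p for a in range(loss.shape[0]))

    def value_of_information(prior, likelihood):
        """VOI = prior expected loss - expected posterior loss over possible readings."""
        voi = expected_loss(prior)
        for y in range(likelihood.shape[0]):              # possible sample readings
            p_y = likelihood[y] @ prior                   # marginal prob. of reading y
            if p_y > 0:
                posterior = likelihood[y] * prior / p_y   # Bayes' theorem
                voi -= p_y * expected_loss(posterior)
        return voi

    # One prior and one P(reading | state) matrix per candidate checkpoint (invented).
    priors = {"A": np.array([0.9, 0.1]), "B": np.array([0.6, 0.4])}
    likelihoods = {"A": np.array([[0.8, 0.3], [0.2, 0.7]]),
                   "B": np.array([[0.7, 0.2], [0.3, 0.8]])}

    scores = {s: value_of_information(priors[s], likelihoods[s]) for s in priors}
    best = max(scores, key=scores.get)
    print(scores, "->", best, "selected for this sampling interval")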

  7. Stochastic sampled-data control for synchronization of complex dynamical networks with control packet loss and additive time-varying delays.

    PubMed

    Rakkiyappan, R; Sakthivel, N; Cao, Jinde

    2015-06-01

    This study examines the exponential synchronization of complex dynamical networks with control packet loss and additive time-varying delays. Additionally, a sampled-data controller with time-varying sampling period is considered and is assumed to switch between m different values in a random way with given probability. Then, a novel Lyapunov-Krasovskii functional (LKF) with triple integral terms is constructed, and by using Jensen's inequality and the reciprocally convex approach, sufficient conditions under which the dynamical network is exponentially mean-square stable are derived. When applying Jensen's inequality to partition double integral terms in the derivation of linear matrix inequality (LMI) conditions, a new kind of linear combination of positive functions weighted by the inverses of squared convex parameters appears. In order to handle such a combination, an effective method is introduced by extending the lower bound lemma. To design the sampled-data controller, the synchronization error system is represented as a switched system. Based on the derived LMI conditions and the average dwell-time method, sufficient conditions for the synchronization of the switched error system are derived in terms of LMIs. Finally, a numerical example is employed to show the effectiveness of the proposed methods. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Spectral-spatial classification of hyperspectral image using three-dimensional convolution network

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Yu, Xuchu; Zhang, Pengqiang; Tan, Xiong; Wang, Ruirui; Zhi, Lu

    2018-01-01

    Recently, hyperspectral image (HSI) classification has become a focus of research. However, the complex structure of an HSI makes feature extraction difficult to achieve. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. The design of an improved 3-D convolutional neural network (3D-CNN) model for HSI classification is described. This model extracts features from both the spectral and spatial dimensions through the application of 3-D convolutions, thereby capturing the important discrimination information encoded in multiple adjacent bands. The designed model views the HSI cube data altogether without relying on any pre- or postprocessing. In addition, the model is trained in an end-to-end fashion without any handcrafted features. The designed model was applied to three widely used HSI datasets. The experimental results demonstrate that the 3D-CNN-based method outperforms conventional methods even with limited labeled training samples.
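
    A small spectral-spatial 3-D CNN of this general kind can be sketched in a few lines of PyTorch; the layer sizes, patch size, and band count below are illustrative assumptions, not the architecture reported in the paper.

    # Illustrative 3-D CNN for HSI patch classification (assumed architecture).
    import torch
    import torch.nn as nn

    class Small3DCNN(nn.Module):
        def __init__(self, n_bands=103, n_classes=9):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=(7, 3, 3)),   # (spectral, height, width)
                nn.ReLU(),
                nn.Conv3d(8, 16, kernel_size=(5, 3, 3)),
                nn.ReLU(),
            )
            with torch.no_grad():                          # infer flattened feature size
                flat = self.features(torch.zeros(1, 1, n_bands, 7, 7)).numel()
            self.classifier = nn.Linear(flat, n_classes)

        def forward(self, x):              # x: (batch, 1, bands, 7, 7) patch cubes
            f = self.features(x)
            return self.classifier(f.flatten(start_dim=1))

    model = Small3DCNN()
    logits = model(torch.randn(4, 1, 103, 7, 7))   # four random patches
    print(logits.shape)                            # -> torch.Size([4, 9])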

  9. Handwritten digits recognition based on immune network

    NASA Astrophysics Data System (ADS)

    Li, Yangyang; Wu, Yunhui; Jiao, Lc; Wu, Jianshe

    2011-11-01

    With the development of society, handwritten digit recognition techniques have been widely applied in production and daily life. Handwritten digit recognition remains a difficult task in the field of pattern recognition. In this paper, a new method is presented for handwritten digit recognition. The digit samples are first preprocessed and features are extracted. Based on these features, a novel immune network classification algorithm is designed and applied to handwritten digit recognition. The proposed algorithm builds on Jerne's immune network model for feature selection and the KNN method for classification. Its characteristic is a novel network with parallel communicating and learning. The performance of the proposed method is evaluated on the MNIST handwritten digit dataset and compared with other recognition algorithms: KNN, ANN and SVM. The results show that the novel classification algorithm based on an immune network gives promising performance and stable behavior for handwritten digit recognition.

  10. Simultaneous Automatic Electrochemical Detection of Zinc, Cadmium, Copper and Lead Ions in Environmental Samples Using a Thin-Film Mercury Electrode and an Artificial Neural Network

    PubMed Central

    Kudr, Jiri; Nguyen, Hoai Viet; Gumulec, Jaromir; Nejdl, Lukas; Blazkova, Iva; Ruttkay-Nedecky, Branislav; Hynek, David; Kynicky, Jindrich; Adam, Vojtech; Kizek, Rene

    2015-01-01

    In this study a device for automatic electrochemical analysis was designed. A three-electrode detection system was attached to a positioning device, which enabled us to move the electrode system from one well to another of a microtitre plate. Disposable carbon tip electrodes were used for Cd(II), Cu(II) and Pb(II) ion quantification, while Zn(II) did not give a signal in this electrode configuration. In order to detect all of the mentioned heavy metals simultaneously, thin-film mercury electrodes (TFME) were fabricated by electrodeposition of mercury on the surface of carbon tips. In comparison with bare electrodes the TFMEs had lower detection limits and better sensitivity. In addition to pure aqueous heavy metal solutions, the assay was also performed on mineralized rock samples, artificial blood plasma samples and samples of chicken embryo organs treated with cadmium. An artificial neural network was created to correctly evaluate the concentrations of the mentioned heavy metals in mixture samples, and an excellent fit was observed (R² = 0.9933). PMID:25558996
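
    The regression idea behind the neural-network calibration can be sketched as follows; the cross-sensitivity matrix, noise level, and network size are invented for illustration and are not the study's measurements or model.

    # Synthetic sketch: map overlapping stripping-voltammetry peak currents to
    # Zn, Cd, Cu and Pb concentrations with a small neural network regressor.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(3)
    conc = rng.uniform(0, 100, size=(300, 4))             # true concentrations (a.u.)
    mixing = np.array([[1.0, 0.2, 0.0, 0.1],              # assumed cross-sensitivities
                       [0.1, 1.0, 0.1, 0.0],
                       [0.0, 0.1, 1.0, 0.2],
                       [0.1, 0.0, 0.2, 1.0]])
    peaks = conc @ mixing + rng.normal(0, 1.0, conc.shape) # simulated peak currents

    X, y = peaks / 100.0, conc / 100.0                     # scale for conditioning
    net = MLPRegressor(hidden_layer_sizes=(20,), solver="lbfgs",
                       max_iter=5000, random_state=0)
    net.fit(X[:250], y[:250])                              # train on 250 mixtures
    pred = net.predict(X[250:])
    print("R^2 on held-out mixtures:", round(r2_score(y[250:], pred), 3))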

  11. Digital Curation of Marine Physical Samples at Ocean Networks Canada

    NASA Astrophysics Data System (ADS)

    Jenkyns, R.; Tomlin, M. C.; Timmerman, R.

    2015-12-01

    Ocean Networks Canada (ONC) has collected hundreds of geological, biological and fluid samples from the water column and seafloor during its maintenance expeditions. These samples have been collected by Remotely Operated Vehicles (ROVs), divers, networked and autonomously deployed instruments, and rosettes. Subsequent measurements are used for scientific experiments, calibration of in-situ and remote sensors, monitoring of Marine Protected Areas, and environment characterization. Tracking the life cycles of these samples from collection to dissemination of results with all the pertinent documents (e.g., protocols, imagery, reports), metadata (e.g., location, identifiers, purpose, method) and data (e.g., measurements, taxonomic classification) is a challenge. The initial collection of samples is normally documented in SeaScribe (an ROV dive logging tool within ONC's Oceans 2.0 software) for which ONC has defined semantics and syntax. Next, samples are often sent to individual scientists and institutions (e.g., Royal BC Museum) for processing and storage, making acquisition of results and life cycle metadata difficult. Finally, this information needs to be retrieved and collated such that multiple user scenarios can be addressed. ONC aims to improve and extend its digital infrastructure for physical samples to support this complex array of samples, workflows and applications. However, in order to promote effective data discovery and exchange, interoperability and community standards must be an integral part of the design. Thus, integrating recommendations and outcomes of initiatives like the EarthCube iSamples working groups are essential. Use cases, existing tools, schemas and identifiers are reviewed, while remaining gaps and challenges are identified. The current status, selected approaches and possible future directions to enhance ONC's digital infrastructure for each sample type are presented.

  12. Toward cost-efficient sampling methods

    NASA Astrophysics Data System (ADS)

    Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie

    2015-09-01

    The sampling method has received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small fraction of vertices with high node degree can possess most of the structural information of a complex network. The two proposed sampling methods are efficient in sampling high-degree nodes, so they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method is developed on the basis of the widely used stratified random sampling (SRS) method, and the second one improves the well-known snowball sampling (SBS) method. In order to demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods in three commonly used simulated networks (a scale-free network, a random network, and a small-world network) and also in two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in terms of recovering the true network structure characteristics reflected by the clustering coefficient, Bonacich centrality and average path length, especially when the sampling rate is low.
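
    The flavour of the degree-prioritized idea can be illustrated with a short snowball-style sampler that always expands the highest-degree unvisited neighbour first; this is an illustration in the spirit of the proposal, not the authors' exact algorithm, and the test graph is generated rather than taken from the paper.

    # Degree-prioritized snowball-style sampling on a synthetic scale-free graph.
    import heapq
    import networkx as nx

    def degree_prioritized_snowball(G, seed, sample_fraction=0.1):
        target = max(1, int(sample_fraction * G.number_of_nodes()))
        sampled = {seed}
        frontier = [(-G.degree(v), v) for v in G.neighbors(seed)]
        heapq.heapify(frontier)                      # max-degree first via negated keys
        while frontier and len(sampled) < target:
            _, v = heapq.heappop(frontier)
            if v in sampled:
                continue
            sampled.add(v)
            for w in G.neighbors(v):
                if w not in sampled:
                    heapq.heappush(frontier, (-G.degree(w), w))
        return G.subgraph(sampled).copy()

    G = nx.barabasi_albert_graph(2000, 3, seed=42)   # scale-free test network
    S = degree_prioritized_snowball(G, seed=0, sample_fraction=0.05)
    print(len(S), "nodes sampled;",
          "sample clustering:", round(nx.average_clustering(S), 3),
          "| full clustering:", round(nx.average_clustering(G), 3))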

  13. Social networks and health: a systematic review of sociocentric network studies in low- and middle-income countries.

    PubMed

    Perkins, Jessica M; Subramanian, S V; Christakis, Nicholas A

    2015-01-01

    In low- and middle-income countries (LMICs), naturally occurring social networks may be particularly vital to health outcomes as extended webs of social ties often are the principal source of various resources. Understanding how social network structure, and influential individuals within the network, may amplify the effects of interventions in LMICs, by creating, for example, cascade effects to non-targeted participants, presents an opportunity to improve the efficiency and effectiveness of public health interventions in such settings. We conducted a systematic review of PubMed, Econlit, Sociological Abstracts, and PsycINFO to identify a sample of 17 sociocentric network papers (arising from 10 studies) that specifically examined health issues in LMICs. We also separately selected to review 19 sociocentric network papers (arising from 10 other studies) on development topics related to wellbeing in LMICs. First, to provide a methodological resource, we discuss the sociocentric network study designs employed in the selected papers, and then provide a catalog of 105 name generator questions used to measure social ties across all the LMIC network papers (including both ego- and sociocentric network papers) cited in this review. Second, we show that network composition, individual network centrality, and network structure are associated with important health behaviors and health and development outcomes in different contexts across multiple levels of analysis and across distinct network types. Lastly, we highlight the opportunities for health researchers and practitioners in LMICs to 1) design effective studies and interventions in LMICs that account for the sociocentric network positions of certain individuals and overall network structure, 2) measure the spread of outcomes or intervention externalities, and 3) enhance the effectiveness and efficiency of aid based on knowledge of social structure. In summary, human health and wellbeing are connected through complex webs of dynamic social relationships. Harnessing such information may be especially important in contexts where resources are limited and people depend on their direct and indirect connections for support. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Social Networks and Health: A Systematic Review of Sociocentric Network Studies in Low- and Middle-Income Countries

    PubMed Central

    Perkins, Jessica M; Subramanian, S V; Christakis, Nicholas A

    2015-01-01

    In low- and middle-income countries (LMICs), naturally occurring social networks may be particularly vital to health outcomes as extended webs of social ties often are the principal source of various resources. Understanding how social network structure, and influential individuals within the network, may amplify the effects of interventions in LMICs, by creating, for example, cascade effects to non-targeted participants, presents an opportunity to improve the efficiency and effectiveness of public health interventions in such settings. We conducted a systematic review of PubMed, Econlit, Sociological Abstracts, and PsycINFO to identify a sample of 17 sociocentric network papers (arising from 10 studies) that specifically examined health issues in LMICs. We also separately selected to review 19 sociocentric network papers (arising from 10 other studies) on development topics related to wellbeing in LMICs. First, to provide a methodological resource, we discuss the sociocentric network study designs employed in the selected papers, and then provide a catalog of 105 name generator questions used to measure social ties across all the LMIC network papers (including both ego- and sociocentric network papers) cited in this review. Second, we show that network composition, individual network centrality, and network structure are associated with important health behaviors and health and development outcomes in different contexts across multiple levels of analysis and across distinct network types. Lastly, we highlight the opportunities for health researchers and practitioners in LMICs to 1) design effective studies and interventions in LMICs that account for the sociocentric network positions of certain individuals and overall network structure, 2) measure the spread of outcomes or intervention externalities, and 3) enhance the effectiveness and efficiency of aid based on knowledge of social structure. In summary, human health and wellbeing are connected through complex webs of dynamic social relationships. Harnessing such information may be especially important in contexts where resources are limited and people depend on their direct and indirect connections for support. PMID:25442969

  15. LESS: Link Estimation with Sparse Sampling in Intertidal WSNs

    PubMed Central

    Ji, Xiaoyu; Chen, Yi-chao; Li, Xiaopeng; Xu, Wenyuan

    2018-01-01

    Deploying wireless sensor networks (WSN) in the intertidal area is an effective approach for environmental monitoring. To sustain reliable data delivery in such a dynamic environment, a link quality estimation mechanism is crucial. However, our observations in two real WSN systems deployed in the intertidal areas reveal that link update in routing protocols often suffers from energy and bandwidth waste due to the frequent link quality measurement and updates. In this paper, we carefully investigate the network dynamics using real-world sensor network data and find it feasible to achieve accurate estimation of link quality using sparse sampling. We design and implement a compressive-sensing-based link quality estimation protocol, LESS, which incorporates both spatial and temporal characteristics of the system to aid the link update in routing protocols. We evaluate LESS in both real WSN systems and a large-scale simulation, and the results show that LESS can reduce energy and bandwidth consumption by up to 50% while still achieving more than 90% link quality estimation accuracy. PMID:29494557
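
    The generic compressive-sensing idea behind such sparse link estimation can be sketched as follows: measure link quality at only a few instants, assume the full series is compressible in a cosine basis, and recover it with an l1-regularized fit. This sketch is not the LESS protocol; the signal, basis, and regularization weight are assumptions for illustration.

    # Generic compressive-sensing recovery of a link-quality time series from
    # sparse samples (illustrative only; not the LESS protocol itself).
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(4)
    T = 200                                            # time slots in one window
    t = np.arange(T)
    lq = 0.8 + 0.1 * np.sin(2 * np.pi * t / 50) + 0.05 * np.sin(2 * np.pi * t / 25)

    # Cosine dictionary: column k is a cosine of frequency k over the window.
    Psi = np.cos(np.pi * np.outer(t + 0.5, np.arange(T)) / T)

    m = 30                                             # sample only 15% of the slots
    idx = np.sort(rng.choice(T, m, replace=False))
    y = lq[idx] + rng.normal(0, 0.01, m)               # noisy sparse measurements

    model = Lasso(alpha=1e-4, max_iter=50000)
    model.fit(Psi[idx], y)                             # fit sparse coefficients
    lq_hat = Psi @ model.coef_ + model.intercept_

    err = np.mean(np.abs(lq_hat - lq)) / np.mean(lq)
    print(f"mean relative estimation error from {m}/{T} samples: {err:.3f}")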

  16. A remote sensing and geographic information system approach to sampling malaria vector habitats in Chiapas, Mexico

    NASA Astrophysics Data System (ADS)

    Beck, L.; Wood, B.; Whitney, S.; Rossi, R.; Spanner, M.; Rodriguez, M.; Rodriguez-Ramirez, A.; Salute, J.; Legters, L.; Roberts, D.; Rejmankova, E.; Washino, R.

    1993-08-01

    This paper describes a procedure whereby remote sensing and geographic information system (GIS) technologies are used in a sample design to study the habitat of Anopheles albimanus, one of the principal vectors of malaria in Central America. This procedure incorporates Landsat-derived land cover maps with digital elevation and road network data to identify a random selection of larval habitats accessible for field sampling. At the conclusion of the sampling season, the larval counts will be used to determine habitat productivity, and then integrated with information on human settlement to assess where people are at high risk of malaria. This approach would be appropriate in areas where land cover information is lacking and problems of access constrain field sampling. The use of a GIS also permits other data (such as insecticide spraying data) to be incorporated in the sample design as they arise. This approach would also be pertinent for other tropical vector-borne diseases, particularly where human activities impact disease vector habitat.

  17. Influences of sampling effort on detected patterns and structuring processes of a Neotropical plant-hummingbird network.

    PubMed

    Vizentin-Bugoni, Jeferson; Maruyama, Pietro K; Debastiani, Vanderlei J; Duarte, L da S; Dalsgaard, Bo; Sazima, Marlies

    2016-01-01

    Virtually all empirical ecological interaction networks to some extent suffer from undersampling. However, how limitations imposed by sampling incompleteness affect our understanding of ecological networks is still poorly explored, which may hinder further advances in the field. Here, we use a plant-hummingbird network with unprecedented sampling effort (2716 h of focal observations) from the Atlantic Rainforest in Brazil, to investigate how sampling effort affects the description of network structure (i.e. widely used network metrics) and the relative importance of distinct processes (i.e. species abundances vs. traits) in determining the frequency of pairwise interactions. By dividing the network into time slices representing a gradient of sampling effort, we show that quantitative metrics, such as interaction evenness, specialization (H2 '), weighted nestedness (wNODF) and modularity (Q; QuanBiMo algorithm) were less biased by sampling incompleteness than binary metrics. Furthermore, the significance of some network metrics changed along the sampling effort gradient. Nevertheless, the higher importance of traits in structuring the network was apparent even with small sampling effort. Our results (i) warn against using very poorly sampled networks as this may bias our understanding of networks, both their patterns and structuring processes, (ii) encourage the use of quantitative metrics little influenced by sampling when performing spatio-temporal comparisons and (iii) indicate that in networks strongly constrained by species traits, such as plant-hummingbird networks, even small sampling is sufficient to detect their relative importance for the frequencies of interactions. Finally, we argue that similar effects of sampling are expected for other highly specialized subnetworks. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.
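
    The effect of sampling effort on a quantitative metric can be mimicked with a simple subsampling experiment on an interaction matrix; the matrix below is synthetic and the metric (Shannon interaction evenness) is just one of the quantitative indices mentioned above, so this is an illustration of the analysis style rather than the study's data or code.

    # Subsample interaction events to mimic lower sampling effort and track a
    # quantitative network metric along the effort gradient (synthetic data).
    import numpy as np

    rng = np.random.default_rng(5)
    full = rng.poisson(rng.gamma(0.5, 4.0, size=(20, 15)))   # plants x hummingbirds counts

    def interaction_evenness(mat):
        p = mat[mat > 0].astype(float)
        p /= p.sum()
        H = -(p * np.log(p)).sum()            # Shannon diversity of link weights
        return H / np.log(mat.size)           # evenness relative to all possible links

    events = np.repeat(np.arange(full.size), full.ravel())   # one entry per observed visit
    for frac in (0.1, 0.25, 0.5, 1.0):                       # sampling-effort gradient
        keep = rng.choice(events, int(frac * events.size), replace=False)
        sub = np.bincount(keep, minlength=full.size).reshape(full.shape)
        print(f"effort {frac:>4}: interaction evenness = {interaction_evenness(sub):.3f}")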

  18. Utilizing Big Data and Twitter to Discover Emergent Online Communities of Cannabis Users

    PubMed Central

    Baumgartner, Peter; Peiper, Nicholas

    2017-01-01

    Large shifts in medical, recreational, and illicit cannabis consumption in the United States have implications for personalizing treatment and prevention programs to a wide variety of populations. As such, considerable research has investigated clinical presentations of cannabis users in clinical and population-based samples. Studies leveraging big data, social media, and social network analysis have emerged as a promising mechanism to generate timely insights that can inform treatment and prevention research. This study extends a novel method called stochastic block modeling to derive communities of cannabis consumers as part of a complex social network on Twitter. A set of examples illustrate how this method can ascertain candidate samples of medical, recreational, and illicit cannabis users. Implications for research planning, intervention design, and public health surveillance are discussed. PMID:28615950

  19. Network analysis reveals multiscale controls on streamwater chemistry

    USGS Publications Warehouse

    McGuire, Kevin J.; Torgersen, Christian E.; Likens, Gene E.; Buso, Donald C.; Lowe, Winsor H.; Bailey, Scott W.

    2014-01-01

    By coupling synoptic data from a basin-wide assessment of streamwater chemistry with network-based geostatistical analysis, we show that spatial processes differentially affect biogeochemical condition and pattern across a headwater stream network. We analyzed a high-resolution dataset consisting of 664 water samples collected every 100 m throughout 32 tributaries in an entire fifth-order stream network. These samples were analyzed for an exhaustive suite of chemical constituents. The fine grain and broad extent of this study design allowed us to quantify spatial patterns over a range of scales by using empirical semivariograms that explicitly incorporated network topology. Here, we show that spatial structure, as determined by the characteristic shape of the semivariograms, differed both among chemical constituents and by spatial relationship (flow-connected, flow-unconnected, or Euclidean). Spatial structure was apparent at either a single scale or at multiple nested scales, suggesting separate processes operating simultaneously within the stream network and surrounding terrestrial landscape. Expected patterns of spatial dependence for flow-connected relationships (e.g., increasing homogeneity with downstream distance) occurred for some chemical constituents (e.g., dissolved organic carbon, sulfate, and aluminum) but not for others (e.g., nitrate, sodium). By comparing semivariograms for the different chemical constituents and spatial relationships, we were able to separate effects on streamwater chemistry of (i) fine-scale versus broad-scale processes and (ii) in-stream processes versus landscape controls. These findings provide insight on the hierarchical scaling of local, longitudinal, and landscape processes that drive biogeochemical patterns in stream networks.
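
    The empirical semivariogram at the core of this analysis is straightforward to compute; the sketch below uses synthetic sample positions and a single constituent, and uses plain separation distance rather than the flow-connected/flow-unconnected network distances used in the study.

    # Empirical semivariogram, gamma(h) = (1 / 2N(h)) * sum of (z_i - z_j)^2 over
    # pairs whose separation falls in lag class h (synthetic data).
    import numpy as np

    rng = np.random.default_rng(6)
    x = np.sort(rng.uniform(0, 10_000, 200))           # sample positions along a stream (m)
    z = 5 + 0.0002 * x + rng.normal(0, 0.5, x.size)    # e.g. a solute concentration

    d = np.abs(x[:, None] - x[None, :])                # pairwise separation distances
    g = 0.5 * (z[:, None] - z[None, :]) ** 2           # half squared differences

    iu = np.triu_indices_from(d, k=1)                  # count each pair once
    bins = np.arange(0, 5_000, 500)                    # 500 m lag classes
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (d[iu] >= lo) & (d[iu] < hi)
        if mask.any():
            print(f"lag {lo:4.0f}-{hi:4.0f} m: gamma = {g[iu][mask].mean():.3f} "
                  f"(n = {mask.sum()})")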

  20. Network analysis reveals multiscale controls on streamwater chemistry

    PubMed Central

    McGuire, Kevin J.; Torgersen, Christian E.; Likens, Gene E.; Buso, Donald C.; Lowe, Winsor H.; Bailey, Scott W.

    2014-01-01

    By coupling synoptic data from a basin-wide assessment of streamwater chemistry with network-based geostatistical analysis, we show that spatial processes differentially affect biogeochemical condition and pattern across a headwater stream network. We analyzed a high-resolution dataset consisting of 664 water samples collected every 100 m throughout 32 tributaries in an entire fifth-order stream network. These samples were analyzed for an exhaustive suite of chemical constituents. The fine grain and broad extent of this study design allowed us to quantify spatial patterns over a range of scales by using empirical semivariograms that explicitly incorporated network topology. Here, we show that spatial structure, as determined by the characteristic shape of the semivariograms, differed both among chemical constituents and by spatial relationship (flow-connected, flow-unconnected, or Euclidean). Spatial structure was apparent at either a single scale or at multiple nested scales, suggesting separate processes operating simultaneously within the stream network and surrounding terrestrial landscape. Expected patterns of spatial dependence for flow-connected relationships (e.g., increasing homogeneity with downstream distance) occurred for some chemical constituents (e.g., dissolved organic carbon, sulfate, and aluminum) but not for others (e.g., nitrate, sodium). By comparing semivariograms for the different chemical constituents and spatial relationships, we were able to separate effects on streamwater chemistry of (i) fine-scale versus broad-scale processes and (ii) in-stream processes versus landscape controls. These findings provide insight on the hierarchical scaling of local, longitudinal, and landscape processes that drive biogeochemical patterns in stream networks. PMID:24753575

  1. Network analysis reveals multiscale controls on streamwater chemistry.

    PubMed

    McGuire, Kevin J; Torgersen, Christian E; Likens, Gene E; Buso, Donald C; Lowe, Winsor H; Bailey, Scott W

    2014-05-13

    By coupling synoptic data from a basin-wide assessment of streamwater chemistry with network-based geostatistical analysis, we show that spatial processes differentially affect biogeochemical condition and pattern across a headwater stream network. We analyzed a high-resolution dataset consisting of 664 water samples collected every 100 m throughout 32 tributaries in an entire fifth-order stream network. These samples were analyzed for an exhaustive suite of chemical constituents. The fine grain and broad extent of this study design allowed us to quantify spatial patterns over a range of scales by using empirical semivariograms that explicitly incorporated network topology. Here, we show that spatial structure, as determined by the characteristic shape of the semivariograms, differed both among chemical constituents and by spatial relationship (flow-connected, flow-unconnected, or Euclidean). Spatial structure was apparent at either a single scale or at multiple nested scales, suggesting separate processes operating simultaneously within the stream network and surrounding terrestrial landscape. Expected patterns of spatial dependence for flow-connected relationships (e.g., increasing homogeneity with downstream distance) occurred for some chemical constituents (e.g., dissolved organic carbon, sulfate, and aluminum) but not for others (e.g., nitrate, sodium). By comparing semivariograms for the different chemical constituents and spatial relationships, we were able to separate effects on streamwater chemistry of (i) fine-scale versus broad-scale processes and (ii) in-stream processes versus landscape controls. These findings provide insight on the hierarchical scaling of local, longitudinal, and landscape processes that drive biogeochemical patterns in stream networks.

  2. Analysis of patient organizations' needs and ICT use--The APTIC project in Spain to develop an online collaborative social network.

    PubMed

    Hernández-Encuentra, Eulàlia; Gómez-Zúñiga, Beni; Guillamón, Noemí; Boixadós, Mercè; Armayones, Manuel

    2015-12-01

    The purpose of this first part of the APTIC (Patient Organisations and ICT) project is to design and run an online collaborative social network for paediatric patient organizations (PPOs). The objective is to analyse the needs of PPOs in Spain in order to identify opportunities to improve health services through the use of ICT. A convenience sample of staff from 35 PPOs (54.68% response rate) participated in a structured online survey and three focus groups (12 PPOs). Paediatric patient organizations' major needs are to provide accredited and managed information, increase personal support and assistance and promote joint commitment to health care. Moreover, PPOs believe in the Internet's potential to meet their needs and support their activities. Basic limitations to using the Internet are lack of knowledge and resources. The discussion of the data includes key elements of designing an online collaborative social network and reflections on the health services provided. © 2014 John Wiley & Sons Ltd.

  3. A Low-Power Sensor Network for Long Duration Monitoring in Deep Caves

    NASA Astrophysics Data System (ADS)

    Silva, A.; Johnson, I.; Bick, T.; Winclechter, C.; Jorgensen, A. M.; Teare, S. W.; Arechiga, R. O.

    2010-12-01

    Monitoring deep and inaccessible caves is important and challenging for a variety of reasons. It is of interest to study cave environments to understand cave ecosystems and human impact on those ecosystems. Caves may also hold clues to past climate change. Cave instrumentation must, however, carry out its job with minimal human intervention and without disturbing the fragile environment. This requires unobtrusive and autonomous instrumentation. Earth-bound caves can also serve as analogs for caves on other planets and act as testbeds for autonomous sensor networks. Here we report on a project to design and implement a low-power, ad-hoc, wireless sensor network for monitoring caves and similar environments. The implemented network is composed of individual nodes, each consisting of a sensor, processing unit, memory, transceiver and a power source. Data collected at these nodes are transmitted through a wireless ZigBee network to a central data collection point, from which the researcher may transfer the collected data to a laptop for further analysis. The project produced a node design with a physical footprint of 2 inches by 3 inches. The design is based on the EZMSP430-RF2480, a ZigBee hardware base offered by Texas Instruments. Five functioning nodes have been constructed at very low cost and tested. Due to the use of an external analog-to-digital converter, the design achieves 16-bit resolution. The operational time of the prototype was calculated to be approximately 80 days of autonomous operation while sampling once per minute.
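
    A rough duty-cycle calculation shows how a once-per-minute schedule translates into a lifetime of this order; all of the current draws and the battery capacity below are assumed round numbers for illustration, not measurements from the deployed nodes.

    # Back-of-envelope node-lifetime estimate for once-per-minute sampling.
    battery_mAh = 2500.0      # e.g. a pair of AA cells (assumed)
    sleep_mA = 0.05           # MCU + radio in deep sleep (assumed)
    active_mA = 30.0          # sample + ZigBee transmit burst (assumed)
    active_s = 2.0            # awake time per sample (assumed)
    period_s = 60.0           # one sample per minute

    # Average current is the duty-cycle-weighted mix of active and sleep draw.
    avg_mA = (active_mA * active_s + sleep_mA * (period_s - active_s)) / period_s
    lifetime_days = battery_mAh / avg_mA / 24.0
    print(f"average draw {avg_mA:.2f} mA -> roughly {lifetime_days:.0f} days of operation")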

  4. Groundwater-quality data for the Sierra Nevada study unit, 2008: Results from the California GAMA program

    USGS Publications Warehouse

    Shelton, Jennifer L.; Fram, Miranda S.; Munday, Cathy M.; Belitz, Kenneth

    2010-01-01

    Groundwater quality in the approximately 25,500-square-mile Sierra Nevada study unit was investigated in June through October 2008, as part of the Priority Basin Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Project is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The Sierra Nevada study was designed to provide statistically robust assessments of untreated groundwater quality within the primary aquifer systems in the study unit, and to facilitate statistically consistent comparisons of groundwater quality throughout California. The primary aquifer systems (hereinafter, primary aquifers) are defined by the depth of the screened or open intervals of the wells listed in the California Department of Public Health (CDPH) database of wells used for public and community drinking-water supplies. The quality of groundwater in shallower or deeper water-bearing zones may differ from that in the primary aquifers; shallow groundwater may be more vulnerable to contamination from the surface. In the Sierra Nevada study unit, groundwater samples were collected from 84 wells (and springs) in Lassen, Plumas, Butte, Sierra, Yuba, Nevada, Placer, El Dorado, Amador, Alpine, Calaveras, Tuolumne, Madera, Mariposa, Fresno, Inyo, Tulare, and Kern Counties. The wells were selected on two overlapping networks by using a spatially-distributed, randomized, grid-based approach. The primary grid-well network consisted of 30 wells, one well per grid cell in the study unit, and was designed to provide statistical representation of groundwater quality throughout the entire study unit. The lithologic grid-well network is a secondary grid that consisted of the wells in the primary grid-well network plus 53 additional wells and was designed to provide statistical representation of groundwater quality in each of the four major lithologic units in the Sierra Nevada study unit: granitic, metamorphic, sedimentary, and volcanic rocks. One natural spring that is not used for drinking water was sampled for comparison with a nearby primary grid well in the same cell. Groundwater samples were analyzed for organic constituents (volatile organic compounds [VOC], pesticides and pesticide degradates, and pharmaceutical compounds), constituents of special interest (N-nitrosodimethylamine [NDMA] and perchlorate), naturally occurring inorganic constituents (nutrients, major ions, total dissolved solids, and trace elements), and radioactive constituents (radium isotopes, radon-222, gross alpha and gross beta particle activities, and uranium isotopes). Naturally occurring isotopes and geochemical tracers (stable isotopes of hydrogen and oxygen in water, stable isotopes of carbon, carbon-14, strontium isotopes, and tritium), and dissolved noble gases also were measured to help identify the sources and ages of the sampled groundwater. Three types of quality-control samples (blanks, replicates, and samples for matrix spikes) each were collected at approximately 10 percent of the wells sampled for each analysis, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection, handling, and analytical procedures was not a significant source of bias in the data for the groundwater samples. 
Differences between replicate samples were within acceptable ranges, with few exceptions. Matrix-spike recoveries were within acceptable ranges for most compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, groundwater typically is treated, disinfected, or blended with other waters to maintain water quality. Regulatory benchmarks apply to finished drinking water that is served to the consumer, not to untreated groundwater.
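
    The spatially distributed, randomized, grid-based selection can be illustrated with a short sketch that overlays a grid on the study area and picks one candidate well at random from each occupied cell; the coordinates and cell size below are synthetic, not the GAMA grid.

    # One-well-per-grid-cell randomized selection (synthetic candidate wells).
    import numpy as np

    rng = np.random.default_rng(7)
    wells = rng.uniform(0, 300, size=(500, 2))        # candidate well locations (km)

    cell_km = 50.0
    cells = np.floor(wells / cell_km).astype(int)     # grid-cell index of each well
    keys = cells[:, 0] * 1000 + cells[:, 1]           # one integer key per cell

    selected = []
    for key in np.unique(keys):
        members = np.flatnonzero(keys == key)         # wells falling in this cell
        selected.append(rng.choice(members))          # one randomly chosen well per cell
    selected = np.array(selected)

    print(f"{selected.size} grid wells selected from {wells.shape[0]} candidates "
          f"in {np.unique(keys).size} occupied cells")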

  5. Networks as systems.

    PubMed

    Best, Allan; Berland, Alex; Greenhalgh, Trisha; Bourgeault, Ivy L; Saul, Jessie E; Barker, Brittany

    2018-03-19

    Purpose: The purpose of this paper is to present a case study of the World Health Organization's Global Healthcare Workforce Alliance (GHWA). Based on a commissioned evaluation of GHWA, it applies network theory and key concepts from systems thinking to explore network emergence, effectiveness, and evolution over a ten-year period. The research was designed to provide high-level strategic guidance for further evolution of global governance in human resources for health (HRH). Design/methodology/approach: Methods included a review of published literature on HRH governance and current practice in the field and an in-depth case study whose main data sources were relevant GHWA background documents and key informant interviews with GHWA leaders, staff, and stakeholders. Sampling was purposive and at a senior level, focusing on board members, executive directors, funders, and academics. Data were analyzed thematically with reference to systems theory and Shiffman's theory of network development. Findings: Five key lessons emerged: effective management and leadership are critical; networks need to balance "tight" and "loose" approaches to their structure and processes; an active communication strategy is key to create and maintain support; the goals, priorities, and membership must be carefully focused; and the network needs to support shared measurement of progress on agreed-upon goals. Shiffman's middle-range network theory is a useful tool when guided by the principles of complex systems that illuminate dynamic situations and shifting interests as global alliances evolve. Research limitations/implications: This study was implemented at the end of the ten-year funding cycle. A more continuous evaluation throughout the term would have provided a richer understanding of issues. Experience and perspectives at the country level were not assessed. Practical implications: Design and management of large, complex networks requires ongoing attention to key issues like leadership, and flexible structures and processes to accommodate the dynamic reality of these networks. Originality/value: This case study builds on growing interest in the role of networks to foster large-scale change. The particular value rests on the longitudinal perspective on the evolution of a large, complex global network, and the use of theory to guide understanding.

  6. Differentially Coexpressed Disease Gene Identification Based on Gene Coexpression Network.

    PubMed

    Jiang, Xue; Zhang, Han; Quan, Xiongwen

    2016-01-01

    Screening disease-related genes by analyzing gene expression data has become a popular theme. Traditional disease-related gene selection methods focus on identifying differentially expressed genes between case samples and a control group. These traditional methods may not fully consider the changes in interactions between genes at different cell states or the dynamic processes of gene expression levels during disease progression. However, in order to understand the mechanism of disease, it is important to explore the dynamic changes of interactions between genes in biological networks at different cell states. In this study, we designed a novel framework to identify disease-related genes and developed a differentially coexpressed disease-related gene identification method based on gene coexpression networks (DCGN) to screen differentially coexpressed genes. We first constructed phase-specific gene coexpression networks using time-series gene expression data and defined the concept of differential coexpression of genes in a coexpression network. Then, we designed two metrics to measure the value of gene differential coexpression according to the change of local topological structures between different phase-specific networks. Finally, we conducted a meta-analysis of gene differential coexpression based on the rank-product method. Experimental results on real-world gene expression data sets demonstrate the feasibility and effectiveness of DCGN and its superior performance over other popular disease-related gene selection methods.
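
    The overall idea can be sketched with correlation-thresholded networks built in two phases of a synthetic time series and a simple connectivity-change score per gene; the threshold, the score, and the data are illustrative assumptions, and the sketch omits the paper's two specific metrics and the rank-product meta-analysis.

    # Phase-specific coexpression networks and a simple differential-coexpression score.
    import numpy as np

    rng = np.random.default_rng(8)
    genes, timepoints = 50, 40
    expr = rng.normal(size=(genes, timepoints))
    expr[0] = expr[1] + rng.normal(0, 0.2, timepoints)   # genes 0 and 1 co-express...
    expr[0, 20:] = rng.normal(size=20)                   # ...only in the first phase

    def coexpression_net(X, threshold=0.7):
        """Adjacency matrix: 1 where |Pearson correlation| exceeds the threshold."""
        A = (np.abs(np.corrcoef(X)) > threshold).astype(int)
        np.fill_diagonal(A, 0)
        return A

    A1 = coexpression_net(expr[:, :20])                  # phase 1 network
    A2 = coexpression_net(expr[:, 20:])                  # phase 2 network

    # Score each gene by how many of its links appear or vanish between phases.
    score = np.abs(A1 - A2).sum(axis=1)
    print("top differentially coexpressed genes:", np.argsort(score)[::-1][:5])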

  7. The Philip Morris Information Network: A Library Database on an In-House Timesharing System.

    ERIC Educational Resources Information Center

    DeBardeleben, Marian Z.; And Others

    1983-01-01

    Outlines a database constructed at Philip Morris Research Center Library which encompasses holdings and circulation and acquisitions records for all items in the library. Host computer (DECSYSTEM-2060), software (BASIC), database design, search methodology, cataloging, and accessibility are noted; sample search, circ-in profile, end user profiles,…

  8. DogMATIC--A Remote Biospecimen Collection Kit for Biobanking.

    PubMed

    Milley, Kristi M; Nimmo, Judith S; Bacci, Barbara; Ryan, Stewart D; Richardson, Samantha J; Danks, Janine A

    2015-08-01

    Canine tumors are valuable comparative oncology models. This research was designed to create a sustainable biobank of canine mammary tumors for breast cancer research. The aim was to provide a well-characterized sample cohort for specimen sharing, data mining, and long-term research aims. Canine mammary tumors are most frequently managed at a local veterinary clinic or hospital. We adopted a biobank framework based on a large number of participating veterinary hospitals and clinics acting as collection centers that were serviced by a centralized storage facility. Recruitment was targeted at rural veterinary clinics. A tailored, stable collection kit (DogMATIC) was designed that was used by veterinarians in remote or rural locations to collect both fresh and fixed tissue for submission to the biobank. To validate this methodology the kit design, collection rate, and sample quality were analyzed. The Australian Veterinary Cancer Biobank was established as a network of 47 veterinary clinics and three veterinary pathology laboratories spanning over 200,000 km². In the first 12 months, 30 canine mammary tumor cases were submitted via the DogMATIC kit. Pure intact RNA was isolated in over 80% of samples with an average yield of 14.49 μg. A large network biobank, utilizing off-site collection with the DogMATIC kit, was successfully coordinated. The creation of the Australian Veterinary Cancer Biobank has established a long-term, sustainable, comparative oncology research resource in Australia. There are broader implications for biobanking with this very different form of collection and banking.

  9. Identification and classification of similar looking food grains

    NASA Astrophysics Data System (ADS)

    Anami, B. S.; Biradar, Sunanda D.; Savakar, D. G.; Kulkarni, P. V.

    2013-01-01

    This paper describes a comparative study of Artificial Neural Network (ANN) and Support Vector Machine (SVM) classifiers, taking as a case study the identification and classification of four pairs of similar-looking food grains, namely Finger Millet, Mustard, Soyabean, Pigeon Pea, Aniseed, Cumin-seeds, Split Greengram and Split Blackgram. Algorithms are developed to acquire and process color images of these grain samples. The developed algorithms are used to extract 18 color (hue-saturation-value, HSV) features and 42 wavelet-based texture features. A Back Propagation Neural Network (BPNN)-based classifier is designed using three feature sets, namely color-HSV, wavelet texture, and their combination. An SVM model using the color-HSV features is designed for the same set of samples. Classification accuracies ranging from 93% to 96% for the color-HSV model, from 78% to 94% for the wavelet-texture model, and from 92% to 97% for the combined model are obtained with the ANN-based classifiers. A classification accuracy ranging from 80% to 90% is obtained for the color-HSV-based SVM model. The training time required for the SVM-based model is substantially less than that of the ANN for the same set of images.

  10. Multifunctional Mesoscale Observing Networks.

    NASA Astrophysics Data System (ADS)

    Dabberdt, Walter F.; Schlatter, Thomas W.; Carr, Frederick H.; Friday, Elbert W. Joe; Jorgensen, David; Koch, Steven; Pirone, Maria; Ralph, F. Martin; Sun, Juanzhen; Welsh, Patrick; Wilson, James W.; Zou, Xiaolei

    2005-07-01

    More than 120 scientists, engineers, administrators, and users met on 8-10 December 2003 in a workshop format to discuss the needs for enhanced three-dimensional mesoscale observing networks. Improved networks are seen as being critical to advancing numerical and empirical modeling for a variety of mesoscale applications, including severe weather warnings and forecasts, hydrology, air-quality forecasting, chemical emergency response, transportation safety, energy management, and others. The participants shared a clear and common vision for the observing requirements: existing two-dimensional mesoscale measurement networks do not provide observations of the type, frequency, and density that are required to optimize mesoscale prediction and nowcasts. To be viable, mesoscale observing networks must serve multiple applications, and the public, private, and academic sectors must all actively participate in their design and implementation, as well as in the creation and delivery of value-added products. The mesoscale measurement challenge can best be met by an integrated approach that considers all elements of an end-to-end solution—identifying end users and their needs, designing an optimal mix of observations, defining the balance between static and dynamic (targeted or adaptive) sampling strategies, establishing long-term test beds, and developing effective implementation strategies. Detailed recommendations are provided pertaining to nowcasting, numerical prediction and data assimilation, test beds, and implementation strategies.


  11. An FPGA Platform for Real-Time Simulation of Spiking Neuronal Networks

    PubMed Central

    Pani, Danilo; Meloni, Paolo; Tuveri, Giuseppe; Palumbo, Francesca; Massobrio, Paolo; Raffo, Luigi

    2017-01-01

    In recent years, the idea to dynamically interface biological neurons with artificial ones has become more and more urgent. The reason is essentially due to the design of innovative neuroprostheses where biological cell assemblies of the brain can be substituted by artificial ones. For closed-loop experiments with biological neuronal networks interfaced with in silico modeled networks, several technological challenges need to be faced, from the low-level interfacing between the living tissue and the computational model to the implementation of the latter in a suitable form for real-time processing. Field programmable gate arrays (FPGAs) can improve flexibility when simple neuronal models are required, obtaining good accuracy, real-time performance, and the possibility to create a hybrid system without any custom hardware, just programming the hardware to achieve the required functionality. In this paper, this possibility is explored by presenting a modular and efficient FPGA design of an in silico spiking neural network exploiting the Izhikevich model. The proposed system, prototypically implemented on a Xilinx Virtex 6 device, is able to simulate a fully connected network counting up to 1,440 neurons, in real-time, at a sampling rate of 10 kHz, which is reasonable for small to medium scale extra-cellular closed-loop experiments. PMID:28293163
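
    The Izhikevich model referred to above has a compact two-variable formulation, which is what makes it attractive for FPGA arithmetic. The following Python fragment is a plain software reference of that published model at a 10 kHz update rate, not the authors' HDL implementation; the regular-spiking parameters and the constant input current are illustrative choices.

        # Software reference of the Izhikevich neuron model (regular-spiking parameters).
        a, b, c, d = 0.02, 0.2, -65.0, 8.0
        dt = 0.1                              # ms per step, i.e. a 10 kHz update rate
        v, u = c, b * c                       # membrane potential and recovery variable
        spike_times = []

        for step in range(10000):             # 1 s of simulated time
            I = 10.0                          # constant input current (illustrative)
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
            u += dt * a * (b * v - u)
            if v >= 30.0:                     # spike detection, then reset
                spike_times.append(step * dt)
                v, u = c, u + d

        print(len(spike_times), "spikes in 1 s")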

  12. A Neural Network based Early Earthquake Warning model in the California region

    NASA Astrophysics Data System (ADS)

    Xiao, H.; MacAyeal, D. R.

    2016-12-01

    Early Earthquake Warning systems can reduce the loss of life and other economic impacts resulting from natural disasters or man-made calamities. Current systems could be further enhanced by neural network methods. A three-layer neural network model combined with an onsite method was deployed in this paper to improve the recognition time and detection time for large-scale earthquakes. The three-layer neural network early earthquake warning model adopted a vector feature design for sample events that occurred within a 150 km radius of the epicenters. The dataset used in this paper contained both destructive events and small-scale events, and all data were extracted from the IRIS database to properly train the model. In the training process, the backpropagation algorithm was used to adjust the weight matrices and bias matrices during each iteration. The information in all three channels of the seismometers served as the input to this model. Designed tests indicated that the model could correctly identify the scale of approximately 90 percent of the events, and that the early detection could provide informative evidence for public authorities to make further decisions. This suggests that a neural network model has the potential to strengthen current early warning systems, since the onsite method may greatly reduce response time and save more lives in such disasters.

  13. Results of external quality-assurance program for the National Atmospheric Deposition Program and National Trends Network during 1985

    USGS Publications Warehouse

    Brooks, M.H.; Schroder, L.J.; Willoughby, T.C.

    1988-01-01

    External quality assurance monitoring of the National Atmospheric Deposition Program (NADP) and National Trends Network (NTN) was performed by the U.S. Geological Survey during 1985. The monitoring consisted of three primary programs: (1) an intersite comparison program designed to assess the precision and accuracy of onsite pH and specific conductance measurements made by NADP and NTN site operators; (2) a blind audit sample program designed to assess the effect of routine field handling on the precision and bias of NADP and NTN wet deposition data; and (3) an interlaboratory comparison program designed to compare analytical data from the laboratory processing NADP and NTN samples with data produced by other laboratories routinely analyzing wet deposition samples and to provide estimates of individual laboratory precision. An average of 94% of the site operators participated in the four voluntary intersite comparisons during 1985. A larger percentage of participating site operators met the accuracy goal for specific conductance measurements (average, 87%) than for pH measurements (average, 67%). Overall precision was dependent on the actual specific conductance of the test solution and independent of the pH of the test solution. Data for the blind audit sample program indicated slight positive biases resulting from routine field handling for all analytes except specific conductance. These biases were not large enough to be significant for most data users. Data for the blind audit sample program also indicated that decreases in hydrogen ion concentration were accompanied by decreases in specific conductance. Precision estimates derived from the blind audit sample program indicate that the major source of uncertainty in wet deposition data is the routine field handling that each wet deposition sample receives. Results of the interlaboratory comparison program were similar to results of previous years' evaluations, indicating that the participating laboratories produced comparable data when they analyzed identical wet deposition samples, and that the laboratory processing NADP and NTN samples achieved the best analyte precision of the participating laboratories. (Author's abstract)

  14. Orphan therapies: making best use of postmarket data.

    PubMed

    Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling

    2014-08-01

    Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.
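
    In the notation implied by the definition above (the symbols are illustrative, not taken from the paper), the sample size savings ratio can be written as

        savings ratio = (N_non-sequential - N_sequential) / N_non-sequential = 1 - N_sequential / N_non-sequential

    and the analytic calendar time savings ratio is defined analogously, with the calendar times needed to reach the required sample sizes in place of the sample sizes themselves.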

  15. A multiobjective hybrid genetic algorithm for the capacitated multipoint network design problem.

    PubMed

    Lo, C C; Chang, W H

    2000-01-01

    The capacitated multipoint network design problem (CMNDP) is NP-complete. In this paper, a hybrid genetic algorithm for CMNDP is proposed. The multiobjective hybrid genetic algorithm (MOHGA) differs from other genetic algorithms (GAs) mainly in its selection procedure. The concept of subpopulation is used in MOHGA. Four subpopulations are generated according to the elitism reservation strategy, the shifting Prufer vector, the stochastic universal sampling, and the complete random method, respectively. Mixing these four subpopulations produces the next generation population. The MOHGA can effectively search the feasible solution space due to population diversity. The MOHGA has been applied to CMNDP. By examining computational and analytical results, we notice that the MOHGA can find most nondominated solutions and is much more effective and efficient than other multiobjective GAs.
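
    Of the four subpopulation operators listed above, stochastic universal sampling has a particularly compact form: n evenly spaced pointers are laid over the cumulative fitness, so selection pressure follows fitness with less sampling noise than repeated roulette-wheel draws. The sketch below is a generic Python rendering of that standard operator, not the authors' implementation.

        # Generic stochastic universal sampling (SUS).
        import random

        def sus_select(population, fitnesses, n):
            total = sum(fitnesses)
            spacing = total / n
            start = random.uniform(0, spacing)
            pointers = [start + i * spacing for i in range(n)]
            selected, cumulative, idx = [], 0.0, 0
            for p in pointers:                       # pointers are already in ascending order
                while cumulative + fitnesses[idx] < p:
                    cumulative += fitnesses[idx]
                    idx += 1
                selected.append(population[idx])
            return selected

        # Example: keep 4 parents out of 6 candidate designs.
        print(sus_select(["A", "B", "C", "D", "E", "F"], [5, 1, 3, 2, 6, 4], 4))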

  16. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce

    PubMed Central

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network’s initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data. PMID:27304987
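
    The role of PSO in the scheme above is to propose a good starting point for back-propagation. A minimal, generic particle swarm core is sketched below; the objective network_error(w), which would return the training error of the BP network for a flattened weight vector w, is a hypothetical stand-in, and the swarm constants are conventional defaults rather than the paper's settings.

        # Generic particle swarm optimisation core for choosing initial network weights.
        import numpy as np

        def pso(network_error, dim, n_particles=20, iters=100,
                w=0.7, c1=1.5, c2=1.5, bound=1.0):
            rng = np.random.default_rng(0)
            x = rng.uniform(-bound, bound, (n_particles, dim))   # candidate weight vectors
            v = np.zeros_like(x)                                 # particle velocities
            pbest = x.copy()
            pbest_val = np.array([network_error(p) for p in x])
            gbest = pbest[pbest_val.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = x + v
                vals = np.array([network_error(p) for p in x])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                gbest = pbest[pbest_val.argmin()].copy()
            return gbest                                         # initial weights handed to BP

        # Toy check with a quadratic objective standing in for the network error.
        print(pso(lambda w_vec: float(np.sum(w_vec ** 2)), dim=5)[:3])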

  17. Pulmonary Nodule Classification with Deep Convolutional Neural Networks on Computed Tomography Images.

    PubMed

    Li, Wei; Cao, Peng; Zhao, Dazhe; Wang, Junbo

    2016-01-01

    Computer aided detection (CAD) systems can assist radiologists by offering a second opinion on early diagnosis of lung cancer. Classification and feature representation play critical roles in false-positive reduction (FPR) in lung nodule CAD. We design a deep convolutional neural network method for nodule classification, which has the advantages of automatically learned feature representations and strong generalization ability. A network structure tailored to nodule images is proposed to recognize three types of nodules, that is, solid, semisolid, and ground glass opacity (GGO). The deep convolutional neural networks are trained on 62,492 region-of-interest (ROI) samples, including 40,772 nodules and 21,720 nonnodules, from the Lung Image Database Consortium (LIDC) database. Experimental results demonstrate the effectiveness of the proposed method in terms of sensitivity and overall accuracy, and show that it consistently outperforms the competing methods.

  18. Multiobjective design of aquifer monitoring networks for optimal spatial prediction and geostatistical parameter estimation

    NASA Astrophysics Data System (ADS)

    Alzraiee, Ayman H.; Bau, Domenico A.; Garcia, Luis A.

    2013-06-01

    Effective sampling of hydrogeological systems is essential in guiding groundwater management practices. Optimal sampling of groundwater systems has previously been formulated based on the assumption that heterogeneous subsurface properties can be modeled using a geostatistical approach. Therefore, the monitoring schemes have been developed to concurrently minimize the uncertainty in the spatial distribution of systems' states and parameters, such as the hydraulic conductivity K and the hydraulic head H, and the uncertainty in the geostatistical model of system parameters using a single objective function that aggregates all objectives. However, it has been shown that the aggregation of possibly conflicting objective functions is sensitive to the adopted aggregation scheme and may lead to distorted results. In addition, the uncertainties in geostatistical parameters affect the uncertainty in the spatial prediction of K and H according to a complex nonlinear relationship, which has often been ineffectively evaluated using a first-order approximation. In this study, we propose a multiobjective optimization framework to assist the design of monitoring networks of K and H with the goal of optimizing their spatial predictions and estimating the geostatistical parameters of the K field. The framework stems from the combination of a data assimilation (DA) algorithm and a multiobjective evolutionary algorithm (MOEA). The DA algorithm is based on the ensemble Kalman filter, a Monte-Carlo-based Bayesian update scheme for nonlinear systems, which is employed to approximate the posterior uncertainty in K, H, and the geostatistical parameters of K obtained by collecting new measurements. Multiple MOEA experiments are used to investigate the trade-off among design objectives and identify the corresponding monitoring schemes. The methodology is applied to design a sampling network for a shallow unconfined groundwater system located in Rocky Ford, Colorado. Results indicate that the effect of uncertainties associated with the geostatistical parameters on the spatial prediction might be significantly alleviated (by up to 80% of the prior uncertainty in K and by 90% of the prior uncertainty in H) by sampling evenly distributed measurements with a spatial measurement density of more than 1 observation per 60 m × 60 m grid block. In addition, exploration of the interaction of objective functions indicates that the ability of head measurements to reduce the uncertainty associated with the correlation scale is comparable to the effect of hydraulic conductivity measurements.

  19. Evaluating the effectiveness of care integration strategies in different healthcare systems in Latin America: the EQUITY-LA II quasi-experimental study protocol.

    PubMed

    Vázquez, María-Luisa; Vargas, Ingrid; Unger, Jean-Pierre; De Paepe, Pierre; Mogollón-Pérez, Amparo Susana; Samico, Isabella; Albuquerque, Paulette; Eguiguren, Pamela; Cisneros, Angelica Ivonne; Rovere, Mario; Bertolotto, Fernando

    2015-07-31

    Although fragmentation in the provision of healthcare is considered an important obstacle to effective care, there is scant evidence on best practices in care coordination in Latin America. The aim is to evaluate the effectiveness of a participatory shared care strategy in improving coordination across care levels and related care quality, in health services networks in six different healthcare systems of Latin America. A controlled before and after quasi-experimental study taking a participatory action research approach. In each country, two comparable healthcare networks were selected--intervention and control. The study contains four phases: (1) A baseline study to establish network performance in care coordination and continuity across care levels, using (A) qualitative methods: semi-structured interviews and focus groups with a criterion sample of health managers, professionals and users; and (B) quantitative methods: two questionnaire surveys with samples of 174 primary and secondary care physicians and 392 users with chronic conditions per network. Sample size was calculated to detect a proportion difference of 15% and 10%, before and after intervention (α=0.05; β=0.2 in a two-sided test); (2) a bottom-up participatory design and implementation of shared care strategies involving micro-level care coordination interventions to improve the adequacy of patient referral and information transfer. Strategies are selected through a participatory process by the local steering committee (local policymakers, health care network professionals, managers, users and researchers), supported by appropriate training; (3) Evaluation of the effectiveness of interventions by measuring changes in levels of care coordination and continuity 18 months after implementation, applying the same design as in the baseline study; (4) Cross-country comparative analysis. This study complies with international and national legal stipulations on ethics. Conditions of the study procedure were approved by each country's ethical committee. A variety of dissemination activities are implemented addressing the main stakeholders. Registration No.257 Clinical Research Register of the Santa Fe Health Department, Argentina. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  20. Evaluating the effectiveness of care integration strategies in different healthcare systems in Latin America: the EQUITY-LA II quasi-experimental study protocol

    PubMed Central

    Vázquez, María-Luisa; Vargas, Ingrid; Unger, Jean-Pierre; De Paepe, Pierre; Mogollón-Pérez, Amparo Susana; Samico, Isabella; Albuquerque, Paulette; Eguiguren, Pamela; Cisneros, Angelica Ivonne; Rovere, Mario; Bertolotto, Fernando

    2015-01-01

    Introduction Although fragmentation in the provision of healthcare is considered an important obstacle to effective care, there is scant evidence on best practices in care coordination in Latin America. The aim is to evaluate the effectiveness of a participatory shared care strategy in improving coordination across care levels and related care quality, in health services networks in six different healthcare systems of Latin America. Methods and analysis A controlled before and after quasi-experimental study taking a participatory action research approach. In each country, two comparable healthcare networks were selected—intervention and control. The study contains four phases: (1) A baseline study to establish network performance in care coordination and continuity across care levels, using (A) qualitative methods: semi-structured interviews and focus groups with a criterion sample of health managers, professionals and users; and (B) quantitative methods: two questionnaire surveys with samples of 174 primary and secondary care physicians and 392 users with chronic conditions per network. Sample size was calculated to detect a proportion difference of 15% and 10%, before and after intervention (α=0.05; β=0.2 in a two-sided test); (2) a bottom-up participatory design and implementation of shared care strategies involving micro-level care coordination interventions to improve the adequacy of patient referral and information transfer. Strategies are selected through a participatory process by the local steering committee (local policymakers, health care network professionals, managers, users and researchers), supported by appropriate training; (3) Evaluation of the effectiveness of interventions by measuring changes in levels of care coordination and continuity 18 months after implementation, applying the same design as in the baseline study; (4) Cross-country comparative analysis. Ethics and dissemination This study complies with international and national legal stipulations on ethics. Conditions of the study procedure were approved by each country's ethical committee. A variety of dissemination activities are implemented addressing the main stakeholders. Registration No.257 Clinical Research Register of the Santa Fe Health Department, Argentina. PMID:26231753

  1. Classification of Electrophotonic Images of Yogic Practice of Mudra through Neural Networks.

    PubMed

    Kumar, Kotikalapudi Shiva; Srinivasan, T M; Ilavarasu, Judu; Mondal, Biplob; Nagendra, H R

    2018-01-01

    Mudras signify gestures made with the hands, eyes, and body. Different configurations of the joining of fingertips are also termed mudras and are used by yoga practitioners for energy manipulation and for therapeutic applications. Electrophotonic imaging (EPI) captures the coronal discharge around the fingers that results from electron capture from the ten fingers. The coronal discharge around each fingertip is studied to understand the effect of mudra on EPI parameters. The participants were from Swami Vivekananda Yoga Anusandhana Samsthana and Sushrutha Ayurvedic Medical College, in Bengaluru, India. There were 29 volunteers in the mudra group and 32 in the control group. There were two designs: one was a pre-post design with a control group; the other was a pre-post design with repeated measures, with 18 individuals practicing mudra for 3 days. The duration of intervention for the pre-post design was 10 min on the 1st day, 15 min on the 2nd day, and 20 min on the 3rd day. A neural network classifier was used for classifying mudra and control samples. The EPI parameters, normalized area and average intensity, passed the Shapiro-Wilk test of normality. Cohen's d effect sizes were 0.988 and 0.974 for the mudra and control groups, respectively. Neural network-based analysis showed that the classification accuracy of the post-intervention samples for mudra and control varied from 85% to 100%, while the classification accuracy varied from 55% to 70% for the pre-intervention samples. The mudra intervention showed statistically significant changes in the mean values on the 3rd day compared to the 1st day. The effect size of the variations in mudra was greater than that of the control group. Mudra practice of longer duration showed a statistically significant change in the EPI parameter average intensity in comparison to the practice on the 1st day.

  2. Information theory-based decision support system for integrated design of multivariable hydrometric networks

    NASA Astrophysics Data System (ADS)

    Keum, Jongho; Coulibaly, Paulin

    2017-07-01

    Adequate and accurate hydrologic information from optimal hydrometric networks is an essential part of effective water resources management. Although the key hydrologic processes in the water cycle are interconnected, hydrometric networks (e.g., streamflow, precipitation, groundwater level) have been routinely designed individually. A decision support framework is proposed for integrated design of multivariable hydrometric networks. The proposed method is applied to design optimal precipitation and streamflow networks simultaneously. The epsilon-dominance hierarchical Bayesian optimization algorithm was combined with Shannon entropy of information theory to design and evaluate hydrometric networks. Specifically, the joint entropy from the combined networks was maximized to provide the most information, and the total correlation was minimized to reduce redundant information. To further optimize the efficiency between the networks, they were designed by maximizing the conditional entropy of the streamflow network given the information of the precipitation network. Compared to the traditional individual variable design approach, the integrated multivariable design method was able to determine more efficient optimal networks by avoiding the redundant stations. Additionally, four quantization cases were compared to evaluate their effects on the entropy calculations and the determination of the optimal networks. The evaluation results indicate that the quantization methods should be selected after careful consideration for each design problem since the station rankings and the optimal networks can change accordingly.
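
    Once the station records are quantized, the entropy terms used above have direct empirical estimates. The Python sketch below computes the joint entropy of a set of stations and their total correlation (the sum of the marginal entropies minus the joint entropy); the integer-binned toy data are illustrative and the code is a generic sketch, not the authors' implementation.

        # Empirical joint entropy and total correlation for quantized station records.
        import numpy as np
        from collections import Counter

        def entropy_from_counts(counts, n):
            p = np.array(list(counts.values()), dtype=float) / n
            return float(-np.sum(p * np.log2(p)))

        def joint_entropy(data):
            """Rows are time steps, columns are stations, values are integer bins."""
            return entropy_from_counts(Counter(map(tuple, data)), data.shape[0])

        def total_correlation(data):
            n = data.shape[0]
            marginal_sum = sum(entropy_from_counts(Counter(data[:, j]), n)
                               for j in range(data.shape[1]))
            return marginal_sum - joint_entropy(data)

        # Two stations with partially redundant records (illustrative bins).
        data = np.array([[0, 0], [1, 1], [1, 1], [2, 1], [0, 0], [2, 2]])
        print(joint_entropy(data), total_correlation(data))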

  3. Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model

    NASA Astrophysics Data System (ADS)

    Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.

    2017-09-01

    The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
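
    Spatial Simulated Annealing, as used above, perturbs one gauge location at a time and accepts occasional worsening moves with a probability that decays as the "temperature" is lowered. The loop below is a generic sketch of that idea; mean_ked_variance(locations) is a hypothetical stand-in for the space-time averaged KED variance, and the square study area, cooling rate and shift size are arbitrary.

        # Generic spatial simulated annealing for gauge placement.
        import math, random

        def ssa(initial_locations, mean_ked_variance, n_iter=5000,
                t0=1.0, cooling=0.999, max_shift=5.0, bounds=(0.0, 100.0)):
            current = [tuple(p) for p in initial_locations]
            f_cur = mean_ked_variance(current)
            best, f_best, t = list(current), f_cur, t0
            for _ in range(n_iter):
                cand = list(current)
                i = random.randrange(len(cand))              # move a single gauge
                x, y = cand[i]
                cand[i] = (min(max(x + random.uniform(-max_shift, max_shift), bounds[0]), bounds[1]),
                           min(max(y + random.uniform(-max_shift, max_shift), bounds[0]), bounds[1]))
                f_new = mean_ked_variance(cand)
                # accept improvements always, worsenings with Metropolis probability
                if f_new < f_cur or random.random() < math.exp((f_cur - f_new) / t):
                    current, f_cur = cand, f_new
                    if f_cur < f_best:
                        best, f_best = list(current), f_cur
                t *= cooling                                 # geometric cooling schedule
            return best, f_best

        # Toy check: two gauges drift apart under a repulsion-style objective.
        toy_objective = lambda pts: -((pts[0][0] - pts[1][0]) ** 2 + (pts[0][1] - pts[1][1]) ** 2)
        print(ssa([(50.0, 50.0), (51.0, 51.0)], toy_objective, n_iter=2000))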

  4. Engineering Design of ITER Prototype Fast Plant System Controller

    NASA Astrophysics Data System (ADS)

    Goncalves, B.; Sousa, J.; Carvalho, B.; Rodrigues, A. P.; Correia, M.; Batista, A.; Vega, J.; Ruiz, M.; Lopez, J. M.; Rojo, R. Castro; Wallander, A.; Utzel, N.; Neto, A.; Alves, D.; Valcarcel, D.

    2011-08-01

    The ITER control, data access and communication (CODAC) design team identified the need for two types of plant systems. A slow control plant system is based on industrial automation technology with maximum sampling rates below 100 Hz, and a fast control plant system is based on embedded technology with higher sampling rates and more stringent real-time requirements than that required for slow controllers. The latter is applicable to diagnostics and plant systems in closed-control loops whose cycle times are below 1 ms. Fast controllers will be dedicated industrial controllers with the ability to supervise other fast and/or slow controllers, interface to actuators and sensors and, if necessary, high performance networks. Two prototypes of a fast plant system controller specialized for data acquisition and constrained by ITER technological choices are being built using two different form factors. This prototyping activity contributes to the Plant Control Design Handbook effort of standardization, specifically regarding fast controller characteristics. Envisaging a general purpose fast controller design, diagnostic use cases with specific requirements were analyzed and will be presented along with the interface with CODAC and sensors. The requirements and constraints that real-time plasma control imposes on the design were also taken into consideration. Functional specifications and technology neutral architecture, together with its implications on the engineering design, were considered. The detailed engineering design compliant with ITER standards was performed and will be discussed in detail. Emphasis will be given to the integration of the controller in the standard CODAC environment. Requirements for the EPICS IOC providing the interface to the outside world, the prototype decisions on form factor, real-time operating system, and high-performance networks will also be discussed, as well as the requirements for data streaming to CODAC for visualization and archiving.

  5. Relation of Shallow Water Quality in the Central Oklahoma Aquifer to Geology, Soils, and Land Use

    USGS Publications Warehouse

    Rea, Alan H.; Christenson, Scott C.; Andrews, William J.

    2001-01-01

    The purpose of this report is to identify, describe, and explain relations between natural and land-use factors and ground-water quality in the Central Oklahoma aquifer NAWQA study unit. Natural factors compared to water quality included the geologic unit in which the sampled wells were completed and the properties of soils in the areas surrounding the wells. Land-use factors included types of land use and population densities surrounding sampled wells. Ground-water quality was characterized by concentrations of inorganic constituents, and by frequencies of detection of volatile organic compounds and pesticides. Water-quality data were from samples collected from wells 91 meters (300 feet) or less in depth as part of Permian and Quaternary geologic unit survey networks and from an urban survey network. Concentrations of many inorganic constituents were significantly related to geology. In addition, concentrations of many inorganic constituents were greater in water from wells from the Oklahoma City urban sampling network than in water from wells from low-density survey networks designed to evaluate ambient water quality in the Central Oklahoma aquifer study unit. However, sampling bias may have been induced by differences in hydrogeologic factors between sampling networks, limiting the ability to determine land-use effects on concentrations of inorganic constituents. Frequencies of detection of pesticide and volatile organic compounds (VOC's) in ground-water samples were related to land use and population density, with these compounds being more frequently detected in densely-populated areas. Geology and soil properties were not significantly correlated to pesticide or VOC occurrence in ground water. Lesser frequencies of detection of pesticides in water from wells in rural areas may be due to low to moderate use of those compounds on agricultural lands in the study unit, with livestock production being the primary agricultural activity. There are many possible sources of pesticides and VOC's in the urban areas of Central Oklahoma. Because only existing water-supply wells were sampled, it is not clear from the data collected whether pesticides and VOC's: (1) occur in low concentrations throughout upper portions of the aquifer in urban areas, or (2) are present in ground water only in the immediate vicinity of the wells due to back-flow of those chemicals into the wells or to inflow of surface runoff containing those compounds around cement seals and through gravel packs surrounding well casings.

  6. Water-quality assessment of the Delmarva Peninsula, Delaware, Maryland, and Virginia; results of investigations, 1987-91

    USGS Publications Warehouse

    Shedlock, Robert J.; Denver, J.M.; Hayes, M.A.; Hamilton, P.A.; Koterba, M.T.; Bachman, L.J.; Phillips, P.J.; Banks, W.S.

    1999-01-01

    A regional ground-water-quality assessment of the Delmarva Peninsula was conducted as a pilot study for the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program. The study focused on the surficial aquifer and used both existing data and new data collected between 1988 and 1991. The new water samples were analyzed for major ions, nutrients, radon, volatile organic compounds, and a suite of herbicides and insecticides commonly used on corn, soybeans, and small grains. Samples also were collected from wells completed in deeper, confined aquifers and from selected streams, and analyzed for most of these constituents. The study employed a multi-scale network design. Regional networks were chosen to provide broad geographic coverage of the study area and to ensure that the major hydrogeologic settings of the surficial aquifer were adequately represented. Both the existing data and the data from samples collected during the study showed that agricultural activities had affected the quality of water in the surficial aquifer over most of the Peninsula.

  7. Computerized stratified random site-selection approaches for design of a ground-water-quality sampling network

    USGS Publications Warehouse

    Scott, J.C.

    1990-01-01

    Computer software was written to randomly select sites for a ground-water-quality sampling network. The software uses digital cartographic techniques and subroutines from a proprietary geographic information system. The report presents the approaches, computer software, and sample applications. It is often desirable to collect ground-water-quality samples from various areas in a study region that have different values of a spatial characteristic, such as land-use or hydrogeologic setting. A stratified network can be used for testing hypotheses about relations between spatial characteristics and water quality, or for calculating statistical descriptions of water-quality data that account for variations that correspond to the spatial characteristic. In the software described, a study region is subdivided into areal subsets that have a common spatial characteristic to stratify the population into several categories from which sampling sites are selected. Different numbers of sites may be selected from each category of areal subsets. A population of potential sampling sites may be defined by either specifying a fixed population of existing sites, or by preparing an equally spaced population of potential sites. In either case, each site is identified with a single category, depending on the value of the spatial characteristic of the areal subset in which the site is located. Sites are selected from one category at a time. One of two approaches may be used to select sites. Sites may be selected randomly, or the areal subsets in the category can be grouped into cells and sites selected randomly from each cell.
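
    The core of the approach above is stratified random selection: each candidate site carries the value of the spatial characteristic for the areal subset it falls in, and a fixed number of sites is drawn at random from each category. A minimal Python sketch follows (illustrative site identifiers and categories; not the USGS software).

        # Minimal stratified random site selection.
        import random
        from collections import defaultdict

        def stratified_selection(sites, n_per_category, seed=0):
            """sites: list of (site_id, category); n_per_category: dict of category -> count."""
            random.seed(seed)
            by_category = defaultdict(list)
            for site_id, category in sites:
                by_category[category].append(site_id)
            return {category: random.sample(by_category[category], n)
                    for category, n in n_per_category.items()}

        sites = [("W1", "urban"), ("W2", "urban"), ("W3", "agricultural"),
                 ("W4", "agricultural"), ("W5", "agricultural"), ("W6", "rangeland")]
        print(stratified_selection(sites, {"urban": 1, "agricultural": 2, "rangeland": 1}))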

  8. Network Sampling and Classification: An Investigation of Network Model Representations

    PubMed Central

    Airoldi, Edoardo M.; Bai, Xue; Carley, Kathleen M.

    2011-01-01

    Methods for generating a random sample of networks with desired properties are important tools for the analysis of social, biological, and information networks. Algorithm-based approaches to sampling networks have received a great deal of attention in recent literature. Most of these algorithms are based on simple intuitions that associate the full features of connectivity patterns with specific values of only one or two network metrics. Substantive conclusions are crucially dependent on this association holding true. However, the extent to which this simple intuition holds true is not yet known. In this paper, we examine the association between the connectivity patterns that a network sampling algorithm aims to generate and the connectivity patterns of the generated networks, measured by an existing set of popular network metrics. We find that different network sampling algorithms can yield networks with similar connectivity patterns. We also find that the alternative algorithms for the same connectivity pattern can yield networks with different connectivity patterns. We argue that conclusions based on simulated network studies must focus on the full features of the connectivity patterns of a network instead of on the limited set of network metrics for a specific network type. This fact has important implications for network data analysis: for instance, implications related to the way significance is currently assessed. PMID:21666773

  9. A Comparative Study on Perceived Effects of Communication Networks in Acquiring International Orientations.

    ERIC Educational Resources Information Center

    Haavelsrud, Magnus

    A study was designed to test the hypothesis that different communication stages between nations--primitive, traditional, modern, and neomodern--provide important variables for explaining differences in pre-adults' conception of war in different countries. Although the two samples used in the study were drawn from two cultures which fall into the…

  10. The Assessment of Public Issue Perception: Exploration of a Three-Tiered, Social-Network Based Methodology in the Champlain Basin.

    ERIC Educational Resources Information Center

    Gore, Peter H.; And Others

    Design, application, and interpretation of a three-tiered sampling framework as a strategy for eliciting public participation in planning and program implementation is presented, with emphasis on implications for federal programs which mandate citizen participation (for example, Level B planning of Water Resources Planning Act, Federal Water…

  11. Macrostructure from Microstructure: Generating Whole Systems from Ego Networks

    PubMed Central

    Smith, Jeffrey A.

    2014-01-01

    This paper presents a new simulation method to make global network inference from sampled data. The proposed simulation method takes sampled ego network data and uses Exponential Random Graph Models (ERGM) to reconstruct the features of the true, unknown network. After describing the method, the paper presents two validity checks of the approach: the first uses the 20 largest Add Health networks while the second uses the Sociology Coauthorship network in the 1990's. For each test, I take random ego network samples from the known networks and use my method to make global network inference. I find that my method successfully reproduces the properties of the networks, such as distance and main component size. The results also suggest that simpler, baseline models provide considerably worse estimates for most network properties. I end the paper by discussing the bounds/limitations of ego network sampling. I also discuss possible extensions to the proposed approach. PMID:25339783

  12. External quality-assurance programs managed by the U.S. Geological Survey in support of the National Atmospheric Deposition Program/National Trends Network

    USGS Publications Warehouse

    Latysh, Natalie E.; Wetherbee, Gregory A.

    2005-01-01

    The U.S. Geological Survey, Branch of Quality Systems, operates the external quality-assurance programs for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN). Beginning in 1978, six different programs have been implemented: the intersite-comparison program, the blind-audit program, the sample-handling evaluation program, the field-audit program, the interlaboratory-comparison program, and the collocated-sampler program. Each program was designed to measure error contributed by specific components in the data-collection process. The intersite-comparison program, which was discontinued in 2004, was designed to assess the accuracy and reliability of field pH and specific-conductance measurements made by site operators. The blind-audit and sample-handling evaluation programs, which also were discontinued in 2002 and 2004, respectively, assessed contamination that may result from sampling equipment and routine handling and processing of the wet-deposition samples. The field-audit program assesses the effects of sample handling, processing, and field exposure. The interlaboratory-comparison program evaluates bias and precision of analytical results produced by the contract laboratory for NADP, the Illinois State Water Survey, Central Analytical Laboratory, and compares its performance with the performance of international laboratories. The collocated-sampler program assesses the overall precision of wet-deposition data collected by NADP/NTN. This report documents historical operations and the operating procedures for each of these external quality-assurance programs. USGS quality-assurance information allows NADP/NTN data users to discern between actual environmental trends and inherent measurement variability.

  13. Social networks and health-related quality of life: a population based study among older adults.

    PubMed

    Gallegos-Carrillo, Katia; Mudgal, Jyoti; Sánchez-García, Sergio; Wagner, Fernando A; Gallo, Joseph J; Salmerón, Jorge; García-Peña, Carmen

    2009-01-01

    To examine the relationship between components of social networks and health-related quality of life (HRQL) in older adults with and without depressive symptoms. Comparative cross-sectional study with data from the cohort study 'Integral Study of Depression', carried out in Mexico City during 2004. The sample was selected through a multi-stage probability design. HRQL was measured with the SF-36. Geriatric Depression Scale (GDS) and the Short Anxiety Screening Test (SAST) determined depressive symptoms and anxiety. T-test and multiple linear regressions were conducted. Older adults with depressive symptoms had the lowest scores in all HRQL scales. A larger network of close relatives and friends was associated with better HRQL on several scales. Living alone did not significantly affect HRQL level, in either the study or comparison group. A positive association between some components of social networks and good HRQL exists even in older adults with depressive symptoms.

  14. Marine Vehicle Sensor Network Architecture and Protocol Designs for Ocean Observation

    PubMed Central

    Zhang, Shaowei; Yu, Jiancheng; Zhang, Aiqun; Yang, Lei; Shu, Yeqiang

    2012-01-01

    The micro-scale and meso-scale ocean dynamic processes, which are nonlinear and have large variability, have a significant impact on fisheries, natural resources, and marine climatology. A rapid, refined and sophisticated observation system is therefore needed in marine scientific research. The maneuverability and controllability of mobile sensor platforms make them a preferred choice for establishing ocean observing networks, compared to static sensor observing platforms. In this study, marine vehicles are utilized as the nodes of mobile sensor networks for coverage sampling of a regional ocean area and for ocean feature tracking. A synoptic analysis of marine vehicle dynamic control, multi-vehicle mission assignment and path planning methods, and ocean feature tracking and observing techniques is given. Combined with the observation plan in the South China Sea, we provide an overview of the mobile sensor networks established with marine vehicles, and the corresponding simulation results. PMID:22368475

  15. Integrated pathway-based transcription regulation network mining and visualization based on gene expression profiles.

    PubMed

    Kibinge, Nelson; Ono, Naoaki; Horie, Masafumi; Sato, Tetsuo; Sugiura, Tadao; Altaf-Ul-Amin, Md; Saito, Akira; Kanaya, Shigehiko

    2016-06-01

    Conventionally, workflows examining transcription regulation networks from gene expression data involve distinct analytical steps. There is a need for pipelines that unify data mining and inference deduction into a singular framework to enhance interpretation and hypothesis generation. We propose a workflow that merges network construction with gene expression data mining focusing on regulation processes in the context of transcription factor driven gene regulation. The pipeline implements pathway-based modularization of expression profiles into functional units to improve biological interpretation. The integrated workflow was implemented as a web application software (TransReguloNet) with functions that enable pathway visualization and comparison of transcription factor activity between sample conditions defined in the experimental design. The pipeline merges differential expression, network construction, pathway-based abstraction, clustering and visualization. The framework was applied in analysis of actual expression datasets related to lung, breast and prostate cancer. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Snoopy--a unifying Petri net framework to investigate biomolecular networks.

    PubMed

    Rohr, Christian; Marwan, Wolfgang; Heiner, Monika

    2010-04-01

    To investigate biomolecular networks, Snoopy provides a unifying Petri net framework comprising a family of related Petri net classes. Models can be hierarchically structured, allowing for the mastering of larger networks. To move easily between the qualitative, stochastic and continuous modelling paradigms, models can be converted into each other. We get models sharing structure, but specialized by their kinetic information. The analysis and iterative reverse engineering of biomolecular networks is supported by the simultaneous use of several Petri net classes, while the graphical user interface adapts dynamically to the active one. Built-in animation and simulation are complemented by exports to various analysis tools. Snoopy facilitates the addition of new Petri net classes thanks to its generic design. Our tool with Petri net samples is available free of charge for non-commercial use at http://www-dssz.informatik.tu-cottbus.de/snoopy.html; supported operating systems: Mac OS X, Windows and Linux (selected distributions).

  17. A natural experiment of social network formation and dynamics.

    PubMed

    Phan, Tuan Q; Airoldi, Edoardo M

    2015-05-26

    Social networks affect many aspects of life, including the spread of diseases, the diffusion of information, the workers' productivity, and consumers' behavior. Little is known, however, about how these networks form and change. Estimating causal effects and mechanisms that drive social network formation and dynamics is challenging because of the complexity of engineering social relations in a controlled environment, endogeneity between network structure and individual characteristics, and the lack of time-resolved data about individuals' behavior. We leverage data from a sample of 1.5 million college students on Facebook, who wrote more than 630 million messages and 590 million posts over 4 years, to design a long-term natural experiment of friendship formation and social dynamics in the aftermath of a natural disaster. The analysis shows that affected individuals are more likely to strengthen interactions, while maintaining the same number of friends as unaffected individuals. Our findings suggest that the formation of social relationships may serve as a coping mechanism to deal with high-stress situations and build resilience in communities.

  18. A natural experiment of social network formation and dynamics

    PubMed Central

    Phan, Tuan Q.; Airoldi, Edoardo M.

    2015-01-01

    Social networks affect many aspects of life, including the spread of diseases, the diffusion of information, the workers' productivity, and consumers' behavior. Little is known, however, about how these networks form and change. Estimating causal effects and mechanisms that drive social network formation and dynamics is challenging because of the complexity of engineering social relations in a controlled environment, endogeneity between network structure and individual characteristics, and the lack of time-resolved data about individuals' behavior. We leverage data from a sample of 1.5 million college students on Facebook, who wrote more than 630 million messages and 590 million posts over 4 years, to design a long-term natural experiment of friendship formation and social dynamics in the aftermath of a natural disaster. The analysis shows that affected individuals are more likely to strengthen interactions, while maintaining the same number of friends as unaffected individuals. Our findings suggest that the formation of social relationships may serve as a coping mechanism to deal with high-stress situations and build resilience in communities. PMID:25964337

  19. A Community "Hub" Network Intervention for HIV Stigma Reduction: A Case Study.

    PubMed

    Prinsloo, Catharina D; Greeff, Minrie

    2016-01-01

    We describe the implementation of a community "hub" network intervention to reduce HIV stigma in the Tlokwe Municipality, North West Province, South Africa. A holistic case study design was used, focusing on community members with no differentiation by HIV status. Participants were recruited through accessibility sampling. Data analyses used open coding and document analysis. Findings showed that the HIV stigma-reduction community hub network intervention successfully activated mobilizers to initiate change; lessened the stigma experience for people living with HIV; and addressed HIV stigma in a whole community using a combination of strategies including individual and interpersonal levels, social networks, and the public. Further research is recommended to replicate and enhance the intervention. In particular, the hub network system should be extended, the intervention period should be longer, there should be a stronger support system for mobilizers, and the multiple strategy approach should be continued on individual and social levels. Copyright © 2016 Association of Nurses in AIDS Care. Published by Elsevier Inc. All rights reserved.

  20. Across North America tracer experiment (ANATEX): Sampling and analysis

    NASA Astrophysics Data System (ADS)

    Draxler, R. R.; Dietz, R.; Lagomarsino, R. J.; Start, G.

    Between 5 January 1987 and 29 March 1987, there were 33 releases of different tracers from each of two sites: Glasgow, MT and St. Cloud, MN. The perfluorocarbon tracers were routinely released in a 3-h period every 2.5 days, alternating between daytime and night-time tracer releases. Ground-level air samples of 24-h duration were taken at 77 sites mostly located near rawinsonde stations east of 105°W and between 26°N and 55°N. Weekly air samples were taken at 12 remote sites between San Diego, CA and Pt. Barrow, AK and between Norway and the Canary Islands. Short-term 6-h samples were collected at ground level and 200 m AGL along an arc of five towers between Tulsa, OK and Green Bay, WI. Aircraft sampling within several hundred kilometers of both tracer release sites was used to establish the initial tracer path. Experimental design required improved sampler performance, new tracers with lower atmospheric backgrounds, and improvements in analytic precision. The advances to the perfluorocarbon tracer system are discussed in detail. Results from the tracer sampling showed that the average and peak concentrations measured over the daily ground-level sampling network were consistent with what would be calculated using mass conservative approaches. However, ground-level samples from individual tracer patterns showed considerable complexity due to vertical stability or the interaction of the tracer plumes with low pressure and frontal systems. These systems could pass right through the tracer plume without appreciable effect. Aircraft tracer measurements are used to confirm the initial tracer trajectory when the narrow plume may miss the coarser spaced ground-level sampling network. Tower tracer measurements showed a more complex temporal structure than evident from the longer duration ground-level sampling sites. Few above-background plume measurements were evident in the more distant remote sampling network due to larger than expected uncertainties in the ambient background concentrations.

  1. Doctors' opinions on clinical coordination between primary and secondary care in the Catalan healthcare system.

    PubMed

    Aller, Marta-Beatriz; Vargas, Ingrid; Coderch, Jordi; Calero, Sebastià; Cots, Francesc; Abizanda, Mercè; Colomés, Lluís; Farré, Joan; Vázquez-Navarrete, María-Luisa

    2017-08-26

    To analyse doctors' opinions on clinical coordination between primary and secondary care in different healthcare networks and on the factors influencing it. A qualitative descriptive-interpretative study was conducted, based on semi-structured interviews. A two-stage theoretical sample was designed: 1) healthcare networks with different management models; 2) primary care and secondary care doctors in each network. Final sample size (n = 50) was reached by saturation. A thematic content analysis was conducted. In all networks doctors perceived that primary and secondary care given to patients was coordinated in terms of information transfer, consistency and accessibility to SC following a referral. However, some problems emerged, related to difficulties in accessing non-urgent secondary care, changes in prescriptions, and the inadequacy of some referrals across care levels. Doctors identified the following factors: 1) organizational influencing factors: coordination is facilitated by mechanisms that facilitate information transfer, communication, rapid access and physical proximity that fosters positive attitudes towards collaboration; coordination is hindered by the insufficient time to use mechanisms, unshared incentives in prescription and, in two networks, the change in the organizational model; 2) professional factors: clinical skills and attitudes towards coordination. Although doctors perceive that primary and secondary care is coordinated, they also highlighted problems. Identified factors offer valuable insights on where to direct organizational efforts to improve coordination. Copyright © 2017. Published by Elsevier España, S.L.U.

  2. Topology design and performance analysis of an integrated communication network

    NASA Technical Reports Server (NTRS)

    Li, V. O. K.; Lam, Y. F.; Hou, T. C.; Yuen, J. H.

    1985-01-01

    A research study on the topology design and performance analysis for the Space Station Information System (SSIS) network is conducted. It begins with a survey of existing research efforts in network topology design. Then a new approach for topology design is presented. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. The algorithm for generating subsets is described in detail, and various aspects of the overall design procedure are discussed. Two more efficient versions of this algorithm (applicable in specific situations) are also given. Next, two important aspects of network performance analysis, network reliability and message delays, are discussed. A new model is introduced to study the reliability of a network with dependent failures. For message delays, a collection of formulas from existing research results is given to compute or estimate the delays of messages in a communication network without making the independence assumption. The design algorithm coded in PASCAL is included as an appendix.
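
    The central idea above, enumerating candidate designs in non-decreasing order of total cost until one passes the acceptance test, can be rendered generically with a priority queue: after sorting components by cost, a subset whose highest chosen index is i spawns two successors, one that adds component i+1 and one that swaps component i for i+1. The sketch below follows that standard scheme; the component list and the is_acceptable check are hypothetical, and this is not the paper's PASCAL code.

        # Enumerate component subsets in non-decreasing total cost until one is acceptable.
        import heapq

        def cheapest_acceptable_design(components, is_acceptable):
            comps = sorted(components, key=lambda c: c[1])       # (name, cost) sorted by cost
            costs = [cost for _, cost in comps]
            heap = [(costs[0], (0,))]                            # (total cost, chosen indices)
            while heap:
                total, idx = heapq.heappop(heap)
                subset = [comps[i][0] for i in idx]
                if is_acceptable(subset):
                    return subset, total
                last = idx[-1]
                if last + 1 < len(costs):
                    # grow: also include the next-cheapest component
                    heapq.heappush(heap, (total + costs[last + 1], idx + (last + 1,)))
                    # swap: replace the last chosen component with the next-cheapest one
                    heapq.heappush(heap, (total - costs[last] + costs[last + 1],
                                          idx[:-1] + (last + 1,)))
            return None, None

        # Toy run: accept any design that contains both a switch and a router.
        comps = [("linkA", 3), ("linkB", 5), ("switch", 2), ("router", 4)]
        print(cheapest_acceptable_design(comps, lambda s: "switch" in s and "router" in s))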

  3. Using stochastic activity networks to study the energy feasibility of automatic weather stations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cassano, Luca; Cesarini, Daniel; Avvenuti, Marco

    Automatic Weather Stations (AWSs) are systems equipped with a number of environmental sensors and communication interfaces used to monitor harsh environments, such as glaciers and deserts. Designing such systems is challenging, since designers have to maximize the amount of sampled and transmitted data while considering the energy needs of the system that, in most cases, is powered by rechargeable batteries and exploits energy harvesting, e.g., solar cells and wind turbines. To support designers of AWSs in the definition of the software tasks and of the hardware configuration of the AWS, we designed and implemented an energy-aware simulator of such systems. The simulator relies on the Stochastic Activity Networks (SANs) formalism and has been developed using the Möbius tool. In this paper we first show how we used the SAN formalism to model the various components of an AWS, we then report results from an experiment carried out to validate the simulator against a real-world AWS and we finally show some examples of usage of the proposed simulator.

  4. Distributed Computer Networks in Support of Complex Group Practices

    PubMed Central

    Wess, Bernard P.

    1978-01-01

    The economics of medical computer networks are presented in context with the patient care and administrative goals of medical networks. Design alternatives and network topologies are discussed with an emphasis on medical network design requirements in distributed data base design, telecommunications, satellite systems, and software engineering. The success of the medical computer networking technology is predicated on the ability of medical and data processing professionals to design comprehensive, efficient, and virtually impenetrable security systems to protect data bases, network access and services, and patient confidentiality.

  5. iSANLA: intelligent sensor and actuator network for life science applications.

    PubMed

    Schloesser, Mario; Schnitzer, Andreas; Ying, Hong; Silex, Carmen; Schiek, Michael

    2008-01-01

In the fields of neurological rehabilitation and neurophysiological research there is a strong need for miniaturized, multichannel, battery-driven, wireless networking DAQ systems enabling real-time digital signal processing and feedback experiments. For the scientific investigation of the passive, auditory-based 3D orientation of barn owls and the research on vegetative locomotor coordination of Parkinson's disease patients during rehabilitation, we developed our 'intelligent Sensor and Actuator Network for Life science Application' (iSANLA) system. Implemented on the ultra-low-power MSP430 microcontroller, sample rates of up to 96 kHz have been realised for single-channel DAQ. The system includes lossless local data storage of up to 4 GB. With outer dimensions of 20 mm per edge and a weight of less than 15 g including the lithium-ion battery, our modularly designed sensor node is capable of up to eight-channel recordings at an 8 kHz sample rate each and provides sufficient computational power for digital signal processing, ready to start our first mobile experiments. For wireless mobility, a compact communication protocol based on the IEEE 802.15.4 wireless standard with net data rates of up to 141 kbit/s has been implemented. To merge the losslessly acquired data of the distributed iNODEs, a time synchronization protocol has been developed that preserves causality. Hence the necessary time-synchronous start of data acquisition inside a network of multiple sensors, with a precision better than one sample period at the highest sample rate, has been realized.

  6. A Compressed Sensing-Based Wearable Sensor Network for Quantitative Assessment of Stroke Patients

    PubMed Central

    Yu, Lei; Xiong, Daxi; Guo, Liquan; Wang, Jiping

    2016-01-01

Clinical rehabilitation assessment is an important part of the therapy process because it is the premise for prescribing suitable rehabilitation interventions. However, the commonly used assessment scales have the following two drawbacks: (1) they are susceptible to subjective factors; (2) they only have several rating levels and are influenced by a ceiling effect, making it impossible to exactly detect any further improvement in the movement. Meanwhile, energy constraints are a primary design consideration in wearable sensor network systems since they are often battery-operated. Traditionally, for wearable sensor network systems that follow the Shannon/Nyquist sampling theorem, there are many data that need to be sampled and transmitted. This paper proposes a novel wearable sensor network system to monitor and quantitatively assess the upper limb motion function, based on compressed sensing technology. With the sparse representation model, less data is transmitted to the computer than with traditional systems. The experimental results show that the accelerometer signals of Bobath handshake and shoulder touch exercises can be compressed, and the length of the compressed signal is less than 1/3 of the raw signal length. More importantly, the reconstruction errors have no influence on the predictive accuracy of the Brunnstrom stage classification model. The results also indicate that the proposed system not only reduces the amount of data during the sampling and transmission processes, but also that the reconstructed accelerometer signals can be used for quantitative assessment without any loss of useful information. PMID:26861337

  7. A Compressed Sensing-Based Wearable Sensor Network for Quantitative Assessment of Stroke Patients.

    PubMed

    Yu, Lei; Xiong, Daxi; Guo, Liquan; Wang, Jiping

    2016-02-05

Clinical rehabilitation assessment is an important part of the therapy process because it is the premise for prescribing suitable rehabilitation interventions. However, the commonly used assessment scales have the following two drawbacks: (1) they are susceptible to subjective factors; (2) they only have several rating levels and are influenced by a ceiling effect, making it impossible to exactly detect any further improvement in the movement. Meanwhile, energy constraints are a primary design consideration in wearable sensor network systems since they are often battery-operated. Traditionally, for wearable sensor network systems that follow the Shannon/Nyquist sampling theorem, there are many data that need to be sampled and transmitted. This paper proposes a novel wearable sensor network system to monitor and quantitatively assess the upper limb motion function, based on compressed sensing technology. With the sparse representation model, less data is transmitted to the computer than with traditional systems. The experimental results show that the accelerometer signals of Bobath handshake and shoulder touch exercises can be compressed, and the length of the compressed signal is less than 1/3 of the raw signal length. More importantly, the reconstruction errors have no influence on the predictive accuracy of the Brunnstrom stage classification model. The results also indicate that the proposed system not only reduces the amount of data during the sampling and transmission processes, but also that the reconstructed accelerometer signals can be used for quantitative assessment without any loss of useful information.
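    To make the compressed-sensing idea in the record above concrete, the sketch below compresses a synthetic one-dimensional "accelerometer" window with random projections at roughly the 1/3 ratio reported above and reconstructs it with orthogonal matching pursuit over a DCT basis. It is a generic illustration with assumed data, not the paper's pipeline or its Brunnstrom-stage classifier.

    ```python
    import numpy as np
    from scipy.fftpack import idct
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)

    # synthetic signal that is approximately sparse in the DCT domain (hypothetical data)
    n = 256
    t = np.arange(n)
    x = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 7 * t / n)

    # compress: m random projections with m roughly n/3, as in the ratio quoted above
    m = n // 3
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix
    y = Phi @ x                                      # the data actually transmitted

    # reconstruct: x = Psi @ s with s sparse, so solve y = (Phi @ Psi) s for sparse s
    Psi = idct(np.eye(n), norm="ortho", axis=0)      # columns are DCT atoms
    A = Phi @ Psi
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False).fit(A, y)
    x_hat = Psi @ omp.coef_

    print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
    ```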

  8. Enhancing the Anti-Solvatochromic Two-Photon Fluorescence for Cirrhosis Imaging by Forming a Hydrogen-Bond Network.

    PubMed

    Ren, Tian-Bing; Xu, Wang; Zhang, Qian-Ling; Zhang, Xing-Xing; Wen, Si-Yu; Yi, Hai-Bo; Yuan, Lin; Zhang, Xiao-Bing

    2018-06-18

Two-photon imaging is an emerging tool for biomedical research and clinical diagnostics. Electron donor-acceptor (D-A) type molecules are the most widely employed two-photon scaffolds. However, current D-A type fluorophores suffer from solvatochromic quenching in aqueous biological samples. To address this issue, we devised a novel class of D-A type green fluorescent protein (GFP) chromophore analogues that form a hydrogen-bond network in water to improve the two-photon efficiency. Our design results in two-photon chalcone (TPC) dyes with 0.80 quantum yield and large two-photon action cross section (210 GM) in water. This strategy to form hydrogen bonds can be generalized to design two-photon materials with anti-solvatochromic fluorescence. To demonstrate the improved in vivo imaging, we designed a sulfide probe based on TPC dyes and monitored endogenous H₂S generation and scavenging in the cirrhotic rat liver for the first time. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. NEVESIM: event-driven neural simulation framework with a Python interface.

    PubMed

    Pecevski, Dejan; Kappel, David; Jonke, Zeno

    2014-01-01

    NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++, and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we will present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We will also give an overview of the Python user interface, its basic commands and constructs, and also discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies.

  10. NEVESIM: event-driven neural simulation framework with a Python interface

    PubMed Central

    Pecevski, Dejan; Kappel, David; Jonke, Zeno

    2014-01-01

    NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++, and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we will present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We will also give an overview of the Python user interface, its basic commands and constructs, and also discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies. PMID:25177291
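    The event-driven strategy described in the NEVESIM records above can be illustrated with a minimal priority-queue simulation in which neuron state is touched only when a spike event is delivered. The sketch below uses non-leaky threshold neurons and invented names; it is not NEVESIM's API or its exact neuron model.

    ```python
    import heapq

    def simulate(synapses, external_inputs, threshold=1.0, t_max=100.0):
        """Minimal event-driven simulation of non-leaky threshold neurons.

        synapses[i]       = list of (target, weight, delay) outgoing connections of neuron i
        external_inputs   = list of (time, target, weight) input events
        Returns output spikes as (time, neuron), processed in time order. Illustrative only."""
        n = len(synapses)
        potential = [0.0] * n
        queue = list(external_inputs)            # event queue of charge deliveries
        heapq.heapify(queue)
        spikes = []
        while queue:
            t, j, w = heapq.heappop(queue)
            if t > t_max:
                break
            potential[j] += w
            if potential[j] >= threshold:        # neuron j fires
                potential[j] = 0.0               # reset after the spike
                spikes.append((t, j))
                for target, weight, delay in synapses[j]:
                    heapq.heappush(queue, (t + delay, target, weight))
        return spikes

    # tiny usage example: two neurons in a chain
    synapses = [[(1, 1.0, 0.5)], []]
    print(simulate(synapses, external_inputs=[(0.0, 0, 1.0)]))   # [(0.0, 0), (0.5, 1)]
    ```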

  11. Quality of surface water in Missouri, water year 2009

    USGS Publications Warehouse

    Barr, Miya N.

    2010-01-01

The U.S. Geological Survey, in cooperation with the Missouri Department of Natural Resources, designs and operates a series of monitoring stations on streams throughout Missouri known as the Ambient Water-Quality Monitoring Network. During the 2009 water year (October 1, 2008, through September 30, 2009), data were collected at 75 stations: 69 Ambient Water-Quality Monitoring Network stations, 2 U.S. Geological Survey National Stream Quality Accounting Network stations, 1 spring sampled in cooperation with the U.S. Forest Service, and 3 stations sampled in cooperation with the Elk River Watershed Improvement Association. Dissolved oxygen, specific conductance, water temperature, suspended solids, suspended sediment, fecal coliform bacteria, Escherichia coli bacteria, dissolved nitrate plus nitrite, total phosphorus, dissolved and total recoverable lead and zinc, and select pesticide compound summaries are presented for 72 of these stations. The stations primarily have been classified into groups corresponding to the physiography of the State, primary land use, or unique station types. In addition, a summary of hydrologic conditions in the State including peak discharges, monthly mean discharges, and seven-day low flow is presented.

  12. Designing Networks that are Capable of Self-Healing and Adapting

    DTIC Science & Technology

    2017-04-01

Using tools from statistical mechanics, combinatorics, Boolean networks, and numerical simulations, and inspired by design principles from biological networks, this effort derives principles for self-healing networks and their applications, and constructs an all-possible-paths model for network adaptation.

  13. A strategy to sample nutrient dynamics across the terrestrial-aquatic interface at NEON sites

    NASA Astrophysics Data System (ADS)

    Hinckley, E. S.; Goodman, K. J.; Roehm, C. L.; Meier, C. L.; Luo, H.; Ayres, E.; Parnell, J.; Krause, K.; Fox, A. M.; SanClements, M.; Fitzgerald, M.; Barnett, D.; Loescher, H. W.; Schimel, D.

    2012-12-01

The construction of the National Ecological Observatory Network (NEON) across the U.S. creates the opportunity for researchers to investigate biogeochemical transformations and transfers across ecosystems at local-to-continental scales. Here, we examine a subset of NEON sites where atmospheric, terrestrial, and aquatic observations will be collected for 30 years. These sites are located across a range of hydrological regimes, including flashy rain-driven, shallow sub-surface (perched, pipe flow, etc.), and deep groundwater, which likely affect the chemical forms and quantities of reactive elements that are retained and/or mobilized across landscapes. We present a novel spatial and temporal sampling design that enables researchers to evaluate long-term trends in carbon, nitrogen, and phosphorus biogeochemical cycles under these different hydrological regimes. This design focuses on inputs to the terrestrial system (atmospheric deposition, bulk precipitation), transfers (soil-water and groundwater sources/chemistry), and outputs (surface water, and evapotranspiration). We discuss both the data that will be collected as part of the current NEON design and how the research community can supplement that design through collaborative efforts, such as providing additional datasets (including soil biogeochemical processes and trace gas emissions) and developing collaborative research networks. Current engagement with the research community working at the terrestrial-aquatic interface is critical to NEON's success as we begin construction, to ensure that high-quality, standardized and useful data are not only made available, but inspire further, cutting-edge research.

  14. Design of a WSN for the Sampling of Environmental Variability in Complex Terrain

    PubMed Central

    Martín-Tardío, Miguel A.; Felicísimo, Ángel M.

    2014-01-01

In-situ environmental parameter measurements using sensor systems connected to a wireless network have become widespread, but the problem of monitoring large and mountainous areas by means of a wireless sensor network (WSN) is not well resolved. The main reasons for this are: (1) the environmental variability distribution is unknown in the field; (2) without this knowledge, a huge number of sensors would be necessary to ensure the complete coverage of the environmental variability and (3) WSN design requirements, for example, effective connectivity (intervisibility), limiting distances and controlled redundancy, are usually solved by trial and error. Using temperature as the target environmental variable, we propose: (1) a method to determine the homogeneous environmental classes to be sampled using the digital elevation model (DEM) and geometric simulations and (2) a procedure to determine an effective WSN design in complex terrain in terms of the number of sensors, redundancy, cost and spatial distribution. The proposed methodology, based on geographic information systems and binary integer programming, can be easily adapted to a wide range of applications that need exhaustive and continuous environmental monitoring with high spatial resolution. The results show that the WSN design is perfectly suited to the topography and the technical specifications of the sensors, and provides a complete coverage of the environmental variability in terms of Sun exposure. However, these results still need to be validated in the field and the proposed procedure must be refined. PMID:25412218
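    The coverage problem in this abstract can be posed, under simplifying assumptions, as a small binary integer program: pick a minimum-cost set of candidate sites so that every homogeneous environmental class is covered at least once. The sketch below uses the PuLP modeling library and hypothetical site/class data; the paper's intervisibility and redundancy constraints are omitted.

    ```python
    import pulp

    # hypothetical candidate sites, the environmental classes each covers, and deployment costs
    coverage = {
        "site_A": {"class1", "class2"},
        "site_B": {"class2", "class3"},
        "site_C": {"class1", "class3", "class4"},
        "site_D": {"class4"},
    }
    cost = {"site_A": 1.0, "site_B": 1.0, "site_C": 1.5, "site_D": 0.8}
    classes = set().union(*coverage.values())

    prob = pulp.LpProblem("wsn_placement", pulp.LpMinimize)
    x = {s: pulp.LpVariable(f"x_{s}", cat="Binary") for s in coverage}

    # objective: minimize total deployment cost
    prob += pulp.lpSum(cost[s] * x[s] for s in coverage)

    # each environmental class must be covered by at least one selected site
    for c in classes:
        prob += pulp.lpSum(x[s] for s in coverage if c in coverage[s]) >= 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    selected = [s for s in coverage if x[s].value() > 0.5]
    print("selected sites:", selected)
    ```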

  15. Resonance Energy Transfer-Based Molecular Switch Designed Using a Systematic Design Process Based on Monte Carlo Methods and Markov Chains

    NASA Astrophysics Data System (ADS)

    Rallapalli, Arjun

    A RET network consists of a network of photo-active molecules called chromophores that can participate in inter-molecular energy transfer called resonance energy transfer (RET). RET networks are used in a variety of applications including cryptographic devices, storage systems, light harvesting complexes, biological sensors, and molecular rulers. In this dissertation, we focus on creating a RET device called closed-diffusive exciton valve (C-DEV) in which the input to output transfer function is controlled by an external energy source, similar to a semiconductor transistor like the MOSFET. Due to their biocompatibility, molecular devices like the C-DEVs can be used to introduce computing power in biological, organic, and aqueous environments such as living cells. Furthermore, the underlying physics in RET devices are stochastic in nature, making them suitable for stochastic computing in which true random distribution generation is critical. In order to determine a valid configuration of chromophores for the C-DEV, we developed a systematic process based on user-guided design space pruning techniques and built-in simulation tools. We show that our C-DEV is 15x better than C-DEVs designed using ad hoc methods that rely on limited data from prior experiments. We also show ways in which the C-DEV can be improved further and how different varieties of C-DEVs can be combined to form more complex logic circuits. Moreover, the systematic design process can be used to search for valid chromophore network configurations for a variety of RET applications. We also describe a feasibility study for a technique used to control the orientation of chromophores attached to DNA. Being able to control the orientation can expand the design space for RET networks because it provides another parameter to tune their collective behavior. While results showed limited control over orientation, the analysis required the development of a mathematical model that can be used to determine the distribution of dipoles in a given sample of chromophore constructs. The model can be used to evaluate the feasibility of other potential orientation control techniques.

  16. Monitoring of persistent organic pollutants in Africa. Part 2: design of a network to monitor the continental and intercontinental background.

    PubMed

    Lammel, G; Dobrovolný, P; Dvorská, A; Chromá, K; Brázdil, R; Holoubek, I; Hosek, J

    2009-11-01

A network is designed for the study of long-term trends of the continental background of persistent organic pollutants (POPs) in Africa and of the intercontinental background resulting from long-range transport of contaminants from European, South Asian, and other potential source regions, as well as for watching supposedly pristine regions, i.e., the Southern Ocean and Antarctica. The results of a pilot-phase sampling programme in 2008 and meteorological and climatological information from the period 1961-2007 were used to apply objective criteria for the selection of stations for the monitoring network: of the original 26 stations, six were rejected because of suspected strong local sources of POPs and three others because of local meteorological effects, which may at times prevent long-range transported air from reaching the sampling site. Representativeness of the meteorological patterns during the pilot phase with respect to climatology was assessed by comparing the more local airflow situation as given by climatological vs. observed wind roses and by comparing backward trajectories with the climatological wind (NCEP/NCAR re-analyses). With minor exceptions, advection to the nine inspected stations was typical for the present-day climate during the pilot phase, 2008. Six to nine stations would satisfactorily cover large and densely populated regions of North-eastern, West and East Africa and its neighbouring seas, the Mediterranean, the Northern and Equatorial Atlantic Ocean, the Western Indian Ocean and the Southern Ocean. Among the more densely populated areas, Southern Cameroon, parts of the Abyssinian plateau and most of the Great Lakes area would not be covered. The potential of the network is not hampered by ongoing long-term changes in advection to the selected stations, as these hardly affect the coverage of the target areas.

  17. Stochastic Stability of Sampled Data Systems with a Jump Linear Controller

    NASA Technical Reports Server (NTRS)

    Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven

    2004-01-01

    In this paper an equivalence between the stochastic stability of a sampled-data system and its associated discrete-time representation is established. The sampled-data system consists of a deterministic, linear, time-invariant, continuous-time plant and a stochastic, linear, time-invariant, discrete-time, jump linear controller. The jump linear controller models computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. This paper shows that the known equivalence between the stability of a deterministic sampled-data system and the associated discrete-time representation holds even in a stochastic framework.
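    For readers less familiar with this setting, a standard formulation is sketched below as a hedged illustration; the notation is generic and not necessarily the authors' exact model. Discretizing the plant over the sample period $T$ and closing the loop with a mode-dependent gain driven by a finite-state Markov chain $\theta_k$ gives

    \[
    x_{k+1} = \bigl(A_d + B_d K_{\theta_k}\bigr)\,x_k,
    \qquad A_d = e^{AT},
    \qquad B_d = \int_0^{T} e^{As}\,\mathrm{d}s \; B .
    \]

    Writing $\Lambda_i = A_d + B_d K_i$ and letting $p_{ij}$ denote the transition probabilities of $\theta_k$, a classical test for mean-square stability of such discrete-time Markov jump linear systems is that the spectral radius of the block matrix $\mathcal{A}$ with blocks $[\mathcal{A}]_{ij} = p_{ji}\,(\Lambda_j \otimes \Lambda_j)$ be strictly less than one.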

  18. Information and communications technology, culture, and medical universities; organizational culture and netiquette among academic staff.

    PubMed

    Yarmohammadian, Mohammad Hossein; Iravani, Hoorsana; Abzari, Mehdi

    2012-01-01

Netiquette is appropriate behavioral etiquette when communicating through computer networks or virtual space. Identification of a dominant organizational culture and its relationship with a network culture offers applied guidelines to top managers of the university to expand communications and develop a learning organization through the use of the internet. The aim of this research was to examine the relationship between netiquette and organizational culture among faculty members of the Isfahan University of Medical Sciences (IUMS), Iran. To achieve this aim, the research method in this study was correlational research, which belongs to the category of descriptive survey research. The target population comprised 594 faculty members of the IUMS, from which a sample of 150 was randomly selected based on a simple stratified sampling method. For collecting the required data, two researcher-made questionnaires were formulated. While the first questionnaire measured the selected sample members' organizational culture according to Rabbin's model (1999), the latter was designed in the Health Management and Economic Research Center (HMERC) to evaluate netiquette. The reliability of the questionnaires was computed with Cronbach's alpha coefficient and was 0.97 and 0.89, respectively. Ultimately, SPSS Version 15 was used for the statistical analysis of the data. The findings revealed that organizational culture and netiquette were below the average level among the sample members, signifying a considerable gap in the mean. In spite of that, there was no significant relationship between netiquette and the organizational culture of the faculty members. Emphasizing the importance of cultural preparation and network-user training, this research suggests expanding network culture rules at IUMS and official organizational communication through internet networks, in order to promote university netiquette and ease of communication on the basis of appropriate etiquette.

  19. Information and communications technology, culture, and medical universities; organizational culture and netiquette among academic staff

    PubMed Central

    Yarmohammadian, Mohammad Hossein; Iravani, Hoorsana; Abzari, Mehdi

    2012-01-01

Introduction: Netiquette is appropriate behavioral etiquette when communicating through computer networks or virtual space. Identification of a dominant organizational culture and its relationship with a network culture offers applied guidelines to top managers of the university to expand communications and develop a learning organization through the use of the internet. The aim of this research was to examine the relationship between netiquette and organizational culture among faculty members of the Isfahan University of Medical Sciences (IUMS), Iran. Materials and Methods: To achieve this aim, the research method in this study was correlational research, which belongs to the category of descriptive survey research. The target population comprised 594 faculty members of the IUMS, from which a sample of 150 was randomly selected based on a simple stratified sampling method. For collecting the required data, two researcher-made questionnaires were formulated. While the first questionnaire measured the selected sample members' organizational culture according to Rabbin's model (1999), the latter was designed in the Health Management and Economic Research Center (HMERC) to evaluate netiquette. The reliability of the questionnaires was computed with Cronbach's alpha coefficient and was 0.97 and 0.89, respectively. Ultimately, SPSS Version 15 was used for the statistical analysis of the data. Results: The findings revealed that organizational culture and netiquette were below the average level among the sample members, signifying a considerable gap in the mean. In spite of that, there was no significant relationship between netiquette and the organizational culture of the faculty members. Conclusion: Emphasizing the importance of cultural preparation and network-user training, this research suggests expanding network culture rules at IUMS and official organizational communication through internet networks, in order to promote university netiquette and ease of communication on the basis of appropriate etiquette. PMID:23555109

  20. Analyzing hidden populations online: topic, emotion, and social network of HIV-related users in the largest Chinese online community.

    PubMed

    Liu, Chuchu; Lu, Xin

    2018-01-05

Traditional survey methods are limited in the study of hidden populations because such populations are hard to access: there is no sampling frame, the topics are sensitive, reporting errors are common, sample sizes are small, and so on. The rapid growth of online communities, whose members interact with others via the Internet, has generated large amounts of data, offering new opportunities for understanding hidden populations with unprecedented sample sizes and richness of information. In this study, we try to understand the multidimensional characteristics of a hidden population by analyzing the massive data generated in an online community. By carefully designing crawlers, we retrieved a complete dataset from the "HIV bar," the largest bar related to HIV on the Baidu Tieba platform, covering all records from January 2005 to August 2016. Through natural language processing and social network analysis, we explored the psychology, behavior and demands of the online HIV population and examined the network community structure. In HIV communities, the average topic similarity among members is positively correlated with network efficiency (r = 0.70, p < 0.001), indicating that the closer the social distance between members of the community, the more similar their topics. The proportion of negative users in each community is around 60%, weakly correlated with community size (r = 0.25, p = 0.002). Users suspecting an initial HIV infection or recently exposed to high-risk behaviors tend to seek help and advice on the social networking platform rather than immediately going to a hospital for blood tests. Online communities have generated copious amounts of data offering new opportunities for understanding hidden populations with unprecedented sample sizes and richness of information. It is recommended that support through online services for HIV/AIDS consultation and diagnosis be improved to avoid privacy concerns and social discrimination in China.

  1. Semi-Autonomous Small Unmanned Aircraft Systems for Sampling Tornadic Supercell Thunderstorms

    NASA Astrophysics Data System (ADS)

    Elston, Jack S.

    This work describes the development of a network-centric unmanned aircraft system (UAS) for in situ sampling of supercell thunderstorms. UAS have been identified as a well-suited platform for meteorological observations given their portability, endurance, and ability to mitigate atmospheric disturbances. They represent a unique tool for performing targeted sampling in regions of a supercell thunderstorm previously unreachable through other methods. Doppler radar can provide unique measurements of the wind field in and around supercell thunderstorms. In order to exploit this capability, a planner was developed that can optimize ingress trajectories for severe storm penetration. The resulting trajectories were examined to determine the feasibility of such a mission, and to optimize ingress in terms of flight time and exposure to precipitation. A network-centric architecture was developed to handle the large amount of distributed data produced during a storm sampling mission. Creation of this architecture was performed through a bottom-up design approach which reflects and enhances the interplay between networked communication and autonomous aircraft operation. The advantages of the approach are demonstrated through several field and hardware-in-the-loop experiments containing different hardware, networking protocols, and objectives. Results are provided from field experiments involving the resulting network-centric architecture. An airmass boundary was sampled in the Collaborative Colorado Nebraska Unmanned Aircraft Experiment (CoCoNUE). Utilizing lessons learned from CoCoNUE, a new concept of operations (CONOPS) and UAS were developed to perform in situ sampling of supercell thunderstorms. Deployment during the Verification of the Origins of Rotation in Tornadoes Experiment 2 (VORTEX2) resulted in the first ever sampling of the airmass associated with the rear flank downdraft of a tornadic supercell thunderstorm by a UAS. Hardware-in-the-loop simulation capability was added to the UAS to enable further assessment of the system and CONOPS. The simulation combines a full six degree-of-freedom aircraft dynamic model with wind and precipitation data from simulations of severe convective storms. Interfaces were written to involve as much of the system's field hardware as possible, including the creation of a simulated radar product server. A variety of simulations were conducted to evaluate different aspects of the CONOPS used for the 2010 VORTEX2 field campaign.

  2. Towards a Framework for Evolvable Network Design

    NASA Astrophysics Data System (ADS)

    Hassan, Hoda; Eltarras, Ramy; Eltoweissy, Mohamed

The layered Internet architecture that had long guided network design and protocol engineering was an “interconnection architecture” defining a framework for interconnecting networks rather than a model for generic network structuring and engineering. We claim that the approach of abstracting the network in terms of an internetwork hinders the thorough understanding of the network's salient characteristics and emergent behavior, and thus impedes the design evolution required to address extreme scale, heterogeneity, and complexity. This paper reports on our work in progress that aims to: 1) Investigate the problem space in terms of the factors and decisions that influenced the design and development of computer networks; 2) Sketch the core principles for designing complex computer networks; and 3) Propose a model and related framework for building evolvable, adaptable and self-organizing networks. We will adopt a bottom-up strategy primarily focusing on the building unit of the network model, which we call the “network cell”. The model is inspired by natural complex systems. A network cell is intrinsically capable of specialization, adaptation and evolution. Subsequently, we propose CellNet, a framework for evolvable network design. We outline scenarios for using the CellNet framework to enhance the legacy Internet protocol stack.

  3. Automated Clean Chemistry for Bulk Analysis of Environmental Swipe Samples - FY17 Year End Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ticknor, Brian W.; Metzger, Shalina C.; McBay, Eddy H.

Sample preparation methods for mass spectrometry are being automated using commercial-off-the-shelf (COTS) equipment to shorten lengthy and costly manual chemical purification procedures. This development addresses a serious need in the International Atomic Energy Agency’s Network of Analytical Laboratories (IAEA NWAL) to increase efficiency in the Bulk Analysis of Environmental Samples for Safeguards program with a method that allows unattended, overnight operation. In collaboration with Elemental Scientific Inc., the prepFAST-MC2 was designed based on COTS equipment. It was modified for uranium/plutonium separations using renewable columns packed with Eichrom TEVA and UTEVA resins, with a chemical separation method based on the Oak Ridge National Laboratory (ORNL) NWAL chemical procedure. The newly designed prepFAST-SR has had several upgrades compared with the original prepFAST-MC2. Both systems are currently installed in the Ultra-Trace Forensics Science Center at ORNL.

  4. An analog silicon retina with multichip configuration.

    PubMed

    Kameda, Seiji; Yagi, Tetsuya

    2006-01-01

The neuromorphic silicon retina is a novel analog very large scale integrated circuit that emulates the structure and the function of the retinal neuronal circuit. In a previous study [1], we fabricated a neuromorphic silicon retina in which sample/hold circuits were embedded to generate fluctuation-suppressed outputs. The applications of this silicon retina, however, are limited because of a low spatial resolution and computational variability. In this paper, we have fabricated a multichip silicon retina in which the functional network circuits are divided into two chips: the photoreceptor network chip (P chip) and the horizontal cell network chip (H chip). The output images of the P chip are transferred to the H chip as analog voltages through the line-parallel transfer bus. The sample/hold circuits embedded in the P and H chips compensate for the pattern noise generated in the circuits, including the analog communication pathway. Using the multichip silicon retina together with an off-chip differential amplifier, spatial filtering of the image with odd- and even-symmetric orientation-selective receptive fields was carried out in real time. The analog data transfer method in the present multichip silicon retina is useful for designing analog neuromorphic multichip systems that mimic the hierarchical structure of neuronal networks in the visual system.

  5. A deep 3D residual CNN for false-positive reduction in pulmonary nodule detection.

    PubMed

    Jin, Hongsheng; Li, Zongyao; Tong, Ruofeng; Lin, Lanfen

    2018-05-01

    The automatic detection of pulmonary nodules using CT scans improves the efficiency of lung cancer diagnosis, and false-positive reduction plays a significant role in the detection. In this paper, we focus on the false-positive reduction task and propose an effective method for this task. We construct a deep 3D residual CNN (convolution neural network) to reduce false-positive nodules from candidate nodules. The proposed network is much deeper than the traditional 3D CNNs used in medical image processing. Specifically, in the network, we design a spatial pooling and cropping (SPC) layer to extract multilevel contextual information of CT data. Moreover, we employ an online hard sample selection strategy in the training process to make the network better fit hard samples (e.g., nodules with irregular shapes). Our method is evaluated on 888 CT scans from the dataset of the LUNA16 Challenge. The free-response receiver operating characteristic (FROC) curve shows that the proposed method achieves a high detection performance. Our experiments confirm that our method is robust and that the SPC layer helps increase the prediction accuracy. Additionally, the proposed method can easily be extended to other 3D object detection tasks in medical image processing. © 2018 American Association of Physicists in Medicine.
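    As a rough, generic illustration of the kind of building block such a network stacks (not the paper's architecture, and without its SPC layer or hard-sample selection strategy), a basic 3D residual block in PyTorch looks like this:

    ```python
    import torch
    from torch import nn

    class ResidualBlock3D(nn.Module):
        """A basic 3D residual block: two 3x3x3 convolutions with a skip connection."""
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm3d(channels)
            self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm3d(channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            identity = x
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + identity)   # residual (skip) connection

    # usage: a hypothetical candidate-nodule patch of 32^3 voxels with 16 feature channels
    x = torch.randn(1, 16, 32, 32, 32)
    print(ResidualBlock3D(16)(x).shape)        # torch.Size([1, 16, 32, 32, 32])
    ```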

  6. Resource constrained design of artificial neural networks using comparator neural network

    NASA Technical Reports Server (NTRS)

    Wah, Benjamin W.; Karnik, Tanay S.

    1992-01-01

    We present a systematic design method executed under resource constraints for automating the design of artificial neural networks using the back error propagation algorithm. Our system aims at finding the best possible configuration for solving the given application with proper tradeoff between the training time and the network complexity. The design of such a system is hampered by three related problems. First, there are infinitely many possible network configurations, each may take an exceedingly long time to train; hence, it is impossible to enumerate and train all of them to completion within fixed time, space, and resource constraints. Second, expert knowledge on predicting good network configurations is heuristic in nature and is application dependent, rendering it difficult to characterize fully in the design process. A learning procedure that refines this knowledge based on examples on training neural networks for various applications is, therefore, essential. Third, the objective of the network to be designed is ill-defined, as it is based on a subjective tradeoff between the training time and the network cost. A design process that proposes alternate configurations under different cost-performance tradeoff is important. We have developed a Design System which schedules the available time, divided into quanta, for testing alternative network configurations. Its goal is to select/generate and test alternative network configurations in each quantum, and find the best network when time is expended. Since time is limited, a dynamic schedule that determines the network configuration to be tested in each quantum is developed. The schedule is based on relative comparison of predicted training times of alternative network configurations using comparator network paradigm. The comparator network has been trained to compare training times for a large variety of traces of TSSE-versus-time collected during back-propagation learning of various applications.

  7. Earth, Air, Fire, & Water: Resource Guide 6. The Arts and Learning, Interdisciplinary Resources for Education.

    ERIC Educational Resources Information Center

    Lee, Ronald T., Ed.

    This resource guide is intended to aid practitioners in the design of new curriculum units or the enrichment of existing units by suggesting activities and resources in the topic areas of earth, air, fire, and water. Special projects and trips relating to these topic areas are proposed. A sample arts networking system used to integrate various…

  8. The Perceived Value of Networking through an EMBA: A Study of Taiwanese Women

    ERIC Educational Resources Information Center

    Chen, Aurora; Doherty, Noeleen; Vinnicombe, Susan

    2012-01-01

    Purpose: This paper seeks to explore the perceived value of an executive MBA (EMBA) to the development of knowing-who competency for Taiwanese women managers. Design/methodology/approach: This qualitative research drew on in-depth interviews with a sample of 18 female alumni across three business schools in Taiwan. Analysis, using NVivo 8.0,…

  9. Model-Based Adaptive Event-Triggered Control of Strict-Feedback Nonlinear Systems.

    PubMed

    Li, Yuan-Xin; Yang, Guang-Hong

    2018-04-01

This paper is concerned with the adaptive event-triggered control problem of nonlinear continuous-time systems in strict-feedback form. By using an event-sampled neural network (NN) to approximate the unknown nonlinear function, an adaptive model and an associated event-triggered controller are designed by exploiting the backstepping method. In the proposed method, the feedback signals and the NN weights are aperiodically updated only when the event-triggered condition is violated. A positive lower bound on the minimum intersample time is guaranteed to avoid an accumulation point. The closed-loop stability of the resulting nonlinear impulsive dynamical system is rigorously proved via Lyapunov analysis under an adaptive event-sampling condition. Compared with the traditional adaptive backstepping design with a fixed sample period, the event-triggered method samples the state and updates the NN weights only when it is necessary. Therefore, the number of transmissions can be significantly reduced. Finally, two simulation examples are presented to show the effectiveness of the proposed control method.
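    As a hedged illustration of the sampling rule described above (the paper's exact condition may differ), a commonly used relative-threshold event trigger updates the controller only at the instants

    \[
    t_{k+1} = \inf\bigl\{\, t > t_k \;:\; \lVert x(t) - x(t_k) \rVert \ge \sigma \lVert x(t) \rVert + \varepsilon \,\bigr\},
    \]

    where $x(t_k)$ is the last transmitted state held by the controller, $\sigma \in (0,1)$ trades transmission rate against performance, and $\varepsilon > 0$ (or an explicit dwell time) is what provides the positive lower bound on the inter-sample times.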

  10. Gender Differences of Brain Glucose Metabolic Networks Revealed by FDG-PET: Evidence from a Large Cohort of 400 Young Adults

    PubMed Central

    Li, Kai; Zhu, Hong; Qi, Rongfeng; Zhang, Zhiqiang; Lu, Guangming

    2013-01-01

    Background Gender differences of the human brain are an important issue in neuroscience research. In recent years, an increasing amount of evidence has been gathered from noninvasive neuroimaging studies supporting a sexual dimorphism of the human brain. However, there is a lack of imaging studies on gender differences of brain metabolic networks based on a large population sample. Materials and Methods FDG PET data of 400 right-handed, healthy subjects, including 200 females (age: 25∼45 years, mean age±SD: 40.9±3.9 years) and 200 age-matched males were obtained and analyzed in the present study. We first investigated the regional differences of brain glucose metabolism between genders using a voxel-based two-sample t-test analysis. Subsequently, we investigated the gender differences of the metabolic networks. Sixteen metabolic covariance networks using seed-based correlation were analyzed. Seven regions showing significant regional metabolic differences between genders, and nine regions conventionally used in the resting-state network studies were selected as regions-of-interest. Permutation tests were used for comparing within- and between-network connectivity between genders. Results Compared with the males, females showed higher metabolism in the posterior part and lower metabolism in the anterior part of the brain. Moreover, there were widely distributed patterns of the metabolic networks in the human brain. In addition, significant gender differences within and between brain glucose metabolic networks were revealed in the present study. Conclusion This study provides solid data that reveal gender differences in regional brain glucose metabolism and brain glucose metabolic networks. These observations might contribute to the better understanding of the gender differences in human brain functions, and suggest that gender should be included as a covariate when designing experiments and explaining results of brain glucose metabolic networks in the control and experimental individuals or patients. PMID:24358312

  11. Gender differences of brain glucose metabolic networks revealed by FDG-PET: evidence from a large cohort of 400 young adults.

    PubMed

    Hu, Yuxiao; Xu, Qiang; Li, Kai; Zhu, Hong; Qi, Rongfeng; Zhang, Zhiqiang; Lu, Guangming

    2013-01-01

Gender differences of the human brain are an important issue in neuroscience research. In recent years, an increasing amount of evidence has been gathered from noninvasive neuroimaging studies supporting a sexual dimorphism of the human brain. However, there is a lack of imaging studies on gender differences of brain metabolic networks based on a large population sample. FDG PET data of 400 right-handed, healthy subjects, including 200 females (age: 25∼45 years, mean age ± SD: 40.9 ± 3.9 years) and 200 age-matched males were obtained and analyzed in the present study. We first investigated the regional differences of brain glucose metabolism between genders using a voxel-based two-sample t-test analysis. Subsequently, we investigated the gender differences of the metabolic networks. Sixteen metabolic covariance networks using seed-based correlation were analyzed. Seven regions showing significant regional metabolic differences between genders, and nine regions conventionally used in the resting-state network studies were selected as regions-of-interest. Permutation tests were used for comparing within- and between-network connectivity between genders. Compared with the males, females showed higher metabolism in the posterior part and lower metabolism in the anterior part of the brain. Moreover, there were widely distributed patterns of the metabolic networks in the human brain. In addition, significant gender differences within and between brain glucose metabolic networks were revealed in the present study. This study provides solid data that reveal gender differences in regional brain glucose metabolism and brain glucose metabolic networks. These observations might contribute to the better understanding of the gender differences in human brain functions, and suggest that gender should be included as a covariate when designing experiments and explaining results of brain glucose metabolic networks in the control and experimental individuals or patients.

  12. An improved sampling method of complex network

    NASA Astrophysics Data System (ADS)

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

Sampling subnets is an important topic of complex network research, and the sampling method influences the structure and characteristics of the resulting subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method can preserve the similarity between the sampled subnet and the original network in terms of degree distribution, connectivity rate and average shortest path. The method is applicable to situations where prior knowledge about the degree distribution of the original network is insufficient.
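    For orientation, a plain snowball sampler with several random seeds, written with networkx, gives a feel for the family of methods being improved here; the RMSC procedure itself is not reproduced, and all parameters below are illustrative.

    ```python
    import random
    import networkx as nx

    def snowball_with_restarts(G, seeds=3, rounds=2, keep_per_node=5, seed=0):
        """Simple snowball sampling from multiple random seeds (illustration only,
        not the paper's RMSC procedure). From each seed, expand `rounds` waves,
        keeping at most `keep_per_node` randomly chosen neighbours per visited node."""
        rng = random.Random(seed)
        sampled = set(rng.sample(list(G.nodes), seeds))
        frontier = set(sampled)
        for _ in range(rounds):
            next_frontier = set()
            for v in frontier:
                nbrs = list(G.neighbors(v))
                rng.shuffle(nbrs)
                next_frontier.update(nbrs[:keep_per_node])
            frontier = next_frontier - sampled
            sampled |= frontier
        return G.subgraph(sampled).copy()

    # usage: compare degree statistics of the subnet against the original network
    G = nx.barabasi_albert_graph(1000, 3, seed=1)
    sub = snowball_with_restarts(G, seeds=5, rounds=2)
    print(sub.number_of_nodes(), sub.number_of_edges())
    ```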

  13. Pumping tests in networks of multilevel sampling wells: Motivation and methodology

    USGS Publications Warehouse

    Butler, J.J.; McElwee, C.D.; Bohling, Geoffrey C.

    1999-01-01

    The identification of spatial variations in hydraulic conductivity (K) on a scale of relevance for transport investigations has proven to be a considerable challenge. Recently, a new field method for the estimation of interwell variations in K has been proposed. This method, hydraulic tomography, essentially consists of a series of short‐term pumping tests performed in a tomographic‐like arrangement. In order to fully realize the potential of this approach, information about lateral and vertical variations in pumping‐induced head changes (drawdown) is required with detail that has previously been unobtainable in the field. Pumping tests performed in networks of multilevel sampling (MLS) wells can provide data of the needed density if drawdown can accurately and rapidly be measured in the small‐diameter tubing used in such wells. Field and laboratory experiments show that accurate transient drawdown data can be obtained in the small‐diameter MLS tubing either directly with miniature fiber‐optic pressure sensors or indirectly using air‐pressure transducers. As with data from many types of hydraulic tests, the quality of drawdown measurements from MLS tubing is quite dependent on the effectiveness of well development activities. Since MLS ports of the standard design are prone to clogging and are difficult to develop, alternate designs are necessary to ensure accurate drawdown measurements. Initial field experiments indicate that drawdown measurements obtained from pumping tests performed in MLS networks have considerable potential for providing valuable information about spatial variations in hydraulic conductivity.

  14. 42 CFR 405.2110 - Designation of ESRD networks.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Designation of ESRD networks. 405.2110 Section 405... End-Stage Renal Disease (ESRD) Services § 405.2110 Designation of ESRD networks. CMS designated ESRD networks in which the approved ESRD facilities collectively provide the necessary care for ESRD patients...

  15. 42 CFR 405.2110 - Designation of ESRD networks.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 2 2011-10-01 2011-10-01 false Designation of ESRD networks. 405.2110 Section 405... End-Stage Renal Disease (ESRD) Services § 405.2110 Designation of ESRD networks. CMS designated ESRD networks in which the approved ESRD facilities collectively provide the necessary care for ESRD patients...

  16. Research in network management techniques for tactical data communications networks

    NASA Astrophysics Data System (ADS)

    Boorstyn, R.; Kershenbaum, A.; Maglaris, B.; Sarachik, P.

    1982-09-01

This is the final technical report for work performed on network management techniques for tactical data networks. It includes all technical papers that were published during the contract period. Research areas include packet network modelling, adaptive network routing, network design algorithms, network design techniques, and local area networks.

  17. Spectrophotometric determination of fluoxetine by molecularly imprinted polypyrrole and optimization by experimental design, artificial neural network and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Nezhadali, Azizollah; Motlagh, Maryam Omidvar; Sadeghzadeh, Samira

    2018-02-01

A selective method based on molecularly imprinted polymer (MIP) solid-phase extraction (SPE) with UV-Vis spectrophotometric detection was developed for the determination of fluoxetine (FLU) in pharmaceutical and human serum samples. The MIPs were synthesized using pyrrole as a functional monomer in the presence of FLU as the template molecule. The factors affecting the preparation and extraction ability of the MIP, such as the amount of sorbent, initiator concentration, monomer-to-template ratio, uptake shaking rate, uptake time, washing buffer pH, take shaking rate, taking time and polymerization time, were considered for optimization. First, a Plackett-Burman design (PBD) consisting of 12 randomized runs was applied to determine the influence of each factor. The remaining optimization was performed using central composite design (CCD), an artificial neural network (ANN) and a genetic algorithm (GA). Under optimal conditions the calibration curve was linear over a concentration range of 10⁻⁷-10⁻⁸ M with a correlation coefficient (R²) of 0.9970. The limit of detection (LOD) for FLU was 6.56 × 10⁻⁹ M. The repeatability of the method was 1.61%. The synthesized MIP sorbent showed good selectivity and sensitivity toward FLU. The MIP/SPE method was successfully used for the determination of FLU in pharmaceutical, serum and plasma samples.

  18. Colorimetric assay for on-the-spot alcoholic strength sensing in spirit samples based on dual-responsive lanthanide coordination polymer particles with ratiometric fluorescence.

    PubMed

    Deng, Jingjing; Shi, Guoyue; Zhou, Tianshu

    2016-10-26

This study demonstrates a new strategy for colorimetric detection of alcoholic strength (AS) in spirit samples based on dual-responsive lanthanide infinite coordination polymer (Ln-ICP) particles with ratiometric fluorescence. The ICP used in this study is composed of two components: one is the supramolecular Ln-ICP network formed by the coordination between the ligand 2,2'-thiodiacetic acid (TDA) and the central metal ion Eu³⁺; the other is a fluorescent dye, coumarin 343 (C343), acting both as a cofactor ligand and as a sensitizer, doped into the Ln-ICP network through self-adaptive chemistry. Upon excitation at 300 nm, the red fluorescence of the Ln-ICP network itself at 617 nm is highly enhanced due to the concomitant energy transfer from C343 to Eu³⁺, while the fluorescence of C343 at 495 nm is suppressed. In pure ethanol solvent, the as-formed C343@Eu-TDA is well dispersed and quite stable. However, the addition of water into an ethanolic dispersion of C343@Eu-TDA disrupts the Eu-TDA network structure, resulting in the release of C343 from the ICP network into the solvent. Consequently, the fluorescence of Eu-TDA turns off and the fluorescence of C343 turns on, leading to a fluorescent color change of the dispersion from red to blue, which constitutes a new mechanism for colorimetric sensing of AS in commercial spirit samples. With the method developed here, we could clearly distinguish the AS of different spirit samples within a wide linear range from 10% vol to 100% vol directly by "naked eye" with the help of a UV lamp (365 nm). This study not only offers a new method for on-the-spot visible detection of AS, but also provides a strategy for a dual-responsive sensing mode by rationally designing the optical properties of the Ln-ICP network and the guest, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Feasibility of conducting wetfall chemistry investigations around the Bowen Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, N.C.J.; Patrinos, A.A.N.

    1979-10-01

The feasibility of expanding the Meteorological Effects of Thermal Energy Releases - Oak Ridge National Laboratory (METER-ORNL) research at the Bowen Power Plant, a coal-fired power plant in northwest Georgia, to include wetfall chemistry is evaluated using results of similar studies around other power plants, several atmospheric washout models, analysis of spatial variability in precipitation, and field logistical considerations. An optimal wetfall chemistry network design is proposed, incorporating the inner portion of the existing rain-gauge network and augmented by additional sites to ensure adequate coverage of probable target areas. The predicted sulfate production rate differs by about four orders of magnitude among the models reviewed at a pH of 3. No model can claim superiority over any other model without substantive data verification. The spatial uniformity in rain amount is evaluated using four storms that occurred at the METER-ORNL network. Values of spatial variability ranged from 8 to 31% and decreased as the mean rainfall increased. The field study of wetfall chemistry will require a minimum of 5 persons to operate the approximately 50 collectors covering an area of 740 km². Preliminary wetfall-only samples collected on an event basis showed lower pH and higher electrical conductivity of precipitation collected about 5 km downwind of the power plant relative to samples collected upwind. Wetfall samples collected on a weekly basis using automatic samplers, however, showed variable results, with no consistent pattern. This suggests the need for event sampling to minimize variable rain volume and multiple-source effects often associated with weekly samples.

  20. Challenges to Recruiting Population Representative Samples of Female Sex Workers in China Using Respondent Driven Sampling

    PubMed Central

    Merli, M. Giovanna; Moody, James; Smith, Jeffrey; Li, Jing; Weir, Sharon; Chen, Xiangsheng

    2014-01-01

We explore the network coverage of a sample of female sex workers (FSWs) in China recruited through Respondent Driven Sampling (RDS) as part of an effort to evaluate the claim of RDS of population representation with empirical data. We take advantage of unique information on the social networks of FSWs obtained from two overlapping studies -- RDS and a venue-based sampling approach (PLACE) -- and use an exponential random graph modeling (ERGM) framework from local networks to construct a likely network from which our observed RDS sample is drawn. We then run recruitment chains over this simulated network to assess the assumption that the RDS chain referral process samples participants in proportion to their degree and the extent to which RDS satisfactorily covers certain parts of the network. We find evidence that, contrary to assumptions, RDS oversamples low degree nodes and geographically central areas of the network. Unlike previous evaluations of RDS, which have explored the performance of RDS sampling chains on a non-hidden population or the performance of simulated chains over previously mapped realistic social networks, our study provides a robust, empirically grounded evaluation of the performance of RDS chains on a real-world hidden population. PMID:24834869

  1. RESPONDENT-DRIVEN SAMPLING AS MARKOV CHAIN MONTE CARLO

    PubMed Central

    GOEL, SHARAD; SALGANIK, MATTHEW J.

    2013-01-01

    Respondent-driven sampling (RDS) is a recently introduced, and now widely used, technique for estimating disease prevalence in hidden populations. RDS data are collected through a snowball mechanism, in which current sample members recruit future sample members. In this paper we present respondent-driven sampling as Markov chain Monte Carlo (MCMC) importance sampling, and we examine the effects of community structure and the recruitment procedure on the variance of RDS estimates. Past work has assumed that the variance of RDS estimates is primarily affected by segregation between healthy and infected individuals. We examine an illustrative model to show that this is not necessarily the case, and that bottlenecks anywhere in the networks can substantially affect estimates. We also show that variance is inflated by a common design feature in which sample members are encouraged to recruit multiple future sample members. The paper concludes with suggestions for implementing and evaluating respondent-driven sampling studies. PMID:19572381
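
    The abstract's framing of RDS as importance sampling over a random walk can be illustrated with a small simulation. The sketch below assumes the networkx package and a hypothetical binary trait stored on each node; it runs a single random-walk referral chain and forms the standard inverse-degree-weighted (RDS-II style) prevalence estimate. It is an illustration of the general idea, not the authors' code.

```python
# A minimal sketch (not the authors' code): simulate a random-walk referral
# chain on a network and form an inverse-degree-weighted prevalence estimate,
# illustrating RDS viewed as MCMC importance sampling.
import random
import networkx as nx

random.seed(1)
G = nx.connected_watts_strogatz_graph(n=2000, k=8, p=0.1, seed=1)
infected = {v: int(random.random() < 0.2) for v in G}   # hypothetical trait

def rds_chain(G, n_samples, seed_node=None):
    """Single referral chain approximated as a simple random walk."""
    node = seed_node if seed_node is not None else random.choice(list(G))
    sample = []
    for _ in range(n_samples):
        sample.append(node)
        node = random.choice(list(G.neighbors(node)))  # recruit a random alter
    return sample

sample = rds_chain(G, n_samples=500)
# RDS-II estimator: weight each respondent by the inverse of their degree,
# since a random walk visits nodes roughly in proportion to degree.
num = sum(infected[v] / G.degree(v) for v in sample)
den = sum(1.0 / G.degree(v) for v in sample)
print("weighted prevalence estimate:", num / den)
print("true prevalence:", sum(infected.values()) / len(infected))
```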

  2. Design framework for entanglement-distribution switching networks

    NASA Astrophysics Data System (ADS)

    Drost, Robert J.; Brodsky, Michael

    2016-09-01

    The distribution of quantum entanglement appears to be an important component of applications of quantum communications and networks. The ability to centralize the sourcing of entanglement in a quantum network can provide for improved efficiency and enable a variety of network structures. A necessary feature of an entanglement-sourcing network node comprising several sources of entangled photons is the ability to reconfigurably route the generated pairs of photons to network neighbors depending on the desired entanglement sharing of the network users at a given time. One approach to such routing is the use of a photonic switching network. The requirements for an entanglement distribution switching network are less restrictive than for typical conventional applications, leading to design freedom that can be leveraged to optimize additional criteria. In this paper, we present a mathematical framework defining the requirements of an entanglement-distribution switching network. We then consider the design of such a switching network using a number of 2 × 2 crossbar switches, addressing the interconnection of these switches and efficient routing algorithms. In particular, we define a worst-case loss metric and consider 6 × 6, 8 × 8, and 10 × 10 network designs that optimize both this metric and the number of crossbar switches composing the network. We pay particular attention to the 10 × 10 network, detailing novel results proving the optimality of the proposed design. These optimized network designs have great potential for use in practical quantum networks, thus advancing the concept of quantum networks toward reality.

  3. Maximizing capture of gene co-expression relationships through pre-clustering of input expression samples: an Arabidopsis case study.

    PubMed

    Feltus, F Alex; Ficklin, Stephen P; Gibson, Scott M; Smith, Melissa C

    2013-06-05

    In genomics, highly relevant gene interaction (co-expression) networks have been constructed by finding significant pair-wise correlations between genes in expression datasets. These networks are then mined to elucidate biological function at the polygenic level. In some cases networks may be constructed from input samples that measure gene expression under a variety of different conditions, such as different genotypes, environments, disease states and tissues. When large sets of samples are obtained from public repositories it is often unmanageable to associate samples into condition-specific groups, and combining samples from various conditions has a negative effect on network size. A fixed significance threshold is often applied as well, further limiting the size of the final network. Therefore, we propose pre-clustering of input expression samples to approximate condition-specific grouping of samples, with individual network construction for each group as a means of dynamic significance thresholding. The net effect is increased sensitivity, maximizing the total number of co-expression relationships in the final co-expression network compendium. A total of 86 Arabidopsis thaliana co-expression networks were constructed after k-means partitioning of 7,105 publicly available ATH1 Affymetrix microarray samples. We term each pre-sorted network a Gene Interaction Layer (GIL). Random Matrix Theory (RMT), an unsupervised thresholding method, was used to threshold each of the 86 networks independently, effectively providing a dynamic (non-global) threshold for each network. The overall gene count across all GILs reached 19,588 genes (94.7% measured gene coverage) and 558,022 unique co-expression relationships. In comparison, network construction without pre-sorting of input samples yielded only 3,297 genes (15.9%) and 129,134 relationships in the global network. Here we show that pre-clustering of microarray samples helps approximate condition-specific networks and allows for dynamic thresholding using unsupervised methods. Because RMT ensures only highly significant interactions are kept, the GIL compendium consists of 558,022 unique, high-quality A. thaliana co-expression relationships across almost all of the measurable genes on the ATH1 array. For A. thaliana, these networks represent the largest compendium to date of significant gene co-expression relationships, and are a means to explore complex pathway, polygenic, and pleiotropic relationships for this focal model plant. The networks can be explored at sysbio.genome.clemson.edu. Finally, this method is applicable to any large expression profile collection for any organism and is best suited where a knowledge-independent network construction method is desired.
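
    A minimal sketch of the pre-clustering idea described above, assuming numpy and scikit-learn and a toy expression matrix; the RMT thresholding step is replaced here by a simple per-cluster correlation cutoff purely for illustration.

```python
# Sketch: k-means pre-clustering of expression samples, then an independent
# co-expression network per cluster (a fixed cutoff stands in for RMT here).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
expr = rng.normal(size=(300, 50))      # toy matrix: 300 samples x 50 genes

k = 3
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(expr)

networks = []
for c in range(k):
    sub = expr[labels == c]            # condition-like group of samples
    corr = np.corrcoef(sub.T)          # gene x gene Pearson correlations
    cutoff = 0.5                       # stand-in for an RMT-derived threshold
    edges = [(i, j) for i in range(corr.shape[0])
             for j in range(i + 1, corr.shape[0])
             if abs(corr[i, j]) >= cutoff]
    networks.append(edges)             # one "Gene Interaction Layer" per cluster

print([len(e) for e in networks])
```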

  4. Maximizing capture of gene co-expression relationships through pre-clustering of input expression samples: an Arabidopsis case study

    PubMed Central

    2013-01-01

    Background In genomics, highly relevant gene interaction (co-expression) networks have been constructed by finding significant pair-wise correlations between genes in expression datasets. These networks are then mined to elucidate biological function at the polygenic level. In some cases networks may be constructed from input samples that measure gene expression under a variety of different conditions, such as different genotypes, environments, disease states and tissues. When large sets of samples are obtained from public repositories it is often unmanageable to associate samples into condition-specific groups, and combining samples from various conditions has a negative effect on network size. A fixed significance threshold is often applied as well, further limiting the size of the final network. Therefore, we propose pre-clustering of input expression samples to approximate condition-specific grouping of samples, with individual network construction for each group as a means of dynamic significance thresholding. The net effect is increased sensitivity, maximizing the total number of co-expression relationships in the final co-expression network compendium. Results A total of 86 Arabidopsis thaliana co-expression networks were constructed after k-means partitioning of 7,105 publicly available ATH1 Affymetrix microarray samples. We term each pre-sorted network a Gene Interaction Layer (GIL). Random Matrix Theory (RMT), an unsupervised thresholding method, was used to threshold each of the 86 networks independently, effectively providing a dynamic (non-global) threshold for each network. The overall gene count across all GILs reached 19,588 genes (94.7% measured gene coverage) and 558,022 unique co-expression relationships. In comparison, network construction without pre-sorting of input samples yielded only 3,297 genes (15.9%) and 129,134 relationships in the global network. Conclusions Here we show that pre-clustering of microarray samples helps approximate condition-specific networks and allows for dynamic thresholding using unsupervised methods. Because RMT ensures only highly significant interactions are kept, the GIL compendium consists of 558,022 unique, high-quality A. thaliana co-expression relationships across almost all of the measurable genes on the ATH1 array. For A. thaliana, these networks represent the largest compendium to date of significant gene co-expression relationships, and are a means to explore complex pathway, polygenic, and pleiotropic relationships for this focal model plant. The networks can be explored at sysbio.genome.clemson.edu. Finally, this method is applicable to any large expression profile collection for any organism and is best suited where a knowledge-independent network construction method is desired. PMID:23738693

  5. Designing optimized multi-species monitoring networks to detect range shifts driven by climate change: a case study with bats in the North of Portugal.

    PubMed

    Amorim, Francisco; Carvalho, Sílvia B; Honrado, João; Rebelo, Hugo

    2014-01-01

    Here we develop a framework to design multi-species monitoring networks using species distribution models and conservation planning tools to optimize the location of monitoring stations for detecting potential range shifts driven by climate change. For this study, we focused on seven bat species in Northern Portugal (Western Europe). Maximum entropy modelling was used to predict the likely occurrence of those species under present and future climatic conditions. By comparing present and future predicted distributions, we identified areas where each species is likely to gain, lose or maintain suitable climatic space. We then used a decision support tool (the Marxan software) to design three optimized monitoring networks considering: a) changes in species' likely occurrence, b) species conservation status, and c) level of volunteer commitment. For present climatic conditions, species distribution models revealed that areas suitable for most species occur in the north-eastern part of the region. However, areas predicted to become climatically suitable in the future shifted towards the west. The three simulated monitoring networks, adaptable to an unpredictable level of volunteer commitment, included 28, 54 and 110 sampling locations respectively, distributed across the study area and covering the full range of conditions under which species range shifts may occur. Our results show that our framework outperforms the traditional approach, which considers only current species ranges, in allocating monitoring stations across the different categories of predicted shifts in species distributions. This study presents a straightforward framework for designing monitoring schemes aimed specifically at testing hypotheses about where and when species ranges may shift with climatic change, while also ensuring surveillance of general population trends.
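
    The site-selection step described above (done with Marxan in the study) can be illustrated with a simple greedy coverage heuristic. The sketch below assumes a hypothetical table of candidate sampling locations, each tagged with the predicted-shift categories (gain / loss / stable) it covers for each species, and picks stations until every species-category combination is represented; this is only an analogue of the optimization, not the Marxan algorithm itself.

```python
# Sketch of a greedy stand-in for the Marxan-style selection: choose monitoring
# stations until every (species, predicted-shift-category) combination is covered.
import random

random.seed(0)
species = [f"sp{i}" for i in range(7)]
categories = ["gain", "loss", "stable"]
targets = {(s, c) for s in species for c in categories}

# Hypothetical candidates: each station covers a random subset of targets.
candidates = {f"site{j}": {t for t in targets if random.random() < 0.15}
              for j in range(120)}

selected, uncovered = [], set(targets)
while uncovered:
    # pick the station covering the most still-uncovered targets
    best = max(candidates, key=lambda s: len(candidates[s] & uncovered))
    if not candidates[best] & uncovered:
        break                      # remaining targets cannot be covered
    selected.append(best)
    uncovered -= candidates[best]
    del candidates[best]

print(len(selected), "stations selected;", len(uncovered), "targets uncovered")
```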

  6. The effect of a loss of model structural detail due to network skeletonization on contamination warning system design: case studies.

    PubMed

    Davis, Michael J; Janke, Robert

    2018-01-04

    The effect of limitations in the structural detail available in a network model on contamination warning system (CWS) design was examined in case studies using the original and skeletonized network models for two water distribution systems (WDSs). The skeletonized models were used as proxies for incomplete network models. CWS designs were developed by optimizing sensor placements for worst-case and mean-case contamination events. Designs developed using the skeletonized network models were transplanted into the original network model for evaluation. CWS performance was defined as the number of people who ingest more than some quantity of a contaminant in tap water before the CWS detects the presence of contamination. Lack of structural detail in a network model can result in CWS designs that (1) provide considerably less protection against worst-case contamination events than that obtained when a more complete network model is available and (2) yield substantial underestimates of the consequences associated with a contamination event. Nevertheless, CWSs developed using skeletonized network models can provide useful reductions in consequences for contaminants whose effects are not localized near the injection location. Mean-case designs can yield worst-case performances similar to those for worst-case designs when there is uncertainty in the network model. Improvements in network models for WDSs have the potential to yield significant improvements in CWS designs as well as more realistic evaluations of those designs. Although such improvements would be expected to yield improved CWS performance, the expected improvements in CWS performance have not been quantified previously. The results presented here should be useful to those responsible for the design or implementation of CWSs, particularly managers and engineers in water utilities, and encourage the development of improved network models.
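
    The sensor-placement optimization described above can be sketched as a greedy search over an impact matrix (contamination scenarios by candidate sensor locations), minimizing either the mean-case or the worst-case consequence; the matrix and sizes below are hypothetical placeholders, and the real designs were produced with a dedicated optimizer.

```python
# Sketch: greedy sensor placement over a scenario x candidate impact matrix.
# impact[s, j] = consequence of scenario s if the first detection is at node j.
import numpy as np

rng = np.random.default_rng(42)
n_scenarios, n_candidates, budget = 200, 60, 5
impact = rng.uniform(100, 10000, size=(n_scenarios, n_candidates))

def placement_cost(chosen, worst_case=False):
    # with several sensors, each scenario is resolved by whichever sensor
    # yields the smallest consequence
    per_scenario = impact[:, chosen].min(axis=1)
    return per_scenario.max() if worst_case else per_scenario.mean()

chosen = []
for _ in range(budget):
    best = min((j for j in range(n_candidates) if j not in chosen),
               key=lambda j: placement_cost(chosen + [j], worst_case=True))
    chosen.append(best)

print("sensors:", chosen, "worst-case consequence:", placement_cost(chosen, True))
```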

  7. The effect of a loss of model structural detail due to network skeletonization on contamination warning system design: case studies

    NASA Astrophysics Data System (ADS)

    Davis, Michael J.; Janke, Robert

    2018-05-01

    The effect of limitations in the structural detail available in a network model on contamination warning system (CWS) design was examined in case studies using the original and skeletonized network models for two water distribution systems (WDSs). The skeletonized models were used as proxies for incomplete network models. CWS designs were developed by optimizing sensor placements for worst-case and mean-case contamination events. Designs developed using the skeletonized network models were transplanted into the original network model for evaluation. CWS performance was defined as the number of people who ingest more than some quantity of a contaminant in tap water before the CWS detects the presence of contamination. Lack of structural detail in a network model can result in CWS designs that (1) provide considerably less protection against worst-case contamination events than that obtained when a more complete network model is available and (2) yield substantial underestimates of the consequences associated with a contamination event. Nevertheless, CWSs developed using skeletonized network models can provide useful reductions in consequences for contaminants whose effects are not localized near the injection location. Mean-case designs can yield worst-case performances similar to those for worst-case designs when there is uncertainty in the network model. Improvements in network models for WDSs have the potential to yield significant improvements in CWS designs as well as more realistic evaluations of those designs. Although such improvements would be expected to yield improved CWS performance, the expected improvements in CWS performance have not been quantified previously. The results presented here should be useful to those responsible for the design or implementation of CWSs, particularly managers and engineers in water utilities, and encourage the development of improved network models.

  8. [Application of an artificial neural network in the design of sustained-release dosage forms].

    PubMed

    Wei, X H; Wu, J J; Liang, W Q

    2001-09-01

    To use the artificial neural network (ANN) in the Matlab 5.1 toolboxes to predict formulations of sustained-release tablets. The solubilities of nine drugs and various HPMC:dextrin ratios for 63 tablet formulations were used as the ANN model input, and the in vitro cumulative release at 6 sampling times was used as output. The ANN model was constructed by selecting the optimal number of iterations (25) and a model structure with one hidden layer of five nodes. The optimized ANN model was then used to predict formulations from desired target in vitro dissolution-time profiles. Profiles predicted by the ANN for the ANN-predicted formulations were closely similar to the target profiles. The ANN could be used for predicting the dissolution profiles of sustained-release dosage forms and for the design of optimal formulations.
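
    A minimal analogue of the model described above, using scikit-learn's MLPRegressor rather than the Matlab 5.1 toolbox; the inputs (drug solubility and HPMC:dextrin ratio) and the six-point release profile follow the abstract, but the data here are synthetic placeholders.

```python
# Sketch: small feed-forward network mapping (solubility, HPMC:dextrin ratio)
# to a 6-point cumulative release profile, analogous to the ANN in the study.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n = 63                                             # 63 formulations in the study
X = np.column_stack([rng.uniform(0.01, 10, n),     # drug solubility (synthetic)
                     rng.uniform(0.2, 5.0, n)])    # HPMC:dextrin ratio (synthetic)
# synthetic 6-point release profiles (fraction released at 6 sampling times)
t = np.array([1, 2, 4, 6, 8, 12], dtype=float)
k = 0.05 + 0.02 * X[:, 0:1] - 0.01 * X[:, 1:2]
Y = np.clip(1 - np.exp(-np.abs(k) * t), 0, 1)

# one hidden layer with five nodes, as in the abstract
model = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0)
model.fit(X, Y)

target = np.array([[2.0, 1.5]])                    # candidate formulation
print("predicted release profile:", model.predict(target).round(3))
```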

  9. Impact of weak social ties and networks on poor sleep quality: A case study of Iranian employees.

    PubMed

    Masoudnia, Ebrahim

    2015-12-01

    Poor sleep quality is one of the major risk factors for somatic, psychiatric and social disorders and conditions, as well as a major predictor of the quality of employees' performance. Previous studies in Iran have neglected the impact of social factors, including social networks and ties, on adults' sleep quality. Thus, the aim of the current research was to determine the relationship between social networks and adult employees' sleep quality. This study was conducted with a correlational and descriptive design. Data were collected from 360 participants (183 males and 177 females) employed in Yazd public organizations in June and July of 2014. The sample was selected using a random sampling method. The measuring tools were the Pittsburgh Sleep Quality Index (PSQI) and the Social Relations Inventory (SRI). Based on the results, the prevalence rate of sleep disorder among Iranian adult employees was 63.1% (total PSQI>5). After controlling for socio-demographic variables, there was a significant difference between individuals with strong and poor social networks and ties in terms of overall sleep quality (p<.01), subjective sleep quality (p<.01), habitual sleep efficiency (p<.05), and daytime dysfunction (p<.01). The results also revealed that employees with strong social networks and ties had better overall sleep quality, greater habitual sleep efficiency, and less daytime dysfunction than employees with poor social networks and ties. This implies that weak social networks and ties serve as a risk factor for sleep disorders or poor sleep quality among adult employees. Therefore, social and behavioral interventions seem essential to improve adults' sleep quality. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Using Neural Networks in the Mapping of Mixed Discrete/Continuous Design Spaces With Application to Structural Design

    DTIC Science & Technology

    1994-02-01

    desired that the problem to which the design space mapping techniques were applied be easily analyzed, yet provide a design space with realistic complexity...consistent fully stressed solution. 3 DESIGN SPACE MAPPING In order to reduce the computational expense required to optimize design spaces, neural networks...employed in this study. Some of the issues involved in using neural networks to do design space mapping are how to configure the neural network, how much

  11. Role of Social Media in Diabetes Management in the Middle East Region: Systematic Review

    PubMed Central

    2018-01-01

    Background Diabetes is a major health care burden in the Middle East region. Social networking tools, which are popular in the region, can contribute to the management of diabetes by improving educational and care outcomes. Objective The objective of this review was to evaluate the impact of social networking interventions on the improvement of diabetes management and health outcomes in patients with diabetes in the Middle East. Methods Peer-reviewed articles from PubMed (1990-2017) and Google Scholar (1990-2017) were identified using various combinations of predefined terms and search criteria. The main inclusion criterion was the use of social networking apps on mobile phones as the primary intervention. Outcomes were grouped according to study design, type of diabetes, category of technological intervention, location, and sample size. Results This review included 5 articles evaluating the use of social media tools in the management of diabetes in the Middle East. In most studies, the acceptance rate for the use of social networking to optimize the management of diabetes was relatively high. Diabetes-specific management tools such as the Saudi Arabia Networking for Aiding Diabetes and the Diabetes Intelligent Management System for Iraq helped collect patient information and lower hemoglobin A1c (HbA1c) levels, respectively. Conclusions The reviewed studies demonstrated the potential of social networking tools being adopted in the Middle East to improve the management of diabetes. Future studies with larger sample sizes spanning multiple regions would provide further insight into the use of social media for improving patient outcomes. PMID:29439941

  12. NetMiner-an ensemble pipeline for building genome-wide and high-quality gene co-expression network using massive-scale RNA-seq samples.

    PubMed

    Yu, Hua; Jiao, Bingke; Lu, Lu; Wang, Pengfei; Chen, Shuangcheng; Liang, Chengzhi; Liu, Wei

    2018-01-01

    Accurately reconstructing gene co-expression networks is of great importance for uncovering the genetic architecture underlying complex and various phenotypes. The recent availability of high-throughput RNA-seq sequencing has made genome-wide detection and quantification of novel, rare and low-abundance transcripts practical. However, its potential merits in reconstructing gene co-expression networks have still not been well explored. Using massive-scale RNA-seq samples, we have designed an ensemble pipeline, called NetMiner, for building a genome-scale, high-quality Gene Co-expression Network (GCN) by integrating three frequently used inference algorithms. We constructed an RNA-seq-based GCN in the monocot rice. The quality of the network obtained by our method was verified and evaluated against curated gene functional association data sets, and it clearly outperformed each single method. In addition, the powerful capability of the network for associating genes with functions and agronomic traits was shown by enrichment analysis and case studies. In particular, we demonstrated the potential value of our proposed method to predict the biological roles of unknown protein-coding genes, long non-coding RNA (lncRNA) genes and circular RNA (circRNA) genes. Our results provide a valuable and highly reliable data source for selecting key candidate genes for subsequent experimental validation. To facilitate identification of novel genes regulating important biological processes and phenotypes in other plants or animals, we have published the source code of NetMiner, making it freely available at https://github.com/czllab/NetMiner.
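
    The ensemble idea (combining several co-expression inference algorithms) can be sketched as rank aggregation over gene-pair scores. The three methods shown below (Pearson, Spearman, and a crude nonlinear-association proxy) are placeholders, not the specific algorithms NetMiner integrates; see the repository linked in the abstract for the real pipeline.

```python
# Sketch: aggregate gene-pair rankings from several inference methods into one
# consensus co-expression score (placeholder for the NetMiner ensemble step).
import numpy as np
from scipy.stats import pearsonr, spearmanr
from itertools import combinations

rng = np.random.default_rng(3)
expr = rng.normal(size=(100, 20))          # 100 RNA-seq samples x 20 genes (toy)
pairs = list(combinations(range(expr.shape[1]), 2))

def score_matrix(fn):
    return np.array([abs(fn(expr[:, i], expr[:, j])) for i, j in pairs])

scores = {
    "pearson":  score_matrix(lambda a, b: pearsonr(a, b)[0]),
    "spearman": score_matrix(lambda a, b: spearmanr(a, b)[0]),
    # crude nonlinear-association proxy: rank correlation of squared values
    "sq_rank":  score_matrix(lambda a, b: spearmanr(a**2, b**2)[0]),
}

# average the per-method ranks (higher score -> higher rank value)
ranks = np.mean([s.argsort().argsort() for s in scores.values()], axis=0)
top = [pairs[i] for i in np.argsort(-ranks)[:10]]
print("top consensus gene pairs:", top)
```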

  13. Research in Network Management Techniques for Tactical Data Communications Network.

    DTIC Science & Technology

    1982-09-01

    the control period. Research areas include Packet Network modelling, adaptive network routing, network design algorithms, network design techniques...controllers are designed to perform their limited tasks optimally. For the dynamic routing problem considered here, the local controllers are node...feedback to finding an optimum steady-state routing (static strategies) under non - control which can be easily implemented in real time. congested

  14. Principles of Biomimetic Vascular Network Design Applied to a Tissue-Engineered Liver Scaffold

    PubMed Central

    Hoganson, David M.; Pryor, Howard I.; Spool, Ira D.; Burns, Owen H.; Gilmore, J. Randall

    2010-01-01

    Branched vascular networks are a central component of scaffold architecture for solid organ tissue engineering. In this work, seven biomimetic principles were established as the major guiding technical design considerations of a branched vascular network for a tissue-engineered scaffold. These biomimetic design principles were applied to a branched radial architecture to develop a liver-specific vascular network. Iterative design changes and computational fluid dynamic analysis were used to optimize the network before mold manufacturing. The vascular network mold was created using a new mold technique that achieves a 1:1 aspect ratio for all channels. In vitro blood flow testing confirmed the physiologic hemodynamics of the network as predicted by computational fluid dynamic analysis. These results indicate that this biomimetic liver vascular network design will provide a foundation for developing complex vascular networks for solid organ tissue engineering that achieve physiologic blood flow. PMID:20001254

  15. Principles of biomimetic vascular network design applied to a tissue-engineered liver scaffold.

    PubMed

    Hoganson, David M; Pryor, Howard I; Spool, Ira D; Burns, Owen H; Gilmore, J Randall; Vacanti, Joseph P

    2010-05-01

    Branched vascular networks are a central component of scaffold architecture for solid organ tissue engineering. In this work, seven biomimetic principles were established as the major guiding technical design considerations of a branched vascular network for a tissue-engineered scaffold. These biomimetic design principles were applied to a branched radial architecture to develop a liver-specific vascular network. Iterative design changes and computational fluid dynamic analysis were used to optimize the network before mold manufacturing. The vascular network mold was created using a new mold technique that achieves a 1:1 aspect ratio for all channels. In vitro blood flow testing confirmed the physiologic hemodynamics of the network as predicted by computational fluid dynamic analysis. These results indicate that this biomimetic liver vascular network design will provide a foundation for developing complex vascular networks for solid organ tissue engineering that achieve physiologic blood flow.

  16. Reducing uncertainty in dust monitoring to detect aeolian sediment transport responses to land cover change

    NASA Astrophysics Data System (ADS)

    Webb, N.; Chappell, A.; Van Zee, J.; Toledo, D.; Duniway, M.; Billings, B.; Tedela, N.

    2017-12-01

    Anthropogenic land use and land cover change (LULCC) influence global rates of wind erosion and dust emission, yet our understanding of the magnitude of the responses remains poor. Field measurements and monitoring provide essential data to resolve aeolian sediment transport patterns and assess the impacts of human land use and management intensity. Data collected in the field are also required for dust model calibration and testing, as models have become the primary tool for assessing LULCC-dust cycle interactions. However, there is considerable uncertainty in estimates of dust emission due to the spatial variability of sediment transport. Field sampling designs are currently rudimentary and considerable opportunities are available to reduce the uncertainty. Establishing the minimum detectable change is critical for measuring spatial and temporal patterns of sediment transport, detecting potential impacts of LULCC and land management, and for quantifying the uncertainty of dust model estimates. Here, we evaluate the effectiveness of common sampling designs (e.g., simple random sampling, systematic sampling) used to measure and monitor aeolian sediment transport rates. Using data from the US National Wind Erosion Research Network across diverse rangeland and cropland cover types, we demonstrate how only large changes in sediment mass flux (of the order 200% to 800%) can be detected when small sample sizes are used, crude sampling designs are implemented, or when the spatial variation is large. We then show how statistical rigour and the straightforward application of a sampling design can reduce the uncertainty and detect change in sediment transport over time and between land use and land cover types.
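
    The notion of a minimum detectable change used above can be made concrete with a standard two-sample power calculation: given the between-sampler coefficient of variation and the number of samplers per monitoring period, the smallest relative change in mean mass flux detectable at a given significance level and power follows from the t distribution. The sketch below uses assumed values for illustration and is not the Network's actual analysis.

```python
# Sketch: minimum detectable relative change (two-sided two-sample t-test)
# in mean sediment mass flux, as a function of sample size and variability.
from scipy import stats

def min_detectable_change(cv, n, alpha=0.05, power=0.8):
    """Smallest relative difference between two period means detectable
    with n samplers per period and a between-sampler coefficient of
    variation cv (expressed as a fraction, e.g. 0.6 for 60%)."""
    df = 2 * n - 2
    t_alpha = stats.t.ppf(1 - alpha / 2, df)
    t_beta = stats.t.ppf(power, df)
    return (t_alpha + t_beta) * cv * (2.0 / n) ** 0.5

for n in (3, 5, 10, 20):
    for cv in (0.3, 0.6, 1.0):
        mdc = min_detectable_change(cv, n)
        print(f"n={n:2d} cv={cv:.1f} -> detectable change ~{mdc:5.0%}")
```

    With small sample sizes and high spatial variability, the detectable change quickly exceeds 100%, which is consistent with the 200% to 800% range reported above.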

  17. optGpSampler: an improved tool for uniformly sampling the solution-space of genome-scale metabolic networks.

    PubMed

    Megchelenbrink, Wout; Huynen, Martijn; Marchiori, Elena

    2014-01-01

    Constraint-based models of metabolic networks are typically underdetermined, because they contain more reactions than metabolites. Therefore the solutions to this system do not consist of unique flux rates for each reaction, but rather a space of possible flux rates. By uniformly sampling this space, an estimated probability distribution for each reaction's flux in the network can be obtained. However, sampling a high dimensional network is time-consuming. Furthermore, the constraints imposed on the network give rise to an irregularly shaped solution space. Therefore more tailored, efficient sampling methods are needed. We propose an efficient sampling algorithm (called optGpSampler), which implements the Artificial Centering Hit-and-Run algorithm in a different manner than the sampling algorithm implemented in the COBRA Toolbox for metabolic network analysis, here called gpSampler. Results of extensive experiments on different genome-scale metabolic networks show that optGpSampler is up to 40 times faster than gpSampler. Application of existing convergence diagnostics on small network reconstructions indicates that optGpSampler converges roughly ten times faster than gpSampler towards similar sampling distributions. For networks of higher dimension (i.e. containing more than 500 reactions), we observed significantly better convergence of optGpSampler and a large deviation between the samples generated by the two algorithms. optGpSampler for Matlab and Python is available for non-commercial use at: http://cs.ru.nl/~wmegchel/optGpSampler/.
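
    A bare-bones hit-and-run sampler for a flux polytope {x : Ax ≤ b} is sketched below to illustrate the core move behind both gpSampler and optGpSampler; the artificial-centering refinement (directions drawn through the running centroid of earlier samples) and the performance engineering that make optGpSampler fast are omitted, and the constraint matrix here is a toy example rather than a metabolic network.

```python
# Sketch: uniform hit-and-run sampling inside a convex polytope {x : A x <= b},
# the basic move behind ACHR-style flux-space samplers (toy constraints only).
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": 2-D box 0 <= x_i <= 1 written as A x <= b
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.concatenate([np.ones(2), np.zeros(2)])
x = np.full(2, 0.5)                      # a strictly feasible starting flux vector

samples = []
for _ in range(5000):
    d = rng.normal(size=x.size)
    d /= np.linalg.norm(d)               # random direction
    # feasible segment: x + t*d must satisfy A(x + t d) <= b for every row
    ad, residual = A @ d, b - A @ x
    t_hi = np.min(residual[ad > 1e-12] / ad[ad > 1e-12])
    t_lo = np.max(residual[ad < -1e-12] / ad[ad < -1e-12])
    x = x + rng.uniform(t_lo, t_hi) * d  # uniform point on the chord
    samples.append(x.copy())

samples = np.array(samples)
print("per-dimension means (should approach 0.5):", samples.mean(axis=0).round(3))
```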

  18. Design and implementation of interface units for high speed fiber optics local area networks and broadband integrated services digital networks

    NASA Technical Reports Server (NTRS)

    Tobagi, Fouad A.; Dalgic, Ismail; Pang, Joseph

    1990-01-01

    The design and implementation of interface units for high speed Fiber Optic Local Area Networks and Broadband Integrated Services Digital Networks are discussed. In recent years, a number of network adapters designed to support high speed communications have emerged. The approach taken to the design of a high speed network interface unit was to implement packet processing functions in hardware, using VLSI technology. The VLSI hardware implementation of a buffer management unit, which is required in such architectures, is described.

  19. Filling the gap between disaster preparedness and response networks of urban emergency management: Following the 2013 Seoul Floods.

    PubMed

    Song, Minsun; Jung, Kyujin

    2015-01-01

    To examine the gap between disaster preparedness and response networks following the 2013 Seoul floods, in which the rapid transmission of disaster information and resources was impeded by severe changes in interorganizational collaboration networks. This research uses the 2013 Seoul Emergency Management Survey data, collected before and after the floods; a total of 94 organizations involved in coping with the floods were analyzed using bootstrap independent-sample t-tests and social network analysis in UCINET 6 and STATA 12. The findings show that although the primary network form is more hierarchical, horizontal collaboration was relatively invigorated in the actual response. Interorganizational collaboration networks for response operations also appear more flexible, grounded in improvisation to cope with unexpected victims and damage. Local organizations under urban emergency management are recommended to build a strong commitment to joint response operations through full-scale exercises at the metropolitan level before a catastrophic event. Interorganizational emergency management networks also need to be restructured to reflect actual response networks and thereby reduce collaboration risk during a disaster. This research offers a critical insight for rethinking how urban emergency management networks are designed and provides original evidence for filling the gap between previously coordinated disaster-preparedness networks and practical response operations after a disaster.

  20. Predictors of Condom Use among Peer Social Networks of Men Who Have Sex with Men in Ghana, West Africa

    PubMed Central

    Nelson, LaRon E.; Wilton, Leo; Agyarko-Poku, Thomas; Zhang, Nanhua; Zou, Yuanshu; Aluoch, Marilyn; Apea, Vanessa; Hanson, Samuel Owiredu; Adu-Sarkodie, Yaw

    2015-01-01

    Ghanaian men who have sex with men (MSM) have high rates of HIV infection. A first step in designing culturally relevant prevention interventions for MSM in Ghana is to understand the influence that peer social networks have on their attitudes and behaviors. We aimed to examine whether, in a sample of Ghanaian MSM, mean scores on psychosocial variables theorized to influence HIV/STI risk differed between peer social networks and to examine whether these variables were associated with condom use. We conducted a formative, cross-sectional survey with 22 peer social networks of MSM (n = 137) in Ghana. We assessed basic psychological-needs satisfaction, HIV/STI knowledge, sense of community, HIV and gender non-conformity stigmas, gender equitable norms, sexual behavior and condom use. Data were analyzed using analysis of variance, generalized estimating equations, and Wilcoxon two sample tests. All models were adjusted for age and income, ethnicity, education, housing and community of residence. Mean scores for all psychosocial variables differed significantly by social network. Men who reported experiencing more autonomy support by their healthcare providers had higher odds of condom use for anal (AOR = 3.29, p<0.01), oral (AOR = 5.06, p<0.01) and vaginal (AOR = 1.8, p<0.05) sex. Those with a stronger sense of community also had higher odds of condom use for anal sex (AOR = 1.26, p<0.001). Compared to networks with low prevalence of consistent condom users, networks with higher prevalence of consistent condom users had higher STD and HIV knowledge, had norms that were more supportive of gender equity, and experienced more autonomy support in their healthcare encounters. Healthcare providers and peer social networks can have an important influence on safer-sex behaviors in Ghanaian MSM. More research with Ghanaian MSM is needed that considers knowledge, attitudes, and norms of their social networks in the development and implementation of culturally relevant HIV/STI prevention intervention strategies. PMID:25635774

  1. An Intelligent Pinger Network for Solid Glacier Environments

    NASA Astrophysics Data System (ADS)

    Schönitz, S.; Reuter, S.; Henke, C.; Jeschke, S.; Ewert, D.; Eliseev, D.; Heinen, D.; Linder, P.; Scholz, F.; Weinstock, L.; Wickmann, S.; Wiebusch, C.; Zierke, S.

    2016-12-01

    This talk presents a novel approach for an intelligent, agent-based pinger network in an extraterrestrial glacier environment. Because of recent findings by the Cassini spacecraft, a mission to Saturn's moon Enceladus is planned in order to search for extraterrestrial life within the ocean beneath Enceladus' ice crust. Therefore, a maneuverable melting probe, the EnEx probe, was developed to melt into Enceladus' ice and take liquid samples from water-filled crevasses. Hence, the probe collecting the samples has to be able to navigate in ice, which is a hard problem because neither visual nor gravitational methods can be used. To enhance the navigability of the probe, a network of autonomous pinger units (APUs) is in development that is able to extract a map of the ice environment via ultrasonic sound waves. A network of these APUs will be deployed on the surface of Enceladus, melt into the ice, and form a network to help guide the probe safely to its destination. The APU network is able to form itself fully autonomously and to compensate for system failures of individual APUs. The agents controlling the individual APUs are realized as rule-based expert systems implemented in CLIPS. The rule-based expert system evaluates available information about the environment, decides on actions to take to achieve the desired goal (e.g. a specific network topology), and executes and monitors those actions. In general, it encodes certain situations that are evaluated whenever an APU is idle, and then decides on the next action to take. It bases this decision on its internal world model, which is shared with the other APUs. The optimal network topology that defines each agent's position is iteratively determined by mixed-integer nonlinear programming. Extensive simulation studies show that the proposed agent design enables the APUs to form a robust network topology suited to creating a reliable 3D map of the ice environment.

  2. Integrated design of multivariable hydrometric networks using entropy theory with a multiobjective optimization approach

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Hwang, T.; Vose, J. M.; Martin, K. L.; Band, L. E.

    2016-12-01

    Obtaining quality hydrologic observations is the first step towards successful water resources management. While remote sensing techniques have made it possible to convert satellite images of the Earth's surface into hydrologic data, the importance of ground-based observations has never diminished because in-situ data are often highly accurate and can be used to validate remote measurements. Efficient hydrometric networks are becoming more important for obtaining as much information as possible with minimum redundancy. The World Meteorological Organization (WMO) has recommended a guideline for minimum hydrometric network density based on physiography; however, this guideline is not intended for optimum network design but for avoiding serious deficiencies in a network. Moreover, all hydrologic variables are interconnected within the hydrologic cycle, yet monitoring networks have typically been designed individually. This study proposes an integrated network design method using entropy theory with a multiobjective optimization approach. Specifically, a precipitation network and a streamflow network in a semi-urban watershed in Ontario, Canada, were designed simultaneously by maximizing joint entropy, minimizing total correlation, and maximizing the conditional entropy of the streamflow network given the precipitation network. Compared with typical individual network designs, the proposed method can determine more efficient optimal networks by avoiding redundant stations whose hydrologic information is transferable. Additionally, four quantization cases were applied in the entropy calculations to assess their implications for the station rankings and the optimal networks. The results showed that the quantization method should be selected carefully because the rankings and optimal networks change accordingly.
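
    The entropy objectives named above can be computed directly from discretized station records. The sketch below, with synthetic series and a simple equal-width quantizer (the abstract notes that the quantization choice matters), evaluates joint entropy and total correlation for a candidate subset of stations and greedily adds the station that most increases joint entropy; the actual study used a multiobjective optimizer rather than this greedy heuristic.

```python
# Sketch: joint entropy / total correlation of quantized station records, with a
# greedy selection maximizing joint entropy (stand-in for the multiobjective step).
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
n_obs, n_stations, n_bins = 1000, 8, 10
data = rng.normal(size=(n_obs, n_stations))
data[:, 1] = data[:, 0] + 0.1 * rng.normal(size=n_obs)   # one redundant station

def quantize(x, bins=n_bins):
    edges = np.linspace(x.min(), x.max(), bins + 1)[1:-1]
    return np.digitize(x, edges)

q = np.column_stack([quantize(data[:, j]) for j in range(n_stations)])

def entropy(cols):
    counts = Counter(map(tuple, q[:, cols]))
    p = np.array(list(counts.values())) / q.shape[0]
    return -np.sum(p * np.log2(p))

def total_correlation(cols):
    return sum(entropy([c]) for c in cols) - entropy(cols)

selected = []
while len(selected) < 4:
    best = max((j for j in range(n_stations) if j not in selected),
               key=lambda j: entropy(selected + [j]))
    selected.append(best)

print("stations:", selected,
      "H_joint=%.2f bits" % entropy(selected),
      "C_total=%.2f bits" % total_correlation(selected))
```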

  3. Integrated design of multivariable hydrometric networks using entropy theory with a multiobjective optimization approach

    NASA Astrophysics Data System (ADS)

    Keum, J.; Coulibaly, P. D.

    2017-12-01

    Obtaining quality hydrologic observations is the first step towards successful water resources management. While remote sensing techniques have made it possible to convert satellite images of the Earth's surface into hydrologic data, the importance of ground-based observations has never diminished because in-situ data are often highly accurate and can be used to validate remote measurements. Efficient hydrometric networks are becoming more important for obtaining as much information as possible with minimum redundancy. The World Meteorological Organization (WMO) has recommended a guideline for minimum hydrometric network density based on physiography; however, this guideline is not intended for optimum network design but for avoiding serious deficiencies in a network. Moreover, all hydrologic variables are interconnected within the hydrologic cycle, yet monitoring networks have typically been designed individually. This study proposes an integrated network design method using entropy theory with a multiobjective optimization approach. Specifically, a precipitation network and a streamflow network in a semi-urban watershed in Ontario, Canada, were designed simultaneously by maximizing joint entropy, minimizing total correlation, and maximizing the conditional entropy of the streamflow network given the precipitation network. Compared with typical individual network designs, the proposed method can determine more efficient optimal networks by avoiding redundant stations whose hydrologic information is transferable. Additionally, four quantization cases were applied in the entropy calculations to assess their implications for the station rankings and the optimal networks. The results showed that the quantization method should be selected carefully because the rankings and optimal networks change accordingly.

  4. Adaptive Neural Network-Based Event-Triggered Control of Single-Input Single-Output Nonlinear Discrete-Time Systems.

    PubMed

    Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani

    2016-01-01

    This paper presents a novel adaptive neural network (NN) control of single-input and single-output uncertain nonlinear discrete-time systems under event sampled NN inputs. In this control scheme, the feedback signals are transmitted, and the NN weights are tuned in an aperiodic manner at the event sampled instants. After reviewing the NN approximation property with event sampled inputs, an adaptive state estimator (SE), consisting of linearly parameterized NNs, is utilized to approximate the unknown system dynamics in an event sampled context. The SE is viewed as a model and its approximated dynamics and the state vector, during any two events, are utilized for the event-triggered controller design. An adaptive event-trigger condition is derived by using both the estimated NN weights and a dead-zone operator to determine the event sampling instants. This condition both facilitates the NN approximation and reduces the transmission of feedback signals. The ultimate boundedness of both the NN weight estimation error and the system state vector is demonstrated through the Lyapunov approach. As expected, during an initial online learning phase, events are observed more frequently. Over time with the convergence of the NN weights, the inter-event times increase, thereby lowering the number of triggered events. These claims are illustrated through the simulation results.

  5. Local self-uniformity in photonic networks.

    PubMed

    Sellers, Steven R; Man, Weining; Sahba, Shervin; Florescu, Marian

    2017-02-17

    The interaction of a material with light is intimately related to its wavelength-scale structure. Simple connections between structure and optical response empower us with essential intuition to engineer complex optical functionalities. Here we develop local self-uniformity (LSU) as a measure of a random network's internal structural similarity, ranking networks on a continuous scale from crystalline, through glassy intermediate states, to chaotic configurations. We demonstrate that complete photonic bandgap structures possess substantial LSU and validate LSU's importance in gap formation through design of amorphous gyroid structures. Amorphous gyroid samples are fabricated via three-dimensional ceramic printing and the bandgaps experimentally verified. We explore also the wing-scale structuring in the butterfly Pseudolycaena marsyas and show that it possesses substantial amorphous gyroid character, demonstrating the subtle order achieved by evolutionary optimization and the possibility of an amorphous gyroid's self-assembly.

  6. Local self-uniformity in photonic networks

    NASA Astrophysics Data System (ADS)

    Sellers, Steven R.; Man, Weining; Sahba, Shervin; Florescu, Marian

    2017-02-01

    The interaction of a material with light is intimately related to its wavelength-scale structure. Simple connections between structure and optical response empower us with essential intuition to engineer complex optical functionalities. Here we develop local self-uniformity (LSU) as a measure of a random network's internal structural similarity, ranking networks on a continuous scale from crystalline, through glassy intermediate states, to chaotic configurations. We demonstrate that complete photonic bandgap structures possess substantial LSU and validate LSU's importance in gap formation through design of amorphous gyroid structures. Amorphous gyroid samples are fabricated via three-dimensional ceramic printing and the bandgaps experimentally verified. We explore also the wing-scale structuring in the butterfly Pseudolycaena marsyas and show that it possesses substantial amorphous gyroid character, demonstrating the subtle order achieved by evolutionary optimization and the possibility of an amorphous gyroid's self-assembly.

  7. Link and Network Layers Design for Ultra-High-Speed Terahertz-Band Communications Networks

    DTIC Science & Technology

    2017-01-01

    throughput, and identify the optimal parameter values for their design (Sec. 6.2.3). Moreover, we validate and test the scheme with experimental data obtained...LINK AND NETWORK LAYERS DESIGN FOR ULTRA-HIGH- SPEED TERAHERTZ-BAND COMMUNICATIONS NETWORKS STATE UNIVERSITY OF NEW YORK (SUNY) AT BUFFALO JANUARY...TYPE FINAL TECHNICAL REPORT 3. DATES COVERED (From - To) FEB 2015 – SEP 2016 4. TITLE AND SUBTITLE LINK AND NETWORK LAYERS DESIGN FOR ULTRA-HIGH

  8. Robotic platform for traveling on vertical piping network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nance, Thomas A; Vrettos, Nick J; Krementz, Daniel

    This invention relates generally to robotic systems and is specifically designed for a robotic system that can navigate vertical pipes within a waste tank or similar environment. The robotic system allows a process for sampling, cleaning, inspecting and removing waste around vertical pipes by supplying a robotic platform that uses the vertical pipes to support and navigate the platform above waste material contained in the tank.

  9. Environmental and Water Quality Operational Studies. General Guidelines for Monitoring Contaminants in Reservoirs

    DTIC Science & Technology

    1986-02-01

    especially true for the topics of sampling and analytical methods, statistical considerations, and the design of general water quality monitoring networks. For...and to the establishment and habitat differentiation of biological populations within reservoirs. Reservoir operation, especially the timing... properties of bottom sediments, as well as specific habitat associations of biological populations of reservoirs. Thus, such heterogeneities

  10. Designing Industrial Networks Using Ecological Food Web Metrics.

    PubMed

    Layton, Astrid; Bras, Bert; Weissburg, Marc

    2016-10-18

    Biologically Inspired Design (biomimicry) and Industrial Ecology both look to natural systems to enhance the sustainability and performance of engineered products, systems and industries. Bioinspired design (BID) traditionally has focused on a unit operation and single product level. In contrast, this paper describes how principles of network organization derived from analysis of ecosystem properties can be applied to industrial system networks. Specifically, this paper examines the applicability of particular food web matrix properties as design rules for economically and biologically sustainable industrial networks, using an optimization model developed for a carpet recycling network. Carpet recycling network designs based on traditional cost and emissions based optimization are compared to designs obtained using optimizations based solely on ecological food web metrics. The analysis suggests that networks optimized using food web metrics also were superior from a traditional cost and emissions perspective; correlations between optimization using ecological metrics and traditional optimization ranged generally from 0.70 to 0.96, with flow-based metrics being superior to structural parameters. Four structural food web parameters together provided correlations nearly the same as those obtained using all structural parameters, but individual structural parameters provided much less satisfactory correlations. The analysis indicates that bioinspired design principles from ecosystems can lead to both environmentally and economically sustainable industrial resource networks, and represent guidelines for designing sustainable industry networks.

  11. Social Network Type and Subjective Well-being in a National Sample of Older Americans

    PubMed Central

    Litwin, Howard; Shiovitz-Ezra, Sharon

    2011-01-01

    Purpose: The study considers the social networks of older Americans, a population for whom there have been few studies of social network type. It also examines associations between network types and well-being indicators: loneliness, anxiety, and happiness. Design and Methods: A subsample of persons aged 65 years and older from the first wave of the National Social Life, Health, and Aging Project was employed (N = 1,462). We applied K-means cluster analysis to derive social network types using 7 criterion variables. In the multivariate stage, the well-being outcomes were regressed on the network type construct and on background and health characteristics by means of logistic regression. Results: Five social network types were derived: “diverse,” “friend,” “congregant,” “family,” and “restricted.” Social network type was found to be associated with each of the well-being indicators after adjusting for demographic and health confounders. Respondents embedded in network types characterized by greater social capital tended to exhibit better well-being in terms of less loneliness, less anxiety, and greater happiness. Implications: Knowledge about differing network types should make gerontological practitioners more aware of the varying interpersonal milieus in which older people function. Adopting network type assessment as an integral part of intake procedures and tracing network shifts over time can serve as a basis for risk assessment as well as a means for determining the efficacy of interventions. PMID:21097553

  12. Optimizing observational networks combining gliders, moored buoys and FerryBox in the Bay of Biscay and English Channel

    NASA Astrophysics Data System (ADS)

    Charria, Guillaume; Lamouroux, Julien; De Mey, Pierre

    2016-10-01

    Designing optimal observation networks in coastal oceans remains one of the major challenges towards the implementation of future efficient Integrated Ocean Observing Systems to monitor the coastal environment. In the Bay of Biscay and the English Channel, the diversity of involved processes (e.g. tidally-driven circulation, plume dynamics) requires to adapt observing systems to the specific targeted environments. Also important is the requirement for those systems to sustain coastal applications. Two observational network design experiments have been implemented for the spring season in two regions: the Loire River plume (northern part of the Bay of Biscay) and the Western English Channel. The method used to perform these experiments is based on the ArM (Array Modes) formalism using an ensemble-based approach without data assimilation. The first experiment in the Loire River plume aims to explore different possible glider endurance lines combined with a fixed mooring to monitor temperature and salinity. Main results show an expected improvement when combining glider and mooring observations. The experiment also highlights that the chosen transect (along-shore and North-South, cross-shore) does not significantly impact the efficiency of the network. Nevertheless, the classification from the method results in slightly better performances for along-shore and North-South sections. In the Western English Channel, a tidally-driven circulation system, added value of using a glider below FerryBox temperature and salinity measurements has been assessed. FerryBox systems are characterised by a high frequency sampling rate crossing the region 2 to 3 times a day. This efficient sampling, as well as the specific vertical hydrological structure (which is homogeneous in many sub-regions of the domain), explains the fact that the added value of an associated glider transect is not significant. These experiments combining existing and future observing systems, as well as numerical ensemble simulations, highlight the key issue of monitoring the whole water column in and close to river plumes (using gliders for example) and the efficiency of the surface high frequency sampling from FerryBoxes in macrotidal regions.

  13. 78 FR 8686 - Establishment of the National Freight Network

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-06

    ... Network AGENCY: Federal Highway Administration (FHWA), DOT. ACTION: Notice. SUMMARY: This notice defines the planned process for the designation of the national freight network as required by Section 1115 of... the initial designation of the primary freight network, the designation of additional miles critical...

  14. Bias and precision of selected analytes reported by the National Atmospheric Deposition Program and National Trends Network, 1984

    USGS Publications Warehouse

    Brooks, M.H.; Schroder, L.J.; Willoughby, T.C.

    1987-01-01

    The U.S. Geological Survey operated a blind audit sample program during 1984 to test the effects of the sample handling and shipping procedures used by the National Atmospheric Deposition Program and National Trends Network on the quality of wet deposition data produced by the combined networks. Blind audit samples, which were dilutions of standard reference water samples, were submitted by network site operators to the central analytical laboratory disguised as actual wet deposition samples. Results from the analyses of blind audit samples were used to calculate estimates of analyte bias associated with all network wet deposition samples analyzed in 1984 and to estimate analyte precision. Concentration differences between double-blind samples that were submitted to the central analytical laboratory and separate analyses of aliquots of those blind audit samples that had not undergone network sample handling and shipping were used to calculate the analyte masses apparently added to each blind audit sample by routine network handling and shipping procedures. These calculated masses indicated statistically significant biases for magnesium, sodium, potassium, chloride, and sulfate. Median calculated masses were 41.4 micrograms (ug) for calcium, 14.9 ug for magnesium, 23.3 ug for sodium, 0.7 ug for potassium, 16.5 ug for chloride, and 55.3 ug for sulfate. Analyte precision was estimated using two different sets of replicate measures performed by the central analytical laboratory. Estimated standard deviations were similar to those previously reported. (Author's abstract)

  15. Robust Network Design - Connectivity and Beyond

    DTIC Science & Technology

    2015-01-15

    utilize a heterogeneous set of physical links (RF, Optical/Laser and SATCOM), for interconnecting a set of terrestrial, space and highly mobile airborne...design of mobility patterns of airborne platforms to provide stable operating conditions,  the design of networks that enable graceful performance...research effort, Airborne Network research was primarily directed towards Mobile Ad-hoc Networks (MANET). From our experience in design and

  16. Assessing NIR & MIR Spectral Analysis as a Method for Soil C Estimation Across a Network of Sampling Sites

    NASA Astrophysics Data System (ADS)

    Spencer, S.; Ogle, S.; Borch, T.; Rock, B.

    2008-12-01

    Monitoring soil C stocks is critical to assess the impact of future climate and land use change on carbon sinks and sources in agricultural lands. A benchmark network for monitoring soil carbon stock changes is being designed for US agricultural lands, with 3000-5000 sites anticipated and re-sampling on a 5- to 10-year basis. Approximately 1000 sites would be sampled per year, producing around 15,000 soil samples to be processed for total, organic, and inorganic carbon, as well as bulk density and nitrogen. Laboratory processing of soil samples is cost and time intensive, therefore we are testing the efficacy of using near-infrared (NIR) and mid-infrared (MIR) spectral methods for estimating soil carbon. As part of an initial implementation of national soil carbon monitoring, we collected over 1800 soil samples from 45 cropland sites in the mid-continental region of the U.S. Samples were processed using standard laboratory methods to determine the variables above. Carbon and nitrogen were determined by dry combustion, and inorganic carbon was estimated with an acid-pressure test. 600 samples are being scanned using a bench-top NIR reflectance spectrometer (30 g of 2 mm oven-dried soil and 30 g of 8 mm air-dried soil) and 500 samples using a MIR Fourier-Transform Infrared Spectrometer (FTIR) with a DRIFT reflectance accessory (0.2 g oven-dried ground soil). Lab-measured carbon will be compared to spectrally estimated carbon contents using a Partial Least Squares (PLS) multivariate statistical approach. PLS attempts to develop a soil C predictive model that can then be used to estimate C in soil samples that are not lab-processed. Spectral analysis of whole or partially processed soil samples can potentially save both funding and sample-processing time. This is particularly relevant for the implementation of a national monitoring network for soil carbon. This poster will discuss our methods, initial results, and the potential for using NIR and MIR spectral approaches to either replace or augment traditional lab-based carbon analyses of soils.
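
    The PLS calibration step described above can be sketched with scikit-learn; the spectra and carbon values below are synthetic placeholders, and the real calibration would use the lab-measured C of the scanned soil samples.

```python
# Sketch: PLS regression calibrating NIR/MIR spectra against lab-measured soil C
# (synthetic spectra only; stands in for the study's calibration step).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_samples, n_wavelengths = 600, 350
carbon = rng.uniform(0.2, 5.0, n_samples)              # % soil C (synthetic)
basis = rng.normal(size=(3, n_wavelengths))             # hypothetical spectral features
spectra = (np.outer(carbon, basis[0])                   # carbon-related signal
           + np.outer(rng.normal(size=n_samples), basis[1])
           + np.outer(rng.normal(size=n_samples), basis[2])
           + 0.1 * rng.normal(size=(n_samples, n_wavelengths)))  # noise

X_tr, X_te, y_tr, y_te = train_test_split(spectra, carbon, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
print("R^2 on held-out samples: %.3f" % pls.score(X_te, y_te))
```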

  17. Surgical Site Infections Following Pediatric Ambulatory Surgery: An Epidemiologic Analysis.

    PubMed

    Rinke, Michael L; Jan, Dominique; Nassim, Janelle; Choi, Jaeun; Choi, Steven J

    2016-08-01

    OBJECTIVE To identify surgical site infection (SSI) rates following pediatric ambulatory surgery, SSI outcomes and risk factors, and sensitivity and specificity of SSI administrative billing codes. DESIGN Retrospective chart review of pediatric ambulatory surgeries with International Classification of Disease, Ninth Revision (ICD-9) codes for SSI, and a systematic random sampling of 5% of surgeries without SSI ICD-9 codes, all adjudicated for SSI on the basis of an ambulatory-adapted National Healthcare Safety Network definition. SETTING Urban pediatric tertiary care center, April 1, 2009-March 31, 2014. METHODS SSI rates and sensitivity and specificity of ICD-9 codes were estimated using the sampling design, and risk factors were analyzed in case-rest-of-cohort and case-control designs. RESULTS In 15,448 pediatric ambulatory surgeries, 34 patients had ICD-9 codes for SSI and 25 met the adapted National Healthcare Safety Network criteria. One additional SSI was identified with systematic random sampling. The SSI rate following pediatric ambulatory surgery was 2.9 per 1,000 surgeries (95% CI, 1.2-6.9). Otolaryngology surgeries demonstrated significantly lower SSI rates compared with endocrine (P=.001), integumentary (P=.001), male genital (P<.0001), and respiratory (P=.01) surgeries. Almost half of patients with an SSI were admitted, 88% received antibiotics, and 15% returned to the operating room. No risk factors were associated with SSI. The sensitivity of ICD-9 codes for SSI following ambulatory surgery was 55.31% (95% CI, 12.69%-91.33%) and specificity was 99.94% (99.89%-99.97%). CONCLUSIONS SSI following pediatric ambulatory surgery occurs at an appreciable rate and conveys morbidity on children. Infect Control Hosp Epidemiol 2016;37:931-938.

  18. Bridging Social Circles: Need for Cognition, Prejudicial Judgments, and Personal Social Network Characteristics

    PubMed Central

    Curşeu, Petru L.; de Jong, Jeroen P.

    2017-01-01

    Various factors pertaining to the social context (availability of plausible social contacts) as well as personality traits influence the emergence of social ties that ultimately compose one’s personal social network. We build on a situational selection model to argue that personality traits influence the cognitive processing of social cues that in turn influences the preference for particular social ties. More specifically, we use a cross-lagged design to test a mediation model explaining the effects of need for cognition (NFC) on egocentric network characteristics. We used the data available in the LISS panel, in which a probabilistic sample of Dutch participants were asked to fill in surveys annually. We tested our model on data collected in three successive years and our results show that people scoring high in NFC tend to revolve in information-rich egocentric networks, characterized by high demographic diversity, high interpersonal dissimilarity, and high average education. The results also show that the effect of NFC on social network characteristics is mediated by non-prejudicial judgments. PMID:28790948

  19. Respondent-driven sampling and the recruitment of people with small injecting networks.

    PubMed

    Paquette, Dana; Bryant, Joanne; de Wit, John

    2012-05-01

    Respondent-driven sampling (RDS) is a form of chain-referral sampling, similar to snowball sampling, which was developed to reach hidden populations such as people who inject drugs (PWID). RDS is said to reach members of a hidden population that may not be accessible through other sampling methods. However, less attention has been paid as to whether there are segments of the population that are more likely to be missed by RDS. This study examined the ability of RDS to capture people with small injecting networks. A study of PWID, using RDS, was conducted in 2009 in Sydney, Australia. The size of participants' injecting networks was examined by recruitment chain and wave. Participants' injecting network characteristics were compared to those of participants from a separate pharmacy-based study. A logistic regression analysis was conducted to examine the characteristics independently associated with having small injecting networks, using the combined RDS and pharmacy-based samples. In comparison with the pharmacy-recruited participants, RDS participants were almost 80% less likely to have small injecting networks, after adjusting for other variables. RDS participants were also more likely to have their injecting networks form a larger proportion of those in their social networks, and to have acquaintances as part of their injecting networks. Compared to those with larger injecting networks, individuals with small injecting networks were equally likely to engage in receptive sharing of injecting equipment, but less likely to have had contact with prevention services. These findings suggest that those with small injecting networks are an important group to recruit, and that RDS is less likely to capture these individuals.

  20. Design of a ground-water-quality monitoring network for the Salinas River basin, California

    USGS Publications Warehouse

    Showalter, P.K.; Akers, J.P.; Swain, L.A.

    1984-01-01

    A regional ground-water quality monitoring network for the entire Salinas River drainage basin was designed to meet the needs of the California State Water Resources Control Board. The project included phase 1--identifying monitoring networks that exist in the region; phase 2--collecting information about the wells in each network; and phase 3--studying the factors--such as geology, land use, hydrology, and geohydrology--that influence the ground-water quality, and designing a regional network. This report is the major product of phase 3. Based on the authors' understanding of the ground-water-quality monitoring system and input from local offices, an ideal network was designed. The proposed network includes 317 wells and 8 stream-gaging stations. Because limited funds are available to implement the monitoring network, the proposed network is designed to correspond to the ideal network insofar as practicable, and is composed mainly of 214 wells that are already being monitored by a local agency. In areas where network wells are not available, arrangements will be made to add wells to local networks. The data collected by this network will be used to assess the ground-water quality of the entire Salinas River drainage basin. After 2 years of data are collected, the network will be evaluated to test whether it is meeting the network objectives. Subsequent network evaluations will be done every 5 years. (USGS)

  1. Respondent-driven sampling as Markov chain Monte Carlo.

    PubMed

    Goel, Sharad; Salganik, Matthew J

    2009-07-30

    Respondent-driven sampling (RDS) is a recently introduced, and now widely used, technique for estimating disease prevalence in hidden populations. RDS data are collected through a snowball mechanism, in which current sample members recruit future sample members. In this paper we present RDS as Markov chain Monte Carlo importance sampling, and we examine the effects of community structure and the recruitment procedure on the variance of RDS estimates. Past work has assumed that the variance of RDS estimates is primarily affected by segregation between healthy and infected individuals. We examine an illustrative model to show that this is not necessarily the case, and that bottlenecks anywhere in the networks can substantially affect estimates. We also show that variance is inflated by a common design feature in which the sample members are encouraged to recruit multiple future sample members. The paper concludes with suggestions for implementing and evaluating RDS studies.
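
    A minimal sketch of the random-walk view of RDS discussed above, assuming a synthetic contact network and the standard inverse-degree reweighting of walk samples; the graph model, walk length, and trait prevalence are illustrative assumptions, not the paper's setup.

        # Sketch: a random walk over a contact network, reweighted by inverse degree
        # to correct for the walk's degree-proportional stationary distribution.
        import random
        import networkx as nx

        random.seed(1)
        G = nx.barabasi_albert_graph(2000, 3)
        infected = {n for n in G if random.random() < 0.2}     # hypothetical trait, ~20% prevalence

        def rds_walk(graph, start, steps):
            node, visited = start, []
            for _ in range(steps):
                visited.append(node)
                node = random.choice(list(graph.neighbors(node)))  # one referral per wave
            return visited

        sample = rds_walk(G, start=0, steps=500)
        weights = [1.0 / G.degree(n) for n in sample]
        estimate = sum(w * (n in infected) for n, w in zip(sample, weights)) / sum(weights)
        print("naive sample mean:", sum(n in infected for n in sample) / len(sample))
        print("degree-weighted estimate:", round(estimate, 3),
              "true prevalence:", round(len(infected) / G.number_of_nodes(), 3))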

  2. Phenotypic constraints promote latent versatility and carbon efficiency in metabolic networks.

    PubMed

    Bardoscia, Marco; Marsili, Matteo; Samal, Areejit

    2015-07-01

    System-level properties of metabolic networks may be the direct product of natural selection or arise as a by-product of selection on other properties. Here we study the effect of direct selective pressure for growth or viability in particular environments on two properties of metabolic networks: latent versatility to function in additional environments and carbon usage efficiency. Using a Markov chain Monte Carlo (MCMC) sampling based on flux balance analysis (FBA), we sample from a known biochemical universe random viable metabolic networks that differ in the number of directly constrained environments. We find that the latent versatility of sampled metabolic networks increases with the number of directly constrained environments and with the size of the networks. We then show that the average carbon wastage of sampled metabolic networks across the constrained environments decreases with the number of directly constrained environments and with the size of the networks. Our work expands the growing body of evidence about nonadaptive origins of key functional properties of biological networks.

  3. Performance modeling & simulation of complex systems (A systems engineering design & analysis approach)

    NASA Technical Reports Server (NTRS)

    Hall, Laverne

    1995-01-01

    Modeling of the Multi-mission Image Processing System (MIPS) will be described as an example of the use of a modeling tool to design a distributed system that supports multiple application scenarios. This paper examines: (a) modeling tool selection, capabilities, and operation (namely NETWORK 2.5 by CACI), (b) pointers for building or constructing a model and how the MIPS model was developed, (c) the importance of benchmarking or testing the performance of equipment/subsystems being considered for incorporation into the design/architecture, (d) the essential step of model validation and/or calibration using the benchmark results, (e) sample simulation results from the MIPS model, and (f) how modeling and simulation analysis affected the MIPS design process by having a supportive and informative impact.

  4. Long-term monitoring of blazars - the DWARF network

    NASA Astrophysics Data System (ADS)

    Backes, Michael; Biland, Adrian; Boller, Andrea; Braun, Isabel; Bretz, Thomas; Commichau, Sebastian; Commichau, Volker; Dorner, Daniela; von Gunten, Hanspeter; Gendotti, Adamo; Grimm, Oliver; Hildebrand, Dorothée; Horisberger, Urs; Krähenbühl, Thomas; Kranich, Daniel; Lustermann, Werner; Mannheim, Karl; Neise, Dominik; Pauss, Felicitas; Renker, Dieter; Rhode, Wolfgang; Rissi, Michael; Rollke, Sebastian; Röser, Ulf; Stark, Luisa Sabrina; Stucki, Jean-Pierre; Viertel, Gert; Vogler, Patrick; Weitzel, Quirin

    The variability of the very high energy (VHE) emission from blazars seems to be connected with the feeding and propagation of relativistic jets and with their origin in supermassive black hole binaries. The key to understanding their properties is measuring well-sampled gamma-ray lightcurves, revealing the typical source behavior unbiased by prior knowledge from other wavebands. Using ground-based gamma-ray observatories with exposures limited by dark-time, a global network of several telescopes is needed to carry out full-time measurements. Obviously, such observations are time-consuming and, therefore, cannot be carried out with the present state-of-the-art instruments. The DWARF telescope on the Canary Island of La Palma is dedicated to monitoring observations. It is currently being set up, employing a cost-efficient and robotic design. Part of this project is the future construction of a distributed network of small telescopes. The physical motivation of VHE long-term monitoring will be outlined in detail and the perspective for a network for 24/7 observations will be presented.

  5. Hybrid multiphoton volumetric functional imaging of large-scale bioengineered neuronal networks

    NASA Astrophysics Data System (ADS)

    Dana, Hod; Marom, Anat; Paluch, Shir; Dvorkin, Roman; Brosh, Inbar; Shoham, Shy

    2014-06-01

    Planar neural networks and interfaces serve as versatile in vitro models of central nervous system physiology, but adaptations of related methods to three dimensions (3D) have met with limited success. Here, we demonstrate for the first time volumetric functional imaging in a bioengineered neural tissue growing in a transparent hydrogel with cortical cellular and synaptic densities, by introducing complementary new developments in nonlinear microscopy and neural tissue engineering. Our system uses a novel hybrid multiphoton microscope design combining a 3D scanning-line temporal-focusing subsystem and a conventional laser-scanning multiphoton microscope to provide functional and structural volumetric imaging capabilities: dense microscopic 3D sampling at tens of volumes per second of structures with mm-scale dimensions containing a network of over 1,000 developing cells with complex spontaneous activity patterns. These developments open new opportunities for large-scale neuronal interfacing and for applications of 3D engineered networks ranging from basic neuroscience to the screening of neuroactive substances.

  6. Cybersim: geographic, temporal, and organizational dynamics of malware propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santhi, Nandakishore; Yan, Guanhua; Eidenbenz, Stephan

    2010-01-01

    Cyber-infractions into a nation's strategic security envelope pose a constant and daunting challenge. We present the modular CyberSim tool which has been developed in response to the need to realistically simulate, at a national level, software vulnerabilities and resulting malware propagation in online social networks. The CyberSim suite (a) can generate realistic scale-free networks from a database of geocoordinated computers to closely model social networks arising from personal and business email contacts and online communities; (b) maintains for each host a list of installed software, along with the latest published vulnerabilities; (d) allows designated initial nodes where malware gets introduced; (e) simulates, using distributed discrete event-driven technology, the spread of malware exploiting a specific vulnerability, with packet delay and user online behavior models; (f) provides a graphical visualization of the spread of infection, its severity, businesses affected, etc., to the analyst. We present sample simulations on a national-level network with millions of computers.
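
    A minimal sketch of the propagation component described above, assuming a Barabási-Albert graph as the scale-free contact network and a fixed per-contact transmission probability; neither assumption reflects the actual CyberSim implementation.

        # Sketch: discrete-time malware spread over a scale-free contact graph
        # from a few designated seed hosts.
        import random
        import networkx as nx

        random.seed(0)
        G = nx.barabasi_albert_graph(5000, 2)                   # scale-free email/contact network
        vulnerable = {n for n in G if random.random() < 0.3}    # hosts running the vulnerable software
        infected = set(random.sample(sorted(vulnerable), 5))    # designated initial nodes

        p_transmit, history = 0.2, []
        for step in range(20):
            newly = set()
            for u in infected:
                for v in G.neighbors(u):
                    if v in vulnerable and v not in infected and random.random() < p_transmit:
                        newly.add(v)
            infected |= newly
            history.append(len(infected))
        print("infected hosts per step:", history)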

  7. State feedback controller design for the synchronization of Boolean networks with time delays

    NASA Astrophysics Data System (ADS)

    Li, Fangfei; Li, Jianning; Shen, Lijuan

    2018-01-01

    The design of state feedback controllers that make a response Boolean network synchronize with a drive Boolean network is far from solved in the literature. Motivated by this, this paper studies the feedback control design for the complete synchronization of two coupled Boolean networks with time delays. A necessary condition for the existence of a state feedback controller is derived first. Then the feedback control design procedure for the complete synchronization of two coupled Boolean networks is provided based on the necessary condition. Finally, an example is given to illustrate the proposed design procedure.

  8. Engineering a Functional Small RNA Negative Autoregulation Network with Model-Guided Design.

    PubMed

    Hu, Chelsea Y; Takahashi, Melissa K; Zhang, Yan; Lucks, Julius B

    2018-05-22

    RNA regulators are powerful components of the synthetic biology toolbox. Here, we expand the repertoire of synthetic gene networks built from these regulators by constructing a transcriptional negative autoregulation (NAR) network out of small RNAs (sRNAs). NAR network motifs are core motifs of natural genetic networks, and are known for reducing network response time and steady state signal. Here we use cell-free transcription-translation (TX-TL) reactions and a computational model to design and prototype sRNA NAR constructs. Using parameter sensitivity analysis, we design a simple set of experiments that allow us to accurately predict NAR function in TX-TL. We transfer successful network designs into Escherichia coli and show that our sRNA transcriptional network reduces both network response time and steady-state gene expression. This work broadens our ability to construct increasingly sophisticated RNA genetic networks with predictable function.
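
    A minimal ODE sketch of why negative autoregulation reduces response time and steady-state level, the qualitative behavior the sRNA circuit above is built to reproduce; this is a textbook toy model with arbitrary parameters, not the paper's TX-TL model.

        # Sketch: rise time of an unregulated gene vs. one under negative autoregulation.
        import numpy as np
        from scipy.integrate import odeint

        beta, gamma, K = 10.0, 1.0, 0.5          # production rate, degradation rate, repression threshold

        def unregulated(x, t):
            return beta - gamma * x

        def nar(x, t):
            return beta * K / (K + x) - gamma * x    # production repressed by the gene's own product

        t = np.linspace(0, 5, 200)
        x_open = odeint(unregulated, 0.0, t).ravel()
        x_nar = odeint(nar, 0.0, t).ravel()

        for label, x in (("unregulated", x_open), ("NAR", x_nar)):
            steady = x[-1]
            half_rise = t[np.argmax(x >= 0.5 * steady)]   # time to reach half of steady state
            print(f"{label:>12s}: steady state {steady:5.2f}, half-rise time {half_rise:4.2f}")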

  9. Optimal cost design of water distribution networks using a decomposition approach

    NASA Astrophysics Data System (ADS)

    Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon

    2016-12-01

    Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, which is a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This proves that the final design in this study is better than those obtained with other previously proposed optimization algorithms.

  10. Method for Constructing Composite Response Surfaces by Combining Neural Networks with other Interpolation or Estimation Techniques

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)

    2003-01-01

    A method and system for design optimization that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The present invention employs a unique strategy called parameter-based partitioning of the given design space. In the design procedure, a sequence of composite response surfaces based on both neural networks and polynomial fits is used to traverse the design space to identify an optimal solution. The composite response surface has both the power of neural networks and the economy of low-order polynomials (in terms of the number of simulations needed and the network training requirements). The present invention handles design problems with many more parameters than would be possible using neural networks alone and permits a designer to rapidly perform a variety of trade-off studies before arriving at the final design.
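
    A sketch of the general composite-surface idea described above, assuming the simplest possible split: a quadratic polynomial fit plus a small neural network trained on its residuals. The data, model sizes, and blending rule are illustrative, not the patented procedure.

        # Sketch: composite response surface = low-order polynomial + neural net on residuals.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import PolynomialFeatures

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(200, 3))                      # three design parameters
        y = 1.0 + X[:, 0] - 2 * X[:, 1] ** 2 + 0.3 * np.sin(4 * X[:, 2])  # simulated responses

        poly = PolynomialFeatures(degree=2)
        Xp = poly.fit_transform(X)
        lin = LinearRegression().fit(Xp, y)                         # economical quadratic surface
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
        net.fit(X, y - lin.predict(Xp))                             # net captures what the polynomial misses

        def composite(X_new):
            return lin.predict(poly.transform(X_new)) + net.predict(X_new)

        print("max abs error on training designs:", np.abs(composite(X) - y).max())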

  11. 78 FR 75442 - Designation of the Primary Freight Network

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-11

    ...] Designation of the Primary Freight Network AGENCY: Federal Highway Administration (FHWA), DOT. ACTION: Notice... period for the Designation of the highway Primary Freight Network (PFN) notice, which was published on... the complete National Freight Network (NFN), and to solicit comments on aspects of the NFN. The five...

  12. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.

    PubMed

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-08-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. As Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.

  13. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks

    PubMed Central

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-01-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. As Monte-Carlo sampling, it is “non-intrusive” and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design. PMID:26317784

  14. A Numerical Climate Observing Network Design Study

    NASA Technical Reports Server (NTRS)

    Stammer, Detlef

    2003-01-01

    This project was concerned with three related questions of an optimal design of a climate observing system: 1. The spatial sampling characteristics required from an ARGO system. 2. The degree to which surface observations from ARGO can be used to calibrate and test satellite remote sensing observations of sea surface salinity (SSS) as it is anticipated now. 3. The more general design of a climate observing system as it is required in the near future for CLIVAR in the Atlantic. An important question in implementing an observing system is that of the sampling density required to observe climate-related variations in the ocean. For that purpose this project was concerned with the sampling requirements for the ARGO float system, but also investigated other elements of a climate observing system. As part of this project we studied the horizontal and vertical sampling characteristics of a global ARGO system that is required to make it fully complementary to altimeter data, with the goal of capturing climate-related variations on large spatial scales (less than 1000 km). We addressed this question in the framework of a numerical model study in the North Atlantic with a 1/6 horizontal resolution. The advantage of a numerical design study is the knowledge of the full model state. Sampled by a synthetic float array, model results therefore allow us to test and improve existing deployment strategies with the goal of making the system as optimal and cost-efficient as possible. Attachment: "Optimal observations for variational data assimilation".

  15. Selective Narrowing of Social Networks Across Adulthood is Associated With Improved Emotional Experience in Daily Life.

    PubMed

    English, Tammy; Carstensen, Laura L

    2014-03-01

    Past research has documented age differences in the size and composition of social networks that suggest that networks grow smaller with age and include an increasingly greater proportion of well-known social partners. According to socioemotional selectivity theory, such changes in social network composition serve an antecedent emotion regulatory function that supports an age-related increase in the priority that people place on emotional well-being. The present study employed a longitudinal design with a sample that spanned the full adult age range to examine whether there is evidence of within-individual (developmental) change in social networks and whether the characteristics of relationships predict emotional experiences in daily life. Using growth curve analyses, social networks were found to increase in size in young adulthood and then decline steadily throughout later life. As postulated by socioemotional selectivity theory, reductions were observed primarily in the number of peripheral partners; the number of close partners was relatively stable over time. In addition, cross-sectional analyses revealed that older adults reported that social network members elicited less negative emotion and more positive emotion. The emotional tone of social networks, particularly when negative emotions were associated with network members, also predicted experienced emotion of participants. Overall, findings were robust after taking into account demographic variables and physical health. The implications of these findings are discussed in the context of socioemotional selectivity theory and related theoretical models.

  16. Mapping soil landscape as spatial continua: The Neural Network Approach

    NASA Astrophysics Data System (ADS)

    Zhu, A.-Xing

    2000-03-01

    A neural network approach was developed to populate a soil similarity model that was designed to represent soil landscape as spatial continua for hydroecological modeling at watersheds of mesoscale size. The approach employs multilayer feed forward neural networks. The input to the network was data on a set of soil formative environmental factors; the output from the network was a set of similarity values to a set of prescribed soil classes. The network was trained using a conjugate gradient algorithm in combination with a simulated annealing technique to learn the relationships between a set of prescribed soils and their environmental factors. Once trained, the network was used to compute for every location in an area the similarity values of the soil to the set of prescribed soil classes. The similarity values were then used to produce detailed soil spatial information. The approach also included a Geographic Information System procedure for selecting representative training and testing samples and a process of determining the network internal structure. The approach was applied to soil mapping in a watershed, the Lubrecht Experimental Forest, in western Montana. The case study showed that the soil spatial information derived using the neural network approach reveals much greater spatial detail and has a higher quality than that derived from the conventional soil map. Implications of this detailed soil spatial information for hydroecological modeling at the watershed scale are also discussed.
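
    A minimal sketch of the core mapping step described above, assuming softmax class probabilities from a small feed-forward network stand in for the similarity values; the covariates, soil classes, and training data are invented, and the original work used a conjugate-gradient/simulated-annealing training scheme rather than scikit-learn's defaults.

        # Sketch: feed-forward network mapping environmental covariates to
        # similarity (membership) values for prescribed soil classes.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(300, 5))            # e.g., elevation, slope, aspect, wetness, canopy
        y_train = rng.integers(0, 4, size=300)         # four prescribed soil classes at training pedons

        net = MLPClassifier(hidden_layer_sizes=(12,), max_iter=3000, random_state=0)
        net.fit(X_train, y_train)

        X_grid = rng.normal(size=(5, 5))               # covariates at unvisited grid cells
        similarity = net.predict_proba(X_grid)         # one similarity vector per location, rows sum to 1
        print(similarity.round(2))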

  17. Developing Canadian physician: the quest for leadership effectiveness.

    PubMed

    Comber, Scott; Wilson, Lisette; Crawford, Kyle C

    2016-07-04

    Purpose The purpose of this study is to discern the physicians' perception of leadership effectiveness in their clinical and non-clinical roles (leadership) by identifying their political skill levels. Design/methodology/approach A sample of 209 Canadian physicians was surveyed using the Political Skills Inventory (PSI) during the period 2012-2014. The PSI was chosen because it assesses leadership effectiveness on four dimensions: social astuteness, interpersonal influence, networking ability and apparent authenticity. Findings Physicians in clinical roles' PSI scores were significantly lower in all four PSI dimensions when compared to all other physicians in non-clinical roles, with the principal difference being in their networking abilities. Practical implications More emphasis is needed on educating and training physicians, specifically in the areas of political skills, in current clinical roles if they are to assume leadership roles and be effective. Originality/value Although this study is located in Canada, the study design and associated findings may have implications to other areas and countries wanting to increase physician leadership effectiveness. Further, replication of this study in other settings may provide insight into the future design of physician leadership training curriculum.

  18. Experimental Design for Multi-drug Combination Studies Using Signaling Networks

    PubMed Central

    Huang, Hengzhen; Fang, Hong-Bin; Tan, Ming T.

    2017-01-01

    Summary Combinations of multiple drugs are an important approach to maximize the chance for therapeutic success by inhibiting multiple pathways/targets. Analytic methods for studying drug combinations have received increasing attention because major advances in biomedical research have made available large numbers of potential agents for testing. The preclinical experiment on multi-drug combinations plays a key role in (especially cancer) drug development because of the complex nature of the disease and the need to reduce development time and costs. Despite recent progress in statistical methods for assessing drug interaction, there is an acute lack of methods for designing experiments on multi-drug combinations. The number of combinations grows exponentially with the number of drugs and dose levels, quickly precluding laboratory testing. In this paper, utilizing experimental dose-response data of single drugs and a few combinations, along with pathway/network information, to obtain an estimate of the functional structure of the dose-response relationship in silico, we propose an optimal design that allows exploration of the dose-effect surface with the smallest possible sample size. Simulation studies show that our proposed methods perform well. PMID:28960231

  19. Application of artificial neural networks to the design optimization of aerospace structural components

    NASA Technical Reports Server (NTRS)

    Berke, Laszlo; Patnaik, Surya N.; Murthy, Pappu L. N.

    1993-01-01

    The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated by using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network with the code NETS. Optimum designs for new design conditions were predicted by using the trained network. Neural net prediction of optimum designs was found to be satisfactory for most of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.

  20. Optimum Design of Aerospace Structural Components Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Berke, L.; Patnaik, S. N.; Murthy, P. L. N.

    1993-01-01

    The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires a trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network using the code NETS. Optimum designs for new design conditions were predicted using the trained network. Neural net prediction of optimum designs was found to be satisfactory for the majority of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.

  1. Enabling parallel simulation of large-scale HPC network systems

    DOE PAGES

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; ...

    2016-04-07

    Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.

  2. Enabling parallel simulation of large-scale HPC network systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.

    Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.

  3. Using turbidity for designing water networks.

    PubMed

    Castaño, J A; Higuita, J C

    2016-05-01

    Some methods to design water networks with minimum fresh water consumption are based on the selection of a key contaminant. In most of these "single contaminant methods", a maximum allowable concentration of contaminants must be established in water demands and water sources. Turbidity is not a contaminant concentration but is a property that represents the "sum" of other contaminants, with the advantage that it can be measured more cheaply and easily than biological oxygen demand, chemical oxygen demand, suspended solids, dissolved solids, among others. The objective of this paper is to demonstrate that turbidity can be used directly in the design of water networks just like any other contaminant concentration. A mathematical demonstration is presented, and in order to validate the mathematical results, the design of a water network for a guava fudge production process is performed. The material recovery pinch diagram and nearest neighbors algorithm were used for the design of the water network. Nevertheless, this water network could be designed using other single contaminant methodologies. The maximum error between the expected and the real turbidity values in the water network was 3.3%. These results corroborate the usefulness of turbidity in the design of water networks. Copyright © 2016 Elsevier Ltd. All rights reserved.
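
    A minimal sketch of the nearest-neighbors allocation rule with turbidity treated as the single "contaminant", as the paper proposes; the source list, flows, and NTU values are invented, and flow-availability checks are omitted.

        # Sketch: satisfy a demand by blending the source just cleaner and the source
        # just dirtier than the demand's maximum allowable turbidity (lever rule).
        def nearest_neighbor_blend(sources, demand_flow, max_turbidity):
            """sources: list of (name, turbidity_NTU); assumes a 0-NTU fresh source exists."""
            below = [s for s in sources if s[1] <= max_turbidity]
            above = [s for s in sources if s[1] > max_turbidity]
            if not above:                              # even the dirtiest source meets the limit
                name, _ = max(below, key=lambda s: s[1])
                return {name: demand_flow}
            cleaner = max(below, key=lambda s: s[1])
            dirtier = min(above, key=lambda s: s[1])
            f_dirty = demand_flow * (max_turbidity - cleaner[1]) / (dirtier[1] - cleaner[1])
            return {cleaner[0]: demand_flow - f_dirty, dirtier[0]: f_dirty}

        # Hypothetical sources: fresh water (0 NTU), rinse effluent (15 NTU), wash effluent (60 NTU)
        sources = [("fresh", 0.0), ("rinse", 15.0), ("wash", 60.0)]
        print(nearest_neighbor_blend(sources, demand_flow=20.0, max_turbidity=10.0))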

  4. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision.

    PubMed

    Zhong, Bineng; Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer features learned from CDBNs on the source tasks for generic purposes to the object tracking tasks using only a limited amount of training data. Finally, to alleviate the tracker drifting problem caused by model updating, we jointly consider three different types of positive samples. Extensive experiments validate the robustness and effectiveness of the proposed method.

  5. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision

    PubMed Central

    Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer features learned from CDBNs on the source tasks for generic purposes to the object tracking tasks using only a limited amount of training data. Finally, to alleviate the tracker drifting problem caused by model updating, we jointly consider three different types of positive samples. Extensive experiments validate the robustness and effectiveness of the proposed method. PMID:27847827

  6. Spectrophotometric determination of fluoxetine by molecularly imprinted polypyrrole and optimization by experimental design, artificial neural network and genetic algorithm.

    PubMed

    Nezhadali, Azizollah; Motlagh, Maryam Omidvar; Sadeghzadeh, Samira

    2018-02-05

    A selective method based on molecularly imprinted polymer (MIP) solid-phase extraction (SPE) using UV-Vis spectrophotometry as a detection technique was developed for the determination of fluoxetine (FLU) in pharmaceutical and human serum samples. The MIPs were synthesized using pyrrole as a functional monomer in the presence of FLU as a template molecule. The factors affecting the preparation and extraction ability of the MIP, such as amount of sorbent, initiator concentration, monomer-to-template ratio, uptake shaking rate, uptake time, washing buffer pH, take shaking rate, taking time, and polymerization time, were considered for optimization. First, a Plackett-Burman design (PBD) consisting of 12 randomized runs was applied to determine the influence of each factor. The other optimization processes were performed using a central composite design (CCD), an artificial neural network (ANN), and a genetic algorithm (GA). At the optimal condition, the calibration curve showed linearity over a concentration range of 10⁻⁷-10⁻⁸ M with a correlation coefficient (R²) of 0.9970. The limit of detection (LOD) for FLU was 6.56×10⁻⁹ M. The repeatability of the method was 1.61%. The synthesized MIP sorbent showed good selectivity and sensitivity toward FLU. The MIP/SPE method was successfully used for the determination of FLU in pharmaceutical, serum, and plasma samples. Copyright © 2017 Elsevier B.V. All rights reserved.
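
    A minimal sketch of how main effects are read off a two-level screening design such as the Plackett-Burman stage mentioned above; the factor subset, design matrix, and recovery values are invented, and the actual study used 12 runs over more factors.

        # Sketch: main effects from a two-level screening design.
        # Rows = runs, columns = coded factor levels (+1 / -1).
        import numpy as np

        factors = ["sorbent amount", "initiator conc.", "monomer:template", "uptake time"]
        design = np.array([
            [+1, +1, +1, -1],
            [+1, +1, -1, +1],
            [+1, -1, +1, +1],
            [-1, +1, +1, +1],
            [+1, -1, -1, -1],
            [-1, +1, -1, -1],
            [-1, -1, +1, -1],
            [-1, -1, -1, +1],
        ])
        recovery = np.array([82, 75, 88, 79, 70, 64, 81, 66], dtype=float)  # hypothetical responses (%)

        # Main effect = mean response at the high level minus mean response at the low level
        for j, name in enumerate(factors):
            effect = recovery[design[:, j] == 1].mean() - recovery[design[:, j] == -1].mean()
            print(f"{name:>18s}: effect = {effect:+5.1f}")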

  7. Methods for evaluating temporal groundwater quality data and results of decadal-scale changes in chloride, dissolved solids, and nitrate concentrations in groundwater in the United States, 1988-2010

    USGS Publications Warehouse

    Lindsey, Bruce D.; Rupert, Michael G.

    2012-01-01

    Decadal-scale changes in groundwater quality were evaluated by the U.S. Geological Survey National Water-Quality Assessment (NAWQA) Program. Samples of groundwater collected from wells during 1988-2000 - a first sampling event representing the decade ending the 20th century - were compared on a pair-wise basis to samples from the same wells collected during 2001-2010 - a second sampling event representing the decade beginning the 21st century. The data set consists of samples from 1,236 wells in 56 well networks, representing major aquifers and urban and agricultural land-use areas, with analytical results for chloride, dissolved solids, and nitrate. Statistical analysis was done on a network basis rather than by individual wells. Although spanning slightly more or less than a 10-year period, the two-sample comparison between the first and second sampling events is referred to as an analysis of decadal-scale change based on a step-trend analysis. The 22 principal aquifers represented by these 56 networks account for nearly 80 percent of the estimated withdrawals of groundwater used for drinking-water supply in the Nation. Well networks where decadal-scale changes in concentrations were statistically significant were identified using the Wilcoxon-Pratt signed-rank test. For the statistical analysis of chloride, dissolved solids, and nitrate concentrations at the network level, more than half revealed no statistically significant change over the decadal period. However, for networks that had statistically significant changes, increased concentrations outnumbered decreased concentrations by a large margin. Statistically significant increases of chloride concentrations were identified for 43 percent of 56 networks. Dissolved solids concentrations increased significantly in 41 percent of the 54 networks with dissolved solids data, and nitrate concentrations increased significantly in 23 percent of 56 networks. At least one of the three - chloride, dissolved solids, or nitrate - had a statistically significant increase in concentration in 66 percent of the networks. Statistically significant decreases in concentrations were identified in 4 percent of the networks for chloride, 2 percent of the networks for dissolved solids, and 9 percent of the networks for nitrate. A larger percentage of urban land-use networks had statistically significant increases in chloride, dissolved solids, and nitrate concentrations than agricultural land-use networks. In order to assess the magnitude of statistically significant changes, the median of the differences between constituent concentrations from the first full-network sampling event and those from the second full-network sampling event was calculated using the Turnbull method. The largest median decadal increases in chloride concentrations were in networks in the Upper Illinois River Basin (67 mg/L) and in the New England Coastal Basins (34 mg/L), whereas the largest median decadal decrease in chloride concentrations was in the Upper Snake River Basin (1 mg/L). The largest median decadal increases in dissolved solids concentrations were in networks in the Rio Grande Valley (260 mg/L) and the Upper Illinois River Basin (160 mg/L). The largest median decadal decrease in dissolved solids concentrations was in the Apalachicola-Chattahoochee-Flint River Basin (6.0 mg/L). 
The largest median decadal increases in nitrate as nitrogen (N) concentrations were in networks in the South Platte River Basin (2.0 mg/L as N) and the San Joaquin-Tulare Basins (1.0 mg/L as N). The largest median decadal decrease in nitrate concentrations was in the Santee River Basin and Coastal Drainages (0.63 mg/L). The magnitude of change in networks with statistically significant increases typically was much larger than the magnitude of change in networks with statistically significant decreases. The magnitude of change was greatest for chloride in the urban land-use networks and greatest for dissolved solids and nitrate in the agricultural land-use networks. Analysis of data from all networks combined indicated statistically significant increases for chloride, dissolved solids, and nitrate. Although chloride, dissolved solids, and nitrate concentrations were typically less than the drinking-water standards and guidelines, a statistical test was used to determine whether or not the proportion of samples exceeding the drinking-water standard or guideline changed significantly between the first and second full-network sampling events. The proportion of samples exceeding the U.S. Environmental Protection Agency (USEPA) Secondary Maximum Contaminant Level for dissolved solids (500 milligrams per liter) increased significantly between the first and second full-network sampling events when evaluating all networks combined at the national level. Also, for all networks combined, the proportion of samples exceeding the USEPA Maximum Contaminant Level (MCL) of 10 mg/L as N for nitrate increased significantly. One network in the Delmarva Peninsula had a significant increase in the proportion of samples exceeding the MCL for nitrate. A subset of 261 wells was sampled every other year (biennially) to evaluate decadal-scale changes using a time-series analysis. The analysis of the biennial data set showed that changes were generally similar to the findings from the analysis of decadal-scale change that was based on a step-trend analysis. Because of the small number of wells in a network with biennial data (typically 4-5 wells), the time-series analysis is more useful for understanding water-quality responses to changes in site-specific conditions rather than as an indicator of the change for the entire network.
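
    A minimal sketch of the paired, network-level step-trend test named above, assuming invented chloride concentrations for ten wells sampled in both decades; scipy's "pratt" option mirrors the Wilcoxon-Pratt handling of zero differences, and the Turnbull estimate of median change used in the report is replaced here by a plain median of paired differences.

        # Sketch: decadal step-trend test for one well network (paired samples).
        import numpy as np
        from scipy.stats import wilcoxon

        first_decade  = np.array([12.0, 30.5, 8.2, 45.0, 22.1, 15.3, 9.9, 27.4, 33.0, 18.6])  # mg/L
        second_decade = np.array([14.5, 36.0, 8.2, 52.3, 25.0, 15.0, 12.4, 31.1, 40.2, 19.5])  # mg/L

        stat, p = wilcoxon(second_decade, first_decade, zero_method="pratt")
        median_change = np.median(second_decade - first_decade)
        print(f"signed-rank statistic = {stat:.1f}, p = {p:.3f}, median change = {median_change:.1f} mg/L")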

  8. Commutated automatic gain control system

    NASA Technical Reports Server (NTRS)

    Yost, S. R.

    1982-01-01

    A commutated automatic gain control (AGC) system was designed and built for a prototype Loran C receiver. The receiver uses a microcomputer to control a memory-aided phase-locked loop (MAPLL). The microcomputer also controls the input/output, latitude/longitude conversion, and the recently added AGC system. The circuit designed for the AGC is described, and bench and flight test results are presented. The AGC circuit samples starting at a point 40 microseconds after a zero crossing, as determined by the software lock pulse that is ultimately generated by a 30-microsecond delay-and-add network in the receiver front-end envelope detector.

  9. Real-time control systems: feedback, scheduling and robustness

    NASA Astrophysics Data System (ADS)

    Simon, Daniel; Seuret, Alexandre; Sename, Olivier

    2017-08-01

    The efficient control of real-time distributed systems, where continuous components are governed through digital devices and communication networks, requires a careful, co-design examination of the constraints arising from the different domains involved. Thanks to the robustness of feedback control, both new control methodologies and slackened real-time scheduling schemes are proposed that cross the frontiers between these traditionally separated fields. A methodology to design robust aperiodic controllers is provided, where the sampling interval is considered as a control variable of the system. Promising experimental results are provided to show the feasibility and robustness of the approach.

  10. Spatial Variation in Soil Properties among North American Ecosystems and Guidelines for Sampling Designs

    PubMed Central

    Loescher, Henry; Ayres, Edward; Duffy, Paul; Luo, Hongyan; Brunke, Max

    2014-01-01

    Soils are highly variable at many spatial scales, which makes designing studies to accurately estimate the mean value of soil properties across space challenging. The spatial correlation structure is critical to develop robust sampling strategies (e.g., sample size and sample spacing). Current guidelines for designing studies recommend conducting preliminary investigation(s) to characterize this structure, but are rarely followed and sampling designs are often defined by logistics rather than quantitative considerations. The spatial variability of soils was assessed across ∼1 ha at 60 sites. Sites were chosen to represent key US ecosystems as part of a scaling strategy deployed by the National Ecological Observatory Network. We measured soil temperature (Ts) and water content (SWC) because these properties mediate biological/biogeochemical processes below- and above-ground, and quantified spatial variability using semivariograms to estimate spatial correlation. We developed quantitative guidelines to inform sample size and sample spacing for future soil studies, e.g., 20 samples were sufficient to measure Ts to within 10% of the mean with 90% confidence at every temperate and sub-tropical site during the growing season, whereas an order of magnitude more samples were needed to meet this accuracy at some high-latitude sites. SWC was significantly more variable than Ts at most sites, resulting in at least 10× more SWC samples needed to meet the same accuracy requirement. Previous studies investigated the relationship between the mean and variability (i.e., sill) of SWC across space at individual sites across time and have often (but not always) observed the variance or standard deviation peaking at intermediate values of SWC and decreasing at low and high SWC. Finally, we quantified how far apart samples must be spaced to be statistically independent. Semivariance structures from 10 of the 12-dominant soil orders across the US were estimated, advancing our continental-scale understanding of soil behavior. PMID:24465377
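
    A minimal sketch of two calculations described above: an empirical semivariogram from irregularly spaced points and a normal-approximation estimate of the sample size needed to hit a 10% relative error at 90% confidence. The coordinates, soil water content values, lag bins, and the simple n = (z·CV/e)² formula are illustrative assumptions, not the study's geostatistical workflow.

        # Sketch: empirical semivariogram and sample-size estimate for soil water content.
        import numpy as np

        rng = np.random.default_rng(0)
        coords = rng.uniform(0, 100, size=(80, 2))      # sample locations within a ~1 ha plot (m)
        swc = 0.25 + 0.05 * np.sin(coords[:, 0] / 15) + rng.normal(scale=0.02, size=80)

        # Semivariance gamma(h): mean of 0.5*(z_i - z_j)^2 over point pairs in each lag bin
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        sq = 0.5 * (swc[:, None] - swc[None, :]) ** 2
        upper = np.triu(np.ones_like(d, dtype=bool), k=1)        # count each pair once
        for lo, hi in zip(range(0, 50, 10), range(10, 60, 10)):
            mask = upper & (d > lo) & (d <= hi)
            print(f"lag {lo:2d}-{hi:2d} m: semivariance = {sq[mask].mean():.5f} ({mask.sum()} pairs)")

        # Samples needed for the mean to be within 10% with 90% confidence (normal approximation)
        cv = swc.std(ddof=1) / swc.mean()
        print("approx. samples needed:", int(np.ceil((1.645 * cv / 0.10) ** 2)))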

  11. Multidisciplinary Design Optimization for Aeropropulsion Engines and Solid Modeling/Animation via the Integrated Forced Methods

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The grant closure report is organized in the following chapters: the first chapter describes the two research areas, design optimization and solid mechanics; ten journal publications are listed in the second chapter; and five highlights are the subject matter of chapter three. CHAPTER 1. The Design Optimization Test Bed CometBoards. CHAPTER 2. Solid Mechanics: Integrated Force Method of Analysis. CHAPTER 3. Five Highlights: Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft. Neural Network and Regression Soft Model Extended for the PX-300 Aircraft Engine. Engine with Regression and Neural Network Approximators Designed. Cascade Optimization Strategy with Neural Network and Regression Approximations Demonstrated on a Preliminary Aircraft Engine Design. Neural Network and Regression Approximations Used in Aircraft Design.

  12. Nationwide SIP Telephony Network Design to Prevent Congestion Caused by Disaster

    NASA Astrophysics Data System (ADS)

    Satoh, Daisuke; Ashitagawa, Kyoko

    We present a session initiation protocol (SIP) network design for a voice-over-IP network to prevent congestion caused by people calling friends and family after a disaster. The design increases the capacity of SIP servers in a network by using all of the SIP servers equally. It takes advantage of the fact that equipment for voice data packets is different from equipment for signaling packets in SIP networks. Furthermore, the design achieves simple routing on the basis of telephone numbers. We evaluated the performance of our design in preventing congestion through simulation. For a disaster striking Niigata Prefecture, like the Chuetsu earthquake in 2004, the proposed design showed roughly 20 times the capacity of the conventional design, corresponding to 57 times the normal load.

  13. Social network and individual correlates of sexual risk behavior among homeless young men who have sex with men.

    PubMed

    Tucker, Joan S; Hu, Jianhui; Golinelli, Daniela; Kennedy, David P; Green, Harold D; Wenzel, Suzanne L

    2012-10-01

    There is growing interest in network-based interventions to reduce HIV sexual risk behavior among both homeless youth and men who have sex with men. The goal of this study was to better understand the social network and individual correlates of sexual risk behavior among homeless young men who have sex with men (YMSM) to inform these HIV prevention efforts. A multistage sampling design was used to recruit a probability sample of 121 homeless YMSM (ages: 16-24 years) from shelters, drop-in centers, and street venues in Los Angeles County. Face-to-face interviews were conducted. Because of the different distributions of the three outcome variables, three distinct regression models were needed: ordinal logistic regression for unprotected sex, zero-truncated Poisson regression for number of sex partners, and logistic regression for any sex trade. Homeless YMSM were less likely to engage in unprotected sex and had fewer sex partners if their networks included platonic ties to peers who regularly attended school, and had fewer sex partners if most of their network members were not heavy drinkers. Most other aspects of network composition were unrelated to sexual risk behavior. Individual predictors of sexual risk behavior included older age, Hispanic ethnicity, lower education, depressive symptoms, less positive condom attitudes, and sleeping outdoors because of nowhere else to stay. HIV prevention programs for homeless YMSM may warrant a multipronged approach that helps these youth strengthen their ties to prosocial peers, develop more positive condom attitudes, and access needed mental health and housing services. Copyright © 2012 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  14. Role of Social Media in Diabetes Management in the Middle East Region: Systematic Review.

    PubMed

    Alanzi, Turki

    2018-02-13

    Diabetes is a major health care burden in the Middle East region. Social networking tools, which are popular in the region, can contribute to the management of diabetes through improved educational and care outcomes. The objective of this review was to evaluate the impact of social networking interventions on the improvement of diabetes management and health outcomes in patients with diabetes in the Middle East. Peer-reviewed articles from PubMed (1990-2017) and Google Scholar (1990-2017) were identified using various combinations of predefined terms and search criteria. The main inclusion criterion consisted of the use of social networking apps on mobile phones as the primary intervention. Outcomes were grouped according to study design, type of diabetes, category of technological intervention, location, and sample size. This review included 5 articles evaluating the use of social media tools in the management of diabetes in the Middle East. In most studies, the acceptance rate for the use of social networking to optimize the management of diabetes was relatively high. Diabetes-specific management tools such as the Saudi Arabia Networking for Aiding Diabetes system and the Diabetes Intelligent Management System for Iraq helped collect patient information and lower hemoglobin A1c (HbA1c) levels, respectively. The reviewed studies demonstrated the potential of social networking tools being adopted in regions in the Middle East to improve the management of diabetes. Future studies consisting of larger sample sizes spanning multiple regions would provide further insight into the use of social media for improving patient outcomes. ©Turki Alanzi. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 13.02.2018.

  15. Designing Secure Library Networks.

    ERIC Educational Resources Information Center

    Breeding, Michael

    1997-01-01

    Focuses on designing a library network to maximize security. Discusses UNIX and file servers; connectivity to campus, corporate networks and the Internet; separation of staff from public servers; controlling traffic; the threat of network sniffers; hubs that eliminate eavesdropping; dividing the network into subnets; Switched Ethernet;…

  16. An efficient genetic algorithm for maximum coverage deployment in wireless sensor networks.

    PubMed

    Yoon, Yourim; Kim, Yong-Hyuk

    2013-10-01

    Sensor networks have many applications, such as battlefield surveillance, environmental monitoring, and industrial diagnostics. Coverage is one of the most important performance metrics for sensor networks since it reflects how well a sensor field is monitored. In this paper, we introduce the maximum coverage deployment problem in wireless sensor networks and analyze the properties of the problem and its solution space. Random deployment is the simplest way to deploy sensor nodes but may cause unbalanced deployment; therefore, we need a more intelligent way to deploy sensors. We found that the phenotype space of the problem is a quotient space of the genotype space in a mathematical view. Based on this property, we propose an efficient genetic algorithm using a novel normalization method. A Monte Carlo method is adopted to design an efficient evaluation function, and its computation time is decreased without loss of solution quality using a method that starts from a small number of random samples and gradually increases the number for subsequent generations. The proposed genetic algorithm could be further improved by combining it with a well-designed local search. The performance of the proposed genetic algorithm is shown by a comparative experimental study. When compared with random deployment and existing methods, our genetic algorithm was not only about twice as fast, but also showed significant improvement in solution quality.
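
    The Monte Carlo evaluation function described above lends itself to a compact illustration: coverage of a candidate deployment is estimated from random test points, and the number of points grows with the generation index so that early generations are evaluated cheaply. The sketch below is a minimal illustration under an assumed field size and sensing radius, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of the Monte Carlo coverage evaluation:
# coverage of a candidate deployment is estimated from random sample points, and the
# sample count grows with the generation index so early generations are cheap.
import random
import math

FIELD = (100.0, 100.0)   # assumed field size
RADIUS = 10.0            # assumed sensing radius

def covered(point, deployment):
    px, py = point
    return any(math.hypot(px - sx, py - sy) <= RADIUS for sx, sy in deployment)

def coverage(deployment, n_samples):
    """Monte Carlo estimate of the fraction of the field within sensing range."""
    hits = 0
    for _ in range(n_samples):
        p = (random.uniform(0, FIELD[0]), random.uniform(0, FIELD[1]))
        hits += covered(p, deployment)
    return hits / n_samples

def evaluate(population, generation, base=200, step=100):
    """Fitness per individual; the sample count increases for later generations."""
    n_samples = base + step * generation
    return [coverage(ind, n_samples) for ind in population]

if __name__ == "__main__":
    pop = [[(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]
           for _ in range(5)]
    print(evaluate(pop, generation=0))
```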

  17. Methodology for designing and implementing a class for service for the transmission of medical images over a common network

    NASA Astrophysics Data System (ADS)

    Dimond, David A.; Burgess, Robert; Barrios, Nolan; Johnson, Neil D.

    2000-05-01

    Traditionally, to guarantee the network performance of medical image data transmission, imaging traffic was isolated on a separate network. Organizations are depending on a new generation of multi-purpose networks to transport both normal information and image traffic as they expand access to images throughout the enterprise. These organizations want to leverage their existing infrastructure for imaging traffic, but are not willing to accept degradations in overall network performance. To guarantee 'on demand' network performance for image transmissions anywhere at any time, networks need to be designed with the ability to 'carve out' bandwidth for specific applications and to minimize the chances of network failures. This paper presents the methodology Cincinnati Children's Hospital Medical Center (CHMC) used to enhance the physical and logical network design of the existing hospital network to guarantee a class of service for imaging traffic. PACS network designs should utilize the existing enterprise local area network (LAN) infrastructure where appropriate. Logical separation or segmentation provides the application independence from other clinical and administrative applications as required, ensuring bandwidth and service availability.

  18. Next-Generation WDM Network Design and Routing

    NASA Astrophysics Data System (ADS)

    Tsang, Danny H. K.; Bensaou, Brahim

    2003-08-01

    Call for Papers The Editors of JON are soliciting papers on WDM Network Design and Routing. The aim in this focus issue is to publish original research on topics including - but not limited to - the following: - WDM network architectures and protocols - GMPLS network architectures - Wavelength converter placement in WDM networks - Routing and wavelength assignment (RWA) in WDM networks - Protection and restoration strategies and algorithms in WDM networks - Traffic grooming in WDM networks - Dynamic routing strategies and algorithms - Optical Burst Switching - Support of Multicast - Protection and restoration in WDM networks - Performance analysis and optimization in WDM networks Manuscript Submission To submit to this special issue, follow the normal procedure for submission to JON, indicating "WDM Network Design" in the "Comments" field of the online submission form. For all other questions relating to this focus issue, please send an e-mail to jon@osa.org, subject line "WDM Network Design." Additional information can be found on the JON website: http://www.osa-jon.org/submission/. Schedule Paper Submission Deadline: November 1, 2003 Notification to Authors: January 15, 2004 Final Manuscripts to Publisher: February 15, 2004 Publication of Focus Issue: February/March 2004

  19. Next-Generation WDM Network Design and Routing

    NASA Astrophysics Data System (ADS)

    Tsang, Danny H. K.; Bensaou, Brahim

    2003-10-01

    Call for Papers The Editors of JON are soliciting papers on WDM Network Design and Routing. The aim in this focus issue is to publish original research on topics including - but not limited to - the following: - WDM network architectures and protocols - GMPLS network architectures - Wavelength converter placement in WDM networks - Routing and wavelength assignment (RWA) in WDM networks - Protection and restoration strategies and algorithms in WDM networks - Traffic grooming in WDM networks - Dynamic routing strategies and algorithms - Optical burst switching - Support of multicast - Protection and restoration in WDM networks - Performance analysis and optimization in WDM networks Manuscript Submission To submit to this special issue, follow the normal procedure for submission to JON, indicating "WDM Network Design" in the "Comments" field of the online submission form. For all other questions relating to this focus issue, please send an e-mail to jon@osa.org, subject line "WDM Network Design." Additional information can be found on the JON website: http://www.osa-jon.org/submission/. Schedule - Paper Submission Deadline: November 1, 2003 - Notification to Authors: January 15, 2004 - Final Manuscripts to Publisher: February 15, 2004 - Publication of Focus Issue: February/March 2004

  20. Next-Generation WDM Network Design and Routing

    NASA Astrophysics Data System (ADS)

    Tsang, Danny H. K.; Bensaou, Brahim

    2003-09-01

    Call for Papers The Editors of JON are soliciting papers on WDM Network Design and Routing. The aim in this focus issue is to publish original research on topics including - but not limited to - the following: - WDM network architectures and protocols - GMPLS network architectures - Wavelength converter placement in WDM networks - Routing and wavelength assignment (RWA) in WDM networks - Protection and restoration strategies and algorithms in WDM networks - Traffic grooming in WDM networks - Dynamic routing strategies and algorithms - Optical burst switching - Support of multicast - Protection and restoration in WDM networks - Performance analysis and optimization in WDM networks Manuscript Submission To submit to this special issue, follow the normal procedure for submission to JON, indicating "WDM Network Design" in the "Comments" field of the online submission form. For all other questions relating to this focus issue, please send an e-mail to jon@osa.org, subject line "WDM Network Design." Additional information can be found on the JON website: http://www.osa-jon.org/submission/. Schedule - Paper Submission Deadline: November 1, 2003 - Notification to Authors: January 15, 2004 - Final Manuscripts to Publisher: February 15, 2004 - Publication of Focus Issue: February/March 2004

  1. Launch Control Network Engineer

    NASA Technical Reports Server (NTRS)

    Medeiros, Samantha

    2017-01-01

    The Spaceport Command and Control System (SCCS) is being built at the Kennedy Space Center in order to successfully launch NASA’s revolutionary vehicle that allows humans to explore further into space than ever before. During my internship, I worked with the Network, Firewall, and Hardware teams that are all contributing to the huge SCCS network project effort. I learned the SCCS network design and the several concepts that are running in the background. I also updated and designed documentation for physical networks that are part of SCCS. This includes being able to assist and build physical installations as well as configurations. I worked with the network design for vehicle telemetry interfaces to the Launch Control System (LCS); this allows the interface to interact with other systems at other NASA locations. This network design includes the Space Launch System (SLS), Interim Cryogenic Propulsion Stage (ICPS), and the Orion Multipurpose Crew Vehicle (MPCV). I worked on the network design and implementation in the Customer Avionics Interface Development and Analysis (CAIDA) lab.

  2. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    PubMed

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  3. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    PubMed Central

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  4. Convergent evolution and adaptation of Pseudomonas aeruginosa within patients with cystic fibrosis.

    PubMed

    Marvig, Rasmus Lykke; Sommer, Lea Mette; Molin, Søren; Johansen, Helle Krogh

    2015-01-01

    Little is known about how within-host evolution compares between genotypically different strains of the same pathogenic species. We sequenced the whole genomes of 474 longitudinally collected clinical isolates of Pseudomonas aeruginosa sampled from 34 children and young individuals with cystic fibrosis. Our analysis of 36 P. aeruginosa lineages identified convergent molecular evolution in 52 genes. This list of genes suggests a role in host adaptation for remodeling of regulatory networks and central metabolism, acquisition of antibiotic resistance and loss of extracellular virulence factors. Furthermore, we find an ordered succession of mutations in key regulatory networks. Accordingly, mutations in downstream transcriptional regulators were contingent upon mutations in upstream regulators, suggesting that remodeling of regulatory networks might be important in adaptation. The characterization of genes involved in host adaptation may help in predicting bacterial evolution in patients with cystic fibrosis and in the design of future intervention strategies.

  5. Highly designable phenotypes and mutational buffers emerge from a systematic mapping between network topology and dynamic output.

    PubMed

    Nochomovitz, Yigal D; Li, Hao

    2006-03-14

    Deciphering the design principles for regulatory networks is fundamental to an understanding of biological systems. We have explored the mapping from the space of network topologies to the space of dynamical phenotypes for small networks. Using exhaustive enumeration of a simple model of three- and four-node networks, we demonstrate that certain dynamical phenotypes can be generated by an atypically broad spectrum of network topologies. Such dynamical outputs are highly designable, much like certain protein structures can be designed by an unusually broad spectrum of sequences. The network topologies that encode a highly designable dynamical phenotype possess two classes of connections: a fully conserved core of dedicated connections that encodes the stable dynamical phenotype and a partially conserved set of variable connections that controls the transient dynamical flow. By comparing the topologies and dynamics of the three- and four-node network ensembles, we observe a large number of instances of the phenomenon of "mutational buffering," whereby addition of a fourth node suppresses phenotypic variation amongst a set of three-node networks.

  6. The biobanking research infrastructure BBMRI_CZ: a critical tool to enhance translational cancer research.

    PubMed

    Holub, P; Greplova, K; Knoflickova, D; Nenutil, R; Valik, D

    2012-01-01

    We introduce the national research biobanking infrastructure, BBMRI_CZ. The infrastructure was founded by the Ministry of Education and became a partner of the European biobanking infrastructure BBMRI.eu. It is designed as a network of individual biobanks where each biobank stores samples obtained from associated healthcare providers. The biobanks comprise long-term storage (various types of tissues classified by diagnosis, serum at surgery, genomic DNA and RNA) and short-term storage (longitudinally sampled patient sera). We discuss the operational workflow of the infrastructure, which needs to be a distributed system: transfer of samples to a biobank must be accompanied by extraction of data from the hospital information systems, and these data must be stored in a central index serving mainly for sample lookup. Since BBMRI_CZ is designed solely for research purposes, the data are anonymised prior to their integration into the central BBMRI_CZ index. The index is then available for registered researchers to search for samples of interest and to request the samples from biobank managers. The paper provides an overview of the structure of data stored in the index. We also discuss the monitoring system for the biobanks, incorporated to ensure the quality of the stored samples.

  7. Monitoring-well network and sampling design for ground-water quality, Wind River Indian Reservation, Wyoming

    USGS Publications Warehouse

    Mason, Jon P.; Sebree, Sonja K.; Quinn, Thomas L.

    2005-01-01

    The Wind River Indian Reservation, located in parts of Fremont and Hot Springs Counties, Wyoming, has a total land area of more than 3,500 square miles. Ground water on the Wind River Indian Reservation is a valuable resource for Shoshone and Northern Arapahoe tribal members and others who live on the Reservation. There are many types of land uses on the Reservation that have the potential to affect the quality of ground-water resources. Urban areas, rural housing developments, agricultural lands, landfills, oil and natural gas fields, mining, and pipeline utility corridors all have the potential to affect ground-water quality. A cooperative study was developed between the U.S. Geological Survey and the Wind River Environmental Quality Commission to identify areas of the Reservation that have the highest potential for ground-water contamination and develop a comprehensive plan to monitor these areas. An arithmetic overlay model for the Wind River Indian Reservation was created using seven geographic information system data layers representing factors with varying potential to affect ground-water quality. The data layers used were: the National Land Cover Dataset, water well density, aquifer sensitivity, oil and natural gas fields and petroleum pipelines, sites with potential contaminant sources, sites that are known to have ground-water contamination, and National Pollutant Discharge Elimination System sites. A prioritization map for monitoring ground-water quality on the Reservation was created using the model. The prioritization map ranks the priority for monitoring ground-water quality in different areas of the Reservation as low, medium, or high. To help minimize bias in selecting sites for a monitoring well network, an automated stratified random site-selection approach was used to select 30 sites for ground-water quality monitoring within the high priority areas. In addition, the study also provided a sampling design for constituents to be monitored, sampling frequency, and a simple water-table level observation well network.
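
    The arithmetic overlay model described above amounts to summing rasterized factor layers on a common grid and binning the result into priority classes, after which monitoring sites are drawn at random from the high-priority cells. The sketch below illustrates that workflow with synthetic layers; the layer names, equal weights, and tercile binning are assumptions, not the USGS model parameters.

```python
# A minimal sketch (assumed layer names and weights, not the USGS model) of an
# arithmetic overlay: each GIS layer is a raster of scores on a common grid, the
# prioritization surface is their weighted sum, and high-priority cells are then
# sampled with a simple random selection of candidate monitoring sites.
import numpy as np

rng = np.random.default_rng(0)
shape = (50, 50)  # grid cells over the study area

# Stand-ins for the seven rasterized factor layers (0 = low potential, 1 = high).
layers = {name: rng.random(shape) for name in [
    "land_cover", "well_density", "aquifer_sensitivity", "oil_gas_pipelines",
    "potential_sources", "known_contamination", "npdes_sites"]}
weights = {name: 1.0 for name in layers}  # equal weights assumed

overlay = sum(weights[n] * layers[n] for n in layers)

# Bin into three priority classes by terciles (binning rule assumed).
low, high = np.quantile(overlay, [1 / 3, 2 / 3])
priority = np.digitize(overlay, [low, high])  # 0 = low, 1 = medium, 2 = high

# Random selection of 30 candidate monitoring sites within high-priority cells.
hi_cells = np.argwhere(priority == 2)
sites = hi_cells[rng.choice(len(hi_cells), size=30, replace=False)]
print(sites[:5])
```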

  8. Adulteration of diesel/biodiesel blends by vegetable oil as determined by Fourier transform (FT) near infrared spectrometry and FT-Raman spectroscopy.

    PubMed

    Oliveira, Flavia C C; Brandão, Christian R R; Ramalho, Hugo F; da Costa, Leonardo A F; Suarez, Paulo A Z; Rubim, Joel C

    2007-03-28

    In this work it is shown that the routine ASTM methods (ASTM 4052, ASTM D 445, ASTM D 4737, ASTM D 93, and ASTM D 86) recommended by the ANP (the Brazilian National Agency for Petroleum, Natural Gas and Biofuels) to determine the quality of diesel/biodiesel blends are not suitable to prevent the adulteration of B2 or B5 blends with vegetable oils. Considering the previous and current problems with fuel adulteration in Brazil, we have investigated the application of vibrational spectroscopy (Fourier transform (FT) near infrared spectrometry and FT-Raman) to identify adulterations of B2 and B5 blends with vegetable oils. Partial least squares regression (PLS), principal component regression (PCR), and artificial neural network (ANN) calibration models were designed and their relative performances were evaluated by external validation using the F-test. The PCR, PLS, and ANN calibration models based on FT near infrared spectrometry and FT-Raman spectroscopy were designed using 120 samples. Another 62 samples were used for validation and external validation, for a total of 182 samples. The results show that, among the designed calibration models, the ANN/FT-Raman model presented the best accuracy (0.028%, w/w) for the samples used in the external validation.
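
    As a rough illustration of the calibration-model workflow (120 calibration samples, 62 validation samples, external validation of prediction error), the sketch below fits a PLS model with scikit-learn on synthetic spectra; the spectral dimensions, noise level, and number of latent variables are assumptions, not the paper's settings.

```python
# A minimal sketch with synthetic spectra (not the paper's data): a PLS calibration
# model predicting vegetable-oil content in a diesel/biodiesel blend from spectra,
# with an external validation split mirroring the 120 / 62 sample counts above.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n_samples, n_wavenumbers = 182, 600              # 182 samples from the abstract; 600 assumed
oil_content = rng.uniform(0, 10, n_samples)      # % (w/w), synthetic
spectra = (np.outer(oil_content, rng.random(n_wavenumbers))
           + 0.05 * rng.standard_normal((n_samples, n_wavenumbers)))

X_cal, X_val, y_cal, y_val = train_test_split(
    spectra, oil_content, train_size=120, random_state=0)

pls = PLSRegression(n_components=5).fit(X_cal, y_cal)   # latent-variable count assumed
rmse = np.sqrt(mean_squared_error(y_val, pls.predict(X_val).ravel()))
print(f"external-validation RMSE: {rmse:.3f} % (w/w)")
```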

  9. [The Identification of the Origin of Chinese Wolfberry Based on Infrared Spectral Technology and the Artificial Neural Network].

    PubMed

    Li, Zhong; Liu, Ming-de; Ji, Shou-xiang

    2016-03-01

    Fourier transform infrared spectroscopy (FTIR) is established as a means to identify the geographic origins of Chinese wolfberry quickly. In this paper, 45 samples of Chinese wolfberry from different places in Qinghai Province were surveyed by FTIR. The original FTIR data matrix is pretreated with common preprocessing and the wavelet transform. Compared with common window-shifting smoothing preprocessing, standard normal variate correction and multiplicative scatter correction, the wavelet transform is an effective spectrum data preprocessing method. Before establishing a model with artificial neural networks, the spectral variables are compressed by means of the wavelet transform so as to increase the training speed of the artificial neural networks, and at the same time the related parameters of the artificial neural network model are discussed in detail. The survey shows that even if the infrared spectroscopy data are compressed to 1/8 of the original size, the spectral information and analytical accuracy do not deteriorate. The compressed spectral variables are used as the input parameters of the back-propagation artificial neural network (BP-ANN) model, and the geographic origins of Chinese wolfberry are used as the output parameters. A three-layer neural network model is built with the MATLAB Neural Network Toolbox, using an error back-propagation network, to predict the 10 unknown samples. The number of hidden layer neurons is 5, and the number of output layer neurons is 1. The transfer function of the hidden layer is tansig, while the transfer function of the output layer is purelin. The network training function is trainl and the learning function of weights and thresholds is learngdm, with net.trainParam.epochs = 1000 and net.trainParam.goal = 0.001. A recognition rate of 100% is achieved. It can be concluded that the method is quite suitable for the quick discrimination of the producing areas of Chinese wolfberry. Infrared spectral analysis combined with artificial neural networks is proved to be a reliable and new method for identifying the place of origin of Traditional Chinese Medicine.
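
    A rough Python analogue of the MATLAB workflow described above, compressing each spectrum with a discrete wavelet transform and classifying origin with a small back-propagation network, is sketched below using PyWavelets and scikit-learn; the wavelet, decomposition level, and synthetic data are assumptions, not the authors' settings.

```python
# A rough Python analogue (not the authors' MATLAB code): compress each FTIR spectrum
# with a discrete wavelet transform, keeping the level-3 approximation (~1/8 of the
# points), then classify geographic origin with a small back-propagation network.
# Spectra here are synthetic; wavelet choice and labels are assumptions.
import numpy as np
import pywt                                   # PyWavelets
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
n_samples, n_points = 45, 1024                # 45 samples from the abstract; 1024 assumed
spectra = rng.random((n_samples, n_points))
origin = rng.integers(0, 3, n_samples)        # stand-in labels for producing areas

def compress(spectrum, wavelet="db4", level=3):
    """Keep only the coarse approximation coefficients of the wavelet decomposition."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    return coeffs[0]

X = np.array([compress(s) for s in spectra])
clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=1000,   # 5 hidden neurons, as above
                    random_state=0).fit(X, origin)
print("training accuracy:", clf.score(X, origin))
```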

  10. Evaluation of the streamflow-gaging network of Alaska in providing regional streamflow information

    USGS Publications Warehouse

    Brabets, Timothy P.

    1996-01-01

    In 1906, the U.S. Geological Survey (USGS) began operating a network of streamflow-gaging stations in Alaska. The primary purpose of the streamflow-gaging network has been to provide peak flow, average flow, and low-flow characteristics to a variety of users. In 1993, the USGS began a study to evaluate the current network of 78 stations. The objectives of this study were to determine the adequacy of the existing network in predicting selected regional flow characteristics and to determine if providing additional streamflow-gaging stations could improve the network's ability to predict these characteristics. Alaska was divided into six distinct hydrologic regions: Arctic, Northwest, Southcentral, Southeast, Southwest, and Yukon. For each region, historical and current streamflow data were compiled. In Arctic, Northwest, and Southwest Alaska, insufficient data were available to develop regional regression equations. In these areas, proposed locations of streamflow-gaging stations were selected by using clustering techniques to define similar areas within a region and by spatial visual analysis using the precipitation, physiographic, and hydrologic unit maps of Alaska. Sufficient data existed in Southcentral and Southeast Alaska to use generalized least squares (GLS) procedures to develop regional regression equations to estimate the 50-year peak flow, annual average flow, and a low-flow statistic. GLS procedures were also used for Yukon Alaska, but the results should be used with caution because the data do not have an adequate spatial distribution. Network analysis procedures were used for the Southcentral, Southeast, and Yukon regions. Network analysis indicates the reduction in the sampling error of the regional regression equation that can be obtained under different scenarios. For Alaska, a 10-year planning period was used. One scenario showed the results of continuing the current network with no additional gaging stations, and another showed the results of adding gaging stations to the network. With the exception of the annual average discharge equation for Southeast Alaska, adding gaging stations in all three regions reduced the sampling error more than not adding gaging stations did. The proposed streamflow-gaging network for Alaska consists of 308 gaging stations, of which 32 are designated as index stations. If the proposed network cannot be implemented in its entirety, a lower-cost alternative would be to establish the index stations and to implement the network for a particular region.

  11. Optimization of Turbine Blade Design for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Shyy, Wei

    1998-01-01

    To facilitate design optimization of turbine blade shape for reusable launch vehicles, appropriate techniques need to be developed to process and estimate the characteristics of the design variables and the response of the output with respect to variations of the design variables. The purpose of this report is to offer insight into developing appropriate techniques for supporting such design and optimization needs. Neural network and polynomial-based techniques are applied to process aerodynamic data obtained from computational simulations for flows around a two-dimensional airfoil and a generic three-dimensional wing/blade. For the two-dimensional airfoil, a two-layered radial-basis network is designed and trained. The performances of two different design functions for radial-basis networks are compared: one based on an accuracy requirement and the other on a limit on the network size. While the number of neurons needed to satisfactorily reproduce the information depends on the size of the data, the neural network technique is shown to be more accurate for large data sets (up to 765 simulations have been used) than the polynomial-based response surface method. For the three-dimensional wing/blade case, smaller aerodynamic data sets (between 9 and 25 simulations) are considered, and both the neural network and the polynomial-based response surface techniques improve their performance as the data size increases. It is found that, while the relative performance of the two network types, a radial-basis network and a back-propagation network, depends on the amount of input data, the number of iterations required for the radial-basis network is less than that for the back-propagation network.
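
    The two surrogate types compared in the report, a radial-basis network and a polynomial response surface, can be contrasted on synthetic data as sketched below; the test function, kernel, and polynomial degree are our assumptions, not the report's configurations.

```python
# A minimal sketch on synthetic data (not the report's CFD results): fit both a
# radial-basis-function surrogate and a quadratic polynomial response surface to
# samples of a response over two design variables, then compare their predictions.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, (40, 2))                     # two design variables
y = np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])       # stand-in aerodynamic response

rbf = RBFInterpolator(x, y, kernel="gaussian", epsilon=1.0)

def quad_features(pts):
    """Monomial basis for a quadratic response surface in two variables."""
    return np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                            pts[:, 0] ** 2, pts[:, 0] * pts[:, 1], pts[:, 1] ** 2])

coef, *_ = np.linalg.lstsq(quad_features(x), y, rcond=None)

x_new = np.array([[0.2, -0.4]])
print("RBF prediction:       ", rbf(x_new))
print("polynomial prediction:", quad_features(x_new) @ coef)
```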

  12. ANI-1: an extensible neural network potential with DFT accuracy at force field computational cost† †Electronic supplementary information (ESI) available. See DOI: 10.1039/c6sc05720a Click here for additional data file.

    PubMed Central

    Smith, J. S.

    2017-01-01

    Deep learning is revolutionizing many areas of science and technology, especially image, text, and speech recognition. In this paper, we demonstrate how a deep neural network (NN) trained on quantum mechanical (QM) DFT calculations can learn an accurate and transferable potential for organic molecules. We introduce ANAKIN-ME (Accurate NeurAl networK engINe for Molecular Energies) or ANI for short. ANI is a new method designed with the intent of developing transferable neural network potentials that utilize a highly-modified version of the Behler and Parrinello symmetry functions to build single-atom atomic environment vectors (AEV) as a molecular representation. AEVs provide the ability to train neural networks to data that spans both configurational and conformational space, a feat not previously accomplished on this scale. We utilized ANI to build a potential called ANI-1, which was trained on a subset of the GDB databases with up to 8 heavy atoms in order to predict total energies for organic molecules containing four atom types: H, C, N, and O. To obtain an accelerated but physically relevant sampling of molecular potential surfaces, we also proposed a Normal Mode Sampling (NMS) method for generating molecular conformations. Through a series of case studies, we show that ANI-1 is chemically accurate compared to reference DFT calculations on much larger molecular systems (up to 54 atoms) than those included in the training data set. PMID:28507695

  13. Medusa-Isosampler: A modular, network-based observatory system for combined physical, chemical and microbiological monitoring, sampling and incubation of hydrothermal and cold seep fluids

    NASA Astrophysics Data System (ADS)

    Schultz, A.; Flynn, M.; Taylor, P.

    2004-12-01

    The study of life in extreme environments provides an important context from which we can undertake the search for extraterrestrial life, and through which we can better understand biogeochemical feedback in terrestrial hydrothermal and cold seep systems. The Medusa-Isosampler project is aimed at fundamental research into understanding the potential for, and limits to, chemolithoautotrophic life, i.e. primary production without photosynthesis. One environment that might foster such life is associated with the high thermal and chemical gradient environment of hydrothermal vent structures. Another is associated with the lower thermal and chemical gradient environment of continental margin cold seeps. Under NERC, NASA and industrial support, we have designed a flexible instrumentation system, operating as networked, autonomous modules on a local area network, that will make possible simultaneous physical and chemical sampling and monitoring of hydrothermal and cold seep fluids, and the in situ and laboratory incubation of chemosynthetic microbes under high pressure, isobaric conditions. The system has been designed with long-term observatory operations in mind, and may be reconfigured dynamically as the requirements of the observatory installation change. The modular design will also accommodate new in situ chemical and biosensor technologies, provided by third parties. The system may be configured for seafloor use, and can be adapted to use in IODP boreholes. Our overall project goals are to provide an instrumentation system capable of probing both high- and low-gradient water-rock systems for chemolithoautotrophic biospheres, to identify the physical and chemical conditions that define these microhabitats and explore the details of the biogeochemical feedback loops that mediate them, and to attempt to culture and identify chemolithoautotrophic microbial communities that might exist there. The Medusa-Isosampler system has been produced and is now undergoing initial deployments at sea.

  14. Development of a unique multi-contaminant air sampling device for a childhood asthma cohort in an agricultural environment.

    PubMed

    Armstrong, Jenna L; Fitzpatrick, Cole F; Loftus, Christine T; Yost, Michael G; Tchong-French, Maria; Karr, Catherine J

    2013-09-01

    This research describes the design, deployment, performance, and acceptability of a novel outdoor active air sampler providing simultaneous measurements of multiple contaminants at timed intervals for the Aggravating Factors of Asthma in Rural Environment (AFARE) study, a longitudinal cohort of 50 children in Yakima Valley, Washington. The sampler was constructed of multiple sampling media connected to individual critical orifices and a rotary vane vacuum pump. It was connected to a timed control valve system to collect 24-hour samples every six days over 18 months. We describe a spatially representative approach with both quantitative and qualitative location criteria used to deploy a network of 14 devices at participant residences in a rural region (20 × 60 km). Overall the sampler performed well, as the concurrent mean sample flow rates were within or above the ranges of recommended sampling rates for each exposure metric of interest. Acceptability was high among the study population of Hispanic farmworker participant households. The sampler design may prove useful for future urban and rural community-based studies aiming to collect multiple-contaminant data during specific time periods.

  15. Sample size and power considerations in network meta-analysis

    PubMed Central

    2012-01-01

    Background Network meta-analysis is becoming increasingly popular for establishing comparative effectiveness among multiple interventions for the same disease. Network meta-analysis inherits all methodological challenges of standard pairwise meta-analysis, but with increased complexity due to the multitude of intervention comparisons. One issue that is now widely recognized in pairwise meta-analysis is the issue of sample size and statistical power. This issue, however, has so far received little attention in network meta-analysis. To date, no approaches have been proposed for evaluating the adequacy of the sample size, and thus power, in a treatment network. Findings In this article, we develop easy-to-use flexible methods for estimating the ‘effective sample size’ in indirect comparison meta-analysis and network meta-analysis. The effective sample size for a particular treatment comparison can be interpreted as the number of patients in a pairwise meta-analysis that would provide the same degree and strength of evidence as that which is provided in the indirect comparison or network meta-analysis. We further develop methods for retrospectively estimating the statistical power for each comparison in a network meta-analysis. We illustrate the performance of the proposed methods for estimating effective sample size and statistical power using data from a network meta-analysis on interventions for smoking cessation including over 100 trials. Conclusion The proposed methods are easy to use and will be of high value to regulatory agencies and decision makers who must assess the strength of the evidence supporting comparative effectiveness estimates. PMID:22992327
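
    For a single indirect comparison of A versus B informed by A-versus-C and B-versus-C trials, the effective sample size can be illustrated with a harmonic-style combination of the two direct sample sizes, as sketched below. This is a common heuristic reading of the approach; consult the article for the exact formulas and for the extension to full treatment networks and power calculations.

```python
# A minimal sketch of the effective-sample-size idea for one indirect comparison
# A vs B informed by A-vs-C and B-vs-C trials. The harmonic-style combination below
# is a heuristic reading of the approach, not necessarily the article's exact formula.
def effective_sample_size(n_ac: float, n_bc: float) -> float:
    """Patients in a hypothetical direct A-vs-B meta-analysis giving equivalent evidence."""
    return (n_ac * n_bc) / (n_ac + n_bc)

if __name__ == "__main__":
    # e.g. 1200 patients in A-vs-C trials and 800 in B-vs-C trials
    print(round(effective_sample_size(1200, 800)))   # -> 480
```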

  16. Social networks of men who have sex with men and their implications for HIV/STI interventions: results from a cross-sectional study using respondent-driven sampling in a large and a small city in Tanzania.

    PubMed

    Ross, Michael W; Larsson, Markus; Jacobson, Jerry; Nyoni, Joyce; Agardh, Anette

    2016-11-18

    Men who have sex with men (MSM) in sub-Saharan Africa remain hidden and hard to reach for involvement in HIV and sexually transmitted infection (STI) services. The aim of the current study was to describe MSM social networks in a large and a small Tanzanian city in order to explore their utility for peer-based healthcare interventions. Data were collected through respondent-driven sampling (RDS) in Dar es Salaam (n=197) and in Tanga (n=99) in 2012 and 2013, using 5 and 4 seeds, respectively. All results were adjusted for the RDS sampling design. Mean personal network size, based on the number of MSM whom participants reported knowing, was 12.0±15.5 in Dar es Salaam and 7.6±8.1 in Tanga. Mean actual RDS network size was 39.4±31.4 in Dar es Salaam and 25.3±9.7 in Tanga. A majority (97%) reported that the person from whom they received the recruitment coupon was a sexual partner, close friend or acquaintance. Homophily in recruitment patterns (selective affiliation) was present for age, gay openness, and HIV status in Dar es Salaam, and for sexual identification in Tanga. The personal network sizes and the existence of contacts between recruiter and referral indicate that it is possible to use peer-driven interventions to reach MSM for HIV/STI interventions in larger and smaller sub-Saharan African cities. The study was reviewed and approved by the University of Texas Health Science Center's Institutional Review Board (HSC-SPH-10-0033) and the Tanzanian National Institute for Medical Research (NIMR/HQ/R.8a/Vol. IX/1088). Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  17. A New Framework for Adaptive Sampling and Analysis During Long-Term Monitoring and Remedial Action Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minsker, Barbara

    2005-06-01

    Yonas Demissie, a research assistant supported by the project, has successfully created artificial data and assimilated it into coupled Modflow and artificial neural network models. His initial findings show that the neural networks help correct errors in the Modflow models. Abhishek Singh has used test cases from the literature to show that performing model calibration with an interactive genetic algorithm results in significantly improved parameter values. Meghna Babbar, the third research assistant supported by the project, has found similar results when applying interactive genetic algorithms to long-term monitoring design. She has also developed new types of interactive genetic algorithms that significantly improve performance. Gayathri Gopalakrishnan, the last research assistant, who is partially supported by the project, has shown that sampling branches of phytoremediation trees is an accurate approach to estimating soil and groundwater contamination in areas surrounding the trees at the Argonne 317/319 site.

  18. DECHADE: DEtecting slight Changes with HArd DEcisions in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Ciuonzo, D.; Salvo Rossi, P.

    2018-07-01

    This paper focuses on the problem of change detection through a Wireless Sensor Network (WSN) whose nodes report only binary decisions (on the presence/absence of a certain event to be monitored), due to bandwidth/energy constraints. The resulting problem can be modelled as testing the equality of samples drawn from independent Bernoulli probability mass functions, when the bit probabilities under both hypotheses are not known. Both One-Sided (OS) and Two-Sided (TS) tests are considered, with reference to: (i) identical bit probability (a homogeneous scenario), (ii) different per-sensor bit probabilities (a non-homogeneous scenario) and (iii) regions with identical bit probability (a block-homogeneous scenario) for the observed samples. The goal is to provide a systematic framework collecting a plethora of viable detectors (designed via theoretically founded criteria) which can be used for each instance of the problem. Finally, verification of the derived detectors in two relevant WSN-related problems is provided to show the appeal of the proposed framework.
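
    The underlying statistical problem, testing whether two sets of one-bit sensor reports were drawn from Bernoulli distributions with the same unknown probability, can be illustrated with a generalized likelihood ratio test for the homogeneous case, as sketched below. This is our own illustration, not one of the paper's detectors.

```python
# A minimal illustration (not the paper's detectors) of the underlying test: given
# binary reports collected before and after a putative change, decide whether both
# sets share one unknown bit probability (no change) or have different probabilities
# (change), via a generalized likelihood ratio with an asymptotic chi-square threshold.
import numpy as np
from scipy.stats import chi2

def bernoulli_glrt(before: np.ndarray, after: np.ndarray, alpha: float = 0.05) -> bool:
    """Two-sided change test for the homogeneous (identical bit probability) case."""
    def loglik(k, n):
        p = k / n
        if p in (0.0, 1.0):
            return 0.0
        return k * np.log(p) + (n - k) * np.log(1 - p)

    k0, n0 = before.sum(), before.size
    k1, n1 = after.sum(), after.size
    # log-likelihood under H1 (separate probabilities) minus under H0 (pooled probability)
    llr = loglik(k0, n0) + loglik(k1, n1) - loglik(k0 + k1, n0 + n1)
    return 2 * llr > chi2.ppf(1 - alpha, df=1)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    print(bernoulli_glrt(rng.binomial(1, 0.2, 200), rng.binomial(1, 0.4, 200)))
```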

  19. Hydrologic-information needs for oil-shale development, northwestern Colorado

    USGS Publications Warehouse

    Taylor, O.J.

    1982-01-01

    Hydrologic information is not adequate for proper development of the large oil-shale reserves of Piceance basin in northwestern Colorado. Exploratory drilling and aquifer testing are needed to define the hydrologic system, to provide wells for aquifer testing, to design mine-drainage techniques, and to explore for additional water supplies. Sampling networks are needed to supply hydrologic data on the quantity and quality of surface water, ground water, and springs. A detailed sampling network is proposed for the White River basin because of expected impacts related to water supplies and waste disposal. Emissions from oil-shale retorts to the atmosphere need additional study because of possible resulting corrosion problems and the destruction of fisheries. Studies of the leachate materials and the stability of disposed retorted shale piles are needed to insure that these materials will not cause problems. Hazards related to in-situ retorts, and the wastes related to oil-shale development in general also need further investigation. (USGS)

  20. Hybrid network defense model based on fuzzy evaluation.

    PubMed

    Cho, Ying-Chiang; Pan, Jen-Yi

    2014-01-01

    With sustained and rapid developments in the field of information technology, the issue of network security has become increasingly prominent. The theme of this study is network data security, with the test subject being a classified and sensitive network laboratory that belongs to the academic network. The analysis is based on the deficiencies and potential risks of the network's existing defense technology, characteristics of cyber attacks, and network security technologies. Subsequently, a distributed network security architecture using the technology of an intrusion prevention system is designed and implemented. In this paper, first, the overall design approach is presented. This design is used as the basis to establish a network defense model, an improvement over the traditional single-technology model that addresses the latter's inadequacies. Next, a distributed network security architecture is implemented, comprising a hybrid firewall, intrusion detection, virtual honeynet projects, and connectivity and interactivity between these three components. Finally, the proposed security system is tested. A statistical analysis of the test results verifies the feasibility and reliability of the proposed architecture. The findings of this study will potentially provide new ideas and stimuli for future designs of network security architecture.

  1. Network Structure and Biased Variance Estimation in Respondent Driven Sampling

    PubMed Central

    Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927
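
    One rough way to probe the first order Markov (FOM) assumption in practice is to compare, along recruitment chains, the recruit's trait distribution conditioned on the recruiter alone with the distribution conditioned on the recruiter and the recruiter's recruiter; under FOM the extra conditioning should not matter. The sketch below is our own illustrative construction, not the estimators proposed in the paper.

```python
# An illustrative check (our own construction, not the paper's method) of the FOM
# assumption along recruitment chains for a binary trait: compare the one-step
# conditional frequency with two-step conditional frequencies.
from collections import Counter

def fom_check(chains):
    """chains: list of trait sequences (0/1) along recruitment paths."""
    first, second = Counter(), Counter()
    for seq in chains:
        for a, b in zip(seq, seq[1:]):
            first[(a, b)] += 1
        for a, b, c in zip(seq, seq[1:], seq[2:]):
            second[(a, b, c)] += 1
    # P(X_t = 1 | X_{t-1} = 1) vs P(X_t = 1 | X_{t-2}, X_{t-1} = 1)
    p1 = first[(1, 1)] / max(1, first[(1, 0)] + first[(1, 1)])
    p2_given0 = second[(0, 1, 1)] / max(1, second[(0, 1, 0)] + second[(0, 1, 1)])
    p2_given1 = second[(1, 1, 1)] / max(1, second[(1, 1, 0)] + second[(1, 1, 1)])
    return p1, p2_given0, p2_given1

if __name__ == "__main__":
    chains = [[1, 1, 0, 1, 1], [0, 1, 1, 1, 0], [1, 0, 0, 1, 1]]
    # similar values are consistent with FOM; strongly divergent values are not
    print(fom_check(chains))
```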

  2. Optimal Space Station solar array gimbal angle determination via radial basis function neural networks

    NASA Technical Reports Server (NTRS)

    Clancy, Daniel J.; Oezguener, Uemit; Graham, Ronald E.

    1994-01-01

    The potential for excessive plume impingement loads on Space Station Freedom solar arrays, caused by jet firings from an approaching Space Shuttle, is addressed. An artificial neural network is designed to determine commanded solar array beta gimbal angle for minimum plume loads. The commanded angle would be determined dynamically. The network design proposed involves radial basis functions as activation functions. Design, development, and simulation of this network design are discussed.

  3. 77 FR 50469 - Notice of Public Workshop: “Designing for Impact III: Workshop on Building the National Network...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-21

    ... series of public workshops entitled ``Designing for Impact: Workshop on Building the National Network for...-president-manufacturing-and-economy . The Designing for Impact workshop series is organized by the federal...: ``Designing for Impact III: Workshop on Building the National Network for Manufacturing Innovation'' AGENCY...

  4. A tree-like Bayesian structure learning algorithm for small-sample datasets from complex biological model systems.

    PubMed

    Yin, Weiwei; Garimalla, Swetha; Moreno, Alberto; Galinski, Mary R; Styczynski, Mark P

    2015-08-28

    There are increasing efforts to bring high-throughput systems biology techniques to bear on complex animal model systems, often with a goal of learning about underlying regulatory network structures (e.g., gene regulatory networks). However, complex animal model systems typically have significant limitations on cohort sizes, number of samples, and the ability to perform follow-up and validation experiments. These constraints are particularly problematic for many current network learning approaches, which require large numbers of samples and may predict many more regulatory relationships than actually exist. Here, we test the idea that by leveraging the accuracy and efficiency of classifiers, we can construct high-quality networks that capture important interactions between variables in datasets with few samples. We start from a previously-developed tree-like Bayesian classifier and generalize its network learning approach to allow for arbitrary depth and complexity of tree-like networks. Using four diverse sample networks, we demonstrate that this approach performs consistently better at low sample sizes than the Sparse Candidate Algorithm, a representative approach for comparison because it is known to generate Bayesian networks with high positive predictive value. We develop and demonstrate a resampling-based approach to enable the identification of a viable root for the learned tree-like network, important for cases where the root of a network is not known a priori. We also develop and demonstrate an integrated resampling-based approach to the reduction of variable space for the learning of the network. Finally, we demonstrate the utility of this approach via the analysis of a transcriptional dataset of a malaria challenge in a non-human primate model system, Macaca mulatta, suggesting the potential to capture indicators of the earliest stages of cellular differentiation during leukopoiesis. We demonstrate that by starting from effective and efficient approaches for creating classifiers, we can identify interesting tree-like network structures with significant ability to capture the relationships in the training data. This approach represents a promising strategy for inferring networks with high positive predictive value under the constraint of small numbers of samples, meeting a need that will only continue to grow as more high-throughput studies are applied to complex model systems.
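
    As a generic illustration of learning a tree-structured network from a small number of samples, the sketch below builds a Chow-Liu-style maximum spanning tree over pairwise mutual information. This is a standard textbook construction shown for orientation only; it is not the tree-like Bayesian classifier generalization developed in the paper.

```python
# Illustrative only: a Chow-Liu-style tree over pairwise mutual information, shown as
# a generic example of learning a tree-structured network from few samples. This is a
# standard construction, not the tree-like algorithm developed in the paper.
import numpy as np
import networkx as nx
from sklearn.metrics import mutual_info_score

def chow_liu_tree(data: np.ndarray) -> nx.Graph:
    """data: (n_samples, n_variables) array of discrete values."""
    n_vars = data.shape[1]
    g = nx.Graph()
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            g.add_edge(i, j, weight=mutual_info_score(data[:, i], data[:, j]))
    return nx.maximum_spanning_tree(g)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    x0 = rng.integers(0, 2, 30)               # only 30 samples, mimicking small cohorts
    data = np.column_stack([x0,
                            (x0 + rng.integers(0, 2, 30)) % 2,
                            rng.integers(0, 2, 30)])
    print(sorted(chow_liu_tree(data).edges()))
```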

  5. Neural network approaches to capture temporal information

    NASA Astrophysics Data System (ADS)

    van Veelen, Martijn; Nijhuis, Jos; Spaanenburg, Ben

    2000-05-01

    The automated design and construction of neural networks is receiving growing attention from the neural networks community. Both the growing availability of computing power and the development of mathematical and probabilistic theory have had a profound impact on neural network design and modelling approaches. This impact is most apparent in the use of neural networks for time series prediction. In this paper, we give our views on past, contemporary and future design and modelling approaches to neural forecasting.

  6. The Plant Phenology Monitoring Design for the National Ecological Observatory Network

    NASA Technical Reports Server (NTRS)

    Elmendorf, Sarah C.; Jones, Katherine D.; Cook, Benjamin I.; Diez, Jeffrey M.; Enquist, Carolyn A. F.; Hufft, Rebecca A.; Jones, Matthew O.; Mazer, Susan J.; Miller-Rushing, Abraham J.; Moore, David J. P.

    2016-01-01

    Phenology is an integrative science that comprises the study of recurring biological activities or events. In an era of rapidly changing climate, the relationship between the timing of those events and environmental cues such as temperature, snowmelt, water availability, or day length are of particular interest. This article provides an overview of the observer-based plant phenology sampling conducted by the U.S. National Ecological Observatory Network (NEON), the resulting data, and the rationale behind the design. Trained technicians will conduct regular in situ observations of plant phenology at all terrestrial NEON sites for the 30-yr life of the observatory. Standardized and coordinated data across the network of sites can be used to quantify the direction and magnitude of the relationships between phenology and environmental forcings, as well as the degree to which these relationships vary among sites, among species, among phenophases, and through time. Vegetation at NEON sites will also be monitored with tower-based cameras, satellite remote sensing, and annual high-resolution airborne remote sensing. Ground-based measurements can be used to calibrate and improve satellite-derived phenometrics. NEON's phenology monitoring design is complementary to existing phenology research efforts and citizen science initiatives throughout the world and will produce interoperable data. By collocating plant phenology observations with a suite of additional meteorological, biophysical, and ecological measurements (e.g., climate, carbon flux, plant productivity, population dynamics of consumers) at 47 terrestrial sites, the NEON design will enable continental-scale inference about the status, trends, causes, and ecological consequences of phenological change.

  7. The plant phenology monitoring design for the National Ecological Observatory Network

    USGS Publications Warehouse

    Elmendorf, Sarah C; Jones, Katherine D.; Cook, Benjamin I.; Diez, Jeffrey M.; Enquist, Carolyn A.F.; Hufft, Rebecca A.; Jones, Matthew O.; Mazer, Susan J.; Miller-Rushing, Abraham J.; Moore, David J. P.; Schwartz, Mark D.; Weltzin, Jake F.

    2016-01-01

    Phenology is an integrative science that comprises the study of recurring biological activities or events. In an era of rapidly changing climate, the relationship between the timing of those events and environmental cues such as temperature, snowmelt, water availability or day length are of particular interest. This article provides an overview of the plant phenology sampling which will be conducted by the U.S. National Ecological Observatory Network (NEON), the resulting data, and the rationale behind the design. Trained technicians will conduct regular in situ observations of plant phenology at all terrestrial NEON sites for the 30-year life of the observatory. Standardized and coordinated data across the network of sites can be used to quantify the direction and magnitude of the relationships between phenology and environmental forcings, as well as the degree to which these relationships vary among sites, among species, among phenophases, and through time. Vegetation at NEON sites will also be monitored with tower-based cameras, satellite remote sensing and annual high-resolution airborne remote sensing. Ground-based measurements can be used to calibrate and improve satellite-derived phenometrics. NEON’s phenology monitoring design is complementary to existing phenology research efforts and citizen science initiatives throughout the world and will produce interoperable data. By collocating plant phenology observations with a suite of additional meteorological, biophysical and ecological measurements (e.g., climate, carbon flux, plant productivity, population dynamics of consumers) at 47 terrestrial sites, the NEON design will enable continental-scale inference about the status, trends, causes and ecological consequences of phenological change.

  8. Differentially co-expressed interacting protein pairs discriminate samples under distinct stages of HIV type 1 infection.

    PubMed

    Yoon, Dukyong; Kim, Hyosil; Suh-Kim, Haeyoung; Park, Rae Woong; Lee, KiYoung

    2011-01-01

    Microarray analyses based on differentially expressed genes (DEGs) have been widely used to distinguish samples across different cellular conditions. However, studies based on DEGs have not been able to clearly determine significant differences between samples of pathophysiologically similar HIV-1 stages, e.g., between acute and chronic progressive (or AIDS) or between uninfected and clinically latent stages. We here suggest a novel approach to allow such discrimination based on stage-specific genetic features of HIV-1 infection. Our approach is based on co-expression changes of genes known to interact. The method can identify a genetic signature for a single sample, in contrast with existing protein-protein-interaction-based analyses with correlational designs. Our approach distinguishes each sample using differentially co-expressed interacting protein pairs (DEPs), based on co-expression scores of individual interacting pairs within a sample. The co-expression score is positive if the two genes in a sample are simultaneously up-regulated or down-regulated, and its absolute value is higher when the expression-change ratios of the two genes are similar. We compared the characteristics of DEPs with those of DEGs by evaluating their usefulness in separating HIV-1 stages, and we identified DEP-based network modules and their gene-ontology enrichment to find the HIV-1 stage-specific gene signature. Based on the DEP approach, we observed clear separation among samples from distinct HIV-1 stages using clustering and principal component analyses. Moreover, the discrimination power of DEPs on the samples (70-100% accuracy) was much higher than that of DEGs (35-45%) using several well-known classifiers. DEP-based network analysis also revealed the HIV-1 stage-specific network modules; the main biological processes were related to "translation," "RNA splicing," "mRNA, RNA, and nucleic acid transport," and "DNA metabolism." Through the HIV-1 stage-related modules, changing stage-specific patterns of protein interactions could be observed. The DEP-based method discriminated the HIV-1 infection stages clearly and revealed an HIV-1 stage-specific gene signature. The proposed DEP-based method might complement existing DEG-based approaches in various microarray expression analyses.
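
    The two stated properties of the per-sample co-expression score, a positive sign when both interacting genes move in the same direction and a larger magnitude when their expression-change ratios are similar, can be captured by a simple score such as the one sketched below; the exact formula used in the paper may differ.

```python
# A sketch of a per-sample co-expression score with the two properties described in
# the abstract. One plausible form, not necessarily the paper's exact definition.
def coexpression_score(log_fc_a: float, log_fc_b: float) -> float:
    """log_fc_a, log_fc_b: log fold-changes of two interacting genes in one sample."""
    if log_fc_a == 0.0 or log_fc_b == 0.0:
        return 0.0
    # positive when both genes are up- or both down-regulated, negative otherwise
    sign = 1.0 if (log_fc_a > 0) == (log_fc_b > 0) else -1.0
    # closer to 1 when the magnitudes of the two changes are similar
    similarity = min(abs(log_fc_a), abs(log_fc_b)) / max(abs(log_fc_a), abs(log_fc_b))
    return sign * similarity

if __name__ == "__main__":
    print(coexpression_score(1.8, 2.0))    # both up, similar ratio -> close to +1
    print(coexpression_score(1.8, -0.2))   # opposite directions -> negative
```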

  9. Optimization of robustness of interdependent network controllability by redundant design

    PubMed Central

    2018-01-01

    Controllability of complex networks has been a hot topic in recent years. Real networks, regarded as interdependent networks, are always coupled together by multiple networks. The cascading process of interdependent networks, including interdependent failure and overload failure, will destroy the robustness of controllability for the whole network. Therefore, the optimization of the robustness of interdependent network controllability is of great importance in the research area of complex networks. In this paper, based on a model of interdependent networks constructed first, we determine the cascading process under different proportions of node attacks. Then, the structural controllability of interdependent networks is measured by the minimum driver nodes. Furthermore, we propose a parameter which can be obtained from the structure and minimum driver set of interdependent networks under different proportions of node attacks, and we analyze the robustness of interdependent network controllability. Finally, we optimize the robustness of interdependent network controllability by redundant design, including node backup and redundant edge backup, and improve the redundant design by proposing different strategies according to their cost. Comparative strategies of redundant design are conducted to find the best strategy. Results show that node backup and redundant edge backup can indeed reduce the number of nodes suffering from failure and improve the robustness of controllability. Considering the cost of redundant design, we should choose BBS (betweenness-based strategy) or DBS (degree-based strategy) for node backup and HDF (high-degree-first) for redundant edge backup. Above all, our proposed strategies are feasible and effective at improving the robustness of interdependent network controllability. PMID:29438426
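
    The abstract measures structural controllability by the number of minimum driver nodes. For a single directed network, that quantity is commonly obtained from a maximum matching (unmatched nodes must be driven). The sketch below, which assumes the networkx library, illustrates that step only; it does not reproduce the paper's treatment of coupled layers, cascading failures, or backup strategies.

    import networkx as nx

    def minimum_driver_nodes(g):
        # Build the bipartite graph of out-copies and in-copies of each node;
        # the nodes left unmatched by a maximum matching are the driver nodes.
        bip = nx.Graph()
        outs = [("out", u) for u in g.nodes()]
        bip.add_nodes_from(outs, bipartite=0)
        bip.add_nodes_from((("in", v) for v in g.nodes()), bipartite=1)
        bip.add_edges_from((("out", u), ("in", v)) for u, v in g.edges())
        matching = nx.algorithms.bipartite.maximum_matching(bip, top_nodes=outs)
        matched = sum(1 for node in matching if node[0] == "out")
        return max(g.number_of_nodes() - matched, 1)

    # toy usage on a random directed network
    print(minimum_driver_nodes(nx.gnp_random_graph(50, 0.05, directed=True, seed=1)))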

  10. Designing optimal greenhouse gas monitoring networks for Australia

    NASA Astrophysics Data System (ADS)

    Ziehn, T.; Law, R. M.; Rayner, P. J.; Roff, G.

    2016-01-01

    Atmospheric transport inversion is commonly used to infer greenhouse gas (GHG) flux estimates from concentration measurements. The optimal location of ground-based observing stations that supply these measurements can be determined by network design. Here, we use a Lagrangian particle dispersion model (LPDM) in reverse mode together with a Bayesian inverse modelling framework to derive optimal GHG observing networks for Australia. This extends the network design for carbon dioxide (CO2) performed by Ziehn et al. (2014) to also minimise the uncertainty on the flux estimates for methane (CH4) and nitrous oxide (N2O), both individually and in a combined network using multiple objectives. Optimal networks are generated by adding up to five new stations to the base network, which is defined as two existing stations, Cape Grim and Gunn Point, in southern and northern Australia respectively. The individual networks for CO2, CH4 and N2O and the combined observing network show large similarities because the flux uncertainties for each GHG are dominated by regions of biologically productive land. There is little penalty, in terms of flux uncertainty reduction, for the combined network compared to individually designed networks. The location of the stations in the combined network is sensitive to variations in the assumed data uncertainty across locations. A simple assessment of economic costs has been included in our network design approach, considering both establishment and maintenance costs. Our results suggest that, while site logistics change the optimal network, there is only a small impact on the flux uncertainty reductions achieved with increasing network size.
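
    In a linear Bayesian inversion the posterior flux-error covariance depends only on the transport Jacobian (here derived from the LPDM run in reverse mode), the observation-error covariance, and the prior flux-error covariance, so candidate station networks can be ranked before any data are collected. The sketch below shows that standard relationship; the function names and the use of the trace as the ranking score are illustrative assumptions, not the authors' code.

    import numpy as np

    def posterior_flux_covariance(H, R, P0):
        # H: n_obs x n_flux sensitivity of concentrations to fluxes
        # R: observation-error covariance, P0: prior flux-error covariance
        return np.linalg.inv(H.T @ np.linalg.inv(R) @ H + np.linalg.inv(P0))

    def uncertainty_reduction(H, R, P0):
        # Fractional reduction of total prior flux uncertainty achieved by
        # the candidate network described by H (larger is better).
        P = posterior_flux_covariance(H, R, P0)
        return 1.0 - np.trace(P) / np.trace(P0)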

  11. A soil sampling intercomparison exercise for the ALMERA network.

    PubMed

    Belli, Maria; de Zorzi, Paolo; Sansone, Umberto; Shakhashiro, Abduhlghani; Gondin da Fonseca, Adelaide; Trinkl, Alexander; Benesch, Thomas

    2009-11-01

    Soil sampling and analysis for radionuclides after an accidental or routine release is a key factor in calculating the dose to members of the public and in establishing possible countermeasures. The IAEA organized a Soil Sampling Intercomparison Exercise (IAEA/SIE/01) for selected laboratories of the ALMERA (Analytical Laboratories for the Measurement of Environmental Radioactivity) network, with the objective of comparing soil sampling procedures used by different laboratories. The ALMERA network is a world-wide network of analytical laboratories located in IAEA member states capable of providing reliable and timely analysis of environmental samples in the event of an accidental or intentional release of radioactivity. Ten ALMERA laboratories were selected to participate in the sampling exercise. The soil sampling intercomparison exercise took place in November 2005 in an agricultural area qualified as a "reference site", aimed at assessing the uncertainties associated with soil sampling in agricultural, semi-natural, urban and contaminated environments and suitable for performing sampling intercomparison. In this paper, the laboratories' sampling performance is evaluated.

  12. Application of self-organizing feature maps to analyze the relationships between ignitable liquids and selected mass spectral ions.

    PubMed

    Frisch-Daiello, Jessica L; Williams, Mary R; Waddell, Erin E; Sigman, Michael E

    2014-03-01

    The unsupervised artificial neural networks method of self-organizing feature maps (SOFMs) is applied to spectral data of ignitable liquids to visualize the grouping of similar ignitable liquids with respect to their American Society for Testing and Materials (ASTM) class designations and to determine the ions associated with each group. The spectral data consists of extracted ion spectra (EIS), defined as the time-averaged mass spectrum across the chromatographic profile for select ions, where the selected ions are a subset of ions from Table 2 of the ASTM standard E1618-11. Utilization of the EIS allows for inter-laboratory comparisons without the concern of retention time shifts. The trained SOFM demonstrates clustering of the ignitable liquid samples according to designated ASTM classes. The EIS of select samples designated as miscellaneous or oxygenated as well as ignitable liquid residues from fire debris samples are projected onto the SOFM. The results indicate the similarities and differences between the variables of the newly projected data compared to those of the data used to train the SOFM. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
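
    As a rough illustration of the kind of self-organizing feature map described above, the following numpy sketch trains a small SOFM on rows of an EIS matrix (one row per sample, one column per selected ion). Grid size, learning-rate and neighbourhood schedules are arbitrary assumptions; the study's actual software and parameters are not specified here.

    import numpy as np

    def train_sofm(data, grid=(10, 10), epochs=50, lr0=0.5, sigma0=3.0, seed=0):
        rng = np.random.default_rng(seed)
        rows, cols = grid
        weights = rng.random((rows, cols, data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                      indexing="ij"), axis=-1)
        step, n_steps = 0, epochs * len(data)
        for _ in range(epochs):
            for x in rng.permutation(data):
                decay = np.exp(-step / n_steps)
                lr, sigma = lr0 * decay, sigma0 * decay
                # best-matching unit for this spectrum
                bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), grid)
                dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
                h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]  # neighbourhood kernel
                weights += lr * h * (x - weights)
                step += 1
        return weights  # new samples are projected onto the map via their BMU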

  13. A Network Optimization Approach for Improving Organizational Design

    DTIC Science & Technology

    2004-01-01

    Keywords: functions, Dynamic Network Analysis, Social Network Analysis. Abstract (excerpt): Organizations are frequently designed and redesigned, often in ... links between sites on the web. Hence a change in any one of the four networks in which people are involved can potentially result in a cascade of ... in terms of a set of networks that open the possibility of using all networks (both social and dynamic network measures) as indicators of potential

  14. Constructing fine-granularity functional brain network atlases via deep convolutional autoencoder.

    PubMed

    Zhao, Yu; Dong, Qinglin; Chen, Hanbo; Iraji, Armin; Li, Yujie; Makkie, Milad; Kou, Zhifeng; Liu, Tianming

    2017-12-01

    State-of-the-art functional brain network reconstruction methods such as independent component analysis (ICA) or sparse coding of whole-brain fMRI data can effectively infer many thousands of volumetric brain network maps from a large number of human brains. However, due to the variability of individual brain networks and the large scale of such networks needed for statistically meaningful group-level analysis, it is still a challenging and open problem to derive group-wise common networks as network atlases. Inspired by the superior spatial pattern description ability of deep convolutional neural networks (CNNs), a novel deep 3D convolutional autoencoder (CAE) network is designed here to extract spatial brain network features effectively, based on which an Apache Spark enabled computational framework is developed for fast clustering of a large number of network maps into fine-granularity atlases. To evaluate this framework, 10 resting state networks (RSNs) were manually labeled from the sparsely decomposed networks of Human Connectome Project (HCP) fMRI data, and 5275 network training samples were obtained in total. The deep CAE models are then trained on these functional networks' spatial maps, and the learned features are used to refine the original 10 RSNs into 17 network atlases that possess fine-granularity functional network patterns. Interestingly, it turned out that some manually mislabeled outliers in the training networks could be corrected by the deep CAE derived features. More importantly, fine granularities of networks can be identified, and they reveal unique network patterns specific to different brain task states. By further applying this method to a dataset from a mild traumatic brain injury study, we show that the technique can effectively identify abnormal small networks in brain injury patients in comparison with controls. In general, our work presents a promising deep learning and big data analysis solution for modeling functional connectomes, with fine granularities, based on fMRI data. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Wireless Concrete Strength Monitoring of Wind Turbine Foundations.

    PubMed

    Perry, Marcus; Fusiek, Grzegorz; Niewczas, Pawel; Rubert, Tim; McAlorum, Jack

    2017-12-16

    Wind turbine foundations are typically cast in place, leaving the concrete to mature under environmental conditions that vary in time and space. As a result, there is uncertainty around the concrete's initial performance, and this can encourage both costly over-design and inaccurate prognoses of structural health. Here, we demonstrate the field application of a dense, wireless thermocouple network to monitor the strength development of an onshore, reinforced-concrete wind turbine foundation. Up-to-date methods in fly ash concrete strength and maturity modelling are used to estimate the distribution and evolution of foundation strength over 29 days of curing. Strength estimates are verified by core samples, extracted from the foundation base. In addition, an artificial neural network, trained using temperature data, is exploited to demonstrate that distributed concrete strengths can be estimated for foundations using only sparse thermocouple data. Our techniques provide a practical alternative to computational models, and could assist site operators in making more informed decisions about foundation design, construction, operation and maintenance.
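
    The strength estimation described above rests on maturity modelling of the thermocouple records. The simplest maturity measure is the Nurse-Saul degree-hour index (ASTM C1074); the paper's fly-ash model may instead use an Arrhenius equivalent-age formulation, so the sketch below only illustrates the maturity-from-temperature step, with an assumed datum temperature and illustrative numbers.

    def nurse_saul_maturity(temps_c, dt_hours=1.0, datum_c=-10.0):
        # Degree-hour maturity index from a thermocouple temperature record;
        # strength is then read off a calibrated strength-maturity curve.
        return sum(max(t - datum_c, 0.0) * dt_hours for t in temps_c)

    # example: 29 days of hourly readings at a constant 15 degC (illustrative)
    print(nurse_saul_maturity([15.0] * 24 * 29))  # 17400 degC-hours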

  16. Wireless Concrete Strength Monitoring of Wind Turbine Foundations

    PubMed Central

    Niewczas, Pawel; Rubert, Tim

    2017-01-01

    Wind turbine foundations are typically cast in place, leaving the concrete to mature under environmental conditions that vary in time and space. As a result, there is uncertainty around the concrete’s initial performance, and this can encourage both costly over-design and inaccurate prognoses of structural health. Here, we demonstrate the field application of a dense, wireless thermocouple network to monitor the strength development of an onshore, reinforced-concrete wind turbine foundation. Up-to-date methods in fly ash concrete strength and maturity modelling are used to estimate the distribution and evolution of foundation strength over 29 days of curing. Strength estimates are verified by core samples, extracted from the foundation base. In addition, an artificial neural network, trained using temperature data, is exploited to demonstrate that distributed concrete strengths can be estimated for foundations using only sparse thermocouple data. Our techniques provide a practical alternative to computational models, and could assist site operators in making more informed decisions about foundation design, construction, operation and maintenance. PMID:29258176

  17. An argument for mechanism-based statistical inference in cancer

    PubMed Central

    Ochs, Michael; Price, Nathan D.; Tomasetti, Cristian; Younes, Laurent

    2015-01-01

    Cancer is perhaps the prototypical systems disease, and as such has been the focus of extensive study in quantitative systems biology. However, translating these programs into personalized clinical care remains elusive and incomplete. In this perspective, we argue that realizing this agenda—in particular, predicting disease phenotypes, progression and treatment response for individuals—requires going well beyond standard computational and bioinformatics tools and algorithms. It entails designing global mathematical models over network-scale configurations of genomic states and molecular concentrations, and learning the model parameters from limited available samples of high-dimensional and integrative omics data. As such, any plausible design should accommodate: biological mechanism, necessary for both feasible learning and interpretable decision making; stochasticity, to deal with uncertainty and observed variation at many scales; and a capacity for statistical inference at the patient level. This program, which requires a close, sustained collaboration between mathematicians and biologists, is illustrated in several contexts, including learning bio-markers, metabolism, cell signaling, network inference and tumorigenesis. PMID:25381197

  18. Discrete dynamical system modelling for gene regulatory networks of 5-hydroxymethylfurfural tolerance for ethanologenic yeast.

    PubMed

    Song, M; Ouyang, Z; Liu, Z L

    2009-05-01

    Composed of linear difference equations, a discrete dynamical system (DDS) model was designed to reconstruct transcriptional regulations in gene regulatory networks (GRNs) for ethanologenic yeast Saccharomyces cerevisiae in response to 5-hydroxymethylfurfural (HMF), a bioethanol conversion inhibitor. The modelling aims at identification of a system of linear difference equations to represent temporal interactions among significantly expressed genes. Power stability is imposed on a system model under the normal condition in the absence of the inhibitor. Non-uniform sampling, typical in a time-course experimental design, is addressed by a log-time domain interpolation. A statistically significant DDS model of the yeast GRN derived from time-course gene expression measurements by exposure to HMF, revealed several verified transcriptional regulation events. These events implicate Yap1 and Pdr3, transcription factors consistently known for their regulatory roles by other studies or postulated by independent sequence motif analysis, suggesting their involvement in yeast tolerance and detoxification of the inhibitor.
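
    A discrete dynamical system of linear difference equations can be identified from a uniformly resampled expression time course by ordinary least squares, and power stability can then be checked from the spectral radius of the estimated transition matrix. The sketch below illustrates those two steps under that simplified formulation; it is not the authors' estimation procedure, and the variable names are assumptions.

    import numpy as np

    def fit_linear_dds(X):
        # X: genes x timepoints matrix, assumed already interpolated onto
        # uniform steps (the paper uses log-time-domain interpolation).
        # Least-squares estimate of A in x_{t+1} = A @ x_t.
        X0, X1 = X[:, :-1], X[:, 1:]
        B, *_ = np.linalg.lstsq(X0.T, X1.T, rcond=None)
        return B.T

    def is_power_stable(A):
        # Power stability, as imposed on the control (no-inhibitor) condition:
        # spectral radius strictly below one.
        return np.max(np.abs(np.linalg.eigvals(A))) < 1.0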

  19. Development of metamodels for predicting aerosol dispersion in ventilated spaces

    NASA Astrophysics Data System (ADS)

    Hoque, Shamia; Farouk, Bakhtier; Haas, Charles N.

    2011-04-01

    Artificial neural network (ANN) based metamodels were developed to describe the relationship between the design variables and their effects on the dispersion of aerosols in a ventilated space. A Hammersley sequence sampling (HSS) technique was employed to efficiently explore the multi-parameter design space and to build numerical simulation scenarios. A detailed computational fluid dynamics (CFD) model was applied to simulate these scenarios. The results derived from the CFD simulations were used to train and test the metamodels. Feed-forward ANNs were developed to map the relationship between the inputs and the outputs. The predictive ability of the neural network based metamodels was compared to that of linear and quadratic metamodels derived from the same CFD simulation results. The ANN based metamodel performed well in predicting independent data sets, including data generated at the boundaries. Sensitivity analysis showed that the ratio of particle tracking time to residence time and the locations of the input and output relative to the height of the room had more impact on particle behavior than the other dimensionless groups.
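
    Hammersley sequence sampling places quasi-random, low-discrepancy points across the design space so that relatively few CFD scenarios cover it evenly. The following is a generic sketch of a Hammersley point set on the unit hypercube (points would then be rescaled to the physical parameter ranges); it is a textbook construction, not the study's code, and supports up to 11 dimensions with the prime bases listed.

    def radical_inverse(i, base):
        # Van der Corput radical inverse of integer i in the given base.
        inv, f = 0.0, 1.0 / base
        while i > 0:
            inv += (i % base) * f
            i //= base
            f /= base
        return inv

    def hammersley(n_points, dim):
        primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # bases for dims 2..11
        return [[i / n_points] + [radical_inverse(i, primes[k]) for k in range(dim - 1)]
                for i in range(n_points)]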

  20. Exploring patients' health information communication practices with social network members as a foundation for consumer health IT design.

    PubMed

    Valdez, Rupa Sheth; Brennan, Patricia Flatley

    2015-05-01

    There is a need to ensure that the growing number of consumer health information technologies designed to support patient engagement account for the larger social context in which health is managed. Basic research on how patients engage this larger social context is needed as a precursor to the development of patient-centered consumer health information technology (IT) solutions. The purpose of this study was to inform the broader design of consumer health IT by characterizing patients' existing health information communication practices with their social network members. This qualitative study took place between 2010 and 2012 in a Midwestern city. Eighteen patients with chronic conditions participated in a semi-structured interview that was analyzed using qualitative content analysis and descriptive statistics. Emphasis was placed on recruiting a sample representing diverse cultural groups and including participants of low socioeconomic status. Participants' social networks included a wide range of individuals, spanning biological relatives, divinities, and second-degree relationships. Participants' rationales for health information communication reflected seven themes: (1) characteristics and circumstances of the person, (2) characteristics and circumstances of the relationship, (3) structure and composition of the social network, (4) content of the message, (5) orientation of the goal, (6) dimensions of the context, and (7) adaptive practices. This study demonstrates that patients' health information communication practices are multidimensional, engaging individuals beyond formal and informal caregivers and driven by characteristics of their personal lives and larger social contexts in addition to their health problem. New models of consumer health IT must be created to better align with the realities of patients' communication routines. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  1. Optical calibration of a new two-way optical component network analyzer

    NASA Astrophysics Data System (ADS)

    Tsao, Shyh-Lin; Ko, Chih-Han; Liou, Tai-Chi

    2003-12-01

    High-speed fiber communications have shown promising results recently [1,2]. The use of lightwave technology for measuring the S-parameters of optical components is becoming important. A two-way network analyzer developed for this purpose has been reported [3]. In this paper, we report the calibration method of a new two-way lightwave component analyzer for application to fiber-optic signal-processing elements. The background error and the circulator wavelength response are both calibrated. We have designed a new probe for the two-way optical component network analyzer. The probe is composed of a frequency division multiplexer (FDM), an electrical circulator, an optical transmitter, an optical receiver, and an optical circulator. We design 2-D grating structures for the frequency division. The PCB we adopted is Kinstan GD1530 160, whose relative dielectric constant ε = 4.3, length = 120 mm, and height = 1.8 mm. Two-dimensional non-metal-covered array square pads are designed on the FR4 glass-epoxy board for the FDM; the FDM can be realized with these two-dimensional non-metalized array square pads. Finally, we use a single fiber ring resonator filter as our test sample and test the device we made by comparing numerical and experimental results. References [1] D. D. Curtis and E. E. Ames, "Optical Test Set for Microwave Fiber-Optic Network Analysis," IEEE Transactions on Microwave Theory and Techniques, vol. 38, no. 5, pp. 552-559, 1990. [2] J. A. C. Bingham, "Multicarrier modulation for data transmission: an idea whose time has come," IEEE Commun. Magazine, pp. 5-14, 1990. [3] M. Nakazawa, K. Suzuki, and Y. Kimura, "3.2-5 Gbps 100 km error-free soliton transmission with erbium amplifiers and repeaters," IEEE Photonics Tech. Lett., vol. 2, pp. 216-219, 1990.

  2. Application of a neural network for reflectance spectrum classification

    NASA Astrophysics Data System (ADS)

    Yang, Gefei; Gartley, Michael

    2017-05-01

    Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum anywhere from the ultra-violet to the thermal infrared regions. These methods analyze reflectance on a pixel-by-pixel basis. Inspired by the high performance that convolutional neural networks (CNNs) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. By using bidirectional reflectance distribution function (BRDF) data, we can reformulate the 4-dimensional data into 2 dimensions, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples to improve the robustness of the neural network training. Unlike traditional classification using hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods that utilize spatial features. Although the training process typically has a large computational cost, simple classifiers work well when subsequently using neural network generated features. Currently, most popular neural networks such as VGG, GoogLeNet and AlexNet are trained on RGB spatial image data. Our approach aims to build a neural network based on directional reflectance spectra to provide understanding from another perspective. At the end of this paper, we compare the differences among several classifiers and analyze the trade-offs among neural network parameters.

  3. Selective Narrowing of Social Networks Across Adulthood is Associated With Improved Emotional Experience in Daily Life

    PubMed Central

    English, Tammy; Carstensen, Laura L.

    2014-01-01

    Past research has documented age differences in the size and composition of social networks that suggest that networks grow smaller with age and include an increasingly greater proportion of well-known social partners. According to socioemotional selectivity theory, such changes in social network composition serve an antecedent emotion regulatory function that supports an age-related increase in the priority that people place on emotional well-being. The present study employed a longitudinal design with a sample that spanned the full adult age range to examine whether there is evidence of within-individual (developmental) change in social networks and whether the characteristics of relationships predict emotional experiences in daily life. Using growth curve analyses, social networks were found to increase in size in young adulthood and then decline steadily throughout later life. As postulated by socioemotional selectivity theory, reductions were observed primarily in the number of peripheral partners; the number of close partners was relatively stable over time. In addition, cross-sectional analyses revealed that older adults reported that social network members elicited less negative emotion and more positive emotion. The emotional tone of social networks, particularly when negative emotions were associated with network members, also predicted experienced emotion of participants. Overall, findings were robust after taking into account demographic variables and physical health. The implications of these findings are discussed in the context of socioemotional selectivity theory and related theoretical models. PMID:24910483

  4. Hybrid Analog/Digital Receiver

    NASA Technical Reports Server (NTRS)

    Brown, D. H.; Hurd, W. J.

    1989-01-01

    Advanced hybrid analog/digital receiver processes intermediate-frequency (IF) signals carrying digital data in form of phase modulation. Uses IF sampling and digital phase-locked loops to track carrier and subcarrier signals and to synchronize data symbols. Consists of three modules: IF assembly, signal-processing assembly, and test-signal assembly. Intended for use in Deep Space Network, but presumably basic design modified for such terrestrial uses as communications or laboratory instrumentation where signals weak and/or noise strong.

  5. Seasonal rationalization of river water quality sampling locations: a comparative study of the modified Sanders and multivariate statistical approaches.

    PubMed

    Varekar, Vikas; Karmakar, Subhankar; Jha, Ramakar

    2016-02-01

    The design of surface water quality sampling locations is a crucial decision-making process for the rationalization of a monitoring network. The quantity, quality, and types of available data (watershed characteristics and water quality data) may affect the selection of an appropriate design methodology. The modified Sanders approach and multivariate statistical techniques [particularly factor analysis (FA)/principal component analysis (PCA)] are well-accepted and widely used techniques for the design of sampling locations. However, their performance may vary significantly with the quantity, quality, and types of available data. In this paper, an attempt has been made to evaluate the performance of these techniques by accounting for the effect of seasonal variation, under a situation of limited water quality data but extensive watershed characteristics information, as continuous and consistent river water quality data are usually difficult to obtain, whereas watershed information may be made available through the application of geospatial techniques. A case study of the Kali River, Western Uttar Pradesh, India, is selected for the analysis. Monitoring was carried out at 16 sampling locations. The discrete and diffuse pollution loads at different sampling sites were estimated and accounted for using the modified Sanders approach, whereas the monitored physical and chemical water quality parameters were utilized as inputs for FA/PCA. The optimum numbers of sampling locations designed by the modified Sanders approach are eight for the monsoon and seven for the non-monsoon season, while those for FA/PCA are eleven and nine, respectively. Little variation in the number and locations of the designed sampling sites was obtained between the two techniques, which shows the stability of the results. A geospatial analysis has also been carried out to check the significance of the designed sampling locations with respect to the river basin characteristics and land use of the study area. Both methods are efficient; however, the modified Sanders approach outperforms FA/PCA when limited water quality but extensive watershed information is available. The available water quality dataset is limited, and the FA/PCA-based approach fails to identify monitoring locations with higher variation, as these multivariate statistical approaches are data-driven. The priority/hierarchy and number of sampling sites designed by the modified Sanders approach are well justified by the land use practices and observed river basin characteristics of the study area.
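
    For the FA/PCA side of the comparison, a minimal version of the computation is a principal component decomposition of the standardized locations-by-parameters water quality matrix; locations that contribute strongly to the retained components are kept in the rationalized network. The numpy sketch below shows only that decomposition, under assumed variable names, and omits the factor rotation and the modified Sanders loading calculations.

    import numpy as np

    def pca_scores(X):
        # X: sampling locations x water quality parameters.
        Xc = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)    # standardize
        eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        order = np.argsort(eigvals)[::-1]                    # by explained variance
        return Xc @ eigvecs[:, order], eigvals[order]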

  6. Landscape Characterization and Representativeness Analysis for Understanding Sampling Network Coverage

    DOE Data Explorer

    Maddalena, Damian; Hoffman, Forrest; Kumar, Jitendra; Hargrove, William

    2014-08-01

    Sampling networks rarely conform to spatial and temporal ideals, often comprised of network sampling points which are unevenly distributed and located in less than ideal locations due to access constraints, budget limitations, or political conflict. Quantifying the global, regional, and temporal representativeness of these networks by quantifying the coverage of network infrastructure highlights the capabilities and limitations of the data collected, facilitates upscaling and downscaling for modeling purposes, and improves the planning efforts for future infrastructure investment under current conditions and future modeled scenarios. The work presented here utilizes multivariate spatiotemporal clustering analysis and representativeness analysis for quantitative landscape characterization and assessment of the Fluxnet, RAINFOR, and ForestGEO networks. Results include ecoregions that highlight patterns of bioclimatic, topographic, and edaphic variables and quantitative representativeness maps of individual and combined networks.
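
    A minimal version of the workflow described above is to cluster standardized environmental variables into ecoregions and to score each map cell by its distance, in the same data space, to the nearest network site. The sketch below assumes scikit-learn and illustrative variable names; the original analysis uses its own multivariate spatiotemporal clustering code rather than plain k-means.

    import numpy as np
    from sklearn.cluster import KMeans

    def ecoregions_and_representativeness(env_cells, site_vectors, k=50, seed=0):
        # env_cells: n_cells x n_variables standardized bioclimatic/topographic/
        # edaphic values; site_vectors: the same variables at the network sites.
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(env_cells)
        dists = np.linalg.norm(env_cells[:, None, :] - site_vectors[None, :, :], axis=-1)
        representativeness = dists.min(axis=1)   # small distance = well represented
        return km.labels_, representativeness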

  7. Polymerization speed and diffractive experiments in polymer network LC test cells

    NASA Astrophysics Data System (ADS)

    Braun, Larissa; Gong, Zhen; Habibpourmoghadam, Atefeh; Schafforz, Samuel L.; Wolfram, Lukas; Lorenz, Alexander

    2018-02-01

    Polymer-network liquid crystals (LCs), where the response properties of a LC can be enhanced by the presence of a porous polymer network, are investigated. In the reported experiments, liquid crystals were doped with a small amount (< 10%) of photo-curable acrylate monomers. Samples with surface grafted photoinitiators, dissolvable photoinitiators, and samples with both kinds of photoinitiators were prepared. Both conventional (planar electrodes) and diffractive (interdigitated electrodes) test cells were used. These samples were exposed with a UV light source and changes of their capacitance were investigated with an LCR meter during exposure. Due to the presence of the in-situ generated polymer network, the electro-optic response properties of photo cured samples were enhanced. For example, their continuous phase modulation properties led to more localized responses in samples with interdigitated electrodes, which caused suppression of selected diffraction orders in the diffraction patterns recorded in polymer network LC samples. Moreover, capacitance changes were investigated during photopolymerization of a blue phase LC.

  8. A micro-Doppler sonar for acoustic surveillance in sensor networks

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaonian

    Wireless sensor networks have been employed in a wide variety of applications, despite the limited energy and communication resources at each sensor node. Low power custom VLSI chips implementing passive acoustic sensing algorithms have been successfully integrated into an acoustic surveillance unit and demonstrated for detection and location of sound sources. In this dissertation, I explore active and passive acoustic sensing techniques, signal processing and classification algorithms for detection and classification in a multinodal sensor network environment. I will present the design and characterization of a continuous-wave micro-Doppler sonar to image objects with articulated moving components. As an example application for this system, we use it to image gaits of humans and four-legged animals. I will present the micro-Doppler gait signatures of a walking person, a dog and a horse. I will discuss the resolution and range of this micro-Doppler sonar and use experimental results to support the theoretical analyses. In order to reduce the data rate and make the system amenable to wireless sensor networks, I will present a second micro-Doppler sonar that uses bandpass sampling for data acquisition. Speech recognition algorithms are explored for biometric identifications from one's gait, and I will present and compare the classification performance of the two systems. The acoustic micro-Doppler sonar design and biometric identification results are the first in the field as the previous work used either video camera or microwave technology. I will also review bearing estimation algorithms and present results of applying these algorithms for bearing estimation and tracking of moving vehicles. Another major source of the power consumption at each sensor node is the wireless interface. To address the need of low power communications in a wireless sensor network, I will also discuss the design and implementation of ultra wideband transmitters in a three dimensional silicon on insulator process. Lastly, a prototype of neuromorphic interconnects using ultra wideband radio will be presented.
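
    The data-rate reduction attributed to bandpass sampling follows from the classic condition that a signal confined to [f_L, f_H] can be sampled uniformly at rates well below 2 f_H, provided the rate falls inside one of a set of valid windows. The sketch below computes those windows; the example frequencies are illustrative, not the sonar's actual band.

    def bandpass_sampling_rates(f_low, f_high):
        # Valid sampling-rate windows (Hz): 2*f_high/n <= fs <= 2*f_low/(n-1),
        # for integer n up to floor(f_high / bandwidth).
        bandwidth = f_high - f_low
        windows = []
        for n in range(1, int(f_high // bandwidth) + 1):
            lo = 2.0 * f_high / n
            hi = float("inf") if n == 1 else 2.0 * f_low / (n - 1)
            if lo <= hi:
                windows.append((lo, hi))
        return windows

    # example: a 5 kHz-wide band centred at 40 kHz (assumed values)
    print(bandpass_sampling_rates(37.5e3, 42.5e3))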

  9. An inventory of terrestrial mammals at national parks in the Northeast Temperate Network and Sagamore Hill National Historic Site

    USGS Publications Warehouse

    Gilbert, Andrew T.; O'Connell, Allan F.; Annand, Elizabeth M.; Talancy, Neil W.; Sauer, John R.; Nichols, James D.

    2008-01-01

    An inventory of mammals was conducted during 2004 at nine national park sites in the Northeast Temperate Network (NETN): Acadia National Park (NP), Marsh-Billings-Rockefeller National Historical Park (NHP), Minute Man NHP, Morristown NHP, Roosevelt-Vanderbilt National Historic Site (NHS), Saint-Gaudens NHS, Saugus Iron Works NHS, Saratoga NHP, and Weir Farm NHS. Sagamore Hill NHS, part of the Northeast Coastal and Barrier Network (NCBN), was also surveyed. Each park except Acadia NP was sampled twice, once in the winter/spring and again in the summer/fall. During the winter/spring visit, indirect measure (IM) sampling arrays were employed at 2 to 16 stations and included sampling by remote cameras, cubby boxes (covered trackplates), and hair traps. IM stations were established and re-used during the summer/fall sampling period. Trapping was conducted at 2 to 12 stations at all parks except Acadia NP during the summer/fall period and consisted of arrays of small-mammal traps, squirrel-sized live traps, and some fox-sized live traps. We used estimation-based procedures and probabilistic sampling techniques to design this inventory. A total of 38 species was detected by IM sampling, trapping, and field observations. Species diversity (number of species) varied among parks, ranging from 8 to 24, with Minute Man NHP having the most species detected. Raccoon (Procyon lotor), Virginia Opossum (Didelphis virginiana), Fisher (Martes pennanti), and Domestic Cat (Felis silvestris) were the most common medium-sized mammals detected in this study and White-footed Mouse (Peromyscus leucopus), Northern Short-tailed Shrew (Blarina brevicauda), Deer Mouse (P. maniculatus), and Meadow Vole (Microtus pennsylvanicus) the most common small mammals detected. All species detected are considered fairly common throughout their range including the Fisher, which has been reintroduced in several New England states. We did not detect any state or federal endangered or threatened species.

  10. Relationships between Perron-Frobenius eigenvalue and measurements of loops in networks

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Kou, Yingxin; Li, Zhanwu; Xu, An; Chang, Yizhe

    2018-07-01

    The Perron-Frobenius eigenvalue (PFE) is widely used as a measurement of the number of loops in networks, but the exact relationship between the PFE and the number of loops in networks has not yet been researched: is it strictly monotonically increasing? And what are the relationships between the PFE and other measurements of loops in networks, such as the average loop degree of nodes and the distribution of loop ranks? We investigate these questions using samples of ER random networks, NW small-world networks and BA scale-free networks. The results confirm that both the number of loops in a network and the average loop degree of nodes increase with the PFE in general trend across all samples, but neither is strictly monotonically increasing, so the PFE can be used as a rough estimative measurement of the number of loops in networks and the average loop degree of nodes. Furthermore, we find that a majority of the loop ranks of all samples obey a Weibull distribution, whose scale parameter A and shape parameter B have approximate power-law relationships with the PFE of the samples.
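
    The PFE of a non-negative adjacency matrix can be estimated by power iteration; the sketch below is a generic implementation of that standard computation, not the authors' code.

    import numpy as np

    def perron_frobenius_eigenvalue(adj, iters=1000, tol=1e-10):
        # Largest eigenvalue of a non-negative adjacency matrix by power
        # iteration; used in the paper as a rough measure of loop content.
        x = np.ones(adj.shape[0])
        lam = 0.0
        for _ in range(iters):
            y = adj @ x
            lam_new = np.linalg.norm(y)
            if lam_new == 0.0:
                return 0.0          # no directed cycles (nilpotent adjacency)
            x = y / lam_new
            if abs(lam_new - lam) < tol:
                break
            lam = lam_new
        return lam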

  11. Application of Artificial Neural Networks to the Design of Turbomachinery Airfoils

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri

    1997-01-01

    Artificial neural networks are widely used in engineering applications, such as control, pattern recognition, plant modeling and condition monitoring, to name just a few. In this seminar we will explore the possibility of applying neural networks to aerodynamic design, in particular, the design of turbomachinery airfoils. The principal idea behind this effort is to represent the design space using a neural network (within some parameter limits), and then to employ an optimization procedure to search this space for a solution that exhibits optimal performance characteristics. Results obtained for design problems in two spatial dimensions will be presented.

  12. Artificially modified magnetic anisotropy in interconnected nanowire networks.

    PubMed

    Araujo, Elsie; Encinas, Armando; Velázquez-Galván, Yenni; Martínez-Huerta, Juan Manuel; Hamoir, Gaël; Ferain, Etienne; Piraux, Luc

    2015-01-28

    Interconnected or crossed magnetic nanowire networks have been fabricated by electrodeposition into a polycarbonate template with crossed cylindrical nanopores oriented ±30° with respect to the surface normal. Tailor-made nanoporous polymer membranes have been designed by performing a double energetic heavy ion irradiation with fixed incidence angles. The Ni and Ni/NiFe nanowire networks have been characterized by magnetometry as well as ferromagnetic resonance and compared with parallel nanowire arrays of the same diameter and density. The most interesting feature of these nanostructured materials is a significant reduction of the magnetic anisotropy when the external field is applied perpendicular and parallel to the plane of the sample. This effect is attributed to the relative orientation of the nanowire axes with the applied field. Moreover, the microwave transmission spectra of these nanowire networks display an asymmetric linewidth broadening, which may be interesting for the development of low-pass filters. Nanoporous templates made of well-defined nanochannel network constitute an interesting approach to fabricate materials with controlled anisotropy and microwave absorption properties that can be easily modified by adjusting the relative orientation of the nanochannels, pore sizes and material composition along the length of the nanowire.

  13. Condition monitoring and fault diagnosis of motor bearings using undersampled vibration signals from a wireless sensor network

    NASA Astrophysics Data System (ADS)

    Lu, Siliang; Zhou, Peng; Wang, Xiaoxian; Liu, Yongbin; Liu, Fang; Zhao, Jiwen

    2018-02-01

    Wireless sensor networks (WSNs) which consist of miscellaneous sensors are used frequently in monitoring vital equipment. Benefiting from the development of data mining technologies, the massive data generated by sensors facilitate condition monitoring and fault diagnosis. However, too much data increase storage space, energy consumption, and computing resource, which can be considered fatal weaknesses for a WSN with limited resources. This study investigates a new method for motor bearings condition monitoring and fault diagnosis using the undersampled vibration signals acquired from a WSN. The proposed method, which is a fusion of the kurtogram, analog domain bandpass filtering, bandpass sampling, and demodulated resonance technique, can reduce the sampled data length while retaining the monitoring and diagnosis performance. A WSN prototype was designed, and simulations and experiments were conducted to evaluate the effectiveness and efficiency of the proposed method. Experimental results indicated that the sampled data length and transmission time of the proposed method result in a decrease of over 80% in comparison with that of the traditional method. Therefore, the proposed method indicates potential applications on condition monitoring and fault diagnosis of motor bearings installed in remote areas, such as wind farms and offshore platforms.
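
    The processing chain named in the abstract (kurtogram band selection, bandpass filtering, bandpass sampling, demodulated resonance analysis) conventionally ends in an envelope spectrum whose peaks sit at the bearing fault frequencies. The sketch below shows a standard envelope-spectrum step using scipy, with the resonance band passed in as if it had already been chosen by a kurtogram; it is a generic illustration, not the paper's undersampling implementation.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def envelope_spectrum(x, fs, band):
        # Bandpass around the chosen resonance band, then take the spectrum
        # of the Hilbert envelope to expose fault-related modulation.
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
        xb = filtfilt(b, a, x)
        env = np.abs(hilbert(xb))
        spec = np.abs(np.fft.rfft(env - env.mean()))
        freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
        return freqs, spec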

  14. S-CNN: Subcategory-aware convolutional networks for object detection.

    PubMed

    Chen, Tao; Lu, Shijian; Fan, Jiayuan

    2017-09-26

    The marriage between the deep convolutional neural network (CNN) and region proposals has made breakthroughs for object detection in recent years. While the discriminative object features are learned via a deep CNN for classification, the large intra-class variation and deformation still limit the performance of the CNN based object detection. We propose a subcategory-aware CNN (S-CNN) to solve the object intra-class variation problem. In the proposed technique, the training samples are first grouped into multiple subcategories automatically through a novel instance sharing maximum margin clustering process. A multi-component Aggregated Channel Feature (ACF) detector is then trained to produce more latent training samples, where each ACF component corresponds to one clustered subcategory. The produced latent samples together with their subcategory labels are further fed into a CNN classifier to filter out false proposals for object detection. An iterative learning algorithm is designed for the joint optimization of image subcategorization, multi-component ACF detector, and subcategory-aware CNN classifier. Experiments on INRIA Person dataset, Pascal VOC 2007 dataset and MS COCO dataset show that the proposed technique clearly outperforms the state-of-the-art methods for generic object detection.

  15. Plasmonic Three-Dimensional Transparent Conductor Based on Al-Doped Zinc Oxide-Coated Nanostructured Glass Using Atomic Layer Deposition

    DOE PAGES

    Malek, Gary A.; Aytug, Tolga; Liu, Qingfeng; ...

    2015-04-02

    Transparent nanostructured glass coatings, fabricated on glass substrates, with a unique three-dimensional (3D) architecture were utilized as the foundation for the design of plasmonic 3D transparent conductors. Transformation of the non-conducting 3D structure to a conducting 3D network was accomplished through atomic layer deposition of aluminum-doped zinc oxide (AZO). After AZO growth, gold nanoparticles (AuNPs) were deposited by electron-beam evaporation to enhance light trapping and decrease the overall sheet resistance. Field emission scanning electron microscopy and atomic force microscopy images revealed the highly porous, nanostructured morphology of the AZO coated glass surface along with the in-plane dimensions of the deposited AuNPs. Sheet resistance measurements conducted on the coated samples verified that the electrical properties of the 3D network are comparable to those of the untextured two-dimensional AZO coated glass substrates. In addition, transmittance measurements of the glass samples coated with various AZO thicknesses showed preservation of the highly transparent nature of each sample, while the AuNPs demonstrated enhanced light scattering as well as light-trapping capability.

  16. Plasmonic Three-Dimensional Transparent Conductor Based on Al-Doped Zinc Oxide-Coated Nanostructured Glass Using Atomic Layer Deposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malek, Gary A.; Aytug, Tolga; Liu, Qingfeng

    Transparent nanostructured glass coatings, fabricated on glass substrates, with a unique three-dimensional (3D) architecture were utilized as the foundation for the design of plasmonic 3D transparent conductors. Transformation of the non-conducting 3D structure to a conducting 3D network was accomplished through atomic layer deposition of aluminum-doped zinc oxide (AZO). After AZO growth, gold nanoparticles (AuNPs) were deposited by electron-beam evaporation to enhance light trapping and decrease the overall sheet resistance. Field emission scanning electron microscopy and atomic force microscopy images revealed the highly porous, nanostructured morphology of the AZO coated glass surface along with the in-plane dimensions of the deposited AuNPs. Sheet resistance measurements conducted on the coated samples verified that the electrical properties of the 3D network are comparable to those of the untextured two-dimensional AZO coated glass substrates. In addition, transmittance measurements of the glass samples coated with various AZO thicknesses showed preservation of the highly transparent nature of each sample, while the AuNPs demonstrated enhanced light scattering as well as light-trapping capability.

  17. Bidirectional influence: A longitudinal analysis of size of drug network and depression among inner-city residents in Baltimore, Maryland

    PubMed Central

    Yang, Jingyan; Latkin, Carl A.; Davey-Rothwell, Melissa

    2015-01-01

    BACKGROUND The prevalence of depression among drug users is high. It has been recognized that drug use behaviors can be influenced and spread through social networks. OBJECTIVES We investigated the directional relationship between social network factors and depressive symptoms among a sample of inner-city residents in Baltimore, MD. METHODS We performed a longitudinal study of four-wave data collected from a network-based HIV/STI prevention intervention for women and network members, consisting of both men and women. Our primary outcome and exposure were depression using CESD scale and social network characteristics, respectively. Linear mixed model with clustering adjustment was used to account for both repeated measurement and network design. RESULTS Of the 746 participants, those who had high levels of depression tended to be female, less educated, homeless, smokers, and did not have a main partner. In the univariate longitudinal model, larger size of drug network was significantly associated with depression (OR=1.38, p<0.001). This relationship held after controlling for age, gender, homeless in the past six months, college education, having a main partner, cigarette smoking, perceived health, and social support network (aOR=1.19, p=0.001). In the univariate mixed model using depression to predict size of drug network, the data suggested that depression was associated with larger size of drug network (coef.=1.23, p<0.001) and the same relation held in multivariate model (adjusted coef.=1.08, p=0.001). CONCLUSIONS The results suggest that larger size of drug network is a risk factor for depression, and vice versa. Further intervention strategies to reduce depression should address social networks factors. PMID:26584046

  18. Integrating network ecology with applied conservation: a synthesis and guide to implementation.

    PubMed

    Kaiser-Bunbury, Christopher N; Blüthgen, Nico

    2015-07-10

    Ecological networks are a useful tool to study the complexity of biotic interactions at a community level. Advances in the understanding of network patterns encourage the application of a network approach in other disciplines than theoretical ecology, such as biodiversity conservation. So far, however, practical applications have been meagre. Here we present a framework for network analysis to be harnessed to advance conservation management by using plant-pollinator networks and islands as model systems. Conservation practitioners require indicators to monitor and assess management effectiveness and validate overall conservation goals. By distinguishing between two network attributes, the 'diversity' and 'distribution' of interactions, on three hierarchical levels (species, guild/group and network) we identify seven quantitative metrics to describe changes in network patterns that have implications for conservation. Diversity metrics are partner diversity, vulnerability/generality, interaction diversity and interaction evenness, and distribution metrics are the specialization indices d' and H2' and modularity. Distribution metrics account for sampling bias and may therefore be suitable indicators to detect human-induced changes to plant-pollinator communities, thus indirectly assessing the structural and functional robustness and integrity of ecosystems. We propose an implementation pathway that outlines the stages that are required to successfully embed a network approach in biodiversity conservation. Most importantly, only if conservation action and study design are aligned by practitioners and ecologists through joint experiments, are the findings of a conservation network approach equally beneficial for advancing adaptive management and ecological network theory. We list potential obstacles to the framework, highlight the shortfall in empirical, mostly experimental, network data and discuss possible solutions. Published by Oxford University Press on behalf of the Annals of Botany Company.
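
    As one concrete example of the 'diversity' metrics listed above, interaction evenness is usually computed as the Shannon entropy of the interaction frequencies scaled by its maximum. The sketch below follows that common definition (maximum taken as the log of the number of matrix cells); the exact convention used in the framework may differ.

    import numpy as np

    def interaction_evenness(matrix):
        # matrix: plants x pollinators interaction frequencies (numpy array).
        p = matrix[matrix > 0] / matrix.sum()
        shannon = -(p * np.log(p)).sum()
        return shannon / np.log(matrix.size)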

  19. Design and implementation of dynamic hybrid Honeypot network

    NASA Astrophysics Data System (ADS)

    Qiao, Peili; Hu, Shan-Shan; Zhai, Ji-Qiang

    2013-05-01

    A method of constructing a dynamic and self-adaptive virtual network is suggested to confuse adversaries, delay and divert attacks, exhaust attacker resources, and collect attack information. The concepts of the honeypot and of Honeyd, a framework for virtual honeypots, are introduced. Network scanning techniques, including active fingerprint recognition, are analyzed. A dynamic virtual network system is designed and implemented. In this system, a virtual network similar to the real network topology is built according to messages collected from the real environment. In this way, the system can perplex attackers during an attack and supports further analysis and research of the attacks. Tests of the system show that this design can successfully simulate a real network environment and can be used in network security analysis.

  20. Photodiode Preamplifier for Laser Ranging With Weak Signals

    NASA Technical Reports Server (NTRS)

    Abramovici, Alexander; Chapsky, Jacob

    2007-01-01

    An improved preamplifier circuit has been designed for processing the output of an avalanche photodiode (APD) that is used in a high-resolution laser ranging system to detect laser pulses returning from a target. The improved circuit stands in contrast to prior such circuits in which the APD output current pulses are made to pass, variously, through wide-band or narrow-band load networks before preamplification. A major disadvantage of the prior wide-band load networks is that they are highly susceptible to noise, which degrades timing resolution. A major disadvantage of the prior narrow-band load networks is that they make it difficult to sample the amplitudes of the narrow laser pulses ordinarily used in ranging. In the improved circuit, a load resistor is connected to the APD output and its value is chosen so that the time constant defined by this resistance and the APD capacitance is large, relative to the duration of a laser pulse. The APD capacitance becomes initially charged by the pulse of current generated by a return laser pulse, so that the rise time of the load-network output is comparable to the duration of the return pulse. Thus, the load-network output is characterized by a fast-rising leading edge, which is necessary for accurate pulse timing. On the other hand, the resistance-capacitance combination constitutes a lowpass filter, which helps to suppress noise. The long time constant causes the load network output pulse to have a long shallow-sloping trailing edge, which makes it easy to sample the amplitude of the return pulse. The output of the load network is fed to a low-noise, wide-band amplifier. The amplifier must be a wide-band one in order to preserve the sharp pulse rise for timing. The suppression of noise and the use of a low-noise amplifier enable the ranging system to detect relatively weak return pulses.

  1. Broadband electrical impedance matching for piezoelectric ultrasound transducers.

    PubMed

    Huang, Haiying; Paramo, Daniel

    2011-12-01

    This paper presents a systematic method for designing broadband electrical impedance matching networks for piezoelectric ultrasound transducers. The design process involves three steps: 1) determine the equivalent circuit of the unmatched piezoelectric transducer based on its measured admittance; 2) design a set of impedance matching networks using a computerized Smith chart; and 3) establish the simulation model of the matched transducer to evaluate the gain and bandwidth of the impedance matching networks. The effectiveness of the presented approach is demonstrated through the design, implementation, and characterization of impedance matching networks for a broadband acoustic emission sensor. The impedance matching network improved the power of the acquired signal by 9 times.
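
    As a much-simplified illustration of the matching step, the sketch below sizes a lossless two-element L-network between two purely resistive terminations at a single frequency. The actual design works from the transducer's measured (complex) admittance and a computerized Smith chart over a band, so this is only the textbook special case; the function name and the example values are assumptions.

    import math

    def l_match(r_source, r_load, freq_hz):
        # Low-pass L-network: series L toward the low-resistance side,
        # shunt C across the high-resistance side; reactive parts ignored.
        r_low, r_high = sorted((r_source, r_load))
        q = math.sqrt(r_high / r_low - 1.0)
        x_series, x_shunt = q * r_low, r_high / q
        w = 2.0 * math.pi * freq_hz
        return {"series_L_henry": x_series / w, "shunt_C_farad": 1.0 / (w * x_shunt)}

    # example: match 50 ohm electronics to a 500 ohm resistive load at 300 kHz
    print(l_match(50.0, 500.0, 300e3))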

  2. Reliable Transport over SpaceWire for James Webb Space Telescope (JWST) Focal Plane Electronics (FPE) Network

    NASA Technical Reports Server (NTRS)

    Rakow, Glenn; Schnurr, Richard; Dailey, Christopher; Shakoorzadeh, Kamdin

    2003-01-01

    NASA's James Webb Space Telescope (JWST) faces difficult technical and budgetary challenges to overcome before its scheduled launch in 2010. The Integrated Science Instrument Module (ISIM) shares these challenges. The major challenge addressed in this paper is the data network used to collect, process, compress, and store infrared data. A total of 114 Mbps of raw information must be collected from 19 sources and delivered to the two redundant data processing units across a twenty-meter deployed, thermally restricted interface. Further, data must be transferred to the solid-state recorder and the spacecraft. The JWST detectors are kept at cryogenic temperatures to obtain the sensitivity necessary to measure faint energy sources. The Focal Plane Electronics (FPE) that sample the detector, generate packets from the samples, and transmit these packets to the processing electronics must dissipate little power in order to help keep the detectors at these cold temperatures. Separating the low-powered front-end electronics from the higher-powered processing electronics, and using a simple high-speed protocol to transmit the detector data, minimizes the power dissipation near the detectors. Low Voltage Differential Signaling (LVDS) drivers were considered an obvious choice for the physical layer because of their high speed and low power. The mechanical restriction on the number of cables across the thermal interface forces the image packets to be concentrated on two high-speed links. These links connect the many image packet sources, the Focal Plane Electronics (FPE) located near the cryogenic detectors, to the processing electronics on the spacecraft structure. From 12 to 10,000 seconds of raw data are processed to make up an image, and various algorithms integrate the pixel data. Loss of commands to configure the detectors, as well as loss of the science data itself, may cause inefficiencies in the use of the telescope that are unacceptable given the high cost of the observatory. This combination of requirements necessitates a redundant, fault-tolerant, high-speed, low-mass, low-power network with a low bit error rate (1E-9 to 1E-12). The ISIM systems team performed many studies of the various network architectures that meet these requirements. The architecture selected uses the SpaceWire protocol, with a new transport and network layer added to implement end-to-end reliable transport. The network and reliable transport mechanism must be implemented in hardware because of the high average information rate and the restriction on the ability of the detectors to buffer data due to power and size restrictions. This network and transport mechanism was designed to be compatible with existing SpaceWire links and routers so that existing equipment and designs may be leveraged. The transport layer specification is being coordinated with the European Space Agency (ESA) SpaceWire Working Group and the Consultative Committee for Space Data Systems (CCSDS) P1K Standard Onboard Interface (SOIF) panel, with the intent of developing a standard for reliable transport over SpaceWire. Changes to the protocol presented are likely since negotiations are ongoing with these groups. A block of RTL VHDL that implements a multi-port SpaceWire router with an external user interface will be developed and integrated with an existing SpaceWire link design. The external user interface will be the local interface that sources and sinks packets onto and off of the network (Figure 3).
The external user interface implements the network and transport layers and handles acknowledgements and retries of packets for reliable transport over the network. Because the design is written in RTL, it may be ported to any technology but will initially be targeted to the new Actel Accelerator series (AX) part. Each link will run at 160 Mbps, and the power will be about 0.165 W per link, worst case, in the Actel AX.

  3. Integration of Steady-State and Temporal Gene Expression Data for the Inference of Gene Regulatory Networks

    PubMed Central

    Wang, Yi Kan; Hurley, Daniel G.; Schnell, Santiago; Print, Cristin G.; Crampin, Edmund J.

    2013-01-01

    We develop a new regression algorithm, cMIKANA, for inference of gene regulatory networks from combinations of steady-state and time-series gene expression data. Using simulated gene expression datasets to assess the accuracy of reconstructing gene regulatory networks, we show that steady-state and time-series data sets can successfully be combined to identify gene regulatory interactions using the new algorithm. Inferring gene networks from combined data sets was found to be advantageous when using noisy measurements collected with either lower sampling rates or a limited number of experimental replicates. We illustrate our method by applying it to a microarray gene expression dataset from human umbilical vein endothelial cells (HUVECs) which combines time series data from treatment with growth factor TNF and steady state data from siRNA knockdown treatments. Our results suggest that the combination of steady-state and time-series datasets may provide better prediction of RNA-to-RNA interactions, and may also reveal biological features that cannot be identified from dynamic or steady state information alone. Finally, we consider the experimental design of genomics experiments for gene regulatory network inference and show that network inference can be improved by incorporating steady-state measurements with time-series data. PMID:23967277
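
    As a generic sketch of how steady-state and time-series measurements can be combined in one regression (an illustration of the idea only, not the cMIKANA algorithm itself), assume a linear model dx/dt = Ax, approximate derivatives by finite differences for the time-series rows, and set the derivative to zero for the steady-state rows:

```python
import numpy as np

# Generic sketch: infer an interaction matrix A for dx/dt = A x by stacking
# (i) finite-difference equations from time-series data and
# (ii) zero-derivative equations from steady-state data,
# then solving a single least-squares problem. Not the cMIKANA algorithm.
def infer_grn(ts_data, dt, ss_data):
    """ts_data: (T, n) time-series expression; ss_data: (S, n) steady states."""
    dxdt = np.diff(ts_data, axis=0) / dt            # (T-1, n) derivative estimates
    X_ts = ts_data[:-1]                             # predictors for the time-series rows
    X = np.vstack([X_ts, ss_data])                  # stack both experiment types
    Y = np.vstack([dxdt, np.zeros_like(ss_data)])   # steady state: dx/dt = 0
    A_T, *_ = np.linalg.lstsq(X, Y, rcond=None)     # solve X A^T ~ Y
    return A_T.T                                    # A[i, j]: effect of gene j on gene i

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ts = np.cumsum(rng.normal(size=(50, 5)), axis=0)    # toy time series
    ss = rng.normal(size=(10, 5))                       # toy steady states
    print(infer_grn(ts, dt=1.0, ss_data=ss).shape)      # (5, 5)
```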

  4. A robust sound perception model suitable for neuromorphic implementation.

    PubMed

    Coath, Martin; Sheik, Sadique; Chicca, Elisabetta; Indiveri, Giacomo; Denham, Susan L; Wennekers, Thomas

    2013-01-01

    We have recently demonstrated the emergence of dynamic feature sensitivity through exposure to formative stimuli in a real-time neuromorphic system implementing a hybrid analog/digital network of spiking neurons. This network, inspired by models of auditory processing in mammals, includes several mutually connected layers with distance-dependent transmission delays and learning in the form of spike timing dependent plasticity, which effects stimulus-driven changes in the network connectivity. Here we present results that demonstrate that the network is robust to a range of variations in the stimulus pattern, such as are found in naturalistic stimuli and neural responses. This robustness is a property critical to the development of realistic, electronic neuromorphic systems. We analyze the variability of the response of the network to "noisy" stimuli which allows us to characterize the acuity in information-theoretic terms. This provides an objective basis for the quantitative comparison of networks, their connectivity patterns, and learning strategies, which can inform future design decisions. We also show, using stimuli derived from speech samples, that the principles are robust to other challenges, such as variable presentation rate, that would have to be met by systems deployed in the real world. Finally we demonstrate the potential applicability of the approach to real sounds.
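
    The stimulus-driven connectivity changes mentioned above rely on spike-timing-dependent plasticity. A minimal, generic pair-based STDP rule is sketched below; the amplitudes and time constants are hypothetical, and the paper's neuromorphic hardware uses its own variant.

```python
import math

# Generic pair-based STDP rule (illustrative only). The weight change depends
# on the timing difference between a post- and a pre-synaptic spike.
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes (hypothetical)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms (hypothetical)

def stdp_dw(t_pre, t_post):
    dt = t_post - t_pre
    if dt >= 0:   # pre before post -> potentiate
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    else:         # post before pre -> depress
        return -A_MINUS * math.exp(dt / TAU_MINUS)

# Example: a causal pairing (pre at 10 ms, post at 15 ms) strengthens the synapse.
w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0)
print(round(w, 4))
```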

  5. Low-molecular-weight gelators: elucidating the principles of gelation based on gelator solubility and a cooperative self-assembly model.

    PubMed

    Hirst, Andrew R; Coates, Ian A; Boucheteau, Thomas R; Miravet, Juan F; Escuder, Beatriu; Castelletto, Valeria; Hamley, Ian W; Smith, David K

    2008-07-16

    This paper highlights the key role played by solubility in influencing gelation and demonstrates that many facets of the gelation process depend on this vital parameter. In particular, we rationalize thermal stability (Tgel) and minimum gelation concentration (MGC) values of small-molecule gelators in terms of the solubility and cooperative self-assembly of gelator building blocks. By employing a van't Hoff analysis of solubility data, determined from simple NMR measurements, we are able to generate Tcalc values that reflect the calculated temperature for complete solubilization of the networked gelator. The concentration dependence of Tcalc allows the previously difficult-to-rationalize "plateau-region" thermal stability values to be elucidated in terms of gelator molecular design. This is demonstrated for a family of four gelators with lysine units attached to each end of an aliphatic diamine, with different peripheral groups (Z or Boc) in different locations on the periphery of the molecule. By tuning the peripheral protecting groups of the gelators, the solubility of the system is modified, which in turn controls the saturation point of the system and hence controls the concentration at which network formation takes place. We report that the critical concentration (Ccrit) of gelator incorporated into the solid-phase sample-spanning network within the gel is invariant of gelator structural design. However, because some systems have higher solubilities, they are less effective gelators and require the application of higher total concentrations to achieve gelation, hence shedding light on the role of the MGC parameter in gelation. Furthermore, gelator structural design also modulates the level of cooperative self-assembly through solubility effects, as determined by applying a cooperative binding model to NMR data. Finally, the effect of gelator chemical design on the spatial organization of the networked gelator was probed by small-angle neutron and X-ray scattering (SANS/SAXS) on the native gel, and a tentative self-assembly model was proposed.
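
    The van't Hoff treatment referred to above can be summarized in generic notation (symbols here are illustrative rather than copied from the paper): fitting ln S against 1/T yields dissolution enthalpy and entropy, from which the temperature for complete dissolution of a given total gelator concentration follows.

```latex
% Van't Hoff analysis of gelator solubility (generic notation).
% S: gelator solubility; \Delta H_d, \Delta S_d: dissolution enthalpy and entropy.
\ln S = -\frac{\Delta H_d}{R\,T} + \frac{\Delta S_d}{R}
% T_{calc}: predicted temperature at which a total gelator concentration c_0
% is fully dissolved, i.e. complete solubilization of the networked gelator.
T_{\mathrm{calc}} = \frac{\Delta H_d}{\Delta S_d - R \ln c_0}
```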

  6. LAVA web-based remote simulation: enhancements for education and technology innovation

    NASA Astrophysics Data System (ADS)

    Lee, Sang Il; Ng, Ka Chun; Orimoto, Takashi; Pittenger, Jason; Horie, Toshi; Adam, Konstantinos; Cheng, Mosong; Croffie, Ebo H.; Deng, Yunfei; Gennari, Frank E.; Pistor, Thomas V.; Robins, Garth; Williamson, Mike V.; Wu, Bo; Yuan, Lei; Neureuther, Andrew R.

    2001-09-01

    The Lithography Analysis using Virtual Access (LAVA) web site at http://cuervo.eecs.berkeley.edu/Volcano/ has been enhanced with new optical and deposition applets, graphical infrastructure and linkage to parallel execution on networks of workstations. More than ten new graphical user interface applets have been designed to support education, illustrate novel concepts from research, and explore usage of parallel machines. These applets have been improved through feedback and classroom use. Over the last year LAVA provided industry and other academic communities with 1,300 sessions and 700 rigorous simulations per month among the SPLAT, SAMPLE2D, SAMPLE3D, TEMPEST, STORM, and BEBS simulators.

  7. Cellular network entropy as the energy potential in Waddington's differentiation landscape

    PubMed Central

    Banerji, Christopher R. S.; Miranda-Saavedra, Diego; Severini, Simone; Widschwendter, Martin; Enver, Tariq; Zhou, Joseph X.; Teschendorff, Andrew E.

    2013-01-01

    Differentiation is a key cellular process in normal tissue development that is significantly altered in cancer. Although molecular signatures characterising pluripotency and multipotency exist, there is, as yet, no single quantitative mark of a cellular sample's position in the global differentiation hierarchy. Here we adopt a systems view and consider the sample's network entropy, a measure of signaling pathway promiscuity, computable from a sample's genome-wide expression profile. We demonstrate that network entropy provides a quantitative, in-silico, readout of the average undifferentiated state of the profiled cells, recapitulating the known hierarchy of pluripotent, multipotent and differentiated cell types. Network entropy further exhibits dynamic changes in time course differentiation data, and in line with a sample's differentiation stage. In disease, network entropy predicts a higher level of cellular plasticity in cancer stem cell populations compared to ordinary cancer cells. Importantly, network entropy also allows identification of key differentiation pathways. Our results are consistent with the view that pluripotency is a statistical property defined at the cellular population level, correlating with intra-sample heterogeneity, and driven by the degree of signaling promiscuity in cells. In summary, network entropy provides a quantitative measure of a cell's undifferentiated state, defining its elevation in Waddington's landscape. PMID:24154593
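
    One plausible construction of such a network entropy measure (broadly following the idea of expression-weighted signaling promiscuity, but not the authors' exact formulation) is sketched below: edge weights of a fixed interaction network are modulated by the sample's expression profile, each node's outgoing weights are normalized to a probability distribution, and the degree-normalized local Shannon entropies are averaged.

```python
import numpy as np

# Illustrative network-entropy calculation for one sample: weight the edges of
# a protein interaction network by the sample's expression profile, turn each
# row into a probability distribution, and average the local Shannon entropies.
def network_entropy(adjacency, expression):
    """Average local Shannon entropy of an expression-weighted network."""
    W = adjacency * np.outer(expression, expression)   # expression-weighted edges
    n = W.shape[0]
    local_H = np.zeros(n)
    for i in range(n):
        row = W[i]
        total = row.sum()
        if total <= 0:
            continue
        p = row[row > 0] / total                       # outgoing "signaling" probabilities
        H = -(p * np.log(p)).sum()
        degree = (adjacency[i] > 0).sum()
        local_H[i] = H / np.log(degree) if degree > 1 else 0.0  # normalize to [0, 1]
    return local_H.mean()

if __name__ == "__main__":
    A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # toy interaction network
    x = np.array([1.0, 2.0, 0.5])                                  # toy expression profile
    print(round(network_entropy(A, x), 3))
```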

  8. Introduction of a new laboratory test: an econometric approach with the use of neural network analysis.

    PubMed

    Jabor, A; Vlk, T; Boril, P

    1996-04-15

    We designed a simulation model for the assessment of the financial risks involved when a new diagnostic test is introduced in the laboratory. The model is based on a neural network consisting of ten neurons and assumes that input entities can be assigned an appropriate uncertainty. Simulations are done on a 1-day interval basis. Risk analysis completes the model, and the financial effects are evaluated for a selected time period. The basic output of the simulation consists of total expenses and income during the simulation time, net present value of the project at the end of simulation, total number of control samples during simulation, total number of patients evaluated and total number of kits used.

  9. Social relationships among family caregivers: a cross-cultural comparison between Mexican Americans and non-Hispanic White caregivers.

    PubMed

    Phillips, Linda R; Crist, Janice

    2008-10-01

    Sometimes, clinicians assume caregivers in cultural groups believed to have large social networks and strong social support need little intervention from health professionals. This longitudinal study tests five hypotheses about the social relationships of Mexican American compared to non-Hispanic White caregivers and whether negative changes in social support affect perceived health. The sample includes 66 Mexican American and 92 non-Hispanic White caregivers. Findings show that social networks and social support are similar at baseline and similarly stable for 1 year. Negative changes in social support are correlated with poorer health perceptions. Findings underscore the importance of designing interventions that are culturally competent based on what the caregiver is experiencing rather than cultural stereotypes.

  10. Fabrication of triangular nanobeam waveguide networks in bulk diamond using single-crystal silicon hard masks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayn, I.; Mouradian, S.; Li, L.

    2014-11-24

    A scalable approach for integrated photonic networks in single-crystal diamond using triangular etching of bulk samples is presented. We describe designs of high quality factor (Q = 2.51 × 10^6) photonic crystal cavities with low mode volume (V_m = 1.062 × (λ/n)^3), which are connected via waveguides supported by suspension structures with predicted transmission loss of only 0.05 dB. We demonstrate the fabrication of these structures using transferred single-crystal silicon hard masks and angular dry etching, yielding photonic crystal cavities in the visible spectrum with measured quality factors in excess of Q = 3 × 10^3.

  11. A fully automated colorimetric sensing device using smartphone for biomolecular quantification

    NASA Astrophysics Data System (ADS)

    Dutta, Sibasish; Nath, Pabitra

    2017-03-01

    In the present work, the use of a smartphone for colorimetric quantification of biomolecules has been demonstrated. As a proof of concept, BSA protein and carbohydrate have been used as biomolecular samples. BSA protein and carbohydrate at different concentrations have been treated with Lowry's reagent and Anthrone's reagent, respectively. The change in color of the reagent-treated samples at different concentrations has been recorded with the camera of a smartphone in combination with a custom-designed optomechanical hardware attachment. This change in color has been correlated with the color channels of two different color models, namely RGB (Red Green Blue) and HSV (Hue Saturation Value). In addition, the change in color intensity has been correlated with the grayscale value for each imaged sample. A custom-designed Android app has been developed to quantify the biomolecular concentration and display the result on the phone itself. The obtained results have been compared with those of a standard spectrophotometer usually used for this purpose, and highly reliable data have been obtained with the designed sensor. The device is robust, portable, and low cost compared with its commercially available counterparts. The data obtained from the sensor can be transmitted anywhere in the world through the existing cellular network. It is envisioned that the designed sensing device will find a wide range of applications in analytical and bioanalytical sensing research.
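
    A minimal sketch of the image-analysis step is given below: average the RGB values over a region of interest, derive grayscale and HSV representations, and fit a linear calibration of a channel value against known concentrations. It uses only standard library calls (numpy, colorsys); the app's actual processing pipeline and the calibration values shown are assumptions for illustration.

```python
import colorsys
import numpy as np

# Minimal smartphone-colorimetry sketch: average the RGB values over a region
# of interest, derive grayscale and HSV features, and fit a straight-line
# calibration of a channel value against known concentrations.
def channel_features(rgb_roi):
    """rgb_roi: (H, W, 3) uint8 image patch of the reagent-treated sample."""
    r, g, b = rgb_roi.reshape(-1, 3).mean(axis=0) / 255.0
    gray = 0.299 * r + 0.587 * g + 0.114 * b           # luminance grayscale
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return {"R": r, "G": g, "B": b, "gray": gray, "H": h, "S": s, "V": v}

def fit_calibration(concentrations, channel_values):
    """Least-squares line: channel = slope * concentration + intercept."""
    slope, intercept = np.polyfit(concentrations, channel_values, deg=1)
    return slope, intercept

if __name__ == "__main__":
    # Toy calibration data for a hypothetical blue-channel response.
    conc = np.array([0.0, 0.5, 1.0, 2.0])           # mg/mL (hypothetical)
    blue = np.array([0.90, 0.72, 0.55, 0.21])
    m, c = fit_calibration(conc, blue)
    unknown_blue = 0.60
    print("estimated concentration:", (unknown_blue - c) / m)
```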

  12. Designing marine reserve networks for both conservation and fisheries management.

    PubMed

    Gaines, Steven D; White, Crow; Carr, Mark H; Palumbi, Stephen R

    2010-10-26

    Marine protected areas (MPAs) that exclude fishing have been shown repeatedly to enhance the abundance, size, and diversity of species. These benefits, however, mean little to most marine species, because individual protected areas typically are small. To meet the larger-scale conservation challenges facing ocean ecosystems, several nations are expanding the benefits of individual protected areas by building networks of protected areas. Doing so successfully requires a detailed understanding of the ecological and physical characteristics of ocean ecosystems and the responses of humans to spatial closures. There has been enormous scientific interest in these topics, and frameworks for the design of MPA networks for meeting conservation and fishery management goals are emerging. Persistent in the literature is the perception of an inherent tradeoff between achieving conservation and fishery goals. Through a synthetic analysis across these conservation and bioeconomic studies, we construct guidelines for MPA network design that reduce or eliminate this tradeoff. We present size, spacing, location, and configuration guidelines for designing networks that simultaneously can enhance biological conservation and reduce fishery costs or even increase fishery yields and profits. Indeed, in some settings, a well-designed MPA network is critical to the optimal harvest strategy. When reserves benefit fisheries, the optimal area in reserves is moderately large (mode ≈30%). Assessing network design principles is currently limited by the absence of empirical data from large-scale networks. Emerging networks will soon rectify this constraint.

  13. Hybrid Network Defense Model Based on Fuzzy Evaluation

    PubMed Central

    2014-01-01

    With sustained and rapid developments in the field of information technology, the issue of network security has become increasingly prominent. The theme of this study is network data security, with the test subject being a classified and sensitive network laboratory that belongs to the academic network. The analysis is based on the deficiencies and potential risks of the network's existing defense technology, characteristics of cyber attacks, and network security technologies. Subsequently, a distributed network security architecture using the technology of an intrusion prevention system is designed and implemented. In this paper, first, the overall design approach is presented. This design is used as the basis to establish a network defense model, an improvement over the traditional single-technology model that addresses the latter's inadequacies. Next, a distributed network security architecture is implemented, comprising a hybrid firewall, intrusion detection, virtual honeynet projects, and connectivity and interactivity between these three components. Finally, the proposed security system is tested. A statistical analysis of the test results verifies the feasibility and reliability of the proposed architecture. The findings of this study will potentially provide new ideas and stimuli for future designs of network security architecture. PMID:24574870

  14. Automated water analyser computer supported system (AWACSS) Part I: Project objectives, basic technology, immunoassay development, software design and networking.

    PubMed

    Tschmelak, Jens; Proll, Guenther; Riedt, Johannes; Kaiser, Joachim; Kraemmer, Peter; Bárzaga, Luis; Wilkinson, James S; Hua, Ping; Hole, J Patrick; Nudd, Richard; Jackson, Michael; Abuknesha, Ram; Barceló, Damià; Rodriguez-Mozaz, Sara; de Alda, Maria J López; Sacher, Frank; Stien, Jan; Slobodník, Jaroslav; Oswald, Peter; Kozmenko, Helena; Korenková, Eva; Tóthová, Lívia; Krascsenits, Zoltan; Gauglitz, Guenter

    2005-02-15

    A novel analytical system AWACSS (automated water analyser computer-supported system) based on immunochemical technology has been developed that can measure several organic pollutants at low nanogram per litre level in a single few-minutes analysis without any prior sample pre-concentration or pre-treatment steps. With the actual needs of water-sector managers in mind, related to the implementation of the Drinking Water Directive (DWD) (98/83/EC, 1998) and the Water Framework Directive (WFD) (2000/60/EC, 2000), drinking, ground, surface, and waste waters were the major media used for the evaluation of the system performance. The instrument was equipped with remote control and surveillance facilities. The system's software allows for internet-based networking between the measurement and control stations, global management, trend analysis, and early-warning applications. The experience of water laboratories has been utilised in the design of the instrument's hardware and software in order to make the system rugged and user-friendly. Several market surveys were conducted during the project to assess the applicability of the final system. A web-based AWACSS database was created for automated evaluation and storage of the obtained data in a format compatible with major databases of environmental organic pollutants in Europe. This first part gives the reader an overview of the aims and scope of the AWACSS project as well as details about basic technology, immunoassays, software, and networking developed and utilised within the research project. The second part reports on the system performance, first real sample measurements, and an international collaborative trial (inter-laboratory tests) to compare the biosensor with conventional analytical methods.

  15. Inventory of montane-nesting birds in the Arctic Network of National Parks, Alaska

    USGS Publications Warehouse

    Tibbitts, T.L.; Ruthrauff, D.R.; Gill, Robert E.; Handel, Colleen M.

    2006-01-01

    The Alaska Science Center of the U.S. Geological Survey conducted an inventory of birds in montane areas of the four northern parks in the Arctic Network of National Parks, Alaska. This effort represents the first comprehensive assessment of breeding range and habitat associations for the majority of avian species in the Arctic Network. Ultimately, these data provide a framework upon which to design future monitoring programs. A stratified random sampling design was used to select sample plots (n = 73 plots) that were allocated in proportion to the availability of ecological subsections. Point counts (n = 1,652) were conducted to quantify abundance, distribution, and habitat associations of birds. Field work occurred over three years (2001 to 2003) during two-week-long sessions in late May through early June that coincided with peak courtship activity of breeding birds. Totals of 53 species were recorded in Cape Krusenstern National Monument, 91 in Noatak National Preserve, 57 in Kobuk Valley National Park, and 96 in Gates of the Arctic National Park and Preserve. Substantial proportions of species in individual parks are considered species of conservation concern (18 to 26%) or species of stewardship responsibility of the land managers in the region (8 to 18%). The most commonly detected passerines on point counts included Redpoll spp. (Carduelis flammea and C. hornemanni), Savannah Sparrow (Passerculus sandwichensis), and American Tree Sparrow (Spizella arborea). The most numerous shorebirds were American Golden-Plover (Pluvialis dominica), Wilson’s Snipe (Gallinago delicata), and Whimbrel (Numenius phaeopus). Most species were detected at low rates, reflecting the low breeding densities (and/or low detectabilities) of birds in the montane Arctic. Suites of species were associated with particular ranges of elevation and showed strong associations with particular habitat types.
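
    The stratified design with proportional allocation described above can be illustrated generically as follows; stratum names, areas, and plot counts are hypothetical and are not the study's actual values.

```python
import random

# Generic illustration of stratified random sampling with proportional
# allocation: plots are allocated to strata (e.g., ecological subsections)
# in proportion to each stratum's share of the study area, then drawn at
# random within each stratum. All numbers below are made up.
def allocate_plots(stratum_areas, total_plots):
    """Proportional allocation; rounding may shift the total by one plot."""
    total_area = sum(stratum_areas.values())
    return {s: round(total_plots * a / total_area) for s, a in stratum_areas.items()}

def draw_plots(candidates_by_stratum, allocation, seed=1):
    """Simple random sample of plot IDs within each stratum."""
    rng = random.Random(seed)
    return {s: sorted(rng.sample(candidates_by_stratum[s], k))
            for s, k in allocation.items()}

if __name__ == "__main__":
    areas = {"alpine tundra": 420.0, "riparian shrub": 160.0, "dwarf shrub": 230.0}  # km^2, hypothetical
    alloc = allocate_plots(areas, total_plots=20)
    candidates = {s: list(range(100)) for s in areas}   # 100 candidate plot IDs per stratum
    print(alloc)
    print(draw_plots(candidates, alloc))
```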

  16. Size, age, and habitat determine effectiveness of Palau's Marine Protected Areas

    PubMed Central

    Golbuu, Yimnang; Ballesteros, Enric; Caselle, Jennifer E.; Gouezo, Marine; Olsudong, Dawnette; Sala, Enric

    2017-01-01

    Palau has a rich heritage of conservation that has evolved from the traditional moratoria on fishing, or “bul”, to more western Marine Protected Areas (MPAs), while still retaining elements of customary management and tenure. In 2003, the Palau Protected Areas Network (PAN) was created to conserve Palau’s unique biodiversity and culture, and is the country’s mechanism for achieving the goals of the Micronesia Challenge (MC), an initiative to conserve ≥30% of near-shore marine resources within the region by 2020. The PAN comprises a network of numerous MPAs within Palau that vary in age, size, level of management, and habitat, which provide an excellent opportunity to test hypotheses concerning MPA design and function using multiple discrete sampling units. Our sampling design provided a robust space-for-time comparison to evaluate the relative influence of potential drivers of MPA efficacy. Our results showed that no-take MPAs had, on average, nearly twice the biomass of resource fishes (i.e. those important commercially, culturally, or for subsistence) compared to nearby unprotected areas. Biomass of non-resource fishes showed no differences between no-take areas and areas open to fishing. The most striking difference between no-take MPAs and unprotected areas was the more than 5-fold greater biomass of piscivorous fishes in the MPAs compared to fished areas. The most important determinants of no-take MPA success in conserving resource fish biomass were MPA size and years of protection. Habitat and distance from shore had little effect on resource fish biomass. The extensive network of MPAs in Palau likely provides important conservation and tourism benefits to the Republic, and may also provide fisheries benefits by protecting spawning aggregation sites, and potentially through adult spillover. PMID:28358910

  17. Successful Working Memory Processes and Cerebellum in an Elderly Sample: A Neuropsychological and fMRI Study

    PubMed Central

    Luis, Elkin O.; Arrondo, Gonzalo; Vidorreta, Marta; Martínez, Martin; Loayza, Francis; Fernández-Seara, María A.; Pastor, María A.

    2015-01-01

    Background: Imaging studies help to understand the evolution of key cognitive processes related to aging, such as working memory (WM). This study aimed to test three hypotheses in older adults. First, that the brain activation pattern associated with WM processes in the elderly during successful low-load tasks is located in posterior sensory and associative areas; second, that the prefrontal and parietal cortex and basal ganglia should be more active during high-demand tasks; third, that cerebellar activations are related to high-demand cognitive tasks and have a specific lateralization depending on the condition. Methods: We used a neuropsychological assessment with functional magnetic resonance imaging and a core N-back paradigm design that was maintained across the combination of four conditions of stimuli and two memory loads in a sample of twenty elderly subjects. Results: During low loads, activations were located in the visual ventral network. In high loads, there was an involvement of the basal ganglia and cerebellum in addition to the frontal and parietal cortices. Moreover, we detected an executive control role of the cerebellum in a relatively symmetric fronto-parietal network. Nevertheless, this network showed a predominantly left lateralization in parietal regions associated presumably with an overuse of verbal storage strategies. The differential activations between conditions were stimuli-dependent and were located in sensory areas. Conclusion: Successful WM processes in the elderly population are accompanied by an activation pattern that involves cerebellar regions working together with a fronto-parietal network. PMID:26132286

  18. Improved Cost-Base Design of Water Distribution Networks using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Moradzadeh Azar, Foad; Abghari, Hirad; Taghi Alami, Mohammad; Weijs, Steven

    2010-05-01

    Population growth and the progressive extension of urbanization across Iran are increasing the demand for primary needs, of which water is the most essential for human life. Meeting this need requires the design and construction of water distribution networks, which impose enormous costs on the national budget. Any reduction in these costs allows more of society to be served at lower cost, so municipal councils need to maximize the benefits, or minimize the expenditures, of their investments. Achieving this depends on cost-optimization techniques in the engineering design. This paper presents optimization models based on a genetic algorithm (GA) to find the minimum design cost of the water distribution network of Mahabad City (northwestern Iran). By designing two models and comparing the resulting costs, the capability of the GA was assessed: the GA-based model found optimum pipe diameters that reduce the design cost of the network. Results show that designing the water distribution network with the genetic algorithm can reduce project costs by at least 7% compared with the classical model. Keywords: Genetic Algorithm, Optimum Design of Water Distribution Network, Mahabad City, Iran.
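
    A toy sketch of the GA approach is given below: each individual is a vector of discrete pipe diameters, and fitness is pipe cost plus a penalty when a crude head-loss surrogate exceeds an allowed limit. All coefficients, diameters, and lengths are hypothetical; this illustrates the method only and is not the authors' Mahabad model, which would use a proper hydraulic solver.

```python
import random

# Toy genetic algorithm for pipe sizing. Individuals are lists of discrete pipe
# diameters; fitness = pipe cost + penalty when a crude head-loss surrogate
# exceeds the allowed value. All numbers are hypothetical.
DIAMETERS = [100, 150, 200, 250, 300]                          # mm
COST_PER_M = {100: 20, 150: 32, 200: 50, 250: 75, 300: 105}    # currency units per metre
PIPE_LENGTHS = [400, 250, 600, 350]                            # m, one per pipe
MAX_HEADLOSS = 30.0

def headloss(diams):
    # Crude surrogate: loss grows with length and shrinks steeply with diameter.
    return sum(2500.0 * L / d**2.5 for L, d in zip(PIPE_LENGTHS, diams))

def cost(diams):
    return sum(COST_PER_M[d] * L for d, L in zip(diams, PIPE_LENGTHS))

def fitness(diams):
    penalty = 1e4 * max(0.0, headloss(diams) - MAX_HEADLOSS)
    return cost(diams) + penalty

def evolve(pop_size=40, generations=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(DIAMETERS) for _ in PIPE_LENGTHS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                    # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(PIPE_LENGTHS))
            child = a[:cut] + b[cut:]                     # one-point crossover
            if rng.random() < 0.2:                        # mutation
                child[rng.randrange(len(child))] = rng.choice(DIAMETERS)
            children.append(child)
        pop = parents + children
    best = min(pop, key=fitness)
    return best, cost(best), headloss(best)

if __name__ == "__main__":
    print(evolve())
```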

  19. Cell segmentation in histopathological images with deep learning algorithms by utilizing spatial relationships.

    PubMed

    Hatipoglu, Nuh; Bilgin, Gokhan

    2017-10-01

    In many computerized methods for cell detection, segmentation, and classification in digital histopathology that have recently emerged, the task of cell segmentation remains a chief problem for image processing in designing computer-aided diagnosis (CAD) systems. In research and diagnostic studies on cancer, pathologists can use CAD systems as second readers to analyze high-resolution histopathological images. Since cell detection and segmentation are critical for cancer grade assessments, cellular and extracellular structures should primarily be extracted from histopathological images. In response, we sought to identify a useful cell segmentation approach with histopathological images that uses not only prominent deep learning algorithms (i.e., convolutional neural networks, stacked autoencoders, and deep belief networks), but also spatial relationships, information of which is critical for achieving better cell segmentation results. To that end, we collected cellular and extracellular samples from histopathological images by windowing in small patches with various sizes. In experiments, the segmentation accuracies of the methods used improved as the window sizes increased due to the addition of local spatial and contextual information. Once we compared the effects of training sample size and influence of window size, results revealed that the deep learning algorithms, especially convolutional neural networks and partly stacked autoencoders, performed better than conventional methods in cell segmentation.
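
    The patch ("window") extraction step used to build cellular and extracellular training samples can be sketched as follows; window sizes, annotations, and labels are illustrative only.

```python
import numpy as np

# Sketch of patch ("window") extraction from a histopathology image: crop a
# square window around each annotated pixel and keep its cellular/extracellular
# label. Window size and annotations below are hypothetical.
def extract_patches(image, centers, labels, window=32):
    half = window // 2
    patches, kept_labels = [], []
    for (row, col), label in zip(centers, labels):
        top, left = row - half, col - half
        if top < 0 or left < 0 or top + window > image.shape[0] or left + window > image.shape[1]:
            continue                        # skip windows that fall off the image
        patches.append(image[top:top + window, left:left + window])
        kept_labels.append(label)
    return np.stack(patches), np.array(kept_labels)

if __name__ == "__main__":
    img = np.random.randint(0, 255, size=(256, 256, 3), dtype=np.uint8)  # toy slide tile
    centers = [(40, 60), (128, 128), (5, 5)]        # (row, col) annotated pixels
    labels = [1, 0, 1]                              # 1 = cellular, 0 = extracellular
    X, y = extract_patches(img, centers, labels, window=64)
    print(X.shape, y)                               # (2, 64, 64, 3) [1 0]
```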

  20. Measuring social networks in British primary schools through scientific engagement

    PubMed Central

    Conlan, A. J. K.; Eames, K. T. D.; Gage, J. A.; von Kirchbach, J. C.; Ross, J. V.; Saenz, R. A.; Gog, J. R.

    2011-01-01

    Primary schools constitute a key risk group for the transmission of infectious diseases, concentrating great numbers of immunologically naive individuals at high densities. Despite this, very little is known about the social patterns of mixing within a school, which are likely to contribute to disease transmission. In this study, we present a novel approach where scientific engagement was used as a tool to access school populations and measure social networks between young (4–11 years) children. By embedding our research project within enrichment activities to older secondary school (13–15) children, we could exploit the existing links between schools to achieve a high response rate for our study population (around 90% in most schools). Social contacts of primary school children were measured through self-reporting based on a questionnaire design, and analysed using the techniques of social network analysis. We find evidence of marked social structure and gender assortativity within and between classrooms in the same school. These patterns have been previously reported in smaller studies, but to our knowledge no study has attempted to exhaustively sample entire school populations. Our innovative approach facilitates access to a vitally important (but difficult to sample) epidemiological sub-group. It provides a model whereby scientific communication can be used to enhance, rather than merely complement, the outcomes of research. PMID:21047859
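
    As an illustration of the kind of assortativity measurement used in such analyses, the snippet below computes a gender assortativity coefficient with networkx on a small made-up contact graph; it is not the study's data.

```python
import networkx as nx

# Illustrative gender assortativity on a toy contact network. Positive values
# indicate that children preferentially name contacts of the same gender.
G = nx.Graph()
G.add_nodes_from([1, 2, 3, 4], gender="girl")
G.add_nodes_from([5, 6, 7, 8], gender="boy")
G.add_edges_from([(1, 2), (2, 3), (3, 4), (5, 6), (6, 7), (7, 8), (4, 5)])

r = nx.attribute_assortativity_coefficient(G, "gender")
print(f"gender assortativity: {r:.2f}")   # close to +1 for strongly assortative mixing
```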

  1. Effectiveness of school network for childhood obesity prevention (SNOCOP) in primary schools of Saraburi Province, Thailand.

    PubMed

    Banchonhattakit, Pannee; Tanasugarn, Chanuantong; Pradipasen, Mandhana; Miner, Kathleen R; Nityasuddhi, Dechavudh

    2009-07-01

    This research was designed to test the effectiveness of a school network for childhood obesity prevention (SNOCOP) in primary schools, a program that aimed to improve student knowledge, attitude, and intention towards obesity prevention, as well as food consumption behavior. A quasi-experimental pretest-posttest time series study was conducted. By 2-stage stratified sampling, 180 students from 6 schools were assigned to the intervention group and 195 students from 6 schools to the control group in Saraburi Province, Thailand, in 2006-2007. In addition, thirty-one participants (school administrators, teachers, parents, and community members from the six schools) formed the social network initiating the intervention. The schoolchildren in the intervention group improved their eating behavior, knowledge, attitude, and intention towards obesity-preventive behavior. The six schools of the intervention group changed school policies and school activities aiming to reduce the proportion of obesity among their students. No such activities could be observed in the control group. These findings suggest that the School-Social Network of Childhood Obesity Prevention program is an effective means of preventing childhood obesity.

  2. Machine learning vortices at the Kosterlitz-Thouless transition

    NASA Astrophysics Data System (ADS)

    Beach, Matthew J. S.; Golubeva, Anna; Melko, Roger G.

    2018-01-01

    Efficient and automated classification of phases from minimally processed data is one goal of machine learning in condensed-matter and statistical physics. Supervised algorithms trained on raw samples of microstates can successfully detect conventional phase transitions via learning a bulk feature such as an order parameter. In this paper, we investigate whether neural networks can learn to classify phases based on topological defects. We address this question on the two-dimensional classical XY model which exhibits a Kosterlitz-Thouless transition. We find significant feature engineering of the raw spin states is required to convincingly claim that features of the vortex configurations are responsible for learning the transition temperature. We further show a single-layer network does not correctly classify the phases of the XY model, while a convolutional network easily performs classification by learning the global magnetization. Finally, we design a deep network capable of learning vortices without feature engineering. We demonstrate the detection of vortices does not necessarily result in the best classification accuracy, especially for lattices of less than approximately 1000 spins. For larger systems, it remains a difficult task to learn vortices.
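
    The vortex-based feature engineering contrasted with end-to-end learning in this study can be illustrated by computing plaquette winding numbers directly from raw spin angles; the sketch below is a generic implementation for a periodic 2D XY configuration, not code from the paper.

```python
import numpy as np

# Compute vortex charges on each plaquette of a 2D XY spin configuration:
# sum the wrapped angle differences around the four bonds of a plaquette;
# a winding of +/- 2*pi marks a vortex or antivortex.
def wrap(d):
    """Map angle differences to the interval [-pi, pi)."""
    return (d + np.pi) % (2 * np.pi) - np.pi

def vortex_charges(theta):
    """Winding number (in units of 2*pi) around each plaquette; theta is (L, L), periodic."""
    a = theta
    b = np.roll(theta, -1, axis=1)               # right neighbour
    c = np.roll(theta, (-1, -1), axis=(0, 1))    # diagonal neighbour
    d = np.roll(theta, -1, axis=0)               # neighbour along the other axis
    circulation = wrap(b - a) + wrap(c - b) + wrap(d - c) + wrap(a - d)
    return np.rint(circulation / (2 * np.pi)).astype(int)

if __name__ == "__main__":
    L = 16
    y, x = np.mgrid[0:L, 0:L].astype(float)
    theta = np.arctan2(y - L / 2 + 0.5, x - L / 2 + 0.5)   # single vortex near the centre
    q = vortex_charges(theta)
    print("net charge on the torus:", q.sum())             # periodic wrapping adds a compensating charge
    print("charge at the central plaquette:", q[L // 2 - 1, L // 2 - 1])  # +/-1 depending on orientation
```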

  3. A MAC Protocol to Support Monitoring of Underwater Spaces.

    PubMed

    Santos, Rodrigo; Orozco, Javier; Ochoa, Sergio F; Meseguer, Roc; Eggly, Gabriel; Pistonesi, Marcelo F

    2016-06-27

    Underwater sensor networks are becoming an important field of research because of their ever-increasing application scope. Examples of their application areas are environmental and pollution monitoring (mainly oil spills), oceanographic data collection, support for submarine geolocalization, ocean sampling and early tsunami alerts. The challenge of performing underwater communications is well known, given that radio signals are useless in this medium and a wired solution is too expensive. Therefore, the sensors in these networks transmit their information using acoustic signals that propagate well under water. This type of data transmission brings not only opportunities but also several challenges to the implementation of these networks, e.g., in terms of energy consumption, data transmission and signal interference. In order to help advance the knowledge in the design and implementation of these networks for monitoring underwater spaces, this paper proposes a MAC protocol for acoustic communications between the nodes, based on a self-organized time division multiple access mechanism. The proposal was evaluated using simulations of a real monitoring scenario, and the obtained results are highly encouraging.

  4. A Key Pre-Distribution Scheme Based on µ-PBIBD for Enhancing Resilience in Wireless Sensor Networks.

    PubMed

    Yuan, Qi; Ma, Chunguang; Yu, Haitao; Bian, Xuefen

    2018-05-12

    Many key pre-distribution (KPD) schemes based on combinatorial design have been proposed for secure communication in wireless sensor networks (WSNs). Due to the complexity of constructing the combinatorial design, it is infeasible to generate key rings using the corresponding combinatorial design in large-scale deployments of WSNs. In this paper, we present a definition of a new combinatorial design, termed “µ-partially balanced incomplete block design (µ-PBIBD)”, which is a refinement of the partially balanced incomplete block design (PBIBD), and then describe a 2-D construction of µ-PBIBD which is mapped to KPD in WSNs. Our approach is of simple construction and provides strong key connectivity but poor network resilience. To improve the network resilience of KPD based on 2-D µ-PBIBD, we propose a KPD scheme based on 3-D Ex-µ-PBIBD, which extends the µ-PBIBD construction from 2-D space to 3-D space. The Ex-µ-PBIBD KPD scheme improves network scalability and resilience while having better key connectivity. Theoretical analysis and comparison with related schemes show that the key pre-distribution scheme based on Ex-µ-PBIBD provides high network resilience and better key scalability, while achieving a trade-off between network resilience and network connectivity.

  5. A Key Pre-Distribution Scheme Based on µ-PBIBD for Enhancing Resilience in Wireless Sensor Networks

    PubMed Central

    Yuan, Qi; Ma, Chunguang; Yu, Haitao; Bian, Xuefen

    2018-01-01

    Many key pre-distribution (KPD) schemes based on combinatorial design have been proposed for secure communication in wireless sensor networks (WSNs). Due to the complexity of constructing the combinatorial design, it is infeasible to generate key rings using the corresponding combinatorial design in large-scale deployments of WSNs. In this paper, we present a definition of a new combinatorial design, termed “µ-partially balanced incomplete block design (µ-PBIBD)”, which is a refinement of the partially balanced incomplete block design (PBIBD), and then describe a 2-D construction of µ-PBIBD which is mapped to KPD in WSNs. Our approach is of simple construction and provides strong key connectivity but poor network resilience. To improve the network resilience of KPD based on 2-D µ-PBIBD, we propose a KPD scheme based on 3-D Ex-µ-PBIBD, which extends the µ-PBIBD construction from 2-D space to 3-D space. The Ex-µ-PBIBD KPD scheme improves network scalability and resilience while having better key connectivity. Theoretical analysis and comparison with related schemes show that the key pre-distribution scheme based on Ex-µ-PBIBD provides high network resilience and better key scalability, while achieving a trade-off between network resilience and network connectivity. PMID:29757244

  6. Discrepancies between HIV prevention communication attitudes and actual conversations about HIV testing within social and sexual networks of African American men who have sex with men.

    PubMed

    Tobin, Karin Elizabeth; Yang, Cui; Sun, Christina; Spikes, Pilgrim; Latkin, Carl Asher

    2014-04-01

    Promoting communication among African American men who have sex with men (AA MSM) and their social networks about HIV testing is an avenue for altering HIV prevention social norms. This study examined the attitudes of AA MSM on talking with peers about HIV testing and characteristics of their network members with whom they have these conversations. Data came from a cross-sectional survey of 226 AA MSM who were 18 years or older and self-reported sex with another male in the prior 90 days. Participants completed an inventory to characterize network members with whom they had conversations about HIV testing and HIV status. Most of the sample reported that it was important/very important to talk to male friends about HIV (85%) and that they were comfortable/very comfortable talking with their friends about sexual behaviors (84%). However, a small proportion of the social network had been talked to by the participant about HIV testing (14%). Among sexual networks, 58% had been talked to about their HIV status, and this was positively associated with main and casual partner type compared with partners with whom money or drugs were exchanged. Findings suggest that positive attitudes about communication may be necessary but not sufficient for actual conversations to occur. Designing interventions that increase communication with social networks is warranted.

  7. Displayed Trees Do Not Determine Distinguishability Under the Network Multispecies Coalescent

    PubMed Central

    Zhu, Sha; Degnan, James H.

    2017-01-01

    Recent work in estimating species relationships from gene trees has included inferring networks assuming that past hybridization has occurred between species. Probabilistic models using the multispecies coalescent can be used in this framework for likelihood-based inference of both network topologies and parameters, including branch lengths and hybridization parameters. A difficulty for such methods is that it is not always clear whether, or to what extent, networks are identifiable—that is, whether there could be two distinct networks that lead to the same distribution of gene trees. For cases in which incomplete lineage sorting occurs in addition to hybridization, we demonstrate a new representation of the species network likelihood that expresses the probability distribution of the gene tree topologies as a linear combination of gene tree distributions given a set of species trees. This representation makes it clear that in some cases in which two distinct networks give the same distribution of gene trees when sampling one allele per species, the two networks can be distinguished theoretically when multiple individuals are sampled per species. This result means that network identifiability is not only a function of the trees displayed by the networks but also depends on allele sampling within species. We additionally give an example in which two networks that display exactly the same trees can be distinguished from their gene trees even when there is only one lineage sampled per species. PMID:27780899
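
    In generic notation (not the paper's exact symbols), the linear-combination representation described above can be written as a mixture over a finite set of species trees derived from the network:

```latex
% Gene tree topology distribution under a species network N, expressed as a
% mixture over species trees S_1, ..., S_k with weights w_i determined by the
% hybridization parameters (generic notation for illustration):
P(T = t \mid N) \;=\; \sum_{i=1}^{k} w_i \, P(T = t \mid S_i),
\qquad w_i \ge 0, \quad \sum_{i=1}^{k} w_i = 1 .
```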

  8. Evaluation of passive samplers for the collection of dissolved organic matter in streams.

    PubMed

    Warner, Daniel L; Oviedo-Vargas, Diana; Royer, Todd V

    2015-01-01

    Traditional sampling methods for dissolved organic matter (DOM) in streams limit opportunities for long-term studies due to time and cost constraints. Passive DOM samplers were constructed following a design proposed previously which utilizes diethylaminoethyl (DEAE) cellulose as a sampling medium, and they were deployed throughout a temperate stream network in Indiana. Two deployments of the passive samplers were conducted, during which grab samples were frequently collected for comparison. Differences in DOM quality between sites and sampling methods were assessed using several common optical analyses. The analyses revealed significant differences in optical properties between sampling methods, with the passive samplers preferentially collecting terrestrial, humic-like DOM. We assert that the differences in DOM composition from each sampling method were caused by preferential binding of complex humic compounds to the DEAE cellulose in the passive samplers. Nonetheless, the passive samplers may provide a cost-effective, integrated sample of DOM in situations where the bulk DOM pool is composed mainly of terrestrial, humic-like compounds.

  9. Effect of Industry Sponsorship on Dental Restorative Trials.

    PubMed

    Schwendicke, F; Tu, Y-K; Blunck, U; Paris, S; Göstemeyer, G

    2016-01-01

    Industry sponsorship was found to potentially introduce bias into clinical trials. We assessed the effects of industry sponsorship on the design, comparator choice, and findings of randomized controlled trials on dental restorative materials. A systematic review was performed via MEDLINE, CENTRAL, and EMBASE. Randomized trials on dental restorative and adhesive materials published 2005 to 2015 were included. The design of sponsored and nonsponsored trials was compared statistically (risk of bias, treatment indication, setting, transferability, sample size). Comparator choice and network geometry of sponsored and nonsponsored trials were assessed via network analysis. Material performance rankings in different trial types were estimated via Bayesian network meta-analysis. Overall, 114 studies were included (15,321 restorations in 5,232 patients). We found 21 and 41 (18% and 36%) trials being clearly or possibly industry sponsored, respectively. Trial design of sponsored and nonsponsored trials did not significantly differ for most assessed items. Sponsored trials evaluated restorations of load-bearing cavities significantly more often than nonsponsored trials, had longer follow-up periods, and showed significantly increased risk of detection bias. Regardless of sponsorship status, comparisons were mainly performed within material classes. The proportion of trials comparing against gold standard restorative or adhesive materials did not differ between trial types. If ranked for performance according to the need to re-treat (best: least re-treatments), most material combinations were ranked similarly in sponsored and nonsponsored trials. The effect of industry sponsorship on dental restorative trials seems limited. © International & American Associations for Dental Research 2015.

  10. On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.

    PubMed

    Vegué, Marina; Perin, Rodrigo; Roxin, Alex

    2017-08-30

    The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in their global properties. This apparent paradox is a consequence of the small numbers of simultaneously recorded neurons in experiment: when inferred via small sample sizes, many networks may be indistinguishable despite being globally distinct. We develop a connectivity measure that successfully classifies networks even when estimated locally with a few neurons at a time. We show that data from rat cortex is consistent with a network in which the likelihood of a connection between neurons depends on spatial distance and on nonspatial, asymmetric clustering. Copyright © 2017 the authors 0270-6474/17/378498-13$15.00/0.

  11. Self-organization in neural networks - Applications in structural optimization

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat; Fu, B.; Berke, Laszlo

    1993-01-01

    The present paper discusses the applicability of ART (Adaptive Resonance Theory) networks, and the Hopfield and Elastic networks, in problems of structural analysis and design. A characteristic of these network architectures is the ability to classify patterns presented as inputs into specific categories. The categories may themselves represent distinct procedural solution strategies. The paper shows how this property can be adapted in the structural analysis and design problem. A second application is the use of Hopfield and Elastic networks in optimization problems. Of particular interest are problems characterized by the presence of discrete and integer design variables. The parallel computing architecture that is typical of neural networks is shown to be effective in such problems. Results of preliminary implementations in structural design problems are also included in the paper.

  12. Bluetooth telemedicine processor for multichannel biomedical signal transmission via mobile cellular networks.

    PubMed

    Rasid, Mohd Fadlee A; Woodward, Bryan

    2005-03-01

    One of the emerging issues in m-Health is how best to exploit the mobile communications technologies that are now almost globally available. The challenge is to produce a system to transmit a patient's biomedical signals directly to a hospital for monitoring or diagnosis, using an unmodified mobile telephone. The paper focuses on the design of a processor, which samples signals from sensors on the patient. It then transmits digital data over a Bluetooth link to a mobile telephone that uses the General Packet Radio Service. The modular design adopted is intended to provide a "future-proofed" system, whose functionality may be upgraded by modifying the software.

  13. Mars Relays Satellite Orbit Design Considerations for Global Support of Robotic Surface Missions

    NASA Technical Reports Server (NTRS)

    Hastrup, Rolf; Cesarone, Robert; Cook, Richard; Knocke, Phillip; McOmber, Robert

    1993-01-01

    This paper discusses orbit design considerations for Mars relay satellite (MRS) support of globally distributed robotic surface missions. The orbit results reported in this paper are derived from studies of MRS support for two types of Mars robotic surface missions: 1) the Mars Environmental Survey (MESUR) mission, which in its current definition would deploy a global network of up to 16 small landers, and 2) a Small Mars Sample Return (SMSR) mission, which included four globally distributed landers, each with a return stage and one or two rovers, and up to four additional sets of lander/rover elements in an extended mission phase.

  14. A Smart Sensor Web for Ocean Observation: Integrated Acoustics, Satellite Networking, and Predictive Modeling

    NASA Astrophysics Data System (ADS)

    Arabshahi, P.; Chao, Y.; Chien, S.; Gray, A.; Howe, B. M.; Roy, S.

    2008-12-01

    In many areas of Earth science, including climate change research, there is a need for near real-time integration of data from heterogeneous and spatially distributed sensors, in particular in-situ and space-based sensors. The data integration, as provided by a smart sensor web, enables numerous improvements, namely, 1) adaptive sampling for more efficient use of expensive space-based sensing assets, 2) higher fidelity information gathering from data sources through integration of complementary data sets, and 3) improved sensor calibration. The specific purpose of the smart sensor web development presented here is to provide for adaptive sampling and calibration of space-based data via in-situ data. Our ocean-observing smart sensor web presented herein is composed of both mobile and fixed underwater in-situ ocean sensing assets and Earth Observing System (EOS) satellite sensors providing larger-scale sensing. An acoustic communications network forms a critical link in the web between the in-situ and space-based sensors and facilitates adaptive sampling and calibration. After an overview of primary design challenges, we report on the development of various elements of the smart sensor web. These include (a) a cable-connected mooring system with a profiler under real-time control with inductive battery charging; (b) a glider with integrated acoustic communications and broadband receiving capability; (c) satellite sensor elements; (d) an integrated acoustic navigation and communication network; and (e) a predictive model via the Regional Ocean Modeling System (ROMS). Results from field experiments, including an upcoming one in Monterey Bay (October 2008) using live data from NASA's EO-1 mission in a semi closed-loop system, together with ocean models from ROMS, are described. Plans for future adaptive sampling demonstrations using the smart sensor web are also presented.

  15. Final Report: Sampling-Based Algorithms for Estimating Structure in Big Data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matulef, Kevin Michael

    The purpose of this project was to develop sampling-based algorithms to discover hidden structure in massive data sets. Inferring structure in large data sets is an increasingly common task in many critical national security applications. These data sets come from myriad sources, such as network traffic, sensor data, and data generated by large-scale simulations. They are often so large that traditional data mining techniques are time consuming or even infeasible. To address this problem, we focus on a class of algorithms that do not compute an exact answer, but instead use sampling to compute an approximate answer using fewer resources. The particular class of algorithms that we focus on are streaming algorithms, so called because they are designed to handle high-throughput streams of data. Streaming algorithms have only a small amount of working storage - much less than the size of the full data stream - so they must necessarily use sampling to approximate the correct answer. We present two results: * A streaming algorithm called HyperHeadTail, which estimates the degree distribution of a graph (i.e., the distribution of the number of connections for each node in a network). The degree distribution is a fundamental graph property, but prior work on estimating the degree distribution in a streaming setting was impractical for many real-world applications. We improve upon prior work by developing an algorithm that can handle streams with repeated edges, and graph structures that evolve over time. * An algorithm for the task of maintaining a weighted subsample of items in a stream, when the items must be sampled according to their weight, and the weights are dynamically changing. To our knowledge, this is the first such algorithm designed for dynamically evolving weights. We expect it may be useful as a building block for other streaming algorithms on dynamic data sets.
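
    The second result concerns maintaining a weighted subsample of a stream. For static weights this can be done with the classic Efraimidis-Spirakis "A-Res" weighted reservoir scheme, sketched below; the report's contribution, handling dynamically changing weights, is not reproduced here.

```python
import heapq
import random

# Weighted reservoir sampling (Efraimidis-Spirakis A-Res) for *static* weights:
# each item gets key u**(1/w) with u ~ Uniform(0, 1); keep the k largest keys.
# The report's algorithm extends this idea to weights that change over time.
def weighted_reservoir(stream, k, seed=0):
    rng = random.Random(seed)
    heap = []                                   # min-heap of (key, item)
    for item, weight in stream:
        if weight <= 0:
            continue
        key = rng.random() ** (1.0 / weight)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

if __name__ == "__main__":
    stream = [(f"edge-{i}", 1.0 + (i % 5)) for i in range(10_000)]   # (item, weight) pairs
    print(weighted_reservoir(stream, k=5))
```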

  16. Analysis of environmental contamination resulting from catastrophic incidents: part 1. Building and sustaining capacity in laboratory networks.

    PubMed

    Magnuson, Matthew; Ernst, Hiba; Griggs, John; Fitz-James, Schatzi; Mapp, Latisha; Mullins, Marissa; Nichols, Tonya; Shah, Sanjiv; Smith, Terry; Hedrick, Elizabeth

    2014-11-01

    Catastrophic incidents, such as natural disasters, terrorist attacks, and industrial accidents, can occur suddenly and have high impact. However, they often occur at such a low frequency and in unpredictable locations that planning for the management of the consequences of a catastrophe can be difficult. For those catastrophes that result in the release of contaminants, the ability to analyze environmental samples is critical and contributes to the resilience of affected communities. Analyses of environmental samples are needed to make appropriate decisions about the course of action to restore the area affected by the contamination. Environmental samples range from soil, water, and air to vegetation, building materials, and debris. In addition, processes used to decontaminate any of these matrices may also generate wastewater and other materials that require analyses to determine the best course for proper disposal. This paper summarizes activities and programs the United States Environmental Protection Agency (USEPA) has implemented to ensure capability and capacity for the analysis of contaminated environmental samples following catastrophic incidents. USEPA's focus has been on building capability for a wide variety of contaminant classes and on ensuring national laboratory capacity for potential surges in the numbers of samples that could quickly exhaust the resources of local communities. USEPA's efforts have been designed to ensure a strong and resilient laboratory infrastructure in the United States to support communities as they respond to contamination incidents of any magnitude. The efforts include not only addressing technical issues related to the best-available methods for chemical, biological, and radiological contaminants, but also include addressing the challenges of coordination and administration of an efficient and effective response. Laboratory networks designed for responding to large scale contamination incidents can be sustained by applying their resources during incidents of lesser significance, for special projects, and for routine surveillance and monitoring as part of ongoing activities of the environmental laboratory community. Published by Elsevier Ltd.

  17. Research of G3-PLC net self-organization processes in the NS-3 modeling framework

    NASA Astrophysics Data System (ADS)

    Pospelova, Irina; Chebotayev, Pavel; Klimenko, Aleksey; Myakochin, Yuri; Polyakov, Igor; Shelupanov, Alexander; Zykov, Dmitriy

    2017-11-01

    When modern infocommunication networks are designed, several data transfer channels are often combined to improve the quality and robustness of communication. Communication systems based on more than one data transfer channel are called heterogeneous communication systems. For the design of a heterogeneous network, the optimal solution is the use of mesh technology. Mesh technology ensures message delivery to the destination under unpredictable interference conditions in each of the two channels. Accordingly, one of the high-priority problems when mesh networks are designed is the choice of a routing protocol. An important design stage for any computer network is modeling. Modeling allows us to design several different variants of design solutions and to compute all necessary functional specifications for each of them, thereby reducing the costs of the physical realization of a network. In this article, research on dynamic routing in the NS-3 simulation modeling framework is presented. The article contains an evaluation of the applicability of simulation modeling to the problem of heterogeneous network design. The modeling results may afterwards be used for the physical realization of networks of this kind.

  18. Design and evaluation of a wireless sensor network based aircraft strength testing system.

    PubMed

    Wu, Jian; Yuan, Shenfang; Zhou, Genyuan; Ji, Sai; Wang, Zilong; Wang, Yang

    2009-01-01

    The verification of aerospace structures, including full-scale fatigue and static test programs, is essential for structure strength design and evaluation. However, the current overall ground strength testing systems employ a large number of wires for communication among sensors and data acquisition facilities. The centralized data processing makes test programs lack efficiency and intelligence. Wireless sensor network (WSN) technology might be expected to address the limitations of cable-based aeronautical ground testing systems. This paper presents a wireless sensor network based aircraft strength testing (AST) system design and its evaluation on a real aircraft specimen. In this paper, a miniature, high-precision, and shock-proof wireless sensor node is designed for multi-channel strain gauge signal conditioning and monitoring. A cluster-star network topology protocol and application layer interface are designed in detail. To verify the functionality of the designed wireless sensor network for strength testing capability, a multi-point WSN based AST system is developed for static testing of a real aircraft undercarriage. Based on the designed wireless sensor nodes, the wireless sensor network is deployed to gather, process, and transmit strain gauge signals and monitor results under different static test loads. This paper shows the efficiency of the wireless sensor network based AST system, compared to a conventional AST system.

  19. Design and Evaluation of a Wireless Sensor Network Based Aircraft Strength Testing System

    PubMed Central

    Wu, Jian; Yuan, Shenfang; Zhou, Genyuan; Ji, Sai; Wang, Zilong; Wang, Yang

    2009-01-01

    The verification of aerospace structures, including full-scale fatigue and static test programs, is essential for structure strength design and evaluation. However, the current overall ground strength testing systems employ a large number of wires for communication among sensors and data acquisition facilities. The centralized data processing makes test programs lack efficiency and intelligence. Wireless sensor network (WSN) technology might be expected to address the limitations of cable-based aeronautical ground testing systems. This paper presents a wireless sensor network based aircraft strength testing (AST) system design and its evaluation on a real aircraft specimen. In this paper, a miniature, high-precision, and shock-proof wireless sensor node is designed for multi-channel strain gauge signal conditioning and monitoring. A cluster-star network topology protocol and application layer interface are designed in detail. To verify the functionality of the designed wireless sensor network for strength testing capability, a multi-point WSN based AST system is developed for static testing of a real aircraft undercarriage. Based on the designed wireless sensor nodes, the wireless sensor network is deployed to gather, process, and transmit strain gauge signals and monitor results under different static test loads. This paper shows the efficiency of the wireless sensor network based AST system, compared to a conventional AST system. PMID:22408521
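
    As a purely hypothetical sketch (node count, channel names, and message format are assumptions, not details from the paper), the fragment below illustrates the cluster-star idea: leaf nodes sample several strain-gauge channels and a cluster head packs them into a single report for the base station.

      import json, statistics

      # Assumed topology: one cluster head serving a few strain-gauge nodes,
      # each node sampling two gauge channels (values in microstrain).
      readings = {
          "node-1": {"ch0": 512.4, "ch1": 498.7},
          "node-2": {"ch0": 530.1, "ch1": 505.9},
          "node-3": {"ch0": 521.6, "ch1": 500.2},
      }

      def cluster_head_report(load_step, node_readings):
          """Aggregate per-node samples into a single report sent upstream."""
          all_values = [v for chans in node_readings.values() for v in chans.values()]
          return json.dumps({
              "load_step": load_step,
              "nodes": node_readings,
              "mean_microstrain": round(statistics.mean(all_values), 1),
              "max_microstrain": max(all_values),
          })

      # One report per static test load step, forwarded to the base station.
      print(cluster_head_report(load_step=3, node_readings=readings))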

  20. An Investigation of Synchrony in Transport Networks

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.; Alexandrov, Natalia M.; Holroyd, Michael J.

    2007-01-01

    The cumulative degree distributions of transport networks, such as air transportation networks and respiratory neuronal networks, follow power laws. The significance of power laws with respect to other network performance measures, such as throughput and synchronization, remains an open question. Evolving methods for the analysis and design of air transportation networks must address network performance in the face of increasing demands and the need to contain and control local network disturbances, such as congestion. Toward this end, we investigate functional relationships that govern the performance of transport networks; for example, the links between the first nontrivial eigenvalue of a network's Laplacian matrix (a quantitative measure of network synchronizability) and other global network parameters. In particular, among networks with a fixed degree distribution and fixed network assortativity (a measure of a network's preference to attach nodes based on a similarity or difference), those with a small eigenvalue are shown to be poor synchronizers, to have much longer shortest paths, and to have greater clustering than those with a large eigenvalue. A simulation of a respiratory network adds data to our investigation. This study is a beginning step in developing metrics and design variables for the analysis and active design of air transport networks.
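
    For a minimal illustration of the quantity discussed above, the first nontrivial (second-smallest) Laplacian eigenvalue of a graph can be computed as follows; the example graph is an arbitrary scale-free network, not one of the paper's transport networks.

      import networkx as nx
      import numpy as np

      # Arbitrary scale-free example graph (power-law-like degree distribution).
      G = nx.barabasi_albert_graph(n=50, m=2, seed=42)

      L = nx.laplacian_matrix(G).toarray()
      eigenvalues = np.sort(np.linalg.eigvalsh(L))
      lambda2 = eigenvalues[1]   # first nontrivial eigenvalue; small values -> poor synchronizers

      print("algebraic connectivity (lambda_2):", round(float(lambda2), 4))
      # networkx also exposes this directly (requires SciPy):
      print(round(nx.algebraic_connectivity(G), 4))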

  1. Structural covariance networks are coupled to expression of genes enriched in supragranular layers of the human cortex.

    PubMed

    Romero-Garcia, Rafael; Whitaker, Kirstie J; Váša, František; Seidlitz, Jakob; Shinn, Maxwell; Fonagy, Peter; Dolan, Raymond J; Jones, Peter B; Goodyer, Ian M; Bullmore, Edward T; Vértes, Petra E

    2018-05-01

    Complex network topology is characteristic of many biological systems, including anatomical and functional brain networks (connectomes). Here, we first constructed a structural covariance network (SCN) from MRI measures of cortical thickness in 296 healthy volunteers, aged 14-24 years. Next, we designed a new algorithm for matching sample locations from the Allen Brain Atlas to the nodes of the SCN. Subsequently, we used this to define transcriptomic brain networks by estimating gene co-expression between pairs of cortical regions. Finally, we explored the hypothesis that transcriptional networks and structural MRI connectomes are coupled. The transcriptional brain network (TBN) and the structural covariance network were correlated across connection weights and showed qualitatively similar complex topological properties: assortativity, small-worldness, modularity, and a rich club. In both networks, the weight of an edge was inversely related to the anatomical (Euclidean) distance between regions. There were differences between networks in degree and distance distributions: the transcriptional network had a less fat-tailed degree distribution and a less positively skewed distance distribution than the SCN. However, cortical areas connected to each other within modules of the SCN had significantly higher levels of whole genome co-expression than expected by chance. Nodes connected in the SCN had especially high levels of expression and co-expression of a human supragranular enriched (HSE) gene set that has been specifically located to supragranular layers of human cerebral cortex and is known to be important for large-scale, long-distance cortico-cortical connectivity. This coupling of brain transcriptome and connectome topologies was largely but not entirely accounted for by the common constraint of physical distance on both networks. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
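
    A bare-bones sketch of the edge-weight coupling test described above (random matrices stand in for the real SCN and transcriptional networks, and simple Pearson correlation over the upper triangle is an assumption, not the study's exact procedure):

      import numpy as np

      rng = np.random.default_rng(0)
      n_regions = 30                      # placeholder for the cortical parcellation size

      # Stand-ins for the structural covariance and gene co-expression matrices.
      scn = rng.random((n_regions, n_regions)); scn = (scn + scn.T) / 2
      tbn = 0.5 * scn + 0.5 * rng.random((n_regions, n_regions)); tbn = (tbn + tbn.T) / 2

      # Correlate connection weights over the upper triangle (excluding the diagonal).
      iu = np.triu_indices(n_regions, k=1)
      r = np.corrcoef(scn[iu], tbn[iu])[0, 1]
      print("edge-weight correlation between the two networks:", round(r, 3))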

  2. Design of a network for concurrent message passing systems

    NASA Astrophysics Data System (ADS)

    Song, Paul Y.

    1988-08-01

    We describe the design of the network design frame (NDF), a self-timed routing chip for a message-passing concurrent computer. The NDF uses a partitioned data path, low-voltage output drivers, and a distributed token-passing arbiter to provide a bandwidth of 450 Mbits/sec into the network. Wormhole routing and bidirectional virtual channels are used to provide low-latency communications: less than 2 μs of latency to deliver a 216-bit message across the diameter of a 1K-node mesh-connected machine. To support concurrent software systems, the NDF provides two logical networks, one for user messages and one for system messages. The two networks share the same set of physical wires. To facilitate the development of network nodes, the NDF is a design frame: the NDF circuitry is integrated into the pad frame of a chip, leaving the center of the chip uncommitted. We define an analytic framework in which to study the effects of network size, network buffering capacity, bidirectional channels, and traffic on this class of networks. The response of the network to various combinations of these parameters is obtained through extensive simulation of the network model. Through simulation, we are able to observe the macro behavior of the network as opposed to the micro behavior of the NDF routing controller.

  3. Research on key technology of planning and design for AC/DC hybrid distribution network

    NASA Astrophysics Data System (ADS)

    Shen, Yu; Wu, Guilian; Zheng, Huan; Deng, Junpeng; Shi, Pengjia

    2018-04-01

    With the increasing demand for DC generation and DC loads and the development of DC technology, the integration of AC and DC distribution networks will become an important form of the future distribution network. In this paper, key technologies for the planning and design of AC/DC hybrid distribution networks are proposed, including the selection of AC and DC voltage level series, the design of typical grid structures, and a comprehensive evaluation method for planning schemes. The research results provide some ideas and directions for the future development of AC/DC hybrid distribution networks.

  4. Spreading activation in nonverbal memory networks.

    PubMed

    Foster, Paul S; Wakefield, Candias; Pryjmak, Scott; Roosa, Katelyn M; Branch, Kaylei K; Drago, Valeria; Harrison, David W; Ruff, Ronald

    2017-09-01

    Theories of spreading activation primarily involve semantic memory networks. However, the existence of separate verbal and visuospatial memory networks suggests that spreading activation may also occur in visuospatial memory networks. The purpose of the present investigation was to explore this possibility. Specifically, this study sought to create and describe the design frequency corpus and to determine whether this measure of visuospatial spreading activation was related to right hemisphere functioning and spreading activation in verbal memory networks. We used word frequencies taken from the Controlled Oral Word Association Test and design frequencies taken from the Ruff Figural Fluency Test as measures of verbal and visuospatial spreading activation, respectively. Average word and design frequencies were then correlated with measures of left and right cerebral functioning. The results indicated that a significant relationship exists between performance on a test of right posterior functioning (Block Design) and design frequency. A significant negative relationship also exists between spreading activation in semantic memory networks and design frequency. Based on our findings, the hypotheses were supported. Further research will need to be conducted to examine whether spreading activation exists in visuospatial memory networks as well as the parameters that might modulate this spreading activation, such as the influence of neurotransmitters.

  5. Re-Evaluation of the AASHTO-Flexible Pavement Design Equation with Neural Network Modeling

    PubMed Central

    Tiğdemir, Mesut

    2014-01-01

    Here we establish that equivalent single-axle load values can be estimated using artificial neural networks without the complex design equation of the American Association of State Highway and Transportation Officials (AASHTO). More importantly, we find that the neural network model yields coefficients that make it possible to obtain the actual load values from the AASHTO design values. Thus, the design traffic values that might result in deterioration can be calculated better with the neural network model than with the AASHTO design equation. The artificial neural network method is used for this purpose. The existing AASHTO flexible pavement design equation does not currently predict the pavement performance of the Strategic Highway Research Program (Long Term Pavement Performance studies) test sections very accurately, and typically over-estimates the number of equivalent single axle loads needed to cause a measured loss of the present serviceability index. Here we aim to demonstrate that the proposed neural network model can represent the load values data more accurately than the AASHTO formula. It is concluded that the neural network may be an appropriate tool for the development of data-based, nonparametric models of pavement performance. PMID:25397962

  6. Re-evaluation of the AASHTO-flexible pavement design equation with neural network modeling.

    PubMed

    Tiğdemir, Mesut

    2014-01-01

    Here we establish that equivalent single-axle load values can be estimated using artificial neural networks without the complex design equation of the American Association of State Highway and Transportation Officials (AASHTO). More importantly, we find that the neural network model yields coefficients that make it possible to obtain the actual load values from the AASHTO design values. Thus, the design traffic values that might result in deterioration can be calculated better with the neural network model than with the AASHTO design equation. The artificial neural network method is used for this purpose. The existing AASHTO flexible pavement design equation does not currently predict the pavement performance of the Strategic Highway Research Program (Long Term Pavement Performance studies) test sections very accurately, and typically over-estimates the number of equivalent single axle loads needed to cause a measured loss of the present serviceability index. Here we aim to demonstrate that the proposed neural network model can represent the load values data more accurately than the AASHTO formula. It is concluded that the neural network may be an appropriate tool for the development of data-based, nonparametric models of pavement performance.
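
    As an illustrative, hypothetical sketch of the general approach (the feature set, data, and network size below are invented, not those of the study), a small neural network regressor can be fit to pavement records and evaluated on held-out data:

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)

      # Hypothetical records: [structural number, subgrade modulus (ksi), serviceability loss]
      X = rng.uniform([2.0, 3.0, 0.5], [6.0, 12.0, 2.5], size=(200, 3))
      # Synthetic target standing in for log10(ESAL); NOT the AASHTO equation itself.
      y = 1.2 * X[:, 0] + 0.15 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(0, 0.1, 200)

      model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
      model.fit(X[:150], y[:150])                       # train on part of the records
      print("held-out R^2:", round(model.score(X[150:], y[150:]), 3))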

  7. IndeCut evaluates performance of network motif discovery algorithms.

    PubMed

    Ansariola, Mitra; Megraw, Molly; Koslicki, David

    2018-05-01

    Genomic networks represent a complex map of molecular interactions which are descriptive of the biological processes occurring in living cells. Identifying the small over-represented circuitry patterns in these networks helps generate hypotheses about the functional basis of such complex processes. Network motif discovery is a systematic way of achieving this goal. However, a reliable network motif discovery outcome requires generating random background networks which are the result of a uniform and independent graph sampling method. To date, there has been no method to numerically evaluate whether any network motif discovery algorithm performs as intended on realistically sized datasets; thus it was not possible to assess the validity of the resulting network motifs. In this work, we present IndeCut, the first method to date that characterizes network motif finding algorithm performance in terms of uniform sampling on realistically sized networks. We demonstrate that it is critical to use IndeCut prior to running any network motif finder for two reasons. First, IndeCut indicates the number of samples needed for a tool to produce an outcome that is both reproducible and accurate. Second, IndeCut allows users to choose the tool that generates samples in the most independent fashion for their network of interest among many available options. The open source software package is available at https://github.com/megrawlab/IndeCut. Supplementary data are available at Bioinformatics online.
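
    For context, a common way to build the random background networks mentioned above is degree-preserving edge swapping; the sketch below uses networkx's double_edge_swap (the graph and swap counts are arbitrary, and this is not the uniformity-checking procedure implemented by IndeCut itself).

      import networkx as nx

      # Arbitrary example network; a real analysis would load the genomic network of interest.
      G = nx.barabasi_albert_graph(100, 3, seed=1)
      original_degrees = dict(G.degree())

      # One randomized background sample: swap edges while preserving every node's degree.
      R = G.copy()
      nx.double_edge_swap(R, nswap=10 * R.number_of_edges(), max_tries=10**5, seed=1)

      assert dict(R.degree()) == original_degrees     # degree sequence unchanged
      shared = {frozenset(e) for e in G.edges()} & {frozenset(e) for e in R.edges()}
      print("edges shared with the original after rewiring:", len(shared))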

  8. A Novel Cross-Layer Routing Protocol Based on Network Coding for Underwater Sensor Networks.

    PubMed

    Wang, Hao; Wang, Shilian; Bu, Renfei; Zhang, Eryang

    2017-08-08

    Underwater wireless sensor networks (UWSNs) have attracted increasing attention in recent years because of their numerous applications in ocean monitoring, resource discovery and tactical surveillance. However, the design of reliable and efficient transmission and routing protocols is a challenge due to the low acoustic propagation speed and complex channel environment in UWSNs. In this paper, we propose a novel cross-layer routing protocol based on network coding (NCRP) for UWSNs, which utilizes network coding and cross-layer design to greedily forward data packets to sink nodes efficiently. The proposed NCRP takes full advantage of multicast transmission and decodes packets jointly with encoded packets received from multiple potential nodes across the entire network. The transmission power is optimized in our design to extend the life cycle of the network. Moreover, we design a real-time routing maintenance protocol to update the route when inefficient relay nodes are detected. Substantial simulations in an underwater environment with Network Simulator 3 (NS-3) show that NCRP significantly improves the network performance in terms of energy consumption, end-to-end delay and packet delivery ratio compared with other routing protocols for UWSNs.
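
    As a toy illustration of the underlying network-coding idea (a generic XOR example, not the NCRP protocol itself), a relay can broadcast one coded packet from which a sink that already holds either original packet can recover the other:

      def xor_packets(a: bytes, b: bytes) -> bytes:
          """Bitwise XOR of two equal-length packets (a real protocol would pad)."""
          return bytes(x ^ y for x, y in zip(a, b))

      p1 = b"sensor-A:17.3C  "      # packet overheard from node A
      p2 = b"sensor-B:1024hPa"      # packet overheard from node B

      coded = xor_packets(p1, p2)   # relay broadcasts a single coded packet

      # A sink that already holds p1 recovers p2 from the coded packet (and vice versa).
      assert xor_packets(coded, p1) == p2
      assert xor_packets(coded, p2) == p1
      print("decoded:", xor_packets(coded, p1))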

  9. Formulating a Theoretical Framework for Assessing Network Loads for Effective Deployment in Network-Centric Operations and Warfare

    DTIC Science & Technology

    2008-11-01

    is particularly important in order to design a network that is realistically deployable. The goal of this project is the design of a theoretical ... framework to assess and predict the effectiveness and performance of networks and their loads.

  10. Experimental high-speed network

    NASA Astrophysics Data System (ADS)

    McNeill, Kevin M.; Klein, William P.; Vercillo, Richard; Alsafadi, Yasser H.; Parra, Miguel V.; Dallas, William J.

    1993-09-01

    Many existing local area networking protocols currently applied in medical imaging were originally designed for relatively low-speed, low-volume networking. These protocols utilize small packet sizes appropriate for text-based communication. Local area networks of this type typically provide raw bandwidth under 125 MHz. These older network technologies are not optimized for the low-delay, high-data-traffic environment of a totally digital radiology department. Some current implementations use point-to-point links when greater bandwidth is required. However, the use of point-to-point communications for a total digital radiology department network presents many disadvantages. This paper describes work on an experimental multi-access local area network called XFT. The work includes the protocol specification and the design and implementation of network interface hardware and software. The protocol specifies the Physical and Data Link layers (OSI layers 1 & 2) for a fiber-optic based token ring providing a raw bandwidth of 500 MHz. The protocol design and the implementation of the XFT interface hardware include many features that optimize image transfer and provide flexibility for future enhancements, including: a modular hardware design supporting easy portability to a variety of host system buses, a versatile message buffer design providing 16 MB of memory, and the capability to extend the raw bandwidth of the network to 3.0 GHz.

  11. Proof of Concept in Disrupted Tactical Networking

    DTIC Science & Technology

    2017-09-01

    because of the risk of detection. In this study, we design projectile-based mesh networking prototypes as one potential type of short-living network...reader with a background in systems theory. This study is designed using systems theory and uses systems theory as a lens through which to observe

  12. Reconfigurable Flight Control Design using a Robust Servo LQR and Radial Basis Function Neural Networks

    NASA Technical Reports Server (NTRS)

    Burken, John J.

    2005-01-01

    This viewgraph presentation reviews the use of a Robust Servo Linear Quadratic Regulator (LQR) and a Radial Basis Function (RBF) Neural Network in reconfigurable flight control designs that adapt to an aircraft part failure. The method combines a robust LQR servomechanism design with model reference adaptive control and RBF neural networks. During the failure, the LQR servomechanism behaved well, and the use of the neural networks improved the tracking.
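
    A minimal numpy sketch of a radial basis function network of the kind mentioned above (the Gaussian centers, widths, and target signal are arbitrary choices for illustration, not the flight-control design itself):

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(-3, 3, 100)
      y = np.tanh(x) + 0.05 * rng.normal(size=x.size)   # stand-in for a signal to approximate

      centers = np.linspace(-3, 3, 9)                   # RBF centers (assumed)
      width = 0.8                                       # common Gaussian width (assumed)

      def rbf_features(inputs):
          """Gaussian RBF activations for each input/center pair."""
          return np.exp(-((inputs[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

      Phi = rbf_features(x)
      weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # output-layer weights by least squares
      approx = rbf_features(x) @ weights
      print("max approximation error:", float(np.abs(approx - y).max()))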

  13. Redundant Design in Interdependent Networks

    PubMed Central

    2016-01-01

    Modern infrastructure networks are often coupled together and thus can be modeled as interdependent networks. Overload and interdependence effects make interdependent networks more fragile when suffering from attacks. Existing research has primarily concentrated on the cascading failure process of interdependent networks without load, or on the robustness of an isolated network with load. Only limited research has been done on the cascading failure process caused by overload in interdependent networks. Redundant design is a primary approach to enhance the reliability and robustness of a system. In this paper, we propose two redundant methods, node back-up and dependency redundancy, and the experimental results indicate that the two measures are effective and inexpensive. Two detailed models of redundant design are introduced based on the non-linear load-capacity model. Based on the attributes and historical failure distribution of nodes, we introduce three static selection strategies (Random-based, Degree-based, and Initial load-based) and a dynamic strategy, HFD (historical failure distribution), to identify which nodes should be given back-ups with priority. In addition, we consider the cost and efficiency of different redundant proportions to determine the best proportion with maximal enhancement and minimal cost. Experiments on interdependent networks demonstrate that the combination of HFD and dependency redundancy is an effective and preferred measure to implement redundant design on interdependent networks. The results suggest that the redundant design proposed in this paper can permit construction of highly robust interactive networked systems. PMID:27764174
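
    A compact sketch of an overload cascade under a non-linear load-capacity rule (the example network, the capacity parameters, and the local load-redistribution rule below are assumptions for illustration; they are not the paper's exact interdependent-network model):

      import networkx as nx

      G = nx.barabasi_albert_graph(200, 2, seed=0)
      ALPHA, BETA = 0.2, 1.2                       # assumed tolerance parameters

      load = {v: float(G.degree(v)) for v in G}    # initial load proportional to degree
      capacity = {v: (1 + ALPHA) * load[v] ** BETA for v in G}   # non-linear capacity

      failed = {max(load, key=load.get)}           # attack the most loaded node
      frontier = set(failed)
      while frontier:
          nxt = set()
          for v in frontier:
              alive = [u for u in G.neighbors(v) if u not in failed]
              for u in alive:                      # redistribute the failed node's load locally
                  load[u] += load[v] / len(alive)
                  if load[u] > capacity[u]:
                      nxt.add(u)
          failed |= nxt
          frontier = nxt

      print("nodes failed in the cascade:", len(failed), "of", G.number_of_nodes())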

  14. Assessment of water chemistry, habitat, and benthic macroinvertebrates at selected stream-quality monitoring sites in Chester County, Pennsylvania, 1998-2000

    USGS Publications Warehouse

    Reif, Andrew G.

    2004-01-01

    Biological, chemical, and habitat data have been collected from a network of sites in Chester County, Pa., from 1970 to 2003 to assess stream quality. Forty sites in 6 major stream basins were sampled between 1998 and 2000. Biological data were used to determine levels of impairment in the benthic-macroinvertebrate community in Chester County streams and relate the impairment, in conjunction with chemical and habitat data, to overall stream quality. Biological data consisted of benthic-macroinvertebrate samples that were collected annually in the fall. Water-chemistry samples were collected and instream habitat was assessed in support of the biological sampling. Most sites in the network were designated as nonimpacted or slightly impacted by human activities or extreme climatic conditions on the basis of biological-metric analysis of benthic-macroinvertebrate data. Impacted sites were affected by factors such as nutrient enrichment, erosion and sedimentation, point discharges, and droughts and floods. Streams in the Schuylkill River, Delaware River, and East Branch Brandywine Creek Basins in Chester County generally had low nutrient concentrations, except in areas affected by wastewater-treatment discharges, and stream habitat that was affected by erosion. Streams in the West Branch Brandywine, Christina, Big Elk, and Octoraro Creek Basins in Chester County generally had elevated nutrient concentrations and streambottom habitat that was affected by sediment deposition. Macroinvertebrate communities identified in samples from French Creek, Pigeon Creek (Schuylkill River Basin), and East Branch Brandywine Creek at Glenmoore consistently indicated good stream conditions and were the best conditions measured in the network. Macroinvertebrate communities identified in samples from Trout Creek (site 61), West Branch Red Clay Creek (site 55) (Christina River Basin), and Valley Creek near Atglen (site 34) (Octoraro Creek Basin) indicated fair to poor stream conditions and were the worst conditions measured in the network. Trout Creek is heavily impacted by erosion, and Valley Creek near Atglen and West Branch Red Clay Creek are influenced by wastewater discharges. Hydrologic conditions in 1999, including a prolonged drought and a flood, influenced chemical concentrations and macroinvertebrate community structure throughout the county. Concentrations of nutrients and ions were lower in 1999 than in 1998 and 2000. Macroinvertebrate communities identified in samples from 1999 contained lower numbers of individuals than those from 1998 and 2000 but had similar community structure. Results from chemical and biological sampling in 2000 indicated that the benthic-macroinvertebrate community structure and the concentrations of nutrients and ions recovered to pre-1999 levels.

  15. Energy neutral and low power wireless communications

    NASA Astrophysics Data System (ADS)

    Orhan, Oner

    Wireless sensor nodes are typically designed to have low cost and small size. These design objectives impose restrictions on the capacity and efficiency of the transceiver components and energy storage units that can be used. As a result, energy becomes a bottleneck and continuous operation of the sensor network requires frequent battery replacements, increasing the maintenance cost. Energy harvesting and energy efficient transceiver architectures are able to overcome these challenges by collecting energy from the environment and utilizing the energy in an intelligent manner. However, due to the nature of the ambient energy sources, the amount of useful energy that can be harvested is limited and unreliable. Consequently, optimal management of the harvested energy and design of low power transceivers pose new challenges for wireless network design and operation. The first part of this dissertation is on energy neutral wireless networking, where optimal transmission schemes under different system setups and objectives are investigated. First, throughput maximization for energy harvesting two-hop networks with decode-and-forward half-duplex relays is studied. For a system with two parallel relays, various combinations of the following four transmission modes are considered: Broadcast from the source, multi-access from the relays, and successive relaying phases I and II. Next, the energy cost of the processing circuitry as well as the transmission energy are taken into account for communication over a broadband fading channel powered by an energy harvesting transmitter. Under this setup, throughput maximization, energy maximization, and transmission completion time minimization problems are studied. Finally, source and channel coding for an energy-limited wireless sensor node is investigated under various energy constraints including energy harvesting, processing and sampling costs. For each objective, optimal transmission policies are formulated as the solutions of a convex optimization problem, and the properties of these optimal policies are identified. In the second part of this thesis, low power transceiver design is considered for millimeter wave communication systems. In particular, using an additive quantization noise model, the effect of analog-digital conversion (ADC) resolution and bandwidth on the achievable rate is investigated for a multi-antenna system under a receiver power constraint. Two receiver architectures, analog and digital combining, are compared in terms of performance.
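
    As a small worked sketch of the kind of convex program referred to above (the harvest profile, rate model, and horizon are invented, and this is not the dissertation's exact formulation), throughput maximization under energy-causality constraints can be written as:

      import numpy as np
      import cvxpy as cp

      E = np.array([2.0, 0.0, 3.0, 1.0, 0.5])      # assumed energy harvested per slot (J)
      T = len(E)
      p = cp.Variable(T, nonneg=True)              # transmit energy spent in each slot

      # Throughput with a log(1 + p) rate model; energy causality: cumulative spending
      # can never exceed cumulative harvesting.
      objective = cp.Maximize(cp.sum(cp.log(1 + p)))
      constraints = [cp.cumsum(p) <= np.cumsum(E)]

      cp.Problem(objective, constraints).solve()
      print("optimal per-slot energy:", np.round(p.value, 3))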

  16. Symmetry compression method for discovering network motifs.

    PubMed

    Wang, Jianxin; Huang, Yuannan; Wu, Fang-Xiang; Pan, Yi

    2012-01-01

    Discovering network motifs could provide a significant insight into systems biology. Interestingly, many biological networks have been found to have a high degree of symmetry (automorphism), which is inherent in biological network topologies. The symmetry due to the large number of basic symmetric subgraphs (BSSs) causes a certain redundant calculation in discovering network motifs. Therefore, we compress all basic symmetric subgraphs before extracting compressed subgraphs and propose an efficient decompression algorithm to decompress all compressed subgraphs without loss of any information. In contrast to previous approaches, the novel Symmetry Compression method for Motif Detection, named as SCMD, eliminates most redundant calculations caused by widespread symmetry of biological networks. We use SCMD to improve three notable exact algorithms and two efficient sampling algorithms. Results of all exact algorithms with SCMD are the same as those of the original algorithms, since SCMD is a lossless method. The sampling results show that the use of SCMD almost does not affect the quality of sampling results. For highly symmetric networks, we find that SCMD used in both exact and sampling algorithms can help get a remarkable speedup. Furthermore, SCMD enables us to find larger motifs in biological networks with notable symmetry than previously possible.

  17. Description and Preliminary Testing of the CDSN Seismic Sensor Systems

    USGS Publications Warehouse

    Peterson, Jon; Tilgner, Edwin E.

    1985-01-01

    INTRODUCTION The China Digital Seismograph Network (CDSN) is being designed and installed to provide the People's Republic of China with the facilities needed to create a national digital database for earthquake research. The CDSN, which is being developed jointly by the PRC State Seismological Bureau and the U.S. Geological Survey, will consist initially of nine digitally-recording seismograph stations, a data management system to be used for compiling network-day tapes, and a depot maintenance center. Data produced by the network will be shared with research scientists throughout the world. A national seismograph network must be designed to support a variety of research objectives. From this standpoint, the choices and tradeoffs involved in specifying signal bandwidth, resolution, and dynamic range are the most important decisions in system design. As in the case of the CDSN, these decisions are made during the selection and design of the seismic sensor system and encoder components. The purpose of this report is to describe the CDSN sensor systems, their important signal characteristics, and the results of preliminary tests that have been performed on the instruments. Four overlapping data bands will be recorded at each station: short period (SP), broadband (BB), long period (LP), and very long period (VLP). Amplitude response curves are illustrated in Figure 1. Vertical and horizontal components will be recorded for each data band. The SP and LP channels will be recorded with sufficient sensitivities to resolve earth background noise at seismically quiet sites. The BB channels will have a lower sensitivity and are intended for broadband recording of moderate-to-large body-wave signals and for increasing the effective amplitude range in the short- and long-period bands. The VLP channel does not provide additional spectral coverage at long periods; its purpose is to make use of on-site filtration and decimation to reduce post processing requirements for VLP studies. Early plans also included a triaxial set of low-sensitivity accelerometers for recording strong signals from large local and regional earthquakes. The accelerometers are not being installed; however, they may be added in the future. The short-period signals will be derived from a three-component set of PRC-supplied Model DJ-I SP seismometers and US-supplied SP amplifiers. The seismometers will be installed in surface or shallow subsurface vaults, except at two of the stations where they will be installed in boreholes. The BB, LP, and VLP signals will be derived from Streckeisen STS-1 broadband sensor systems installed in vaults, except at one site where the LP signals only will be derived from a KS-36000 borehole seismometer installed at a depth of 100 meters. Analog signals will be sampled and quantized by an analog-to-digital converter (ADC) that is part of the recording system. Sampling rates chosen for the CDSN are as follows: SP, 40 samples/second; BB, 20 samples/second; LP, 1 sample/second; VLP, 6 samples/minute. The ADC 16-bit data word format makes use of 14 bits to quantize the signal and 2 bits to specify an automatically ranged gain of 1, 8, 32, or 128. This will provide 84 dB of resolution and up to 42 dB of gain ranging for a total operating range of 126 dB peak to peak. Magnetic tape cartridges, each having a capacity of 67 megabytes, will be used for recording the digital data. LP and VLP data will be recorded continuously. 
SP and BB data will be processed through an automatic signal detector of the type described by Murdock and Hutt (1983), and only detected events will be stored on tape. Detection parameters, such as turn-on sensitivity and minimum recording duration for the SP and BB channels, will be fully programmable and easily changed. One or more of the data channels may also be recorded on analog recorders. A CDSN recording system was not
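
    For reference, the dynamic-range figures quoted above follow directly from the ADC word format: 14 quantization bits give about 84 dB, a maximum auto-ranged gain of 128 adds about 42 dB, for roughly 126 dB in total. A quick check using the standard 20*log10 conversion (the script below is illustrative, not from the report):

      import math

      quant_bits = 14            # bits used to quantize the signal
      max_gain = 128             # largest auto-ranged gain step

      resolution_db = 20 * math.log10(2 ** quant_bits)   # ~84 dB
      gain_range_db = 20 * math.log10(max_gain)          # ~42 dB
      print(round(resolution_db, 1), round(gain_range_db, 1),
            round(resolution_db + gain_range_db, 1))     # ~84.3, 42.1, 126.4 dB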

  18. Optical burst switching based satellite backbone network

    NASA Astrophysics Data System (ADS)

    Li, Tingting; Guo, Hongxiang; Wang, Cen; Wu, Jian

    2018-02-01

    We propose a novel time slot based optical burst switching (OBS) architecture for a GEO/LEO based satellite backbone network. This architecture can provide a high data transmission rate and high switching capacity. Furthermore, we design the control plane of this optical satellite backbone network. The software defined network (SDN) and network slice (NS) technologies are introduced. Under the properly designed control mechanism, this backbone network is flexible enough to support various services with diverse transmission requirements. Additionally, LEO access and handoff management in this network is also discussed.

  19. Design of microstrip patch antennas using knowledge insertion through retraining

    NASA Astrophysics Data System (ADS)

    Divakar, T. V. S.; Sudhakar, A.

    2018-04-01

    The traditional way of designing with neural networks is to collect experimental data and train a neural network on it. The trained neural network then acts as a global approximation function and is used to calculate parameters for unknown configurations. The main drawback of this method is that one rarely has enough experimental data, the cost of prototypes being a major factor [1-4]. Therefore, in this method the authors collected training data from available approximate formulas within the full design range and trained the network with it. After successful training, the network is retrained with the available measured results. This simple procedure inserts experimental knowledge into the network [5]. The method is tested for a rectangular microstrip antenna and a circular microstrip antenna.
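
    A hypothetical sketch of the retraining idea described above (the "approximate formula", the measured points, and the network size are all placeholders, not values from the paper): a model is first fit to formula-generated data over the design range and then refined with a few measured samples.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)

      def approx_formula(w_mm):
          """Placeholder closed-form estimate, e.g., a resonance estimate vs. patch width."""
          return 120.0 / w_mm

      # Stage 1: train on cheap formula-generated data spanning the design range.
      w_grid = np.linspace(5.0, 40.0, 200).reshape(-1, 1)
      net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
      net.fit(w_grid, approx_formula(w_grid.ravel()))

      # Stage 2: retrain on scarce "measured" data (synthetic placeholders here)
      # to inject experimental knowledge into the trained network.
      w_meas = np.array([[8.0], [15.0], [30.0]])
      f_meas = approx_formula(w_meas.ravel()) * 1.05 + rng.normal(0, 0.05, 3)
      for _ in range(200):
          net.partial_fit(w_meas, f_meas)

      print("prediction at 20 mm:", round(float(net.predict([[20.0]])[0]), 2))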

  20. A multidisciplinary monitoring network at Mayon volcano, Philippines: A collaborative effort between PHIVOLCS and EOS

    NASA Astrophysics Data System (ADS)

    Schwandner, F. M.; Hidayat, D.; Laguerta, E. P.; Baloloy, A. V.; Valerio, R.; Vaquilar, R.; Arpa, M. C.; Marcial, S. S.; Novianti, M. L.

    2012-04-01

    Mount Mayon in Albay province (Philippines) is an openly-degassing basaltic-andesitic stratovolcano, located on the northern edge of the northwest-trending OAS graben. Its latest eruptions were in Aug-Sept 2006 and Dec 2009. Mayon's current status is PHIVOLCS' level 1, with low seismicity dominated mostly by local and regional tectonic earthquakes and continuous emission of SO2 from its summit crater. A research collaboration between the Earth Observatory of Singapore-NTU and the Philippine Institute of Volcanology and Seismology (PHIVOLCS) was initiated in 2009, aimed at developing a multi-disciplinary monitoring network around Mayon. The network design comprises a network of co-located geophysical, geochemical, hydrological and meteorological sensors, in both radial and circular arrangements. Radially arranged stations are intended to capture and distinguish vertical conduit processes, while the circular station design (including existing PHIVOLCS stations in cooperation with JICA, Japan) is meant to distinguish locations and sector activity of subsurface events. Geophysical instrumentation from EOS currently includes 4 broadband seismographs (in addition to 3 existing broadbands and 3 short period instruments from PHIVOLCS & JICA), and 5 tiltmeters. Four continuous cGPS stations will be installed in 2012, complementing 5 existing PHIVOLCS stations. Stations are also designed to house a multi-sensor package of static subsurface soil CO2 monitoring stations, the first of which was installed in early 2012, and which include subsoil sensors for heat flux, temperature, and moisture, as well as meteorological stations (with sonic anemometers and contact rain gages). These latter sensors are all controlled from one control box per station. Meteorological stations will help us to validate tilt and gas permeability and to assess lahar initiation potential. Since early 2011, separate stations downwind of the two prevailing wind directions from the summit continuously monitor the SO2 plume during daylight (the first Asian NOVAC dual-channel mini-DOAS). One unused agricultural well and one boxed spring were equipped with multi-sensor probes, installed in spring and summer 2011, to detect bulk volumetric strain and changes in chemical composition in high-gain and low-gain mode. All stations are autonomous in terms of their power source (solar), and are designed to withstand typhoons, break-in attempts and direct/indirect lightning strikes. To telemeter the data from these instruments to the local PHIVOLCS observatory at Lignon Hill (Legazpi), we use spread-spectrum radios with our own repeater stations, GSM/GPRS radio modems, and 3G broadband Internet. High rate data, including seismic and NOVAC SO2 data, are transmitted via spread-spectrum radio, whereas tilt, ground CO2, meteorology, hydrology and soil parameters are transmitted via 3G and SMS. We designed a low-cost datalogger system for tilt data, which has been operating since Jan 2011, performing continuous data acquisition at a rate of one sample every 20 minutes and transmitting the data through the GSM network. The receiving station is the PHIVOLCS Lignon Hill Observatory (LHO), where an off-grid power system has been installed to ensure continuous operation of the monitoring computers and radios. Local pre-processing by observatory staff and local archiving ensures close to immediate availability of data products in times of crisis. The data are also forwarded via TCP/IP to servers at PHIVOLCS headquarters and at EOS. 
Network infrastructure and data flows will be completed in 2012.
