Sample records for association benchmarking network

  1. The Medical Library Association Benchmarking Network: development and implementation.

    PubMed

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C; Smith, Bernie Todd

    2006-04-01

    This article explores the development and implementation of the Medical Library Association (MLA) Benchmarking Network from the initial idea and test survey, to the implementation of a national survey in 2002, to the establishment of a continuing program in 2004. Started as a program for hospital libraries, it has expanded to include other nonacademic health sciences libraries. The activities and timelines of MLA's Benchmarking Network task forces and editorial board from 1998 to 2004 are described. The Benchmarking Network task forces successfully developed an extensive questionnaire with parameters of size and measures of library activity and published a report of the data collected by September 2002. The data were available to all MLA members in the form of aggregate tables. Utilization of Web-based technologies proved feasible for data intake and interactive display. A companion article analyzes and presents some of the data. MLA has continued to develop the Benchmarking Network with the completion of a second survey in 2004. The Benchmarking Network has provided many small libraries with comparative data to present to their administrators. It is a challenge for the future to convince all MLA members to participate in this valuable program.

  2. The Medical Library Association Benchmarking Network: development and implementation*

    PubMed Central

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C.; Smith, Bernie Todd

    2006-01-01

    Objective: This article explores the development and implementation of the Medical Library Association (MLA) Benchmarking Network from the initial idea and test survey, to the implementation of a national survey in 2002, to the establishment of a continuing program in 2004. Started as a program for hospital libraries, it has expanded to include other nonacademic health sciences libraries. Methods: The activities and timelines of MLA's Benchmarking Network task forces and editorial board from 1998 to 2004 are described. Results: The Benchmarking Network task forces successfully developed an extensive questionnaire with parameters of size and measures of library activity and published a report of the data collected by September 2002. The data were available to all MLA members in the form of aggregate tables. Utilization of Web-based technologies proved feasible for data intake and interactive display. A companion article analyzes and presents some of the data. MLA has continued to develop the Benchmarking Network with the completion of a second survey in 2004. Conclusions: The Benchmarking Network has provided many small libraries with comparative data to present to their administrators. It is a challenge for the future to convince all MLA members to participate in this valuable program. PMID:16636702

  3. BIOREL: the benchmark resource to estimate the relevance of the gene networks.

    PubMed

    Antonov, Alexey V; Mewes, Hans W

    2006-02-06

    The progress of high-throughput methodologies in functional genomics has led to the development of statistical procedures to infer gene networks from various types of high-throughput data. However, due to the lack of common standards, the biological significance of the results of the different studies is hard to compare. To overcome this problem we propose a benchmark procedure and have developed a web resource (BIOREL), which is useful for estimating the biological relevance of any genetic network by integrating different sources of biological information. The associations of each gene from the network are classified as biologically relevant or not. The proportion of genes in the network classified as "relevant" is used as the overall network relevance score. Employing synthetic data we demonstrated that such a score ranks the networks fairly with respect to the relevance level. Using BIOREL as the benchmark resource we compared the quality of experimental and theoretically predicted protein interaction data.
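
    The scoring idea summarized above (the fraction of genes whose network associations are judged biologically relevant) can be illustrated with a short sketch. The relevance test below (shared annotation with at least one neighbour) is a hypothetical stand-in for BIOREL's actual classification, and the gene and GO identifiers are invented.

      # Minimal sketch of a BIOREL-style relevance score: a gene's associations count
      # as "relevant" here if at least one network neighbour shares an annotation term.
      # This relevance test is an assumption for illustration, not BIOREL's criterion.
      def network_relevance_score(edges, annotations):
          neighbours = {}
          for a, b in edges:
              neighbours.setdefault(a, set()).add(b)
              neighbours.setdefault(b, set()).add(a)
          relevant = sum(
              1 for gene, nbrs in neighbours.items()
              if any(annotations.get(gene, set()) & annotations.get(n, set()) for n in nbrs)
          )
          return relevant / len(neighbours) if neighbours else 0.0

      edges = {("geneA", "geneB"), ("geneB", "geneC"), ("geneC", "geneD")}
      annotations = {"geneA": {"GO:0006915"}, "geneB": {"GO:0006915"}, "geneC": {"GO:0008150"}}
      print(network_relevance_score(edges, annotations))  # fraction of genes with a relevant link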

  4. A large-scale benchmark of gene prioritization methods.

    PubMed

    Guala, Dimitri; Sonnhammer, Erik L L

    2017-04-21

    In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in the construction of robust benchmarks that are objective with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion, NetRank and two implementations of Random Walk with Restart, and MaxLink that utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.
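
    The retrospective benchmarking strategy described above (hold out members of a GO-derived gene set and ask a network method to rank them back) can be mocked up as below. The neighbour-counting scorer is a simplified, MaxLink-like stand-in rather than any of the published algorithms, and the toy graph and gene names are invented; FunCoup itself is not used.

      # Leave-one-out sketch of retrospective, network-based gene prioritization.
      # Each member of a (hypothetical) GO-term gene set is held out in turn and
      # scored by how many of the remaining seed genes it neighbours in a toy network.
      import networkx as nx

      def rank_of_held_out(graph, seeds, held_out):
          scores = {
              gene: sum(1 for n in graph.neighbors(gene) if n in seeds)
              for gene in graph.nodes if gene not in seeds
          }
          ranking = sorted(scores, key=scores.get, reverse=True)
          return ranking.index(held_out) + 1  # 1 = ranked first

      G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("A", "C"), ("D", "E")])
      gene_set = {"A", "B", "C"}              # stand-in for a GO-term gene set
      for held_out in sorted(gene_set):
          print(held_out, rank_of_held_out(G, gene_set - {held_out}, held_out))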

  5. Binary Associative Memories as a Benchmark for Spiking Neuromorphic Hardware

    PubMed Central

    Stöckel, Andreas; Jenzen, Christoph; Thies, Michael; Rückert, Ulrich

    2017-01-01

    Large-scale neuromorphic hardware platforms, specialized computer systems for energy efficient simulation of spiking neural networks, are being developed around the world, for example as part of the European Human Brain Project (HBP). Due to conceptual differences, a universal performance analysis of these systems in terms of runtime, accuracy and energy efficiency is non-trivial, yet indispensable for further hardware and software development. In this paper we describe a scalable benchmark based on a spiking neural network implementation of the binary neural associative memory. We treat neuromorphic hardware and software simulators as black-boxes and execute exactly the same network description across all devices. Experiments on the HBP platforms under varying configurations of the associative memory show that the presented method makes it possible to test the quality of the neuron model implementation and to explain significant deviations from the expected reference output. PMID:28878642
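
    The reference model behind this benchmark, a binary (Willshaw-style) associative memory, can be written down in a few lines. The sketch below is the conventional non-spiking reference computation under assumed pattern counts and sizes; it is not the spiking network description executed on the neuromorphic platforms.

      # Non-spiking reference sketch of a binary (Willshaw) associative memory:
      # storage is a clipped Hebbian outer product, recall thresholds on input activity.
      # All sizes below are arbitrary illustration values.
      import numpy as np

      rng = np.random.default_rng(0)
      n_in, n_out, n_patterns, ones = 64, 64, 20, 4

      X = np.zeros((n_patterns, n_in), dtype=np.uint8)
      Y = np.zeros((n_patterns, n_out), dtype=np.uint8)
      for k in range(n_patterns):
          X[k, rng.choice(n_in, ones, replace=False)] = 1
          Y[k, rng.choice(n_out, ones, replace=False)] = 1

      W = np.clip(X.T @ Y, 0, 1)                      # binary weight matrix

      def recall(x):
          return (x @ W >= x.sum()).astype(np.uint8)  # fire if all active inputs connect

      errors = sum(np.any(recall(X[k]) != Y[k]) for k in range(n_patterns))
      print(f"{errors} of {n_patterns} patterns recalled with errors")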

  6. Feature pruning by upstream drainage area to support automated generalization of the United States National Hydrography Dataset

    USGS Publications Warehouse

    Stanislawski, L.V.

    2009-01-01

    The United States Geological Survey has been researching generalization approaches to enable multiple-scale display and delivery of geographic data. This paper presents automated methods to prune network and polygon features of the United States high-resolution National Hydrography Dataset (NHD) to lower resolutions. Feature-pruning rules, data enrichment, and partitioning are derived from knowledge of surface water, the NHD model, and associated feature specification standards. Relative prominence of network features is estimated from upstream drainage area (UDA). Network and polygon features are pruned by UDA and NHD reach code to achieve a drainage density appropriate for any less detailed map scale. Data partitioning maintains local drainage density variations that characterize the terrain. For demonstration, a 48 subbasin area of 1:24 000-scale NHD was pruned to 1:100 000-scale (100 K) and compared to a benchmark, the 100 K NHD. The coefficient of line correspondence (CLC) is used to evaluate how well pruned network features match the benchmark network. CLC values of 0.82 and 0.77 result from pruning with and without partitioning, respectively. The number of polygons that remain after pruning is about seven times that of the benchmark, but the area covered by the polygons that remain after pruning is only about 10% greater than the area covered by benchmark polygons. © 2009.
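
    As a rough illustration of the pruning and evaluation steps described above, the sketch below thresholds features by upstream drainage area and scores the result against a benchmark set. The feature records are invented, and the length-weighted overlap ratio is only an assumed stand-in for the paper's coefficient of line correspondence.

      # Sketch of pruning stream features by upstream drainage area (UDA) and scoring
      # the result against a benchmark network.  The correspondence measure here is a
      # simple length-weighted overlap ratio, an assumption standing in for the CLC.
      features = {           # feature id -> (UDA in km^2, length in km); invented values
          "r1": (250.0, 12.0), "r2": (40.0, 5.0), "r3": (3.0, 1.2), "r4": (90.0, 7.5),
      }
      benchmark_100k = {"r1", "r4"}        # features present in the 100 K benchmark

      uda_threshold = 50.0                 # keep features draining at least this area
      pruned = {fid for fid, (uda, _) in features.items() if uda >= uda_threshold}

      matched = sum(features[f][1] for f in pruned & benchmark_100k)
      total = sum(features[f][1] for f in pruned | benchmark_100k)
      clc_like = matched / total if total else 0.0
      print(sorted(pruned), round(clc_like, 2))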

  7. Probing the functions of long non-coding RNAs by exploiting the topology of global association and interaction network.

    PubMed

    Deng, Lei; Wu, Hongjie; Liu, Chuyao; Zhan, Weihua; Zhang, Jingpu

    2018-06-01

    Long non-coding RNAs (lncRNAs) are involved in many biological processes, such as immune response, development, differentiation and gene imprinting and are associated with diseases and cancers. But the functions of the vast majority of lncRNAs are still unknown. Predicting the biological functions of lncRNAs is one of the key challenges in the post-genomic era. In our work, we first build a global network including a lncRNA similarity network, a lncRNA-protein association network and a protein-protein interaction network according to the expressions and interactions, then extract the topological feature vectors of the global network. Using these features, we present an SVM-based machine learning approach, PLNRGO, to annotate human lncRNAs. In PLNRGO, we construct a training data set according to the proteins with GO annotations and train a binary classifier for each GO term. We assess the performance of PLNRGO on our manually annotated lncRNA benchmark and a protein-coding gene benchmark with known functional annotations. As a result, the performance of our method is significantly better than that of other state-of-the-art methods in terms of maximum F-measure and coverage. Copyright © 2018 Elsevier Ltd. All rights reserved.
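
    The per-term classification step described above (one binary classifier per GO term, trained on topological feature vectors) can be sketched with scikit-learn. The feature matrix, labels and GO identifiers below are random stand-ins; the topological feature extraction and the rest of the PLNRGO pipeline are not reproduced.

      # Sketch of the "one binary classifier per GO term" step, assuming topological
      # feature vectors have already been extracted from the global network.
      import numpy as np
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 16))                  # stand-in topological feature vectors
      go_terms = ["GO:0006915", "GO:0007165"]         # hypothetical GO terms
      labels = {t: rng.integers(0, 2, size=200) for t in go_terms}  # stand-in annotations

      classifiers = {}
      for term in go_terms:
          clf = LinearSVC(C=1.0, max_iter=5000)
          clf.fit(X, labels[term])                    # one-vs-rest model for this GO term
          classifiers[term] = clf

      new_lncRNA = rng.normal(size=(1, 16))
      predicted = [t for t, clf in classifiers.items() if clf.predict(new_lncRNA)[0] == 1]
      print(predicted)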

  8. Analysis of 2D Torus and Hub Topologies of 100Mb/s Ethernet for the Whitney Commodity Computing Testbed

    NASA Technical Reports Server (NTRS)

    Pedretti, Kevin T.; Fineberg, Samuel A.; Kutler, Paul (Technical Monitor)

    1997-01-01

    A variety of different network technologies and topologies are currently being evaluated as part of the Whitney Project. This paper reports on the implementation and performance of a Fast Ethernet network configured in a 4x4 2D torus topology in a testbed cluster of 'commodity' Pentium Pro PCs. Several benchmarks were used for performance evaluation: an MPI point to point message passing benchmark, an MPI collective communication benchmark, and the NAS Parallel Benchmarks version 2.2 (NPB2). Our results show that for point to point communication on an unloaded network, the hub and 1 hop routes on the torus have about the same bandwidth and latency. However, the bandwidth decreases and the latency increases on the torus for each additional route hop. Collective communication benchmarks show that the torus provides roughly four times more aggregate bandwidth and eight times faster MPI barrier synchronizations than a hub based network for 16 processor systems. Finally, the SOAPBOX benchmarks, which simulate real-world CFD applications, generally demonstrated substantially better performance on the torus than on the hub. In the few cases the hub was faster, the difference was negligible. In total, our experimental results lead to the conclusion that for Fast Ethernet networks, the torus topology has better performance and scales better than a hub based network.
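
    For readers who want a feel for the point-to-point test mentioned above, the following mpi4py ping-pong is a minimal sketch of a latency/bandwidth measurement between two ranks (run with mpiexec -n 2). It is illustrative only; it is not the benchmark code used in the Whitney experiments, and the message size and iteration count are arbitrary.

      # Minimal ping-pong sketch of an MPI point-to-point latency/bandwidth test.
      # Run with: mpiexec -n 2 python <this_script>.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      n_iters, n_bytes = 100, 1 << 20
      buf = np.zeros(n_bytes, dtype=np.uint8)

      comm.Barrier()
      t0 = MPI.Wtime()
      for _ in range(n_iters):
          if rank == 0:
              comm.Send(buf, dest=1); comm.Recv(buf, source=1)
          else:
              comm.Recv(buf, source=0); comm.Send(buf, dest=0)
      elapsed = MPI.Wtime() - t0

      if rank == 0:
          round_trip = elapsed / n_iters
          print(f"latency ~ {round_trip / 2 * 1e6:.1f} us, "
                f"bandwidth ~ {2 * n_bytes * n_iters / elapsed / 1e6:.1f} MB/s")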

  9. The Medical Library Association Benchmarking Network: results.

    PubMed

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C; Smith, Bernie Todd

    2006-04-01

    This article presents some limited results from the Medical Library Association (MLA) Benchmarking Network survey conducted in 2002. Other uses of the data are also presented. After several years of development and testing, a Web-based survey opened for data input in December 2001. Three hundred eighty-five MLA members entered data on the size of their institutions and the activities of their libraries. The data from 344 hospital libraries were edited and selected for reporting in aggregate tables and on an interactive site in the Members-Only area of MLANET. The data represent a 16% to 23% return rate and have a 95% confidence level. Specific questions can be answered using the reports. The data can be used to review internal processes, perform outcomes benchmarking, retest a hypothesis, refute previous survey findings, or develop library standards. The data can be compared with current surveys or used to look for trends by comparison with past surveys. The impact of this project on MLA will reach into areas of research and advocacy. The data will be useful in the everyday working of small health sciences libraries as well as provide concrete data on the current practices of health sciences libraries.

  10. The Medical Library Association Benchmarking Network: results*

    PubMed Central

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C.; Smith, Bernie Todd

    2006-01-01

    Objective: This article presents some limited results from the Medical Library Association (MLA) Benchmarking Network survey conducted in 2002. Other uses of the data are also presented. Methods: After several years of development and testing, a Web-based survey opened for data input in December 2001. Three hundred eighty-five MLA members entered data on the size of their institutions and the activities of their libraries. The data from 344 hospital libraries were edited and selected for reporting in aggregate tables and on an interactive site in the Members-Only area of MLANET. The data represent a 16% to 23% return rate and have a 95% confidence level. Results: Specific questions can be answered using the reports. The data can be used to review internal processes, perform outcomes benchmarking, retest a hypothesis, refute previous survey findings, or develop library standards. The data can be compared with current surveys or used to look for trends by comparison with past surveys. Conclusions: The impact of this project on MLA will reach into areas of research and advocacy. The data will be useful in the everyday working of small health sciences libraries as well as provide concrete data on the current practices of health sciences libraries. PMID:16636703

  11. GeneNetWeaver: in silico benchmark generation and performance profiling of network inference methods.

    PubMed

    Schaffter, Thomas; Marbach, Daniel; Floreano, Dario

    2011-08-15

    Over the last decade, numerous methods have been developed for inference of regulatory networks from gene expression data. However, accurate and systematic evaluation of these methods is hampered by the difficulty of constructing adequate benchmarks and the lack of tools for a differentiated analysis of network predictions on such benchmarks. Here, we describe a novel and comprehensive method for in silico benchmark generation and performance profiling of network inference methods, available to the community as open-source software called GeneNetWeaver (GNW). In addition to the generation of detailed dynamical models of gene regulatory networks to be used as benchmarks, GNW provides a network motif analysis that reveals systematic prediction errors, thereby indicating potential ways of improving inference methods. The accuracy of network inference methods is evaluated using standard metrics such as precision-recall and receiver operating characteristic curves. We show how GNW can be used to assess the performance and identify the strengths and weaknesses of six inference methods. Furthermore, we used GNW to provide the international Dialogue for Reverse Engineering Assessments and Methods (DREAM) competition with three network inference challenges (DREAM3, DREAM4 and DREAM5). GNW is available at http://gnw.sourceforge.net along with its Java source code, user manual and supporting data. Supplementary data are available at Bioinformatics online. dario.floreano@epfl.ch.
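
    The evaluation step named above (precision-recall and ROC metrics for predicted regulatory links) can be sketched as follows with scikit-learn. GNW itself is a Java tool and is not invoked here; the gold-standard edges and confidence scores below are invented stand-ins for an inference method's output.

      # Sketch of scoring a predicted gene regulatory network against a gold standard
      # with ROC and precision-recall summary metrics.
      import numpy as np
      from sklearn.metrics import average_precision_score, roc_auc_score

      genes = ["g1", "g2", "g3", "g4"]
      gold_edges = {("g1", "g2"), ("g3", "g4")}     # true regulatory links (invented)
      rng = np.random.default_rng(0)

      y_true, y_score = [], []
      for a in genes:
          for b in genes:
              if a == b:
                  continue
              y_true.append(1 if (a, b) in gold_edges else 0)
              y_score.append(rng.random())          # stand-in confidence from an inference method

      print("AUROC:", roc_auc_score(y_true, y_score))
      print("AUPR :", average_precision_score(y_true, y_score))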

  12. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.

  13. Use of benchmarking and public reporting for infection control in four high-income countries.

    PubMed

    Haustein, Thomas; Gastmeier, Petra; Holmes, Alison; Lucet, Jean-Christophe; Shannon, Richard P; Pittet, Didier; Harbarth, Stephan

    2011-06-01

    Benchmarking of surveillance data for health-care-associated infection (HCAI) has been used for more than three decades to inform prevention strategies and improve patients' safety. In recent years, public reporting of HCAI indicators has been mandated in several countries because of an increasing demand for transparency, although many methodological issues surrounding benchmarking remain unresolved and are highly debated. In this Review, we describe developments in benchmarking and public reporting of HCAI indicators in England, France, Germany, and the USA. Although benchmarking networks in these countries are derived from a common model and use similar methods, approaches to public reporting have been more diverse. The USA and England have predominantly focused on reporting of infection rates, whereas France has put emphasis on process and structure indicators. In Germany, HCAI indicators of individual institutions are treated confidentially and are not disseminated publicly. Although evidence for a direct effect of public reporting of indicators alone on incidence of HCAIs is weak at present, it has been associated with substantial organisational change. An opportunity now exists to learn from the different strategies that have been adopted. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. NetBenchmark: a bioconductor package for reproducible benchmarks of gene regulatory network inference.

    PubMed

    Bellot, Pau; Olsen, Catharina; Salembier, Philippe; Oliveras-Vergés, Albert; Meyer, Patrick E

    2015-09-29

    In the last decade, a great number of methods for reconstructing gene regulatory networks from expression data have been proposed. However, very few tools and datasets allow those methods to be evaluated accurately and reproducibly. Hence, we propose here a new tool, able to perform a systematic, yet fully reproducible, evaluation of transcriptional network inference methods. Our open-source and freely available Bioconductor package aggregates a large set of tools to assess the robustness of network inference algorithms against different simulators, topologies, sample sizes and noise intensities. The benchmarking framework, which uses various datasets, highlights the specialization of some methods toward network types and data. As a result, it is possible to identify the techniques that have broad overall performance.

  15. Connecting to young adults: an online social network survey of beliefs and attitudes associated with prescription opioid misuse among college students.

    PubMed

    Lord, Sarah; Brevard, Julie; Budman, Simon

    2011-01-01

    A survey of motives and attitudes associated with patterns of nonmedical prescription opioid medication use among college students was conducted on Facebook, a popular online social networking Web site. Response metrics for a 2-week random advertisement post, targeting students who had misused prescription medications, surpassed typical benchmarks for online marketing campaigns and yielded 527 valid surveys. Respondent characteristics, substance use patterns, and use motives were consistent with other surveys of prescription opioid use among college populations. Results support the potential of online social networks to serve as powerful vehicles to connect with college-aged populations about their drug use. Limitations of the study are noted.

  16. Analysis of 100Mb/s Ethernet for the Whitney Commodity Computing Testbed

    NASA Technical Reports Server (NTRS)

    Fineberg, Samuel A.; Pedretti, Kevin T.; Kutler, Paul (Technical Monitor)

    1997-01-01

    We evaluate the performance of a Fast Ethernet network configured with a single large switch, a single hub, and a 4x4 2D torus topology in a testbed cluster of "commodity" Pentium Pro PCs. We also evaluated a mixed network composed of Ethernet hubs and switches. An MPI collective communication benchmark and the NAS Parallel Benchmarks version 2.2 (NPB2) show that the torus network performs best for all sizes that we were able to test (up to 16 nodes). For larger networks the Ethernet switch outperforms the hub, though its performance is far less than peak. The hub/switch combination tests indicate that the NAS parallel benchmarks are relatively insensitive to hub densities of less than 7 nodes per hub.

  17. The national hydrologic bench-mark network

    USGS Publications Warehouse

    Cobb, Ernest D.; Biesecker, J.E.

    1971-01-01

    The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.

  18. Connecting to Young Adults: An Online Social Network Survey of Beliefs and Attitudes Associated With Prescription Opioid Misuse Among College Students

    PubMed Central

    Lord, Sarah; Brevard, Julie; Budman, Simon

    2011-01-01

    A survey of motives and attitudes associated with patterns of nonmedical prescription opioid medication use among college students was conducted on Facebook, a popular online social networking Web site. Response metrics for a 2-week random advertisement post, targeting students who had misused prescription medications, surpassed typical benchmarks for online marketing campaigns and yielded 527 valid surveys. Respondent characteristics, substance use patterns, and use motives were consistent with other surveys of prescription opioid use among college populations. Results support the potential of online social networks to serve as powerful vehicles to connect with college-aged populations about their drug use. Limitations of the study are noted. PMID:21190407

  19. A community detection algorithm using network topologies and rule-based hierarchical arc-merging strategies

    PubMed Central

    2017-01-01

    The authors use four criteria to examine a novel community detection algorithm: (a) effectiveness in terms of producing high values of normalized mutual information (NMI) and modularity, using well-known social networks for testing; (b) examination, meaning the ability to examine mitigating resolution limit problems using NMI values and synthetic networks; (c) correctness, meaning the ability to identify useful community structure results in terms of NMI values and Lancichinetti-Fortunato-Radicchi (LFR) benchmark networks; and (d) scalability, or the ability to produce comparable modularity values with fast execution times when working with large-scale real-world networks. In addition to describing a simple hierarchical arc-merging (HAM) algorithm that uses network topology information, we introduce rule-based arc-merging strategies for identifying community structures. Five well-studied social network datasets and eight sets of LFR benchmark networks were employed to validate the correctness of a ground-truth community, eight large-scale real-world complex networks were used to measure its efficiency, and two synthetic networks were used to determine its susceptibility to two resolution limit problems. Our experimental results indicate that the proposed HAM algorithm exhibited satisfactory performance efficiency, and that HAM-identified and ground-truth communities were comparable in terms of social and LFR benchmark networks, while mitigating resolution limit problems. PMID:29121100
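
    The agreement criteria (a) and (c) above rely on normalized mutual information between a detected partition and a ground-truth partition, which can be computed directly with scikit-learn, as in the following sketch. The two label vectors are invented, and the HAM algorithm itself is not reproduced.

      # Sketch of the NMI check used to compare detected communities with a
      # ground-truth partition (labels are per-node community ids).
      from sklearn.metrics import normalized_mutual_info_score

      ground_truth = [0, 0, 0, 1, 1, 1, 2, 2]   # known community of each node
      detected     = [0, 0, 1, 1, 1, 1, 2, 2]   # communities found by some algorithm

      print(normalized_mutual_info_score(ground_truth, detected))  # 1.0 = perfect agreement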

  20. An integrated data envelopment analysis-artificial neural network approach for benchmarking of bank branches

    NASA Astrophysics Data System (ADS)

    Shokrollahpour, Elsa; Hosseinzadeh Lotfi, Farhad; Zandieh, Mostafa

    2016-06-01

    Efficiency and quality of services are crucial to today's banking industries. The competition in this sector has become increasingly intense as a result of rapid improvements in technology. Therefore, performance analysis of the banking sector attracts more attention these days. Although data envelopment analysis (DEA) is a pioneering approach in the literature as an efficiency measurement and benchmark-finding tool, it is unable to identify possible future benchmarks. Its drawback is that the benchmarks it provides may still be less efficient than more advanced future benchmarks. To address this weakness, an artificial neural network is integrated with DEA in this paper to calculate relative efficiency and more reliable benchmarks for the branches of an Iranian commercial bank. Each branch can therefore adopt a strategy to improve efficiency and eliminate the causes of inefficiency based on a five-year forecast.
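
    The DEA building block referred to above can be illustrated with a small input-oriented CCR model solved as a linear program. The branch inputs and outputs below are invented numbers, and the neural-network forecasting stage that the paper integrates with DEA is not shown.

      # Sketch of an input-oriented CCR DEA efficiency score for each branch, solved
      # as a linear program with SciPy.  Inputs/outputs are made-up illustration data.
      import numpy as np
      from scipy.optimize import linprog

      X = np.array([[20.0, 30.0, 25.0, 40.0],      # inputs  (rows) x branches (cols), e.g. staff
                    [15.0, 10.0, 12.0, 20.0]])     # e.g. operating cost
      Y = np.array([[100.0, 90.0, 110.0, 120.0]])  # outputs (rows) x branches, e.g. loans issued

      def ccr_efficiency(j0):
          n = X.shape[1]
          c = np.r_[1.0, np.zeros(n)]                         # minimise theta
          A_in = np.hstack([-X[:, [j0]], X])                  # sum_j lam_j x_ij <= theta * x_i,j0
          A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])  # sum_j lam_j y_rj >= y_r,j0
          res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                        b_ub=np.r_[np.zeros(X.shape[0]), -Y[:, j0]],
                        bounds=[(0, None)] * (n + 1), method="highs")
          return res.fun

      for j in range(X.shape[1]):
          print(f"branch {j}: efficiency = {ccr_efficiency(j):.3f}")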

  1. Time and frequency structure of causal correlation networks in the China bond market

    NASA Astrophysics Data System (ADS)

    Wang, Zhongxing; Yan, Yan; Chen, Xiaosong

    2017-07-01

    There are more than eight hundred interest rates published in the China bond market every day. Identifying the benchmark interest rates that have broad influences on most other interest rates is a major concern for economists. In this paper, a multi-variable Granger causality test is developed and applied to construct a directed network of interest rates, whose important nodes, regarded as key interest rates, are evaluated with CheiRank scores. The results indicate that repo rates are the benchmark of short-term rates, the central bank bill rates are in the core position of mid-term interest rates network, and treasury bond rates lead the long-term bond rates. The evolution of benchmark interest rates from 2008 to 2014 is also studied, and it is found that SHIBOR has generally become the benchmark interest rate in China. In the frequency domain we identify the properties of information flows between interest rates, and the result confirms the existence of market segmentation in the China bond market.
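
    As a simplified illustration of the causality-network construction described above, the sketch below adds a directed edge whenever a pairwise Granger test is significant. The paper develops a multi-variable test and ranks nodes with CheiRank scores; neither is reproduced here, and the two simulated rate series are invented.

      # Sketch of building a directed "causality" network from rate series using
      # pairwise Granger tests (the paper's multi-variable test is not reproduced).
      import numpy as np
      import networkx as nx
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(0)
      T = 300
      repo = rng.normal(size=T)
      shibor = 0.8 * np.roll(repo, 1) + 0.3 * rng.normal(size=T)  # shibor lags behind repo
      series = {"repo": repo, "shibor": shibor}

      G = nx.DiGraph()
      for src in series:
          for dst in series:
              if src == dst:
                  continue
              data = np.column_stack([series[dst], series[src]])  # column 2 tested as cause of column 1
              p_value = grangercausalitytests(data, maxlag=2, verbose=False)[2][0]["ssr_ftest"][1]
              if p_value < 0.01:
                  G.add_edge(src, dst)

      print(list(G.edges()))   # typically [('repo', 'shibor')]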

  2. INFORMAS (International Network for Food and Obesity/non-communicable diseases Research, Monitoring and Action Support): overview and key principles.

    PubMed

    Swinburn, B; Sacks, G; Vandevijvere, S; Kumanyika, S; Lobstein, T; Neal, B; Barquera, S; Friel, S; Hawkes, C; Kelly, B; L'abbé, M; Lee, A; Ma, J; Macmullan, J; Mohan, S; Monteiro, C; Rayner, M; Sanders, D; Snowdon, W; Walker, C

    2013-10-01

    Non-communicable diseases (NCDs) dominate disease burdens globally and poor nutrition increasingly contributes to this global burden. Comprehensive monitoring of food environments, and evaluation of the impact of public and private sector policies on food environments, is needed to strengthen accountability systems to reduce NCDs. The International Network for Food and Obesity/NCDs Research, Monitoring and Action Support (INFORMAS) is a global network of public-interest organizations and researchers that aims to monitor, benchmark and support public and private sector actions to create healthy food environments and reduce obesity, NCDs and their related inequalities. The INFORMAS framework includes two 'process' modules that monitor the policies and actions of the public and private sectors, seven 'impact' modules that monitor the key characteristics of food environments and three 'outcome' modules that monitor dietary quality, risk factors and NCD morbidity and mortality. Monitoring frameworks and indicators have been developed for 10 modules to provide consistency, while allowing for stepwise approaches ('minimal', 'expanded', 'optimal') to data collection and analysis. INFORMAS data will enable benchmarking of food environments between countries, and monitoring of progress over time within countries. Through monitoring and benchmarking, INFORMAS will strengthen the accountability systems needed to help reduce the burden of obesity, NCDs and their related inequalities. © 2013 The Authors. Obesity Reviews published by John Wiley & Sons Ltd on behalf of the International Association for the Study of Obesity.

  3. Predicting Cost/Performance Trade-Offs for Whitney: A Commodity Computing Cluster

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey C.; Nitzberg, Bill; VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)

    1997-01-01

    Recent advances in low-end processor and network technology have made it possible to build a "supercomputer" out of commodity components. We develop simple models of the NAS Parallel Benchmarks version 2 (NPB 2) to explore the cost/performance trade-offs involved in building a balanced parallel computer supporting a scientific workload. We develop closed form expressions detailing the number and size of messages sent by each benchmark. Coupling these with measured single processor performance, network latency, and network bandwidth, our models predict benchmark performance to within 30%. A comparison based on total system cost reveals that current commodity technology (200 MHz Pentium Pros with 100baseT Ethernet) is well balanced for the NPBs up to a total system cost of around $1,000,000.
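
    The modelling approach described above, combining a message-count expression with measured latency and bandwidth, reduces to simple arithmetic; the sketch below shows the general form. All parameter values are invented placeholders, not the paper's measurements or its actual NPB message-count expressions.

      # Sketch of the closed-form performance model described above: predicted time is
      # measured compute time plus message count * latency plus volume / bandwidth.
      def predict_runtime(compute_s, n_messages, total_bytes, latency_s, bandwidth_Bps):
          comm_s = n_messages * latency_s + total_bytes / bandwidth_Bps
          return compute_s + comm_s

      # A hypothetical 16-process NPB-like run on 100 Mb/s Ethernet (numbers invented)
      predicted = predict_runtime(
          compute_s=42.0,              # single-processor work divided among processes
          n_messages=120_000,          # messages sent per process (from a counting model)
          total_bytes=1.5e9,           # bytes sent per process
          latency_s=150e-6,            # assumed point-to-point latency
          bandwidth_Bps=11e6,          # ~ achievable Fast Ethernet bandwidth in bytes/s
      )
      print(f"predicted runtime: {predicted:.1f} s")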

  4. [A German network for regional anaesthesia of the scientific working group regional anaesthesia within DGAI and BDA].

    PubMed

    Volk, Thomas; Engelhardt, Lars; Spies, Claudia; Steinfeldt, Thorsten; Kutter, Bernd; Heller, Axel; Werner, Christian; Heid, Florian; Bürkle, Hartmut; Koch, Thea; Vicent, Oliver; Geiger, Peter; Kessler, Paul; Wulf, Hinnerk

    2009-11-01

    Regional anaesthesia generally is considered to be safe. However, reports of complications with different severities are also well known. The scientific working group of regional anaesthesia of the DGAI has founded a network in conjunction with the BDA. With the aid of a registry, we are now able to describe risk profiles and associations in case of a complication. Moreover, a benchmark has been implemented in order to continuously improve complication rates. (c) Georg Thieme Verlag KG Stuttgart-New York.

  5. Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement

    PubMed Central

    Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander

    2011-01-01

    This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a new proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
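
    Of the baseline algorithms named above, the centroid detector is simple enough to sketch in a few lines; the synthetic spectrum and the half-maximum cutoff below are arbitrary assumptions, and the neural-network detector proposed in the paper is not reproduced.

      # Sketch of centroid peak detection on a synthetic FBG reflection spectrum,
      # one of the baseline algorithms compared in the benchmark.
      import numpy as np

      wavelength = np.linspace(1549.0, 1551.0, 500)          # nm
      true_peak = 1550.12
      spectrum = np.exp(-((wavelength - true_peak) / 0.05) ** 2)
      spectrum += np.random.default_rng(0).normal(scale=0.01, size=wavelength.size)

      mask = spectrum > 0.5 * spectrum.max()                 # use only the top of the peak
      centroid = np.sum(wavelength[mask] * spectrum[mask]) / np.sum(spectrum[mask])
      print(f"centroid estimate: {centroid:.4f} nm (true {true_peak} nm)")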

  6. Systematic Evaluation of Molecular Networks for Discovery of Disease Genes.

    PubMed

    Huang, Justin K; Carlin, Daniel E; Yu, Michael Ku; Zhang, Wei; Kreisberg, Jason F; Tamayo, Pablo; Ideker, Trey

    2018-04-25

    Gene networks are rapidly growing in size and number, raising the question of which networks are most appropriate for particular applications. Here, we evaluate 21 human genome-wide interaction networks for their ability to recover 446 disease gene sets identified through literature curation, gene expression profiling, or genome-wide association studies. While all networks have some ability to recover disease genes, we observe a wide range of performance with STRING, ConsensusPathDB, and GIANT networks having the best performance overall. A general tendency is that performance scales with network size, suggesting that new interaction discovery currently outweighs the detrimental effects of false positives. Correcting for size, we find that the DIP network provides the highest efficiency (value per interaction). Based on these results, we create a parsimonious composite network with both high efficiency and performance. This work provides a benchmark for selection of molecular networks in human disease research. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Part mutual information for quantifying direct associations in networks.

    PubMed

    Zhao, Juan; Zhou, Yiwei; Zhang, Xiujun; Chen, Luonan

    2016-05-03

    Quantitatively identifying direct dependencies between variables is an important task in data analysis, in particular for reconstructing various types of networks and causal relations in science and engineering. One of the most widely used criteria is partial correlation, but it can only measure linearly direct association and miss nonlinear associations. However, based on conditional independence, conditional mutual information (CMI) is able to quantify nonlinearly direct relationships among variables from the observed data, superior to linear measures, but suffers from a serious problem of underestimation, in particular for those variables with tight associations in a network, which severely limits its applications. In this work, we propose a new concept, "partial independence," with a new measure, "part mutual information" (PMI), which not only can overcome the problem of CMI but also retains the quantification properties of both mutual information (MI) and CMI. Specifically, we first defined PMI to measure nonlinearly direct dependencies between variables and then derived its relations with MI and CMI. Finally, we used a number of simulated data as benchmark examples to numerically demonstrate PMI features and further real gene expression data from Escherichia coli and yeast to reconstruct gene regulatory networks, which all validated the advantages of PMI for accurately quantifying nonlinearly direct associations in networks.

  8. Low-rank network decomposition reveals structural characteristics of small-world networks

    NASA Astrophysics Data System (ADS)

    Barranca, Victor J.; Zhou, Douglas; Cai, David

    2015-12-01

    Small-world networks occur naturally throughout biological, technological, and social systems. With their prevalence, it is particularly important to prudently identify small-world networks and further characterize their unique connection structure with respect to network function. In this work we develop a formalism for classifying networks and identifying small-world structure using a decomposition of network connectivity matrices into low-rank and sparse components, corresponding to connections within clusters of highly connected nodes and sparse interconnections between clusters, respectively. We show that the network decomposition is independent of node indexing and define associated bounded measures of connectivity structure, which provide insight into the clustering and regularity of network connections. While many existing network characterizations rely on constructing benchmark networks for comparison or fail to describe the structural properties of relatively densely connected networks, our classification relies only on the intrinsic network structure and is quite robust with respect to changes in connection density, producing stable results across network realizations. Using this framework, we analyze several real-world networks and reveal new structural properties, which are often indiscernible by previously established characterizations of network connectivity.
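
    The decomposition idea described above, splitting a connectivity matrix into a low-rank part (within-cluster structure) and a sparse part (inter-cluster links), can be approximated with a generic robust-PCA-style routine such as the sketch below. The alternating thresholding scheme, its parameters and the toy two-cluster matrix are assumptions for illustration, not the authors' formalism or their connectivity measures.

      # Generic low-rank + sparse split of an adjacency matrix A ~ L + S, via a few
      # rounds of truncated SVD and entrywise soft-thresholding.
      import numpy as np

      def low_rank_sparse(A, rank, sparse_thresh, n_iter=25):
          L = np.zeros_like(A)
          S = np.zeros_like(A)
          for _ in range(n_iter):
              U, s, Vt = np.linalg.svd(A - S, full_matrices=False)
              L = (U[:, :rank] * s[:rank]) @ Vt[:rank]                      # best rank-r fit to A - S
              R = A - L
              S = np.sign(R) * np.maximum(np.abs(R) - sparse_thresh, 0.0)   # soft threshold
          return L, S

      block = np.ones((10, 10))
      A = np.block([[block, np.zeros((10, 10))], [np.zeros((10, 10)), block]])
      A[3, 15] = A[15, 3] = 1.0                                             # sparse inter-cluster links
      L, S = low_rank_sparse(A, rank=2, sparse_thresh=0.3)
      print(np.linalg.matrix_rank(np.round(L, 2)), int((np.abs(S) > 0.3).sum()))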

  9. Achieving excellence in veterans healthcare--a balanced scorecard approach.

    PubMed

    Biro, Lawrence A; Moreland, Michael E; Cowgill, David E

    2003-01-01

    This article provides healthcare administrators and managers with a framework and model for developing a balanced scorecard and demonstrates the remarkable success of this process, which brings focus to leadership decisions about the allocation of resources. This scorecard was developed as a top management tool designed to structure multiple priorities of a large, complex, integrated healthcare system and to establish benchmarks to measure success in achieving targets for performance in identified areas. Significant benefits and positive results were derived from the implementation of the balanced scorecard, based upon benchmarks considered to be critical success factors. The network's chief executive officer and top leadership team set and articulated the network's primary operating principles: quality and efficiency in the provision of comprehensive healthcare and support services. Under the weighted benchmarks of the balanced scorecard, the facilities in the network were mandated to adhere to one non-negotiable tenet: providing care that is second to none. The balanced scorecard approach to leadership continuously ensures that this is the primary goal and focal point for all activity within the network. To that end, systems are always in place to ensure that the network is fully successful on all performance measures relating to quality.

  10. A statistical summary of data from the U.S. Geological Survey's national water quality networks

    USGS Publications Warehouse

    Smith, R.A.; Alexander, R.B.

    1983-01-01

    The U.S. Geological Survey operates two nationwide networks to monitor water quality, the National Hydrologic Bench-Mark Network and the National Stream Quality Accounting Network (NASQAN). The Bench-Mark network is composed of 51 stations in small drainage basins which are as close as possible to their natural state, with no human influence and little likelihood of future development. Stations in the NASQAN program are located to monitor flow from accounting units (subregional drainage basins) which collectively encompass the entire land surface of the nation. Data collected at both networks include streamflow, concentrations of major inorganic constituents, nutrients, and trace metals. The goals of the two water quality sampling programs include the determination of mean constituent concentrations and transport rates as well as the analysis of long-term trends in those variables. This report presents a station-by-station statistical summary of data from the two networks for the period 1974 through 1981. (Author's abstract)

  11. Supply network configuration—A benchmarking problem

    NASA Astrophysics Data System (ADS)

    Brandenburg, Marcus

    2018-03-01

    Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.

  12. Benchmarking of energy consumption in municipal wastewater treatment plants - a survey of over 200 plants in Italy.

    PubMed

    Vaccari, M; Foladori, P; Nembrini, S; Vitali, F

    2018-05-01

    One of the largest surveys in Europe about energy consumption in Italian wastewater treatment plants (WWTPs) is presented, based on 241 WWTPs and a total population equivalent (PE) of more than 9,000,000 PE. The study contributes towards standardised, resilient data and benchmarking and towards identifying potential energy savings. In the energy benchmark, three indicators were used: specific energy consumption expressed per population equivalent (kWh PE⁻¹ year⁻¹), per cubic meter (kWh/m³), and per unit of chemical oxygen demand (COD) removed (kWh/kgCOD). The indicator kWh/m³, even though widely applied, resulted in a biased benchmark, because it is highly influenced by stormwater and infiltration. Plants with combined networks (often used in Europe) showed an apparently better energy performance. Conversely, the indicator kWh PE⁻¹ year⁻¹ resulted in a more meaningful definition of a benchmark. High energy efficiency was associated with: (i) large plant capacity, (ii) higher COD concentration in wastewater, (iii) separate sewer systems, (iv) capacity utilisation over 80%, and (v) high organic loads, but without overloading. The 25th percentile was proposed as a benchmark for four size classes: 23 kWh PE⁻¹ y⁻¹ for large plants > 100,000 PE; 42 kWh PE⁻¹ y⁻¹ for capacity 10,000 < PE < 100,000; 48 kWh PE⁻¹ y⁻¹ for capacity 2,000 < PE < 10,000; and 76 kWh PE⁻¹ y⁻¹ for small plants < 2,000 PE.
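
    The indicator arithmetic and percentile benchmark described above amount to a few lines of computation, sketched below for one size class. The plant records are invented and do not come from the survey.

      # Sketch of the kWh per population-equivalent indicator and a 25th-percentile
      # benchmark for one size class; the plant records below are invented.
      import numpy as np

      plants = [   # (annual kWh, population equivalents served), all in the > 100,000 PE class
          (3.0e6, 120_000), (4.1e6, 150_000), (2.6e6, 110_000), (5.5e6, 180_000),
      ]
      kwh_per_pe_year = np.array([kwh / pe for kwh, pe in plants])
      benchmark = np.percentile(kwh_per_pe_year, 25)   # best-quartile value used as the target
      print(np.round(kwh_per_pe_year, 1), "benchmark:", round(benchmark, 1), "kWh PE^-1 yr^-1")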

  13. Adaptive Neuron Model: An architecture for the rapid learning of nonlinear topological transformations

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance is benchmarked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware and the parameters capturing the functional transformation downloaded onto the system. This neuroprocessor, capable of 10^9 ops/sec, was interfaced directly to a three degree of freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microsec.

  14. Benchmarking 2009: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Bearman, Jessica; Kilgore, Gin

    2009-01-01

    "Benchmarking 2009: Trends in Education Philanthropy" is Grantmakers for Education's (GFE) second annual study of grantmaking trends and priorities among members. As a national network dedicated to improving education outcomes through philanthropy, GFE members are mindful of their role in fostering greater knowledge in the field. They believe it's…

  15. Benchmarking 2011: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  16. Evolutionary Wavelet Neural Network ensembles for breast cancer and Parkinson's disease prediction.

    PubMed

    Khan, Maryam Mahsal; Mendes, Alexandre; Chalup, Stephan K

    2018-01-01

    Wavelet Neural Networks are a combination of neural networks and wavelets and have been mostly used in the area of time-series prediction and control. Recently, Evolutionary Wavelet Neural Networks have been employed to develop cancer prediction models. The present study proposes to use ensembles of Evolutionary Wavelet Neural Networks. The search for a high quality ensemble is directed by a fitness function that incorporates the accuracy of the classifiers both independently and as part of the ensemble itself. The ensemble approach is tested on three publicly available biomedical benchmark datasets, one on Breast Cancer and two on Parkinson's disease, using a 10-fold cross-validation strategy. Our experimental results show that, for the first dataset, the performance was similar to previous studies reported in literature. On the second dataset, the Evolutionary Wavelet Neural Network ensembles performed better than all previous methods. The third dataset is relatively new and this study is the first to report benchmark results.

  17. Evolutionary Wavelet Neural Network ensembles for breast cancer and Parkinson’s disease prediction

    PubMed Central

    Mendes, Alexandre; Chalup, Stephan K.

    2018-01-01

    Wavelet Neural Networks are a combination of neural networks and wavelets and have been mostly used in the area of time-series prediction and control. Recently, Evolutionary Wavelet Neural Networks have been employed to develop cancer prediction models. The present study proposes to use ensembles of Evolutionary Wavelet Neural Networks. The search for a high quality ensemble is directed by a fitness function that incorporates the accuracy of the classifiers both independently and as part of the ensemble itself. The ensemble approach is tested on three publicly available biomedical benchmark datasets, one on Breast Cancer and two on Parkinson’s disease, using a 10-fold cross-validation strategy. Our experimental results show that, for the first dataset, the performance was similar to previous studies reported in literature. On the second dataset, the Evolutionary Wavelet Neural Network ensembles performed better than all previous methods. The third dataset is relatively new and this study is the first to report benchmark results. PMID:29420578

  18. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Ediger, David; Jiang, Karl

    2009-02-15

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
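
    The betweenness-centrality kernel parallelized in this work is available in single-threaded form in NetworkX (Brandes' algorithm), including the source-sampling approximation mentioned for the IMDb analysis. The sketch below only illustrates that kernel on a small synthetic small-world graph; it is not the authors' lock-free parallel implementation, and the graph parameters are arbitrary.

      # Single-threaded illustration of the betweenness-centrality kernel (Brandes'
      # algorithm as implemented in NetworkX), with source sampling for approximation.
      import networkx as nx

      G = nx.watts_strogatz_graph(n=1000, k=6, p=0.1, seed=0)   # a small-world test graph
      exact = nx.betweenness_centrality(G)                       # exact (all source vertices)
      approx = nx.betweenness_centrality(G, k=100, seed=0)       # sample 100 source vertices

      top = max(exact, key=exact.get)
      print(top, round(exact[top], 4), round(approx[top], 4))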

  19. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Ediger, David; Jiang, Karl

    2009-05-29

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in the HPCS SSCA#2 Graph Analysis benchmark, which has been extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the ThreadStorm processor, and a single-socket Sun multicore server with the UltraSparc T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.

  20. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts a foundation and infrastructure have been built which allow short-duration benchmarking studies yielding results gleaned from world-class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  1. Benchmarking to improve the quality of cystic fibrosis care.

    PubMed

    Schechter, Michael S

    2012-11-01

    Benchmarking involves the ascertainment of healthcare programs with most favorable outcomes as a means to identify and spread effective strategies for delivery of care. The recent interest in the development of patient registries for patients with cystic fibrosis (CF) has been fueled in part by an interest in using them to facilitate benchmarking. This review summarizes reports of how benchmarking has been operationalized in attempts to improve CF care. Although certain goals of benchmarking can be accomplished with an exclusive focus on registry data analysis, benchmarking programs in Germany and the United States have supplemented these data analyses with exploratory interactions and discussions to better understand successful approaches to care and encourage their spread throughout the care network. Benchmarking allows the discovery and facilitates the spread of effective approaches to care. It provides a pragmatic alternative to traditional research methods such as randomized controlled trials, providing insights into methods that optimize delivery of care and allowing judgments about the relative effectiveness of different therapeutic approaches.

  2. Congestion Avoidance Testbed Experiments. Volume 2

    NASA Technical Reports Server (NTRS)

    Denny, Barbara A.; Lee, Diane S.; McKenney, Paul E., Sr.; Lee, Danny

    1994-01-01

    DARTnet provides an excellent environment for executing networking experiments. Since the network is private and spans the continental United States, it gives researchers a great opportunity to test network behavior under controlled conditions. However, this opportunity is not available very often, and therefore a support environment for such testing is lacking. To help remedy this situation, part of SRI's effort in this project was devoted to advancing the state of the art in the techniques used for benchmarking network performance. The second objective of SRI's effort in this project was to advance networking technology in the area of traffic control, and to test our ideas on DARTnet, using the tools we developed to improve benchmarking networks. Networks are becoming more common and are being used by more and more people. The applications, such as multimedia conferencing and distributed simulations, are also placing greater demand on the resources the networks provide. Hence, new mechanisms for traffic control must be created to enable their networks to serve the needs of their users. SRI's objective, therefore, was to investigate a new queueing and scheduling approach that will help to meet the needs of a large, diverse user population in a "fair" way.

  3. Analysis of Photonic Networks for a Chip Multiprocessor Using Scientific Applications

    DTIC Science & Technology

    2009-05-01

    Analysis of Photonic Networks for a Chip Multiprocessor Using Scientific Applications. Gilbert Hendry, Shoaib Kamil, Aleksandr Biberman, Johnnie... electronic networks-on-chip warrants investigating real application traces on functionally comparable photonic and electronic network designs. We... network can achieve 75× improvement in energy efficiency for synthetic benchmarks and up to 37× improvement for real scientific applications

  4. Identifying key genes in glaucoma based on a benchmarked dataset and the gene regulatory network.

    PubMed

    Chen, Xi; Wang, Qiao-Ling; Zhang, Meng-Hui

    2017-10-01

    The current study aimed to identify key genes in glaucoma based on a benchmarked dataset and gene regulatory network (GRN). Local and global noise was added to the gene expression dataset to produce a benchmarked dataset. Differentially-expressed genes (DEGs) between patients with glaucoma and normal controls were identified utilizing the Linear Models for Microarray Data (Limma) package based on the benchmarked dataset. A total of 5 GRN inference methods, including Zscore, GeneNet, context likelihood of relatedness (CLR) algorithm, Partial Correlation coefficient with Information Theory (PCIT) and GEne Network Inference with Ensemble of Trees (Genie3) were evaluated using receiver operating characteristic (ROC) and precision and recall (PR) curves. The inference method with the best performance was selected to construct the GRN. Subsequently, topological centrality analysis (degree, closeness and betweenness) was conducted to identify key genes in the GRN of glaucoma. Finally, the key genes were validated by performing reverse transcription-quantitative polymerase chain reaction (RT-qPCR). A total of 176 DEGs were detected from the benchmarked dataset. The ROC and PR curves of the 5 methods were analyzed and it was determined that Genie3 had a clear advantage over the other methods; thus, Genie3 was used to construct the GRN. Following topological centrality analysis, 14 key genes for glaucoma were identified, including IL6, EPHA2 and GSTT1, and 5 of these 14 key genes were validated by RT-qPCR. Therefore, the current study identified 14 key genes in glaucoma, which may be potential biomarkers to use in the diagnosis of glaucoma and aid in identifying the molecular mechanism of this disease.

  5. Space network scheduling benchmark: A proof-of-concept process for technology transfer

    NASA Technical Reports Server (NTRS)

    Moe, Karen; Happell, Nadine; Hayden, B. J.; Barclay, Cathy

    1993-01-01

    This paper describes a detailed proof-of-concept activity to evaluate flexible scheduling technology as implemented in the Request Oriented Scheduling Engine (ROSE) and applied to Space Network (SN) scheduling. The criteria developed for an operational evaluation of a reusable scheduling system are addressed, including a methodology to prove that the proposed system performs at least as well as the current system in function and performance. The improvement offered by the new technology must be demonstrated and evaluated against the cost of making changes. Finally, there is a need to show significant improvement in SN operational procedures. Successful completion of a proof-of-concept would eventually lead to an operational concept and implementation transition plan, which is outside the scope of this paper. However, a high-fidelity benchmark using actual SN scheduling requests has been designed to test the ROSE scheduling tool. The benchmark evaluation methodology, scheduling data, and preliminary results are described.

  6. Efficient ibuprofen delivery from anhydrous semisolid formulation based on a novel cross-linked silicone polymer network: an in vitro and in vivo study.

    PubMed

    Aliyar, Hyder; Huber, Robert; Loubert, Gary; Schalau, Gerald

    2014-07-01

    The use of silicone as a primary polymer in topical semisolid pharmaceutical formulations is infrequent. Recent development of novel silicone materials provides an opportunity to investigate their drug delivery efficiencies. In this study, an anhydrous semisolid formulation was prepared using a novel cross-linked silicone polymer network swollen in isododecane. Similar formulations were prepared using petrolatum, an acrylic, or a cellulose polymer. All formulations contained 5% ibuprofen (IBP). In vitro permeability was evaluated for all formulations and a commercial product using human cadaver epidermis. The silicone formulation delivered IBP more efficiently than all other formulations in terms of flux, cumulative amount, and percent drug release. The silicone formulation showed the maximum flux of 85.9 μg·cm(-2)·h(-1) and a cumulative IBP release of 261.6 μg in 8 h, whereas the benchmark showed 20.1 μg·cm(-2)·h(-1) and 30.9 μg, respectively. An in vivo study conducted on rats showed calculated blood AUCs of 59.2 and 17.6 μg·h/g (p < 0.003) for the silicone formulation and the benchmark, respectively. The IBP in excised rat skin was 264 ± 59 μg/g for the silicone formulation and 102 ± 5 μg/g for the benchmark. The results obtained from the in vitro and in vivo studies demonstrate efficient topical IBP delivery by the silicone formulation. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  7. Multilayer Optimization of Heterogeneous Networks Using Grammatical Genetic Programming.

    PubMed

    Fenton, Michael; Lynch, David; Kucera, Stepan; Claussen, Holger; O'Neill, Michael

    2017-09-01

    Heterogeneous cellular networks are composed of macro cells (MCs) and small cells (SCs) in which all cells occupy the same bandwidth. Provision has been made under the third generation partnership project-long term evolution framework for enhanced intercell interference coordination (eICIC) between cell tiers. Expanding on previous works, this paper instruments grammatical genetic programming to evolve control heuristics for heterogeneous networks. Three aspects of the eICIC framework are addressed including setting SC powers and selection biases, MC duty cycles, and scheduling of user equipments (UEs) at SCs. The evolved heuristics yield minimum downlink rates three times higher than a baseline method, and twice that of a state-of-the-art benchmark. Furthermore, a greater number of UEs receive transmissions under the proposed scheme than in either the baseline or benchmark cases.

  8. Stochastic fluctuations and the detectability limit of network communities.

    PubMed

    Floretta, Lucio; Liechti, Jonas; Flammini, Alessandro; De Los Rios, Paolo

    2013-12-01

    We have analyzed the detectability limits of network communities in the framework of the popular Girvan and Newman benchmark. By carefully taking into account the inevitable stochastic fluctuations that affect the construction of each and every instance of the benchmark, we come to the conclusion that the native, putative partition of the network is completely lost even before the in-degree/out-degree ratio becomes equal to that of a structureless Erdős-Rényi network. We develop a simple iterative scheme, analytically well described by an infinite branching process, to provide an estimate of the true detectability limit. Using various algorithms based on modularity optimization, we show that all of them behave (semiquantitatively) in the same way, with the same functional form of the detectability threshold as a function of the network parameters. Because the same behavior has also been found by further modularity-optimization methods and for methods based on different heuristics, we conclude that indeed a correct definition of the detectability limit must take into account the stochastic fluctuations of the network construction.
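
    This is not the authors' iterative estimation scheme, but a minimal sketch of the benchmark setting itself: a planted-partition (Girvan-Newman-style) graph whose inter-community link probability is raised until modularity optimization no longer recovers the planted groups. Parameter values are arbitrary.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def recovered_fraction(p_in, p_out, n_groups=4, group_size=32, seed=0):
    """Build a planted-partition graph and report the largest overlap between
    any detected community and any planted group (1.0 = perfect recovery)."""
    g = nx.planted_partition_graph(n_groups, group_size, p_in, p_out, seed=seed)
    found = greedy_modularity_communities(g)
    planted = [set(p) for p in g.graph["partition"]]
    return max(len(set(c) & p) / group_size for c in found for p in planted)

# Recovery degrades as inter-community links become more frequent.
for p_out in (0.02, 0.05, 0.10, 0.20):
    print(p_out, round(recovered_fraction(p_in=0.5, p_out=p_out), 2))
```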

  9. Echo state networks with filter neurons and a delay&sum readout.

    PubMed

    Holzmann, Georg; Hauser, Helmut

    2010-03-01

    Echo state networks (ESNs) are a novel approach to recurrent neural network training with the advantage of a very simple and linear learning algorithm. It has been demonstrated that ESNs outperform other methods on a number of benchmark tasks. Although the approach is appealing, there are still some inherent limitations in the original formulation. Here we suggest two enhancements of this network model. First, the previously proposed idea of filters in neurons is extended to arbitrary infinite impulse response (IIR) filter neurons. This enables such networks to learn multiple attractors and signals at different timescales, which is especially important for modeling real-world time series. Second, a delay&sum readout is introduced, which adds trainable delays in the synaptic connections of output neurons and therefore vastly improves the memory capacity of echo state networks. It is shown, on commonly used benchmark tasks and real-world examples, that this new structure is able to significantly outperform standard ESNs and other state-of-the-art models for nonlinear dynamical system modeling. Copyright 2009 Elsevier Ltd. All rights reserved.
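
    For readers unfamiliar with the baseline being extended, the sketch below is a plain leaky-integrator ESN with a ridge-regression readout, without the IIR filter neurons or delay&sum readout proposed in the paper; the reservoir size, leak rate and toy task are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir: fixed random recurrent weights, rescaled to a target spectral radius.
n_res, spectral_radius, leak = 200, 0.9, 0.3
W = rng.standard_normal((n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=n_res)

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a noisy sine wave.
t = np.linspace(0, 60, 1500)
signal = np.sin(t) + 0.05 * rng.standard_normal(t.size)
X = run_reservoir(signal[:-1])
y = signal[1:]

# Linear readout trained by ridge regression (the "simple and linear" step).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```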

  10. The philosophy of benchmark testing a standards-based picture archiving and communications system.

    PubMed

    Richardson, N E; Thomas, J A; Lyche, D K; Romlein, J; Norton, G S; Dolecek, Q E

    1999-05-01

    The Department of Defense issued its requirements for a Digital Imaging Network-Picture Archiving and Communications System (DIN-PACS) in a Request for Proposals (RFP) to industry in January 1997, with subsequent contracts being awarded in November 1997 to the Agfa Division of Bayer and IBM Global Government Industry. The Government's technical evaluation process consisted of evaluating a written technical proposal as well as conducting a benchmark test of each proposed system at the vendor's test facility. The purpose of benchmark testing was to evaluate the performance of the fully integrated system in a simulated operational environment. The benchmark test procedures and test equipment were developed through a joint effort between the Government, academic institutions, and private consultants. Herein the authors discuss the resources required and the methods used to benchmark test a standards-based PACS.

  11. Adverse Outcome Pathway Network Analyses: Techniques and benchmarking the AOPwiki

    EPA Science Inventory

    Abstract: As the community of toxicological researchers, risk assessors, and risk managers adopt the adverse outcome pathway (AOP) paradigm for organizing toxicological knowledge, the number and diversity of adverse outcome pathways and AOP networks are continuing to grow. This ...

  12. Reverse Engineering Validation using a Benchmark Synthetic Gene Circuit in Human Cells

    PubMed Central

    Kang, Taek; White, Jacob T.; Xie, Zhen; Benenson, Yaakov; Sontag, Eduardo; Bleris, Leonidas

    2013-01-01

    Multi-component biological networks are often understood incompletely, in large part due to the lack of reliable and robust methodologies for network reverse engineering and characterization. As a consequence, developing automated and rigorously validated methodologies for unraveling the complexity of biomolecular networks in human cells remains a central challenge to life scientists and engineers. Today, when it comes to experimental and analytical requirements, there exists a great deal of diversity in reverse engineering methods, which renders the independent validation and comparison of their predictive capabilities difficult. In this work we introduce an experimental platform customized for the development and verification of reverse engineering and pathway characterization algorithms in mammalian cells. Specifically, we stably integrate a synthetic gene network in human kidney cells and use it as a benchmark for validating reverse engineering methodologies. The network, which is orthogonal to endogenous cellular signaling, contains a small set of regulatory interactions that can be used to quantify the reconstruction performance. By performing successive perturbations to each modular component of the network and comparing protein and RNA measurements, we study the conditions under which we can reliably reconstruct the causal relationships of the integrated synthetic network. PMID:23654266

  13. Reverse engineering validation using a benchmark synthetic gene circuit in human cells.

    PubMed

    Kang, Taek; White, Jacob T; Xie, Zhen; Benenson, Yaakov; Sontag, Eduardo; Bleris, Leonidas

    2013-05-17

    Multicomponent biological networks are often understood incompletely, in large part due to the lack of reliable and robust methodologies for network reverse engineering and characterization. As a consequence, developing automated and rigorously validated methodologies for unraveling the complexity of biomolecular networks in human cells remains a central challenge to life scientists and engineers. Today, when it comes to experimental and analytical requirements, there exists a great deal of diversity in reverse engineering methods, which renders the independent validation and comparison of their predictive capabilities difficult. In this work we introduce an experimental platform customized for the development and verification of reverse engineering and pathway characterization algorithms in mammalian cells. Specifically, we stably integrate a synthetic gene network in human kidney cells and use it as a benchmark for validating reverse engineering methodologies. The network, which is orthogonal to endogenous cellular signaling, contains a small set of regulatory interactions that can be used to quantify the reconstruction performance. By performing successive perturbations to each modular component of the network and comparing protein and RNA measurements, we study the conditions under which we can reliably reconstruct the causal relationships of the integrated synthetic network.

  14. The Correlation Fractal Dimension of Complex Networks

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Liu, Zhenzhen; Wang, Mogei

    2013-05-01

    The fractality of complex networks is studied by estimating the correlation dimensions of the networks. Compared with previous algorithms for estimating the box dimension, our algorithm achieves a significant reduction in time complexity. For four benchmark cases tested, that is, the Escherichia coli (E. coli) metabolic network, the Homo sapiens protein interaction network (H. sapiens PIN), the Saccharomyces cerevisiae protein interaction network (S. cerevisiae PIN) and the World Wide Web (WWW), experiments are provided to demonstrate the validity of our algorithm.
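
    The abstract does not give the algorithm itself. One common way to define a correlation dimension for a network is a Grassberger-Procaccia-style correlation sum over shortest-path distances, sketched below on a synthetic small-world graph; this illustrates the general idea only and is not the authors' method.

```python
import numpy as np
import networkx as nx

def correlation_sum(g, radii):
    """Fraction of node pairs whose shortest-path distance is below each radius."""
    nodes = list(g)
    dists = []
    for i, u in enumerate(nodes):
        later = set(nodes[i + 1:])
        lengths = nx.single_source_shortest_path_length(g, u)
        dists.extend(d for v, d in lengths.items() if v in later)
    dists = np.array(dists, dtype=float)
    return np.array([(dists < r).mean() for r in radii])

# Toy example: the slope of log C(r) versus log r over the scaling region is
# taken as the correlation dimension.
g = nx.connected_watts_strogatz_graph(500, 6, 0.1, seed=1)
radii = np.arange(2, 8)
C = correlation_sum(g, radii)
slope = np.polyfit(np.log(radii), np.log(C), 1)[0]
print("estimated correlation dimension ~", round(slope, 2))
```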

  15. HyspIRI Low Latency Concept and Benchmarks

    NASA Technical Reports Server (NTRS)

    Mandl, Dan

    2010-01-01

    Topics include HyspIRI low latency data ops concept, HyspIRI data flow, ongoing efforts, experiment with Web Coverage Processing Service (WCPS) approach to injecting new algorithms into SensorWeb, low fidelity HyspIRI IPM testbed, compute cloud testbed, open cloud testbed environment, Global Lambda Integrated Facility (GLIF) and OCC collaboration with Starlight, delay tolerant network (DTN) protocol benchmarking, and EO-1 configuration for preliminary DTN prototype.

  16. Spiking neural network simulation: memory-optimal synaptic event scheduling.

    PubMed

    Stewart, Robert D; Gurney, Kevin N

    2011-06-01

    Spiking neural network simulations incorporating variable transmission delays require synaptic events to be scheduled prior to delivery. Conventional methods have memory requirements that scale with the total number of synapses in a network. We introduce novel scheduling algorithms for both discrete and continuous event delivery, where the memory requirement scales instead with the number of neurons. Superior algorithmic performance is demonstrated using large-scale benchmark network simulations.
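
    The paper's scheduling algorithms are not detailed in the abstract. A generic illustration of delay-aware event scheduling is a circular buffer indexed by delivery time slot, storing one entry per emitted spike and delay rather than one per synapse; the sketch below is such a ring buffer, not the authors' algorithm.

```python
class SpikeScheduler:
    """Ring buffer of future delivery slots. Each emitted spike is stored once
    per (source neuron, delay) pair instead of once per synapse."""

    def __init__(self, max_delay):
        self.max_delay = max_delay
        self.slots = [[] for _ in range(max_delay + 1)]
        self.t = 0

    def schedule(self, source, delay):
        # Place the spike in the slot where it becomes deliverable.
        self.slots[(self.t + delay) % (self.max_delay + 1)].append((source, delay))

    def advance(self):
        """Return spikes due at the current time step, then move time forward."""
        idx = self.t % (self.max_delay + 1)
        due, self.slots[idx] = self.slots[idx], []
        self.t += 1
        return due

# Example: neuron 7 spikes at t=0 with synaptic delays of 2 and 5 steps.
sched = SpikeScheduler(max_delay=8)
sched.schedule(source=7, delay=2)
sched.schedule(source=7, delay=5)
for step in range(6):
    print(step, sched.advance())
```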

  17. [NEOCAT, surveillance network of catheter-related bloodstream infections in neonates: 2010 data].

    PubMed

    L'Hériteau, F; Lacavé, L; Leboucher, B; Decousser, J-W; De Chillaz, C; Astagneau, P; Aujard, Y

    2012-09-01

    The NEOCAT surveillance network was implemented in 2006 in order to address catheter-associated bloodstream infections (BSIs) in neonates. The results for 2010 surveillance are presented herein. Neonatal intensive care units (NICUs) participated in the study on a voluntary basis. Umbilical catheters (UCs) and central venous catheters (CVCs) were analyzed separately. In 2010, 26 NICUs participated. Overall, 2953 neonates were included (median weight, 1550 g; median gestational age, 32 weeks). These neonates had 2551 UCs (median insertion duration, 4 days) and 2147 CVCs (median insertion duration, 12 days). Thirty-three BSIs associated with UCs were reported, yielding a 2.9/1000 UC-day incidence density (95% confidence interval [95% CI], 1.9-3.8). UC-associated BSIs appeared after a median period of 5 days after UC insertion. The main microorganisms isolated from blood cultures were coagulase-negative staphylococci (CNS, n=27), S. aureus (n=3), and Enterobacteriaceae (n=5). Three hundred and six CVC-associated BSIs were recorded, yielding an 11.2/1000 CVC-day incidence density (95% CI, 10.0-12.5). These BSIs occurred after a median period of 12 days after CVC insertion. The main microorganisms were CNS (83%), S. aureus (6%), and Enterobacteriaceae (5%). The NEOCAT network provides a useful benchmark for participating wards. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  18. Understanding the Collaborative Planning Process in Homeless Services: Networking, Advocacy, and Local Government Support May Reduce Service Gaps.

    PubMed

    Jarpe, Meghan; Mosley, Jennifer E; Smith, Bikki Tran

    2018-06-07

    The Continuum of Care (CoC) process, a nationwide system of regional collaborative planning networks addressing homelessness, is the chief administrative method utilized by the US Department of Housing and Urban Development to prevent and reduce homelessness in the United States. The objective of this study is to provide a benchmark comprehensive picture of the structure and practices of CoC networks, as well as information about which of those factors are associated with lower service gaps, a key goal of the initiative. A national survey of the complete population of CoCs in the United States was conducted in 2014 (n = 312, 75% response rate). This survey is the first to gather information on all available CoC networks. Ordinary least squares (OLS) regression was used to determine the relationship between internal networking, advocacy frequency, government investment, and degree of service gaps for CoCs of different sizes. The setting was the United States; participants were the lead contacts for CoCs (n = 312) that responded to the 2014 survey, and the main outcome was the severity of regional service gaps for people who are homeless. Descriptive statistics show that CoCs vary considerably in regard to size, leadership, membership, and other organizational characteristics. Several independent variables were associated with reduced regional service gaps: networking for small CoCs (β = -.39, P < .05) and local government support for midsized CoCs (β = -.10, P < .05). For large CoCs, local government support was again significantly associated with lower service gaps, but there was also a significant interaction effect between advocacy and networking (β = .04, P < .05). To reduce service gaps and better serve the homeless, CoCs should consider taking steps to improve networking, particularly when advocacy is out of reach, and cultivate local government investment and support.

  19. Seeding for pervasively overlapping communities

    NASA Astrophysics Data System (ADS)

    Lee, Conrad; Reid, Fergal; McDaid, Aaron; Hurley, Neil

    2011-06-01

    In some social and biological networks, the majority of nodes belong to multiple communities. It has recently been shown that a number of the algorithms specifically designed to detect overlapping communities do not perform well in such highly overlapping settings. Here, we consider one class of these algorithms, those which optimize a local fitness measure, typically by using a greedy heuristic to expand a seed into a community. We perform synthetic benchmarks which indicate that an appropriate seeding strategy becomes more important as the extent of community overlap increases. We find that distinct cliques provide the best seeds. We find further support for this seeding strategy with benchmarks on a Facebook network and the yeast interactome.
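
    A minimal sketch of the clique-seeding idea, using maximal cliques as seeds and growing each one greedily under a simple local fitness, is shown below; the fitness function and growth rule are illustrative stand-ins, not the specific algorithms benchmarked in the paper.

```python
import networkx as nx

def local_fitness(g, community):
    """Internal edges divided by (internal + boundary) edges; larger is better."""
    internal = boundary = 0
    for u in community:
        for v in g[u]:
            if v in community:
                internal += 1
            else:
                boundary += 1
    internal //= 2                      # each internal edge was counted twice
    total = internal + boundary
    return internal / total if total else 0.0

def grow_from_seed(g, seed):
    """Greedily add neighbouring nodes while the local fitness improves."""
    community = set(seed)
    improved = True
    while improved:
        improved = False
        for cand in {n for u in community for n in g[u]} - community:
            if local_fitness(g, community | {cand}) > local_fitness(g, community):
                community.add(cand)
                improved = True
    return community

g = nx.karate_club_graph()
seeds = [c for c in nx.find_cliques(g) if len(c) >= 4]      # clique seeds
communities = [grow_from_seed(g, s) for s in seeds]
print(len(communities), "communities grown from", len(seeds), "clique seeds")
```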

  20. Benchmarking the quality of breast cancer care in a nationwide voluntary system: the first five-year results (2003–2007) from Germany as a proof of concept

    PubMed Central

    Brucker, Sara Y; Schumacher, Claudia; Sohn, Christoph; Rezai, Mahdi; Bamberg, Michael; Wallwiener, Diethelm

    2008-01-01

    Background The main study objectives were: to establish a nationwide voluntary collaborative network of breast centres with independent data analysis; to define suitable quality indicators (QIs) for benchmarking the quality of breast cancer (BC) care; to demonstrate existing differences in BC care quality; and to show that BC care quality improved with benchmarking from 2003 to 2007. Methods BC centres participated voluntarily in a scientific benchmarking procedure. A generic XML-based data set was developed and used for data collection. Nine guideline-based quality targets serving as rate-based QIs were initially defined, reviewed annually and modified or expanded accordingly. QI changes over time were analysed descriptively. Results During 2003–2007, respective increases in participating breast centres and postoperatively confirmed BCs were from 59 to 220 and from 5,994 to 31,656 (> 60% of new BCs/year in Germany). Starting from 9 process QIs, 12 QIs were developed by 2007 as surrogates for long-term outcome. Results for most QIs increased. From 2003 to 2007, the most notable increases seen were for preoperative histological confirmation of diagnosis (58% (in 2003) to 88% (in 2007)), appropriate endocrine therapy in hormone receptor-positive patients (27 to 93%), appropriate radiotherapy after breast-conserving therapy (20 to 79%) and appropriate radiotherapy after mastectomy (8 to 65%). Conclusion Nationwide external benchmarking of BC care is feasible and successful. The benchmarking system described allows both comparisons among participating institutions as well as the tracking of changes in average quality of care over time for the network as a whole. Marked QI increases indicate improved quality of BC care. PMID:19055735

  1. Benchmarking the quality of breast cancer care in a nationwide voluntary system: the first five-year results (2003-2007) from Germany as a proof of concept.

    PubMed

    Brucker, Sara Y; Schumacher, Claudia; Sohn, Christoph; Rezai, Mahdi; Bamberg, Michael; Wallwiener, Diethelm

    2008-12-02

    The main study objectives were: to establish a nationwide voluntary collaborative network of breast centres with independent data analysis; to define suitable quality indicators (QIs) for benchmarking the quality of breast cancer (BC) care; to demonstrate existing differences in BC care quality; and to show that BC care quality improved with benchmarking from 2003 to 2007. BC centres participated voluntarily in a scientific benchmarking procedure. A generic XML-based data set was developed and used for data collection. Nine guideline-based quality targets serving as rate-based QIs were initially defined, reviewed annually and modified or expanded accordingly. QI changes over time were analysed descriptively. During 2003-2007, respective increases in participating breast centres and postoperatively confirmed BCs were from 59 to 220 and from 5,994 to 31,656 (> 60% of new BCs/year in Germany). Starting from 9 process QIs, 12 QIs were developed by 2007 as surrogates for long-term outcome. Results for most QIs increased. From 2003 to 2007, the most notable increases seen were for preoperative histological confirmation of diagnosis (58% (in 2003) to 88% (in 2007)), appropriate endocrine therapy in hormone receptor-positive patients (27 to 93%), appropriate radiotherapy after breast-conserving therapy (20 to 79%) and appropriate radiotherapy after mastectomy (8 to 65%). Nationwide external benchmarking of BC care is feasible and successful. The benchmarking system described allows both comparisons among participating institutions as well as the tracking of changes in average quality of care over time for the network as a whole. Marked QI increases indicate improved quality of BC care.

  2. Efficient tree tensor network states (TTNS) for quantum chemistry: Generalizations of the density matrix renormalization group algorithm

    NASA Astrophysics Data System (ADS)

    Nakatani, Naoki; Chan, Garnet Kin-Lic

    2013-04-01

    We investigate tree tensor network states for quantum chemistry. Tree tensor network states represent one of the simplest generalizations of matrix product states and the density matrix renormalization group. While matrix product states encode a one-dimensional entanglement structure, tree tensor network states encode a tree entanglement structure, allowing for a more flexible description of general molecules. We describe an optimal tree tensor network state algorithm for quantum chemistry. We introduce the concept of half-renormalization, which greatly improves the efficiency of the calculations. Using our efficient formulation we demonstrate the strengths and weaknesses of tree tensor network states versus matrix product states. We carry out benchmark calculations on both tree systems (hydrogen trees and π-conjugated dendrimers) and non-tree molecules (hydrogen chains, nitrogen dimer, and chromium dimer). In general, tree tensor network states require far fewer renormalized states to achieve the same accuracy as matrix product states. In non-tree molecules, whether this translates into computational savings is system dependent, due to the higher prefactor and computational scaling associated with tree algorithms. In tree-like molecules, tree tensor network states are clearly superior to matrix product states. As an illustration, our largest dendrimer calculation with tree tensor network states correlates 110 electrons in 110 active orbitals.

  3. TRACING CO-REGULATORY NETWORK DYNAMICS IN NOISY, SINGLE-CELL TRANSCRIPTOME TRAJECTORIES.

    PubMed

    Cordero, Pablo; Stuart, Joshua M

    2017-01-01

    The availability of gene expression data at the single cell level makes it possible to probe the molecular underpinnings of complex biological processes such as differentiation and oncogenesis. Promising new methods have emerged for reconstructing a progression 'trajectory' from static single-cell transcriptome measurements. However, it remains unclear how to adequately model the appreciable level of noise in these data to elucidate gene regulatory network rewiring. Here, we present a framework called Single Cell Inference of MorphIng Trajectories and their Associated Regulation (SCIMITAR) that infers progressions from static single-cell transcriptomes by employing a continuous parametrization of Gaussian mixtures in high-dimensional curves. SCIMITAR yields rich models from the data that highlight genes with expression and co-expression patterns that are associated with the inferred progression. Further, SCIMITAR extracts regulatory states from the implicated trajectory-evolving co-expression networks. We benchmark the method on simulated data to show that it yields accurate cell ordering and gene network inferences. Applied to the interpretation of a single-cell human fetal neuron dataset, SCIMITAR finds progression-associated genes in cornerstone neural differentiation pathways missed by standard differential expression tests. Finally, by leveraging the rewiring of gene-gene co-expression relations across the progression, the method reveals the rise and fall of co-regulatory states and trajectory-dependent gene modules. These analyses implicate new transcription factors in neural differentiation including putative co-factors for the multi-functional NFAT pathway.

  4. Self-growing neural network architecture using crisp and fuzzy entropy

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.

    1992-01-01

    The paper briefly describes the self-growing neural network algorithm, CID2, which makes decision trees equivalent to hidden layers of a neural network. The algorithm generates a feedforward architecture using crisp and fuzzy entropy measures. The results of a real-life recognition problem of distinguishing defects in a glass ribbon and of a benchmark problem of differentiating two spirals are shown and discussed.

  5. Self-growing neural network architecture using crisp and fuzzy entropy

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.

    1992-01-01

    The paper briefly describes the self-growing neural network algorithm, CID3, which makes decision trees equivalent to hidden layers of a neural network. The algorithm generates a feedforward architecture using crisp and fuzzy entropy measures. The results for a real-life recognition problem of distinguishing defects in a glass ribbon and for a benchmark problem of telling two spirals apart are shown and discussed.

  6. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  7. Novel probabilistic neuroclassifier

    NASA Astrophysics Data System (ADS)

    Hong, Jiang; Serpen, Gursel

    2003-09-01

    A novel probabilistic potential function neural network classifier algorithm to deal with classes which are multi-modally distributed and formed from sets of disjoint pattern clusters is proposed in this paper. The proposed classifier has a number of desirable properties which distinguish it from other neural network classifiers. A complete description of the algorithm in terms of its architecture and the pseudocode is presented. Simulation analysis of the newly proposed neuro-classifier algorithm on a set of benchmark problems is presented. Benchmark problems tested include IRIS, Sonar, Vowel Recognition, Two-Spiral, Wisconsin Breast Cancer, Cleveland Heart Disease and Thyroid Gland Disease. Simulation results indicate that the proposed neuro-classifier performs consistently better for a subset of problems for which other neural classifiers perform relatively poorly.

  8. Influence of Choice of Null Network on Small-World Parameters of Structural Correlation Networks

    PubMed Central

    Hosseini, S. M. Hadi; Kesler, Shelli R.

    2013-01-01

    In recent years, coordinated variations in brain morphology (e.g., volume, thickness) have been employed as a measure of structural association between brain regions to infer large-scale structural correlation networks. Recent evidence suggests that brain networks constructed in this manner are inherently more clustered than random networks of the same size and degree. Thus, null networks constructed by randomizing topology are not a good choice for benchmarking small-world parameters of these networks. In the present report, we investigated the influence of choice of null networks on small-world parameters of gray matter correlation networks in healthy individuals and survivors of acute lymphoblastic leukemia. Three types of null networks were studied: 1) networks constructed by topology randomization (TOP), 2) networks matched to the distributional properties of the observed covariance matrix (HQS), and 3) networks generated from correlation of randomized input data (COR). The results revealed that the choice of null network not only influences the estimated small-world parameters, it also influences the results of between-group differences in small-world parameters. In addition, at higher network densities, the choice of null network influences the direction of group differences in network measures. Our data suggest that the choice of null network is quite crucial for interpretation of group differences in small-world parameters of structural correlation networks. We argue that none of the available null models is perfect for estimation of small-world parameters for correlation networks and the relative strengths and weaknesses of the selected model should be carefully considered with respect to obtained network measures. PMID:23840672
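
    To make the comparison concrete, the sketch below computes clustering and characteristic path length for a graph and normalizes them by a degree-preserving rewired null network, which is one common instantiation of topology randomization; it is not the authors' pipeline, and the test graph is synthetic.

```python
import networkx as nx

def small_world_params(g, seed=0):
    """Clustering and characteristic path length of g, each normalised by a
    degree-preserving, connectivity-preserving rewired null network."""
    null = g.copy()
    nx.connected_double_edge_swap(null, nswap=g.number_of_edges(), seed=seed)
    gamma = nx.average_clustering(g) / nx.average_clustering(null)
    lam = (nx.average_shortest_path_length(g)
           / nx.average_shortest_path_length(null))
    return gamma, lam, gamma / lam      # sigma = gamma/lambda > 1: small-world-like

g = nx.connected_watts_strogatz_graph(200, 8, 0.1, seed=1)
gamma, lam, sigma = small_world_params(g)
print(f"gamma={gamma:.2f} lambda={lam:.2f} sigma={sigma:.2f}")
```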

  9. Promzea: a pipeline for discovery of co-regulatory motifs in maize and other plant species and its application to the anthocyanin and phlobaphene biosynthetic pathways and the Maize Development Atlas.

    PubMed

    Liseron-Monfils, Christophe; Lewis, Tim; Ashlock, Daniel; McNicholas, Paul D; Fauteux, François; Strömvik, Martina; Raizada, Manish N

    2013-03-15

    The discovery of genetic networks and cis-acting DNA motifs underlying their regulation is a major objective of transcriptome studies. The recent release of the maize genome (Zea mays L.) has facilitated in silico searches for regulatory motifs. Several algorithms exist to predict cis-acting elements, but none have been adapted for maize. A benchmark data set was used to evaluate the accuracy of three motif discovery programs: BioProspector, Weeder and MEME. Analysis showed that each motif discovery tool had limited accuracy and appeared to retrieve a distinct set of motifs. Therefore, using the benchmark, statistical filters were optimized to reduce the false discovery ratio, and then remaining motifs from all programs were combined to improve motif prediction. These principles were integrated into a user-friendly pipeline for motif discovery in maize called Promzea, available at http://www.promzea.org and on the Discovery Environment of the iPlant Collaborative website. Promzea was subsequently expanded to include rice and Arabidopsis. Within Promzea, a user enters cDNA sequences or gene IDs; corresponding upstream sequences are retrieved from the maize genome. Predicted motifs are filtered, combined and ranked. Promzea searches the chosen plant genome for genes containing each candidate motif, providing the user with the gene list and corresponding gene annotations. Promzea was validated in silico using a benchmark data set: the Promzea pipeline showed a 22% increase in nucleotide sensitivity compared to the best standalone program tool, Weeder, with equivalent nucleotide specificity. Promzea was also validated by its ability to retrieve the experimentally defined binding sites of transcription factors that regulate the maize anthocyanin and phlobaphene biosynthetic pathways. Promzea predicted additional promoter motifs, and genome-wide motif searches by Promzea identified 127 non-anthocyanin/phlobaphene genes that each contained all five predicted promoter motifs in their promoters, perhaps uncovering a broader co-regulated gene network. Promzea was also tested against tissue-specific microarray data from maize. An online tool customized for promoter motif discovery in plants has been generated called Promzea. Promzea was validated in silico by its ability to retrieve benchmark motifs and experimentally defined motifs and was tested using tissue-specific microarray data. Promzea predicted broader networks of gene regulation associated with the historic anthocyanin and phlobaphene biosynthetic pathways. Promzea is a new bioinformatics tool for understanding transcriptional gene regulation in maize and has been expanded to include rice and Arabidopsis.

  10. Inventory of Exposure-Related Data Systems Sponsored By Federal Agencies

    DTIC Science & Technology

    1992-05-01

    Health and Nutrition Examination Survey (NHANES) .... 1-152 National Herbicide Use Database .......................... 1-157 National Human Adipose Tissue ...Human Adipose Tissue ) ..................................... National Hydrologic Benchmark Network (see National Water Quality Networks Programs...Inorganic compounds (arsenic, iron, lead, mercury, zinc , cadmium , chromium, copper); pesticides (1982 and 1987 data available for 35 pesticides; original

  11. dynGENIE3: dynamical GENIE3 for the inference of gene networks from time series expression data.

    PubMed

    Huynh-Thu, Vân Anh; Geurts, Pierre

    2018-02-21

    The elucidation of gene regulatory networks is one of the major challenges of systems biology. Measurements about genes that are exploited by network inference methods are typically available either in the form of steady-state expression vectors or time series expression data. In our previous work, we proposed the GENIE3 method that exploits variable importance scores derived from Random forests to identify the regulators of each target gene. This method provided state-of-the-art performance on several benchmark datasets, but it could however not specifically be applied to time series expression data. We propose here an adaptation of the GENIE3 method, called dynamical GENIE3 (dynGENIE3), for handling both time series and steady-state expression data. The proposed method is evaluated extensively on the artificial DREAM4 benchmarks and on three real time series expression datasets. Although dynGENIE3 does not systematically yield the best performance on each and every network, it is competitive with diverse methods from the literature, while preserving the main advantages of GENIE3 in terms of scalability.
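
    The core GENIE3 idea, ranking candidate regulators of each target gene by random-forest feature importance, can be sketched generically as below. This is not the dynGENIE3 implementation, and the expression matrix and gene names are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
genes = ["g0", "g1", "g2", "g3", "g4"]
expr = rng.standard_normal((200, len(genes)))       # samples x genes (synthetic)
expr[:, 3] = 0.8 * expr[:, 0] - 0.5 * expr[:, 1]    # plant a regulatory signal

scores = {}
for j, target in enumerate(genes):
    predictors = [i for i in range(len(genes)) if i != j]
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(expr[:, predictors], expr[:, j])
    for i, imp in zip(predictors, rf.feature_importances_):
        scores[(genes[i], target)] = imp             # putative edge regulator -> target

# The highest-importance pairs are proposed as regulatory links.
for (reg, tgt), imp in sorted(scores.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{reg} -> {tgt}: importance {imp:.2f}")
```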

  12. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.

    1991-01-01

    A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification-all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  13. A building block for hardware belief networks.

    PubMed

    Behin-Aein, Behtash; Diep, Vinh; Datta, Supriyo

    2016-07-21

    Belief networks represent a powerful approach to problems involving probabilistic inference, but much of the work in this area is software based utilizing standard deterministic hardware based on the transistor which provides the gain and directionality needed to interconnect billions of them into useful networks. This paper proposes a transistor like device that could provide an analogous building block for probabilistic networks. We present two proof-of-concept examples of belief networks, one reciprocal and one non-reciprocal, implemented using the proposed device which is simulated using experimentally benchmarked models.

  14. coNCePTual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pakin, Scott

    2004-05-13

    A frequently reinvented wheel among network researchers is a suite of programs that test a network’s performance. A problem with having umpteen versions of performance tests is that it leads to a variety in the way results are reported; colloquially, apples are often compared to oranges. Consider a bandwidth test. Does a bandwidth test run for a fixed number of iterations or a fixed length of time? Is bandwidth measured as ping-pong bandwidth (i.e., 2 * message length / round-trip time) or unidirectional throughput (N messages in one direction followed by a single acknowledgement message)? Is the acknowledgement message of minimal length or as long as the entire message? Does its length contribute to the total bandwidth? Is data sent unidirectionally or in both directions at once? How many warmup messages (if any) are sent before the timing loop? Is there a delay after the warmup messages (to give the network a chance to reclaim any scarce resources)? Are receives nonblocking (possibly allowing overlap in the NIC) or blocking? The motivation behind creating coNCePTuaL, a simple specification language designed for describing network benchmarks, is that it enables a benchmark to be described sufficiently tersely as to fit easily in a report or research paper, facilitating peer review of the experimental setup and timing measurements. Because coNCePTuaL code is simple to write, network tests can be developed and deployed with low turnaround times -- useful when the results of one test suggest a following test that should be written. Because coNCePTuaL is special-purpose, its run-time system can perform the following functions, which benchmark writers often neglect to implement: logging information about the environment under which the benchmark ran (operating system, CPU architecture and clock speed, timer type and resolution, etc.); aborting a program if it takes longer than a predetermined length of time to complete; and writing measurement data and descriptive statistics to a variety of output formats, including the input formats of various graph-plotting programs. coNCePTuaL is not limited to network performance tests, however. It can also be used for network verification. That is, coNCePTuaL programs can be used to locate failed links or to determine the frequency of bit errors -- even those that may sneak past the network's CRC hardware. In addition, because coNCePTuaL is a very high-level language, the coNCePTuaL compiler’s backend has a great deal of potential. It would be possible for the backend to produce a variety of target formats such as Fortran + MPI, Perl + sockets, C + a network vendor’s low-level messaging layer, and so forth. It could directly manipulate a network simulator. It could feed into a graphics program to produce a space-time diagram of a coNCePTuaL program. The possibilities are endless.
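
    One of the ambiguities the abstract raises, ping-pong bandwidth defined as 2 * message length / round-trip time, can be pinned down with a small test. The sketch below uses mpi4py rather than coNCePTuaL's own language, and the message size, warmup count and script name are arbitrary choices.

```python
# Run with, e.g.: mpirun -np 2 python pingpong.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

msg_bytes = 1 << 20                       # 1 MiB messages
warmup, iters = 10, 100                   # warmup exchanges excluded from timing
buf = np.zeros(msg_bytes, dtype=np.uint8)

for i in range(warmup + iters):
    if i == warmup:
        comm.Barrier()                    # start timing only after warmup
        t0 = MPI.Wtime()
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)

if rank == 0:
    rtt = (MPI.Wtime() - t0) / iters      # average round-trip time
    print(f"ping-pong bandwidth: {2 * msg_bytes / rtt / 1e6:.1f} MB/s")
```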

  15. Markov Dynamics as a Zooming Lens for Multiscale Community Detection: Non Clique-Like Communities and the Field-of-View Limit

    PubMed Central

    Schaub, Michael T.; Delvenne, Jean-Charles; Yaliraki, Sophia N.; Barahona, Mauricio

    2012-01-01

    In recent years, there has been a surge of interest in community detection algorithms for complex networks. A variety of computational heuristics, some with a long history, have been proposed for the identification of communities or, alternatively, of good graph partitions. In most cases, the algorithms maximize a particular objective function, thereby finding the ‘right’ split into communities. Although a thorough comparison of algorithms is still lacking, there has been an effort to design benchmarks, i.e., random graph models with known community structure against which algorithms can be evaluated. However, popular community detection methods and benchmarks normally assume an implicit notion of community based on clique-like subgraphs, a form of community structure that is not always characteristic of real networks. Specifically, networks that emerge from geometric constraints can have natural non clique-like substructures with large effective diameters, which can be interpreted as long-range communities. In this work, we show that long-range communities escape detection by popular methods, which are blinded by a restricted ‘field-of-view’ limit, an intrinsic upper scale on the communities they can detect. The field-of-view limit means that long-range communities tend to be overpartitioned. We show how by adopting a dynamical perspective towards community detection [1], [2], in which the evolution of a Markov process on the graph is used as a zooming lens over the structure of the network at all scales, one can detect both clique- or non clique-like communities without imposing an upper scale to the detection. Consequently, the performance of algorithms on inherently low-diameter, clique-like benchmarks may not always be indicative of equally good results in real networks with local, sparser connectivity. We illustrate our ideas with constructive examples and through the analysis of real-world networks from imaging, protein structures and the power grid, where a multiscale structure of non clique-like communities is revealed. PMID:22384178

  16. Device Discovery in Frequency Hopping Wireless Ad Hoc Networks

    DTIC Science & Technology

    2004-09-01

    10.1. Benchmark scatternet configuration used for outreach comparison ... 10.2. Average... Figure 10.1: Benchmark scatternet configuration used for outreach comparison. Additionally: • All nodes are within range of one... ISOM mean = 6.97 MSTSs, NISOM mean = 7.21 MSTSs, exponential distribution ... [figure axis: Piconet A-D packet generation time probability]

  17. Streamflow characteristics at hydrologic bench-mark stations

    USGS Publications Warehouse

    Lawrence, C.L.

    1987-01-01

    The Hydrologic Bench-Mark Network was established in the 1960's. Its objectives were to document the hydrologic characteristics of representative undeveloped watersheds nationwide and to provide a comparative base for studying the effects of man on the hydrologic environment. The network, which consists of 57 streamflow gaging stations and one lake-stage station in 39 States, is planned for permanent operation. This interim report describes streamflow characteristics at each bench-mark site and identifies time trends in annual streamflow that have occurred during the data-collection period. The streamflow characteristics presented for each streamflow station are (1) flood and low-flow frequencies, (2) flow duration, (3) annual mean flow, and (4) the serial correlation coefficient for annual mean discharge. In addition, Kendall's tau is computed as an indicator of time trend in annual discharges. The period of record for most stations was 13 to 17 years, although several stations had longer periods of record. The longest period was 65 years for Merced River near Yosemite, Calif. Records of flow at 6 of 57 streamflow sites in the network showed a statistically significant change in annual mean discharge over the period of record, based on computations of Kendall's tau. The values of Kendall's tau ranged from -0.533 to 0.648. An examination of climatological records showed that changes in precipitation were most likely the cause for the change in annual mean discharge.
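
    The trend test the report relies on is easy to reproduce for any annual series. The sketch below applies Kendall's tau with scipy to a synthetic discharge record; the data are invented for illustration and are not from the Hydrologic Bench-Mark Network.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
years = np.arange(1965, 1982)                           # a 17-year record
# Synthetic annual mean discharge with a mild upward drift plus noise.
discharge = 50 + 0.8 * (years - years[0]) + rng.normal(0, 5, years.size)

tau, p_value = kendalltau(years, discharge)
print(f"Kendall's tau = {tau:.3f}, p = {p_value:.3f}")
# A significant positive tau would indicate an increasing trend in annual
# mean discharge over the period of record.
```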

  18. Developing a benchmark for emotional analysis of music

    PubMed Central

    Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has expanded rapidly in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the data representation diversity and scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons with 2 Hz time resolution). Using DEAM, we organized the ‘Emotion in Music’ task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature-sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that the recurrent neural network based approaches combined with large feature-sets work best for dynamic MER. PMID:28282400

  19. Developing a benchmark for emotional analysis of music.

    PubMed

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has expanded rapidly in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the data representation diversity and scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature-sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that the recurrent neural network based approaches combined with large feature-sets work best for dynamic MER.

  20. Quantifying ecological impacts of mass extinctions with network analysis of fossil communities

    PubMed Central

    Muscente, A. D.; Prabhu, Anirudh; Zhong, Hao; Eleish, Ahmed; Meyer, Michael B.; Fox, Peter; Hazen, Robert M.; Knoll, Andrew H.

    2018-01-01

    Mass extinctions documented by the fossil record provide critical benchmarks for assessing changes through time in biodiversity and ecology. Efforts to compare biotic crises of the past and present, however, encounter difficulty because taxonomic and ecological changes are decoupled, and although various metrics exist for describing taxonomic turnover, no methods have yet been proposed to quantify the ecological impacts of extinction events. To address this issue, we apply a network-based approach to exploring the evolution of marine animal communities over the Phanerozoic Eon. Network analysis of fossil co-occurrence data enables us to identify nonrandom associations of interrelated paleocommunities. These associations, or evolutionary paleocommunities, dominated total diversity during successive intervals of relative community stasis. Community turnover occurred largely during mass extinctions and radiations, when ecological reorganization resulted in the decline of one association and the rise of another. Altogether, we identify five evolutionary paleocommunities at the generic and familial levels in addition to three ordinal associations that correspond to Sepkoski’s Cambrian, Paleozoic, and Modern evolutionary faunas. In this context, we quantify magnitudes of ecological change by measuring shifts in the representation of evolutionary paleocommunities over geologic time. Our work shows that the Great Ordovician Biodiversification Event had the largest effect on ecology, followed in descending order by the Permian–Triassic, Cretaceous–Paleogene, Devonian, and Triassic–Jurassic mass extinctions. Despite its taxonomic severity, the Ordovician extinction did not strongly affect co-occurrences of taxa, affirming its limited ecological impact. Network paleoecology offers promising approaches to exploring ecological consequences of extinctions and radiations. PMID:29686079

  1. Quantifying ecological impacts of mass extinctions with network analysis of fossil communities.

    PubMed

    Muscente, A D; Prabhu, Anirudh; Zhong, Hao; Eleish, Ahmed; Meyer, Michael B; Fox, Peter; Hazen, Robert M; Knoll, Andrew H

    2018-05-15

    Mass extinctions documented by the fossil record provide critical benchmarks for assessing changes through time in biodiversity and ecology. Efforts to compare biotic crises of the past and present, however, encounter difficulty because taxonomic and ecological changes are decoupled, and although various metrics exist for describing taxonomic turnover, no methods have yet been proposed to quantify the ecological impacts of extinction events. To address this issue, we apply a network-based approach to exploring the evolution of marine animal communities over the Phanerozoic Eon. Network analysis of fossil co-occurrence data enables us to identify nonrandom associations of interrelated paleocommunities. These associations, or evolutionary paleocommunities, dominated total diversity during successive intervals of relative community stasis. Community turnover occurred largely during mass extinctions and radiations, when ecological reorganization resulted in the decline of one association and the rise of another. Altogether, we identify five evolutionary paleocommunities at the generic and familial levels in addition to three ordinal associations that correspond to Sepkoski's Cambrian, Paleozoic, and Modern evolutionary faunas. In this context, we quantify magnitudes of ecological change by measuring shifts in the representation of evolutionary paleocommunities over geologic time. Our work shows that the Great Ordovician Biodiversification Event had the largest effect on ecology, followed in descending order by the Permian-Triassic, Cretaceous-Paleogene, Devonian, and Triassic-Jurassic mass extinctions. Despite its taxonomic severity, the Ordovician extinction did not strongly affect co-occurrences of taxa, affirming its limited ecological impact. Network paleoecology offers promising approaches to exploring ecological consequences of extinctions and radiations. Copyright © 2018 the Author(s). Published by PNAS.

  2. Learning polynomial feedforward neural networks by genetic programming and backpropagation.

    PubMed

    Nikolaev, N Y; Iba, H

    2003-01-01

    This paper presents an approach to learning polynomial feedforward neural networks (PFNNs). The approach suggests, first, finding the polynomial network structure by means of a population-based search technique relying on the genetic programming paradigm, and second, further adjusting the best discovered network weights by a specially derived backpropagation algorithm for higher order networks with polynomial activation functions. These two stages of the PFNN learning process enable us to identify networks with good training as well as generalization performance. Empirical results show that this approach finds PFNNs that considerably outperform some previous constructive polynomial network algorithms on benchmark time series processing tasks.

  3. CHIMERA: Top-down model for hierarchical, overlapping and directed cluster structures in directed and weighted complex networks

    NASA Astrophysics Data System (ADS)

    Franke, R.

    2016-11-01

    In many networks discovered in biology, medicine, neuroscience and other disciplines special properties like a certain degree distribution and hierarchical cluster structure (also called communities) can be observed as general organizing principles. Detecting the cluster structure of an unknown network promises to identify functional subdivisions, hierarchy and interactions on a mesoscale. It is not trivial choosing an appropriate detection algorithm because there are multiple network, cluster and algorithmic properties to be considered. Edges can be weighted and/or directed, clusters overlap or build a hierarchy in several ways. Algorithms differ not only in runtime, memory requirements but also in allowed network and cluster properties. They are based on a specific definition of what a cluster is, too. On the one hand, a comprehensive network creation model is needed to build a large variety of benchmark networks with different reasonable structures to compare algorithms. On the other hand, if a cluster structure is already known, it is desirable to separate effects of this structure from other network properties. This can be done with null model networks that mimic an observed cluster structure to improve statistics on other network features. A third important application is the general study of properties in networks with different cluster structures, possibly evolving over time. Currently there are good benchmark and creation models available. But what is left is a precise sandbox model to build hierarchical, overlapping and directed clusters for undirected or directed, binary or weighted complex random networks on basis of a sophisticated blueprint. This gap shall be closed by the model CHIMERA (Cluster Hierarchy Interconnection Model for Evaluation, Research and Analysis) which will be introduced and described here for the first time.

  4. Hierarchically clustered adaptive quantization CMAC and its learning convergence.

    PubMed

    Teddy, S D; Lai, E M K; Quek, C

    2007-11-01

    The cerebellar model articulation controller (CMAC) neural network (NN) is a well-established computational model of the human cerebellum. Nevertheless, there are two major drawbacks associated with the uniform quantization scheme of the CMAC network. They are the following: (1) a constant output resolution associated with the entire input space and (2) the generalization-accuracy dilemma. Moreover, the size of the CMAC network is an exponential function of the number of inputs. Depending on the characteristics of the training data, only a small percentage of the entire set of CMAC memory cells is utilized. Therefore, the efficient utilization of the CMAC memory is a crucial issue. One approach is to quantize the input space nonuniformly. For existing nonuniformly quantized CMAC systems, there is a tradeoff between memory efficiency and computational complexity. Inspired by the underlying organizational mechanism of the human brain, this paper presents a novel CMAC architecture named hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC). HCAQ-CMAC employs hierarchical clustering for the nonuniform quantization of the input space to identify significant input segments and subsequently allocate more memory cells to these regions. The stability of the HCAQ-CMAC network is theoretically guaranteed by the proof of its learning convergence. The performance of the proposed network is subsequently benchmarked against the original CMAC network, as well as two other existing CMAC variants, on two real-life applications, namely, automated control of car maneuver and modeling of human blood glucose dynamics. The experimental results demonstrate that the HCAQ-CMAC network offers an efficient memory allocation scheme and improves the generalization and accuracy of the network output, achieving better or comparable performance with smaller memory usage.

  5. Heterogeneous Distributed Computing for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy S.

    1998-01-01

    The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions to, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM [1] system and, subsequent to detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.

  6. First Observation of Coseismic Seafloor Crustal Deformation due to M7 Class Earthquakes in the Philippine Sea Plate

    NASA Astrophysics Data System (ADS)

    Tadokoro, K.; Ikuta, R.; Ando, M.; Okuda, T.; Sugimoto, S.; Besana, G. M.; Kuno, M.

    2005-12-01

    The Mw 7.3 and 7.5 earthquakes (Off Kii-Peninsula Earthquakes) occurred close to the source region of the anticipated Tonankai earthquake on September 5, 2004. The focal mechanisms of the two earthquakes have no low-angle nodal planes, which shows that they are intraplate earthquakes within the Philippine Sea Plate. We observed coseismic horizontal displacement due to the Off Kii-Peninsula Earthquakes by means of a system we developed for observing seafloor crustal deformation, the first observation of coseismic seafloor displacement in the world. The observation system is composed of 1) acoustic measurement between a ship transducer and sea-bottom transponders, and 2) kinematic GPS positioning of the observation vessel. We installed a seafloor benchmark close to the epicenters of the Off Kii-Peninsula Earthquakes. The benchmark is composed of three sea-bottom transponders, and its location is defined as the center of the three transponders. We can determine the location of the benchmark with an accuracy of about 5 cm at each observation. We have repeatedly measured the seafloor benchmark six times to date: 1) July 12-16 and 21-22, 2004, 2) November 9-10, 3) January 19, 2005, 4) May 18-20, 5) July 19-20, and 6) August 18-19 and 29-30. The Off Kii-Peninsula Earthquakes occurred during this monitoring period. A coseismic horizontal displacement of about 21 cm toward SSE was observed at our seafloor benchmark, 3.5 times as large as the maximum displacement observed by the on-land GPS network in Japan, GEONET. Monitoring of seafloor crustal deformation is thus effective for detecting deformation associated with earthquakes occurring in ocean areas. This study is promoted by "Research Revolution 2002" of the Ministry of Education, Culture, Sports, Science and Technology, Japan. We are grateful to the captain and crew of the research vessel Asama of the Mie Prefectural Science and Technology Promotion Center, Japan.

  7. Diagnosing the Causes and Severity of One-sided Message Contention

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tallent, Nathan R.; Vishnu, Abhinav; van Dam, Hubertus

    Two trends suggest network contention for one-sided messages is poised to become a performance problem that concerns application developers: an increased interest in one-sided programming models and a rising ratio of hardware threads to network injection bandwidth. Unfortunately, it is difficult to reason about network contention and one-sided messages because one-sided tasks can either decrease or increase contention. We present effective and portable techniques for diagnosing the causes and severity of one-sided message contention. To detect that a message is affected by contention, we maintain statistics representing instantaneous (non-local) network resource demand. Using lightweight measurement and modeling, we identify the portion of a message's latency that is due to contention and whether contention occurs at the initiator or target. We attribute these metrics to program statements in their full static and dynamic context. We characterize contention for an important computational chemistry benchmark on InfiniBand, Cray Aries, and IBM Blue Gene/Q interconnects. We pinpoint the sources of contention, estimate their severity, and show that when message delivery time deviates from an ideal model, there are other messages contending for the same network links. With a small change to the benchmark, we reduce contention up to 50% and improve total runtime as much as 20%.

  8. Network clustering and community detection using modulus of families of loops.

    PubMed

    Shakeri, Heman; Poggi-Corradini, Pietro; Albin, Nathan; Scoglio, Caterina

    2017-01-01

    We study the structure of loops in networks using the notion of modulus of loop families. We introduce an alternate measure of network clustering by quantifying the richness of families of (simple) loops. Modulus tries to minimize the expected overlap among loops by spreading the expected link usage optimally. We propose weighting networks using these expected link usages to improve classical community detection algorithms. We show that the proposed method enhances the performance of certain algorithms, such as spectral partitioning and modularity maximization heuristics, on standard benchmarks.
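    A hedged sketch of the "re-weight, then cluster" workflow described above: the paper derives edge weights from the modulus of the family of simple loops (a convex optimization not reproduced here), so the sketch substitutes edge betweenness as a usage-like statistic and then runs a standard NetworkX modularity heuristic on the weighted graph. The mapping from usage to weight is an arbitrary illustrative choice, not the authors' formula.

        # Illustrative only: weight edges by a usage-style statistic, then run a
        # standard community detection algorithm on the weighted graph. Edge
        # betweenness stands in for the modulus-based expected link usage.
        import networkx as nx
        from networkx.algorithms import community

        G = nx.karate_club_graph()

        usage = nx.edge_betweenness_centrality(G)        # stand-in for expected link usage
        for (u, v), w in usage.items():
            G[u][v]["weight"] = 1.0 + w                   # illustrative usage -> weight mapping

        parts = community.greedy_modularity_communities(G, weight="weight")
        for k, nodes in enumerate(parts):
            print(k, sorted(nodes))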

  9. InfAcrOnt: calculating cross-ontology term similarities using information flow by a random walk.

    PubMed

    Cheng, Liang; Jiang, Yue; Ju, Hong; Sun, Jie; Peng, Jiajie; Zhou, Meng; Hu, Yang

    2018-01-19

    Since the establishment of the first biomedical ontology, Gene Ontology (GO), the number of biomedical ontologies has increased dramatically. Over 300 ontologies have now been built, including the extensively used Disease Ontology (DO) and Human Phenotype Ontology (HPO). Because calculating similarity between ontology terms can identify novel relationships between terms, it is one of the major tasks in this research area. Though similarities between terms within each ontology have been studied with in silico methods, term similarities across different ontologies have not been investigated as deeply. The latest method took advantage of a gene functional interaction network (GFIN) to explore such inter-ontology similarities of terms. However, it used only gene interactions and failed to make full use of the connectivity among gene nodes of the network. In addition, all existing methods are designed specifically for GO, and their performance on the wider ontology community remains unknown. We propose a method, InfAcrOnt, to infer similarities between terms across ontologies utilizing the entire GFIN. InfAcrOnt builds a term-gene-gene network comprising ontology annotations and the GFIN, and acquires similarities between terms across ontologies by modeling the information flow within the network with a random walk. In our benchmark experiments on sub-ontologies of GO, InfAcrOnt achieves a high average area under the receiver operating characteristic curve (AUC) (0.9322 and 0.9309) and low standard deviations (1.8746e-6 and 3.0977e-6) on both human and yeast benchmark datasets, exhibiting superior performance. Meanwhile, comparisons of InfAcrOnt results with prior knowledge on pair-wise DO-HPO terms and pair-wise DO-GO terms show high correlations. The experimental results show that InfAcrOnt significantly improves the performance of inferring similarities between terms across ontologies on the benchmark set.
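    To make the random-walk idea concrete, here is a toy sketch: a small term-gene graph is built, a random walk with restart is run from a DO term, and the stationary probability at an HPO term is read off as a cross-ontology similarity score. The graph, node names and restart probability are invented for illustration and do not come from InfAcrOnt.

        # Toy sketch of the random-walk idea: build a small term-gene graph, run a
        # random walk with restart from one ontology term, and read off the steady
        # state probability at a term from another ontology as a similarity score.
        import numpy as np

        nodes = ["DO:term1", "HPO:termA", "geneX", "geneY", "geneZ"]
        idx = {n: i for i, n in enumerate(nodes)}
        edges = [("DO:term1", "geneX"), ("DO:term1", "geneY"),
                 ("HPO:termA", "geneY"), ("HPO:termA", "geneZ"),
                 ("geneX", "geneY")]                      # one gene-gene interaction

        A = np.zeros((len(nodes), len(nodes)))
        for u, v in edges:
            A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0

        W = A / A.sum(axis=0, keepdims=True)              # column-normalised transition matrix
        restart, p0 = 0.3, np.eye(len(nodes))[idx["DO:term1"]]
        p = p0.copy()
        for _ in range(200):                              # iterate to (near) convergence
            p = (1 - restart) * W @ p + restart * p0

        print("similarity(DO:term1, HPO:termA) ~", round(p[idx["HPO:termA"]], 4))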

  10. Using a binaural biomimetic array to identify bottom objects ensonified by echolocating dolphins

    USGS Publications Warehouse

    Helweg, D.A.; Moore, P.W.; Martin, S.W.; Dankiewicz, L.A.

    2006-01-01

    The development of a unique dolphin biomimetic sonar produced data that were used to study signal processing methods for object identification. Echoes from four metallic objects proud on the bottom, and a substrate-only condition, were generated by bottlenose dolphins trained to ensonify the targets in very shallow water. Using the two-element ('binaural') receive array, object echo spectra were collected and submitted for identification to four neural network architectures. Identification accuracy was evaluated over two receive array configurations and five signal processing schemes. The four neural networks included backpropagation, learning vector quantization, genetic learning and probabilistic network architectures. The processing schemes included four methods that capitalized on the binaural data, plus a monaural benchmark process. All the schemes resulted in above-chance identification accuracy when applied to learning vector quantization and backpropagation. Beam-forming or concatenation of spectra from both receive elements outperformed the monaural benchmark, with higher sensitivity and lower bias. Ultimately, best object identification performance was achieved by the learning vector quantization network supplied with beam-formed data. The advantages of multi-element signal processing for object identification are clearly demonstrated in this development of a first-ever dolphin biomimetic sonar. © 2006 IOP Publishing Ltd.

  11. Maximizing the Spread of Influence via Generalized Degree Discount.

    PubMed

    Wang, Xiaojie; Zhang, Xue; Zhao, Chengli; Yi, Dongyun

    2016-01-01

    It is a crucial and fundamental issue to identify a small subset of influential spreaders that can control the spreading process in networks. In previous studies, a degree-based heuristic called DegreeDiscount has been shown to effectively identify multiple influential spreaders and has served as a benchmark method. However, the basic assumption of DegreeDiscount is not adequate, because it treats all nodes equally without any differences. To address a more general situation in real-world networks, a novel heuristic method named GeneralizedDegreeDiscount is proposed in this paper as an effective extension of the original method. In our method, the status of a node is defined as the probability of its not being influenced by any of its neighbors, and an index, the generalized discounted degree of a node, is presented to measure the expected number of nodes it can influence. Spreaders are then selected sequentially according to their generalized discounted degree in the current network. Empirical experiments are conducted on four real networks, and the results show that the spreaders identified by our approach are more influential than those found by several benchmark methods. Finally, we analyze the relationship between our method and three common degree-based methods.
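    For reference, here is a sketch of the DegreeDiscount baseline that the abstract builds on (not the generalized version proposed in the paper); the propagation probability p, the seed-set size, and the toy graph are assumptions made for the example.

        # Sketch of the DegreeDiscount heuristic used as the benchmark in this line
        # of work: repeatedly pick the node with the largest discounted degree and
        # discount its neighbours. p is an assumed propagation probability.
        import networkx as nx

        def degree_discount(G, k, p=0.01):
            d = dict(G.degree())
            dd = dict(d)                       # discounted degree, initially the degree
            t = {v: 0 for v in G}              # number of already-selected neighbours
            seeds = []
            for _ in range(k):
                u = max((v for v in G if v not in seeds), key=dd.get)
                seeds.append(u)
                for v in G.neighbors(u):
                    if v in seeds:
                        continue
                    t[v] += 1
                    dd[v] = d[v] - 2 * t[v] - (d[v] - t[v]) * t[v] * p
            return seeds

        G = nx.barabasi_albert_graph(200, 3, seed=1)
        print("seed set:", degree_discount(G, k=5))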

  12. Maximizing the Spread of Influence via Generalized Degree Discount

    PubMed Central

    Wang, Xiaojie; Zhang, Xue; Zhao, Chengli; Yi, Dongyun

    2016-01-01

    It is a crucial and fundamental issue to identify a small subset of influential spreaders that can control the spreading process in networks. In previous studies, a degree-based heuristic called DegreeDiscount has been shown to effectively identify multiple influential spreaders and has served as a benchmark method. However, the basic assumption of DegreeDiscount is not adequate, because it treats all nodes equally without any differences. To address a more general situation in real-world networks, a novel heuristic method named GeneralizedDegreeDiscount is proposed in this paper as an effective extension of the original method. In our method, the status of a node is defined as the probability of its not being influenced by any of its neighbors, and an index, the generalized discounted degree of a node, is presented to measure the expected number of nodes it can influence. Spreaders are then selected sequentially according to their generalized discounted degree in the current network. Empirical experiments are conducted on four real networks, and the results show that the spreaders identified by our approach are more influential than those found by several benchmark methods. Finally, we analyze the relationship between our method and three common degree-based methods. PMID:27732681

  13. Benchmarking Ligand-Based Virtual High-Throughput Screening with the PubChem Database

    PubMed Central

    Butkiewicz, Mariusz; Lowe, Edward W.; Mueller, Ralf; Mendenhall, Jeffrey L.; Teixeira, Pedro L.; Weaver, C. David; Meiler, Jens

    2013-01-01

    With the rapidly increasing availability of High-Throughput Screening (HTS) data in the public domain, such as the PubChem database, methods for ligand-based computer-aided drug discovery (LB-CADD) have the potential to accelerate and reduce the cost of probe development and drug discovery efforts in academia. We assemble nine data sets from realistic HTS campaigns representing major families of drug target proteins for benchmarking LB-CADD methods. Each data set is public domain through PubChem and carefully collated through confirmation screens validating active compounds. These data sets provide the foundation for benchmarking a new cheminformatics framework BCL::ChemInfo, which is freely available for non-commercial use. Quantitative structure activity relationship (QSAR) models are built using Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), Decision Trees (DTs), and Kohonen networks (KNs). Problem-specific descriptor optimization protocols are assessed including Sequential Feature Forward Selection (SFFS) and various information content measures. Measures of predictive power and confidence are evaluated through cross-validation, and a consensus prediction scheme is tested that combines orthogonal machine learning algorithms into a single predictor. Enrichments ranging from 15 to 101 for a TPR cutoff of 25% are observed. PMID:23299552

  14. Training Deep Spiking Neural Networks Using Backpropagation.

    PubMed

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that, thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
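    The following PyTorch sketch shows the surrogate-gradient flavour of this idea: the hard spike threshold is kept in the forward pass, while a smooth stand-in gradient is used in the backward pass so ordinary backpropagation can flow through spike events. The boxcar surrogate and the threshold value are illustrative choices, not necessarily the paper's exact treatment of membrane potentials.

        # Sketch of training through spikes: a hard threshold forward, a boxcar
        # surrogate gradient backward, so autograd can propagate errors.
        import torch

        class SpikeFn(torch.autograd.Function):
            @staticmethod
            def forward(ctx, v):
                ctx.save_for_backward(v)
                return (v > 0).float()                      # hard threshold at 0

            @staticmethod
            def backward(ctx, grad_out):
                (v,) = ctx.saved_tensors
                surrogate = (v.abs() < 0.5).float()         # pass gradient near threshold
                return grad_out * surrogate

        v = torch.randn(10, requires_grad=True)             # toy membrane potentials
        spikes = SpikeFn.apply(v)
        spikes.sum().backward()
        print(v.grad)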

  15. Development of a sensor coordinated kinematic model for neural network controller training

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1990-01-01

    A robotic benchmark problem useful for evaluating alternative neural network controllers is presented. Specifically, it derives two camera models and the kinematic equations of a multiple-degree-of-freedom manipulator whose end effector is under observation. The mappings developed include forward and inverse translations from binocular images to 3-D target position and the inverse kinematics mapping point positions into manipulator commands in joint space. Implementation is detailed for a three-degree-of-freedom manipulator with one revolute joint at the base and two prismatic joints on the arms. The example is restricted to operate within a unit cube, with arm links of 0.6 and 0.4 units, respectively. The development is presented in the context of more complex simulations, and a logical path for extending the benchmark to manipulators with more degrees of freedom is presented.
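    A minimal sketch of the forward-kinematics piece of such a benchmark, assuming a cylindrical-style arm (base rotation about the vertical axis, one vertical and one radial prismatic joint) with the 0.6 and 0.4 unit link lengths mentioned above; the report's actual joint axes and camera models may differ.

        # Assumed geometry for illustration: revolute base about z, a vertical
        # prismatic link of nominal length 0.6, and a radial prismatic link of
        # nominal length 0.4. Returns the end-effector position.
        import numpy as np

        def forward_kinematics(theta, d_z, d_r, l1=0.6, l2=0.4):
            """theta: base rotation [rad]; d_z, d_r: prismatic extensions."""
            r = l2 + d_r                     # radial reach of the outer link
            x = r * np.cos(theta)
            y = r * np.sin(theta)
            z = l1 + d_z                     # height of the vertical link
            return np.array([x, y, z])

        print(forward_kinematics(theta=np.pi / 4, d_z=0.05, d_r=0.02))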

  16. Reinforced two-step-ahead weight adjustment technique for online training of recurrent neural networks.

    PubMed

    Chang, Li-Chiu; Chen, Pin-An; Chang, Fi-John

    2012-08-01

    A reliable forecast of future events possesses great value. The main purpose of this paper is to propose an innovative learning technique for reinforcing the accuracy of two-step-ahead (2SA) forecasts. The real-time recurrent learning (RTRL) algorithm for recurrent neural networks (RNNs) can effectively model the dynamics of complex processes and has been used successfully in one-step-ahead forecasts for various time series. A reinforced RTRL algorithm for 2SA forecasts using RNNs is proposed in this paper, and its performance is investigated using two well-known benchmark time series and streamflow during flood events in Taiwan. Results demonstrate that the proposed reinforced 2SA RTRL algorithm for RNNs can adequately forecast the benchmark (theoretical) time series, significantly improve the accuracy of flood forecasts, and effectively reduce time-lag effects.

  17. All-in-one model for designing optimal water distribution pipe networks

    NASA Astrophysics Data System (ADS)

    Aklog, Dagnachew; Hosoi, Yoshihiko

    2017-05-01

    This paper discusses the development of an easy-to-use, all-in-one model for designing optimal water distribution networks. The model combines different optimization techniques into a single package in which a user can easily choose what optimizer to use and compare the results of different optimizers to gain confidence in the performances of the models. At present, three optimization techniques are included in the model: linear programming (LP), genetic algorithm (GA) and a heuristic one-by-one reduction method (OBORM) that was previously developed by the authors. The optimizers were tested on a number of benchmark problems and performed very well in terms of finding optimal or near-optimal solutions with a reasonable computation effort. The results indicate that the model effectively addresses the issues of complexity and limited performance trust associated with previous models and can thus be used for practical purposes.

  18. Blood and Marrow Transplant Clinical Trials Network Report on the Development of Novel Endpoints and Selection of Promising Approaches for Graft-versus-Host Disease Prevention Trials.

    PubMed

    Pasquini, Marcelo C; Logan, Brent; Jones, Richard J; Alousi, Amin M; Appelbaum, Frederick R; Bolaños-Meade, Javier; Flowers, Mary E D; Giralt, Sergio; Horowitz, Mary M; Jacobsohn, David; Koreth, John; Levine, John E; Luznik, Leo; Maziarz, Richard; Mendizabal, Adam; Pavletic, Steven; Perales, Miguel-Angel; Porter, David; Reshef, Ran; Weisdorf, Daniel; Antin, Joseph H

    2018-06-01

    Graft-versus-host disease (GVHD) is a common complication after hematopoietic cell transplantation (HCT) and is associated with significant morbidity and mortality. Preventing GVHD without chronic therapy or increased relapse is a desired goal. Here we report a benchmark analysis to evaluate the performance of 6 GVHD prevention strategies tested at single institutions compared with a large multicenter outcomes database as a control. Each intervention was compared with the control for the incidence of acute and chronic GVHD and overall survival, and against novel composite endpoints: GVHD-free, relapse-free survival (GRFS) and chronic GVHD-free, relapse-free survival (CRFS). Modeling GRFS and CRFS using the benchmark analysis further informed the design of 2 clinical trials testing GVHD prophylaxis interventions. This study demonstrates the potential benefit of using an outcomes database to select promising interventions for multicenter clinical trials and proposes novel composite endpoints for use in GVHD prevention trials. Copyright © 2018 The American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.

  19. Capillary-Driven Flow in Liquid Filaments Connecting Orthogonal Channels

    NASA Technical Reports Server (NTRS)

    Allen, Jeffrey S.

    2005-01-01

    Capillary phenomena play an important role in the management of product water in PEM fuel cells because of the length scales associated with the porous layers and the gas flow channels. The distribution of liquid water within the network of gas flow channels can be dramatically altered by capillary flow. We experimentally demonstrate the rapid movement of significant volumes of liquid via capillarity through thin liquid films that connect orthogonal channels. The microfluidic experiments discussed provide a good benchmark against which the proper modeling of capillarity by computational models may be tested. The effect of surface wettability, as expressed through the contact angle, on capillary flow is also discussed.

  20. Establishment of National Gravity Base Network of Iran

    NASA Astrophysics Data System (ADS)

    Hatam Chavari, Y.; Bayer, R.; Hinderer, J.; Ghazavi, K.; Sedighi, M.; Luck, B.; Djamour, Y.; Le Moign, N.; Saadat, R.; Cheraghi, H.

    2009-04-01

    A gravity base network is a set of benchmarks uniformly distributed across a country at which the absolute gravity values are known to the best accessible accuracy. The gravity at the benchmark stations is either measured directly with absolute instruments or transferred from known stations by gravity-difference measurements with gravimeters. To limit the accumulation of random measuring errors arising from these transfers, the number of base stations distributed across the country should be as small as possible. This is feasible if the stations are selected near the national airports: they are then long distances apart but quickly accessible, since a gravimeter can be carried between them by airplane. To show the importance of such a network, various applications of a gravity base network are first reviewed. A gravity base network is the reference frame required for establishing 1st-, 2nd- and 3rd-order gravity networks. Such a gravity network is used for the following purposes: a) mapping the structure of the upper crust in geological maps, where the required accuracy of the measured gravity values is about 0.2 to 0.4 mGal; b) oil and mineral exploration, with a required accuracy of about 5 µGal; c) geotechnical studies in mining areas for exploring underground cavities, as well as archaeological studies, with a required accuracy of about 5 µGal or better; d) subsurface water resource exploration and mapping of the crustal layers that absorb it, with an accuracy requirement at the same level as the previous applications; e) studying the tectonics of the Earth's crust, where repeated precise gravity measurements at the network stations can help identify systematic height changes and an accuracy of the order of 5 µGal or better is required; f) studying volcanoes and their evolution, where repeated precise gravity measurements can provide valuable information on the gradual upward movement of lava; and g) producing precise mean gravity anomalies for precise geoid determination, since replacing precise spirit levelling by GPS levelling with a precise geoid model is one of the forthcoming applications of the precise geoid. A gravity base network of 28 stations has been established over Iran, with the stations built mainly on bedrock. All stations were measured with an FG5 absolute gravimeter, for at least 12 hours at each station, to obtain an accuracy of a few microgal. Several stations have been remeasured several times in recent years to estimate gravity changes.

  1. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    PubMed

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  2. An Expert System for Processing Uncorrelated Satellite Tracks

    DTIC Science & Technology

    1992-12-17

    Subject terms: Artificial Intelligence, Expert Systems, Neural Networks, Orbital Mechanics. Cited in the scanned record: "Neural Networks: Benchmarking Studies," Proceedings of the IEEE International Conference on Neural Networks, pp. 64-65, 1988.

  3. GASNet-EX Performance Improvements Due to Specialization for the Cray Aries Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hargrove, Paul H.; Bonachea, Dan

    This document is a deliverable for milestone STPM17-6 of the Exascale Computing Project, delivered by WBS 2.3.1.14. It reports on the improvements in performance observed on Cray XC-series systems due to enhancements made to the GASNet-EX software. These enhancements, known as “specializations”, primarily consist of replacing network-independent implementations of several recently added features with implementations tailored to the Cray Aries network. Performance gains from specialization include (1) Negotiated-Payload Active Messages improve bandwidth of a ping-pong test by up to 14%, (2) Immediate Operations reduce running time of a synthetic benchmark by up to 93%, (3) non-bulk RMA Put bandwidth is increased by up to 32%, (4) Remote Atomic performance is 70% faster than the reference on a point-to-point test and allows a hot-spot test to scale robustly, and (5) non-contiguous RMA interfaces see up to 8.6x speedups for an intra-node benchmark and 26% for inter-node. These improvements are available in the GASNet-EX 2018.3.0 release.

  4. Augmented neural networks and problem structure-based heuristics for the bin-packing problem

    NASA Astrophysics Data System (ADS)

    Kasap, Nihat; Agarwal, Anurag

    2012-08-01

    In this article, we report on a research project in which we applied the augmented-neural-networks (AugNN) approach to solving the classical bin-packing problem (BPP). AugNN is a metaheuristic that combines a priority-rule heuristic with the iterative search approach of neural networks to generate good solutions fast. This is the first time this approach has been applied to the BPP. We also propose a decomposition approach for solving harder BPP instances, in which subproblems are solved using a combination of the AugNN approach and heuristics that exploit the problem structure. We discuss the characteristics of problems to which such problem-structure-based heuristics can be applied. We empirically show the effectiveness of the AugNN and decomposition approaches on many benchmark problems from the literature. Of the 1210 benchmark problems tested, 917 were solved to optimality, the average gap between the obtained solution and the upper bound for all problems was reduced to under 0.66%, and computation time averaged below 33 s per problem. We also discuss the computational complexity of our approach.
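    As a concrete example of the kind of priority-rule building block such metaheuristics wrap, here is a first-fit-decreasing pass for the BPP; the AugNN weight-adjusting search itself is not reproduced, and FFD is a generic rule rather than necessarily the one used in the article.

        # First-fit-decreasing (FFD) heuristic for bin packing: sort items largest
        # first (the priority rule), place each in the first bin with room, and
        # open a new bin when none fits.
        def first_fit_decreasing(items, capacity):
            bins = []                                     # remaining capacity per bin
            packing = []                                  # items assigned to each bin
            for item in sorted(items, reverse=True):      # priority rule: largest first
                for i, free in enumerate(bins):
                    if item <= free:
                        bins[i] -= item
                        packing[i].append(item)
                        break
                else:                                     # no open bin fits: open a new one
                    bins.append(capacity - item)
                    packing.append([item])
            return packing

        print(first_fit_decreasing([4, 8, 1, 4, 2, 1, 7, 3], capacity=10))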

  5. An approach to radiation safety department benchmarking in academic and medical facilities.

    PubMed

    Harvey, Richard P

    2015-02-01

    Based on anecdotal evidence and networking with colleagues at other facilities, it has become evident that some radiation safety departments are not adequately staffed and radiation safety professionals need to increase their staffing levels. Discussions with management regarding radiation safety department staffing often lead to similar conclusions. Management acknowledges the Radiation Safety Officer (RSO) or Director of Radiation Safety's concern but asks the RSO to provide benchmarking and justification for additional full-time equivalents (FTEs). The RSO must find a way to benchmark and justify additional staffing needs while struggling to maintain a safe and compliant radiation safety program. Benchmarking and justification are extremely important tools that are commonly used to demonstrate the need for increased staffing in other disciplines and can likewise be used by radiation safety professionals. Parameters that most RSOs would expect to be positive predictors of radiation safety staff size generally are such predictors, and they can be emphasized in benchmarking and justification report summaries. Facilities with large radiation safety departments tend to have large numbers of authorized users, be broad-scope programs, be subject to increased-controls regulations, have large clinical operations, have significant numbers of academic radiation-producing machines, and have laser safety responsibilities.

  6. Distance-Based and Low Energy Adaptive Clustering Protocol for Wireless Sensor Networks

    PubMed Central

    Gani, Abdullah; Anisi, Mohammad Hossein; Ab Hamid, Siti Hafizah; Akhunzada, Adnan; Khan, Muhammad Khurram

    2016-01-01

    A wireless sensor network (WSN) comprises small sensor nodes with limited energy capabilities. The power constraints of WSNs necessitate efficient energy utilization to extend the overall network lifetime of these networks. We propose a distance-based and low-energy adaptive clustering (DISCPLN) protocol to streamline the green issue of efficient energy utilization in WSNs. We also enhance our proposed protocol into the multi-hop-DISCPLN protocol to increase the lifetime of the network in terms of high throughput with minimum delay time and packet loss. We also propose the mobile-DISCPLN protocol to maintain the stability of the network. The modelling and comparison of these protocols with their corresponding benchmarks exhibit promising results. PMID:27658194

  7. Nomenclatural benchmarking: the roles of digital typification and telemicroscopy

    PubMed Central

    Wheeler, Quentin; Bourgoin, Thierry; Coddington, Jonathan; Gostony, Timothy; Hamilton, Andrew; Larimer, Roy; Polaszek, Andrew; Schauff, Michael; Solis, M. Alma

    2012-01-01

    Nomenclatural benchmarking is the periodic realignment of species names with species theories and is necessary for the accurate and uniform use of Linnaean binominals in the face of changing species limits. Gaining access to types, often for little more than a cursory examination by an expert, is a major bottleneck in the advance and availability of biodiversity informatics. For the nearly two million described species it has been estimated that five to six million name-bearing type specimens exist, including those for synonymized binominals. Recognizing that examination of types in person will remain necessary in special cases, we propose a four-part strategy for opening access to types that relies heavily on digitization and that would eliminate much of the bottleneck: (1) modify codes of nomenclature to create registries of nomenclatural acts, such as the proposed ZooBank, that include a requirement for digital representations (e-types) for all newly described species to avoid adding to backlog; (2) an “r” strategy that would engineer and deploy a network of automated instruments capable of rapidly creating 3-D images of type specimens not requiring participation of taxon experts; (3) a “K” strategy using remotely operable microscopes to engage taxon experts in targeting and annotating informative characters of types to supplement and extend information content of rapidly acquired e-types, a process that can be done on an as-needed basis as in the normal course of revisionary taxonomy; and (4) creation of a global e-type archive associated with the commissions on nomenclature and species registries providing one-stop-shopping for e-types. We describe a first generation implementation of the “K” strategy that adapts current technology to create a network of Remotely Operable Benchmarkers Of Types (ROBOT) specifically engineered to handle the largest backlog of types, pinned insect specimens. The three initial instruments will be in the Smithsonian Institution (Washington, DC), Natural History Museum (London), and Museum National d’Histoire Naturelle (Paris), networking the three largest insect collections in the world with entomologists worldwide. These three instruments make possible remote examination, manipulation, and photography of types for more than 600,000 species. This is a cybertaxonomy demonstration project that we anticipate will lead to similar instruments for a wide range of museum specimens and objects as well as revolutionary changes in collaborative taxonomy and formal and public taxonomic education. PMID:22859888

  8. Energy-efficient virtual optical network mapping approaches over converged flexible bandwidth optical networks and data centers.

    PubMed

    Chen, Bowen; Zhao, Yongli; Zhang, Jie

    2015-09-21

    In this paper, we develop a virtual link priority mapping (LPM) approach and a virtual node priority mapping (NPM) approach to improve the energy efficiency and to reduce the spectrum usage over the converged flexible bandwidth optical networks and data centers. For comparison, the lower bound of the virtual optical network mapping is used for the benchmark solutions. Simulation results show that the LPM approach achieves the better performance in terms of power consumption, energy efficiency, spectrum usage, and the number of regenerators compared to the NPM approach.

  9. A Squeezed Artificial Neural Network for the Symbolic Network Reliability Functions of Binary-State Networks.

    PubMed

    Yeh, Wei-Chang

    Network reliability is an important index to the provision of useful information for decision support in the modern world. There is always a need to calculate symbolic network reliability functions (SNRFs) due to dynamic and rapid changes in network parameters. In this brief, the proposed squeezed artificial neural network (SqANN) approach uses Monte Carlo simulation to estimate the corresponding reliability of a given designed matrix from the Box-Behnken design, and then the Taguchi method is implemented to find the appropriate number of neurons and activation functions of the hidden layer and the output layer in the ANN to evaluate SNRFs. According to the experimental results on the benchmark networks, the comparison appears to support the superiority of the proposed SqANN method over the traditional ANN-based approach, with at least a 16.6% improvement in the median absolute deviation at the cost of an extra 2 s on average over all experiments.
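    A sketch of the Monte Carlo ingredient alone: estimating the two-terminal reliability of a binary-state network by sampling link up/down states. The ANN surrogate and Taguchi tuning described above are omitted; the toy graph, terminal pair and per-link reliability are assumptions.

        # Monte Carlo estimate of two-terminal reliability: sample each link as up
        # with probability p_up and count how often source and target stay connected.
        import random
        import networkx as nx

        def mc_reliability(G, source, target, p_up=0.9, n_samples=20000, seed=0):
            rng = random.Random(seed)
            ok = 0
            for _ in range(n_samples):
                H = nx.Graph()
                H.add_nodes_from(G)
                H.add_edges_from(e for e in G.edges if rng.random() < p_up)
                ok += nx.has_path(H, source, target)
            return ok / n_samples

        G = nx.petersen_graph()
        print("estimated reliability:", mc_reliability(G, 0, 5))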

  10. Multiplex visibility graphs to investigate recurrent neural network dynamics

    NASA Astrophysics Data System (ADS)

    Bianchi, Filippo Maria; Livi, Lorenzo; Alippi, Cesare; Jenssen, Robert

    2017-03-01

    A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and is typically based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize the internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series generated by each neuron's activations with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Subsequently, the horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call data records show the effectiveness of the proposed methods.
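    The elementary building block, a horizontal visibility graph for a single time series (e.g. one neuron's activations), can be sketched as below; assembling one such graph per neuron into a multiplex, as the paper does, is left out, and the random series is only a stand-in for real activations.

        # Horizontal visibility graph: two time points are linked if every sample
        # strictly between them is lower than both (brute-force construction).
        import numpy as np
        import networkx as nx

        def horizontal_visibility_graph(x):
            G = nx.Graph()
            G.add_nodes_from(range(len(x)))
            for i in range(len(x)):
                for j in range(i + 1, len(x)):
                    if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                        G.add_edge(i, j)
            return G

        series = np.random.default_rng(1).random(50)   # stand-in for neuron activations
        H = horizontal_visibility_graph(series)
        print("HVG edges:", H.number_of_edges())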

  11. Multiplex visibility graphs to investigate recurrent neural network dynamics

    PubMed Central

    Bianchi, Filippo Maria; Livi, Lorenzo; Alippi, Cesare; Jenssen, Robert

    2017-01-01

    A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and is typically based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize the internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series generated by each neuron's activations with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Subsequently, the horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call data records show the effectiveness of the proposed methods. PMID:28281563

  12. IEEE 342 Node Low Voltage Networked Test System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Kevin P.; Phanivong, Phillippe K.; Lacroix, Jean-Sebastian

    The IEEE Distribution Test Feeders provide a benchmark for new algorithms to the distribution analyses community. The low voltage network test feeder represents a moderate size urban system that is unbalanced and highly networked. This is the first distribution test feeder developed by the IEEE that contains unbalanced networked components. The 342 node Low Voltage Networked Test System includes many elements that may be found in a networked system: multiple 13.2kV primary feeders, network protectors, a 120/208V grid network, and multiple 277/480V spot networks. This paper presents a brief review of the history of low voltage networks and how they evolved into the modern systems. This paper will then present a description of the 342 Node IEEE Low Voltage Network Test System and power flow results.

  13. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele

    2014-11-11

    It has been widely accepted that software virtualization has a big negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and thereby minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SRIOV). Our solution spanned modifications to the Linux hypervisor as well as the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).

  14. BENCHMARK DOSES FOR CHEMICAL MIXTURES: EVALUATION OF A MIXTURE OF 18 PHAHS.

    EPA Science Inventory

    Benchmark doses (BMDs), defined as doses of a substance that are expected to result in a pre-specified level of "benchmark" response (BMR), have been used for quantifying the risk associated with exposure to environmental hazards. The lower confidence limit of the BMD is used as...
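    A small numeric sketch of the definition, assuming an already-fitted (and here entirely made-up) log-logistic dose-response curve: the BMD is the dose at which the extra risk over background reaches the benchmark response, found by root finding. Computing the lower confidence limit (BMDL) is not shown.

        # Illustrative BMD calculation under an assumed dose-response model.
        import numpy as np
        from scipy.optimize import brentq

        def response(dose, background=0.05, slope=1.5, ed50=10.0):
            """Assumed log-logistic dose-response (illustrative parameters only)."""
            return background + (1 - background) / (1 + (ed50 / dose) ** slope)

        def extra_risk(dose, background=0.05):
            return (response(dose) - background) / (1 - background)

        bmr = 0.10                                     # 10% benchmark response
        bmd = brentq(lambda d: extra_risk(d) - bmr, 1e-6, 1e3)
        print("BMD for BMR=10%:", round(bmd, 3))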

  15. Ventilator-associated pneumonia rates at major trauma centers compared with a national benchmark: a multi-institutional study of the AAST.

    PubMed

    Michetti, Christopher P; Fakhry, Samir M; Ferguson, Pamela L; Cook, Alan; Moore, Forrest O; Gross, Ronald

    2012-05-01

    Ventilator-associated pneumonia (VAP) rates reported by the National Healthcare Safety Network (NHSN) are used as a benchmark and quality measure, yet different rates are reported from many trauma centers. This multi-institutional study was undertaken to elucidate VAP rates at major trauma centers. VAP rate/1,000 ventilator days, diagnostic methods, institutional, and aggregate patient data were collected retrospectively from a convenience sample of trauma centers for 2008 and 2009 and analyzed with descriptive statistics. At 47 participating Level I and II centers, the pooled mean VAP rate was 17.2 versus 8.1 for NHSN (2006-2008). Hospitals' rates were highly variable (range, 1.8-57.6), with 72.3% being above NHSN's mean. Rates differed based on who determined the rate (trauma service, 27.5; infection control or quality or epidemiology, 11.9; or collaborative effort, 19.9) and the frequency with which VAP was excluded based on aspiration or diagnosis before hospital day 5. In 2008 and 2009, blunt trauma patients had higher VAP rates (17.3 and 17.6, respectively) than penetrating patients (11.0 and 10.9, respectively). More centers used a clinical diagnostic strategy (57%) than a bacteriologic strategy (43%). Patients with VAP had a mean Injury Severity Score of 28.7, mean Intensive Care Unit length of stay of 20.8 days, and a 12.2% mortality rate. 50.5% of VAP patients had a traumatic brain injury. VAP rates at major trauma centers are markedly higher than those reported by NHSN and vary significantly among centers. Available data are insufficient to set benchmarks, because it is questionable whether any one data set is truly representative of most trauma centers. Application of a single benchmark to all centers may be inappropriate, and reliable diagnostic and reporting standards are needed. Prospective analysis of a larger data set is warranted, with attention to injury severity, risk factors specific to trauma patients, diagnostic method used, VAP definitions and exclusions, and reporting guidelines. III, prognostic study.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munro, J.F.; Kristal, J.; Thompson, G.

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors in developing the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff, the ones closest to the work, must take ownership of the studies. This avoids the "check the box" mentality associated with some third-party studies. This workshop will provide participants with a basic level of understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  17. Multidisciplinary breast centres in Germany: a review and update of quality assurance through benchmarking and certification.

    PubMed

    Wallwiener, Markus; Brucker, Sara Y; Wallwiener, Diethelm

    2012-06-01

    This review summarizes the rationale for the creation of breast centres and discusses the studies conducted in Germany to obtain proof of principle for a voluntary, external benchmarking programme and proof of concept for third-party dual certification of breast centres and their mandatory quality management systems to the German Cancer Society (DKG) and German Society of Senology (DGS) Requirements of Breast Centres and ISO 9001 or similar. In addition, we report the most recent data on benchmarking and certification of breast centres in Germany. Review and summary of pertinent publications. Literature searches to identify additional relevant studies. Updates from the DKG/DGS programmes. Improvements in surrogate parameters as represented by structural and process quality indicators suggest that outcome quality is improving. The voluntary benchmarking programme has gained wide acceptance among DKG/DGS-certified breast centres. This is evidenced by early results from one of the largest studies in multidisciplinary cancer services research, initiated by the DKG and DGS to implement certified breast centres. The goal of establishing a nationwide network of certified breast centres in Germany can be considered largely achieved. Nonetheless the network still needs to be improved, and there is potential for optimization along the chain of care from mammography screening, interventional diagnosis and treatment through to follow-up. Specialization, guideline-concordant procedures as well as certification and recertification of breast centres remain essential to achieve further improvements in quality of breast cancer care and to stabilize and enhance the nationwide provision of high-quality breast cancer care.

  18. BioPreDyn-bench: a suite of benchmark problems for dynamic modelling in systems biology.

    PubMed

    Villaverde, Alejandro F; Henriques, David; Smallbone, Kieran; Bongard, Sophia; Schmid, Joachim; Cicin-Sain, Damjan; Crombach, Anton; Saez-Rodriguez, Julio; Mauch, Klaus; Balsa-Canto, Eva; Mendes, Pedro; Jaeger, Johannes; Banga, Julio R

    2015-02-20

    Dynamic modelling is one of the cornerstones of systems biology. Many research efforts are currently being invested in the development and exploitation of large-scale kinetic models. The associated problems of parameter estimation (model calibration) and optimal experimental design are particularly challenging. The community has already developed many methods and software packages which aim to facilitate these tasks. However, there is a lack of suitable benchmark problems which allow a fair and systematic evaluation and comparison of these contributions. Here we present BioPreDyn-bench, a set of challenging parameter estimation problems which aspire to serve as reference test cases in this area. This set comprises six problems including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The level of description includes metabolism, transcription, signal transduction, and development. For each problem we provide (i) a basic description and formulation, (ii) implementations ready-to-run in several formats, (iii) computational results obtained with specific solvers, (iv) a basic analysis and interpretation. This suite of benchmark problems can be readily used to evaluate and compare parameter estimation methods. Further, it can also be used to build test problems for sensitivity and identifiability analysis, model reduction and optimal experimental design methods. The suite, including codes and documentation, can be freely downloaded from the BioPreDyn-bench website, https://sites.google.com/site/biopredynbenchmarks/ .
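    The generic task the suite targets can be illustrated with a deliberately tiny example: calibrate the parameters of a small ODE model against noisy data by nonlinear least squares. The model, synthetic data and solver settings below are invented; the actual benchmark problems are far larger and ship with their own implementations and formats.

        # Toy parameter-estimation (model calibration) example: fit the rate
        # constants of a two-step kinetic model A -> B -> C to noisy observations
        # of B using SciPy's ODE solver and least-squares optimizer.
        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        def model(t, y, k1, k2):                 # A -k1-> B -k2-> C
            a, b = y
            return [-k1 * a, k1 * a - k2 * b]

        t_obs = np.linspace(0, 10, 20)
        true = (0.8, 0.3)
        sol = solve_ivp(model, (0, 10), [1.0, 0.0], t_eval=t_obs, args=true)
        data = sol.y[1] + np.random.default_rng(0).normal(0, 0.01, t_obs.size)

        def residuals(params):
            s = solve_ivp(model, (0, 10), [1.0, 0.0], t_eval=t_obs, args=tuple(params))
            return s.y[1] - data

        fit = least_squares(residuals, x0=[0.1, 0.1], bounds=(0, 5))
        print("estimated k1, k2:", fit.x)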

  19. Global Positioning System (GPS) survey of Augustine Volcano, Alaska, August 3-8, 2000: data processing, geodetic coordinates and comparison with prior geodetic surveys

    USGS Publications Warehouse

    Pauk, Benjamin A.; Power, John A.; Lisowski, Mike; Dzurisin, Daniel; Iwatsubo, Eugene Y.; Melbourne, Tim

    2001-01-01

    Between August 3 and 8, 2000, the Alaska Volcano Observatory completed a Global Positioning System (GPS) survey at Augustine Volcano, Alaska. Augustine is a frequently active calcalkaline volcano located in the lower portion of Cook Inlet (fig. 1), with reported eruptions in 1812, 1882, 1909?, 1935, 1964, 1976, and 1986 (Miller et al., 1998). Geodetic measurements using electronic and optical surveying techniques (EDM and theodolite) were begun at Augustine Volcano in 1986. In 1988 and 1989, an island-wide trilateration network comprising 19 benchmarks was completed and measured in its entirety (Power and Iwatsubo, 1998). Partial GPS surveys of the Augustine Island geodetic network were completed in 1992 and 1995; however, neither of these surveys included all marks on the island. Additional GPS measurements of benchmarks A5 and A15 (fig. 2) were made during the summers of 1992, 1993, 1994, and 1996. The goals of the 2000 GPS survey were to: 1) re-measure all existing benchmarks on Augustine Island using a homogeneous set of GPS equipment operated in a consistent manner, 2) add measurements at benchmarks on the western shore of Cook Inlet at distances of 15 to 25 km, 3) add measurements at an existing benchmark (BURR) on Augustine Island that was not previously surveyed, and 4) add additional marks in areas of the island thought to be actively deforming. The entire survey resulted in collection of GPS data at a total of 24 sites (figs. 1 and 2). In this report we describe the methods of GPS data collection and processing used at Augustine during the 2000 survey. We use these data to calculate coordinates and elevations for all 24 sites surveyed. Data from the 2000 survey are then compared to electronic and optical measurements made in 1988 and 1989. This report also contains a general description of all marks surveyed in 2000 and photographs of all new marks established during the 2000 survey (Appendix A).

  20. Benchmarks and Quality Assurance for Online Course Development in Higher Education

    ERIC Educational Resources Information Center

    Wang, Hong

    2008-01-01

    As online education has entered the main stream of the U.S. higher education, quality assurance in online course development has become a critical topic in distance education. This short article summarizes the major benchmarks related to online course development, listing and comparing the benchmarks of the National Education Association (NEA),…

  1. Issues in Institutional Benchmarking of Student Learning Outcomes Using Case Examples

    ERIC Educational Resources Information Center

    Judd, Thomas P.; Pondish, Christopher; Secolsky, Charles

    2013-01-01

    Benchmarking is a process that can take place at both the inter-institutional and intra-institutional level. This paper focuses on benchmarking intra-institutional student learning outcomes using case examples. The findings of the study illustrate the point that when the outcomes statements associated with the mission of the institution are…

  2. Adaptive Critic Neural Network-Based Terminal Area Energy Management and Approach and Landing Guidance

    NASA Technical Reports Server (NTRS)

    Grantham, Katie

    2003-01-01

    Reusable Launch Vehicles (RLVs) have different mission requirements than the Space Shuttle, which is used for benchmark guidance design. Therefore, alternative Terminal Area Energy Management (TAEM) and Approach and Landing (A/L) Guidance schemes can be examined in the interest of cost reduction. A neural network based solution for a finite horizon trajectory optimization problem is presented in this paper. In this approach the optimal trajectory of the vehicle is produced by adaptive critic based neural networks, which were trained off-line to maintain a gradual glideslope.

  3. Comprehensive curation and analysis of global interaction networks in Saccharomyces cerevisiae

    PubMed Central

    Reguly, Teresa; Breitkreutz, Ashton; Boucher, Lorrie; Breitkreutz, Bobby-Joe; Hon, Gary C; Myers, Chad L; Parsons, Ainslie; Friesen, Helena; Oughtred, Rose; Tong, Amy; Stark, Chris; Ho, Yuen; Botstein, David; Andrews, Brenda; Boone, Charles; Troyanskya, Olga G; Ideker, Trey; Dolinski, Kara; Batada, Nizar N; Tyers, Mike

    2006-01-01

    Background The study of complex biological networks and prediction of gene function has been enabled by high-throughput (HTP) methods for detection of genetic and protein interactions. Sparse coverage in HTP datasets may, however, distort network properties and confound predictions. Although a vast number of well substantiated interactions are recorded in the scientific literature, these data have not yet been distilled into networks that enable system-level inference. Results We describe here a comprehensive database of genetic and protein interactions, and associated experimental evidence, for the budding yeast Saccharomyces cerevisiae, as manually curated from over 31,793 abstracts and online publications. This literature-curated (LC) dataset contains 33,311 interactions, on the order of all extant HTP datasets combined. Surprisingly, HTP protein-interaction datasets currently achieve only around 14% coverage of the interactions in the literature. The LC network nevertheless shares attributes with HTP networks, including scale-free connectivity and correlations between interactions, abundance, localization, and expression. We find that essential genes or proteins are enriched for interactions with other essential genes or proteins, suggesting that the global network may be functionally unified. This interconnectivity is supported by a substantial overlap of protein and genetic interactions in the LC dataset. We show that the LC dataset considerably improves the predictive power of network-analysis approaches. The full LC dataset is available at the BioGRID () and SGD () databases. Conclusion Comprehensive datasets of biological interactions derived from the primary literature provide critical benchmarks for HTP methods, augment functional prediction, and reveal system-level attributes of biological networks. PMID:16762047

  4. Harnessing Diversity towards the Reconstructing of Large Scale Gene Regulatory Networks

    PubMed Central

    Yamanaka, Ryota; Kitano, Hiroaki

    2013-01-01

Elucidating gene regulatory networks (GRNs) from large-scale experimental data remains a central challenge in systems biology. Recently, numerous techniques, particularly consensus-driven approaches combining different algorithms, have emerged as a potentially promising strategy for inferring accurate GRNs. Here, we develop a novel consensus inference algorithm, TopkNet, which can integrate multiple algorithms to infer GRNs. Comprehensive performance benchmarking on a cloud computing framework demonstrated that (i) a simple strategy of combining many algorithms does not always lead to performance improvement compared to the cost of consensus and (ii) TopkNet, integrating only high-performance algorithms, provides significant performance improvement over the best individual algorithms and community prediction. These results suggest that a priori determination of high-performance algorithms is key to reconstructing an unknown regulatory network. Similarity among gene-expression datasets can be useful for determining potentially optimal algorithms for reconstruction of unknown regulatory networks: if the expression data associated with a known regulatory network are similar to the data associated with an unknown regulatory network, the optimal algorithms determined for the known network can be repurposed to infer the unknown network. Based on this observation, we developed a quantitative measure of similarity among gene-expression datasets and demonstrated that, if similarity between the two expression datasets is high, TopkNet integrating the algorithms that are optimal for the known dataset performs well on the unknown dataset. The consensus framework, TopkNet, together with the similarity measure proposed in this study provides a powerful strategy towards harnessing the wisdom of the crowds in the reconstruction of unknown regulatory networks. PMID:24278007
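    The abstract does not give TopkNet's aggregation rule, but a minimal illustration of consensus network inference is rank-averaging the edge scores produced by several inference algorithms and keeping the top-k edges. The sketch below assumes each algorithm returns a dense score matrix over regulator-target pairs; the function names and the Borda-style averaging are illustrative choices, not the authors' method.

```python
import numpy as np
from scipy.stats import rankdata

def consensus_edges(score_matrices, top_k=100):
    """Borda-style consensus: average the per-algorithm ranks of every
    candidate regulator->target edge and return the top_k edges.

    score_matrices : list of (n_genes, n_genes) arrays, higher = more confident.
    """
    n = score_matrices[0].shape[0]
    rank_sum = np.zeros(n * n)
    for s in score_matrices:
        # rank edges within each algorithm so that score scales are comparable
        rank_sum += rankdata(s.ravel())
    avg_rank = rank_sum / len(score_matrices)
    order = np.argsort(avg_rank)[::-1]            # best (highest rank) first
    edges = [(r, c) for r, c in (divmod(int(i), n) for i in order) if r != c]
    return edges[:top_k]                           # (regulator, target) index pairs

# toy usage: three fake "algorithms" scoring a 10-gene network
rng = np.random.default_rng(0)
mats = [rng.random((10, 10)) for _ in range(3)]
print(consensus_edges(mats, top_k=5))
```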

  5. Complete graph model for community detection

    NASA Astrophysics Data System (ADS)

    Sun, Peng Gang; Sun, Xiya

    2017-04-01

Community detection raises a number of substantial problems and has attracted attention for many years. This paper develops a new framework that measures the interior and the exterior of a community with the same metric, a complete graph model; in particular, the exterior is modeled as a complete bipartite graph. We partition a network into subnetworks by maximizing the difference between the interior and the exterior of the subnetworks. We compare our approach with state-of-the-art methods on computer-generated networks based on the LFR benchmark as well as on real-world networks. The experimental results indicate that our approach obtains better results for community detection, is capable of splitting irregular networks, and achieves perfect results on the karate and dolphin networks.
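    The abstract describes scoring a candidate community by comparing its interior against a complete graph and its exterior against a complete bipartite graph. One plausible reading, sketched below with NetworkX, scores a subnetwork as internal edge density minus boundary edge density; the exact objective in the paper may differ, so treat this as an illustration of the idea rather than the authors' formula.

```python
import networkx as nx

def complete_graph_score(G, community):
    """Interior density (relative to a complete graph on the community) minus
    exterior density (relative to a complete bipartite graph to the rest).
    An illustrative reading of the abstract, not the paper's exact objective."""
    S = set(community)
    rest = set(G) - S
    internal = G.subgraph(S).number_of_edges()
    boundary = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    max_internal = len(S) * (len(S) - 1) / 2       # edges of a complete graph
    max_boundary = len(S) * len(rest)              # edges of a complete bipartite graph
    interior = internal / max_internal if max_internal else 0.0
    exterior = boundary / max_boundary if max_boundary else 0.0
    return interior - exterior

G = nx.karate_club_graph()
print(complete_graph_score(G, [0, 1, 2, 3, 7, 13]))
```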

  6. Framewise phoneme classification with bidirectional LSTM and other neural network architectures.

    PubMed

    Graves, Alex; Schmidhuber, Jürgen

    2005-01-01

    In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it.
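    For readers unfamiliar with the architecture, a minimal bidirectional LSTM for framewise classification looks like the PyTorch sketch below. The feature and hidden sizes are placeholders, 61 is the usual TIMIT phone inventory, and the paper's full-gradient training details and TIMIT preprocessing are omitted.

```python
import torch
import torch.nn as nn

class FramewiseBLSTM(nn.Module):
    """Bidirectional LSTM that emits one class score vector per input frame."""
    def __init__(self, n_features=26, n_hidden=100, n_classes=61):
        super().__init__()
        self.blstm = nn.LSTM(n_features, n_hidden, batch_first=True,
                             bidirectional=True)
        self.out = nn.Linear(2 * n_hidden, n_classes)   # forward + backward states

    def forward(self, x):             # x: (batch, frames, features)
        h, _ = self.blstm(x)
        return self.out(h)            # (batch, frames, classes)

model = FramewiseBLSTM()
frames = torch.randn(8, 200, 26)      # 8 utterances, 200 frames each
logits = model(frames)
print(logits.shape)                    # torch.Size([8, 200, 61])
```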

  7. SA-SOM algorithm for detecting communities in complex networks

    NASA Astrophysics Data System (ADS)

    Chen, Luogeng; Wang, Yanran; Huang, Xiaoming; Hu, Mengyu; Hu, Fang

    2017-10-01

Community detection is currently a topic of intense interest. Building on the self-organizing map (SOM) algorithm and introducing the idea of self-adaptation (SA), whereby the number of communities is identified automatically, this paper proposes a novel algorithm, SA-SOM, for detecting communities in complex networks. Several representative real-world networks and a set of computer-generated networks from the LFR benchmark are used to verify the accuracy and efficiency of the algorithm. The experimental findings demonstrate that the algorithm identifies communities automatically, accurately and efficiently. Furthermore, the algorithm also achieves higher values of modularity, NMI and density than the SOM algorithm does.

  8. Stock market index prediction using neural networks

    NASA Astrophysics Data System (ADS)

    Komo, Darmadi; Chang, Chein-I.; Ko, Hanseok

    1994-03-01

A neural network approach to stock market index prediction is presented. Actual Dow Jones Industrial Index data from the Wall Street Journal were used as a benchmark in our experiments, in which Radial Basis Function neural networks were designed to model the index over the period from January 1988 to December 1992. Notable success was achieved, with the proposed model producing over 90% accuracy on monthly Dow Jones Industrial Index predictions. The model also captured both moderate and heavy index fluctuations. The experiments conducted in this study demonstrate that the Radial Basis Function neural network is an excellent candidate for stock market index prediction.
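    The abstract does not specify the RBF design. A common construction, sketched below, places Gaussian basis functions on centres chosen from the training inputs and fits the output weights by least squares; the lag structure and the series here are synthetic placeholders, not the Dow Jones data used in the study.

```python
import numpy as np

def rbf_features(X, centres, width):
    """Gaussian RBF activations of inputs X with respect to the given centres."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(1)
# toy monthly index series; in the study this would be the Dow Jones index
series = np.cumsum(rng.normal(0, 1, 80)) + 100
lags = 3
X = np.stack([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]

centres = X[rng.choice(len(X), 10, replace=False)]      # 10 RBF centres
width = np.mean(np.linalg.norm(X - X.mean(0), axis=1))  # heuristic width
Phi = rbf_features(X, centres, width)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)             # linear output layer

pred = rbf_features(X[-1:], centres, width) @ w
print("next-month prediction:", float(pred[0]))
```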

  9. Comparison between extreme learning machine and wavelet neural networks in data classification

    NASA Astrophysics Data System (ADS)

    Yahia, Siwar; Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2017-03-01

The Extreme Learning Machine is a well-known algorithm in the field of machine learning: a feed-forward neural network with a single hidden layer that trains extremely fast while offering good generalization performance. In this paper, we compare the Extreme Learning Machine with wavelet neural networks, another widely used technique. We used six benchmark data sets to evaluate each technique: Wisconsin Breast Cancer, Glass Identification, Ionosphere, Pima Indians Diabetes, Wine Recognition and Iris Plant. Experimental results show that both the Extreme Learning Machine and wavelet neural networks achieve good results.
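    The defining trick of an extreme learning machine is that the single hidden layer is random and fixed, so only the output weights are learned, in closed form via a pseudoinverse. A minimal sketch on one of the cited benchmarks (Iris, loaded through scikit-learn) is shown below; the hidden size and activation are illustrative choices, not those of the paper.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
Y = np.eye(3)[y]                                   # one-hot targets
Xtr, Xte, Ytr, Yte = train_test_split(X, Y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))        # random weights, never trained
b = rng.normal(size=n_hidden)

def hidden(X):
    return np.tanh(X @ W + b)                      # fixed random hidden layer

# only the output weights are learned, by Moore-Penrose pseudoinverse
beta = np.linalg.pinv(hidden(Xtr)) @ Ytr
acc = (hidden(Xte) @ beta).argmax(1) == Yte.argmax(1)
print("ELM test accuracy:", acc.mean())
```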

  10. Benchmarking Deep Learning Models on Large Healthcare Datasets.

    PubMed

    Purushotham, Sanjay; Meng, Chuizheng; Che, Zhengping; Liu, Yan

    2018-06-04

Deep learning models (aka Deep Neural Networks) have revolutionized many fields including computer vision, natural language processing, and speech recognition, and are increasingly being used in clinical healthcare applications. However, few works exist which have benchmarked the performance of the deep learning models with respect to the state-of-the-art machine learning models and prognostic scoring systems on publicly available healthcare datasets. In this paper, we present the benchmarking results for several clinical prediction tasks such as mortality prediction, length of stay prediction, and ICD-9 code group prediction using Deep Learning models, ensemble of machine learning models (Super Learner algorithm), SAPS II and SOFA scores. We used the Medical Information Mart for Intensive Care III (MIMIC-III) (v1.4) publicly available dataset, which includes all patients admitted to an ICU at the Beth Israel Deaconess Medical Center from 2001 to 2012, for the benchmarking tasks. Our results show that deep learning models consistently outperform all the other approaches especially when the 'raw' clinical time series data is used as input features to the models. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. Robust visual tracking via multiple discriminative models with object proposals

    NASA Astrophysics Data System (ADS)

    Zhang, Yuanqiang; Bi, Duyan; Zha, Yufei; Li, Huanyu; Ku, Tao; Wu, Min; Ding, Wenshan; Fan, Zunlin

    2018-04-01

    Model drift is an important reason for tracking failure. In this paper, multiple discriminative models with object proposals are used to improve the model discrimination for relieving this problem. Firstly, the target location and scale changing are captured by lots of high-quality object proposals, which are represented by deep convolutional features for target semantics. And then, through sharing a feature map obtained by a pre-trained network, ROI pooling is exploited to wrap the various sizes of object proposals into vectors of the same length, which are used to learn a discriminative model conveniently. Lastly, these historical snapshot vectors are trained by different lifetime models. Based on entropy decision mechanism, the bad model owing to model drift can be corrected by selecting the best discriminative model. This would improve the robustness of the tracker significantly. We extensively evaluate our tracker on two popular benchmarks, the OTB 2013 benchmark and UAV20L benchmark. On both benchmarks, our tracker achieves the best performance on precision and success rate compared with the state-of-the-art trackers.

  12. Teachers' Perceptions of the Effectiveness of Benchmark Assessment Data to Predict Student Math Grades

    ERIC Educational Resources Information Center

    Lewis, Lawanna M.

    2010-01-01

    The purpose of this correlational quantitative study was to examine the extent to which teachers perceive the use of benchmark assessment data as effective; the extent to which the time spent teaching mathematics is associated with students' mathematics grades, and the extent to which the results of math benchmark assessment influence teachers'…

  13. Unusually High Incidences of Staphylococcus aureus Infection within Studies of Ventilator Associated Pneumonia Prevention Using Topical Antibiotics: Benchmarking the Evidence Base

    PubMed Central

    2018-01-01

    Selective digestive decontamination (SDD, topical antibiotic regimens applied to the respiratory tract) appears effective for preventing ventilator associated pneumonia (VAP) in intensive care unit (ICU) patients. However, potential contextual effects of SDD on Staphylococcus aureus infections in the ICU remain unclear. The S. aureus ventilator associated pneumonia (S. aureus VAP), VAP overall and S. aureus bacteremia incidences within component (control and intervention) groups within 27 SDD studies were benchmarked against 115 observational groups. Component groups from 66 studies of various interventions other than SDD provided additional points of reference. In 27 SDD study control groups, the mean S. aureus VAP incidence is 9.6% (95% CI; 6.9–13.2) versus a benchmark derived from 115 observational groups being 4.8% (95% CI; 4.2–5.6). In nine SDD study control groups the mean S. aureus bacteremia incidence is 3.8% (95% CI; 2.1–5.7) versus a benchmark derived from 10 observational groups being 2.1% (95% CI; 1.1–4.1). The incidences of S. aureus VAP and S. aureus bacteremia within the control groups of SDD studies are each higher than literature derived benchmarks. Paradoxically, within the SDD intervention groups, the incidences of both S. aureus VAP and VAP overall are more similar to the benchmarks. PMID:29300363
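    The group-level comparisons in this abstract are incidence proportions with 95% confidence intervals. For readers wanting to reproduce that style of benchmark, the sketch below computes a Wilson score interval for a proportion; the counts are invented for illustration and are not taken from the study.

```python
from math import sqrt

def wilson_ci(events, n, z=1.96):
    """95% Wilson score interval for a proportion events/n."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# hypothetical control group: 24 S. aureus VAP cases among 250 patients
lo, hi = wilson_ci(24, 250)
print(f"incidence 9.6% (95% CI {lo:.1%}-{hi:.1%})")
```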

  14. Regular Topologies for Gigabit Wide-Area Networks. Volume 1

    NASA Technical Reports Server (NTRS)

    Shacham, Nachum; Denny, Barbara A.; Lee, Diane S.; Khan, Irfan H.; Lee, Danny Y. C.; McKenney, Paul

    1994-01-01

    In general terms, this project aimed at the analysis and design of techniques for very high-speed networking. The formal objectives of the project were to: (1) Identify switch and network technologies for wide-area networks that interconnect a large number of users and can provide individual data paths at gigabit/s rates; (2) Quantitatively evaluate and compare existing and proposed architectures and protocols, identify their strength and growth potentials, and ascertain the compatibility of competing technologies; and (3) Propose new approaches to existing architectures and protocols, and identify opportunities for research to overcome deficiencies and enhance performance. The project was organized into two parts: 1. The design, analysis, and specification of techniques and protocols for very-high-speed network environments. In this part, SRI has focused on several key high-speed networking areas, including Forward Error Control (FEC) for high-speed networks in which data distortion is the result of packet loss, and the distribution of broadband, real-time traffic in multiple user sessions. 2. Congestion Avoidance Testbed Experiment (CATE). This part of the project was done within the framework of the DARTnet experimental T1 national network. The aim of the work was to advance the state of the art in benchmarking DARTnet's performance and traffic control by developing support tools for network experimentation, by designing benchmarks that allow various algorithms to be meaningfully compared, and by investigating new queueing techniques that better satisfy the needs of best-effort and reserved-resource traffic. This document is the final technical report describing the results obtained by SRI under this project. The report consists of three volumes: Volume 1 contains a technical description of the network techniques developed by SRI in the areas of FEC and multicast of real-time traffic. Volume 2 describes the work performed under CATE. Volume 3 contains the source code of all software developed under CATE.

  15. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    NASA Astrophysics Data System (ADS)

    Moon, Hongsik

What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared on performance using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.
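    As a small illustration of the FLOPS-style measurement discussed above, the snippet below times a dense matrix multiplication and reports achieved GFLOPS. It measures only one kernel on one node, so by the dissertation's own argument it says nothing about the network, RAM, or disk parameters that a full parallel-CEM benchmark would also have to cover.

```python
import time
import numpy as np

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

flops = 2 * n**3              # multiply-adds in a dense n x n matrix product
print(f"{flops / elapsed / 1e9:.1f} GFLOPS in {elapsed:.3f} s")
```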

  16. EBR-II Reactor Physics Benchmark Evaluation Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, Chad L.; Lum, Edward S; Stewart, Ryan

    This report provides a reactor physics benchmark evaluation with associated uncertainty quantification for the critical configuration of the April 1986 Experimental Breeder Reactor II Run 138B core configuration.

  17. Pesticides in groundwater of the United States: decadal-scale changes, 1993-2011

    USGS Publications Warehouse

    Toccalino, Patricia L.; Gilliom, Robert J.; Lindsey, Bruce D.; Rupert, Michael G.

    2014-01-01

    The national occurrence of 83 pesticide compounds in groundwater of the United States and decadal-scale changes in concentrations for 35 compounds were assessed for the 20-year period from 1993–2011. Samples were collected from 1271 wells in 58 nationally distributed well networks. Networks consisted of shallow (mostly monitoring) wells in agricultural and urban land-use areas and deeper (mostly domestic and public supply) wells in major aquifers in mixed land-use areas. Wells were sampled once during 1993–2001 and once during 2002–2011. Pesticides were frequently detected (53% of all samples), but concentrations seldom exceeded human-health benchmarks (1.8% of all samples). The five most frequently detected pesticide compounds—atrazine, deethylatrazine, simazine, metolachlor, and prometon—each had statistically significant (p < 0.1) changes in concentrations between decades in one or more categories of well networks nationally aggregated by land use. For agricultural networks, concentrations of atrazine, metolachlor, and prometon decreased from the first decade to the second decade. For urban networks, deethylatrazine concentrations increased and prometon concentrations decreased. For major aquifers, concentrations of deethylatrazine and simazine increased. The directions of concentration changes for individual well networks generally were consistent with changes determined from nationally aggregated data. Altogether, 36 of the 58 individual well networks had statistically significant changes in concentrations of one or more pesticides between decades, with the majority of changes attributed to the five most frequently detected pesticide compounds. The magnitudes of median decadal-scale concentration changes were small—ranging from −0.09 to 0.03 µg/L—and were 35- to 230,000-fold less than human-health benchmarks.

  18. Characterization and Compensation of Network-Level Anomalies in Mixed-Signal Neuromorphic Modeling Platforms

    PubMed Central

    Petrovici, Mihai A.; Vogginger, Bernhard; Müller, Paul; Breitwieser, Oliver; Lundqvist, Mikael; Muller, Lyle; Ehrlich, Matthias; Destexhe, Alain; Lansner, Anders; Schüffny, René; Schemmel, Johannes; Meier, Karlheinz

    2014-01-01

    Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks. PMID:25303102

  19. Characterization and compensation of network-level anomalies in mixed-signal neuromorphic modeling platforms.

    PubMed

    Petrovici, Mihai A; Vogginger, Bernhard; Müller, Paul; Breitwieser, Oliver; Lundqvist, Mikael; Muller, Lyle; Ehrlich, Matthias; Destexhe, Alain; Lansner, Anders; Schüffny, René; Schemmel, Johannes; Meier, Karlheinz

    2014-01-01

    Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.

  20. Current Issues for Higher Education Information Resources Management.

    ERIC Educational Resources Information Center

    CAUSE/EFFECT, 1996

    1996-01-01

    Issues identified as important to the future of information resources management and use in higher education include information policy in a networked environment, distributed computing, integrating information resources and college planning, benchmarking information technology, integrated digital libraries, technology integration in teaching,…

  1. Benchmarks for single-phase flow in fractured porous media

    NASA Astrophysics Data System (ADS)

    Flemisch, Bernd; Berre, Inga; Boon, Wietse; Fumagalli, Alessio; Schwenck, Nicolas; Scotti, Anna; Stefansson, Ivar; Tatomir, Alexandru

    2018-01-01

    This paper presents several test cases intended to be benchmarks for numerical schemes for single-phase fluid flow in fractured porous media. A number of solution strategies are compared, including a vertex and two cell-centred finite volume methods, a non-conforming embedded discrete fracture model, a primal and a dual extended finite element formulation, and a mortar discrete fracture model. The proposed benchmarks test the schemes by increasing the difficulties in terms of network geometry, e.g. intersecting fractures, and physical parameters, e.g. low and high fracture-matrix permeability ratio as well as heterogeneous fracture permeabilities. For each problem, the results presented are the number of unknowns, the approximation errors in the porous matrix and in the fractures with respect to a reference solution, and the sparsity and condition number of the discretized linear system. All data and meshes used in this study are publicly available for further comparisons.

  2. Space Weather Action Plan Ionizing Radiation Benchmarks: Phase 1 update and plans for Phase 2

    NASA Astrophysics Data System (ADS)

    Talaat, E. R.; Kozyra, J.; Onsager, T. G.; Posner, A.; Allen, J. E., Jr.; Black, C.; Christian, E. R.; Copeland, K.; Fry, D. J.; Johnston, W. R.; Kanekal, S. G.; Mertens, C. J.; Minow, J. I.; Pierson, J.; Rutledge, R.; Semones, E.; Sibeck, D. G.; St Cyr, O. C.; Xapsos, M.

    2017-12-01

Changes in the near-Earth radiation environment can affect satellite operations, astronauts in space, commercial space activities, and the radiation environment on aircraft at relevant latitudes or altitudes. Understanding the diverse effects of increased radiation is challenging, but producing ionizing radiation benchmarks will help address these effects. The following areas have been considered in addressing the near-Earth radiation environment: the Earth's trapped radiation belts, the galactic cosmic ray background, and solar energetic-particle events. The radiation benchmarks attempt to account for any change in the near-Earth radiation environment, which, under extreme cases, could present a significant risk to critical infrastructure operations or human health. The goal of these ionizing radiation benchmarks and associated confidence levels is to define at least the radiation intensity as a function of time, particle type, and energy, both for an occurrence frequency of 1 in 100 years and for an intensity level at the theoretical maximum for the event. In this paper, we present the benchmarks that address radiation levels at all applicable altitudes and latitudes in the near-Earth environment, the assumptions made and the associated uncertainties, and the next steps planned for updating the benchmarks.

  3. Benchmarks in Clinical Productivity: A National Comprehensive Cancer Network Survey

    PubMed Central

    Stewart, F. Marc; Wasserman, Robert L.; Bloomfield, Clara D.; Petersdorf, Stephen; Witherspoon, Robert P.; Appelbaum, Frederick R.; Ziskind, Andrew; McKenna, Brian; Dodson, Jennifer M.; Weeks, Jane; Vaughan, William P.; Storer, Barry; Perkel, Sara; Waldinger, Marcy

    2007-01-01

Purpose: Oncologists in academic cancer centers usually generate professional fees that are insufficient to cover salaries and other expenses, despite significant clinical activity; therefore, supplemental funding is frequently required in order to support competitive levels of physician compensation. Relative value units (RVUs) allow comparisons of productivity across institutions and practice locations and provide a reasonable point of reference on which funding decisions can be based. Methods: We reviewed the clinical productivity and other characteristics of oncology physicians practicing in 13 major academic cancer institutions with membership or shared membership in the National Comprehensive Cancer Network (NCCN). The objectives of this study were to develop tools that would lead to better-informed decision making regarding practice management and physician deployment in comprehensive cancer centers and to determine benchmarks of productivity using RVUs accrued by physicians at each institution. Three hundred fifty-three individual physician practices across the 13 NCCN institutions in the survey provided data describing adult hematology/medical oncology and bone marrow/stem-cell transplantation programs. Data from the member institutions participating in the survey included all American Medical Association Current Procedural Terminology (CPT®) codes generated (billed) by each physician during each organization's fiscal year 2003 as a measure of actual clinical productivity. Physician characteristic data included specialty, clinical full-time equivalent (CFTE) status, faculty rank, faculty track, number of years of experience, and total salary by funding source. The average adult hematologist/medical oncologist in our sample would produce 3,745 RVUs if he/she worked full-time as a clinician (100% CFTE), compared with 4,506 RVUs for a 100% CFTE transplant oncologist. Results and Conclusion: Our results suggest specific clinical productivity targets for academic oncologists and provide a methodology for analyzing potential factors associated with clinical productivity and developing clinical productivity targets specific for physicians with a mix of research, administrative, teaching, and clinical salary support. PMID:20859362

  4. Benchmarks in clinical productivity: a national comprehensive cancer network survey.

    PubMed

    Stewart, F Marc; Wasserman, Robert L; Bloomfield, Clara D; Petersdorf, Stephen; Witherspoon, Robert P; Appelbaum, Frederick R; Ziskind, Andrew; McKenna, Brian; Dodson, Jennifer M; Weeks, Jane; Vaughan, William P; Storer, Barry; Perkel, Sara; Waldinger, Marcy

    2007-01-01

    Oncologists in academic cancer centers usually generate professional fees that are insufficient to cover salaries and other expenses, despite significant clinical activity; therefore, supplemental funding is frequently required in order to support competitive levels of physician compensation. Relative value units (RVUs) allow comparisons of productivity across institutions and practice locations and provide a reasonable point of reference on which funding decisions can be based. We reviewed the clinical productivity and other characteristics of oncology physicians practicing in 13 major academic cancer institutions with membership or shared membership in the National Comprehensive Cancer Network (NCCN). The objectives of this study were to develop tools that would lead to better-informed decision making regarding practice management and physician deployment in comprehensive cancer centers and to determine benchmarks of productivity using RVUs accrued by physicians at each institution. Three hundred fifty-three individual physician practices across the 13 NCCN institutions in the survey provided data describing adult hematology/medical oncology and bone marrow/stem-cell transplantation programs. Data from the member institutions participating in the survey included all American Medical Association Current Procedural Terminology (CPT®) codes generated (billed) by each physician during each organization's fiscal year 2003 as a measure of actual clinical productivity. Physician characteristic data included specialty, clinical full-time equivalent (CFTE) status, faculty rank, faculty track, number of years of experience, and total salary by funding source. The average adult hematologist/medical oncologist in our sample would produce 3,745 RVUs if he/she worked full-time as a clinician (100% CFTE), compared with 4,506 RVUs for a 100% CFTE transplant oncologist. Our results suggest specific clinical productivity targets for academic oncologists and provide a methodology for analyzing potential factors associated with clinical productivity and developing clinical productivity targets specific for physicians with a mix of research, administrative, teaching, and clinical salary support.

  5. Reconstruction of metabolic networks from high-throughput metabolite profiling data: in silico analysis of red blood cell metabolism.

    PubMed

    Nemenman, Ilya; Escola, G Sean; Hlavacek, William S; Unkefer, Pat J; Unkefer, Clifford J; Wall, Michael E

    2007-12-01

    We investigate the ability of algorithms developed for reverse engineering of transcriptional regulatory networks to reconstruct metabolic networks from high-throughput metabolite profiling data. For benchmarking purposes, we generate synthetic metabolic profiles based on a well-established model for red blood cell metabolism. A variety of data sets are generated, accounting for different properties of real metabolic networks, such as experimental noise, metabolite correlations, and temporal dynamics. These data sets are made available online. We use ARACNE, a mainstream algorithm for reverse engineering of transcriptional regulatory networks from gene expression data, to predict metabolic interactions from these data sets. We find that the performance of ARACNE on metabolic data is comparable to that on gene expression data.
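    ARACNE scores every pair of variables by mutual information and then prunes likely indirect edges with the data-processing inequality. The sketch below implements that two-step idea with simple histogram-based MI estimates on synthetic profiles; it is a simplified stand-in for the actual ARACNE implementation used in the study, and the pruning rule here is a coarse approximation of ARACNE's triplet test.

```python
import numpy as np
from itertools import combinations

def mutual_info(x, y, bins=8):
    """Histogram estimate of mutual information between two profiles."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def aracne_like(profiles, eps=0.0):
    """profiles: (n_metabolites, n_samples). Returns the retained edge set."""
    n = profiles.shape[0]
    mi = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        mi[i, j] = mi[j, i] = mutual_info(profiles[i], profiles[j])
    edges = set(combinations(range(n), 2))
    # data-processing inequality: drop the weakest edge of each triangle
    for i, j, k in combinations(range(n), 3):
        trio = [(mi[i, j], (i, j)), (mi[j, k], (j, k)), (mi[i, k], (i, k))]
        if min(trio)[0] < max(trio)[0] - eps:
            edges.discard(min(trio)[1])
    return edges

rng = np.random.default_rng(0)
data = rng.normal(size=(6, 200))                   # 6 synthetic metabolite profiles
data[1] = data[0] + 0.1 * rng.normal(size=200)     # one strongly coupled pair
print(aracne_like(data))
```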

  6. Direct Adaptive Aircraft Control Using Dynamic Cell Structure Neural Networks

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1997-01-01

A Dynamic Cell Structure (DCS) Neural Network was developed which learns topology representing networks (TRNS) of F-15 aircraft aerodynamic stability and control derivatives. The network is integrated into a direct adaptive tracking controller. The combination produces a robust adaptive architecture capable of handling multiple accident and off-nominal flight scenarios. This paper describes the DCS network and modifications to the parameter estimation procedure. The work represents one step towards an integrated real-time reconfiguration control architecture for rapid prototyping of new aircraft designs. Performance was evaluated using three off-line benchmarks and on-line nonlinear Virtual Reality simulation. Flight control was evaluated under scenarios including differential stabilator lock, soft sensor failure, control and stability derivative variations, and air turbulence.

  7. Paradoxical Acinetobacter-associated ventilator-associated pneumonia incidence rates within prevention studies using respiratory tract applications of topical polymyxin: benchmarking the evidence base.

    PubMed

    Hurley, J C

    2018-04-10

    Regimens containing topical polymyxin appear to be more effective in preventing ventilator-associated pneumonia (VAP) than other methods. To benchmark the incidence rates of Acinetobacter-associated VAP (AAVAP) within component (control and intervention) groups from concurrent controlled studies of polymyxin compared with studies of various VAP prevention methods other than polymyxin (non-polymyxin studies). An AAVAP benchmark was derived using data from 77 observational groups without any VAP prevention method under study. Data from 41 non-polymyxin studies provided additional points of reference. The benchmarking was undertaken by meta-regression using generalized estimating equation methods. Within 20 studies of topical polymyxin, the mean AAVAP was 4.6% [95% confidence interval (CI) 3.0-6.9] and 3.7% (95% CI 2.0-5.3) for control and intervention groups, respectively. In contrast, the AAVAP benchmark was 1.5% (95% CI 1.2-2.0). In the AAVAP meta-regression model, group origin from a trauma intensive care unit (+0.55; +0.16 to +0.94, P = 0.006) or membership of a polymyxin control group (+0.64; +0.21 to +1.31, P = 0.023), but not membership of a polymyxin intervention group (+0.24; -0.37 to +0.84, P = 0.45), were significant positive correlates. The mean incidence of AAVAP within the control groups of studies of topical polymyxin is more than double the benchmark, whereas the incidence rates within the groups of non-polymyxin studies and, paradoxically, polymyxin intervention groups are more similar to the benchmark. These incidence rates, which are paradoxical in the context of an apparent effect against VAP within controlled trials of topical polymyxin-based interventions, force a re-appraisal. Copyright © 2018 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  8. Analyzing GAIAN Database (GaianDB) on a Tactical Network

    DTIC Science & Technology

    2015-11-30

    we connected 3 Raspberry Pi’s running GaianDB and our augmented version of splatform to a network of 3 CSRs. The Raspberry Pi is a low power, low...based on Debian from a connected secure digital high capacity (SDHC) card or a universal serial bus (USB) device. The Raspberry Pi comes equipped with...requirements, capabilities, and cost make the Raspberry Pi a useful device for sensor experimentation. From there, we performed 3 types of benchmarks

  9. Decreasing unnecessary utilization in acute bronchiolitis care: results from the value in inpatient pediatrics network.

    PubMed

    Ralston, Shawn; Garber, Matthew; Narang, Steve; Shen, Mark; Pate, Brian; Pope, John; Lossius, Michele; Croland, Trina; Bennett, Jeff; Jewell, Jennifer; Krugman, Scott; Robbins, Elizabeth; Nazif, Joanne; Liewehr, Sheila; Miller, Ansley; Marks, Michelle; Pappas, Rita; Pardue, Jeanann; Quinonez, Ricardo; Fine, Bryan R; Ryan, Michael

    2013-01-01

    Acute viral bronchiolitis is the most common diagnosis resulting in hospital admission in pediatrics. Utilization of non-evidence-based therapies and testing remains common despite a large volume of evidence to guide quality improvement efforts. Our objective was to reduce utilization of unnecessary therapies in the inpatient care of bronchiolitis across a diverse network of clinical sites. We formed a voluntary quality improvement collaborative of pediatric hospitalists for the purpose of benchmarking the use of bronchodilators, steroids, chest radiography, chest physiotherapy, and viral testing in bronchiolitis using hospital administrative data. We shared resources within the network, including protocols, scores, order sets, and key bibliographies, and established group norms for decreasing utilization. Aggregate data on 11,568 hospitalizations for bronchiolitis from 17 centers was analyzed for this report. The network was organized in 2008. By 2010, we saw a 46% reduction in overall volume of bronchodilators used, a 3.4 dose per patient absolute decrease in utilization (95% confidence interval [CI] 1.4-5.8). Overall exposure to any dose of bronchodilator decreased by 12 percentage points as well (95% CI 5%-25%). There was also a statistically significant decline in chest physiotherapy usage, but not for steroids, chest radiography, or viral testing. Benchmarking within a voluntary pediatric hospitalist collaborative facilitated decreased utilization of bronchodilators and chest physiotherapy in bronchiolitis. Copyright © 2012 Society of Hospital Medicine.

  10. Simple techniques for improving deep neural network outcomes on commodity hardware

    NASA Astrophysics Data System (ADS)

    Colina, Nicholas Christopher A.; Perez, Carlos E.; Paraan, Francis N. C.

    2017-08-01

We benchmark improvements in the performance of deep neural networks (DNN) on the MNIST data set upon implementing two simple modifications to the algorithm that have little overhead computational cost. First is GPU parallelization on a commodity graphics card, and second is initializing the DNN with random orthogonal weight matrices prior to optimization. Eigenspectra analysis of the weight matrices reveals that the initially orthogonal matrices remain nearly orthogonal after training. The probability distributions from which these orthogonal matrices are drawn are also shown to significantly affect the performance of these deep neural networks.
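    The second modification described here, initializing with random orthogonal weight matrices, can be reproduced with a QR decomposition of a Gaussian random matrix, as in the NumPy sketch below. The abstract does not give the authors' framework or layer sizes, so those details are placeholders.

```python
import numpy as np

def random_orthogonal(rows, cols, rng=np.random.default_rng(0)):
    """Random (semi-)orthogonal matrix via QR of a Gaussian matrix."""
    a = rng.normal(size=(max(rows, cols), min(rows, cols)))
    q, r = np.linalg.qr(a)
    q = q * np.sign(np.diag(r))       # fix signs so the distribution is uniform
    return q[:rows, :cols] if rows >= cols else q[:cols, :rows].T

W = random_orthogonal(784, 256)        # e.g. an MNIST input layer
print(np.allclose(W.T @ W, np.eye(256), atol=1e-8))   # columns are orthonormal
```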

  11. Identification of overlapping communities and their hierarchy by locally calculating community-changing resolution levels

    NASA Astrophysics Data System (ADS)

    Havemann, Frank; Heinz, Michael; Struck, Alexander; Gläser, Jochen

    2011-01-01

    We propose a new local, deterministic and parameter-free algorithm that detects fuzzy and crisp overlapping communities in a weighted network and simultaneously reveals their hierarchy. Using a local fitness function, the algorithm greedily expands natural communities of seeds until the whole graph is covered. The hierarchy of communities is obtained analytically by calculating resolution levels at which communities grow rather than numerically by testing different resolution levels. This analytic procedure is not only more exact than its numerical alternatives such as LFM and GCE but also much faster. Critical resolution levels can be identified by searching for intervals in which large changes of the resolution do not lead to growth of communities. We tested our algorithm on benchmark graphs and on a network of 492 papers in information science. Combined with a specific post-processing, the algorithm gives much more precise results on LFR benchmarks with high overlap compared to other algorithms and performs very similarly to GCE.
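    The core greedy step the abstract relies on, expanding a seed by adding the neighbour that most increases a local fitness function, can be illustrated as below. The fitness used here, internal degree over total degree of the community at a fixed resolution, is the standard LFM-style fitness; the analytic resolution-level computation that distinguishes the authors' algorithm is not reproduced in this sketch.

```python
import networkx as nx

def fitness(G, community, alpha=1.0):
    """LFM-style fitness: k_in / (k_in + k_out)^alpha for a node set."""
    S = set(community)
    k_in = 2 * G.subgraph(S).number_of_edges()
    k_out = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    return k_in / (k_in + k_out) ** alpha if (k_in + k_out) else 0.0

def expand_seed(G, seed, alpha=1.0):
    """Greedily grow the natural community of `seed` while fitness improves."""
    community = {seed}
    improved = True
    while improved:
        improved = False
        neighbours = set().union(*(G[v] for v in community)) - community
        best, best_gain = None, 0.0
        for n in neighbours:
            gain = fitness(G, community | {n}, alpha) - fitness(G, community, alpha)
            if gain > best_gain:
                best, best_gain = n, gain
        if best is not None:
            community.add(best)
            improved = True
    return community

G = nx.karate_club_graph()
print(sorted(expand_seed(G, seed=33)))
```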

  12. Similarity indices of meteo-climatic gauging stations: definition and comparison.

    PubMed

    Barca, Emanuele; Bruno, Delia Evelina; Passarella, Giuseppe

    2016-07-01

    Space-time dependencies among monitoring network stations have been investigated to detect and quantify similarity relationships among gauging stations. In this work, besides the well-known rank correlation index, two new similarity indices have been defined and applied to compute the similarity matrix related to the Apulian meteo-climatic monitoring network. The similarity matrices can be applied to address reliably the issue of missing data in space-time series. In order to establish the effectiveness of the similarity indices, a simulation test was then designed and performed with the aim of estimating missing monthly rainfall rates in a suitably selected gauging station. The results of the simulation allowed us to evaluate the effectiveness of the proposed similarity indices. Finally, the multiple imputation by chained equations method was used as a benchmark to have an absolute yardstick for comparing the outcomes of the test. In conclusion, the new proposed multiplicative similarity index resulted at least as reliable as the selected benchmark.
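    As a concrete illustration of how a station similarity matrix supports gap filling, the sketch below computes Spearman rank correlations between stations and imputes one missing monthly value from the most similar station. The new multiplicative index proposed in the paper is not specified in the abstract, so rank correlation stands in for it, and the rainfall data are synthetic.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# toy monthly rainfall for 4 stations over 60 months, sharing a regional signal
base = rng.gamma(2.0, 30.0, size=60)
rain = np.clip(base * rng.uniform(0.8, 1.2, size=(4, 1))
               + rng.normal(0, 5, size=(4, 60)), 0, None)
rain[2, 17] = np.nan                                # the value to impute

obs = ~np.isnan(rain).any(axis=0)                   # months observed everywhere
sim, _ = spearmanr(rain[:, obs], axis=1)            # station-by-station similarity
np.fill_diagonal(sim, -np.inf)                      # ignore self-similarity

target, month = 2, 17
donor = int(np.argmax(sim[target]))                 # most similar station
# simple ratio adjustment between donor and target long-term means
scale = np.nanmean(rain[target]) / np.nanmean(rain[donor])
print("imputed value:", rain[donor, month] * scale)
```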

  13. Hierarchical Artificial Bee Colony Algorithm for RFID Network Planning Optimization

    PubMed Central

    Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong

    2014-01-01

    This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization, called HABC, to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species can be aggregated by the subpopulations from lower level. In the bottom level, each subpopulation employing the canonical ABC method searches the part-dimensional optimum in parallel, which can be constructed into a complete solution for the upper level. At the same time, the comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. Then HABC is used for solving the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP, in terms of optimization accuracy and computation robustness. PMID:24592200

  14. Hierarchical artificial bee colony algorithm for RFID network planning optimization.

    PubMed

    Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong

    2014-01-01

    This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization, called HABC, to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species can be aggregated by the subpopulations from lower level. In the bottom level, each subpopulation employing the canonical ABC method searches the part-dimensional optimum in parallel, which can be constructed into a complete solution for the upper level. At the same time, the comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. Then HABC is used for solving the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP, in terms of optimization accuracy and computation robustness.

  15. Model risk for European-style stock index options.

    PubMed

    Gençay, Ramazan; Gibson, Rajna

    2007-01-01

    In empirical modeling, there have been two strands for pricing in the options literature, namely the parametric and nonparametric models. Often, the support for the nonparametric methods is based on a benchmark such as the Black-Scholes (BS) model with constant volatility. In this paper, we study the stochastic volatility (SV) and stochastic volatility random jump (SVJ) models as parametric benchmarks against feedforward neural network (FNN) models, a class of neural network models. Our choice for FNN models is due to their well-studied universal approximation properties of an unknown function and its partial derivatives. Since the partial derivatives of an option pricing formula are risk pricing tools, an accurate estimation of the unknown option pricing function is essential for pricing and hedging. Our findings indicate that FNN models offer themselves as robust option pricing tools, over their sophisticated parametric counterparts in predictive settings. There are two routes to explain the superiority of FNN models over the parametric models in forecast settings. These are nonnormality of return distributions and adaptive learning.
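    To make the comparison concrete, the sketch below fits a small feedforward network to synthetic call prices generated from the Black-Scholes formula and checks out-of-sample error. This is a toy stand-in for the paper's study, which used market index option data and SV/SVJ parametric benchmarks; the input features, network size, and price scaling are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call (used here only to generate data)."""
    d1 = (np.log(S / K) + (r + sigma**2 / 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(0)
n = 5000
S = np.full(n, 100.0)
K = rng.uniform(80, 120, n)
T = rng.uniform(0.05, 1.0, n)
sigma = rng.uniform(0.1, 0.4, n)
r = 0.03
price = bs_call(S, K, T, r, sigma)

# inputs: moneyness, maturity, volatility; target: price scaled by strike
X = np.column_stack([S / K, T, sigma])
y = price / K
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X[:4000], y[:4000])
err = np.abs(net.predict(X[4000:]) - y[4000:]).mean()
print("mean absolute pricing error (per unit strike):", err)
```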

  16. Monitoring land subsidence in Sacramento Valley, California, using GPS

    USGS Publications Warehouse

    Blodgett, J.C.; Ikehara, M.E.; Williams, Gary E.

    1990-01-01

    Land subsidence measurement is usually based on a comparison of bench-mark elevations surveyed at different times. These bench marks, established for mapping or the national vertical control network, are not necessarily suitable for measuring land subsidence. Also, many bench marks have been destroyed or are unstable. Conventional releveling of the study area would be costly and would require several years to complete. Differences of as much as 3.9 ft between recent leveling and published bench-mark elevations have been documented at seven locations in the Sacramento Valley. Estimates of land subsidence less than about 0.3 ft are questionable because elevation data are based on leveling and adjustment procedures that occured over many years. A new vertical control network based on the Global Positioning System (GPS) provides highly accurate vertical control data at relatively low costs, and the survey points can be placed where needed to obtain adequate areal coverage of the area affected by land subsidence.

  17. Knowledge extraction from evolving spiking neural networks with rank order population coding.

    PubMed

    Soltic, Snjezana; Kasabov, Nikola

    2010-12-01

    This paper demonstrates how knowledge can be extracted from evolving spiking neural networks with rank order population coding. Knowledge discovery is a very important feature of intelligent systems. Yet, a disproportionally small amount of research is centered on the issue of knowledge extraction from spiking neural networks which are considered to be the third generation of artificial neural networks. The lack of knowledge representation compatibility is becoming a major detriment to end users of these networks. We show that a high-level knowledge can be obtained from evolving spiking neural networks. More specifically, we propose a method for fuzzy rule extraction from an evolving spiking network with rank order population coding. The proposed method was used for knowledge discovery on two benchmark taste recognition problems where the knowledge learnt by an evolving spiking neural network was extracted in the form of zero-order Takagi-Sugeno fuzzy IF-THEN rules.

  18. Single image super-resolution based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia

    2018-03-01

We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) images and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network that takes the LR image as input and outputs the HR image. Our network uses 5 convolution layers, whose kernel sizes include 5×5, 3×3 and 1×1. In our proposed network, we use residual learning and combine different sizes of convolution kernels at the same layer. The experimental results show that our proposed method performs better than existing methods in reconstruction quality metrics and human visual quality on benchmark images.
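    A minimal PyTorch rendering of the described architecture, five convolutional layers mixing 5×5, 3×3 and 1×1 kernels with a global residual connection, is sketched below. The channel counts and the particular arrangement of kernel sizes are assumptions; the paper's exact way of combining kernel sizes within a layer is not given in the abstract.

```python
import torch
import torch.nn as nn

class SmallSRNet(nn.Module):
    """Five-layer super-resolution CNN with residual learning:
    the network predicts the detail to add to the interpolated LR input."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 16, 1),            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):           # x: bicubic-upscaled LR image
        return x + self.body(x)     # residual learning

net = SmallSRNet()
lr_upscaled = torch.randn(1, 1, 64, 64)
print(net(lr_upscaled).shape)       # torch.Size([1, 1, 64, 64])
```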

  19. Accurate detection of hierarchical communities in complex networks based on nonlinear dynamical evolution

    NASA Astrophysics Data System (ADS)

    Zhuo, Zhao; Cai, Shi-Min; Tang, Ming; Lai, Ying-Cheng

    2018-04-01

    One of the most challenging problems in network science is to accurately detect communities at distinct hierarchical scales. Most existing methods are based on structural analysis and manipulation, which are NP-hard. We articulate an alternative, dynamical evolution-based approach to the problem. The basic principle is to computationally implement a nonlinear dynamical process on all nodes in the network with a general coupling scheme, creating a networked dynamical system. Under a proper system setting and with an adjustable control parameter, the community structure of the network would "come out" or emerge naturally from the dynamical evolution of the system. As the control parameter is systematically varied, the community hierarchies at different scales can be revealed. As a concrete example of this general principle, we exploit clustered synchronization as a dynamical mechanism through which the hierarchical community structure can be uncovered. In particular, for quite arbitrary choices of the nonlinear nodal dynamics and coupling scheme, decreasing the coupling parameter from the global synchronization regime, in which the dynamical states of all nodes are perfectly synchronized, can lead to a weaker type of synchronization organized as clusters. We demonstrate the existence of optimal choices of the coupling parameter for which the synchronization clusters encode accurate information about the hierarchical community structure of the network. We test and validate our method using a standard class of benchmark modular networks with two distinct hierarchies of communities and a number of empirical networks arising from the real world. Our method is computationally extremely efficient, eliminating completely the NP-hard difficulty associated with previous methods. The basic principle of exploiting dynamical evolution to uncover hidden community organizations at different scales represents a "game-change" type of approach to addressing the problem of community detection in complex networks.
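    A bare-bones illustration of the dynamical principle, running coupled oscillators on the graph at a reduced coupling and reading communities off the synchronization clusters, is given below using Kuramoto phase oscillators and NetworkX. The paper's formulation allows quite arbitrary nodal dynamics and coupling schemes; this particular choice of dynamics, the coupling value, and the phase-grouping threshold are assumptions for the sketch and may need tuning to produce a clean split on a given network.

```python
import numpy as np
import networkx as nx

def kuramoto_clusters(G, coupling=0.15, steps=3000, dt=0.01, tol=0.3):
    """Integrate Kuramoto dynamics on G and group nodes whose final phases
    lie within `tol` of each other (a crude cluster read-out)."""
    A = nx.to_numpy_array(G)
    n = len(G)
    rng = np.random.default_rng(0)
    omega = rng.normal(0, 0.5, n)             # heterogeneous natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]     # diff[i, j] = theta_j - theta_i
        theta += dt * (omega + coupling * (A * np.sin(diff)).sum(axis=1))
    order = np.argsort(theta % (2 * np.pi))
    clusters, current = [], [order[0]]
    for a, b in zip(order, order[1:]):
        # circular distance between adjacent phases
        if abs((theta[b] - theta[a] + np.pi) % (2 * np.pi) - np.pi) < tol:
            current.append(b)
        else:
            clusters.append(current)
            current = [b]
    clusters.append(current)
    return clusters

G = nx.planted_partition_graph(2, 16, 0.8, 0.05, seed=1)
for c in kuramoto_clusters(G):
    print(sorted(c))
```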

  20. Moving to a Modernized Height Reference System in Canada: Rationale, Status and Plans

    NASA Astrophysics Data System (ADS)

    Veronneau, M.; Huang, J.

    2007-05-01

A modern society depends on a common coordinate reference system through which geospatial information can be interrelated and exploited reliably. For height measurements this requires the ability to measure mean sea level elevations easily, accurately, and at the lowest possible cost. The current national reference system for elevations, the Canadian Geodetic Vertical Datum of 1928 (CGVD28), offers only partial geographic coverage of the Canadian territory and is affected by inaccuracies that are becoming more apparent as users move to space-based technologies such as GPS. Furthermore, the maintenance and expansion of the national vertical network using spirit-levelling, a costly, time consuming and labour intensive proposition, has only been minimally funded over the past decade. It is now generally accepted that the most sustainable alternative for the realization of a national vertical datum is a gravimetric geoid model. This approach defines the datum in relation to an ellipsoid, making it compatible with space-based technologies for positioning. While simplifying access to heights above mean sea level all across the Canadian territory, this approach imposes additional demands on the quality of the geoid model. These are being met by recent and upcoming space gravimetry missions that have and will be measuring the Earth's gravity field with increasing and unprecedented accuracy. To maintain compatibility with the CGVD28 datum materialized at benchmarks, the current first-order levelling can be readjusted by constraining geoid heights at selected stations of the Canadian Base Network. The new reference would change CGVD28 heights of benchmarks by up to 1 m across Canada. However, local height differences between benchmarks would maintain a relative precision of a few cm or better. CGVD28 will co-exist with the new height reference as long as it will be required, but it will undoubtedly disappear as benchmarks are destroyed over time. The adoption of GNSS technologies for positioning should naturally move users to the new height reference and offer the possibility of transferring heights over longer distances, within the precision of the geoid model. This transition will also reduce user dependency on a dense network of benchmarks and offer the possibility for geodetic agencies to provide the reference frame with a reduced number of 3D control points. While the rationale for moving to a modernized height system is easily understood, the acceptance of the new system by users will only occur gradually as they adopt new technologies and procedures to access the height reference. A stakeholder consultation indicates user readiness and an implementation plan is starting to unfold. This presentation will look at the current state of the geoid model and control networks that will support the modernized height system. Results of the consultation and the recommendations regarding the roles and responsibilities of the various stakeholders involved in implementing the transition will also be reported.

  1. Benchmarking in national health service procurement in Scotland.

    PubMed

    Walker, Scott; Masson, Ron; Telford, Ronnie; White, David

    2007-11-01

    The paper reports the results of a study on benchmarking activities undertaken by the procurement organization within the National Health Service (NHS) in Scotland, namely National Procurement (previously Scottish Healthcare Supplies Contracts Branch). NHS performance is of course politically important, and benchmarking is increasingly seen as a means to improve performance, so the study was carried out to determine if the current benchmarking approaches could be enhanced. A review of the benchmarking activities used by the private sector, local government and NHS organizations was carried out to establish a framework of the motivations, benefits, problems and costs associated with benchmarking. This framework was used to carry out the research through case studies and a questionnaire survey of NHS procurement organizations both in Scotland and other parts of the UK. Nine of the 16 Scottish Health Boards surveyed reported carrying out benchmarking during the last three years. The findings of the research were that there were similarities in approaches between local government and NHS Scotland Health, but differences between NHS Scotland and other UK NHS procurement organizations. Benefits were seen as significant and it was recommended that National Procurement should pursue the formation of a benchmarking group with members drawn from NHS Scotland and external benchmarking bodies to establish measures to be used in benchmarking across the whole of NHS Scotland.

  2. Toxicological benchmarks for screening potential contaminants of concern for effects on terrestrial plants: 1994 revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Will, M.E.; Suter, G.W. II

    1994-09-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.
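
    The screening rule described above reduces to a simple conjunction test per chemical. The sketch below illustrates that logic only; the function name and every concentration value are hypothetical and are not taken from the report.

```python
# Minimal sketch of the screening rule described above: a chemical is flagged as a
# contaminant of potential concern only when its measured ambient soil concentration
# exceeds BOTH the phytotoxicity benchmark and the background concentration for the
# soil type. All names and values below are hypothetical.

def screen_contaminants(measured, benchmarks, background):
    """Return chemicals whose measured concentration (mg/kg) exceeds both
    the phytotoxicity benchmark and the background concentration."""
    flagged = []
    for chemical, conc in measured.items():
        bench = benchmarks.get(chemical)
        bkg = background.get(chemical)
        if bench is None or bkg is None:
            continue  # no benchmark or background value: cannot screen this chemical
        if conc > bench and conc > bkg:
            flagged.append(chemical)
    return flagged

measured = {"zinc": 410.0, "lead": 45.0}      # hypothetical site measurements
benchmarks = {"zinc": 50.0, "lead": 50.0}     # hypothetical phytotoxicity benchmarks
background = {"zinc": 60.0, "lead": 20.0}     # hypothetical background levels
print(screen_contaminants(measured, benchmarks, background))  # ['zinc']
```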

  3. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Terrestrial Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II

    1993-01-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.

  4. M-Finder: Uncovering functionally associated proteins from interactome data integrated with GO annotations

    PubMed Central

    2013-01-01

    Background: Protein-protein interactions (PPIs) play a key role in understanding the mechanisms of cellular processes. The availability of interactome data has catalyzed the development of computational approaches to elucidate functional behaviors of proteins on a system level. Gene Ontology (GO) and its annotations are a significant resource for functional characterization of proteins. Because of wide coverage, GO data have often been adopted as a benchmark for protein function prediction on the genomic scale. Results: We propose a computational approach, called M-Finder, for functional association pattern mining. This method employs semantic analytics to integrate the genome-wide PPIs with GO data. We also introduce an interactive web application tool that visualizes a functional association network linked to a protein specified by a user. The proposed approach comprises two major components. First, the PPIs that have been generated by high-throughput methods are weighted in terms of their functional consistency using GO and its annotations. We assess two advanced semantic similarity metrics which quantify the functional association level of each interacting protein pair. We demonstrate that these measures outperform the other existing methods by evaluating their agreement to other biological features, such as sequence similarity, the presence of common Pfam domains, and core PPIs. Second, the information flow-based algorithm is employed to discover a set of proteins functionally associated with the protein in a query and their links efficiently. This algorithm reconstructs a functional association network of the query protein. The output network size can be flexibly determined by parameters. Conclusions: M-Finder provides a useful framework to investigate functional association patterns with any protein. This software will also allow users to perform further systematic analysis of a set of proteins for any specific function. It is available online at http://bionet.ecs.baylor.edu/mfinder PMID:24565382

  5. A Visual Evaluation Study of Graph Sampling Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Fangyan; Zhang, Song; Wong, Pak C.

    2017-01-29

    We evaluate a dozen prevailing graph-sampling techniques with an ultimate goal to better visualize and understand big and complex graphs that exhibit different properties and structures. The evaluation uses eight benchmark datasets with four different graph types collected from Stanford Network Analysis Platform and NetworkX to give a comprehensive comparison of various types of graphs. The study provides a practical guideline for visualizing big graphs of different sizes and structures. The paper discusses results and important observations from the study.

  6. A Standard-Setting Study to Establish College Success Criteria to Inform the SAT® College and Career Readiness Benchmark. Research Report 2012-3

    ERIC Educational Resources Information Center

    Kobrin, Jennifer L.; Patterson, Brian F.; Wiley, Andrew; Mattern, Krista D.

    2012-01-01

    In 2011, the College Board released its SAT college and career readiness benchmark, which represents the level of academic preparedness associated with a high likelihood of college success and completion. The goal of this study, which was conducted in 2008, was to establish college success criteria to inform the development of the benchmark. The…

  7. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.

  8. Evaluation of in-network adaptation of scalable high efficiency video coding (SHVC) in mobile environments

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio

    2014-02-01

    High Efficiency Video Coding (HEVC), the latest video compression standard (also known as H.265), can deliver video streams of comparable quality to the current H.264 Advanced Video Coding (H.264/AVC) standard with a 50% reduction in bandwidth. Research into SHVC, the scalable extension to the HEVC standard, is still in its infancy. One important area for investigation is whether, given the greater compression ratio of HEVC (and SHVC), the loss of packets containing video content will have a greater impact on the quality of delivered video than is the case with H.264/AVC or its scalable extension H.264/SVC. In this work we empirically evaluate the layer-based, in-network adaptation of video streams encoded using SHVC in situations where dynamically changing bandwidths and datagram loss ratios require the real-time adaptation of video streams. Through the use of extensive experimentation, we establish a comprehensive set of benchmarks for SHVC-based high-definition video streaming in loss-prone network environments such as those commonly found in mobile networks. Among other results, we highlight that packet losses of only 1% can lead to a substantial reduction in PSNR of over 3 dB and error propagation in over 130 pictures following the one in which the loss occurred. This work would be one of the earliest studies in this cutting-edge area that reports benchmark evaluation results for the effects of datagram loss on SHVC picture quality and offers empirical and analytical insights into SHVC adaptation to lossy, mobile networking conditions.

  9. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Soil and Litter Invertebrates and Heterotrophic Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Will, M.E.

    1994-01-01

    This report presents a standard method for deriving benchmarks for the purpose of "contaminant screening," performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. The work was performed under Work Breakdown Structure 1.4.12.2.3.04.07.02 (Activity Data Sheet 8304). In addition, this report presents sets of data concerning the effects of chemicals in soil on invertebrates and soil microbial processes, benchmarks for chemicals potentially associated with United States Department of Energy sites, and literature describing the experiments from which data were drawn for benchmark derivation.

  10. International land Model Benchmarking (ILAMB) Package v002.00

    DOE Data Explorer

    Collier, Nathaniel [Oak Ridge National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory; Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory

    2016-05-09

    As a contribution to International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  11. International land Model Benchmarking (ILAMB) Package v001.00

    DOE Data Explorer

    Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory

    2016-05-02

    As a contribution to International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  12. Reconstruction of stochastic temporal networks through diffusive arrival times

    NASA Astrophysics Data System (ADS)

    Li, Xun; Li, Xiang

    2017-06-01

    Temporal networks have opened a new dimension in defining and quantification of complex interacting systems. Our ability to identify and reproduce time-resolved interaction patterns is, however, limited by the restricted access to empirical individual-level data. Here we propose an inverse modelling method based on first-arrival observations of the diffusion process taking place on temporal networks. We describe an efficient coordinate-ascent implementation for inferring stochastic temporal networks that builds in particular but not exclusively on the null model assumption of mutually independent interaction sequences at the dyadic level. The results of benchmark tests applied on both synthesized and empirical network data sets confirm the validity of our algorithm, showing the feasibility of statistically accurate inference of temporal networks only from moderate-sized samples of diffusion cascades. Our approach provides an effective and flexible scheme for the temporally augmented inverse problems of network reconstruction and has potential in a broad variety of applications.

  13. Parameterized centrality metric for network analysis

    NASA Astrophysics Data System (ADS)

    Ghosh, Rumi; Lerman, Kristina

    2011-06-01

    A variety of metrics have been proposed to measure the relative importance of nodes in a network. One of these, alpha-centrality [P. Bonacich, Am. J. Sociol. 92, 1170 (1987), doi:10.1086/228631], measures the number of attenuated paths that exist between nodes. We introduce a normalized version of this metric and use it to study network structure, for example, to rank nodes and find community structure of the network. Specifically, we extend the modularity-maximization method for community detection to use this metric as the measure of node connectivity. Normalized alpha-centrality is a powerful tool for network analysis, since it contains a tunable parameter that sets the length scale of interactions. Studying how rankings and discovered communities change when this parameter is varied allows us to identify locally and globally important nodes and structures. We apply the proposed metric to several benchmark networks and show that it leads to better insights into network structure than alternative metrics.
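
    For readers unfamiliar with alpha-centrality, the sketch below computes Bonacich's c = (I - alpha*A^T)^(-1) e for a toy adjacency matrix and applies a simple unit-sum normalization. The normalization and the choice of the exogenous vector e here are illustrative assumptions; the paper's normalization may differ. Alpha must stay below the reciprocal of the spectral radius of A for the underlying series to converge.

```python
# Sketch of alpha-centrality, c = (I - alpha * A^T)^{-1} e, with an illustrative
# unit-sum normalization. The tunable parameter alpha sets the length scale of
# the attenuated paths being counted.
import numpy as np

def alpha_centrality(A, alpha, e=None):
    n = A.shape[0]
    e = np.ones(n) if e is None else e          # exogenous contribution vector
    c = np.linalg.solve(np.eye(n) - alpha * A.T, e)
    return c / c.sum()                           # illustrative normalization

# Toy directed network; alpha chosen safely below 1 / spectral radius of A.
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
spectral_radius = max(abs(np.linalg.eigvals(A)))
print(alpha_centrality(A, alpha=0.5 / spectral_radius))
```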

  14. A systematic approach to infer biological relevance and biases of gene network structures.

    PubMed

    Antonov, Alexey V; Tetko, Igor V; Mewes, Hans W

    2006-01-10

    The development of high-throughput technologies has generated the need for bioinformatics approaches to assess the biological relevance of gene networks. Although several tools have been proposed for analysing the enrichment of functional categories in a set of genes, none of them is suitable for evaluating the biological relevance of the gene network. We propose a procedure and develop a web-based resource (BIOREL) to estimate the functional bias (biological relevance) of any given genetic network by integrating different sources of biological information. The weights of the edges in the network may be either binary or continuous. These essential features make our web tool unique among many similar services. BIOREL provides standardized estimations of the network biases extracted from independent data. By the analyses of real data we demonstrate that the potential application of BIOREL ranges from various benchmarking purposes to systematic analysis of the network biology.

  15. Reconstruction of stochastic temporal networks through diffusive arrival times

    PubMed Central

    Li, Xun; Li, Xiang

    2017-01-01

    Temporal networks have opened a new dimension in defining and quantification of complex interacting systems. Our ability to identify and reproduce time-resolved interaction patterns is, however, limited by the restricted access to empirical individual-level data. Here we propose an inverse modelling method based on first-arrival observations of the diffusion process taking place on temporal networks. We describe an efficient coordinate-ascent implementation for inferring stochastic temporal networks that builds in particular but not exclusively on the null model assumption of mutually independent interaction sequences at the dyadic level. The results of benchmark tests applied on both synthesized and empirical network data sets confirm the validity of our algorithm, showing the feasibility of statistically accurate inference of temporal networks only from moderate-sized samples of diffusion cascades. Our approach provides an effective and flexible scheme for the temporally augmented inverse problems of network reconstruction and has potential in a broad variety of applications. PMID:28604687

  16. Ubiquitousness of link-density and link-pattern communities in real-world networks

    NASA Astrophysics Data System (ADS)

    Šubelj, L.; Bajec, M.

    2012-01-01

    Community structure appears to be an intrinsic property of many complex real-world networks. However, recent work shows that real-world networks reveal even more sophisticated modules than classical cohesive (link-density) communities. In particular, networks can also be naturally partitioned according to similar patterns of connectedness among the nodes, revealing link-pattern communities. We here propose a propagation based algorithm that can extract both link-density and link-pattern communities, without any prior knowledge of the true structure. The algorithm was first validated on different classes of synthetic benchmark networks with community structure, and also on random networks. We have further applied the algorithm to different social, information, technological and biological networks, where it indeed reveals meaningful (composites of) link-density and link-pattern communities. The results thus seem to imply that, similarly as link-density counterparts, link-pattern communities appear ubiquitous in nature and design.

  17. A BHR Composite Network-Based Visualization Method for Deformation Risk Level of Underground Space

    PubMed Central

    Zheng, Wei; Zhang, Xiaoya; Lu, Qi

    2015-01-01

    This study proposes a visualization processing method for the deformation risk level of underground space. The proposed method is based on a BP-Hopfield-RGB (BHR) composite network. Complex environmental factors are integrated in the BP neural network. Dynamic monitoring data are then automatically classified in the Hopfield network. The deformation risk level is combined with the RGB color space model and is displayed visually in real time, after which experiments are conducted with the use of an ultrasonic omnidirectional sensor device for structural deformation monitoring. The proposed method is also compared with some typical methods using a benchmark dataset. Results show that the BHR composite network visualizes the deformation monitoring process in real time and can dynamically indicate dangerous zones. PMID:26011618

  18. NETWORK DESIGN FACTORS FOR ASSESSING TEMPORAL VARIABILITY IN GROUND-WATER QUALITY

    EPA Science Inventory

    A 1.5-year benchmark data set was collected at biweekly frequency from two sites in shallow sand and gravel deposits in West Central Illinois. One site was near a hog-processing facility and the other represented uncontaminated conditions. Consistent sampling and analytical protoco...

  19. Establishing Benchmarks and Measuring Progress at "HSTW" Sites.

    ERIC Educational Resources Information Center

    Southern Regional Education Board (SREB), 2010

    2010-01-01

    Schools that join the "High Schools That Work (HSTW)" network are expected to show progress in changing school and classroom practices in ways that improve student achievement and readiness for postsecondary studies and careers. They are expected to focus on practices that have proven most effective in advancing student achievement.…

  20. The Long-Term Agro-Ecosystem Research (LTAR) Network: A New In-Situ Data Network For Agriculture

    NASA Astrophysics Data System (ADS)

    Walbridge, M. R.

    2014-12-01

    Agriculture in the 21st Century faces significant challenges due to increases in the demand for agricultural products from a global population expected to reach 9.5 billion by 2050, changes in land use that are reducing the area of arable land worldwide, and the uncertainties associated with increasing climate variability and change. There is broad agreement that meeting these challenges will require significant changes in agro-ecosystem management at the landscape scale. In 2012, the USDA/ARS announced the reorganization of 10 existing benchmark watersheds, experimental ranges, and research farms into a Long-Term Agro-ecosystem Research (LTAR) network. Earlier this year, the LTAR network expanded to 18 sites, including 3 led by land grant universities and/or private foundations. The central question addressed by the LTAR network is, "How do we sustain or enhance productivity, profitability, and ecosystem services in agro-ecosystems and agricultural landscapes"? All 18 LTAR sites possess rich historical databases that extend up to 100 years into the past. However as LTAR moves forward, the focus is on collecting a core set of common measurements over the next 30-50 years that can be used to draw inferences regarding the nature of agricultural sustainability and how it varies across regional and continental-scale gradients. As such, LTAR is part long-term research network and part observatory network. Rather than focusing on a single site, each LTAR has developed regional partnerships that allow it to address agro-ecosystem function in the large basins and eco-climatic zones that underpin regional food production systems. Partners include other long-term in-situ data networks (e.g., Ameriflux, CZO, GRACEnet, LTER, NEON). 'Next steps' include designing and implementing a cross-site experiment addressing LTAR's central question.

  1. LEGO: a novel method for gene set over-representation analysis by incorporating network-based gene weights

    PubMed Central

    Dong, Xinran; Hao, Yun; Wang, Xiao; Tian, Weidong

    2016-01-01

    Pathway or gene set over-representation analysis (ORA) has become a routine task in functional genomics studies. However, currently widely used ORA tools employ statistical methods such as Fisher’s exact test that reduce a pathway into a list of genes, ignoring the constitutive functional non-equivalent roles of genes and the complex gene-gene interactions. Here, we develop a novel method named LEGO (functional Link Enrichment of Gene Ontology or gene sets) that takes into consideration these two types of information by incorporating network-based gene weights in ORA analysis. In three benchmarks, LEGO achieves better performance than Fisher and three other network-based methods. To further evaluate LEGO’s usefulness, we compare LEGO with five gene expression-based and three pathway topology-based methods using a benchmark of 34 disease gene expression datasets compiled by a recent publication, and show that LEGO is among the top-ranked methods in terms of both sensitivity and prioritization for detecting target KEGG pathways. In addition, we develop a cluster-and-filter approach to reduce the redundancy among the enriched gene sets, making the results more interpretable to biologists. Finally, we apply LEGO to two lists of autism genes, and identify relevant gene sets to autism that could not be found by Fisher. PMID:26750448

  2. LEGO: a novel method for gene set over-representation analysis by incorporating network-based gene weights.

    PubMed

    Dong, Xinran; Hao, Yun; Wang, Xiao; Tian, Weidong

    2016-01-11

    Pathway or gene set over-representation analysis (ORA) has become a routine task in functional genomics studies. However, currently widely used ORA tools employ statistical methods such as Fisher's exact test that reduce a pathway into a list of genes, ignoring the constitutive functional non-equivalent roles of genes and the complex gene-gene interactions. Here, we develop a novel method named LEGO (functional Link Enrichment of Gene Ontology or gene sets) that takes into consideration these two types of information by incorporating network-based gene weights in ORA analysis. In three benchmarks, LEGO achieves better performance than Fisher and three other network-based methods. To further evaluate LEGO's usefulness, we compare LEGO with five gene expression-based and three pathway topology-based methods using a benchmark of 34 disease gene expression datasets compiled by a recent publication, and show that LEGO is among the top-ranked methods in terms of both sensitivity and prioritization for detecting target KEGG pathways. In addition, we develop a cluster-and-filter approach to reduce the redundancy among the enriched gene sets, making the results more interpretable to biologists. Finally, we apply LEGO to two lists of autism genes, and identify relevant gene sets to autism that could not be found by Fisher.

  3. Application-oriented programming model for sensor networks embedded in the human body.

    PubMed

    Barbosa, Talles M G de A; Sene, Iwens G; da Rocha, Adson F; Nascimento, Fransisco A de O; Carvalho, Hervaldo S; Camapum, Juliana F

    2006-01-01

    This work presents a new programming model for sensor networks embedded in the human body, based on the concept of multi-programming application-oriented software. The model was conceived with a top-down approach of four layers, and its main goal is to allow healthcare professionals to program and reconfigure the network locally or over the Internet. To evaluate this approach, a benchmark was run to assess the mean time spent programming a multi-functional sensor node used for the measurement and transmission of the electrocardiogram.

  4. pyNBS: A Python implementation for network-based stratification of tumor mutations.

    PubMed

    Huang, Justin K; Jia, Tongqiu; Carlin, Daniel E; Ideker, Trey

    2018-03-28

    We present pyNBS: a modularized Python 2.7 implementation of the network-based stratification (NBS) algorithm for stratifying tumor somatic mutation profiles into molecularly and clinically relevant subtypes. In addition to release of the software, we benchmark its key parameters and provide a compact cancer reference network that increases the significance of tumor stratification using the NBS algorithm. The structure of the code exposes key steps of the algorithm to foster further collaborative development. The package, along with examples and data, can be downloaded and installed from the URL http://www.github.com/huangger/pyNBS/. jkh013@ucsd.edu.
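
    The record above describes software implementing network-based stratification (NBS). The sketch below is a generic illustration of the NBS idea only (smooth binary mutation profiles over a gene network by random-walk propagation, then cluster the smoothed profiles with NMF); it does not reproduce the pyNBS API, and all matrix sizes, parameter values, and data are toy assumptions.

```python
# Generic sketch of network-based stratification (NOT the pyNBS API):
# 1) propagate binary tumor-by-gene mutation profiles over a gene network,
# 2) cluster the smoothed profiles with non-negative matrix factorization.
import numpy as np
from sklearn.decomposition import NMF

def propagate(F0, A, alpha=0.7, n_iter=50):
    """Random-walk smoothing: F <- alpha * F @ W + (1 - alpha) * F0,
    where W is the column-normalized adjacency matrix."""
    W = A / np.maximum(A.sum(axis=0, keepdims=True), 1.0)
    F = F0.copy()
    for _ in range(n_iter):
        F = alpha * F @ W + (1 - alpha) * F0
    return F

rng = np.random.default_rng(0)
A = (rng.random((200, 200)) < 0.02).astype(float)   # toy gene-gene network
A = np.maximum(A, A.T)                               # make it undirected
F0 = (rng.random((50, 200)) < 0.01).astype(float)    # 50 tumors x 200 genes (toy)

F = propagate(F0, A)
subtypes = NMF(n_components=4, init="nndsvda", max_iter=500,
               random_state=0).fit_transform(F)
print(subtypes.argmax(axis=1))   # hard subtype assignment per tumor
```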

  5. Default cascades in complex networks: topology and systemic risk.

    PubMed

    Roukny, Tarik; Bersini, Hugues; Pirotte, Hugues; Caldarelli, Guido; Battiston, Stefano

    2013-09-26

    The recent crisis has brought to the fore a crucial question that remains still open: what would be the optimal architecture of financial systems? We investigate the stability of several benchmark topologies in a simple default cascading dynamics in bank networks. We analyze the interplay of several crucial drivers, i.e., network topology, banks' capital ratios, market illiquidity, and random vs targeted shocks. We find that, in general, topology matters only--but substantially--when the market is illiquid. No single topology is always superior to others. In particular, scale-free networks can be both more robust and more fragile than homogeneous architectures. This finding has important policy implications. We also apply our methodology to a comprehensive dataset of an interbank market from 1999 to 2011.
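
    The cascading dynamics studied in this record can be illustrated with a simple threshold model: a bank defaults once losses from its defaulted borrowers exceed its capital buffer, and defaults are propagated until no further bank fails. The sketch below uses toy exposures and uniform capital, not the paper's calibration.

```python
# Minimal sketch of a threshold default cascade on an interbank network: an edge
# u -> v with weight w means v has lent w to u, so v loses w if u defaults.
import networkx as nx

def default_cascade(G, capital, initially_defaulted, loss_given_default=1.0):
    """Return the set of defaulted banks once the cascade settles."""
    defaulted = set(initially_defaulted)
    changed = True
    while changed:
        changed = False
        for bank in G.nodes:
            if bank in defaulted:
                continue
            losses = sum(G[u][bank].get("weight", 1.0)
                         for u in G.predecessors(bank) if u in defaulted)
            if losses * loss_given_default >= capital[bank]:
                defaulted.add(bank)
                changed = True
    return defaulted

G = nx.gnp_random_graph(20, 0.15, directed=True, seed=1)   # toy interbank network
nx.set_edge_attributes(G, 1.0, "weight")                   # unit exposures (toy)
capital = {b: 2.0 for b in G.nodes}                        # uniform buffers (toy)
print(sorted(default_cascade(G, capital, initially_defaulted={0})))
```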

  6. Issues in ATM Support of High-Performance, Geographically Distributed Computing

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D.G

    1995-01-01

    This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.

  7. Algorithms for tensor network renormalization

    NASA Astrophysics Data System (ADS)

    Evenbly, G.

    2017-01-01

    We discuss in detail algorithms for implementing tensor network renormalization (TNR) for the study of classical statistical and quantum many-body systems. First, we recall established techniques for how the partition function of a 2 D classical many-body system or the Euclidean path integral of a 1 D quantum system can be represented as a network of tensors, before describing how TNR can be implemented to efficiently contract the network via a sequence of coarse-graining transformations. The efficacy of the TNR approach is then benchmarked for the 2 D classical statistical and 1 D quantum Ising models; in particular the ability of TNR to maintain a high level of accuracy over sustained coarse-graining transformations, even at a critical point, is demonstrated.

  8. Performance effects of irregular communications patterns on massively parallel multiprocessors

    NASA Technical Reports Server (NTRS)

    Saltz, Joel; Petiton, Serge; Berryman, Harry; Rifkin, Adam

    1991-01-01

    A detailed study of the performance effects of irregular communications patterns on the CM-2 was conducted. The communications capabilities of the CM-2 were characterized under a variety of controlled conditions. In the process of carrying out the performance evaluation, extensive use was made of a parameterized synthetic mesh. In addition, timings with unstructured meshes generated for aerodynamic codes and a set of sparse matrices with banded patterns on non-zeroes were performed. This benchmarking suite stresses the communications capabilities of the CM-2 in a range of different ways. Benchmark results demonstrate that it is possible to make effective use of much of the massive concurrency available in the communications network.

  9. Distributed and decentralized state estimation in gas networks as distributed parameter systems.

    PubMed

    Ahmadian Behrooz, Hesam; Boozarjomehry, R Bozorgmehry

    2015-09-01

    In this paper, a framework for distributed and decentralized state estimation in high-pressure and long-distance gas transmission networks (GTNs) is proposed. The non-isothermal model of the plant including mass, momentum and energy balance equations are used to simulate the dynamic behavior. Due to several disadvantages of implementing a centralized Kalman filter for large-scale systems, the continuous/discrete form of extended Kalman filter for distributed and decentralized estimation (DDE) has been extended for these systems. Accordingly, the global model is decomposed into several subsystems, called local models. Some heuristic rules are suggested for system decomposition in gas pipeline networks. In the construction of local models, due to the existence of common states and interconnections among the subsystems, the assimilation and prediction steps of the Kalman filter are modified to take the overlapping and external states into account. However, dynamic Riccati equation for each subsystem is constructed based on the local model, which introduces a maximum error of 5% in the estimated standard deviation of the states in the benchmarks studied in this paper. The performance of the proposed methodology has been shown based on the comparison of its accuracy and computational demands against their counterparts in centralized Kalman filter for two viable benchmarks. In a real life network, it is shown that while the accuracy is not significantly decreased, the real-time factor of the state estimation is increased by a factor of 10. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Dengue forecasting in São Paulo city with generalized additive models, artificial neural networks and seasonal autoregressive integrated moving average models.

    PubMed

    Baquero, Oswaldo Santos; Santana, Lidia Maria Reis; Chiaravalloti-Neto, Francisco

    2018-01-01

    Globally, the number of dengue cases has been on the increase since 1990 and this trend has also been found in Brazil and its most populated city, São Paulo. Surveillance systems based on predictions allow for timely decision-making processes, and in turn, timely and efficient interventions to reduce the burden of the disease. We conducted a comparative study of dengue predictions in São Paulo city to test the performance of trained seasonal autoregressive integrated moving average models, generalized additive models and artificial neural networks. We also used a naïve model as a benchmark. A generalized additive model with lags of the number of cases and meteorological variables had the best performance, predicted epidemics of unprecedented magnitude and its performance was 3.16 times higher than the benchmark and 1.47 times higher than the next best performing model. The predictive models captured the seasonal patterns but differed in their capacity to anticipate large epidemics, and all outperformed the benchmark. In addition to being able to predict epidemics of unprecedented magnitude, the best model had computational advantages, since its training and tuning was straightforward and required seconds or at most a few minutes. These are desired characteristics to provide timely results for decision makers. However, it should be noted that predictions are made just one month ahead and this is a limitation that future studies could try to reduce.
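
    A naïve benchmark of the kind used as a reference in such forecast comparisons can be as simple as "next month equals the value observed twelve months earlier", with candidate models reported relative to it. The sketch below is illustrative only: the simulated series, the stand-in model, and the MAE-ratio skill metric are assumptions and may differ from the study's own data and metric.

```python
# Sketch of a seasonal-naive benchmark and a relative-performance ratio.
import numpy as np

def seasonal_naive(series, season=12):
    """One-step-ahead seasonal-naive forecasts for t >= season."""
    return series[:-season], series[season:]          # (forecast, actual) pairs

rng = np.random.default_rng(0)
t = np.arange(120)                                     # 10 years of monthly counts (toy)
cases = 100 + 50 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 10, size=t.size)

naive_pred, actual = seasonal_naive(cases)
model_pred = actual + rng.normal(0, 5, size=actual.size)   # stand-in for a fitted model

mae_naive = np.mean(np.abs(actual - naive_pred))
mae_model = np.mean(np.abs(actual - model_pred))
print(f"relative performance vs. naive benchmark: {mae_naive / mae_model:.2f}x")
```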

  11. Do Medicare Advantage Plans Minimize Costs? Investigating the Relationship Between Benchmarks, Costs, and Rebates.

    PubMed

    Zuckerman, Stephen; Skopec, Laura; Guterman, Stuart

    2017-12-01

    Medicare Advantage (MA), the program that allows people to receive their Medicare benefits through private health plans, uses a benchmark-and-bidding system to induce plans to provide benefits at lower costs. However, prior research suggests medical costs, profits, and other plan costs are not as low under this system as they might otherwise be. To examine how well the current system encourages MA plans to bid their lowest cost by examining the relationship between costs and bonuses (rebates) and the benchmarks Medicare uses in determining plan payments. Regression analysis using 2015 data for HMO and local PPO plans. Costs and rebates are higher for MA plans in areas with higher benchmarks, and plan costs vary less than benchmarks do. A one-dollar increase in benchmarks is associated with 32-cent-higher plan costs and a 52-cent-higher rebate, even when controlling for market and plan factors that can affect costs. This suggests the current benchmark-and-bidding system allows plans to bid higher than local input prices and other market conditions would seem to warrant. To incentivize MA plans to maximize efficiency and minimize costs, Medicare could change the way benchmarks are set or used.
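
    The regression relationship summarized above (how many cents plan costs rise per extra benchmark dollar) can be sketched with a simple ordinary least squares fit. The data below are simulated with a 0.32 slope purely to mirror the reported magnitude; the study additionally controls for market and plan factors that are not shown here.

```python
# Hedged sketch: regress plan cost on county benchmark and read off the slope.
import numpy as np

rng = np.random.default_rng(42)
benchmark = rng.normal(850, 60, size=500)                    # $ per member-month (toy)
cost = 500 + 0.32 * benchmark + rng.normal(0, 20, size=500)  # simulated plan costs

X = np.column_stack([np.ones_like(benchmark), benchmark])    # intercept + benchmark
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
print(f"estimated cost increase per benchmark dollar: {beta[1]:.2f}")
```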

  12. Synapse-Centric Mapping of Cortical Models to the SpiNNaker Neuromorphic Architecture

    PubMed Central

    Knight, James C.; Furber, Steve B.

    2016-01-01

    While the adult human brain has approximately 8.8 × 10^10 neurons, this number is dwarfed by its 1 × 10^15 synapses. From the point of view of neuromorphic engineering and neural simulation in general this makes the simulation of these synapses a particularly complex problem. SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Current solutions for simulating spiking neural networks on SpiNNaker are heavily inspired by work on distributed high-performance computing. However, while SpiNNaker shares many characteristics with such distributed systems, its component nodes have much more limited resources and, as the system lacks global synchronization, the computation performed on each node must complete within a fixed time step. We first analyze the performance of the current SpiNNaker neural simulation software and identify several problems that occur when it is used to simulate networks of the type often used to model the cortex which contain large numbers of sparsely connected synapses. We then present a new, more flexible approach for mapping the simulation of such networks to SpiNNaker which solves many of these problems. Finally we analyze the performance of our new approach using both benchmarks, designed to represent cortical connectivity, and larger, functional cortical models. In a benchmark network where neurons receive input from 8000 STDP synapses, our new approach allows 4× more neurons to be simulated on each SpiNNaker core than has been previously possible. We also demonstrate that the largest plastic neural network previously simulated on neuromorphic hardware can be run in real time using our new approach: double the speed that was previously achieved. Additionally this network contains two types of plastic synapse which previously had to be trained separately but, using our new approach, can be trained simultaneously. PMID:27683540

  13. Toxicological benchmarks for screening potential contaminants of concern for effects on soil and litter invertebrates and heterotrophic process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Will, M.E.; Suter, G.W. II

    1994-09-01

    One of the initial stages in ecological risk assessments for hazardous waste sites is the screening of contaminants to determine which of them are worthy of further consideration as "contaminants of potential concern." This process is termed "contaminant screening." It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to soil- and litter-dwelling invertebrates, including earthworms, other micro- and macroinvertebrates, or heterotrophic bacteria and fungi. This report presents a standard method for deriving benchmarks for this purpose, sets of data concerning effects of chemicals in soil on invertebrates and soil microbial processes, and benchmarks for chemicals potentially associated with United States Department of Energy sites. In addition, literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the benchmarks and the background concentration for the soil type should be considered contaminants of potential concern.

  14. ViSAPy: a Python tool for biophysics-based generation of virtual spiking activity for evaluation of spike-sorting algorithms.

    PubMed

    Hagen, Espen; Ness, Torbjørn V; Khosrowshahi, Amir; Sørensen, Christina; Fyhn, Marianne; Hafting, Torkel; Franke, Felix; Einevoll, Gaute T

    2015-04-30

    New, silicon-based multielectrodes comprising hundreds or more electrode contacts offer the possibility to record spike trains from thousands of neurons simultaneously. This potential cannot be realized unless accurate, reliable automated methods for spike sorting are developed, in turn requiring benchmarking data sets with known ground-truth spike times. We here present a general simulation tool for computing benchmarking data for evaluation of spike-sorting algorithms entitled ViSAPy (Virtual Spiking Activity in Python). The tool is based on a well-established biophysical forward-modeling scheme and is implemented as a Python package built on top of the neuronal simulator NEURON and the Python tool LFPy. ViSAPy allows for arbitrary combinations of multicompartmental neuron models and geometries of recording multielectrodes. Three example benchmarking data sets are generated, i.e., tetrode and polytrode data mimicking in vivo cortical recordings and microelectrode array (MEA) recordings of in vitro activity in salamander retinas. The synthesized example benchmarking data mimics salient features of typical experimental recordings, for example, spike waveforms depending on interspike interval. ViSAPy goes beyond existing methods as it includes biologically realistic model noise, synaptic activation by recurrent spiking networks, finite-sized electrode contacts, and allows for inhomogeneous electrical conductivities. ViSAPy is optimized to allow for generation of long time series of benchmarking data, spanning minutes of biological time, by parallel execution on multi-core computers. ViSAPy is an open-ended tool as it can be generalized to produce benchmarking data for arbitrary recording-electrode geometries and with various levels of complexity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  15. Making governance work in the health care sector: evidence from a 'natural experiment' in Italy.

    PubMed

    Nuti, Sabina; Vola, Federico; Bonini, Anna; Vainieri, Milena

    2016-01-01

    The Italian Health care System provides universal coverage for comprehensive health services and is mainly financed through general taxation. Since the early 1990s, a strong decentralization policy has been adopted in Italy and the state has gradually ceded its jurisdiction to regional governments, of which there are twenty. These regions now have political, administrative, fiscal and organizational responsibility for the provision of health care. This paper examines the different governance models that the regions have adopted and investigates the performance evaluation systems (PESs) associated with them, focusing on the experience of a network of ten regional governments that share the same PES. The article draws on the wide range of governance models and PESs in order to design a natural experiment. Through an analysis of 14 indicators measured in 2007 and in 2012 for all the regions, the study examines how different performance evaluation models are associated with different health care performances and whether the network-shared PES has made any difference to the results achieved by the regions involved. The initial results support the idea that systematic benchmarking and public disclosure of data are powerful tools to guarantee the balanced and sustained improvement of the health care systems, but only if they are integrated with the regional governance mechanisms.

  16. Deep Constrained Siamese Hash Coding Network and Load-Balanced Locality-Sensitive Hashing for Near Duplicate Image Detection.

    PubMed

    Hu, Weiming; Fan, Yabo; Xing, Junliang; Sun, Liang; Cai, Zhaoquan; Maybank, Stephen

    2018-09-01

    We construct a new efficient near duplicate image detection method using a hierarchical hash code learning neural network and load-balanced locality-sensitive hashing (LSH) indexing. We propose a deep constrained siamese hash coding neural network combined with deep feature learning. Our neural network is able to extract effective features for near duplicate image detection. The extracted features are used to construct a LSH-based index. We propose a load-balanced LSH method to produce load-balanced buckets in the hashing process. The load-balanced LSH significantly reduces the query time. Based on the proposed load-balanced LSH, we design an effective and feasible algorithm for near duplicate image detection. Extensive experiments on three benchmark data sets demonstrate the effectiveness of our deep siamese hash encoding network and load-balanced LSH.
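
    The indexing step described above can be illustrated with plain random-hyperplane locality-sensitive hashing: vectors whose projections share the same sign pattern land in the same bucket, and only bucket members are compared as near-duplicate candidates. The sketch below is an assumption-level illustration; it does not reproduce the paper's learned hash codes or its load-balancing refinement, and the feature vectors are toy stand-ins.

```python
# Sketch of random-hyperplane LSH bucketing for near-duplicate candidate retrieval.
import numpy as np
from collections import defaultdict

class RandomProjectionLSH:
    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, dim))   # random hyperplanes
        self.buckets = defaultdict(list)

    def _key(self, x):
        return tuple((self.planes @ x > 0).astype(int))  # sign pattern as bucket key

    def add(self, idx, x):
        self.buckets[self._key(x)].append(idx)

    def query(self, x):
        return self.buckets.get(self._key(x), [])

rng = np.random.default_rng(1)
feats = rng.normal(size=(1000, 128))                 # stand-ins for image features
feats[1] = feats[0] + 0.01 * rng.normal(size=128)    # a near duplicate of image 0

index = RandomProjectionLSH(dim=128)
for i, f in enumerate(feats):
    index.add(i, f)
print(index.query(feats[0]))   # candidate near-duplicates of image 0; 1 will usually collide
```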

  17. Data from selected U.S. Geological Survey National Stream Water-Quality Networks (WQN)

    USGS Publications Warehouse

    Alexander, Richard B.; Slack, J.R.; Ludtke, A.S.; Fitzgerald, K.K.; Schertz, T.L.; Briel, L.I.; Buttleman, K.P.

    1996-01-01

    This CD-ROM set contains data from two USGS national stream water-quality networks, the Hydrologic Benchmark Network (HBN) and the National Stream Quality Accounting Network (NASQAN), operated during the past 30 years. These networks were established to provide national and regional descriptions of stream water-quality conditions and trends, based on uniform monitoring of selected watersheds throughout the United States, and to improve our understanding of the effects of the natural environment and human activities on water quality. The HBN, consisting of 63 relatively small, minimally disturbed watersheds, provides data for investigating naturally induced changes in streamflow and water quality and the effects of airborne substances on water quality. NASQAN, consisting of 618 larger, more culturally influenced watersheds, provides information for tracking water-quality conditions in major U.S. rivers and streams.

  18. A critical care network pressure ulcer prevention quality improvement project.

    PubMed

    McBride, Joanna; Richardson, Annette

    2015-03-30

    Pressure ulcer prevention is an important safety issue, often underrated and an extremely painful event harming patients. Critically ill patients are one of the highest risk groups in hospital. The impacts of pressure ulcers are wide-ranging: they can result in increased critical care and hospital length of stay, significant interference with functional recovery and rehabilitation, and increased cost. This quality improvement project had four aims: (1) to establish a critical care network pressure ulcer prevention group; (2) to establish baseline pressure ulcer prevention practices; (3) to measure, compare and monitor pressure ulcer prevalence; (4) to develop network pressure ulcer prevention standards. The approach used to improve quality included strong critical care nursing leadership to develop a cross-organisational pressure ulcer prevention group and a benchmarking exercise of current practices across a well-established critical care Network in the North of England. The National Safety Thermometer tool was used to measure pressure ulcer prevalence in 23 critical care units, and best available evidence, local consensus and another Critical Care Network's bundle of interventions were used to develop a local pressure ulcer prevention standards document. The aims of the quality improvement project were achieved. This project was driven by successful leadership and had an agreed common goal. The National Safety Thermometer tool was an innovative approach to measure and compare pressure ulcer prevalence rates at a regional level. A limitation was the exclusion of moisture lesions. The project showed excellent engagement and collaborative working from many critical care nurses within the North of England Critical Care Network in the quest to prevent pressure ulcers. A concise set of Network standards was developed for use in conjunction with local guidelines to enhance pressure ulcer prevention. © 2015 British Association of Critical Care Nurses.

  19. Vulnerability and Gambling Addiction: Psychosocial Benchmarks and Avenues for Intervention

    ERIC Educational Resources Information Center

    Suissa, Amnon Jacob

    2011-01-01

    Defined by researchers as "a silent epidemic", the gambling phenomenon is a social problem that has a negative impact on individuals, families and communities. Among these effects, there is exasperating evidence of compromised community networks, a deterioration of family and social ties, psychiatric co-morbidity, suicides and more recently,…

  20. Bibliographic Networks and Microcomputer Applications for Aerospace and Defense Scientific and Technical Information.

    DTIC Science & Technology

    1986-10-01

    The package had been modified and enhanced by a commercial vendor who was marketing the package. Unforeseen events halted pursuit of this approach and...them against the criteria listed in the test plan. Benchmarking took over 10 months to complete. The UNICORN System from SIRSI Corporation and BRS

  1. Regimes of Performance: Practices of the Normalised Self in the Neoliberal University

    ERIC Educational Resources Information Center

    Morrissey, John

    2015-01-01

    Universities today inescapably find themselves part of nationally and globally competitive networks that appear firmly inflected by neoliberal concerns of rankings, benchmarking and productivity. This, of course, has in turn led to progressively anticipated and regulated forms of academic subjectivity that many fear are overly econo-centric in…

  2. A cooperative game framework for detecting overlapping communities in social networks

    NASA Astrophysics Data System (ADS)

    Jonnalagadda, Annapurna; Kuppusamy, Lakshmanan

    2018-02-01

    Community detection in social networks is a challenging and complex task, which received much attention from researchers of multiple domains in recent years. The evolution of communities in social networks happens merely due to the self-interest of the nodes. The interesting feature of community structure in social networks is the multi-membership of the nodes, resulting in overlapping communities. Treating the nodes of the social network as self-interested players, the dynamics of community formation can be captured in the form of a game. In this paper, we propose a greedy algorithm, namely, Weighted Graph Community Game (WGCG), in order to model the interactions among the self-interested nodes of the social network. The proposed algorithm employs the Shapley value mechanism to discover the inherent communities of the underlying social network. The experimental evaluation on the real-world and synthetic benchmark networks demonstrates that the performance of the proposed algorithm is superior to the state-of-the-art overlapping community detection algorithms.

  3. HEp-2 cell image classification method based on very deep convolutional networks with small datasets

    NASA Astrophysics Data System (ADS)

    Lu, Mengchi; Gao, Long; Guo, Xifeng; Liu, Qiang; Yin, Jianping

    2017-07-01

    Classification of Human Epithelial-2 (HEp-2) cell image staining patterns has been widely used to identify autoimmune diseases via the anti-Nuclear antibody (ANA) test in the Indirect Immunofluorescence (IIF) protocol. Because the manual test is time-consuming, subjective and labor-intensive, image-based Computer Aided Diagnosis (CAD) systems for HEp-2 cell classification are being developed. However, recently proposed methods rely mostly on manual feature extraction and achieve low accuracy. In addition, the available benchmark datasets are small, which is not well suited to deep learning methods; this directly affects classification accuracy even after data augmentation. To address these issues, this paper presents a high-accuracy automatic HEp-2 cell classification method for small datasets that utilizes very deep convolutional networks (VGGNet). Specifically, the proposed method consists of three main phases: image preprocessing, feature extraction and classification. Moreover, an improved VGGNet is presented to address the challenges of small-scale datasets. Experimental results on two benchmark datasets demonstrate that the proposed method achieves superior accuracy compared with existing methods.

  4. AlloRep: A Repository of Sequence, Structural and Mutagenesis Data for the LacI/GalR Transcription Regulators.

    PubMed

    Sousa, Filipa L; Parente, Daniel J; Shis, David L; Hessman, Jacob A; Chazelle, Allen; Bennett, Matthew R; Teichmann, Sarah A; Swint-Kruse, Liskin

    2016-02-22

    Protein families evolve functional variation by accumulating point mutations at functionally important amino acid positions. Homologs in the LacI/GalR family of transcription regulators have evolved to bind diverse DNA sequences and allosteric regulatory molecules. In addition to playing key roles in bacterial metabolism, these proteins have been widely used as a model family for benchmarking structural and functional prediction algorithms. We have collected manually curated sequence alignments for >3000 sequences, in vivo phenotypic and biochemical data for >5750 LacI/GalR mutational variants, and noncovalent residue contact networks for 65 LacI/GalR homolog structures. Using this rich data resource, we compared the noncovalent residue contact networks of the LacI/GalR subfamilies to design and experimentally validate an allosteric mutant of a synthetic LacI/GalR repressor for use in biotechnology. The AlloRep database (freely available at www.AlloRep.org) is a key resource for future evolutionary studies of LacI/GalR homologs and for benchmarking computational predictions of functional change. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Pruning artificial neural networks using neural complexity measures.

    PubMed

    Jorgensen, Thomas D; Haynes, Barry P; Norlund, Charlotte C F

    2008-10-01

    This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the neural network. This measure is used to determine the connections that should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique proposed here helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network that they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network, due to the reduction of dimensionality of the network.
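
    For reference, the baseline the paper compares against, Magnitude Based Pruning, simply removes the connections with the smallest absolute weights until a target sparsity is reached. The sketch below shows that baseline only (not the complexity-based criterion proposed in the paper), on a random stand-in weight matrix.

```python
# Sketch of Magnitude Based Pruning: zero out the smallest-|w| connections.
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the fraction `sparsity` of entries with the smallest |w|."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))                        # one layer's weights (toy)
W_pruned = magnitude_prune(W, sparsity=0.75)
print(f"remaining connections: {np.count_nonzero(W_pruned)} of {W.size}")
```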

  6. Financial time series prediction using spiking neural networks.

    PubMed

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional", rate-encoded neural networks (a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network) and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-step-ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments.

  7. Extension of a System Level Tool for Component Level Analysis

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul

    2002-01-01

    This paper presents an extension of a numerical algorithm for network flow analysis code to perform multi-dimensional flow calculation. The one dimensional momentum equation in network flow analysis code has been extended to include momentum transport due to shear stress and transverse component of velocity. Both laminar and turbulent flows are considered. Turbulence is represented by Prandtl's mixing length hypothesis. Three classical examples (Poiseuille flow, Couette flow and shear driven flow in a rectangular cavity) are presented as benchmark for the verification of the numerical scheme.

  8. Extension of a System Level Tool for Component Level Analysis

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul; McConnaughey, Paul K. (Technical Monitor)

    2001-01-01

    This paper presents an extension of a numerical algorithm for network flow analysis code to perform multi-dimensional flow calculation. The one-dimensional momentum equation in the network flow analysis code has been extended to include momentum transport due to shear stress and the transverse component of velocity. Both laminar and turbulent flows are considered. Turbulence is represented by Prandtl's mixing length hypothesis. Three classical examples (Poiseuille flow, Couette flow, and shear-driven flow in a rectangular cavity) are presented as benchmarks for verification of the numerical scheme.

  9. Renormalization group contraction of tensor networks in three dimensions

    NASA Astrophysics Data System (ADS)

    García-Sáez, Artur; Latorre, José I.

    2013-02-01

    We present a new strategy for contracting tensor networks in arbitrary geometries. This method is designed to follow the renormalization group philosophy as strictly as possible: tensors are first contracted exactly, and a controlled truncation of the resulting tensor is then performed. We benchmark this approximation procedure in two dimensions against an exact contraction. We then apply the same idea to a three-dimensional quantum system. The underlying rationale for emphasizing the exact coarse-graining renormalization group step prior to truncation is related to monogamy of entanglement.

  10. On the applicability of STDP-based learning mechanisms to spiking neuron network models

    NASA Astrophysics Data System (ADS)

    Sboev, A.; Vlasov, D.; Serenko, A.; Rybka, R.; Moloshnikov, I.

    2016-11-01

    Ways of creating a practically effective learning method for spiking neuron networks, one that is suitable for implementation in neuromorphic hardware while being based on biologically plausible plasticity rules, namely STDP, are discussed. The influence of the amount of correlation between input and output spike trains on learnability under different STDP rules is evaluated. The usability of alternative combined learning schemes, involving both artificial and spiking neuron models, is demonstrated on the Iris benchmark task and on the practical task of gender recognition.
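    The abstract does not state the exact plasticity rule; as a reference point, the canonical pair-based STDP weight update on which such rules are built is

        \Delta w \;=\;
        \begin{cases}
        A_{+}\, e^{-\Delta t/\tau_{+}}, & \Delta t > 0 \quad (\text{pre before post}),\\
        -A_{-}\, e^{\,\Delta t/\tau_{-}}, & \Delta t < 0 \quad (\text{post before pre}),
        \end{cases}

    with \Delta t = t_{\mathrm{post}} - t_{\mathrm{pre}}, and with A± and τ± setting the magnitude and time scale of potentiation and depression; the specific rules evaluated in the paper may differ in detail.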

  11. Assessing Low-Intensity Relationships in Complex Networks

    PubMed Central

    Spitz, Andreas; Gimmler, Anna; Stoeck, Thorsten; Zweig, Katharina Anna; Horvát, Emőke-Ágnes

    2016-01-01

    Many large network data sets are noisy and contain links representing low-intensity relationships that are difficult to differentiate from random interactions. This is especially relevant for high-throughput data from systems biology and large-scale ecological data, but also for Web 2.0 data on human interactions. In these networks with missing and spurious links, it is possible to refine the data based on the principle of structural similarity, which assesses the shared neighborhood of two nodes. By using similarity measures to globally rank all possible links and choosing the top-ranked pairs, true links can be validated, missing links inferred, and spurious observations removed. While many similarity measures have been proposed to this end, there is no general consensus on which one to use. In this article, we first contribute a set of benchmarks for complex networks from three different settings (e-commerce, systems biology, and social networks) and thus enable a quantitative performance analysis of classic node similarity measures. Based on this, we then propose a new methodology for link assessment, called z*, which assesses the statistical significance of the number of common neighbors of two nodes by comparison with the expected value in a suitably chosen random graph model, and which is a consistently top-performing algorithm across all benchmarks. In addition to a global ranking of links, we also use this method to identify the most similar neighbors of each single node in a local ranking, thereby showing the versatility of the method in two distinct scenarios and augmenting its applicability. Finally, we perform an exploratory analysis on an oceanographic plankton data set and find that the distribution of microbes follows similar biogeographic rules as those of macroorganisms, a result that rejects the global dispersal hypothesis for microbes. PMID:27096435
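    The z* measure itself is defined in the paper and is not reproduced here; the sketch below only illustrates the general idea of scoring a node pair by how strongly its common-neighbor count deviates from a random-graph expectation. The hypergeometric null model and the toy graph are assumptions of this sketch, not the null model used for z*.

        import math
        from itertools import combinations

        def common_neighbor_zscore(adj, u, v):
            # z-score of the observed number of common neighbors of u and v
            # against a simple hypergeometric null (illustrative stand-in only).
            n = len(adj) - 2                               # candidate common neighbors
            ku, kv = len(adj[u] - {v}), len(adj[v] - {u})
            observed = len((adj[u] & adj[v]) - {u, v})
            mean = ku * kv / n
            var = mean * (1 - ku / n) * (n - kv) / (n - 1)
            return (observed - mean) / math.sqrt(var) if var > 0 else 0.0

        # Toy graph: rank all currently unlinked pairs by the score.
        adj = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2}, 4: {1, 2, 5}, 5: {4, 6}, 6: {5}}
        pairs = [(u, v) for u, v in combinations(adj, 2) if v not in adj[u]]
        print(sorted(pairs, key=lambda p: -common_neighbor_zscore(adj, *p))[:3])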

  12. Beyond the hype: deep neural networks outperform established methods using a ChEMBL bioactivity benchmark set.

    PubMed

    Lenselink, Eelke B; Ten Dijke, Niels; Bongers, Brandon; Papadatos, George; van Vlijmen, Herman W T; Kowalczyk, Wojtek; IJzerman, Adriaan P; van Westen, Gerard J P

    2017-08-14

    The increase of publicly available bioactivity data in recent years has fueled and catalyzed research in chemogenomics, data mining, and modeling approaches. As a direct result, over the past few years a multitude of different methods have been reported and evaluated, such as target fishing, nearest neighbor similarity-based methods, and Quantitative Structure Activity Relationship (QSAR)-based protocols. However, such studies are typically conducted on different datasets, using different validation strategies and different metrics. In this study, different methods were compared using one single standardized dataset obtained from ChEMBL, which is made available to the public, using standardized metrics (BEDROC and Matthews Correlation Coefficient). Specifically, the performance of Naïve Bayes, Random Forests, Support Vector Machines, Logistic Regression, and Deep Neural Networks was assessed using QSAR and proteochemometric (PCM) methods. All methods were validated using both a random split validation and a temporal validation, with the latter being a more realistic benchmark of expected prospective execution. Deep Neural Networks were the top-performing classifiers, highlighting their added value over other, more conventional methods. Moreover, the best method ('DNN_PCM') performed significantly better than the rest, at almost one standard deviation above the mean performance. Furthermore, multi-task and PCM implementations were shown to improve performance over single-task Deep Neural Networks. Conversely, target prediction performed almost two standard deviations below the mean performance. Random Forests, Support Vector Machines, and Logistic Regression performed around the mean. Finally, using an ensemble of DNNs, alongside additional tuning, enhanced the relative performance by another 27% (compared with the unoptimized 'DNN_PCM'). Here, a standardized set to test and evaluate different machine learning algorithms in the context of multi-task learning is offered by providing the data and the protocols.
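    Of the two standardized metrics, the Matthews Correlation Coefficient is computed from the binary confusion matrix as

        \mathrm{MCC} \;=\; \frac{TP\cdot TN \;-\; FP\cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}},

    which ranges from -1 to +1, with 0 corresponding to random prediction; BEDROC additionally rewards early recognition of actives and is not reproduced here.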

  13. Assessing Low-Intensity Relationships in Complex Networks.

    PubMed

    Spitz, Andreas; Gimmler, Anna; Stoeck, Thorsten; Zweig, Katharina Anna; Horvát, Emőke-Ágnes

    2016-01-01

    Many large network data sets are noisy and contain links representing low-intensity relationships that are difficult to differentiate from random interactions. This is especially relevant for high-throughput data from systems biology and large-scale ecological data, but also for Web 2.0 data on human interactions. In these networks with missing and spurious links, it is possible to refine the data based on the principle of structural similarity, which assesses the shared neighborhood of two nodes. By using similarity measures to globally rank all possible links and choosing the top-ranked pairs, true links can be validated, missing links inferred, and spurious observations removed. While many similarity measures have been proposed to this end, there is no general consensus on which one to use. In this article, we first contribute a set of benchmarks for complex networks from three different settings (e-commerce, systems biology, and social networks) and thus enable a quantitative performance analysis of classic node similarity measures. Based on this, we then propose a new methodology for link assessment, called z*, which assesses the statistical significance of the number of common neighbors of two nodes by comparison with the expected value in a suitably chosen random graph model, and which is a consistently top-performing algorithm across all benchmarks. In addition to a global ranking of links, we also use this method to identify the most similar neighbors of each single node in a local ranking, thereby showing the versatility of the method in two distinct scenarios and augmenting its applicability. Finally, we perform an exploratory analysis on an oceanographic plankton data set and find that the distribution of microbes follows similar biogeographic rules as those of macroorganisms, a result that rejects the global dispersal hypothesis for microbes.

  14. Reconstruction of network topology using status-time-series data

    NASA Astrophysics Data System (ADS)

    Pandey, Pradumn Kumar; Badarla, Venkataramana

    2018-01-01

    Uncovering the heterogeneous connection pattern of a networked system from the available status-time-series (STS) data of a dynamical process on the network is of great interest in network science and is known as a reverse engineering problem. Dynamical processes on a network are affected by the structure of the network. The dependency between the diffusion dynamics and the structure of the network can be utilized to retrieve the connection pattern from the diffusion data. Information about the network structure can, in turn, help to devise control of the dynamics on the network. In this paper, we consider the problem of network reconstruction from the available status-time-series (STS) data using matrix analysis. The proposed method of network reconstruction from the STS data is tested successfully under susceptible-infected-susceptible (SIS) diffusion dynamics on real-world and computer-generated benchmark networks. The high accuracy and efficiency of the proposed reconstruction procedure define the novelty of the method, and the method outperforms a compressed sensing theory (CST) based approach to network reconstruction using STS data. Further, the same reconstruction procedure is applied to weighted networks, where the ordering of the edges is identified with high accuracy.
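    The matrix-analysis reconstruction procedure itself is not reproduced here. The sketch below only illustrates the underlying intuition that SIS dynamics leave a structural footprint in status-time-series data, using a simple co-activation score; the scoring rule, the random status matrix, and the percentile threshold are all assumptions of this sketch rather than the authors' method.

        import numpy as np

        def score_edges(status):
            # `status` is a (T, N) 0/1 array (1 = infected). For each ordered pair
            # (i, j), count how often j was infected right before i became newly
            # infected, then symmetrize. Illustrative heuristic only.
            newly_infected = (status[1:] == 1) & (status[:-1] == 0)    # (T-1, N)
            prev_infected = status[:-1].astype(float)                  # (T-1, N)
            scores = newly_infected.T.astype(float) @ prev_infected
            np.fill_diagonal(scores, 0.0)
            return scores + scores.T

        # Toy usage with a random status matrix standing in for real STS data.
        rng = np.random.default_rng(1)
        status = (rng.random((200, 6)) < 0.3).astype(int)
        scores = score_edges(status)
        print(np.argwhere(scores > np.percentile(scores, 90)))         # top-scoring pairs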

  15. Finding undetected protein associations in cell signaling by belief propagation.

    PubMed

    Bailly-Bechet, M; Borgs, C; Braunstein, A; Chayes, J; Dagkessamanskaia, A; François, J-M; Zecchina, R

    2011-01-11

    External information propagates in the cell mainly through signaling cascades and transcriptional activation, allowing the cell to react to a wide spectrum of environmental changes. High-throughput experiments identify numerous molecular components of such cascades that may, however, interact through unknown partners. Some of them may be detected using data coming from the integration of a protein-protein interaction network and mRNA expression profiles. This inference problem can be mapped onto the problem of finding appropriate optimal connected subgraphs of a network defined by these datasets. The optimization procedure turns out to be computationally intractable in general. Here we present a new distributed algorithm for this task, inspired by statistical physics, and apply this scheme to alpha factor and drug perturbation data in yeast. We identify the role of the COS8 protein, a member of a gene family of previously unknown function, and validate the results by genetic experiments. The algorithm we present is specially suited for very large datasets, can run in parallel, and can be adapted to other problems in systems biology. On renowned benchmarks it outperforms other algorithms in the field.

  16. Simple Deterministically Constructed Recurrent Neural Networks

    NASA Astrophysics Data System (ADS)

    Rodan, Ali; Tiňo, Peter

    A large number of models for time series processing, forecasting or modeling follow a state-space formulation. Models in the specific class of state-space approaches referred to as Reservoir Computing fix their state-transition function. The state space with the associated state-transition structure forms a reservoir, which is supposed to be sufficiently complex to capture a large number of features of the input stream that can potentially be exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents us from performing a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more-or-less) ad-hoc randomized model building stages, with both researchers and practitioners having to rely on trial and error. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performance comparable to that of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proven theoretical limit.
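    A minimal sketch of such a deterministically structured reservoir with a linear readout is given below; the cycle weight, input scaling, sign assignment and the sine-prediction task are assumptions of this sketch, not the settings used in the paper.

        import numpy as np

        def cycle_reservoir_states(u, n_res=50, r=0.5, v=0.1, seed=0):
            # All reservoir weights share the single value r, arranged in one cycle;
            # input weights share the magnitude v with (here) random signs.
            rng = np.random.default_rng(seed)
            w_in = v * rng.choice([-1.0, 1.0], size=n_res)
            x = np.zeros(n_res)
            states = []
            for u_t in u:
                x = np.tanh(r * np.roll(x, 1) + w_in * u_t)   # cycle coupling + input
                states.append(x.copy())
            return np.array(states)

        # Toy one-step-ahead prediction of a sine wave with a ridge-regression readout.
        u = np.sin(0.2 * np.arange(500))
        X, y = cycle_reservoir_states(u[:-1]), u[1:]
        W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ y)
        print(float(np.mean((X @ W_out - y) ** 2)))            # training mean-squared error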

  17. Graph processing platforms at scale: practices and experiences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Lee, Sangkeun; Brown, Tyler C

    2015-01-01

    Graph analysis unveils hidden associations of data in many phenomena and artifacts, such as road networks, social networks, genomic information, and scientific collaboration. Unfortunately, the wide diversity in the characteristics of graphs and graph operations makes it challenging to find the right combination of tools and algorithm implementations to discover desired knowledge from a target data set. This study presents an extensive empirical study of three representative graph processing platforms: Pegasus, GraphX, and Urika. Each system represents a combination of options in data model, processing paradigm, and infrastructure. We benchmarked each platform using three popular graph operations, degree distribution, connected components, and PageRank, over a variety of real-world graphs. Our experiments show that each graph processing platform exhibits different strengths, depending on the type of graph operation. While Urika performs best on non-iterative operations like degree distribution, GraphX outperforms the others on iterative operations like connected components and PageRank. In addition, we discuss challenges in optimizing the performance of each platform over large-scale real-world graphs.
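    The three benchmarked operations are standard; as a platform-independent illustration, two of them (degree distribution and power-iteration PageRank) can be written in a few lines of plain Python. The toy edge list and iteration count are assumptions of this sketch and say nothing about the platforms compared in the study.

        from collections import Counter

        def degree_distribution(edges):
            # Histogram of node degrees for an undirected edge list.
            deg = Counter()
            for u, v in edges:
                deg[u] += 1
                deg[v] += 1
            return Counter(deg.values())

        def pagerank(edges, damping=0.85, iters=50):
            # Power-iteration PageRank on a directed edge list (illustrative only).
            nodes = {n for e in edges for n in e}
            out = {n: [] for n in nodes}
            for u, v in edges:
                out[u].append(v)
            rank = {n: 1.0 / len(nodes) for n in nodes}
            for _ in range(iters):
                new = {n: (1.0 - damping) / len(nodes) for n in nodes}
                for u, targets in out.items():
                    share = damping * rank[u] / (len(targets) or len(nodes))
                    for v in (targets or nodes):      # dangling nodes spread evenly
                        new[v] += share
                rank = new
            return rank

        edges = [(1, 2), (2, 3), (3, 1), (3, 4)]
        print(degree_distribution(edges))
        print(pagerank(edges))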

  18. Classification with an edge: Improving semantic image segmentation with boundary detection

    NASA Astrophysics Data System (ADS)

    Marmanis, D.; Schindler, K.; Wegner, J. D.; Galliani, S.; Datcu, M.; Stilla, U.

    2018-01-01

    We present an end-to-end trainable deep convolutional neural network (DCNN) for semantic segmentation with built-in awareness of semantically meaningful boundaries. Semantic segmentation is a fundamental remote sensing task, and most state-of-the-art methods rely on DCNNs as their workhorse. A major reason for their success is that deep networks learn to accumulate contextual information over very large receptive fields. However, this success comes at a cost, since the associated loss of effective spatial resolution washes out high-frequency details and leads to blurry object boundaries. Here, we propose to counter this effect by combining semantic segmentation with semantically informed edge detection, thus making class boundaries explicit in the model. First, we construct a comparatively simple, memory-efficient model by adding boundary detection to the SEGNET encoder-decoder architecture. Second, we also include boundary detection in FCN-type models and set up a high-end classifier ensemble. We show that boundary detection significantly improves semantic segmentation with CNNs in an end-to-end training scheme. Our best model achieves >90% overall accuracy on the ISPRS Vaihingen benchmark.

  19. SLA-aware differentiated QoS in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Agrawal, Anuj; Vyas, Upama; Bhatia, Vimal; Prakash, Shashi

    2017-07-01

    The quality of service (QoS) offered by optical networks can be improved by accurate provisioning of the service level specifications (SLSs) included in the service level agreement (SLA). A large number of users coexisting in the network require different services. Thus, a pragmatic network needs to offer differentiated QoS to a variety of users according to the SLA contracted for different services at varying costs. In conventional wavelength division multiplexed (WDM) optical networks, service differentiation is feasible only for a limited number of users because of their fixed-grid structure. Newly introduced flex-grid based elastic optical networks (EONs) are more adaptive to traffic requirements than WDM networks because of the flexibility in their grid structure. Thus, we propose an efficient SLA provisioning algorithm with improved QoS for these flex-grid EONs empowered by optical orthogonal frequency division multiplexing (O-OFDM). The proposed algorithm, called SLA-aware differentiated QoS (SADQ), employs differentiation at the levels of routing, spectrum allocation, and connection survivability. SADQ aims to accurately provision the SLA using such multilevel differentiation, with the objective of improving spectrum utilization from the network operator's perspective. SADQ is evaluated for three different classes of service (CoSs) under various traffic demand patterns and for different ratios of the number of requests belonging to the three considered CoSs. We propose two new SLA metrics for the improvement of functional QoS requirements, namely the security, confidentiality and survivability of high-CoS traffic. Since, to the best of our knowledge, the proposed SADQ is the first scheme in optical networks to employ exhaustive differentiation at the levels of routing, spectrum allocation, and survivability in a single algorithm, we first compare the performance of SADQ in EON and in currently deployed WDM networks to assess the differentiation capability of the two network types under such a differentiated service environment. The proposed SADQ is then compared with two existing benchmark routing and spectrum allocation (RSA) schemes that are also designed for EONs. Simulations indicate that the performance of SADQ is distinctly better in EON than in the WDM network under the differentiated QoS scenario. The comparative analysis of the proposed SADQ with the considered benchmark RSA strategies designed for EON shows the improved performance of SADQ in the EON paradigm for offering differentiated services as per the SLA.

  20. XWeB: The XML Warehouse Benchmark

    NASA Astrophysics Data System (ADS)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, together with its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  1. Benchmarking: measuring the outcomes of evidence-based practice.

    PubMed

    DeLise, D C; Leasure, A R

    2001-01-01

    Measurement of the outcomes associated with implementation of evidence-based practice changes is becoming increasingly emphasized by multiple health care disciplines. A final step to the process of implementing and sustaining evidence-supported practice changes is that of outcomes evaluation and monitoring. The comparison of outcomes to internal and external measures is known as benchmarking. This article discusses evidence-based practice, provides an overview of outcomes evaluation, and describes the process of benchmarking to improve practice. A case study is used to illustrate this concept.

  2. Benchmarking homogenization algorithms for monthly data

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2012-01-01

    The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Training the users on homogenization software was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can perform as well as manual ones.
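    The first of these performance metrics, the centered root mean square error between a homogenized series y and the true homogeneous series x over n time steps, is (in its standard form, independent of the averaging scale chosen)

        \mathrm{CRMSE} \;=\; \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl[(y_i - \bar{y}) - (x_i - \bar{x})\bigr]^{2}},

    i.e. both series are centered on their own means before the squared differences are averaged, so constant offsets between a contribution and the truth do not contribute to the error.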

  3. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Training was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.

  4. Cascaded Segmentation-Detection Networks for Word-Level Text Spotting.

    PubMed

    Qin, Siyang; Manduchi, Roberto

    2017-11-01

    We introduce an algorithm for word-level text spotting that is able to accurately and reliably determine the bounding regions of individual words of text "in the wild". Our system is formed by the cascade of two convolutional neural networks. The first network is fully convolutional and is in charge of detecting areas containing text. This results in a very reliable but possibly inaccurate segmentation of the input image. The second network (inspired by the popular YOLO architecture) analyzes each segment produced in the first stage and predicts oriented rectangular regions containing individual words. No post-processing (e.g. text line grouping) is necessary. With an execution time of 450 ms for a 1000 × 560 image on a Titan X GPU, our system achieves good performance on the ICDAR 2013 and 2015 benchmarks [2], [1].

  5. [Hydraulic simulation and safety assessment of secondary water supply system with anti-negative pressure facility].

    PubMed

    Wang, Huan-Huan; Liu, Shu-Ming; Jiang, Shuaiz; Meng, Fan-Lin; Bai, Lu

    2013-01-01

    In the last few decades, the anti-negative pressure facility (ANPF) has emerged as a revolutionary approach to controlling pollution in the secondary water supply system (SWSS) in China. This study analyzed the safety implications of SWSSs equipped with ANPFs, utilizing a water distribution network hydraulic model. A method of hydraulic simulation and safety assessment is presented that can identify the number and location of nodes at which ANPFs can be installed. Benchmark results on two example networks showed that 67% and 89% of the nodes in the respective networks were unsuitable for ANPF installation. The simple and practical algorithm is recommended for water distribution network design and planning in order to increase the safety of SWSSs.

  6. Default Cascades in Complex Networks: Topology and Systemic Risk

    PubMed Central

    Roukny, Tarik; Bersini, Hugues; Pirotte, Hugues; Caldarelli, Guido; Battiston, Stefano

    2013-01-01

    The recent crisis has brought to the fore a crucial question that remains still open: what would be the optimal architecture of financial systems? We investigate the stability of several benchmark topologies in a simple default cascading dynamics in bank networks. We analyze the interplay of several crucial drivers, i.e., network topology, banks' capital ratios, market illiquidity, and random vs targeted shocks. We find that, in general, topology matters only – but substantially – when the market is illiquid. No single topology is always superior to others. In particular, scale-free networks can be both more robust and more fragile than homogeneous architectures. This finding has important policy implications. We also apply our methodology to a comprehensive dataset of an interbank market from 1999 to 2011. PMID:24067913

  7. Learning to forget: continual prediction with LSTM.

    PubMed

    Gers, F A; Schmidhuber, J; Cummins, F

    2000-10-01

    Long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences with explicitly marked ends at which the network's internal state could be reset. Without resets, the state may grow indefinitely and eventually cause the network to break down. Our remedy is a novel, adaptive "forget gate" that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review illustrative benchmark problems on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve continual versions of these problems. LSTM with forget gates, however, easily solves them, and in an elegant way.
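    In the now-standard formulation of the resulting cell (the paper's original notation differs slightly), the forget gate f_t multiplicatively rescales the previous cell state, so the update reads

        f_t = \sigma\!\left(W_f x_t + U_f h_{t-1} + b_f\right), \qquad
        c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \qquad
        h_t = o_t \odot \tanh(c_t),

    where i_t and o_t are the input and output gates and \tilde{c}_t the candidate update; driving f_t toward zero lets the cell reset its internal state at appropriate times.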

  8. Model Uncertainty and Bayesian Model Averaged Benchmark Dose Estimation for Continuous Data

    EPA Science Inventory

    The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approa...

  9. Information processing using a single dynamical node as complex system

    PubMed Central

    Appeltant, L.; Soriano, M.C.; Van der Sande, G.; Danckaert, J.; Massar, S.; Dambre, J.; Schrauwen, B.; Mirasso, C.R.; Fischer, I.

    2011-01-01

    Novel methods for information processing are highly desired in our information-driven society. Inspired by the brain's ability to process information, the recently introduced paradigm known as 'reservoir computing' shows that complex networks can efficiently perform computation. Here we introduce a novel architecture that reduces the usually required large number of elements to a single nonlinear node with delayed feedback. Through an electronic implementation, we experimentally and numerically demonstrate excellent performance in a speech recognition benchmark. Complementary numerical studies also show excellent performance for a time series prediction benchmark. These results prove that delay-dynamical systems, even in their simplest manifestation, can perform efficient information processing. This finding paves the way to feasible and resource-efficient technological implementations of reservoir computing. PMID:21915110
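    A discrete-time caricature of this architecture is sketched below: a single nonlinear node is driven by masked, time-multiplexed input, and its responses at sub-steps along the delay line serve as virtual network nodes. The tanh nonlinearity, mask, and constants are assumptions of this sketch, not properties of the electronic implementation reported in the paper.

        import numpy as np

        def delay_reservoir_states(u, n_virtual=20, feedback=0.7, scale=0.3, seed=0):
            # Each scalar input u_t is multiplied by a fixed random mask and fed to
            # the node over n_virtual sub-steps; the node response at each sub-step,
            # combined with its value one full delay earlier, gives one virtual-node
            # state per input sample.
            rng = np.random.default_rng(seed)
            mask = rng.choice([-1.0, 1.0], size=n_virtual)
            delay_line = np.zeros(n_virtual)        # node outputs one delay ago
            states = []
            for u_t in u:
                current = np.tanh(feedback * delay_line + scale * mask * u_t)
                delay_line = current
                states.append(current)
            return np.array(states)

        # Toy usage: one 20-dimensional virtual-node state per input sample.
        states = delay_reservoir_states(np.sin(0.3 * np.arange(100)))
        print(states.shape)    # (100, 20)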

  10. Performance modeling & simulation of complex systems (A systems engineering design & analysis approach)

    NASA Technical Reports Server (NTRS)

    Hall, Laverne

    1995-01-01

    Modeling of the Multi-mission Image Processing System (MIPS) will be described as an example of the use of a modeling tool to design a distributed system that supports multiple application scenarios. This paper examines: (a) modeling tool selection, capabilities, and operation (namely NETWORK 2.5 by CACI), (b) pointers for building or constructing a model and how the MIPS model was developed, (c) the importance of benchmarking or testing the performance of equipment/subsystems being considered for incorporation into the design/architecture, (d) the essential step of model validation and/or calibration using the benchmark results, (e) sample simulation results from the MIPS model, and (f) how modeling and simulation analysis affected the MIPS design process by having a supportive and informative impact.

  11. Deep neural network for traffic sign recognition systems: An analysis of spatial transformers and stochastic optimisation methods.

    PubMed

    Arcos-García, Álvaro; Álvarez-García, Juan A; Soria-Morillo, Luis M

    2018-03-01

    This paper presents a Deep Learning approach for traffic sign recognition systems. Several classification experiments are conducted over publicly available traffic sign datasets from Germany and Belgium using a Deep Neural Network which comprises Convolutional layers and Spatial Transformer Networks. Such trials are built to measure the impact of diverse factors with the end goal of designing a Convolutional Neural Network that can improve the state of the art in the traffic sign classification task. First, different adaptive and non-adaptive stochastic gradient descent optimisation algorithms such as SGD, SGD-Nesterov, RMSprop and Adam are evaluated. Subsequently, multiple combinations of Spatial Transformer Networks placed at distinct positions within the main neural network are analysed. The proposed Convolutional Neural Network achieves a recognition accuracy of 99.71% on the German Traffic Sign Recognition Benchmark, outperforming previous state-of-the-art methods and also being more efficient in terms of memory requirements. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Using a two-phase evolutionary framework to select multiple network spreaders based on community structure

    NASA Astrophysics Data System (ADS)

    Fu, Yu-Hsiang; Huang, Chung-Yuan; Sun, Chuen-Tsai

    2016-11-01

    Using network community structures to identify multiple influential spreaders is an appropriate method for analyzing the dissemination of information, ideas and infectious diseases. For example, data on spreaders selected from groups of customers who make similar purchases may be used to advertise products and to optimize limited resource allocation. Other examples include community detection approaches aimed at identifying structures and groups in social or complex networks. However, determining the number of communities in a network remains a challenge. In this paper we describe our proposal for a two-phase evolutionary framework (TPEF) for determining community numbers and maximizing community modularity. Lancichinetti-Fortunato-Radicchi benchmark networks were used to test our proposed method and to analyze execution time, community structure quality, convergence, and the network spreading effect. Results indicate that our proposed TPEF generates satisfactory levels of community quality and convergence. They also suggest a need for an index, mechanism or sampling technique to determine whether a community detection approach should be used for selecting multiple network spreaders.
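    The modularity being maximized is, in the standard Newman-Girvan form,

        Q \;=\; \frac{1}{2m}\sum_{ij}\left(A_{ij} - \frac{k_i k_j}{2m}\right)\delta(c_i, c_j),

    where A is the adjacency matrix, k_i the degree of node i, m the number of edges, and \delta(c_i, c_j) = 1 when nodes i and j are assigned to the same community; how the two phases of TPEF explore this objective is described in the paper itself.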

  13. Design and Benchmarking of a Network-In-the-Loop Simulation for Use in a Hardware-In-the-Loop System

    NASA Technical Reports Server (NTRS)

    Aretskin-Hariton, Eliot; Thomas, George; Culley, Dennis; Kratz, Jonathan

    2017-01-01

    Distributed engine control (DEC) systems alter aircraft engine design constraints because of fundamental differences in the input and output communication between DEC and centralized control architectures. The change in the way communication is implemented may create new optimum engine-aircraft configurations. This paper continues the exploration of digital network communication by demonstrating a Network-In-the-Loop simulation at the NASA Glenn Research Center. This simulation incorporates a real-time network protocol, the Engine Area Distributed Interconnect Network Lite (EADIN Lite), with the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) software. The objective of this study is to assess the impact of the digital control network on the control system. Performance is evaluated relative to a truth model for large transient maneuvers and a typical flight profile for commercial aircraft. Results show that a decrease in network bandwidth from 250 Kbps (sampling all sensors every time step) to 40 Kbps resulted in very small differences in control system performance.

  14. Design and Benchmarking of a Network-In-the-Loop Simulation for Use in a Hardware-In-the-Loop System

    NASA Technical Reports Server (NTRS)

    Aretskin-Hariton, Eliot D.; Thomas, George Lindsey; Culley, Dennis E.; Kratz, Jonathan L.

    2017-01-01

    Distributed engine control (DEC) systems alter aircraft engine design constraints because of fundamental differences in the input and output communication between DEC and centralized control architectures. The change in the way communication is implemented may create new optimum engine-aircraft configurations. This paper continues the exploration of digital network communication by demonstrating a Network-In-the-Loop simulation at the NASA Glenn Research Center. This simulation incorporates a real-time network protocol, the Engine Area Distributed Interconnect Network Lite (EADIN Lite), with the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) software. The objective of this study is to assess the impact of the digital control network on the control system. Performance is evaluated relative to a truth model for large transient maneuvers and a typical flight profile for commercial aircraft. Results show that a decrease in network bandwidth from 250 Kbps (sampling all sensors every time step) to 40 Kbps resulted in very small differences in control system performance.

  15. Identifying protein complexes in PPI network using non-cooperative sequential game.

    PubMed

    Maulik, Ujjwal; Basu, Srinka; Ray, Sumanta

    2017-08-21

    Identifying protein complexes from a protein-protein interaction (PPI) network is an important and challenging task in computational biology, as it helps in better understanding cellular mechanisms in various organisms. In this paper we propose a non-cooperative sequential game based model for protein complex detection from a PPI network. The key hypothesis is that protein complex formation is driven by a mechanism that eventually optimizes the number of interactions within the complex, leading to a dense subgraph. The hypothesis is drawn from the observed network property known as the small world property. The proposed multi-player game model translates the hypothesis into game strategies. The Nash equilibrium of the game corresponds to a network partition in which each protein either belongs to a complex or forms a singleton cluster. We further propose an algorithm to find the Nash equilibrium of the sequential game. Exhaustive experiments on synthetic benchmarks and real-life yeast networks evaluate the structural as well as biological significance of the network partitions.

  16. An Analysis of Academic Research Libraries Assessment Data: A Look at Professional Models and Benchmarking Data

    ERIC Educational Resources Information Center

    Lewin, Heather S.; Passonneau, Sarah M.

    2012-01-01

    This research provides the first review of publicly available assessment information found on Association of Research Libraries (ARL) members' websites. After providing an overarching review of benchmarking assessment data, and of professionally recommended assessment models, this paper examines if libraries contextualized their assessment…

  17. Benchmarks Momentum on Increase

    ERIC Educational Resources Information Center

    McNeil, Michele

    2008-01-01

    No longer content with the patchwork quilt of assessments used to measure states' K-12 performance, top policy groups are pushing states toward international benchmarking as a way to better prepare students for a competitive global economy. The National Governors Association, the Council of Chief State School Officers, and the standards-advocacy…

  18. Warfighter Visualizations Compilations

    DTIC Science & Technology

    2013-05-01

    list of the user’s favorite websites or other textual content, sub-categorized into types, such as blogs, social networking sites, comics, videos...available: The example in the prototype shows a random archived comic from the website. Other options include thumbnail strips of imagery or dynamic...varied, and range from serving as statistical benchmarks, for increasing social consciousness and interaction, for improving educational interactions

  19. Closing the Gap: The Maturing of Quality Assurance in Australian University Libraries

    ERIC Educational Resources Information Center

    Tang, Karen

    2012-01-01

    A benchmarking review of the quality assurance practices of the libraries of the Australian Technology Network conducted in 2006 revealed exemplars of best practice, but also sector-wide gaps. A follow-up review in 2010 indicated the best practices that remain relevant. While some gaps persist, there has been improvement across the libraries and…

  20. Structural reducibility of multilayer networks

    NASA Astrophysics Data System (ADS)

    de Domenico, Manlio; Nicosia, Vincenzo; Arenas, Alexandre; Latora, Vito

    2015-04-01

    Many complex systems can be represented as networks consisting of distinct types of interactions, which can be categorized as links belonging to different layers. For example, a good description of the full protein-protein interactome requires, for some organisms, up to seven distinct network layers, accounting for different genetic and physical interactions, each containing thousands of protein-protein relationships. A fundamental open question is then how many layers are indeed necessary to accurately represent the structure of a multilayered complex system. Here we introduce a method based on quantum theory to reduce the number of layers to a minimum while maximizing the distinguishability between the multilayer network and the corresponding aggregated graph. We validate our approach on synthetic benchmarks and we show that the number of informative layers in some real multilayer networks of protein-genetic interactions, social, economical and transportation systems can be reduced by up to 75%.

  1. Optical interconnection network for parallel access to multi-rank memory in future computing systems.

    PubMed

    Wang, Kang; Gu, Huaxi; Yang, Yintang; Wang, Kun

    2015-08-10

    With the number of cores increasing, there is an emerging need for a high-bandwidth low-latency interconnection network, serving core-to-memory communication. In this paper, aiming at the goal of simultaneous access to multi-rank memory, we propose an optical interconnection network for core-to-memory communication. In the proposed network, the wavelength usage is delicately arranged so that cores can communicate with different ranks at the same time and broadcast for flow control can be achieved. A distributed memory controller architecture that works in a pipeline mode is also designed for efficient optical communication and transaction address processes. The scaling method and wavelength assignment for the proposed network are investigated. Compared with traditional electronic bus-based core-to-memory communication, the simulation results based on the PARSEC benchmark show that the bandwidth enhancement and latency reduction are apparent.

  2. Validation of pore network simulations of ex-situ water distributions in a gas diffusion layer of proton exchange membrane fuel cells with X-ray tomographic images

    NASA Astrophysics Data System (ADS)

    Agaesse, Tristan; Lamibrac, Adrien; Büchi, Felix N.; Pauchet, Joel; Prat, Marc

    2016-11-01

    Understanding and modeling two-phase flows in the gas diffusion layer (GDL) of proton exchange membrane fuel cells is important in order to improve fuel cell performance. It is scientifically challenging because of the peculiarities of GDL microstructures. In the present work, simulations on a pore network model are compared to X-ray tomographic images of water distributions during an ex-situ water invasion experiment. A method based on watershed segmentation was developed to extract a pore network from the 3D segmented image of the dry GDL. Pore network modeling and a full morphology model were then used to perform two-phase simulations and compared to the experimental data. The results show good agreement between experimental and simulated microscopic water distributions. Pore network extraction parameters were also benchmarked using the experimental data and results from full morphology simulations.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunden, Fanny; Peck, Ariana; Salzman, Julia

    Enzymes enable life by accelerating reaction rates to biological timescales. Conventional studies have focused on identifying the residues that have a direct involvement in an enzymatic reaction, but these so-called 'catalytic residues' are embedded in extensive interaction networks. Although fundamental to our understanding of enzyme function, evolution, and engineering, the properties of these networks have yet to be quantitatively and systematically explored. We dissected an interaction network of five residues in the active site of Escherichia coli alkaline phosphatase. Analysis of the complex catalytic interdependence of specific residues identified three energetically independent but structurally interconnected functional units with distinct modes of cooperativity. From an evolutionary perspective, this network is orders of magnitude more probable to arise than a fully cooperative network. From a functional perspective, new catalytic insights emerge. Further, such comprehensive energetic characterization will be necessary to benchmark the algorithms required to rationally engineer highly efficient enzymes.

  4. Network evolution model for supply chain with manufactures as the core.

    PubMed

    Fang, Haiyang; Jiang, Dali; Yang, Tinghong; Fang, Ling; Yang, Jian; Li, Wu; Zhao, Jing

    2018-01-01

    Building an evolution model of supply chain networks can be helpful for understanding their development laws. However, specific characteristics and attributes of real supply chains are often neglected in existing evolution models. This work proposes a new evolution model of a supply chain with manufactures as the core, based on external market demand and internal competition-cooperation. The evolution model assumes that the external market environment is relatively stable and considers several factors, including the specific topology of the supply chain, external market demand, ecological growth and flow conservation. The simulation results suggest that the networks evolved by our model have structures similar to those of real supply chains. Meanwhile, the influences of external market demand and internal competition-cooperation on network evolution are analyzed. Additionally, 38 benchmark data sets are applied to validate the rationality of our evolution model, among which nine manufacturing supply chains match the features of the networks constructed by our model.

  5. Network evolution model for supply chain with manufactures as the core

    PubMed Central

    Jiang, Dali; Fang, Ling; Yang, Jian; Li, Wu; Zhao, Jing

    2018-01-01

    Building an evolution model of supply chain networks can be helpful for understanding their development laws. However, specific characteristics and attributes of real supply chains are often neglected in existing evolution models. This work proposes a new evolution model of a supply chain with manufactures as the core, based on external market demand and internal competition-cooperation. The evolution model assumes that the external market environment is relatively stable and considers several factors, including the specific topology of the supply chain, external market demand, ecological growth and flow conservation. The simulation results suggest that the networks evolved by our model have structures similar to those of real supply chains. Meanwhile, the influences of external market demand and internal competition-cooperation on network evolution are analyzed. Additionally, 38 benchmark data sets are applied to validate the rationality of our evolution model, among which nine manufacturing supply chains match the features of the networks constructed by our model. PMID:29370201

  6. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone. Using a ROV to place and remove sensors on the benchmarks will significantly reduce the number of sensors required by the community to monitor offshore strain in subduction zones.

  7. Competing dynamic phases of active polymer networks

    NASA Astrophysics Data System (ADS)

    Freedman, Simon; Banerjee, Shiladitya; Dinner, Aaron R.

    Recent experiments on in-vitro reconstituted assemblies of F-actin, myosin-II motors, and cross-linking proteins show that tuning local network properties can change the fundamental biomechanical behavior of the system. For example, by varying cross-linker density and actin bundle rigidity, one can switch between contractile networks useful for reshaping cells, polarity sorted networks ideal for directed molecular transport, and frustrated networks with robust structural properties. To efficiently investigate the dynamic phases of actomyosin networks, we developed a coarse grained non-equilibrium molecular dynamics simulation of model semiflexible filaments, molecular motors, and cross-linkers with phenomenologically defined interactions. The simulation's accuracy was verified by benchmarking the mechanical properties of its individual components and collective behavior against experimental results at the molecular and network scales. By adjusting the model's parameters, we can reproduce the qualitative phases observed in experiment and predict the protein characteristics where phase crossovers could occur in collective network dynamics. Our model provides a framework for understanding cells' multiple uses of actomyosin networks and their applicability in materials research. Supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program.

  8. Quantitative Characterization of the Microstructure and Transport Properties of Biopolymer Networks

    PubMed Central

    Jiao, Yang; Torquato, Salvatore

    2012-01-01

    Biopolymer networks are of fundamental importance to many biological processes in normal and tumorous tissues. In this paper, we employ the panoply of theoretical and simulation techniques developed for characterizing heterogeneous materials to quantify the microstructure and effective diffusive transport properties (diffusion coefficient De and mean survival time τ) of collagen type I networks at various collagen concentrations. In particular, we compute the pore-size probability density function P(δ) for the networks and present a variety of analytical estimates of the effective diffusion coefficient De for finite-sized diffusing particles, including the low-density approximation, the Ogston approximation, and the Torquato approximation. The Hashin-Shtrikman upper bound on the effective diffusion coefficient De and the pore-size lower bound on the mean survival time τ are used as benchmarks to test our analytical approximations and numerical results. Moreover, we generalize the efficient first-passage-time techniques for Brownian-motion simulations in suspensions of spheres to the case of fiber networks and compute the associated effective diffusion coefficient De as well as the mean survival time τ, which is related to nuclear magnetic resonance (NMR) relaxation times. Our numerical results for De are in excellent agreement with analytical results for simple network microstructures, such as periodic arrays of parallel cylinders. Specifically, the Torquato approximation provides the most accurate estimates of De for all collagen concentrations among all of the analytical approximations we consider. We formulate a universal curve for τ for the networks at different collagen concentrations, extending the work of Yeong and Torquato [J. Chem. Phys. 106, 8814 (1997)]. We apply rigorous cross-property relations to estimate the effective bulk modulus of collagen networks from a knowledge of the effective diffusion coefficient computed here. The use of cross-property relations to link other physical properties to the transport properties of collagen networks is also discussed. PMID:22683739

  9. Module discovery by exhaustive search for densely connected, co-expressed regions in biomolecular interaction networks.

    PubMed

    Colak, Recep; Moser, Flavia; Chu, Jeffrey Shih-Chieh; Schönhuth, Alexander; Chen, Nansheng; Ester, Martin

    2010-10-25

    Computational prediction of functionally related groups of genes (functional modules) from large-scale data is an important issue in computational biology. Gene expression experiments and interaction networks are well-studied large-scale data sources, available for many not yet exhaustively annotated organisms. It has been well established that, when these two data sources are analyzed jointly, modules are often reflected by highly interconnected (dense) regions in the interaction networks whose participating genes are co-expressed. However, the tractability of the problem had remained unclear, and methods by which to exhaustively search for such constellations had not been presented. We provide an algorithmic framework, referred to as Densely Connected Biclustering (DECOB), by which the aforementioned search problem becomes tractable. To benchmark the predictive power inherent in the approach, we computed all co-expressed, dense regions in physical protein and genetic interaction networks from human and yeast. An automated filtering procedure reduces our output, resulting in smaller collections of modules comparable to state-of-the-art approaches. Our results performed favorably in a fair benchmarking competition adhering to standard criteria. We demonstrate the usefulness of an exhaustive module search by using the unreduced output to more quickly perform GO-term-related function prediction tasks, and we point out the advantages of the exhaustive output by predicting functional relationships in two examples. We demonstrate that the computation of all densely connected and co-expressed regions in interaction networks is an approach to module discovery of considerable value. Beyond confirming the well-settled hypothesis that such co-expressed, densely connected interaction network regions reflect functional modules, we open up novel computational ways to comprehensively analyze the modular organization of an organism based on prevalent and largely available large-scale datasets. Software and data sets are available at http://www.sfu.ca/~ester/software/DECOB.zip.

  10. Networked remote area dental services: a viable, sustainable approach to oral health care in challenging environments.

    PubMed

    Dyson, Kate; Kruger, Estie; Tennant, Marc

    2012-12-01

    This study examines the cost-effectiveness of a model of remote area oral health service. Retrospective financial analysis. Rural and remote primary health services. Clinical activity data and associated cost data relating to the provision of a networked visiting oral health service by the Centre for Rural and Remote Oral Health formed the basis of the study data set. The cost-effectiveness of the Centre's model of service provision at five rural and remote sites in Western Australia during the calendar years 2006, 2008 and 2010 was examined in the study. Calculations of the service provision costs and value of care provided were made using data records and the Fee Schedule of Dental Services for Dentists. The ratio of service provision costs to the value of care provided was determined for each site and was benchmarked against the equivalent ratios applicable to large-scale government sector models of service provision. The use of networked models has been effective in other disciplines, but this study is the first to show that a networked hub-and-spoke approach (five spokes to one hub) is cost efficient in remote oral health care. By excluding special cost-saving initiatives introduced by the Centre, the study examines easily translatable direct service provision costs against direct clinical care outcomes in some of Australia's most challenging locations. This study finds that networked hub-and-spoke models of care can be financially efficient arrangements in remote oral health care. © 2012 The Authors. Australian Journal of Rural Health © National Rural Health Alliance Inc.

  11. An outer approximation method for the road network design problem

    PubMed Central

    2018-01-01

    Finding the best investment in road infrastructure, or network design, is perceived as a fundamental benchmark problem in transportation. Given a set of candidate road projects with associated costs, finding the best subset with respect to a limited budget is known as a bilevel Discrete Network Design Problem (DNDP) of NP-hard computational complexity. We address this complexity with a hybrid exact-heuristic methodology based on a two-stage relaxation as follows: (i) the bilevel feature is relaxed to a single-level problem by taking the network performance function of the upper level into the user equilibrium traffic assignment problem (UE-TAP) in the lower level as a constraint. It results in a mixed-integer nonlinear programming (MINLP) problem which is then solved using the Outer Approximation (OA) algorithm; (ii) we further relax the multi-commodity UE-TAP to a single-commodity MILP problem, that is, the multiple OD pairs are aggregated to a single OD pair. This methodology has two main advantages: (i) the method proves highly efficient for solving the DNDP on the large-sized network of Winnipeg, Canada. The results suggest that within a limited number of iterations (as termination criterion), global optimum solutions are quickly reached in most of the cases; otherwise, good solutions (close to global optimum solutions) are found in early iterations. Comparative analysis of the networks of Gao and Sioux-Falls shows that for such a non-exact method the global optimum solutions are found in fewer iterations than those found in some analytically exact algorithms in the literature. (ii) Integration of the objective function among the constraints provides a commensurate capability to tackle the multi-objective (or multi-criteria) DNDP as well. PMID:29590111

  12. An outer approximation method for the road network design problem.

    PubMed

    Asadi Bagloee, Saeed; Sarvi, Majid

    2018-01-01

    Finding the best investment in road infrastructure, or network design, is perceived as a fundamental benchmark problem in transportation. Given a set of candidate road projects with associated costs, finding the best subset with respect to a limited budget is known as a bilevel Discrete Network Design Problem (DNDP) of NP-hard computational complexity. We address this complexity with a hybrid exact-heuristic methodology based on a two-stage relaxation as follows: (i) the bilevel feature is relaxed to a single-level problem by taking the network performance function of the upper level into the user equilibrium traffic assignment problem (UE-TAP) in the lower level as a constraint. It results in a mixed-integer nonlinear programming (MINLP) problem which is then solved using the Outer Approximation (OA) algorithm; (ii) we further relax the multi-commodity UE-TAP to a single-commodity MILP problem, that is, the multiple OD pairs are aggregated to a single OD pair. This methodology has two main advantages: (i) the method proves highly efficient for solving the DNDP on the large-sized network of Winnipeg, Canada. The results suggest that within a limited number of iterations (as termination criterion), global optimum solutions are quickly reached in most of the cases; otherwise, good solutions (close to global optimum solutions) are found in early iterations. Comparative analysis of the networks of Gao and Sioux-Falls shows that for such a non-exact method the global optimum solutions are found in fewer iterations than those found in some analytically exact algorithms in the literature. (ii) Integration of the objective function among the constraints provides a commensurate capability to tackle the multi-objective (or multi-criteria) DNDP as well.
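
    The bilevel structure of the DNDP can be illustrated with a toy example: the upper level selects a budget-feasible subset of candidate projects, and the lower level evaluates network performance for that subset. The sketch below brute-forces the selection and replaces the UE-TAP with a single shortest-path assignment, so it shows the problem shape rather than the authors' outer-approximation algorithm; all link times, project costs, and demands are made up.

    ```python
    import itertools
    import networkx as nx

    # toy network: existing links plus candidate projects (illustrative values)
    existing = [("A", "B", 10), ("B", "D", 10), ("A", "C", 15), ("C", "D", 15)]
    projects = {"P1": (("A", "D", 12), 5.0),   # new link and its construction cost
                "P2": (("B", "C", 3), 3.0)}
    budget = 6.0
    demand = [("A", "D"), ("C", "B")]          # OD pairs, one unit each

    def network_cost(selected):
        """Total shortest-path travel time over all OD pairs for a given set of
        built projects (a crude stand-in for the lower-level UE-TAP)."""
        G = nx.DiGraph()
        for u, v, t in existing:
            G.add_edge(u, v, weight=t)
            G.add_edge(v, u, weight=t)
        for name in selected:
            (u, v, t), _ = projects[name]
            G.add_edge(u, v, weight=t)
            G.add_edge(v, u, weight=t)
        return sum(nx.shortest_path_length(G, o, d, weight="weight") for o, d in demand)

    best = None
    for r in range(len(projects) + 1):
        for combo in itertools.combinations(projects, r):
            if sum(projects[p][1] for p in combo) <= budget:
                cost = network_cost(combo)
                if best is None or cost < best[1]:
                    best = (combo, cost)
    print("best project subset:", best[0], "total travel time:", best[1])
    ```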

  13. Benchmarking care for very low birthweight infants in Ireland and Northern Ireland.

    PubMed

    Murphy, B P; Armstrong, K; Ryan, C A; Jenkins, J G

    2010-01-01

    Benchmarking is that process through which best practice is identified and continuous quality improvement pursued through comparison and sharing. The Vermont Oxford Neonatal Network (VON) is the largest international external reference centre for very low birth weight (VLBW) infants. This report from 2004-2007 compares survival and morbidity throughout Ireland and benchmarks these results against VON. A standardised VON database for VLBW infants was created in 14 participating centres across Ireland and Northern Ireland. Data on 716 babies were submitted in 2004, increasing to 796 babies in 2007, with centres caring for between 10 and 120 VLBW infants per year. In 2007, mortality rates in VLBW infants varied from 4% to 19%. Standardised mortality ratios indicate that the number of deaths observed was not significantly different from the number expected, based on the characteristics of infants treated. There was no difference in the incidence of severe intraventricular haemorrhage between all-Ireland and VON groups (5% vs 6%, respectively). All-Ireland rates for chronic lung disease (CLD; 15-21%) remained lower than rates seen in the VON group (24-28%). The rates of late-onset nosocomial infection in the all-Ireland group (25-26%) remained double those in the VON group (12-13%). This is the first all-Ireland international benchmarking report in any medical specialty. Survival, severe intraventricular haemorrhage and CLD compare favourably with international standards, but rates of nosocomial infection in neonatal units are concerning. Benchmarking clinical outcomes is critical for quality improvement and informing decisions concerning neonatal intensive care service provision.
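
    Standardised mortality ratios of the kind reported here compare observed deaths with the number expected for the case mix. A small sketch with purely illustrative numbers (not the report's data), using Byar's approximation for the confidence interval:

    ```python
    import math

    def standardized_mortality_ratio(observed_deaths, expected_deaths):
        """SMR with an approximate 95% confidence interval (Byar's method)."""
        o, e = observed_deaths, expected_deaths
        smr = o / e
        lower = o * (1 - 1 / (9 * o) - 1.96 / (3 * math.sqrt(o))) ** 3 / e
        upper = (o + 1) * (1 - 1 / (9 * (o + 1)) + 1.96 / (3 * math.sqrt(o + 1))) ** 3 / e
        return smr, (lower, upper)

    # illustrative unit-level figures, not taken from the study
    smr, ci = standardized_mortality_ratio(observed_deaths=18, expected_deaths=21.4)
    print(f"SMR = {smr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
    # a confidence interval containing 1.0 means the observed deaths are not
    # significantly different from the number expected for the case mix
    ```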

  14. Educating Next Generation Nuclear Criticality Safety Engineers at the Idaho National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. D. Bess; J. B. Briggs; A. S. Garcia

    2011-09-01

    One of the challenges in educating our next generation of nuclear safety engineers is the limitation of opportunities to receive significant experience or hands-on training prior to graduation. Such training is generally restricted to on-the-job training before this new engineering workforce can adequately provide assessment of nuclear systems and establish safety guidelines. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) can provide students and young professionals the opportunity to gain experience and enhance critical engineering skills. The ICSBEP and IRPhEP publish annual handbooks that contain evaluations of experiments along with summarized experimental data and peer-reviewed benchmark specifications to support the validation of neutronics codes, nuclear cross-section data, and reactor designs. Participation in the benchmark process not only benefits those who use these Handbooks within the international community, but provides the individual with opportunities for professional development, networking with an international community of experts, and valuable experience to be used in future employment. Traditionally, students have participated in benchmarking activities via internships at national laboratories, universities, or companies involved with the ICSBEP and IRPhEP programs. Additional programs have been developed to facilitate the nuclear education of students while participating in the benchmark projects. These programs include coordination with the Center for Space Nuclear Research (CSNR) Next Degree Program, collaboration with the Department of Energy Idaho Operations Office to train nuclear and criticality safety engineers, and student evaluations serving as the basis for Master's theses in nuclear engineering.

  15. Gestational age specific neonatal survival in the State of Qatar (2003-2008) - a comparative study with international benchmarks.

    PubMed

    Rahman, Sajjad; Salameh, Khalil; Al-Rifai, Hilal; Masoud, Ahmed; Lutfi, Samawal; Salama, Husam; Abdoh, Ghassan; Omar, Fahmi; Bener, Abdulbari

    2011-09-01

    To analyze and compare the current gestational age specific neonatal survival rates between Qatar and international benchmarks. An analytical comparative study. Women's Hospital, Hamad Medical Corporation, Doha, Qatar, from 2003-2008. Six years' (2003-2008) gestational age specific neonatal mortality data were stratified for each completed week of gestation at birth from 24 weeks to term. The data from World Health Statistics by WHO (2010), Vermont Oxford Network (VON, 2007) and National Statistics United Kingdom (2006) were used as international benchmarks for comparative analysis. A total of 82,002 babies were born during the study period. Qatar's neonatal mortality rate (NMR) dropped from 6/1000 in 2003 to 4.3/1000 in 2008 (p < 0.05). The overall and gestational age specific neonatal mortality rates of Qatar were comparable with international benchmarks. The survival of < 27 weeks and term babies was better in Qatar (p=0.01 and p < 0.001 respectively) as compared to VON. The survival of > 32 weeks babies was better in the UK (p=0.01) as compared to Qatar. The relative risk (RR) of death decreased with increasing gestational age (p < 0.0001). Prematurity (45%) and lethal chromosomal and congenital anomalies (26.5%) were the two leading causes of neonatal deaths in Qatar. The current total and gestational age specific neonatal survival rates in the State of Qatar are comparable with international benchmarks. In Qatar, persistently high rates of low birth weight and lethal chromosomal and congenital anomalies significantly contribute towards neonatal mortality.

  16. Benchmarking the Integration of WAVEWATCH III Results into HAZUS-MH: Preliminary Results

    NASA Technical Reports Server (NTRS)

    Berglund, Judith; Holland, Donald; McKellip, Rodney; Sciaudone, Jeff; Vickery, Peter; Wang, Zhanxian; Ying, Ken

    2005-01-01

    The report summarizes the results from the preliminary benchmarking activities associated with the use of WAVEWATCH III (WW3) results in the HAZUS-MH MR1 flood module. Project partner Applied Research Associates (ARA) is integrating the WW3 model into HAZUS. The current version of HAZUS-MH predicts loss estimates from hurricane-related coastal flooding by using values of surge only. Using WW3, wave setup can be included with surge. Loss estimates resulting from the use of surge-only and surge-plus-wave-setup were compared. This benchmarking study is preliminary because the HAZUS-MH MR1 flood module was under development at the time of the study. In addition, WW3 is not scheduled to be fully integrated with HAZUS-MH and available for public release until 2008.

  17. A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images.

    PubMed

    Vázquez, David; Bernal, Jorge; Sánchez, F Javier; Fernández-Esparrach, Gloria; López, Antonio M; Romero, Adriana; Drozdzal, Michal; Courville, Aaron

    2017-01-01

    Colorectal cancer (CRC) is the third leading cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search of polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset and taking advantage of advances in semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.
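
    A minimal fully convolutional segmentation network of the kind used for such baselines can be sketched as follows; the architecture, image sizes, and four-class setup are illustrative stand-ins, not the benchmark's released models or data.

    ```python
    import torch
    import torch.nn as nn

    class MiniFCN(nn.Module):
        """A minimal fully convolutional segmentation network: a downsampling
        encoder, a 1x1 classifier, and bilinear upsampling back to the input
        resolution (four classes, as in the endoluminal benchmark)."""
        def __init__(self, num_classes=4):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            )
            self.classifier = nn.Conv2d(128, num_classes, kernel_size=1)

        def forward(self, x):
            h = self.classifier(self.encoder(x))
            return nn.functional.interpolate(h, size=x.shape[2:],
                                             mode="bilinear", align_corners=False)

    model = MiniFCN(num_classes=4)
    images = torch.randn(2, 3, 224, 224)          # stand-in for colonoscopy frames
    masks = torch.randint(0, 4, (2, 224, 224))    # per-pixel class labels
    logits = model(images)                        # (2, 4, 224, 224)
    loss = nn.CrossEntropyLoss()(logits, masks)
    loss.backward()
    print("pixel-wise cross-entropy:", float(loss))
    ```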

  18. Development and Application of Benchmark Examples for Mixed-Mode I/II Quasi-Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation prediction is presented. The example is based on a finite element model of the Mixed-Mode Bending (MMB) specimen for 50% mode II. The benchmarking is demonstrated for Abaqus/Standard; however, the example is independent of the analysis software used and allows the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement as well as delamination length versus applied load/displacement relationships from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall, the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.

  19. An iterative network partition algorithm for accurate identification of dense network modules

    PubMed Central

    Sun, Siqi; Dong, Xinran; Fu, Yao; Tian, Weidong

    2012-01-01

    A key step in network analysis is to partition a complex network into dense modules. Currently, modularity is one of the most popular benefit functions used to partition network modules. However, recent studies have suggested that it has an inherent limitation in detecting dense network modules. In this study, we observed that despite the limitation, modularity has the advantage of preserving the primary network structure of the undetected modules. Thus, we have developed a simple iterative Network Partition (iNP) algorithm to partition a network. The iNP algorithm provides a general framework in which any modularity-based algorithm can be implemented in the network partition step. Here, we tested iNP with three modularity-based algorithms: multi-step greedy (MSG), spectral clustering and Qcut. Compared with the original three methods, iNP achieved a significant improvement in the quality of network partition in a benchmark study with simulated networks, identified more modules with significantly better enrichment of functionally related genes in both the yeast protein complex network and the breast cancer gene co-expression network, and discovered more cancer-specific modules in the cancer gene co-expression network. As such, iNP should have broad application as a general method to assist in the analysis of biological networks. PMID:22121225
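
    A sketch of one way to iterate a modularity-based partition is shown below, using networkx's greedy modularity communities as the inner step and a simple size/density stopping rule; the published iNP procedure and its three inner algorithms differ in detail, and the thresholds here are illustrative.

    ```python
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def edge_density(G, nodes):
        k = len(nodes)
        return 0.0 if k < 2 else G.subgraph(nodes).number_of_edges() / (k * (k - 1) / 2)

    def iterative_partition(G, min_size=4, min_density=0.5):
        """Iteratively apply a modularity-based partition: communities that are
        small or dense enough are kept as modules, the rest are re-partitioned.
        A sketch of the iterative idea only; any modularity-based algorithm
        could be plugged in as the inner step."""
        modules, queue = [], [set(G.nodes())]
        while queue:
            nodes = queue.pop()
            sub = G.subgraph(nodes)
            if (len(nodes) <= min_size or sub.number_of_edges() == 0
                    or edge_density(G, nodes) >= min_density):
                modules.append(nodes)
                continue
            parts = [set(c) for c in greedy_modularity_communities(sub)]
            if len(parts) <= 1:          # cannot be split further
                modules.append(nodes)
            else:
                queue.extend(parts)
        return modules

    G = nx.karate_club_graph()
    for i, m in enumerate(iterative_partition(G)):
        print(f"module {i}: {sorted(m)}")
    ```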

  20. Benchmarking Equity in Transfer Policies for Career and Technical Associate's Degrees

    ERIC Educational Resources Information Center

    Chase, Megan M.

    2011-01-01

    Using critical policy analysis, this study considers state policies that impede technical credit transfer from public 2-year colleges to 4-year institutions of higher education. The states of Ohio, Texas, Washington, and Wisconsin are considered, and seven policy benchmarks for facilitating the transfer of technical credits are proposed. (Contains…

  1. Benchmarking and Threshold Standards in Higher Education. Staff and Educational Development Series.

    ERIC Educational Resources Information Center

    Smith, Helen, Ed.; Armstrong, Michael, Ed.; Brown, Sally, Ed.

    This book explores the issues involved in developing standards in higher education, examining the practical issues involved in benchmarking and offering a critical analysis of the problems associated with this developmental tool. The book focuses primarily on experience in the United Kingdom (UK), but looks also at international activity in this…

  2. ARL Physics Web Pages: An Evaluation by Established, Transitional and Emerging Benchmarks.

    ERIC Educational Resources Information Center

    Duffy, Jane C.

    2002-01-01

    Provides an overview of characteristics among Association of Research Libraries (ARL) physics Web pages. Examines current academic Web literature and from that develops six benchmarks to measure physics Web pages: ease of navigation; logic of presentation; representation of all forms of information; engagement of the discipline; interactivity of…

  3. Recommendations for Benchmarking Web Site Usage among Academic Libraries.

    ERIC Educational Resources Information Center

    Hightower, Christy; Sih, Julie; Tilghman, Adam

    1998-01-01

    To help library directors and Web developers create a benchmarking program to compare statistics of academic Web sites, the authors analyzed the Web server log files of 14 university science and engineering libraries. Recommends a centralized voluntary reporting structure coordinated by the Association of Research Libraries (ARL) and a method for…

  4. 2010 Recruiting Benchmarks Survey. Research Brief

    ERIC Educational Resources Information Center

    National Association of Colleges and Employers (NJ1), 2010

    2010-01-01

    The National Association of Colleges and Employers conducted its annual survey of employer members from June 15, 2010 to August 15, 2010, to benchmark data relevant to college recruiting. From a base of 861 employers holding organizational membership, there were 268 responses for a response rate of 31 percent. Following are some of the major…

  5. A novel strategy for load balancing of distributed medical applications.

    PubMed

    Logeswaran, Rajasvaran; Chen, Li-Choo

    2012-04-01

    Current trends in medicine, specifically in the electronic handling of medical applications, ranging from digital imaging, paperless hospital administration and electronic medical records, telemedicine, to computer-aided diagnosis, create a burden on the network. Distributed Service Architectures, such as Intelligent Network (IN), Telecommunication Information Networking Architecture (TINA) and Open Service Access (OSA), are able to meet this new challenge. Distribution enables computational tasks to be spread among multiple processors; hence, performance is an important issue. This paper proposes a novel approach in load balancing, the Random Sender Initiated Algorithm, for distribution of tasks among several nodes sharing the same computational object (CO) instances in Distributed Service Architectures. Simulations illustrate that the proposed algorithm produces better network performance than the benchmark load balancing algorithms (the Random Node Selection Algorithm and the Shortest Queue Algorithm), especially under medium and heavily loaded conditions.
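
    The policy comparison can be illustrated with a toy simulation: tasks arrive at nodes of unequal capacity, and a task is either placed on a random node (the Random Node Selection benchmark) or the receiving node probes one random peer and hands the task over only if that peer's queue is shorter (a generic sender-initiated rule; the paper's Random Sender Initiated Algorithm is defined for Distributed Service Architectures and is not reproduced here). All parameters are illustrative.

    ```python
    import random

    def simulate(policy, n_nodes=8, n_tasks=5000, seed=1):
        """Toy comparison of task-placement policies on nodes with unequal
        service capacity. Returns the maximum queue backlog observed."""
        rng = random.Random(seed)
        queues = [0] * n_nodes
        capacity = [rng.choice([1, 2, 3]) for _ in range(n_nodes)]  # tasks served per service step
        worst_backlog = 0
        for t in range(n_tasks):
            home = rng.randrange(n_nodes)          # node that receives the new task
            if policy == "random":                 # benchmark: random node selection
                target = rng.randrange(n_nodes)
            else:                                  # sender-initiated: probe one random peer
                peer = rng.randrange(n_nodes)
                target = peer if queues[peer] < queues[home] else home
            queues[target] += 1
            if t % 10 == 0:                        # periodic service of all queues
                for i in range(n_nodes):
                    queues[i] = max(0, queues[i] - capacity[i])
            worst_backlog = max(worst_backlog, max(queues))
        return worst_backlog

    for policy in ("random", "sender_initiated"):
        print(policy, "worst backlog:", simulate(policy))
    ```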

  6. Wavelet decomposition and radial basis function networks for system monitoring

    NASA Astrophysics Data System (ADS)

    Ikonomopoulos, A.; Endou, A.

    1998-10-01

    Two approaches are coupled to develop a novel collection of black box models for monitoring operational parameters in a complex system. The idea springs from the intention of obtaining multiple predictions for each system variable and fusing them before they are used to validate the actual measurement. The proposed architecture pairs the analytical abilities of the discrete wavelet decomposition with the computational power of radial basis function networks. Members of a wavelet family are constructed in a systematic way and chosen through a statistical selection criterion that optimizes the structure of the network. Network parameters are further optimized through a quasi-Newton algorithm. The methodology is demonstrated utilizing data obtained during two transients of the Monju fast breeder reactor. The models developed are benchmarked with respect to similar regressors based on Gaussian basis functions.
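
    The pairing of a discrete wavelet decomposition with a radial-basis-function regressor can be sketched as follows; an RBF-kernel ridge regressor from scikit-learn stands in for the radial basis function network of the paper, and the signal, window length, and hyperparameters are invented for illustration.

    ```python
    import numpy as np
    import pywt
    from sklearn.kernel_ridge import KernelRidge

    def wavelet_features(window, wavelet="db4", level=3):
        """Flatten the multi-level DWT coefficients of a signal window."""
        return np.concatenate(pywt.wavedec(window, wavelet, level=level))

    # synthetic "plant variable": predict the next sample from the previous 64
    rng = np.random.default_rng(0)
    t = np.linspace(0, 40 * np.pi, 4096)
    signal = np.sin(t) + 0.3 * np.sin(3.1 * t) + 0.05 * rng.standard_normal(t.size)

    window = 64
    X = np.array([wavelet_features(signal[i:i + window]) for i in range(len(signal) - window)])
    y = signal[window:]

    split = 3000
    model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.05)  # RBF-kernel regressor
    model.fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    print("validation RMSE:", float(np.sqrt(np.mean((pred - y[split:]) ** 2))))
    ```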

  7. Hybrid services efficient provisioning over the network coding-enabled elastic optical networks

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Gu, Rentao; Ji, Yuefeng; Kavehrad, Mohsen

    2017-03-01

    As a variety of services have emerged, hybrid services have become more common in real optical networks. Although elastic spectrum resource optimization over elastic optical networks (EONs) has been widely investigated, little research has been carried out on routing and spectrum allocation (RSA) for hybrid services, especially over the network coding-enabled EON. We investigated the RSA for the unicast service and the network coding-based multicast service over the network coding-enabled EON with the constraints of time delay and transmission distance. To address this issue, a mathematical model was built to minimize the total spectrum consumption for the hybrid services over the network coding-enabled EON under the constraints of time delay and transmission distance. The model guarantees different routing constraints for different types of services. The intermediate nodes over the network coding-enabled EON are assumed to be capable of encoding the flows for different kinds of information. We propose an efficient heuristic, the network coding-based adaptive routing and layered graph-based spectrum allocation algorithm (NCAR-LGSA). From the simulation results, NCAR-LGSA shows highly efficient performance in terms of spectrum resource utilization under different network scenarios compared with the benchmark algorithms.

  8. Financial Time Series Prediction Using Spiking Neural Networks

    PubMed Central

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two “traditional” rate-encoded neural networks (a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network) and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data, US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments. PMID:25170618

  9. Neural network explanation using inversion.

    PubMed

    Saad, Emad W; Wunsch, Donald C

    2007-01-01

    An important drawback of many artificial neural networks (ANN) is their lack of explanation capability [Andrews, R., Diederich, J., & Tickle, A. B. (1996). A survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8, 373-389]. This paper starts with a survey of algorithms which attempt to explain the ANN output. We then present HYPINV, a new explanation algorithm which relies on network inversion, i.e. calculating the ANN input which produces a desired output. HYPINV is a pedagogical algorithm that extracts rules in the form of hyperplanes. It is able to generate rules with arbitrarily desired fidelity, maintaining a fidelity-complexity tradeoff. To our knowledge, HYPINV is the only pedagogical rule extraction method that extracts hyperplane rules from continuous or binary attribute neural networks. Different network inversion techniques, involving gradient descent as well as an evolutionary algorithm, are presented. An information theoretic treatment of rule extraction is presented. HYPINV is applied to example synthetic problems, to a real aerospace problem, and compared with similar algorithms using benchmark problems.
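
    The core inversion step (gradient descent on the input of a fixed network until the output reaches a target value) can be sketched as below; the network here is a small randomly initialised stand-in rather than a trained model, and the full HYPINV rule-extraction pipeline is not reproduced.

    ```python
    import torch
    import torch.nn as nn

    # a small classifier standing in for the trained ANN to be explained
    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 1), nn.Sigmoid())

    def invert(net, target=0.5, steps=500, lr=0.1):
        """Gradient-descent inversion: find an input whose network output matches
        `target` (inputs with output near 0.5 lie close to the decision boundary,
        the raw material that hyperplane rules are fitted to)."""
        x = torch.zeros(1, 4, requires_grad=True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = ((net(x) - target) ** 2).sum()
            loss.backward()
            opt.step()
        return x.detach()

    x_boundary = invert(net)
    print("inverted input:", x_boundary.numpy(), "output:", float(net(x_boundary)))
    ```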

  10. Classifying multispectral data by neural networks

    NASA Technical Reports Server (NTRS)

    Telfer, Brian A.; Szu, Harold H.; Kiang, Richard K.

    1993-01-01

    Several energy functions for synthesizing neural networks are tested on 2-D synthetic data and on Landsat-4 Thematic Mapper data. These new energy functions, designed specifically for minimizing misclassification error, in some cases yield significant improvements in classification accuracy over the standard least mean squares energy function. In addition to operating on networks with one output unit per class, a new energy function is tested for binary encoded outputs, which result in smaller network sizes. The Thematic Mapper data (four bands were used) is classified on a single pixel basis, to provide a starting benchmark against which further improvements will be measured. Improvements are underway to make use of both subpixel and superpixel (i.e. contextual or neighborhood) information in the processing. For single pixel classification, the best neural network result is 78.7 percent, compared with 71.7 percent for a classical nearest neighbor classifier. The 78.7 percent result also improves on several earlier neural network results on this data.

  11. Parallel replica dynamics method for bistable stochastic reaction networks: Simulation and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Wang, Ting; Plecháč, Petr

    2017-12-01

    Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.
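
    A plain Gillespie simulation of the Schlögl model, the first of the three benchmarks named above, is sketched below with commonly used illustrative rate constants; it shows the bistable dynamics whose rare state switches the parallel replica machinery is designed to sample more efficiently.

    ```python
    import numpy as np

    def schlogl_ssa(x0=250, t_end=10.0, seed=0):
        """Gillespie stochastic simulation of the Schlögl model, a standard
        bistable reaction network (illustrative rate constants; plain SSA only
        rarely visits the switches between the two metastable states)."""
        c1, c2, c3, c4 = 3e-7, 1e-4, 1e-3, 3.5
        n1, n2 = 1e5, 2e5                             # buffered species B1, B2
        rng = np.random.default_rng(seed)
        t, x, traj = 0.0, x0, [(0.0, x0)]
        while t < t_end:
            a = np.array([c1 / 2 * n1 * x * (x - 1),      # B1 + 2X -> 3X
                          c2 / 6 * x * (x - 1) * (x - 2), # 3X -> B1 + 2X
                          c3 * n2,                        # B2 -> X
                          c4 * x])                        # X -> B2
            a0 = a.sum()
            if a0 == 0:
                break
            t += rng.exponential(1.0 / a0)                # time to next reaction
            x += (+1, -1, +1, -1)[rng.choice(4, p=a / a0)]
            traj.append((t, x))
        return np.array(traj)

    traj = schlogl_ssa()
    print("final copy number:", int(traj[-1, 1]))
    ```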

  12. Learning Universal Computations with Spikes

    PubMed Central

    Thalmeier, Dominik; Uhlmann, Marvin; Kappen, Hilbert J.; Memmesheimer, Raoul-Martin

    2016-01-01

    Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require previous building of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. Firstly, we derive constraints under which classes of spiking neural networks lend themselves as substrates of powerful general-purpose computing. The networks contain dendritic or synaptic nonlinearities and have a constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows the networks to learn even difficult benchmark tasks such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them. PMID:27309381

  13. Improved personalized recommendation based on a similarity network

    NASA Astrophysics Data System (ADS)

    Wang, Ximeng; Liu, Yun; Xiong, Fei

    2016-08-01

    A recommender system helps individual users find preferred items rapidly and has attracted extensive attention in recent years. Many successful recommendation algorithms are designed on bipartite networks, such as network-based inference or heat conduction. However, most of these algorithms define resource allocation as an average (uniform) allocation. That is not reasonable, because average allocation reflects neither user choice preference nor the influence between users, which leads to non-personalized recommendation results. We propose a personalized recommendation approach that combines the similarity function and bipartite network to generate a similarity network that improves the resource-allocation process. Our model introduces user influence into the recommender system and states that the user influence can make the resource-allocation process more reasonable. We use four different metrics to evaluate our algorithms for three benchmark data sets. Experimental results show that the improved recommendation on a similarity network can obtain better accuracy and diversity than some competing approaches.
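
    For reference, the uniform "average allocation" baseline that the paper improves on is the classical mass-diffusion (network-based inference) process on the user-item bipartite network; a small sketch with a made-up adjacency matrix:

    ```python
    import numpy as np

    def probs_scores(A):
        """Mass-diffusion (ProbS / network-based inference) recommendation scores
        on a user-item bipartite network. A is the users x items adjacency matrix.
        This is the uniform 'average allocation' baseline; the paper reweights the
        spreading process with a user-similarity network."""
        user_deg = np.maximum(A.sum(axis=1), 1)   # items per user
        item_deg = np.maximum(A.sum(axis=0), 1)   # users per item
        # item-to-item transfer matrix: item -> its users -> their items
        W = (A / user_deg[:, None]).T @ (A / item_deg[None, :])
        scores = A @ W.T                          # resource ending on each item, per user
        return np.where(A == 1, -np.inf, scores)  # never re-recommend collected items

    # hypothetical 3 users x 4 items collection matrix
    A = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 1, 1]], dtype=float)
    print(np.argmax(probs_scores(A), axis=1))     # top recommended item for each user
    ```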

  14. Stylized facts in social networks: Community-based static modeling

    NASA Astrophysics Data System (ADS)

    Jo, Hang-Hyun; Murase, Yohsuke; Török, János; Kertész, János; Kaski, Kimmo

    2018-06-01

    The past analyses of datasets of social networks have enabled us to make empirical findings of a number of aspects of human society, which are commonly featured as stylized facts of social networks, such as broad distributions of network quantities, existence of communities, assortative mixing, and intensity-topology correlations. Since the understanding of the structure of these complex social networks is far from complete, for deeper insight into human society more comprehensive datasets and modeling of the stylized facts are needed. Although the existing dynamical and static models can generate some stylized facts, here we take an alternative approach by devising a community-based static model with heterogeneous community sizes and larger communities having smaller link density and weight. With these few assumptions we are able to generate realistic social networks that show most stylized facts for a wide range of parameters, as demonstrated numerically and analytically. Since our community-based static model is simple to implement and easily scalable, it can be used as a reference system, benchmark, or testbed for further applications.

  15. Creating, generating and comparing random network models with NetworkRandomizer.

    PubMed

    Tosadori, Gabriele; Bestvina, Ivan; Spoto, Fausto; Laudanna, Carlo; Scardoni, Giovanni

    2016-01-01

    Biological networks are becoming a fundamental tool for the investigation of high-throughput data in several fields of biology and biotechnology. With the increasing amount of information, network-based models are gaining more and more interest and new techniques are required in order to mine the information and to validate the results. To fill the validation gap we present an app, for the Cytoscape platform, which aims at creating randomised networks and randomising existing, real networks. Since there is a lack of tools that allow performing such operations, our app aims at enabling researchers to exploit different, well known random network models that could be used as a benchmark for validating real, biological datasets. We also propose a novel methodology for creating random weighted networks, i.e. the multiplication algorithm, starting from real, quantitative data. Finally, the app provides a statistical tool that compares real versus randomly computed attributes, in order to validate the numerical findings. In summary, our app aims at creating a standardised methodology for the validation of the results in the context of the Cytoscape platform.
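
    NetworkRandomizer itself is a Cytoscape app, but the validation idea can be sketched outside Cytoscape: compare a network statistic on the real graph against degree-preserving random rewirings and report an empirical z-score. The graph, statistic, and number of randomisations below are illustrative.

    ```python
    import networkx as nx
    import numpy as np

    def randomization_test(G, statistic, n_random=100, seed=0):
        """Compare a network statistic on G against degree-preserving random
        rewirings (one common null model among those such tools offer).
        Returns the real value and an empirical z-score."""
        rng = np.random.default_rng(seed)
        real = statistic(G)
        null = []
        for _ in range(n_random):
            R = G.copy()
            nx.double_edge_swap(R, nswap=4 * R.number_of_edges(),
                                max_tries=10**5, seed=int(rng.integers(1 << 30)))
            null.append(statistic(R))
        null = np.array(null)
        return real, (real - null.mean()) / null.std()

    G = nx.karate_club_graph()
    real, z = randomization_test(G, nx.average_clustering)
    print(f"clustering = {real:.3f}, z-score vs degree-preserving null = {z:.2f}")
    ```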

  16. An artificial bioindicator system for network intrusion detection.

    PubMed

    Blum, Christian; Lozano, José A; Davidson, Pedro Pinacho

    An artificial bioindicator system is developed in order to solve a network intrusion detection problem. The system, inspired by an ecological approach to biological immune systems, evolves a population of agents that learn to survive in their environment. An adaptation process allows the transformation of the agent population into a bioindicator that is capable of reacting to system anomalies. Two characteristics stand out in our proposal. On the one hand, it is able to discover new, previously unseen attacks, and on the other hand, contrary to most of the existing systems for network intrusion detection, it does not need any previous training. We experimentally compare our proposal with three state-of-the-art algorithms and show that it outperforms the competing approaches on widely used benchmark data.

  17. Performance Monitoring of Distributed Data Processing Systems

    NASA Technical Reports Server (NTRS)

    Ojha, Anand K.

    2000-01-01

    Test and checkout systems are essential components in ensuring safety and reliability of aircraft and related systems for space missions. A variety of systems, developed over several years, are in use at the NASA/KSC. Many of these systems are configured as distributed data processing systems with the functionality spread over several multiprocessor nodes interconnected through networks. To be cost-effective, a system should take the least amount of resource and perform a given testing task in the least amount of time. There are two aspects of performance evaluation: monitoring and benchmarking. While monitoring is valuable to system administrators in operating and maintaining systems, benchmarking is important in designing and upgrading computer-based systems. These two aspects of performance evaluation are the foci of this project. This paper first discusses various issues related to software, hardware, and hybrid performance monitoring as applicable to distributed systems, and specifically to the TCMS (Test Control and Monitoring System). Next, a comparison of several probing instructions is made to show that the hybrid monitoring technique developed by NIST (National Institute of Standards and Technology) is the least intrusive and takes only one-fourth of the time taken by software monitoring probes. In the rest of the paper, issues related to benchmarking a distributed system are discussed, and finally a prescription for developing a micro-benchmark for the TCMS is provided.

  18. Building a glaucoma interaction network using a text mining approach.

    PubMed

    Soliman, Maha; Nasraoui, Olfa; Cooper, Nigel G F

    2016-01-01

    The volume of biomedical literature and its underlying knowledge base is rapidly expanding, making it beyond the ability of a single human being to read through all the literature. Several automated methods have been developed to help make sense of this dilemma. The present study reports on the results of a text mining approach to extract gene interactions from the data warehouse of published experimental results which are then used to benchmark an interaction network associated with glaucoma. To the best of our knowledge, there is, as yet, no glaucoma interaction network derived solely from text mining approaches. The presence of such a network could provide a useful summative knowledge base to complement other forms of clinical information related to this disease. A glaucoma corpus was constructed from PubMed Central and a text mining approach was applied to extract genes and their relations from this corpus. The extracted relations between genes were checked using reference interaction databases and classified generally as known or new relations. The extracted genes and relations were then used to construct a glaucoma interaction network. Analysis of the resulting network indicated that it bears the characteristics of a small world interaction network. Our analysis showed the presence of seven glaucoma linked genes that defined the network modularity. A web-based system for browsing and visualizing the extracted glaucoma related interaction networks is made available at http://neurogene.spd.louisville.edu/GlaucomaINViewer/Form1.aspx. This study has reported the first version of a glaucoma interaction network using a text mining approach. The power of such an approach is in its ability to cover a wide range of glaucoma related studies published over many years. Hence, a bigger picture of the disease can be established. To the best of our knowledge, this is the first glaucoma interaction network to summarize the known literature. The major findings were a set of relations that could not be found in existing interaction databases and that were found to be new, in addition to a smaller subnetwork consisting of interconnected clusters of seven glaucoma genes. Future improvements can be applied towards obtaining a better version of this network.

  19. Delay Tolerant Networking - Bundle Protocol Simulation

    NASA Technical Reports Server (NTRS)

    SeGui, John; Jenning, Esther

    2006-01-01

    In this paper, we report on the addition of MACHETE models needed to support DTN, namely the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance. We demonstrate the use of the DTN protocol and discuss statistics gathered concerning the total time needed to simulate numerous bundle transmissions.

  20. Fusion and Sense Making of Heterogeneous Sensor Network and Other Sources

    DTIC Science & Technology

    2017-03-16

    multimodal fusion framework that uses both training data and web resources for scene classification, the experimental results on the benchmark datasets...show that the proposed text-aided scene classification framework could significantly improve classification performance. Experimental results also show...human whose adaptability is achieved by reliability-dependent weighting of different sensory modalities. Experimental results show that the proposed

  1. Predicting drug-target interactions by dual-network integrated logistic matrix factorization

    NASA Astrophysics Data System (ADS)

    Hao, Ming; Bryant, Stephen H.; Wang, Yanli

    2017-01-01

    In this work, we propose a dual-network integrated logistic matrix factorization (DNILMF) algorithm to predict potential drug-target interactions (DTI). The prediction procedure consists of four steps: (1) inferring new drug/target profiles and constructing profile kernel matrix; (2) diffusing drug profile kernel matrix with drug structure kernel matrix; (3) diffusing target profile kernel matrix with target sequence kernel matrix; and (4) building DNILMF model and smoothing new drug/target predictions based on their neighbors. We compare our algorithm with the state-of-the-art method based on the benchmark dataset. Results indicate that the DNILMF algorithm outperforms the previously reported approaches in terms of AUPR (area under precision-recall curve) and AUC (area under curve of receiver operating characteristic) based on the 5 trials of 10-fold cross-validation. We conclude that the performance improvement depends on not only the proposed objective function, but also the used nonlinear diffusion technique which is important but under studied in the DTI prediction field. In addition, we also compile a new DTI dataset for increasing the diversity of currently available benchmark datasets. The top prediction results for the new dataset are confirmed by experimental studies or supported by other computational research.

  2. Improving Protein Fold Recognition by Deep Learning Networks.

    PubMed

    Jo, Taeho; Hou, Jie; Eickholt, Jesse; Cheng, Jianlin

    2015-12-04

    For accurate recognition of protein folds, a deep learning network method (DN-Fold) was developed to predict if a given query-template protein pair belongs to the same structural fold. The input used stemmed from the protein sequence and structural features extracted from the protein pair. We evaluated the performance of DN-Fold along with 18 different methods on Lindahl's benchmark dataset and on a large benchmark set extracted from SCOP 1.75 consisting of about one million protein pairs, at three different levels of fold recognition (i.e., protein family, superfamily, and fold) depending on the evolutionary distance between protein sequences. The correct recognition rate of ensembled DN-Fold for Top 1 predictions is 84.5%, 61.5%, and 33.6% and for Top 5 is 91.2%, 76.5%, and 60.7% at family, superfamily, and fold levels, respectively. We also evaluated the performance of single DN-Fold (DN-FoldS), which showed results comparable to the ensemble DN-Fold at the family and superfamily levels. Finally, we extended the binary classification problem of fold recognition to a real-value regression task, which also shows promising performance. DN-Fold is freely available through a web server at http://iris.rnet.missouri.edu/dnfold.

  3. Potential Release Site Sediment Concentrations Correlated to Storm Water Station Runoff through GIS Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C.T. McLean

    2005-06-01

    This research examined the relationship between sediment sample data taken at Potential Release Sites (PRSs) and storm water samples taken at selected sites in and around Los Alamos National Laboratory (LANL). The PRSs had been evaluated for erosion potential and a matrix scoring system implemented. It was assumed that there would be a stronger relationship between the high erosion PRSs and the storm water samples. To establish the relationship, the research was broken into two areas. The first area was raster-based modeling, and the second area was data analysis utilizing the raster-based modeling results and the sediment and storm water sample results. Two geodatabases were created utilizing raster modeling functions and the Arc Hydro program. The geodatabase created using only Arc Hydro functions contains very fine catchment drainage areas in association with the geometric network and can be used for future contaminant tracking. The second geodatabase contains sub-watersheds for all storm water stations used in the study along with a geometric network. The second area of the study focused on data analysis. The analytical sediment data table was joined to the PRSs spatial data in ArcMap. All PRSs and PRSs with high erosion potential were joined separately to create two datasets for each of 14 analytes. Only the PRSs above the background value were retained. The storm water station spatial data were joined to the table of analyte values that were either greater than the National Pollutant Discharge Elimination System (NPDES) Multi-Sector General Permit (MSGP) benchmark value, or the Department of Energy (DOE) Drinking Water Defined Contribution Guideline (DWDCG). Only the storm water stations were retained that had sample values greater than the NPDES MSGP benchmark value or the DOE DWDCG. Separate maps were created for each analyte showing the sub-watersheds, the PRSs over background, and the storm water stations greater than the NPDES MSGP benchmark value or the DOE DWDCG. Tables were then created for each analyte that listed the PRSs average value by storm water station allowing a tabular view of the mapped data. The final table that was created listed the number of high erosion PRSs and regular PRSs over background values that were contained in each watershed. An overall relationship between the high erosion PRSs or the regular PRSs and the storm water stations was not identified through the methods used in this research. However, the Arc Hydro data models created for this analysis were used to track possible sources of contamination found through sampling at the storm water gaging stations. This geometric network tracing was used to identify possible relationships between the storm water stations and the PRSs. The methods outlined for the geometric network tracing could be used to find other relationships between the sites. A cursory statistical analysis was performed which could be expanded and applied to the data sets generated during this research to establish a broader relationship between the PRSs and storm water stations.

  4. Analysis of Students' Assessments in Middle School Curriculum Materials: Aiming Precisely at Benchmarks and Standards.

    ERIC Educational Resources Information Center

    Stern, Luli; Ahlgren, Andrew

    2002-01-01

    Project 2061 of the American Association for the Advancement of Science (AAAS) developed and field-tested a procedure for analyzing curriculum materials, including assessments, in terms of contribution to the attainment of benchmarks and standards. Using this procedure, Project 2061 produced a database of reports on nine science middle school…

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, J; Dossa, D; Gokhale, M

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only processing to GPU-accelerated processing. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order of magnitude speedup over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.

  6. Extraction of tidal channel networks from airborne scanning laser altimetry

    NASA Astrophysics Data System (ADS)

    Mason, David C.; Scott, Tania R.; Wang, Hai-Jing

    Tidal channel networks are important features of the inter-tidal zone, and play a key role in tidal propagation and in the evolution of salt marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections. The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. This paper describes a semi-automatic technique developed to extract networks from high-resolution LiDAR data of the inter-tidal zone. A multi-level knowledge-based approach has been implemented, whereby low-level algorithms first extract channel fragments based mainly on image properties, then a high-level processing stage improves the network using domain knowledge. The approach adopted at low level uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels. The higher level processing includes a channel repair mechanism. The algorithm may be extended to extract networks from aerial photographs as well as LiDAR data. Its performance is illustrated using LiDAR data of two study sites, the River Ems, Germany, and the Venice Lagoon. For the River Ems data, the error of omission for the automatic channel extractor is 26%, partly because numerous small channels falling below the edge threshold are lost, though these are less than 10 cm deep and unlikely to be hydraulically significant. The error of commission is lower, at 11%. For the Venice Lagoon data, the error of omission is 14%, but the error of commission is 42%, due partly to the difficulty of interpreting channels in these natural scenes. As a benchmark, previous work has shown that this type of algorithm specifically designed for extracting tidal networks from LiDAR data is able to achieve substantially improved results compared with those obtained using standard algorithms for drainage network extraction from Digital Terrain Models.

  7. a Protocol for High-Accuracy Theoretical Thermochemistry

    NASA Astrophysics Data System (ADS)

    Welch, Bradley; Dawes, Richard

    2017-06-01

    Theoretical studies of spectroscopy and reaction dynamics, including the necessary development of potential energy surfaces, rely on accurate thermochemical information. The Active Thermochemical Tables (ATcT) approach by Ruscic^{1} incorporates data for a large number of chemical species from a variety of sources (both experimental and theoretical) and derives a self-consistent network capable of making extremely accurate estimates of quantities such as temperature-dependent enthalpies of formation. The network provides rigorous uncertainties, and since the values don't rely on a single measurement or calculation, the provenance of each quantity is also obtained. To expand and improve the network, it is desirable to have a reliable protocol such as the HEAT approach^{2} for calculating accurate theoretical data. Here we present and benchmark an approach based on explicitly-correlated coupled-cluster theory and vibrational perturbation theory (VPT2). Methyldioxy and Methyl Hydroperoxide are important and well-characterized species in combustion processes and begin the family of (ethyl-, propyl-based, etc.) similar compounds (much less is known about the larger members). Accurate anharmonic frequencies are essential to accurately describe even the 0 K enthalpies of formation, but are especially important for finite temperature studies. Here we benchmark the spectroscopic and thermochemical accuracy of the approach, comparing with available data for the smallest systems, and comment on the outlook for larger systems that are less well-known and characterized. ^{1}B. Ruscic, Active Thermochemical Tables (ATcT) values based on ver. 1.118 of the Thermochemical Network (2015); available at ATcT.anl.gov ^{2}A. Tajti, P. G. Szalay, A. G. Császár, M. Kállay, J. Gauss, E. F. Valeev, B. A. Flowers, J. Vázquez, and J. F. Stanton. JCP 121, (2004): 11599.

  8. A deep learning framework for supporting the classification of breast lesions in ultrasound images.

    PubMed

    Han, Seokmin; Kang, Ho-Kyung; Jeong, Ja-Yeon; Park, Moon-Ho; Kim, Wonsik; Bang, Won-Chul; Seong, Yeong-Kyeong

    2017-09-15

    In this research, we exploited the deep learning framework to differentiate the distinctive types of lesions and nodules in the breast acquired with ultrasound imaging. A biopsy-proven benchmarking dataset was built from 5151 patient cases containing a total of 7408 ultrasound breast images, representative of semi-automatically segmented lesions associated with masses. The dataset comprised 4254 benign and 3154 malignant lesions. The developed method includes histogram equalization, image cropping and margin augmentation. The GoogLeNet convolutional neural network was trained on the database to differentiate benign and malignant tumors. The networks were trained on the data with augmentation and the data without augmentation. Both of them showed an area under the curve of over 0.9. The networks showed an accuracy of about 0.9 (90%), a sensitivity of 0.86 and a specificity of 0.96. Although target regions of interest (ROIs) were selected by radiologists, meaning that radiologists still have to point out the location of the ROI, the classification of malignant lesions showed promising results. If this method is used by radiologists in clinical situations, it can classify malignant lesions in a short time and support the diagnosis of radiologists in discriminating malignant lesions. Therefore, the proposed method can work in tandem with human radiologists to improve performance, which is a fundamental purpose of computer-aided diagnosis.
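
    A sketch of the transfer-learning setup (a GoogLeNet backbone with a two-class head) is given below using torchvision; the random tensors stand in for preprocessed ultrasound ROIs, and the paper's histogram equalization, cropping, and margin augmentation steps are not reproduced.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    # GoogLeNet backbone repurposed for two classes (benign vs. malignant);
    # no pretrained weights are loaded in this sketch
    model = models.googlenet(aux_logits=False, init_weights=True)
    model.fc = nn.Linear(model.fc.in_features, 2)

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    # stand-in batch: 8 grayscale ultrasound ROIs replicated to 3 channels
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, 2, (8,))

    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print("training loss on the stand-in batch:", float(loss))
    ```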

  9. Prediction of Body Fluids where Proteins are Secreted into Based on Protein Interaction Network

    PubMed Central

    Hu, Le-Le; Huang, Tao; Cai, Yu-Dong; Chou, Kuo-Chen

    2011-01-01

    Determining the body fluids into which proteins can be secreted is important for protein function annotation and disease biomarker discovery. In this study, we developed a network-based method to predict which kinds of body fluids human proteins can be secreted into. For a newly constructed benchmark dataset consisting of 529 human secreted proteins, the prediction accuracy for the most likely body fluid location predicted by our method via the jackknife test was 79.02%, significantly higher than the success rate of a random guess (29.36%). The likelihood that the top four predicted body fluids contain all the true body fluids into which a protein can be secreted is 62.94%. Our method was further demonstrated on two independent datasets: one contains 57 proteins that can be secreted into blood, while the other contains 61 proteins that can be secreted into plasma/serum and are possible biomarkers associated with various cancers. Of the 57 proteins in the first dataset, 55 were correctly predicted as blood-secreted proteins. Of the 61 proteins in the second dataset, 58 were predicted to be most likely found in plasma/serum. These encouraging results indicate that the network-based prediction method is quite promising. It is anticipated that the method will benefit the relevant areas of both basic research and drug development. PMID:21829572

  10. Resolving anatomical and functional structure in human brain organization: identifying mesoscale organization in weighted network representations.

    PubMed

    Lohse, Christian; Bassett, Danielle S; Lim, Kelvin O; Carlson, Jean M

    2014-10-01

    Human brain anatomy and function display a combination of modular and hierarchical organization, suggesting the importance of both cohesive structures and variable resolutions in the facilitation of healthy cognitive processes. However, tools to simultaneously probe these features of brain architecture require further development. We propose and apply a set of methods to extract cohesive structures in network representations of brain connectivity using multi-resolution techniques. We employ a combination of soft thresholding, windowed thresholding, and resolution in community detection that enables us to identify and isolate structures associated with different edge weights. One such mesoscale structure is bipartivity, which quantifies the extent to which the brain is divided into two partitions with high connectivity between partitions and low connectivity within partitions. A second, complementary mesoscale structure is modularity, which quantifies the extent to which the brain is divided into multiple communities with strong connectivity within each community and weak connectivity between communities. Our methods lead to multi-resolution curves of these network diagnostics over a range of spatial, geometric, and structural scales. For statistical comparison, we contrast our results with those obtained for several benchmark null models. Our work demonstrates that multi-resolution diagnostic curves capture complex organizational profiles in weighted graphs. We apply these methods to the identification of resolution-specific characteristics of healthy weighted graph architecture and altered connectivity profiles in psychiatric disease.
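
    As a rough illustration of the thresholding idea, the sketch below sweeps a hard threshold over a weighted adjacency matrix and records the modularity of the partition found at each threshold, yielding a multi-resolution diagnostic curve. It is a simplified stand-in for the paper's methods (which also include soft thresholding, bipartivity, and explicit resolution parameters); the community method and threshold grid are assumptions.

        import numpy as np
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities, modularity

        def modularity_curve(W, thresholds):
            """Modularity of the detected partition after keeping only edges with weight > t."""
            curve = []
            for t in thresholds:
                G = nx.from_numpy_array(np.where(W > t, W, 0.0))
                if G.number_of_edges() == 0:
                    curve.append(0.0)
                    continue
                parts = greedy_modularity_communities(G, weight="weight")
                curve.append(modularity(G, parts, weight="weight"))
            return curve

        W = np.abs(np.random.rand(30, 30)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
        print(modularity_curve(W, thresholds=np.linspace(0.1, 0.9, 5)))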

  11. A deep learning framework for supporting the classification of breast lesions in ultrasound images

    NASA Astrophysics Data System (ADS)

    Han, Seokmin; Kang, Ho-Kyung; Jeong, Ja-Yeon; Park, Moon-Ho; Kim, Wonsik; Bang, Won-Chul; Seong, Yeong-Kyeong

    2017-10-01

    In this research, we exploited a deep learning framework to differentiate the distinctive types of lesions and nodules in breast ultrasound images. A biopsy-proven benchmarking dataset was built from 5151 patient cases containing a total of 7408 ultrasound breast images of semi-automatically segmented lesions associated with masses. The dataset comprised 4254 benign and 3154 malignant lesions. The developed method includes histogram equalization, image cropping, and margin augmentation. The GoogLeNet convolutional neural network was trained on the database to differentiate benign from malignant tumors. Networks were trained on the data both with and without augmentation, and both showed an area under the curve of over 0.9. The networks showed an accuracy of about 0.9 (90%), a sensitivity of 0.86, and a specificity of 0.96. Although target regions of interest (ROIs) were selected by radiologists, meaning that radiologists still have to point out the location of the ROI, the classification of malignant lesions showed promising results. If used by radiologists in clinical practice, this method could classify malignant lesions in a short time and support radiologists in discriminating malignant lesions. Therefore, the proposed method can work in tandem with human radiologists to improve performance, which is a fundamental purpose of computer-aided diagnosis.

  12. Filtering Gene Ontology semantic similarity for identifying protein complexes in large protein interaction networks.

    PubMed

    Wang, Jian; Xie, Dong; Lin, Hongfei; Yang, Zhihao; Zhang, Yijia

    2012-06-21

    The importance of protein complexes in many biological processes is well recognized, and various computational approaches have been developed to identify complexes from protein-protein interaction (PPI) networks. However, the high false-positive rate of PPIs makes complex identification challenging. In this study, a protein semantic similarity measure is proposed, based on the ontology structure of Gene Ontology (GO) terms and GO annotations, to estimate the reliability of interactions in PPI networks. Interaction pairs with low GO semantic similarity are removed from the network as unreliable interactions. Then, a cluster-expanding algorithm is used to detect complexes with core-attachment structure in the filtered network. Our method is applied to three different yeast PPI networks. The effectiveness of our method is examined on two benchmark complex datasets. Experimental results show that our method performed better than other state-of-the-art approaches on most evaluation metrics. The method detects protein complexes from large-scale PPI networks by filtering on GO semantic similarity. Removing interactions with low GO similarity significantly improves the performance of complex identification. The expanding strategy is also effective in identifying attachment proteins of complexes.
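
    The filtering step can be pictured as below: given a precomputed GO semantic similarity for each interacting pair, interactions below a cutoff are dropped before complex detection. The similarity values, cutoff, and data structures here are assumptions for illustration; the paper defines its own similarity measure and a core-attachment cluster-expanding algorithm on the filtered network.

        import networkx as nx

        def filter_ppi_by_go(ppi_edges, go_sim, cutoff=0.4):
            """Keep only interactions whose GO semantic similarity reaches the cutoff.
            go_sim maps frozenset({u, v}) -> similarity in [0, 1] (assumed precomputed)."""
            G = nx.Graph()
            for u, v in ppi_edges:
                if go_sim.get(frozenset((u, v)), 0.0) >= cutoff:
                    G.add_edge(u, v)
            return G

        edges = [("A", "B"), ("B", "C"), ("C", "D")]
        sim = {frozenset(("A", "B")): 0.9, frozenset(("B", "C")): 0.1, frozenset(("C", "D")): 0.7}
        print(filter_ppi_by_go(edges, sim).edges())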

  13. Evaluation of Graph Pattern Matching Workloads in Graph Analysis Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Seokyong; Lee, Sangkeun; Lim, Seung-Hwan

    2016-01-01

    Graph analysis has emerged as a powerful method for data scientists to represent, integrate, query, and explore heterogeneous data sources. As a result, graph data management and mining became a popular area of research, and led to the development of a plethora of systems in recent years. Unfortunately, the number of emerging graph analysis systems and the wide range of applications, coupled with a lack of apples-to-apples comparisons, make it difficult to understand the trade-offs between different systems and the graph operations for which they are designed. A fair comparison of these systems is a challenging task for the following reasons: multiple data models, non-standardized serialization formats, various query interfaces to users, and the diverse environments in which they operate. To address these key challenges, in this paper we present a new benchmark suite by extending the Lehigh University Benchmark (LUBM) to cover the most common capabilities of various graph analysis systems. We provide the design process of the benchmark, which generalizes the workflow for data scientists to conduct the desired graph analysis on different graph analysis systems. Equipped with this extended benchmark suite, we present a performance comparison of nine subgraph pattern retrieval operations over six graph analysis systems, namely NetworkX, Neo4j, Jena, Titan, GraphX, and uRiKA. Through the proposed benchmark suite, this study reveals both quantitative and qualitative findings in (1) implications in loading data into each system; (2) challenges in describing graph patterns for each query interface; and (3) different sensitivity of each system to query selectivity. We envision that this study will pave the road for: (i) data scientists to select the suitable graph analysis systems, and (ii) data management system designers to advance graph analysis systems.

  14. A community resource benchmarking predictions of peptide binding to MHC-I molecules.

    PubMed

    Peters, Bjoern; Bui, Huynh-Hoa; Frankild, Sune; Nielson, Morten; Lundegaard, Claus; Kostem, Emrah; Basch, Derek; Lamberth, Kasper; Harndahl, Mikkel; Fleri, Ward; Wilson, Stephen S; Sidney, John; Lund, Ole; Buus, Soren; Sette, Alessandro

    2006-06-09

    Recognition of peptides bound to major histocompatibility complex (MHC) class I molecules by T lymphocytes is an essential part of immune surveillance. Each MHC allele has a characteristic peptide binding preference, which can be captured in prediction algorithms, allowing for the rapid scan of entire pathogen proteomes for peptides likely to bind MHC. Here we make public a large set of 48,828 quantitative peptide-binding affinity measurements relating to 48 different mouse, human, macaque, and chimpanzee MHC class I alleles. We use these data to establish a set of benchmark predictions with one neural network method and two matrix-based prediction methods extensively utilized in our groups. In general, the neural network outperforms the matrix-based predictions, mainly due to its ability to generalize even from a small amount of data. We also retrieved predictions from tools publicly available on the internet. While differences in the data used to generate these predictions hamper direct comparisons, we do conclude that tools based on combinatorial peptide libraries perform remarkably well. The transparent prediction evaluation on this dataset provides tool developers with a benchmark for comparison of newly developed prediction methods. In addition, to generate and evaluate our own prediction methods, we have established an easily extensible web-based prediction framework that allows automated side-by-side comparisons of prediction methods implemented by experts. This is an advance over the current practice of tool developers having to generate reference predictions themselves, which can lead to underestimating the performance of prediction methods they are not as familiar with as their own. The overall goal of this effort is to provide a transparent prediction evaluation allowing bioinformaticians to identify promising features of prediction methods and providing guidance to immunologists regarding the reliability of prediction tools.
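
    A matrix-based predictor of the kind benchmarked here scores a peptide by summing position-specific contributions from a scoring matrix. The sketch below is generic: the 9 x 20 matrix is a random placeholder and the peptide is hypothetical, not data from the study.

        import numpy as np

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
        INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

        def score_peptide(peptide, pssm):
            """Sum of position-specific scores for a 9-mer against a 9 x 20 matrix."""
            return sum(pssm[pos, INDEX[aa]] for pos, aa in enumerate(peptide))

        pssm = np.random.rand(9, 20)             # placeholder scoring matrix
        print(score_peptide("SIINFEKLA", pssm))  # hypothetical 9-mer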

  15. Development of Benchmark Examples for Quasi-Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for Abaqus/Standard. The example is based on a finite element model of a Double-Cantilever Beam specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.
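
    The cyclic part of such a benchmark boils down to predicting the number of cycles needed to advance the delamination by each growth increment. The sketch below uses a generic Paris-type power law for that relationship; the exponent, coefficient, and increment are illustrative numbers, not the calibrated inputs of the benchmark example.

        def cycles_for_increment(G_max, G_Ic, B=1.0e-3, m=6.0, da=0.1):
            """Cycles to grow the delamination by da (mm), assuming a Paris-type law
            da/dN = B * (G_max / G_Ic)**m with illustrative constants B and m."""
            rate = B * (G_max / G_Ic) ** m   # growth rate, mm per cycle
            return da / rate

        # Example: cyclic energy release rate at 60% of the quasi-static toughness.
        print(cycles_for_increment(G_max=0.6, G_Ic=1.0))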

  16. Development of Benchmark Examples for Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Kruger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.

  17. Development and Application of Benchmark Examples for Mode II Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.

  18. Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS

    NASA Technical Reports Server (NTRS)

    Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    We present views and analysis of the execution of several PVM (Parallel Virtual Machine) codes for Computational Fluid Dynamics on a network of Sparcstations, including: (1) NAS Parallel Benchmarks CG and MG; (2) a multi-partitioning algorithm for NAS Parallel Benchmark SP; and (3) an overset grid flowsolver. These views and analysis were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We describe the architecture, operation, and application of AIMS. The AIMS toolkit contains: (1) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (2) Monitor, a library of runtime trace-collection routines; (3) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (4) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran 77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses X11R5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (1) the impact of long message latencies; (2) the impact of multiprogramming overheads and associated load imbalance; (3) cache and virtual-memory effects; and (4) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs. Some of the features planned for the near future include: (1) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (2) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.
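
    The constant-skew compensation mentioned above can be illustrated with a toy adjustment: estimate a constant offset between two workstation clocks and shift one trace's timestamps so that no message appears to arrive before it was sent. This is a simplified stand-in for AIMS' parent-child calibration; the function names and data layout are invented for the example.

        def min_offset(send_times, recv_times):
            """Smallest constant shift of the receiver's clock that removes
            'messages going backwards in time' (receive earlier than send)."""
            return max(0.0, max(s - r for s, r in zip(send_times, recv_times)))

        def compensate(recv_times, offset):
            """Apply a constant-skew (zero-drift) correction to receive timestamps."""
            return [t + offset for t in recv_times]

        sends = [0.10, 0.25, 0.40]
        recvs = [0.08, 0.27, 0.39]     # two messages appear to go back in time
        off = min_offset(sends, recvs)
        print(off, compensate(recvs, off))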

  19. A Comparative Analysis of Community Detection Algorithms on Artificial Networks

    PubMed Central

    Yang, Zhao; Algesheimer, René; Tessone, Claudio J.

    2016-01-01

    Many community detection algorithms have been developed to uncover the mesoscopic properties of complex networks. However, how good an algorithm is, in terms of accuracy and computing time, remains an open question. Testing algorithms on real-world networks has certain restrictions which make their insights potentially biased: the networks are usually small, and the underlying communities are not defined objectively. In this study, we employ the Lancichinetti-Fortunato-Radicchi benchmark graph to test eight state-of-the-art algorithms. We quantify the accuracy using complementary measures and the algorithms' computing time. Based on simple network properties and the aforementioned results, we provide guidelines that help to choose the most adequate community detection algorithm for a given network. Moreover, these rules allow uncovering limitations in the use of specific algorithms given macroscopic network properties. Our contribution is threefold: firstly, we provide actual techniques to determine which is the most suited algorithm in most circumstances based on observable properties of the network under consideration. Secondly, we use the mixing parameter as an easily measurable indicator for finding the ranges of reliability of the different algorithms. Finally, we study the dependency on network size, focusing on both the algorithms' predicting power and the effective computing time. PMID:27476470
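
    The evaluation protocol can be reproduced in miniature with standard libraries: generate a Lancichinetti-Fortunato-Radicchi (LFR) graph with planted communities, run a candidate algorithm, and score the recovered partition against the planted one (normalized mutual information is used here as one complementary measure). The parameters and the choice of the Louvain method are illustrative, not the paper's settings.

        import networkx as nx
        from sklearn.metrics import normalized_mutual_info_score

        # LFR benchmark graph with planted communities (illustrative parameters).
        G = nx.LFR_benchmark_graph(250, 3, 1.5, 0.1, average_degree=5,
                                   min_community=20, seed=10)
        truth = {v: min(G.nodes[v]["community"]) for v in G}

        # Candidate algorithm to be benchmarked (Louvain, as an example).
        found = nx.algorithms.community.louvain_communities(G, seed=10)
        pred = {v: i for i, c in enumerate(found) for v in c}

        nodes = sorted(G)
        print(normalized_mutual_info_score([truth[v] for v in nodes],
                                           [pred[v] for v in nodes]))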

  20. A new mutually reinforcing network node and link ranking algorithm

    PubMed Central

    Wang, Zhenghua; Dueñas-Osorio, Leonardo; Padgett, Jamie E.

    2015-01-01

    This study proposes a novel Normalized Wide network Ranking algorithm (NWRank) that has the advantage of ranking nodes and links of a network simultaneously. This algorithm combines the mutual reinforcement feature of Hypertext Induced Topic Selection (HITS) and the weight normalization feature of PageRank. Relative weights are assigned to links based on the degree of the adjacent neighbors and the Betweenness Centrality instead of assigning the same weight to every link as assumed in PageRank. Numerical experiment results show that NWRank performs consistently better than HITS, PageRank, eigenvector centrality, and edge betweenness from the perspective of network connectivity and approximate network flow, which is also supported by comparisons with the expensive N-1 benchmark removal criteria based on network efficiency. Furthermore, it can avoid some problems, such as the Tightly Knit Community effect, which exists in HITS. NWRank provides a new inexpensive way to rank nodes and links of a network, which has practical applications, particularly to prioritize resource allocation for upgrade of hierarchical and distributed networks, as well as to support decision making in the design of networks, where node and link importance depend on a balance of local and global integrity. PMID:26492958

  1. Interconnect Performance Evaluation of SGI Altix 3700 BX2, Cray X1, Cray Opteron Cluster, and Dell PowerEdge

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod; Saini, Subbash; Ciotti, Robert

    2006-01-01

    We study the performance of inter-process communication on four high-speed multiprocessor systems using a set of communication benchmarks. The goal is to identify certain limiting factors and bottlenecks with the interconnects of these systems as well as to compare these interconnects. We measured network bandwidth using different numbers of communicating processors and communication patterns, such as point-to-point communication, collective communication, and dense communication patterns. The four platforms are: a 512-processor SGI Altix 3700 BX2 shared-memory machine with 3.2 GB/s links; a 64-processor (single-streaming) Cray X1 shared-memory machine with 32 1.6 GB/s links; a 128-processor Cray Opteron cluster using a Myrinet network; and a 1280-node Dell PowerEdge cluster with an InfiniBand network. Our results show the impact of the network bandwidth and topology on the overall performance of each interconnect.
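
    Point-to-point bandwidth of the kind measured here is typically obtained with a ping-pong test: two ranks exchange a fixed-size buffer repeatedly and the achieved transfer rate is computed from the elapsed time. A minimal mpi4py sketch is below (the message size and repetition count are arbitrary choices, and this is not the benchmark suite used in the study); run it with two MPI processes, e.g. mpirun -n 2 python pingpong.py.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        nbytes = 1 << 20                      # 1 MiB message
        buf = np.zeros(nbytes, dtype=np.uint8)
        reps = 100

        comm.Barrier()
        t0 = MPI.Wtime()
        for _ in range(reps):
            if rank == 0:
                comm.Send(buf, dest=1); comm.Recv(buf, source=1)
            elif rank == 1:
                comm.Recv(buf, source=0); comm.Send(buf, dest=0)
        elapsed = MPI.Wtime() - t0
        if rank == 0:
            print("bandwidth: %.1f MB/s" % (2 * reps * nbytes / elapsed / 1e6))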

  2. Neuromorphic photonic networks using silicon photonic weight banks.

    PubMed

    Tait, Alexander N; de Lima, Thomas Ferreira; Zhou, Ellen; Wu, Allie X; Nahmias, Mitchell A; Shastri, Bhavin J; Prucnal, Paul R

    2017-08-07

    Photonic systems for high-performance information processing have attracted renewed interest. Neuromorphic silicon photonics has the potential to integrate processing functions that vastly exceed the capabilities of electronics. We report the first observations of a recurrent silicon photonic neural network, in which connections are configured by microring weight banks. A mathematical isomorphism between the silicon photonic circuit and a continuous neural network model is demonstrated through dynamical bifurcation analysis. Exploiting this isomorphism, a simulated 24-node silicon photonic neural network is programmed using a "neural compiler" to solve a differential system emulation task. A 294-fold acceleration against a conventional benchmark is predicted. We also propose and derive a power consumption analysis for modulator-class neurons that, as opposed to laser-class neurons, are compatible with silicon photonic platforms. At increased scale, neuromorphic silicon photonics could access new regimes of ultrafast information processing for radio, control, and scientific computing.

  3. A Study of Information Content in the U.S. Television Commercials: Has It Become Less Informative but More Creative?

    ERIC Educational Resources Information Center

    Ng, Daniel; Supaporn, Potibut

    A study investigated the trend of current U.S. television commercial informativeness by comparing the results with Alan Resnik and Bruce Stern's previous benchmark study conducted in 1977. A systematic random sampling procedure was used to select viewing dates and times of commercials from the three national networks. Ultimately, a total of 550…

  4. Cooperative strategies for forest science management and leadership in an increasingly complex and globalized world: Proceedings of a workshop; 23- 26 August 1998; Quebec City, Quebec, Canada

    Treesearch

    Lane G. Eskew; David R. DeYoe; Denver P. Burns; Jean-Claude Mercier

    1999-01-01

    The purpose of this workshop was to develop organizational networks to help achieve best practices in management and leadership of forest research and foster continuous learning toward that goal through organizational benchmarking. The papers and notes herein document the presentations and discussions of the workshop.

  5. Fault-Tolerant Multiprocessor and VLSI-Based Systems.

    DTIC Science & Technology

    1987-03-15

    [Fragmentary scanned text; only the caption "Table 1: Statistics for the Benchmark Programs" and scattered phrases about distributing pages across groups of the reconfigured memory and about communication overhead resulting from faults are recoverable.]

  6. Benchmarks for Enhanced Network Performance: Hands-On Testing of Operating System Solutions to Identify the Optimal Application Server Platform for the Graduate School of Business and Public Policy

    DTIC Science & Technology

    2010-09-01

    [Fragmentary record text; only reference entries are recoverable.] Kennedy, R. C. (2009a). Clocking Windows netbook performance. Retrieved on 08/14/2010 from http…podcasts.infoworld.com/d/hardware/clocking-windows-netbook-performance-883?_kip_ipx=1177119066-1281460794. Kennedy, R. C. (2009b). OfficeBench 7: A cool new way to…

  7. Oregon's Technical, Human, and Organizational Networking Infrastructure for Science and Mathematics: A Planning Project. Benchmark Reports.

    ERIC Educational Resources Information Center

    Lamb, William G., Ed.

    This compilation of reports is part of a planning project that aims to establish a coalition of organizations and key people who can work together to bring computerized telecommunications (CT) to Oregon as a teaching tool for science and mathematics teachers and students, and to give that coalition practical ideas for proposals to make CT a…

  8. Congenital Heart Surgery Case Mix Across North American Centers and Impact on Performance Assessment.

    PubMed

    Pasquali, Sara K; Wallace, Amelia S; Gaynor, J William; Jacobs, Marshall L; O'Brien, Sean M; Hill, Kevin D; Gaies, Michael G; Romano, Jennifer C; Shahian, David M; Mayer, John E; Jacobs, Jeffrey P

    2016-11-01

    Performance assessment in congenital heart surgery is challenging due to the wide heterogeneity of disease. We describe current case mix across centers, evaluate methodology inclusive of all cardiac operations versus the more homogeneous subset of Society of Thoracic Surgeons benchmark operations, and describe implications regarding performance assessment. Centers (n = 119) participating in the Society of Thoracic Surgeons Congenital Heart Surgery Database (2010 through 2014) were included. Index operation type and frequency across centers were described. Center performance (risk-adjusted operative mortality) was evaluated and classified when including the benchmark versus all eligible operations. Overall, 207 types of operations were performed during the study period (112,140 total cases). Few operations were performed across all centers; only 25% were performed at least once by 75% or more of centers. There was 7.9-fold variation across centers in the proportion of total cases comprising high-complexity cases (STAT 5). In contrast, the benchmark operations made up 36% of cases, and all but 2 were performed by at least 90% of centers. When evaluating performance based on benchmark versus all operations, 15% of centers changed performance classification; 85% remained unchanged. Benchmark versus all operation methodology was associated with lower power, with 35% versus 78% of centers meeting sample size thresholds. There is wide variation in congenital heart surgery case mix across centers. Metrics based on benchmark versus all operations are associated with strengths (less heterogeneity) and weaknesses (lower power), and lead to differing performance classification for some centers. These findings have implications for ongoing efforts to optimize performance assessment, including choice of target population and appropriate interpretation of reported metrics. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  9. Implementation of Finite Volume based Navier Stokes Algorithm Within General Purpose Flow Network Code

    NASA Technical Reports Server (NTRS)

    Schallhorn, Paul; Majumdar, Alok

    2012-01-01

    This paper describes a finite volume based numerical algorithm that allows multi-dimensional computation of fluid flow within a system level network flow analysis. There are several thermo-fluid engineering problems where higher fidelity solutions are needed that are not within the capacity of system level codes. The proposed algorithm will allow NASA's Generalized Fluid System Simulation Program (GFSSP) to perform multi-dimensional flow calculation within the framework of GFSSP's typical system level flow network consisting of fluid nodes and branches. The paper presents several classical two-dimensional fluid dynamics problems that have been solved by GFSSP's multi-dimensional flow solver. The numerical solutions are compared with the analytical and benchmark solutions for Poiseuille flow, Couette flow, and flow in a driven cavity.

  10. Mitigating wildland fire hazard using complex network centrality measures

    NASA Astrophysics Data System (ADS)

    Russo, Lucia; Russo, Paola; Siettos, Constantinos I.

    2016-12-01

    We show how to distribute firebreaks in heterogeneous forest landscapes in the presence of strong wind using complex network centrality measures. The proposed framework is essentially a two-tier one: at the inner part, a state-of-the-art Cellular Automata model is used to compute the weights of the underlying lattice network, while at the outer part, the allocation of firebreaks is scheduled in terms of a hierarchy of centralities which most influence the spread of fire. For illustration purposes, we applied the proposed framework to a real wildfire that broke out on Spetses Island, Greece, in 1990. We evaluate the scheme against the benchmark of random allocation of firebreaks under the weather conditions of the real incident, i.e., in the presence of relatively strong winds.
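
    The outer, centrality-based step can be sketched as follows: treat the landscape as a weighted lattice graph and rank cells by a centrality measure, taking the top-ranked cells as candidate firebreak locations. Betweenness centrality on a toy lattice is used here purely for illustration; the paper derives edge weights from its Cellular Automata fire-spread model and uses a hierarchy of centralities.

        import networkx as nx

        def rank_firebreak_cells(G, k=5):
            """Rank lattice cells by weighted betweenness centrality (weights are
            treated as distances here; a real application would map spread rate
            to an effective distance before calling this)."""
            bc = nx.betweenness_centrality(G, weight="weight")
            return sorted(bc, key=bc.get, reverse=True)[:k]

        G = nx.grid_2d_graph(20, 20)                 # toy landscape lattice
        nx.set_edge_attributes(G, 1.0, "weight")
        print(rank_firebreak_cells(G))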

  11. Power Consumption Analysis of Operating Systems for Wireless Sensor Networks

    PubMed Central

    Lajara, Rafael; Pelegrí-Sebastiá, José; Perez Solano, Juan J.

    2010-01-01

    In this paper four wireless sensor network operating systems are compared in terms of power consumption. The analysis takes into account the most common operating systems—TinyOS v1.0, TinyOS v2.0, Mantis and Contiki—running on Tmote Sky and MICAz devices. With the objective of ensuring a fair evaluation, a benchmark composed of four applications has been developed, covering the most typical tasks that a Wireless Sensor Network performs. The results show the instant and average current consumption of the devices during the execution of these applications. The experimental measurements provide a good insight into the power mode in which the device components are running at every moment, and they can be used to compare the performance of different operating systems executing the same tasks. PMID:22219688

  12. Power consumption analysis of operating systems for wireless sensor networks.

    PubMed

    Lajara, Rafael; Pelegrí-Sebastiá, José; Perez Solano, Juan J

    2010-01-01

    In this paper four wireless sensor network operating systems are compared in terms of power consumption. The analysis takes into account the most common operating systems--TinyOS v1.0, TinyOS v2.0, Mantis and Contiki--running on Tmote Sky and MICAz devices. With the objective of ensuring a fair evaluation, a benchmark composed of four applications has been developed, covering the most typical tasks that a Wireless Sensor Network performs. The results show the instant and average current consumption of the devices during the execution of these applications. The experimental measurements provide a good insight into the power mode in which the device components are running at every moment, and they can be used to compare the performance of different operating systems executing the same tasks.

  13. Global-local feature attention network with reranking strategy for image caption generation

    NASA Astrophysics Data System (ADS)

    Wu, Jie; Xie, Si-ya; Shi, Xin-bao; Chen, Yao-wen

    2017-11-01

    In this paper, a novel framework, named global-local feature attention network with reranking strategy (GLAN-RS), is presented for the image captioning task. Rather than only adopting unitary visual information as in classical models, GLAN-RS explores the attention mechanism to capture local convolutional salient image maps. Furthermore, we adopt a reranking strategy to adjust the priority of the candidate captions and select the best one. The proposed model is verified using the Microsoft Common Objects in Context (MSCOCO) benchmark dataset across seven standard evaluation metrics. Experimental results show that GLAN-RS significantly outperforms state-of-the-art approaches, such as the multimodal recurrent neural network (MRNN) and Google NIC, achieving an improvement of 20% in BLEU4 score and 13 points in CIDEr score.

  14. Community detection in complex networks using link prediction

    NASA Astrophysics Data System (ADS)

    Cheng, Hui-Min; Ning, Yi-Zi; Yin, Zhao; Yan, Chao; Liu, Xin; Zhang, Zhong-Yuan

    2018-01-01

    Community detection and link prediction are both of great significance in network analysis, providing valuable insights into the topological structure of a network from different perspectives. In this paper, we propose a novel community detection algorithm that incorporates link prediction, motivated by the question of whether link prediction can improve the accuracy of community partitioning. For link prediction, we propose two novel indices to compute the similarity between each pair of nodes, one of which aims to add missing links, while the other tries to remove spurious edges. Extensive experiments are conducted on benchmark data sets, and the results of our proposed algorithm are compared with two classes of baselines. In conclusion, our proposed algorithm is competitive, revealing that link prediction does improve the precision of community detection.
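
    The core idea (repair the network with predicted links before partitioning it) can be sketched with off-the-shelf components. The resource allocation index and the Louvain method below are stand-ins chosen for illustration; the paper introduces its own similarity indices, one for adding missing links and one for removing spurious edges.

        import networkx as nx

        def detect_with_link_prediction(G, add_top=20):
            """Add the highest-scoring predicted links, then detect communities."""
            scored = sorted(nx.resource_allocation_index(G),
                            key=lambda t: t[2], reverse=True)
            H = G.copy()
            H.add_edges_from((u, v) for u, v, _ in scored[:add_top])
            return nx.algorithms.community.louvain_communities(H, seed=0)

        G = nx.karate_club_graph()
        print(detect_with_link_prediction(G))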

  15. Excited, Proud, and Accomplished: Exploring the Effects of Feedback Supplemented with Web-Based Peer Benchmarking on Self-Regulated Learning in Marketing Classrooms

    ERIC Educational Resources Information Center

    Raska, David

    2014-01-01

    This research explores and tests the effect of an innovative performance feedback practice--feedback supplemented with web-based peer benchmarking--through a lens of social cognitive framework for self-regulated learning. The results suggest that providing performance feedback with references to exemplary peer output is positively associated with…

  16. A Quantitative Methodology for Determining the Critical Benchmarks for Project 2061 Strand Maps

    ERIC Educational Resources Information Center

    Kuhn, G.

    2008-01-01

    The American Association for the Advancement of Science (AAAS) was tasked with identifying the key science concepts for science literacy in K-12 students in America (AAAS, 1990, 1993). The AAAS Atlas of Science Literacy (2001) has organized roughly half of these science concepts or benchmarks into fifty flow charts. Each flow chart or strand map…

  17. Academic Productivity in Psychiatry: Benchmarks for the H-Index.

    PubMed

    MacMaster, Frank P; Swansburg, Rose; Rittenbach, Katherine

    2017-08-01

    Bibliometrics play an increasingly critical role in the assessment of faculty for promotion and merit increases. Bibliometrics is the statistical analysis of publications, aimed at evaluating their impact. The objective of this study is to describe h-index and citation benchmarks in academic psychiatry. Faculty lists were acquired from online resources for all academic departments of psychiatry listed as having residency training programs in Canada (as of June 2016). Potential authors were then searched on Web of Science (Thomson Reuters) for their corresponding h-index and total number of citations. The sample included 1683 faculty members in academic psychiatry departments. Restricted to those with a rank of assistant, associate, or full professor resulted in 1601 faculty members (assistant = 911, associate = 387, full = 303). h-index and total citations differed significantly by academic rank. Both were highest in the full professor rank, followed by associate, then assistant. The range in each, however, was large. This study provides the initial benchmarks for the h-index and total citations in academic psychiatry. Regardless of any controversies or criticisms of bibliometrics, they are increasingly influencing promotion, merit increases, and grant support. As such, benchmarking by specialties is needed in order to provide needed context.
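
    For reference, the h-index underlying these benchmarks is straightforward to compute from a list of per-paper citation counts, as in the sketch below (the example numbers are made up).

        def h_index(citations):
            """Largest h such that h papers each have at least h citations."""
            ranked = sorted(citations, reverse=True)
            return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

        print(h_index([10, 8, 5, 4, 3]))  # prints 4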

  18. Extensive site-directed mutagenesis reveals interconnected functional units in the alkaline phosphatase active site

    PubMed Central

    Sunden, Fanny; Peck, Ariana; Salzman, Julia; Ressl, Susanne; Herschlag, Daniel

    2015-01-01

    Enzymes enable life by accelerating reaction rates to biological timescales. Conventional studies have focused on identifying the residues that have a direct involvement in an enzymatic reaction, but these so-called ‘catalytic residues’ are embedded in extensive interaction networks. Although fundamental to our understanding of enzyme function, evolution, and engineering, the properties of these networks have yet to be quantitatively and systematically explored. We dissected an interaction network of five residues in the active site of Escherichia coli alkaline phosphatase. Analysis of the complex catalytic interdependence of specific residues identified three energetically independent but structurally interconnected functional units with distinct modes of cooperativity. From an evolutionary perspective, this network is orders of magnitude more probable to arise than a fully cooperative network. From a functional perspective, new catalytic insights emerge. Further, such comprehensive energetic characterization will be necessary to benchmark the algorithms required to rationally engineer highly efficient enzymes. DOI: http://dx.doi.org/10.7554/eLife.06181.001 PMID:25902402

  19. Decorrelated jet substructure tagging using adversarial neural networks

    NASA Astrophysics Data System (ADS)

    Shimmin, Chase; Sadowski, Peter; Baldi, Pierre; Weik, Edison; Whiteson, Daniel; Goul, Edward; Søgaard, Andreas

    2017-10-01

    We describe a strategy for constructing a neural network jet substructure tagger which powerfully discriminates boosted decay signals while remaining largely uncorrelated with the jet mass. This reduces the impact of systematic uncertainties in background modeling while enhancing signal purity, resulting in improved discovery significance relative to existing taggers. The network is trained using an adversarial strategy, resulting in a tagger that learns to balance classification accuracy with decorrelation. As a benchmark scenario, we consider the case where large-radius jets originating from a boosted resonance decay are discriminated from a background of nonresonant quark and gluon jets. We show that in the presence of systematic uncertainties on the background rate, our adversarially trained, decorrelated tagger considerably outperforms a conventionally trained neural network, despite having a slightly worse signal-background separation power. We generalize the adversarial training technique to include a parametric dependence on the signal hypothesis, training a single network that provides optimized, interpolatable decorrelated jet tagging across a continuous range of hypothetical resonance masses, after training on discrete choices of the signal mass.

  20. Improving information filtering via network manipulation

    NASA Astrophysics Data System (ADS)

    Zhang, Fuguo; Zeng, An

    2012-12-01

    The recommender system is a very promising way to address the problem of overabundant information for online users. Although information filtering for online commercial systems has received much attention recently, almost all previous works are dedicated to designing new algorithms and treat the user-item bipartite networks as given and constant information. However, many problems of recommender systems, such as the cold-start problem (i.e., low recommendation accuracy for small-degree items), are actually due to the limitations of the underlying user-item bipartite networks. In this letter, we propose a strategy to enhance the performance of already existing recommendation algorithms by directly manipulating the user-item bipartite networks, namely adding some virtual connections to the networks. Numerical analyses on two benchmark data sets, MovieLens and Netflix, show that our method can remarkably improve the recommendation performance. Specifically, it not only improves the recommendation accuracy (especially for small-degree items), but also helps the recommender systems generate more diverse and novel recommendations.

  1. Modeling structure and resilience of the dark network.

    PubMed

    De Domenico, Manlio; Arenas, Alex

    2017-02-01

    While the statistical and resilience properties of the Internet are no longer changing significantly across time, the Darknet, a network devoted to keeping its traffic anonymous, still experiences rapid changes to improve the security of its users. Here we study the structure of the Darknet and find that its topology is rather peculiar, being characterized by a nonhomogeneous distribution of connections, typical of scale-free networks; very short path lengths and high clustering, typical of small-world networks; and a lack of a core of highly connected nodes. We propose a model to reproduce such features, demonstrating that the mechanisms used to improve cybersecurity are responsible for the observed topology. Unexpectedly, we reveal that its peculiar structure makes the Darknet much more resilient than the Internet (used as a benchmark for comparison at a descriptive level) to random failures, targeted attacks, and cascade failures, as a result of adaptive changes in response to the attempts of dismantling the network across time.
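
    Resilience comparisons of this kind are commonly quantified by removing nodes (at random, or in order of decreasing degree for a targeted attack) and tracking the size of the largest connected component. The sketch below does this on a toy scale-free graph; it is a generic robustness measurement, not the authors' model or data.

        import random
        import networkx as nx

        def giant_component_after_attack(G, fraction=0.3, targeted=False, seed=0):
            """Size of the largest connected component after removing a fraction
            of nodes, either uniformly at random or by decreasing degree."""
            H = G.copy()
            n_remove = int(fraction * H.number_of_nodes())
            if targeted:
                victims = [v for v, _ in sorted(H.degree, key=lambda kv: kv[1],
                                                reverse=True)[:n_remove]]
            else:
                random.seed(seed)
                victims = random.sample(list(H.nodes()), n_remove)
            H.remove_nodes_from(victims)
            if H.number_of_nodes() == 0:
                return 0
            return len(max(nx.connected_components(H), key=len))

        G = nx.barabasi_albert_graph(1000, 2, seed=1)   # toy scale-free network
        print(giant_component_after_attack(G), giant_component_after_attack(G, targeted=True))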

  2. A regularization approach to continuous learning with an application to financial derivatives pricing.

    PubMed

    Ormoneit, D

    1999-12-01

    We consider the training of neural networks in cases where the nonlinear relationship of interest gradually changes over time. One possibility to deal with this problem is by regularization where a variation penalty is added to the usual mean squared error criterion. To learn the regularized network weights we suggest the Iterative Extended Kalman Filter (IEKF) as a learning rule, which may be derived from a Bayesian perspective on the regularization problem. A primary application of our algorithm is in financial derivatives pricing, where neural networks may be used to model the dependency of the derivatives' price on one or several underlying assets. After giving a brief introduction to the problem of derivatives pricing we present experiments with German stock index options data showing that a regularized neural network trained with the IEKF outperforms several benchmark models and alternative learning procedures. In particular, the performance may be greatly improved using a newly designed neural network architecture that accounts for no-arbitrage pricing restrictions.

  3. Combining Machine Learning Systems and Multiple Docking Simulation Packages to Improve Docking Prediction Reliability for Network Pharmacology

    PubMed Central

    Hsin, Kun-Yi; Ghosh, Samik; Kitano, Hiroaki

    2013-01-01

    Increased availability of bioinformatics resources is creating opportunities for the application of network pharmacology to predict drug effects and toxicity resulting from multi-target interactions. Here we present a high-precision computational prediction approach that combines two elaborately built machine learning systems and multiple molecular docking tools to assess binding potentials of a test compound against proteins involved in a complex molecular network. One of the two machine learning systems is a re-scoring function to evaluate binding modes generated by docking tools. The second is a binding mode selection function to identify the most predictive binding mode. Results from a series of benchmark validations and a case study show that this approach surpasses the prediction reliability of other techniques and that it also identifies either primary or off-targets of kinase inhibitors. Integrating this approach with molecular network maps makes it possible to address drug safety issues by comprehensively investigating network-dependent effects of a drug or drug candidate. PMID:24391846

  4. Extensive site-directed mutagenesis reveals interconnected functional units in the alkaline phosphatase active site

    DOE PAGES

    Sunden, Fanny; Peck, Ariana; Salzman, Julia; ...

    2015-04-22

    Enzymes enable life by accelerating reaction rates to biological timescales. Conventional studies have focused on identifying the residues that have a direct involvement in an enzymatic reaction, but these so-called ‘catalytic residues’ are embedded in extensive interaction networks. Although fundamental to our understanding of enzyme function, evolution, and engineering, the properties of these networks have yet to be quantitatively and systematically explored. We dissected an interaction network of five residues in the active site of Escherichia coli alkaline phosphatase. Analysis of the complex catalytic interdependence of specific residues identified three energetically independent but structurally interconnected functional units with distinct modes of cooperativity. From an evolutionary perspective, this network is orders of magnitude more probable to arise than a fully cooperative network. From a functional perspective, new catalytic insights emerge. Further, such comprehensive energetic characterization will be necessary to benchmark the algorithms required to rationally engineer highly efficient enzymes.

  5. Modeling structure and resilience of the dark network

    NASA Astrophysics Data System (ADS)

    De Domenico, Manlio; Arenas, Alex

    2017-02-01

    While the statistical and resilience properties of the Internet are no longer changing significantly across time, the Darknet, a network devoted to keeping its traffic anonymous, still experiences rapid changes to improve the security of its users. Here we study the structure of the Darknet and find that its topology is rather peculiar, being characterized by a nonhomogeneous distribution of connections, typical of scale-free networks; very short path lengths and high clustering, typical of small-world networks; and a lack of a core of highly connected nodes. We propose a model to reproduce such features, demonstrating that the mechanisms used to improve cybersecurity are responsible for the observed topology. Unexpectedly, we reveal that its peculiar structure makes the Darknet much more resilient than the Internet (used as a benchmark for comparison at a descriptive level) to random failures, targeted attacks, and cascade failures, as a result of adaptive changes in response to the attempts of dismantling the network across time.

  6. Computational models of location-invariant orthographic processing

    NASA Astrophysics Data System (ADS)

    Dandurand, Frédéric; Hannagan, Thomas; Grainger, Jonathan

    2013-03-01

    We trained three topologies of backpropagation neural networks to discriminate 2000 words (lexical representations) presented at different positions of a horizontal letter array. The first topology (zero-deck) contains no hidden layer, the second (one-deck) has a single hidden layer, and for the last topology (two-deck), the task is divided in two subtasks implemented as two stacked neural networks, with explicit word-centred letters as intermediate representations. All topologies successfully simulated two key benchmark phenomena observed in skilled human reading: transposed-letter priming and relative-position priming. However, the two-deck topology most accurately simulated the ability to discriminate words from nonwords, while containing the fewest connection weights. We analysed the internal representations after training. Zero-deck networks implement a letter-based scheme with a position bias to differentiate anagrams. One-deck networks implement a holographic overlap coding in which representations are essentially letter-based and words are linear combinations of letters. Two-deck networks also implement holographic-coding.

  7. Benchmarking Measures of Network Controllability on Canonical Graph Models

    NASA Astrophysics Data System (ADS)

    Wu-Yan, Elena; Betzel, Richard F.; Tang, Evelyn; Gu, Shi; Pasqualetti, Fabio; Bassett, Danielle S.

    2018-03-01

    The control of networked dynamical systems opens the possibility for new discoveries and therapies in systems biology and neuroscience. Recent theoretical advances provide candidate mechanisms by which a system can be driven from one pre-specified state to another, and computational approaches provide tools to test those mechanisms in real-world systems. Despite already having been applied to study network systems in biology and neuroscience, the practical performance of these tools and associated measures on simple networks with pre-specified structure has yet to be assessed. Here, we study the behavior of four control metrics (global, average, modal, and boundary controllability) on eight canonical graphs (including Erdős-Rényi, regular, small-world, random geometric, Barabási-Albert preferential attachment, and several modular networks) with different edge weighting schemes (Gaussian, power-law, and two nonparametric distributions from brain networks, as examples of real-world systems). We observe that differences in global controllability across graph models are more salient when edge weight distributions are heavy-tailed as opposed to normal. In contrast, differences in average, modal, and boundary controllability across graph models (as well as across nodes in the graph) are more salient when edge weight distributions are less heavy-tailed. Across graph models and edge weighting schemes, average and modal controllability are negatively correlated with one another across nodes; yet, across graph instances, the relation between average and modal controllability can be positive, negative, or nonsignificant. Collectively, these findings demonstrate that controllability statistics (and their relations) differ across graphs with different topologies and that these differences can be muted or accentuated by differences in the edge weight distributions. More generally, our numerical studies motivate future analytical efforts to better understand the mathematical underpinnings of the relationship between graph topology and control, as well as efforts to design networks with specific control profiles.
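
    Of the four metrics, average controllability has a particularly compact form: for the discretized, stabilized linear dynamics commonly used in this literature, it is the trace of the controllability Gramian when input is injected at a single node. The sketch below computes it by truncating the Gramian series; the normalization (scaling A by one plus its largest singular value) and the series length are common conventions, stated here as assumptions rather than the paper's exact procedure.

        import numpy as np

        def average_controllability(A, horizon=200):
            """Per-node average controllability: Tr of the controllability Gramian
            sum_k (A^k e_i)(A^k e_i)^T for x(t+1) = A x(t) + e_i u(t), with A
            rescaled so its spectral radius is below one."""
            A = A / (1.0 + np.linalg.svd(A, compute_uv=False)[0])
            n = A.shape[0]
            W = np.zeros((n, n))
            Ak = np.eye(n)
            for _ in range(horizon):
                W += Ak.T @ Ak          # diagonal entry i accumulates ||A^k e_i||^2
                Ak = A @ Ak
            return np.diag(W)

        A = np.random.rand(20, 20); A = (A + A.T) / 2   # toy weighted adjacency matrix
        print(average_controllability(A)[:5])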

  8. Benchmark duration of work hours for development of fatigue symptoms in Japanese workers with adjustment for job-related stress.

    PubMed

    Suwazono, Yasushi; Dochi, Mirei; Kobayashi, Etsuko; Oishi, Mitsuhiro; Okubo, Yasushi; Tanaka, Kumihiko; Sakata, Kouichi

    2008-12-01

    The objective of this study was to calculate benchmark durations and lower 95% confidence limits for benchmark durations of working hours associated with subjective fatigue symptoms by applying the benchmark dose approach while adjusting for job-related stress using multiple logistic regression analyses. A self-administered questionnaire was completed by 3,069 male and 412 female daytime workers (age 18-67 years) in a Japanese steel company. The eight dependent variables in the Cumulative Fatigue Symptoms Index were decreased vitality, general fatigue, physical disorders, irritability, decreased willingness to work, anxiety, depressive feelings, and chronic tiredness. Independent variables were daily working hours, four subscales (job demand, job control, interpersonal relationship, and job suitability) of the Brief Job Stress Questionnaire, and other potential covariates. Using significant parameters for working hours and those for other covariates, the benchmark durations of working hours were calculated for the corresponding Index property. Benchmark response was set at 5% or 10%. Assuming a condition of worst job stress, the benchmark duration/lower 95% confidence limit for benchmark duration of working hours per day with a benchmark response of 5% or 10% were 10.0/9.4 or 11.7/10.7 (irritability) and 9.2/8.9 or 10.4/9.8 (chronic tiredness) in men and 8.9/8.4 or 9.8/8.9 (chronic tiredness) in women. The threshold amounts of working hours for fatigue symptoms under the worst job-related stress were very close to the standard daily working hours in Japan. The results strongly suggest that special attention should be paid to employees whose working hours exceed threshold amounts based on individual levels of job-related stress.
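
    The benchmark-duration calculation described here can be illustrated with a small function: given fitted logistic coefficients for working hours (with the other covariates held at their reference levels), solve for the duration at which the extra risk over background equals the benchmark response. The coefficients in the example are invented for illustration, not estimates from the study.

        import numpy as np

        def benchmark_duration(b0, b1, bmr=0.05):
            """Duration d at which extra risk over background reaches the BMR for the
            logistic model P(d) = 1 / (1 + exp(-(b0 + b1 * d))):
            [P(d) - P(0)] / [1 - P(0)] = bmr, solved for d."""
            p0 = 1.0 / (1.0 + np.exp(-b0))
            p_target = p0 + bmr * (1.0 - p0)
            return (np.log(p_target / (1.0 - p_target)) - b0) / b1

        print(benchmark_duration(b0=-4.0, b1=0.35, bmr=0.05))
        print(benchmark_duration(b0=-4.0, b1=0.35, bmr=0.10))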

  9. Competency based training in robotic surgery: benchmark scores for virtual reality robotic simulation.

    PubMed

    Raison, Nicholas; Ahmed, Kamran; Fossati, Nicola; Buffi, Nicolò; Mottrie, Alexandre; Dasgupta, Prokar; Van Der Poel, Henk

    2017-05-01

    To develop benchmark scores of competency for use within a competency based virtual reality (VR) robotic training curriculum. This longitudinal, observational study analysed results from nine European Association of Urology hands-on-training courses in VR simulation. In all, 223 participants ranging from novice to expert robotic surgeons completed 1565 exercises. Competency was set at 75% of the mean expert score. Benchmark scores for all general performance metrics generated by the simulator were calculated. Assessment exercises were selected by expert consensus and through learning-curve analysis. Three basic skill and two advanced skill exercises were identified. Benchmark scores based on expert performance offered viable targets for novice and intermediate trainees in robotic surgery. Novice participants met the competency standards for most basic skill exercises; however, advanced exercises were significantly more challenging. Intermediate participants performed better across the seven metrics but still did not achieve the benchmark standard in the more difficult exercises. Benchmark scores derived from expert performances offer relevant and challenging scores for trainees to achieve during VR simulation training. Objective feedback allows both participants and trainers to monitor educational progress and ensures that training remains effective. Furthermore, the well-defined goals set through benchmarking offer clear targets for trainees and enable training to move to a more efficient competency based curriculum. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.

  10. A Collaborative Recommend Algorithm Based on Bipartite Community

    PubMed Central

    Fu, Yuchen; Liu, Quan; Cui, Zhiming

    2014-01-01

    The recommendation algorithm based on bipartite networks is superior to traditional methods in accuracy and diversity, which shows that considering the network topology of recommendation systems can help to improve recommendation results. However, existing algorithms mainly focus on the overall topology, although local characteristics can also play an important role in collaborative recommendation. Therefore, taking into account the data characteristics and application requirements of collaborative recommender systems, we propose a link-community partitioning algorithm based on label propagation and a collaborative recommendation algorithm based on bipartite communities. We then designed numerical experiments to verify the validity of the algorithms on benchmark and real-world databases. PMID:24955393

  11. Information filtering based on corrected redundancy-eliminating mass diffusion.

    PubMed

    Zhu, Xuzhen; Yang, Yujie; Chen, Guilin; Medo, Matus; Tian, Hui; Cai, Shi-Min

    2017-01-01

    Methods used in information filtering and recommendation often rely on quantifying the similarity between objects or users. The similarity metrics used often suffer from similarity redundancies arising from correlations between objects' attributes. Based on an unweighted undirected object-user bipartite network, we propose a Corrected Redundancy-Eliminating similarity index (CRE) which is based on a spreading process on the network. Extensive experiments on three benchmark data sets (MovieLens, Netflix, and Amazon) show that when used in recommendation, the CRE yields significant improvements in terms of recommendation accuracy and diversity. A detailed analysis is presented to unveil the origins of the observed differences between the CRE and mainstream similarity indices.
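
    For readers unfamiliar with the spreading process the CRE builds on, the Python sketch below implements plain mass diffusion on a toy object-user bipartite network; the redundancy correction that defines the CRE itself is not reproduced, and the small matrix is invented.

        import numpy as np

        # Rows are users, columns are objects; 1 means the user collected the object.
        A = np.array([[1, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 1, 1]], dtype=float)

        k_user = A.sum(axis=1)   # user degrees
        k_obj = A.sum(axis=0)    # object degrees

        # Resource leaves each object split among its users, then each user splits
        # what it received among the objects it collected.
        W = (A / k_user[:, None]).T @ (A / k_obj)   # object-to-object transfer matrix

        target_user = 0
        initial = A[target_user]                    # resource placed on collected objects
        scores = W @ initial
        scores[A[target_user] == 1] = 0             # do not re-recommend collected items
        print(np.argsort(scores)[::-1])             # objects ranked for recommendation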

  12. Learning in Stochastic Bit Stream Neural Networks.

    PubMed

    van Daalen, Max; Shawe-Taylor, John; Zhao, Jieyu

    1996-08-01

    This paper presents learning techniques for a novel feedforward stochastic neural network. The model uses stochastic weights and the "bit stream" data representation. It has a clean, analysable functionality and is attractive for its great potential to be implemented in hardware using standard digital VLSI technology. The design allows simulation at three different levels, and learning techniques are described for each level. The lowest level corresponds to on-chip learning. Simulation results on the three benchmark MONK's problems and on handwritten digit recognition with a clean set of 500 16 x 16 pixel digits demonstrate that the new model is powerful enough for real-world applications. Copyright 1996 Elsevier Science Ltd
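
    The "bit stream" representation mentioned above comes from stochastic computing: a value in [0, 1] is encoded as a Bernoulli bit stream, so multiplication reduces to ANDing independent streams. The Python sketch below shows only this generic encoding, not the paper's network or its learning rules.

        import random

        def encode(p, length=10_000):
            # Bernoulli bit stream whose density of 1s encodes the value p.
            return [1 if random.random() < p else 0 for _ in range(length)]

        def decode(stream):
            return sum(stream) / len(stream)

        a, b = 0.6, 0.3
        product_stream = [x & y for x, y in zip(encode(a), encode(b))]
        print(decode(product_stream))   # close to a * b = 0.18, up to sampling noise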

  13. Traffic sign classification with dataset augmentation and convolutional neural network

    NASA Astrophysics Data System (ADS)

    Tang, Qing; Kurnianggoro, Laksono; Jo, Kang-Hyun

    2018-04-01

    This paper presents a method for traffic sign classification using a convolutional neural network (CNN). In this method, we first convert the color image to grayscale and then normalize it to the range (-1, 1) as the preprocessing step. To increase the robustness of the classification model, we apply a dataset augmentation algorithm and create new images to train the model. To avoid overfitting, we utilize a dropout module before the last fully connected layer. To assess the performance of the proposed method, the German traffic sign recognition benchmark (GTSRB) dataset is utilized. Experimental results show that the method is effective in classifying traffic signs.
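
    The preprocessing step described above is straightforward to reproduce; the Python sketch below converts a stand-in image to grayscale and scales it into (-1, 1). The random array replaces a real GTSRB sample, and the luminance weights are a common convention rather than the paper's stated choice.

        import numpy as np

        rgb = np.random.randint(0, 256, size=(32, 32, 3)).astype(np.float32)

        # Luminance-weighted grayscale conversion.
        gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

        # Map [0, 255] into roughly (-1, 1).
        normalized = (gray - 128.0) / 128.0
        print(normalized.min(), normalized.max())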

  14. Outcome after polytrauma in a certified trauma network: comparing standard vs. maximum care facilities concept of the study and study protocol (POLYQUALY).

    PubMed

    Koller, Michael; Ernstberger, Antonio; Zeman, Florian; Loss, Julika; Nerlich, Michael

    2016-07-11

    The aim of this study is to evaluate the performance of the first certified regional trauma network in Germany, the Trauma Network Eastern Bavaria (TNO), addressing the following specific research questions: Do standard and maximum care facilities produce comparable (risk-adjusted) levels of patient outcome? Does TNO outperform reference data provided by the German Trauma Register 2008? Does TNO comply with selected benchmarks derived from the S3 practice guideline? Which barriers and facilitators can be identified in the health care delivery processes for polytrauma patients? The design is based on a prospective multicenter cohort study comparing two cohorts of polytrauma patients: those treated in maximum care facilities and those treated in standard care facilities. Patient recruitment will take place in the 25 TNO clinics. It is estimated that n = 1,100 patients will be assessed for eligibility within a two-year period and n = 800 will be included in the study and analysed. Main outcome measures include the TraumaRegisterQM form, which has been implemented in the clinical routine since 2009 and is filled in via a web-based data management system in participating hospitals on a mandatory basis. Furthermore, patient-reported outcome is assessed using the EQ-5D at 6, 12 and 24 months after trauma. Comparisons will be drawn between the two cohorts. Further standards of comparison are secondary data derived from the German Trauma Registry as well as benchmarks from the German S3 guideline on polytrauma. The qualitative part of the study will be based on semi-standardized interviews and focus group discussions with health care providers within TNO. The goal of the qualitative analysis is to elucidate which facilitating and inhibiting forces influence cooperation and performance within the network. This is the first study to evaluate a certified trauma network within the German health care system using a unique combination of a quantitative (prospective cohort study) and a qualitative (in-depth facilitator/barrier analysis) approach. The information generated by this project will be used in two ways. Firstly, within the region the results of the study will help to optimize the pre-hospital and clinical management of polytrauma patients. Secondly, on a nationwide scale, influential decision-making bodies, such as the Ministries of Health, the Hospital Associations, sickness funds, insurance companies and professional societies, will be addressed. The results will not only be applicable to the region of Eastern Bavaria, but also to most other parts of Germany with a comparable infrastructure. VfD_Polyqualy_12_001978, 10.Jan.2013; German Clinical Trials Register DRKS00010039, 18.02.2016.

  15. Using SPARK as a Solver for Modelica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael; Wetter, Michael; Haves, Philip

    Modelica is an object-oriented acausal modeling language that is well positioned to become a de-facto standard for expressing models of complex physical systems. To simulate a model expressed in Modelica, it needs to be translated into executable code. For generating run-time efficient code, such a translation needs to employ algebraic formula manipulations. As the SPARK solver has been shown to be competitive for generating such code but currently cannot be used with the Modelica language, we report in this paper how SPARK's symbolic and numerical algorithms can be implemented in OpenModelica, an open-source implementation of a Modelica modeling and simulation environment. We also report benchmark results that show that for our air flow network simulation benchmark, the SPARK solver is competitive with Dymola, which is believed to provide the best solver for Modelica.

  16. Radiochemical analyses of surface water from U.S. Geological Survey hydrologic bench-mark stations

    USGS Publications Warehouse

    Janzer, V.J.; Saindon, L.G.

    1972-01-01

    The U.S. Geological Survey's program for collecting and analyzing surface-water samples for radiochemical constituents at hydrologic bench-mark stations is described. Analytical methods used during the study are described briefly and data obtained from 55 of the network stations in the United States during the period from 1967 to 1971 are given in tabular form. Concentration values are reported for dissolved uranium, radium, gross alpha and gross beta radioactivity. Values are also given for suspended gross alpha radioactivity in terms of natural uranium. Suspended gross beta radioactivity is expressed both as the equilibrium mixture of strontium-90/yttrium-90 and as cesium-137. Other physical parameters reported which describe the samples include the concentrations of dissolved and suspended solids, the water temperature and stream discharge at the time of the sample collection.

  17. Defining consensus norms for palliative care of people with intellectual disabilities in Europe, using Delphi methods: A White Paper from the European Association of Palliative Care

    PubMed Central

    Tuffrey-Wijne, Irene; McLaughlin, Dorry; Curfs, Leopold; Dusart, Anne; Hoenger, Catherine; McEnhill, Linda; Read, Sue; Ryan, Karen; Satgé, Daniel; Straßer, Benjamin; Westergård, Britt-Evy; Oliver, David

    2015-01-01

    Background: People with intellectual disabilities often present with unique challenges that make it more difficult to meet their palliative care needs. Aim: To define consensus norms for palliative care of people with intellectual disabilities in Europe. Design: Delphi study in four rounds: (1) a taskforce of 12 experts from seven European countries drafted the norms, based on available empirical knowledge and regional/national guidelines; (2) using an online survey, 34 experts from 18 European countries evaluated the draft norms, provided feedback and distributed the survey within their professional networks. Criteria for consensus were clearly defined; (3) modifications and recommendations were made by the taskforce; and (4) the European Association for Palliative Care reviewed and approved the final version. Setting and participants: Taskforce members: identified through international networking strategies. Expert panel: a purposive sample identified through taskforce members’ networks. Results: A total of 80 experts from 15 European countries evaluated 52 items within the following 13 norms: equity of access, communication, recognising the need for palliative care, assessment of total needs, symptom management, end-of-life decision making, involving those who matter, collaboration, support for family/carers, preparing for death, bereavement support, education/training and developing/managing services. None of the items scored less than 86% agreement, making a further round unnecessary. In light of respondents’ comments, several items were modified and one item was deleted. Conclusion: This White Paper presents the first guidance for clinical practice, policy and research related to palliative care for people with intellectual disabilities based on evidence and European consensus, setting a benchmark for changes in policy and practice. PMID:26346181

  18. A Programming Model Performance Study Using the NAS Parallel Benchmarks

    DOE PAGES

    Shan, Hongzhang; Blagojević, Filip; Min, Seung-Jai; ...

    2010-01-01

    Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper we use the NAS Parallel Benchmarks to study three programming models, MPI, OpenMP and PGAS, to understand their performance and memory usage characteristics on current multicore architectures. To understand these characteristics we use the Integrated Performance Monitoring tool and other ways to measure communication versus computation time, as well as the fraction of the run time spent in OpenMP. The benchmarks are run on two different Cray XT5 systems and an Infiniband cluster. Our results show that in general the three programming models exhibit very similar performance characteristics. In a few cases, OpenMP is significantly faster because it explicitly avoids communication. For these particular cases, we were able to re-write the UPC versions and achieve equal performance to OpenMP. Using OpenMP was also the most advantageous in terms of memory usage. Also we compare performance differences between the two Cray systems, which have quad-core and hex-core processors. We show that at scale the performance is almost always slower on the hex-core system because of increased contention for network resources.

  19. Neonatal outcomes of preterm or very-low-birth-weight infants over a decade from Queen Mary Hospital, Hong Kong: comparison with the Vermont Oxford Network.

    PubMed

    Chee, Y Y; Wong, M Sc; Wong, R Ms; Wong, K Y

    2017-08-01

    There is a paucity of local data on neonatal outcomes of preterm/very-low-birth-weight infants in Hong Kong. This study aimed to evaluate the survival rate on discharge and morbidity of preterm/very-low-birth-weight infants (≤29+6 weeks and/or birth weight <1500 g) over a decade at Queen Mary Hospital in Hong Kong, so as to provide centre-specific data for prenatal counselling and to benchmark these results against the Vermont Oxford Network. Standardised perinatal/neonatal data were collected for infants with gestational age of 23+0 to 29+6 weeks and/or birth weight of <1500 g who were born at Queen Mary Hospital between 1 January 2005 and 31 December 2014. These data were compared with all neonatal centres in the Vermont Oxford Network in 2013. The Chi squared test was used to compare the categorical Queen Mary Hospital data with that of Vermont Oxford Network. A two-tailed P value of <0.05 was considered statistically significant. The overall survival rate on discharge from Queen Mary Hospital for 449 infants was significantly higher than that of the Vermont Oxford Network (87% versus 80%; P=0.0006). The morbidity-free survival at Queen Mary Hospital (40%) was comparable with the Vermont Oxford Network (44%). At Queen Mary Hospital, 86% of infants had respiratory distress syndrome, 40% bronchopulmonary dysplasia, 44% patent ductus arteriosus, 7% severe intraventricular haemorrhage, 5% necrotising enterocolitis, 10% severe retinopathy of prematurity, 10% late-onset sepsis, and 84% growth failure on discharge. Rates of respiratory distress syndrome, intraventricular haemorrhage, necrotising enterocolitis, and severe retinopathy of prematurity were similar in the two populations. At Queen Mary Hospital, significantly more infants had bronchopulmonary dysplasia (P=0.011), patent ductus arteriosus (P=0.015), and growth failure (P=0.0001) compared with the Vermont Oxford Network. In contrast, rate of late-onset sepsis was significantly lower at Queen Mary Hospital than the Vermont Oxford Network (P=0.0002). Mortality rate and most of the morbidity rates of our centre compare favourably with international standards, but rates of bronchopulmonary dysplasia and growth failure are of concern. A regular benchmarking process is crucial to audit any change in clinical outcomes after implementation of a local quality improvement project.
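
    The survival comparison above relies on a standard chi-squared test on a 2x2 table. The Python sketch below shows the mechanics with a local cohort matching the reported 87% of 449 infants and an invented reference cohort at 80%; it is illustrative only, not a reanalysis of the Vermont Oxford Network data.

        from scipy.stats import chi2_contingency

        #            survived, died
        local     = [391, 58]          # roughly 87% survival of 449 infants
        reference = [8000, 2000]       # hypothetical reference cohort at 80%

        chi2, p, dof, _ = chi2_contingency([local, reference])
        print(f"chi2 = {chi2:.2f}, p = {p:.4f}")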

  20. Safety and governance issues for neonatal transport services.

    PubMed

    Ratnavel, Nandiran

    2009-08-01

    Neonatal transport is a subspecialty within the field of neonatology. Transport services are developing rapidly in the United Kingdom (UK) with network demographics and funding patterns leading to a broad spectrum of service provision. Applying principles of clinical governance and safety to such a diverse landscape of transport services is challenging but finally receiving much needed attention. To understand issues of risk management associated with this branch of retrieval medicine one needs to look at the infrastructure of transport teams, arrangements for governance, risk identification, incident reporting, feedback and learning from experience. One also needs to look at audit processes, training, communication and ways of team working. Adherence to current recommendations for equipment and vehicle design are vital. The national picture for neonatal transport is evolving. This is an excellent time to start benchmarking and sharing best practice with a view to optimising safety and reducing risk.

  1. The power of collaboration: using internet-based tools to facilitate networking and benchmarking within a consortium of academic health centers.

    PubMed

    Korner, Eli J; Oinonen, Michael J; Browne, Robert C

    2003-02-01

    The University HealthSystem Consortium (UHC) represents a strategic alliance of 169 academic health centers and associated institutions engaged in knowledge sharing and idea-generation. The use of the Internet as a tool in the delivery of UHC's products and services has increased dramatically over the past year and will continue to increase during the foreseeable future. This paper examines the current state of UHC-member institution driven tools and services that utilize the Web as a fundamental component in their delivery. The evolution of knowledge management at UHC, its management information and reporting tools, and expansion of e-commerce provide real world examples of Internet use in health care delivery and management. Health care workers are using these Web-based tools to help manage rising costs and optimize patient outcomes. Policy, technical, and organizational issues must be resolved to facilitate rapid adoption of Internet applications.

  2. GPS Measurements of Crustal Deformation in San Diego, CA: Results from fixed-height monument network and implications for the Inner Continental Borderlands

    NASA Astrophysics Data System (ADS)

    Singleton, D. M.; Agnew, D. C.; Maloney, J. M.; Rockwell, T. K.

    2017-12-01

    The Newport-Inglewood-Rose Canyon fault zone is the easternmost fault in a system of strike-slip faults that together make up the Inner Continental Borderlands (ICB), a region offshore of Southern California that is thought to accommodate 10-15% of the total plate boundary slip. However, slip on individual faults is difficult to measure because of the offshore location and limited availability of geologic indicators. With a 30-km onshore segment, the southern Rose Canyon fault zone (RCF) provides an opportunity to employ geodetic techniques to quantify the slip rate for a fault within the ICB. Space geodetic techniques have significantly enhanced our ability to quantify tectonic motion. With a best-estimated geologic slip rate of 1.5 ± 0.5 mm/yr, the RCF, as with other low slip-rate faults, is a challenge to traditional survey GPS techniques. Here we present the results from surveys of a GPS network first constructed in 1998 to determine motion across the RCF. This network has four sites, each site consisting of three to five closely spaced benchmarks that employ novel fixed-height centering with submillimeter repeatability so as to reduce noise associated with monument stability. Data collected from 1998 to 2017 shows millimeter-level monument stability and repeatability of the network. We present the results of velocity inversion for slip using data spanning 19 years across the Rose Canyon fault zone and discuss the implications for broader motion across the Inner Continental Borderlands.

  3. Implementation of patient blood management remains extremely variable in Europe and Canada: the NATA benchmark project: An observational study.

    PubMed

    Van der Linden, Philippe; Hardy, Jean-François

    2016-12-01

    Preoperative anaemia is associated with increased postoperative morbidity and mortality. Patient blood management (PBM) is advocated to improve patient outcomes. NATA, the 'Network for the advancement of patient blood management, haemostasis and thrombosis', initiated a benchmark project with the aim of providing the basis for educational strategies to implement optimal PBM in participating centres. Prospective, observational study with online data collection in 11 secondary and tertiary care institutions interested in developing PBM. Ten European centres (Austria, Spain, England, Denmark, Belgium, Netherlands, Romania, Greece, France, and Germany) and one Canadian centre participated between January 2010 and June 2011. A total of 2470 patients undergoing total hip (THR) or knee replacement, or coronary artery bypass grafting (CABG), were registered in the study. Data from 2431 records were included in the final analysis. Primary outcome measures were the incidence and volume of red blood cells (RBC) transfused. Logistic regression analysis identified variables independently associated with RBC transfusions. The incidence of transfusion was significantly different between centres for THR (range 7 to 95%), total knee replacement (range 3 to 100%) and CABG (range 20 to 95%). The volume of RBC transfused was significantly different between centres for THR and CABG. The incidence of preoperative anaemia ranged between 3 and 40% and its treatment between 0 and 40%, the latter not being related to the former. Patient characteristics, evolution of haemoglobin concentrations and blood losses were also different between centres. Variables independently associated with RBC transfusion were preoperative haemoglobin concentration, lost volume of RBC and female sex. Implementation of PBM remains extremely variable across centres. The relative importance of factors explaining RBC transfusion differs across institutions, some being patient related whereas others are related to the healthcare process. The results reported confidentially to each centre will allow them to implement tailored measures to improve their PBM strategies.

  4. Paradoxical ventilator associated pneumonia incidences among selective digestive decontamination studies versus other studies of mechanically ventilated patients: benchmarking the evidence base

    PubMed Central

    2011-01-01

    Introduction Selective digestive decontamination (SDD) appears to have a more compelling evidence base than non-antimicrobial methods for the prevention of ventilator associated pneumonia (VAP). However, the striking variability in ventilator associated pneumonia-incidence proportion (VAP-IP) among the SDD studies remains unexplained and a postulated contextual effect remains untested for. Methods Nine reviews were used to source 45 observational (benchmark) groups and 137 component (control and intervention) groups of studies of SDD and studies of three non-antimicrobial methods of VAP prevention. The logit VAP-IP data were summarized by meta-analysis using random effects methods and the associated heterogeneity (tau2) was measured. As group level predictors of logit VAP-IP, the mode of VAP diagnosis, proportion of trauma admissions, the proportion receiving prolonged ventilation and the intervention method under study were examined in meta-regression models containing the benchmark groups together with either the control (models 1 to 3) or intervention (models 4 to 6) groups of the prevention studies. Results The VAP-IP benchmark derived here is 22.1% (95% confidence interval; 95% CI; 19.2 to 25.5; tau2 0.34) whereas the mean VAP-IP of control groups from studies of SDD and of non-antimicrobial methods, is 35.7 (29.7 to 41.8; tau2 0.63) versus 20.4 (17.2 to 24.0; tau2 0.41), respectively (P < 0.001). The disparity between the benchmark groups and the control groups of the SDD studies, which was most apparent for the highest quality studies, could not be explained in the meta-regression models after adjusting for various group level factors. The mean VAP-IP (95% CI) of intervention groups is 16.0 (12.6 to 20.3; tau2 0.59) and 17.1 (14.2 to 20.3; tau2 0.35) for SDD studies versus studies of non-antimicrobial methods, respectively. Conclusions The VAP-IP among the intervention groups within the SDD evidence base is less variable and more similar to the benchmark than among the control groups. These paradoxical observations cannot readily be explained. The interpretation of the SDD evidence base cannot proceed without further consideration of this contextual effect. PMID:21214897
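
    The random-effects summaries above pool logit-transformed incidence proportions and report the between-group variance tau2. A minimal DerSimonian-Laird style sketch in Python is given below; the event counts are invented and the variance formula is the usual logit approximation, so it only illustrates the type of calculation, not the review's actual models.

        import numpy as np

        events = np.array([12, 30, 8, 22])      # VAP cases per group (hypothetical)
        totals = np.array([60, 90, 50, 70])     # ventilated patients per group

        p = events / totals
        y = np.log(p / (1 - p))                 # logit of each incidence proportion
        v = 1 / events + 1 / (totals - events)  # approximate variance of each logit

        w = 1 / v                               # fixed-effect weights
        y_fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_fixed) ** 2)      # Cochran's Q
        tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

        w_re = 1 / (v + tau2)                   # random-effects weights
        y_re = np.sum(w_re * y) / np.sum(w_re)
        pooled = 1 / (1 + np.exp(-y_re))        # back-transform to a proportion
        print(f"tau2 = {tau2:.2f}, pooled VAP-IP = {pooled:.1%}")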

  5. Skeleton-Based Human Action Recognition With Global Context-Aware Attention LSTM Networks

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, Gang; Duan, Ling-Yu; Abdiyeva, Kamila; Kot, Alex C.

    2018-04-01

    Human action recognition in 3D skeleton sequences has attracted a lot of research attention. Recently, Long Short-Term Memory (LSTM) networks have shown promising performance in this task due to their strengths in modeling the dependencies and dynamics in sequential data. As not all skeletal joints are informative for action recognition, and the irrelevant joints often bring noise which can degrade the performance, we need to pay more attention to the informative ones. However, the original LSTM network does not have explicit attention ability. In this paper, we propose a new class of LSTM network, Global Context-Aware Attention LSTM (GCA-LSTM), for skeleton based action recognition. This network is capable of selectively focusing on the informative joints in each frame of each skeleton sequence by using a global context memory cell. To further improve the attention capability of our network, we also introduce a recurrent attention mechanism, with which the attention performance of the network can be enhanced progressively. Moreover, we propose a stepwise training scheme in order to train our network effectively. Our approach achieves state-of-the-art performance on five challenging benchmark datasets for skeleton based action recognition.

  6. Inference of neuronal network spike dynamics and topology from calcium imaging data

    PubMed Central

    Lütcke, Henry; Gerhard, Felipe; Zenke, Friedemann; Gerstner, Wulfram; Helmchen, Fritjof

    2013-01-01

    Two-photon calcium imaging enables functional analysis of neuronal circuits by inferring action potential (AP) occurrence (“spike trains”) from cellular fluorescence signals. It remains unclear how experimental parameters such as signal-to-noise ratio (SNR) and acquisition rate affect spike inference and whether additional information about network structure can be extracted. Here we present a simulation framework for quantitatively assessing how well spike dynamics and network topology can be inferred from noisy calcium imaging data. For simulated AP-evoked calcium transients in neocortical pyramidal cells, we analyzed the quality of spike inference as a function of SNR and data acquisition rate using a recently introduced peeling algorithm. Given experimentally attainable values of SNR and acquisition rate, neural spike trains could be reconstructed accurately and with up to millisecond precision. We then applied statistical neuronal network models to explore how remaining uncertainties in spike inference affect estimates of network connectivity and topological features of network organization. We define the experimental conditions suitable for inferring whether the network has a scale-free structure and determine how well hub neurons can be identified. Our findings provide a benchmark for future calcium imaging studies that aim to reliably infer neuronal network properties. PMID:24399936

  7. Detecting network communities beyond assortativity-related attributes

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Murata, Tsuyoshi; Wakita, Ken

    2014-07-01

    In network science, assortativity refers to the tendency of links to exist between nodes with similar attributes. In social networks, for example, links tend to exist between individuals of similar age, nationality, location, race, income, educational level, religious belief, and language. Thus, various attributes jointly affect the network topology. An interesting problem is to detect community structure beyond some specific assortativity-related attributes ρ, i.e., to take out the effect of ρ on network topology and reveal the hidden community structures which are due to other attributes. An approach to this problem is to redefine the null model of the modularity measure, so as to simulate the effect of ρ on network topology. However, a challenge is that we do not know to what extent the network topology is affected by ρ and by other attributes. In this paper, we propose a distance modularity, which allows us to freely choose any suitable function to simulate the effect of ρ. Such freedom can help us probe the effect of ρ and detect the hidden communities which are due to other attributes. We test the effectiveness of distance modularity on synthetic benchmarks and two real-world networks.
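
    The core idea of distance modularity is that the configuration-model null term k_i k_j / 2m is replaced by a user-chosen function of the attribute rho. The Python sketch below makes that concrete on an invented four-node graph with a made-up exponential distance decay; it is not the paper's definition, only a plausible instance of the general form.

        import numpy as np

        A = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        pos = np.array([0.0, 0.1, 0.9, 1.0])        # attribute (e.g. location) per node
        communities = [0, 0, 1, 1]                  # candidate partition

        k = A.sum(axis=1)
        two_m = A.sum()

        def null_term(i, j):
            # Hypothetical attribute-aware null model: closer nodes are expected
            # to link more often. Any suitable function could be used here.
            decay = np.exp(-abs(pos[i] - pos[j]))
            return k[i] * k[j] * decay / two_m

        Q = sum((A[i, j] - null_term(i, j)) / two_m
                for i in range(len(A)) for j in range(len(A))
                if communities[i] == communities[j])
        print(f"modularity with a distance-aware null model: {Q:.3f}")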

  8. Traffic sign recognition based on deep convolutional neural network

    NASA Astrophysics Data System (ADS)

    Yin, Shi-hao; Deng, Ji-cai; Zhang, Da-wei; Du, Jing-yuan

    2017-11-01

    Traffic sign recognition (TSR) is an important component of automated driving systems. It is a rather challenging task to design a high-performance classifier for the TSR system. In this paper, we propose a new method for the TSR system based on a deep convolutional neural network. In order to enhance the expressive power of the network, a novel structure (dubbed block-layer below) which combines network-in-network and residual connection is designed. Our network has 10 layers with parameters (a block-layer seen as a single layer): the first seven are alternating convolutional layers and block-layers, and the remaining three are fully-connected layers. We train our TSR network on the German traffic sign recognition benchmark (GTSRB) dataset. To reduce overfitting, we perform data augmentation on the training images and employ a regularization method named "dropout". The activation function we employ in our network adopts scaled exponential linear units (SELUs), which can induce self-normalizing properties. To speed up the training, we use an efficient GPU to accelerate the convolutional operation. On the test dataset of GTSRB, we achieve an accuracy rate of 99.67%, exceeding the state-of-the-art results.

  9. Characterizing system dynamics with a weighted and directed network constructed from time series data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Xiaoran, E-mail: sxr0806@gmail.com; School of Mathematics and Statistics, The University of Western Australia, Crawley WA 6009; Small, Michael, E-mail: michael.small@uwa.edu.au

    In this work, we propose a novel method to transform a time series into a weighted and directed network. For a given time series, we first generate a set of segments via a sliding window, and then use a doubly symbolic scheme to characterize every windowed segment by combining absolute amplitude information with an ordinal pattern characterization. Based on this construction, a network can be directly constructed from the given time series: segments corresponding to different symbol-pairs are mapped to network nodes and the temporal succession between nodes is represented by directed links. With this conversion, dynamics underlying the time series has been encoded into the network structure. We illustrate the potential of our networks with a well-studied dynamical model as a benchmark example. Results show that network measures for characterizing global properties can detect the dynamical transitions in the underlying system. Moreover, we employ a random walk algorithm to sample loops in our networks, and find that time series with different dynamics exhibits distinct cycle structure. That is, the relative prevalence of loops with different lengths can be used to identify the underlying dynamics.
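
    A stripped-down version of this construction can be written in a few lines of Python: sliding windows are reduced to symbols (here only the ordinal pattern; the paper also keeps amplitude information), symbols become nodes, and temporal succession between consecutive windows becomes weighted directed links. The signal and window length below are arbitrary.

        from collections import Counter
        import numpy as np

        x = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.randn(500)
        w = 3                                            # window length

        def ordinal_symbol(segment):
            return tuple(np.argsort(segment))            # ranking pattern of the window

        symbols = [ordinal_symbol(x[i:i + w]) for i in range(len(x) - w + 1)]

        # Directed edge (s_t -> s_{t+1}) weighted by how often the transition occurs.
        edges = Counter(zip(symbols[:-1], symbols[1:]))
        for (src, dst), weight in list(edges.items())[:5]:
            print(src, "->", dst, "weight", weight)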

  10. Combined Simulated Annealing and Genetic Algorithm Approach to Bus Network Design

    NASA Astrophysics Data System (ADS)

    Liu, Li; Olszewski, Piotr; Goh, Pong-Chai

    A new method, a combined simulated annealing (SA) and genetic algorithm (GA) approach, is proposed to solve the problem of bus route design and frequency setting for a given road network with fixed bus stop locations and fixed travel demand. The method involves two steps: a set of candidate routes is generated first and then the best subset of these routes is selected by the combined SA and GA procedure. SA is the main process to search for a better solution to minimize the total system cost, comprising user and operator costs. GA is used as a sub-process to generate new solutions. Bus demand assignment on two alternative paths is performed at the solution evaluation stage. The method was implemented on four theoretical grid networks of different size and a benchmark network. Several GA operators (crossover and mutation) were utilized and tested for their effectiveness. The results show that the proposed method can efficiently converge to the optimal solution on a small network but computation time increases significantly with network size. The method can also be used for other transport operation management problems.

  11. Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN).

    PubMed

    Iqbal, Sajid; Ghani, M Usman; Saba, Tanzila; Rehman, Amjad

    2018-04-01

    A tumor can be found in any area of the brain and can be of any size, shape, and contrast. Multiple tumors of different types may exist in a human brain at the same time. Accurate tumor area segmentation is considered the primary step for treatment of brain tumors. Deep learning is a set of promising techniques that can provide better results than non-deep-learning techniques for segmenting the tumorous part inside a brain. This article presents a deep convolutional neural network (CNN) to segment brain tumors in MRIs. The proposed network uses the BRATS segmentation challenge dataset, which is composed of images obtained through four different modalities. Accordingly, we present an extended version of an existing network to solve the segmentation problem. The network architecture consists of multiple neural network layers connected in sequential order, with convolutional feature maps fed in at the peer level. Experimental results on the BRATS 2015 benchmark data show the usability of the proposed approach and its superiority over other approaches in this area of research. © 2018 Wiley Periodicals, Inc.

  12. FindPrimaryPairs: An efficient algorithm for predicting element-transferring reactant/product pairs in metabolic networks.

    PubMed

    Steffensen, Jon Lund; Dufault-Thompson, Keith; Zhang, Ying

    2018-01-01

    The metabolism of individual organisms and biological communities can be viewed as a network of metabolites connected to each other through chemical reactions. In metabolic networks, chemical reactions transform reactants into products, thereby transferring elements between these metabolites. Knowledge of how elements are transferred through reactant/product pairs allows for the identification of primary compound connections through a metabolic network. However, such information is not readily available and is often challenging to obtain for large reaction databases or genome-scale metabolic models. In this study, a new algorithm was developed for automatically predicting the element-transferring reactant/product pairs using the limited information available in the standard representation of metabolic networks. The algorithm demonstrated high efficiency in analyzing large datasets and provided accurate predictions when benchmarked with manually curated data. Applying the algorithm to the visualization of metabolic networks highlighted pathways of primary reactant/product connections and provided an organized view of element-transferring biochemical transformations. The algorithm was implemented as a new function in the open source software package PSAMM in the release v0.30 (https://zhanglab.github.io/psamm/).

  13. A novel community detection method in bipartite networks

    NASA Astrophysics Data System (ADS)

    Zhou, Cangqi; Feng, Liang; Zhao, Qianchuan

    2018-02-01

    Community structure is a common and important feature in many complex networks, including bipartite networks, which are used as a standard model for many empirical networks comprised of two types of nodes. In this paper, we propose a two-stage method for detecting community structure in bipartite networks. Firstly, we extend the widely-used Louvain algorithm to bipartite networks. The effectiveness and efficiency of the Louvain algorithm have been proved by many applications; however, a Louvain-like algorithm specifically adapted to bipartite networks has been lacking. Based on bipartite modularity, a measure that extends unipartite modularity and quantifies the strength of partitions in bipartite networks, we fill the gap by developing the Bi-Louvain algorithm, which iteratively groups the nodes in each part by turns. In bipartite networks this algorithm often produces a balanced network structure with equal numbers of the two types of nodes. Secondly, for the balanced network yielded by the first algorithm, we use an agglomerative clustering method to further cluster the network. We demonstrate that the calculation of the modularity gain of each aggregation, and the operation of joining two communities, can be computed compactly by matrix operations for all pairs of communities simultaneously. Finally, a complete hierarchical community structure is unfolded. We apply our method to two benchmark data sets and a large-scale data set from an e-commerce company, showing that it effectively identifies community structure in bipartite networks.
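
    The quantity optimized in the first stage is bipartite modularity; a minimal Barber-style version is sketched below in Python on an invented biadjacency matrix and partition, only to make the formula concrete (the Bi-Louvain optimization itself is not shown).

        import numpy as np

        B = np.array([[1, 1, 0],        # rows: one node type, columns: the other
                      [1, 1, 0],
                      [0, 0, 1]], dtype=float)
        row_comm = [0, 0, 1]            # community label of each row node
        col_comm = [0, 0, 1]            # community label of each column node

        m = B.sum()                     # number of edges
        k_row = B.sum(axis=1)
        k_col = B.sum(axis=0)

        Q = sum((B[i, j] - k_row[i] * k_col[j] / m) / m
                for i in range(B.shape[0]) for j in range(B.shape[1])
                if row_comm[i] == col_comm[j])
        print(f"bipartite modularity: {Q:.3f}")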

  14. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  15. CPE--A New Perspective: The Impact of the Technology Revolution. Proceedings of the Computer Performance Evaluation Users Group Meeting (19th, San Francisco, California, October 25-28, 1983). Final Report. Reports on Computer Science and Technology.

    ERIC Educational Resources Information Center

    Mobray, Deborah, Ed.

    Papers on local area networks (LANs), modelling techniques, software improvement, capacity planning, software engineering, microcomputers and end user computing, cost accounting and chargeback, configuration and performance management, and benchmarking presented at this conference include: (1) "Theoretical Performance Analysis of Virtual…

  16. Extremely Lightweight Intrusion Detection (ELIDe)

    DTIC Science & Technology

    2013-12-01

    devices that would be more commonly found in a dynamic tactical environment. As a point of reference, the Raspberry Pi single-chip computer (4) is...the ELIDe application onto a resource-constrained hardware platform more likely to be used in a mobile tactical network, and the Raspberry Pi was...chosen as that representative platform. ELIDe was successfully tested on a Raspberry Pi; its throughput was benchmarked at approximately 8.3 megabits

  17. Biomarker Identification for Prostate Cancer and Lymph Node Metastasis from Microarray Data and Protein Interaction Network Using Gene Prioritization Method

    PubMed Central

    Arias, Carlos Roberto; Yeh, Hsiang-Yuan; Soo, Von-Wun

    2012-01-01

    Finding a genetic disease-related gene is not a trivial task. Therefore, computational methods are needed to give the biomedical community clues about which genes are more likely to be related to a specific disease as biomarkers. We present the biomarker identification problem using a gene prioritization method called gene prioritization from microarray data based on shortest paths, extended with structural and biological properties and edge flux using a voting scheme (GP-MIDAS-VXEF). The method is based on finding relevant interactions on protein interaction networks, then scoring the genes using shortest paths and topological analysis, and integrating the results using a voting scheme and biological boosting. We ran two experiments, one on primary prostate tumor and normal samples and the other on primary prostate tumors with and without lymph node metastasis. We used 137 known prostate cancer genes as a benchmark. In the first experiment, GP-MIDAS-VXEF outperforms all the other state-of-the-art methods in the benchmark by retrieving the most true disease-related genes from the candidate set within the top 50 scores. We applied the same technique to infer significant biomarkers in prostate cancer with lymph node metastasis, which is not yet well established. PMID:22654636

  18. Hierarchical Kohonenen net for anomaly detection in network security.

    PubMed

    Sarasamma, Suseela T; Zhu, Qiuming A; Huff, Julie

    2005-04-01

    A novel multilevel hierarchical Kohonen Net (K-Map) for an intrusion detection system is presented. Each level of the hierarchical map is modeled as a simple winner-take-all K-Map. One significant advantage of this multilevel hierarchical K-Map is its computational efficiency. Unlike other statistical anomaly detection methods such as nearest neighbor approach, K-means clustering or probabilistic analysis that employ distance computation in the feature space to identify the outliers, our approach does not involve costly point-to-point computation in organizing the data into clusters. Another advantage is the reduced network size. We use the classification capability of the K-Map on selected dimensions of data set in detecting anomalies. Randomly selected subsets that contain both attacks and normal records from the KDD Cup 1999 benchmark data are used to train the hierarchical net. We use a confidence measure to label the clusters. Then we use the test set from the same KDD Cup 1999 benchmark to test the hierarchical net. We show that a hierarchical K-Map in which each layer operates on a small subset of the feature space is superior to a single-layer K-Map operating on the whole feature space in detecting a variety of attacks in terms of detection rate as well as false positive rate.
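
    Each level of the hierarchy above is a winner-take-all Kohonen map; the Python sketch below shows a single such layer on random data, with the winning prototype pulled toward each record. The data, map size, and learning rate are placeholders, not the KDD Cup 1999 setup.

        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.normal(size=(200, 4))         # stand-in for selected feature columns
        prototypes = rng.normal(size=(6, 4))     # six map units
        lr = 0.1

        for epoch in range(20):
            for x in data:
                winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
                prototypes[winner] += lr * (x - prototypes[winner])   # move the winner only

        # Cluster assignments that would later be labelled as normal or attack.
        labels = [int(np.argmin(np.linalg.norm(prototypes - x, axis=1))) for x in data]
        print(labels[:10])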

  19. Self-supervised ARTMAP.

    PubMed

    Amis, Gregory P; Carpenter, Gail A

    2010-03-01

    Computational models of learning typically train on labeled input patterns (supervised learning), unlabeled input patterns (unsupervised learning), or a combination of the two (semi-supervised learning). In each case input patterns have a fixed number of features throughout training and testing. Human and machine learning contexts present additional opportunities for expanding incomplete knowledge from formal training, via self-directed learning that incorporates features not previously experienced. This article defines a new self-supervised learning paradigm to address these richer learning contexts, introducing a neural network called self-supervised ARTMAP. Self-supervised learning integrates knowledge from a teacher (labeled patterns with some features), knowledge from the environment (unlabeled patterns with more features), and knowledge from internal model activation (self-labeled patterns). Self-supervised ARTMAP learns about novel features from unlabeled patterns without destroying partial knowledge previously acquired from labeled patterns. A category selection function bases system predictions on known features, and distributed network activation scales unlabeled learning to prediction confidence. Slow distributed learning on unlabeled patterns focuses on novel features and confident predictions, defining classification boundaries that were ambiguous in the labeled patterns. Self-supervised ARTMAP improves test accuracy on illustrative low-dimensional problems and on high-dimensional benchmarks. Model code and benchmark data are available from: http://techlab.eu.edu/SSART/. Copyright 2009 Elsevier Ltd. All rights reserved.

  20. Comparing the accuracy of high-dimensional neural network potentials and the systematic molecular fragmentation method: A benchmark study for all-trans alkanes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gastegger, Michael; Kauffmann, Clemens; Marquetand, Philipp, E-mail: philipp.marquetand@univie.ac.at

    Many approaches, which have been developed to express the potential energy of large systems, exploit the locality of the atomic interactions. A prominent example is the fragmentation methods, in which the quantum chemical calculations are carried out for overlapping small fragments of a given molecule that are then combined in a second step to yield the system's total energy. Here we compare the accuracy of the systematic molecular fragmentation approach with the performance of high-dimensional neural network (HDNN) potentials introduced by Behler and Parrinello. HDNN potentials are similar in spirit to the fragmentation approach in that the total energy is constructed as a sum of environment-dependent atomic energies, which are derived indirectly from electronic structure calculations. As a benchmark set, we use all-trans alkanes containing up to eleven carbon atoms at the coupled cluster level of theory. These molecules have been chosen because they allow reliable reference energies to be extrapolated for very long chains, enabling an assessment of the energies obtained by both methods for alkanes including up to 10,000 carbon atoms. We find that both methods predict high-quality energies, with the HDNN potentials yielding smaller errors with respect to the coupled cluster reference.
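
    The HDNN construction referred to above writes the total energy as a sum of atomic energies, each a function of the atom's local environment. The Python sketch below is only schematic: the two-number descriptor and the linear "atomic network" are placeholders for Behler-Parrinello symmetry functions and trained element-specific networks.

        import numpy as np

        def environment_descriptor(positions, i, cutoff=4.0):
            # Toy descriptor: neighbour count and mean neighbour distance within a cutoff.
            d = np.linalg.norm(positions - positions[i], axis=1)
            neighbours = d[(d > 0) & (d < cutoff)]
            return np.array([len(neighbours), neighbours.mean() if len(neighbours) else 0.0])

        def atomic_energy(descriptor, weights=np.array([-0.1, 0.02]), bias=-1.0):
            # Stand-in for a trained atomic neural network.
            return float(descriptor @ weights + bias)

        positions = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.0, 0.0]])
        total_energy = sum(atomic_energy(environment_descriptor(positions, i))
                           for i in range(len(positions)))
        print(f"total energy (arbitrary units): {total_energy:.3f}")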

  1. Improving Protein Fold Recognition by Deep Learning Networks

    NASA Astrophysics Data System (ADS)

    Jo, Taeho; Hou, Jie; Eickholt, Jesse; Cheng, Jianlin

    2015-12-01

    For accurate recognition of protein folds, a deep learning network method (DN-Fold) was developed to predict whether a given query-template protein pair belongs to the same structural fold. The input stemmed from the protein sequence and structural features extracted from the protein pair. We evaluated the performance of DN-Fold along with 18 different methods on Lindahl's benchmark dataset and on a large benchmark set extracted from SCOP 1.75 consisting of about one million protein pairs, at three different levels of fold recognition (i.e., protein family, superfamily, and fold) depending on the evolutionary distance between protein sequences. The correct recognition rate of ensembled DN-Fold for Top 1 predictions is 84.5%, 61.5%, and 33.6%, and for Top 5 is 91.2%, 76.5%, and 60.7% at the family, superfamily, and fold levels, respectively. We also evaluated the performance of single DN-Fold (DN-FoldS), which showed comparable results at the family and superfamily levels compared to ensembled DN-Fold. Finally, we extended the binary classification problem of fold recognition to a real-valued regression task, which also shows promising performance. DN-Fold is freely available through a web server at http://iris.rnet.missouri.edu/dnfold.

  2. Development and Application of Benchmark Examples for Mixed-Mode I/II Quasi-Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation prediction is presented and demonstrated for a commercial code. The examples are based on finite element models of the Mixed-Mode Bending (MMB) specimen. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, quasi-static benchmark examples were created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement relationship from a propagation analysis was compared with the benchmark results; good agreement could be achieved by selecting input parameters that had previously been determined during analyses of mode I Double Cantilever Beam and mode II End Notched Flexure specimens. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.

  3. Prediction of Excess Weight Loss after Laparoscopic Roux-en-Y Gastric Bypass: Data from an Artificial Neural Network

    PubMed Central

    Wise, Eric S.; Hocking, Kyle M.; Kavic, Stephen M.

    2015-01-01

    Introduction Laparoscopic Roux-en-Y Gastric Bypass (LRYGB) has become the gold standard for surgical weight loss. The success of LRYGB may be measured by excess body-mass index loss (%EBMIL) over 25 kg/m2, which is partially determined by multiple patient factors. In this study, artificial neural network (ANN) modeling was used to derive a reasonable estimate of expected postoperative weight loss using only known preoperative patient variables. Additionally, ANN modeling allowed for the discriminant prediction of achievement of benchmark 50% EBMIL at one year postoperatively. Methods Six-hundred and forty-seven LRYGB included patients were retrospectively reviewed for preoperative factors independently associated with EBMIL at 180 and 365 days postoperatively (EBMIL180 and EBMIL365, respectively). Previously validated factors were selectively analyzed, including age; race; gender; preoperative BMI (BMI0); hemoglobin; and diagnoses of hypertension (HTN), diabetes mellitus (DM), and depression or anxiety disorder. Variables significant upon multivariate analysis (P<.05) were modeled by “traditional” multiple linear regression and an ANN, to predict %EBMIL180 and %EBMIL365. Results The mean EBMIL180 and EBMIL365 were 56.4%±16.5% and 73.5%±21.5%, corresponding to total body weight losses of 25.7%±5.9% and 33.6%±8.0%, respectively. Upon multivariate analysis, independent factors associated with EBMIL180 included black race (B=−6.3%, P<.001), BMI0 (B=−1.1%/unit BMI, P<.001) and DM (B=−3.2%, P<.004). For EBMIL365, independently associated factors were female gender (B=6.4%, P<.001), black race (B=−6.7%, P<.001), BMI0 (B=−1.2%/unit BMI, P<.001), HTN (B=−3.7%, P=.03) and DM (B=−6.0%, P<.001). Pearson r2 values for the multiple linear regression and ANN models were .38 (EBMIL180) and .35 (EBMIL365), and .42 (EBMIL180) and .38 (EBMIL365), respectively. ANN-prediction of benchmark 50% EBMIL at 365 days generated an area under the curve of 0.78±0.03 in the training set (n=518), and 0.83±0.04 (n=129) in the validation set. Conclusions Available at https://redcap.vanderbilt.edu/surveys/?s=3HCR43AKXR, this, or other ANN models may be used to provide an optimized estimate of postoperative EBMIL following LRYGB. PMID:26017908

  4. Reliability of calculation of the lithosphere deformations in tectonically stable area of Poland based on the GPS measurements

    NASA Astrophysics Data System (ADS)

    Araszkiewicz, Andrzej; Jarosiński, Marek

    2013-04-01

    In this research we aimed to check whether GPS observations can be used to calculate a reliable deformation pattern of the intracontinental lithosphere in seismically inactive areas, such as the territory of Poland. For this purpose we used data mainly from the ASG-EUPOS permanent network and the solutions developed by the MUT CAG team (Military University of Technology: Centre of Applied Geomatics). Of the 128 analyzed stations, almost 100 are mounted on buildings. Daily observations were processed in the Bernese 5.0 software, and the weekly solutions were then used to determine the station velocities expressed in ETRF2000. The strain rates were determined for almost 200 triangles with GPS stations at their corners, constructed using Delaunay triangulation. The scattered directions of deformation and highly variable strain-rate values obtained point to antenna monumentation that is insufficiently stable for geodynamic studies. In order to identify poorly stabilized stations, we carried out a benchmark test showing the effect that a single station drift can have on deformations in the adjoining triangles. Based on the benchmark results, we eliminated from our network the stations that showed a deformation pattern characteristic of an unstable station. After several rounds of strain rate calculations and eliminations of dubious points, we reduced the number of stations to 60. The refined network revealed a more consistent deformation pattern across Poland. The deformations, compared with the recent stress field of the study area, show good correlation in some places and significant discrepancies in others, which will be the subject of future research.
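
    The per-triangle strain rates mentioned above come from fitting a uniform velocity gradient to the three station velocities of each Delaunay triangle; the symmetric part of that gradient is the horizontal strain-rate tensor. The Python sketch below uses invented coordinates (km) and velocities (mm/yr), so the numbers carry no geophysical meaning.

        import numpy as np

        xy = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0]])   # station positions (km)
        v = np.array([[1.0, 0.5], [1.4, 0.6], [1.1, 0.9]])      # velocities (mm/yr, E and N)

        # Solve v_i = v0 + L @ x_i for the translation v0 and the 2x2 velocity gradient L.
        G = np.hstack([np.ones((3, 1)), xy])                    # design matrix
        coef_e = np.linalg.solve(G, v[:, 0])                    # east component
        coef_n = np.linalg.solve(G, v[:, 1])                    # north component
        L = np.array([coef_e[1:], coef_n[1:]])                  # velocity gradient tensor

        strain_rate = 0.5 * (L + L.T)        # symmetric part, in mm/yr per km
        print(strain_rate)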

  5. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    NASA Astrophysics Data System (ADS)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N) whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes; in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented/tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs mit.edu)

  6. Robust nonlinear variable selective control for networked systems

    NASA Astrophysics Data System (ADS)

    Rahmani, Behrooz

    2016-10-01

    This paper is concerned with the networked control of a class of uncertain nonlinear systems. To this end, Takagi-Sugeno (T-S) fuzzy modelling is used to extend the previously proposed variable selective control (VSC) methodology to nonlinear systems. This extension is based upon the decomposition of the nonlinear system into a set of fuzzy-blended locally linearised subsystems and further application of the VSC methodology to each subsystem. To increase the applicability of the T-S approach for uncertain nonlinear networked control systems, this study considers asynchronous premise variables in the plant and the controller, and then introduces a robust stability analysis and control synthesis. The resulting optimal switching-fuzzy controller provides a minimum guaranteed cost on an H2 performance index. Simulation studies on three nonlinear benchmark problems demonstrate the effectiveness of the proposed method.

  7. Exploiting Publication Contents and Collaboration Networks for Collaborator Recommendation

    PubMed Central

    Kong, Xiangjie; Jiang, Huizhen; Yang, Zhuo; Xu, Zhenzhen; Xia, Feng; Tolba, Amr

    2016-01-01

    Thanks to the proliferation of online social networks, it has become conventional for researchers to communicate and collaborate with each other. Meanwhile, one critical challenge arises: how to find the most relevant and promising potential collaborators for each researcher. In this work, we propose a novel collaborator recommendation model called CCRec, which combines information on researchers’ publications and their collaboration network to generate better recommendations. In order to effectively identify the most promising potential collaborators for researchers, we adopt a topic clustering model to identify the academic domains, as well as a random walk model to compute researchers’ feature vectors. Using DBLP datasets, we conduct benchmarking experiments to examine the performance of CCRec. The experimental results show that CCRec outperforms other state-of-the-art methods in terms of precision, recall and F1 score. PMID:26849682

  8. Parallel ALLSPD-3D: Speeding Up Combustor Analysis Via Parallel Processing

    NASA Technical Reports Server (NTRS)

    Fricker, David M.

    1997-01-01

    The ALLSPD-3D Computational Fluid Dynamics code for reacting flow simulation was run on a set of benchmark test cases to determine its parallel efficiency. These test cases included non-reacting and reacting flow simulations with varying numbers of processors. Also, the tests explored the effects of scaling the simulation with the number of processors in addition to distributing a constant size problem over an increasing number of processors. The test cases were run on a cluster of IBM RS/6000 Model 590 workstations with ethernet and ATM networking plus a shared memory SGI Power Challenge L workstation. The results indicate that the network capabilities significantly influence the parallel efficiency, i.e., a shared memory machine is fastest and ATM networking provides acceptable performance. The limitations of ethernet greatly hamper the rapid calculation of flows using ALLSPD-3D.

  9. Functional complexity emerging from anatomical constraints in the brain: the significance of network modularity and rich-clubs

    NASA Astrophysics Data System (ADS)

    Zamora-López, Gorka; Chen, Yuhan; Deco, Gustavo; Kringelbach, Morten L.; Zhou, Changsong

    2016-12-01

    The large-scale structural ingredients of the brain and neural connectomes have been identified in recent years. These are, similar to the features found in many other real networks, the arrangement of brain regions into modules and the presence of highly connected regions (hubs) forming rich-clubs. Here, we examine how modules and hubs shape the collective dynamics on networks, and we find that both ingredients lead to the emergence of complex dynamics. Comparing the connectomes of C. elegans, cats, macaques and humans to surrogate networks in which either modules or hubs are destroyed, we find that functional complexity always decreases in the perturbed networks. A comparison between simulated and empirically obtained resting-state functional connectivity indicates that the human brain, at rest, lies in a dynamical state that reflects the largest complexity its anatomical connectome can host. Last, we generalise the topology of neural connectomes into a new hierarchical network model that successfully combines modular organisation with rich-club forming hubs. This is achieved by centralising the cross-modular connections through a preferential attachment rule. Our network model hosts more complex dynamics than other hierarchical models widely used as benchmarks.
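
    The exact construction of the authors' hierarchical model is not reproduced here, but the following sketch, under assumed illustrative parameters, shows the general idea the abstract describes: dense random modules combined with cross-modular links whose endpoints are drawn preferentially towards high-degree nodes, so that a small set of hubs centralises inter-module connections.

```python
import random
import networkx as nx

def modular_richclub_graph(n_modules=4, module_size=50, p_in=0.2, n_cross=200, seed=0):
    """Toy hierarchical network: dense modules plus cross-module edges whose
    endpoints are chosen with probability proportional to node degree."""
    rng = random.Random(seed)
    G = nx.Graph()
    for m in range(n_modules):                       # dense intra-module wiring
        module = nx.gnp_random_graph(module_size, p_in, seed=seed + m)
        mapping = {u: m * module_size + u for u in module.nodes()}
        G = nx.compose(G, nx.relabel_nodes(module, mapping))
    nodes = list(G.nodes())
    for _ in range(n_cross):                         # preferential cross-module links
        weights = [G.degree(u) + 1 for u in nodes]
        u, v = rng.choices(nodes, weights=weights, k=2)
        if u != v and u // module_size != v // module_size:
            G.add_edge(u, v)
    return G

G = modular_richclub_graph()
print("density:", round(nx.density(G), 3),
      "top degrees:", sorted(dict(G.degree()).values())[-5:])
```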

  10. Functional complexity emerging from anatomical constraints in the brain: the significance of network modularity and rich-clubs

    PubMed Central

    Zamora-López, Gorka; Chen, Yuhan; Deco, Gustavo; Kringelbach, Morten L.; Zhou, Changsong

    2016-01-01

    The large-scale structural ingredients of the brain and neural connectomes have been identified in recent years. These are, similar to the features found in many other real networks: the arrangement of brain regions into modules and the presence of highly connected regions (hubs) forming rich-clubs. Here, we examine how modules and hubs shape the collective dynamics on networks and we find that both ingredients lead to the emergence of complex dynamics. Comparing the connectomes of C. elegans, cats, macaques and humans to surrogate networks in which either modules or hubs are destroyed, we find that functional complexity always decreases in the perturbed networks. A comparison between simulated and empirically obtained resting-state functional connectivity indicates that the human brain, at rest, lies in a dynamical state that reflects the largest complexity its anatomical connectome can host. Last, we generalise the topology of neural connectomes into a new hierarchical network model that successfully combines modular organisation with rich-club forming hubs. This is achieved by centralising the cross-modular connections through a preferential attachment rule. Our network model hosts more complex dynamics than other hierarchical models widely used as benchmarks. PMID:27917958

  11. Community coalitions as a system: effects of network change on adoption of evidence-based substance abuse prevention.

    PubMed

    Valente, Thomas W; Chou, Chich Ping; Pentz, Mary Ann

    2007-05-01

    We examined the effect of community coalition network structure on the effectiveness of an intervention designed to accelerate the adoption of evidence-based substance abuse prevention programs. At baseline, 24 cities were matched and randomly assigned to 3 conditions (control, satellite TV training, and training plus technical assistance). We surveyed 415 community leaders at baseline and 406 at 18-month follow-up about their attitudes and practices toward substance abuse prevention programs. Network structure was measured by asking leaders whom in their coalition they turned to for advice about prevention programs. The outcome was a scale with 4 subscales: coalition function, planning, achievement of benchmarks, and progress in prevention activities. We used multiple linear regression and path analysis to test hypotheses. The intervention had a significant effect, decreasing the density of coalition networks. The change in density subsequently increased adoption of evidence-based practices. Optimal community network structures for the adoption of public health programs are unknown, but it should not be assumed that increasing network density or centralization is an appropriate goal. Lower-density networks may be more efficient for organizing evidence-based prevention programs in communities.

  12. A clustering algorithm for determining community structure in complex networks

    NASA Astrophysics Data System (ADS)

    Jin, Hong; Yu, Wei; Li, ShiJun

    2018-02-01

    Clustering algorithms are attractive for the task of community detection in complex networks. DENCLUE is a representative density-based clustering algorithm which has a firm mathematical basis and good clustering properties, allowing for arbitrarily shaped clusters in high-dimensional datasets. However, this method cannot be directly applied to community discovery due to its inability to deal with network data. Moreover, it requires a careful selection of the density parameter and the noise threshold. To solve these issues, a new community detection method is proposed in this paper. First, we use a spectral analysis technique to map the network data into a low-dimensional Euclidean space which can preserve node structural characteristics. Then, DENCLUE is applied to detect the communities in the network. A mathematical method named the Sheather-Jones plug-in is chosen to select the density parameter, which can describe the intrinsic clustering structure accurately. Moreover, every node in the network is meaningful, so there are no noise nodes and the noise threshold can be ignored. We test our algorithm on both benchmark and real-life networks, and the results demonstrate the effectiveness of our algorithm over other popular density-based clustering algorithms adapted to community detection.
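
    A minimal sketch of the pipeline the abstract describes, under stated substitutions, is given below: the network is embedded with eigenvectors of the normalized Laplacian, and the embedded points are then clustered with a density-based method. DBSCAN from scikit-learn is used here purely as a readily available stand-in for DENCLUE, and the Sheather-Jones bandwidth selection is not shown; unlike the authors' method, DBSCAN may also label some nodes as noise (-1).

```python
import numpy as np
import networkx as nx
from sklearn.cluster import DBSCAN

def spectral_density_communities(G, dim=2, eps=0.2, min_samples=3):
    """Spectral embedding of the graph followed by density-based clustering
    (DBSCAN as a stand-in for DENCLUE)."""
    nodes = list(G.nodes())
    L = nx.normalized_laplacian_matrix(G, nodelist=nodes).toarray()
    vals, vecs = np.linalg.eigh(L)            # L is symmetric, eigh is appropriate
    embedding = vecs[:, 1:dim + 1]            # skip the trivial eigenvector
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(embedding)
    return dict(zip(nodes, labels))

G = nx.karate_club_graph()                    # small benchmark network
communities = spectral_density_communities(G)
print("labels found:", sorted(set(communities.values())))
```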

  13. RuleMonkey: software for stochastic simulation of rule-based models

    PubMed Central

    2010-01-01

    Background The system-level dynamics of many molecular interactions, particularly protein-protein interactions, can be conveniently represented using reaction rules, which can be specified using model-specification languages, such as the BioNetGen language (BNGL). A set of rules implicitly defines a (bio)chemical reaction network. The reaction network implied by a set of rules is often very large, and as a result, generation of the network implied by rules tends to be computationally expensive. Moreover, the cost of many commonly used methods for simulating network dynamics is a function of network size. Together these factors have limited application of the rule-based modeling approach. Recently, several methods for simulating rule-based models have been developed that avoid the expensive step of network generation. The cost of these "network-free" simulation methods is independent of the number of reactions implied by rules. Software implementing such methods is now needed for the simulation and analysis of rule-based models of biochemical systems. Results Here, we present a software tool called RuleMonkey, which implements a network-free method for simulation of rule-based models that is similar to Gillespie's method. The method is suitable for rule-based models that can be encoded in BNGL, including models with rules that have global application conditions, such as rules for intramolecular association reactions. In addition, the method is rejection free, unlike other network-free methods that introduce null events, i.e., steps in the simulation procedure that do not change the state of the reaction system being simulated. We verify that RuleMonkey produces correct simulation results, and we compare its performance against DYNSTOC, another BNGL-compliant tool for network-free simulation of rule-based models. We also compare RuleMonkey against problem-specific codes implementing network-free simulation methods. Conclusions RuleMonkey enables the simulation of rule-based models for which the underlying reaction networks are large. It is typically faster than DYNSTOC for benchmark problems that we have examined. RuleMonkey is freely available as a stand-alone application http://public.tgen.org/rulemonkey. It is also available as a simulation engine within GetBonNie, a web-based environment for building, analyzing and sharing rule-based models. PMID:20673321

  14. Think locally, act locally: detection of small, medium-sized, and large communities in large networks.

    PubMed

    Jeub, Lucas G S; Balachandran, Prakash; Porter, Mason A; Mucha, Peter J; Mahoney, Michael W

    2015-01-01

    It is common in the study of networks to investigate intermediate-sized (or "meso-scale") features to try to gain an understanding of network structure and function. For example, numerous algorithms have been developed to try to identify "communities," which are typically construed as sets of nodes with denser connections internally than with the remainder of a network. In this paper, we adopt a complementary perspective that communities are associated with bottlenecks of locally biased dynamical processes that begin at seed sets of nodes, and we employ several different community-identification procedures (using diffusion-based and geodesic-based dynamics) to investigate community quality as a function of community size. Using several empirical and synthetic networks, we identify several distinct scenarios for "size-resolved community structure" that can arise in real (and realistic) networks: (1) the best small groups of nodes can be better than the best large groups (for a given formulation of the idea of a good community); (2) the best small groups can have a quality that is comparable to the best medium-sized and large groups; and (3) the best small groups of nodes can be worse than the best large groups. As we discuss in detail, which of these three cases holds for a given network can make an enormous difference when investigating and making claims about network community structure, and it is important to take this into account to obtain reliable downstream conclusions. Depending on which scenario holds, one may or may not be able to successfully identify "good" communities in a given network (and good communities might not even exist for a given community quality measure), the manner in which different small communities fit together to form meso-scale network structures can be very different, and processes such as viral propagation and information diffusion can exhibit very different dynamics. In addition, our results suggest that, for many large realistic networks, the output of locally biased methods that focus on communities that are centered around a given seed node (or set of seed nodes) might have better conceptual grounding and greater practical utility than the output of global community-detection methods. They also illustrate structural properties that are important to consider in the development of better benchmark networks to test methods for community detection.

  15. Think Locally, Act Locally: The Detection of Small, Medium-Sized, and Large Communities in Large Networks

    PubMed Central

    Jeub, Lucas G. S.; Balachandran, Prakash; Porter, Mason A.; Mucha, Peter J.; Mahoney, Michael W.

    2016-01-01

    It is common in the study of networks to investigate intermediate-sized (or “meso-scale”) features to try to gain an understanding of network structure and function. For example, numerous algorithms have been developed to try to identify “communities,” which are typically construed as sets of nodes with denser connections internally than with the remainder of a network. In this paper, we adopt a complementary perspective that “communities” are associated with bottlenecks of locally-biased dynamical processes that begin at seed sets of nodes, and we employ several different community-identification procedures (using diffusion-based and geodesic-based dynamics) to investigate community quality as a function of community size. Using several empirical and synthetic networks, we identify several distinct scenarios for “size-resolved community structure” that can arise in real (and realistic) networks: (i) the best small groups of nodes can be better than the best large groups (for a given formulation of the idea of a good community); (ii) the best small groups can have a quality that is comparable to the best medium-sized and large groups; and (iii) the best small groups of nodes can be worse than the best large groups. As we discuss in detail, which of these three cases holds for a given network can make an enormous difference when investigating and making claims about network community structure, and it is important to take this into account to obtain reliable downstream conclusions. Depending on which scenario holds, one may or may not be able to successfully identify “good” communities in a given network (and good communities might not even exist for a given community quality measure), the manner in which different small communities fit together to form meso-scale network structures can be very different, and processes such as viral propagation and information diffusion can exhibit very different dynamics. In addition, our results suggest that, for many large realistic networks, the output of locally-biased methods that focus on communities that are centered around a given seed node might have better conceptual grounding and greater practical utility than the output of global community-detection methods. They also illustrate subtler structural properties that are important to consider in the development of better benchmark networks to test methods for community detection. PMID:25679670

  16. Think locally, act locally: Detection of small, medium-sized, and large communities in large networks

    NASA Astrophysics Data System (ADS)

    Jeub, Lucas G. S.; Balachandran, Prakash; Porter, Mason A.; Mucha, Peter J.; Mahoney, Michael W.

    2015-01-01

    It is common in the study of networks to investigate intermediate-sized (or "meso-scale") features to try to gain an understanding of network structure and function. For example, numerous algorithms have been developed to try to identify "communities," which are typically construed as sets of nodes with denser connections internally than with the remainder of a network. In this paper, we adopt a complementary perspective that communities are associated with bottlenecks of locally biased dynamical processes that begin at seed sets of nodes, and we employ several different community-identification procedures (using diffusion-based and geodesic-based dynamics) to investigate community quality as a function of community size. Using several empirical and synthetic networks, we identify several distinct scenarios for "size-resolved community structure" that can arise in real (and realistic) networks: (1) the best small groups of nodes can be better than the best large groups (for a given formulation of the idea of a good community); (2) the best small groups can have a quality that is comparable to the best medium-sized and large groups; and (3) the best small groups of nodes can be worse than the best large groups. As we discuss in detail, which of these three cases holds for a given network can make an enormous difference when investigating and making claims about network community structure, and it is important to take this into account to obtain reliable downstream conclusions. Depending on which scenario holds, one may or may not be able to successfully identify "good" communities in a given network (and good communities might not even exist for a given community quality measure), the manner in which different small communities fit together to form meso-scale network structures can be very different, and processes such as viral propagation and information diffusion can exhibit very different dynamics. In addition, our results suggest that, for many large realistic networks, the output of locally biased methods that focus on communities that are centered around a given seed node (or set of seed nodes) might have better conceptual grounding and greater practical utility than the output of global community-detection methods. They also illustrate structural properties that are important to consider in the development of better benchmark networks to test methods for community detection.

  17. Evolutionary model selection and parameter estimation for protein-protein interaction network based on differential evolution algorithm

    PubMed Central

    Huang, Lei; Liao, Li; Wu, Cathy H.

    2016-01-01

    Revealing the underlying evolutionary mechanism plays an important role in understanding protein interaction networks in the cell. While many evolutionary models have been proposed, applying these models to real network data, and in particular determining which model better describes the evolutionary process behind an observed network, remains a pressing challenge. The traditional approach is to generate a network from a model with presumed parameters and then evaluate the fit with summary statistics, which, however, cannot capture complete network structure information or estimate parameter distributions. In this work we developed a novel method based on Approximate Bayesian Computation and modified Differential Evolution (ABC-DEP) that is capable of conducting model selection and parameter estimation simultaneously and detecting the underlying evolutionary mechanisms more accurately. We tested our method for its power in differentiating models and estimating parameters on simulated data and found significant improvement in benchmark performance compared with a previous method. We further applied our method to real protein interaction network data in human and yeast. Our results show the Duplication Attachment model as the predominant evolutionary mechanism for human PPI networks and the Scale-Free model as the predominant mechanism for yeast PPI networks. PMID:26357273
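
    The authors' ABC-DEP procedure couples Approximate Bayesian Computation with differential evolution; that machinery is not reproduced here. As a hedged, minimal illustration of the underlying idea, the sketch below uses plain ABC rejection to compare two assumed candidate growth models against an observed network via coarse summary statistics. The models, statistics, and tolerance are all illustrative choices.

```python
import numpy as np
import networkx as nx

def summary(G):
    """Coarse network summary statistics: mean degree, degree spread, clustering."""
    degs = np.array([d for _, d in G.degree()])
    return np.array([degs.mean(), degs.std(), nx.average_clustering(G)])

def simulate(model, n, param, seed):
    if model == "preferential_attachment":
        return nx.barabasi_albert_graph(n, max(1, int(param)), seed=seed)
    return nx.duplication_divergence_graph(n, min(0.99, param), seed=seed)

def abc_rejection(G_obs, n_draws=500, eps=2.0, seed=0):
    """Accept (model, parameter) pairs whose simulated summaries fall within
    eps of the observed ones; the accepted sample approximates the posterior."""
    rng = np.random.default_rng(seed)
    s_obs = summary(G_obs)
    accepted = []
    for i in range(n_draws):
        model = str(rng.choice(["preferential_attachment", "duplication"]))
        param = rng.uniform(1.0, 4.0) if model == "preferential_attachment" else rng.uniform(0.1, 0.9)
        G_sim = simulate(model, G_obs.number_of_nodes(), param, seed + i)
        if np.linalg.norm(summary(G_sim) - s_obs) < eps:
            accepted.append((model, param))
    return accepted

G_obs = nx.duplication_divergence_graph(200, 0.4, seed=42)   # stand-in "observed" network
post = abc_rejection(G_obs)
print(len(post), "accepted;", sum(m == "duplication" for m, _ in post), "favour duplication")
```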

  18. Surveillance of diarrhoea in small animal practice through the Small Animal Veterinary Surveillance Network (SAVSNET).

    PubMed

    Jones, P H; Dawson, S; Gaskell, R M; Coyne, K P; Tierney, A; Setzkorn, C; Radford, A D; Noble, P-J M

    2014-09-01

    Using the Small Animal Veterinary Surveillance Network (SAVSNET), a national small animal disease-surveillance scheme, information on gastrointestinal disease was collected for a total of 76 days between 10 May 2010 and 8 August 2011 from 16,223 consultations (including data from 9115 individual dogs and 3462 individual cats) from 42 premises belonging to 19 UK veterinary practices. During that period, 7% of dogs and 3% of cats presented with diarrhoea. Adult dogs had a higher proportional morbidity of diarrhoea (PMD) than adult cats (P < 0.001). This difference was not observed in animals <1 year old. Younger animals in both species had higher PMDs than adult animals (P < 0.001). Neutering was associated with reduced PMD in young male dogs. In adult dogs, miniature Schnauzers had the highest PMD. Most animals with diarrhoea (51%) presented having been ill for 2-4 days, but a history of vomiting or haemorrhagic diarrhoea was associated with a shorter time to presentation. The most common treatments employed were dietary modification (66% of dogs; 63% of cats) and antibacterials (63% of dogs; 49% of cats). There was variability in PMD between different practices. The SAVSNET methodology facilitates rapid collection of cross-sectional data regarding diarrhoea, a recognised sentinel for infectious disease, and characterises data that could benchmark clinical practice and support the development of evidence-based medicine. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Food Recognition: A New Dataset, Experiments, and Results.

    PubMed

    Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo

    2017-05-01

    We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. The dataset contains 1027 canteen trays for a total of 3616 food instances belonging to 73 food classes. The food on the tray images has been manually segmented using carefully drawn polygonal boundaries. We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts for each region the corresponding food class. We have experimented with three different classification strategies, also using several visual descriptors. We achieve about 79% food and tray recognition accuracy using convolutional-neural-network-based features. The dataset and the benchmark framework are available to the research community.

  20. Terms, Trends, and Insights: PV Project Finance in the United States, 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feldman, David J; Schwabe, Paul D

    This brief is a compilation of data points and market insights that reflect the state of the project finance market for solar photovoltaic (PV) assets in the United States as of the third quarter of 2017. This information can generally be used as a simplified benchmark of the costs associated with securing financing for solar PV as well as the cost of the financing itself (i.e., the cost of capital). This work represents the second DOE-sponsored effort to benchmark financing costs across the residential, commercial, and utility-scale PV markets, as part of its larger effort to benchmark the components of PV system costs.

  1. Benchmarking is associated with improved quality of care in type 2 diabetes: the OPTIMISE randomized, controlled trial.

    PubMed

    Hermans, Michel P; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-11-01

    To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. Primary end point of HbA1c target was achieved in the benchmarking group by 58.9 vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0 vs. 30.1% patients met the SBP target (P < 0.001); 54.3 vs. 49.7% met the LDL cholesterol target (P = 0.006). Percentages of patients meeting all three targets increased during the study in both groups, with a statistically significant increase observed in the benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%; P < 0.001). In this prospective, randomized, controlled study, benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patient cardiovascular residual risk profile.

  2. Benchmarking Is Associated With Improved Quality of Care in Type 2 Diabetes

    PubMed Central

    Hermans, Michel P.; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-01-01

    OBJECTIVE To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. RESEARCH DESIGN AND METHODS Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. RESULTS Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. Primary end point of HbA1c target was achieved in the benchmarking group by 58.9 vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0 vs. 30.1% patients met the SBP target (P < 0.001); 54.3 vs. 49.7% met the LDL cholesterol target (P = 0.006). Percentages of patients meeting all three targets increased during the study in both groups, with a statistically significant increase observed in the benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%; P < 0.001). CONCLUSIONS In this prospective, randomized, controlled study, benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patient cardiovascular residual risk profile. PMID:23846810

  3. Benchmarking passive transfer of immunity and growth in dairy calves.

    PubMed

    Atkinson, D J; von Keyserlingk, M A G; Weary, D M

    2017-05-01

    Poor health and growth in young dairy calves can have lasting effects on their development and future production. This study benchmarked calf-rearing outcomes in a cohort of Canadian dairy farms, reported these findings back to producers and their veterinarians, and documented the results. A total of 18 Holstein dairy farms were recruited, all in British Columbia. Blood samples were collected from calves aged 1 to 7 d. We estimated serum total protein levels using digital refractometry, and failure of passive transfer (FPT) was defined as values below 5.2 g/dL. We estimated average daily gain (ADG) for preweaned heifers (1 to 70 d old) using heart-girth tape measurements, and analyzed early (≤35 d) and late (>35 d) growth separately. At first assessment, the average farm FPT rate was 16%. Overall, ADG was 0.68 kg/d, with early and late growth rates of 0.51 and 0.90 kg/d, respectively. Following delivery of the benchmark reports, all participants volunteered to undergo a second assessment. The majority (83%) made at least 1 change in their colostrum-management or milk-feeding practices, including increased colostrum at first feeding, reduced time to first colostrum, and increased initial and maximum daily milk allowances. The farms that made these changes experienced improved outcomes. On the 11 farms that made changes to improve colostrum feeding, the rate of FPT declined from 21 ± 10% before benchmarking to 11 ± 10% after making the changes. On the 10 farms that made changes to improve calf growth, ADG improved from 0.66 ± 0.09 kg/d before benchmarking to 0.72 ± 0.08 kg/d after making the management changes. Increases in ADG were greatest in the early milk-feeding period, averaging 0.13 kg/d higher than pre-benchmarking values for calves ≤35 d of age. Benchmarking specific outcomes associated with calf rearing can motivate producer engagement in calf care, leading to improved outcomes for calves on farms that apply relevant management changes. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  4. An Analysis of Strain Accumulation in the Western Part of Black Sea Region in Turkey

    NASA Astrophysics Data System (ADS)

    Deniz, I.; Avsar, N. B.; Deniz, R.; Mekik, C.; Kutoglu, S.

    2014-12-01

    The Turkish National Horizontal Control Network (TNHCN), based on the European Datum 1950 (ED50), was used as the principal geodetic network in Turkey until 2005. Since 2005, the Turkish Large Scale Map and Map Information Production Regulation has required that all densification points be produced within the datum of the Turkish National Fundamental GPS Network (TNFGN), put into practice in 2002 and based on the International Terrestrial Reference Frame (ITRF). Hence, common points were produced in both the European Datum 1950 (ED50) and TNFGN. It is known that geological and geophysical information about the network area can be obtained by evaluating the coordinate and scale variations in a geodetic network. For such an evaluation, the coordinate variations and velocities of network points, as well as the strains, are investigated. However, the principal problem in deriving velocities arises from the two different datums. In this context, the computation of velocities using the coordinate data of ED50 and TNFGN is neither accurate nor reliable. Likewise, the analysis of strain from coordinate differences is not reliable. However, because the scale of a geodetic network is independent of the datum, the strains can be derived from scale variations accurately and reliably. In this study, a test area bounded by 39.5°-42.0° northern latitude and 31.0°-37.0° eastern longitude was chosen. The benchmarks in this test area comprise 30 geodetic control points established for cadastral and engineering applications. We used data mining to identify the common benchmarks in both reference systems for this area. The ED50 and TNFGN coordinates refer to 1954 and 2005, respectively; thus, the strain accumulation over 51 years in this region has been investigated. It should also be noted that no earthquakes greater than magnitude 6.0 have been registered in the test area since 1954, which is an important consideration for this evaluation. Finite element analysis is used to derive the strain accumulation and strain rates in the test area (Figure 1). The results indicate that the minimum and maximum strains are 17μs and 3041μs, respectively.

  5. Algorithms for Lightweight Key Exchange.

    PubMed

    Alvarez, Rafael; Caballero-Gil, Cándido; Santonja, Juan; Zamora, Antonio

    2017-06-27

    Public-key cryptography is too slow for general purpose encryption, with most applications limiting its use as much as possible. Some secure protocols, especially those that enable forward secrecy, make a much heavier use of public-key cryptography, increasing the demand for lightweight cryptosystems that can be implemented in low-powered or mobile devices. These performance requirements are even more significant in critical infrastructure and emergency scenarios where peer-to-peer networks are deployed for increased availability and resiliency. We benchmark several public-key key-exchange algorithms, determine those best suited to the requirements of critical infrastructure and emergency applications, propose a security framework based on these algorithms, and study its application to decentralized node or sensor networks.
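
    The paper benchmarks a set of key-exchange algorithms and builds a framework around the best candidates; neither the exact algorithm list nor the framework is reproduced here. As a hedged sketch of how one such measurement can be taken, the snippet below times X25519 (an elliptic-curve Diffie-Hellman function commonly considered lightweight) using the Python `cryptography` package; whether X25519 was among the authors' tested algorithms is an assumption.

```python
import timeit
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

def x25519_exchange():
    """One full ephemeral key exchange: two key pairs and both shared secrets."""
    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()
    shared_a = alice.exchange(bob.public_key())
    shared_b = bob.exchange(alice.public_key())
    assert shared_a == shared_b          # both sides derive the same secret
    return shared_a

runs = 200
seconds = timeit.timeit(x25519_exchange, number=runs)
print(f"X25519 exchange: {1000 * seconds / runs:.3f} ms per handshake")
```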

  6. ChemTS: an efficient python library for de novo molecular generation.

    PubMed

    Yang, Xiufeng; Zhang, Jinzhe; Yoshizoe, Kazuki; Terayama, Kei; Tsuda, Koji

    2017-01-01

    Automatic design of organic materials requires black-box optimization in a vast chemical space. In conventional molecular design algorithms, a molecule is built as a combination of predetermined fragments. Recently, deep neural network models such as variational autoencoders and recurrent neural networks (RNNs) are shown to be effective in de novo design of molecules without any predetermined fragments. This paper presents a novel Python library ChemTS that explores the chemical space by combining Monte Carlo tree search and an RNN. In a benchmarking problem of optimizing the octanol-water partition coefficient and synthesizability, our algorithm showed superior efficiency in finding high-scoring molecules. ChemTS is available at https://github.com/tsudalab/ChemTS.

  7. Understanding Health Professionals' Informal Learning in Online Social Networks: A Cross-Sectional Survey.

    PubMed

    Li, Xin; Verspoor, Karin; Gray, Kathleen; Barnett, Stephen

    2017-01-01

    Online social networks (OSNs) enable health professionals to learn informally, for example by sharing medical knowledge, or discussing practice management challenges and clinical issues. Understanding how learning occurs in OSNs is necessary to better support this type of learning. Through a cross-sectional survey, this study found that learning interaction in OSNs is low in general, with a small number of active users. Some health professionals actively used OSNs to support their practice, including sharing practical and experiential knowledge, benchmarking themselves, and to keep up-to-date on policy, advanced information and news in the field. These health professionals had an overall positive learning experience in OSNs.

  8. Information filtering based on corrected redundancy-eliminating mass diffusion

    PubMed Central

    Zhu, Xuzhen; Yang, Yujie; Chen, Guilin; Medo, Matus; Tian, Hui

    2017-01-01

    Methods used in information filtering and recommendation often rely on quantifying the similarity between objects or users. The similarity metrics used often suffer from similarity redundancies arising from correlations between objects’ attributes. Based on an unweighted undirected object-user bipartite network, we propose a Corrected Redundancy-Eliminating similarity index (CRE) which is based on a spreading process on the network. Extensive experiments on three benchmark data sets—MovieLens, Netflix and Amazon—show that when used in recommendation, the CRE yields significant improvements in terms of recommendation accuracy and diversity. A detailed analysis is presented to unveil the origins of the observed differences between the CRE and mainstream similarity indices. PMID:28749976
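
    The CRE index builds on a diffusion-like spreading process over the user-object bipartite network; the redundancy correction itself is not reproduced here. The sketch below shows plain mass diffusion (ProbS) scores for a single user on a toy interaction matrix, which is the kind of baseline spreading process such indices start from; the matrix is illustrative.

```python
import numpy as np

def mass_diffusion_scores(A, user):
    """Plain mass-diffusion (ProbS) recommendation scores for one user.

    A    : (n_users, n_objects) binary interaction matrix
    user : row index of the target user
    """
    ku = A.sum(axis=1)                 # user degrees
    ko = A.sum(axis=0)                 # object degrees
    ku[ku == 0] = 1                    # guard against division by zero
    ko[ko == 0] = 1
    f0 = A[user]                       # unit resource on the user's collected objects
    on_users = A @ (f0 / ko)           # objects spread resource to their users
    return A.T @ (on_users / ku)       # users spread it back to all objects

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
print(mass_diffusion_scores(A, user=0))   # objects already collected would be filtered out
```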

  9. ChemTS: an efficient python library for de novo molecular generation

    NASA Astrophysics Data System (ADS)

    Yang, Xiufeng; Zhang, Jinzhe; Yoshizoe, Kazuki; Terayama, Kei; Tsuda, Koji

    2017-12-01

    Automatic design of organic materials requires black-box optimization in a vast chemical space. In conventional molecular design algorithms, a molecule is built as a combination of predetermined fragments. Recently, deep neural network models such as variational autoencoders and recurrent neural networks (RNNs) are shown to be effective in de novo design of molecules without any predetermined fragments. This paper presents a novel Python library ChemTS that explores the chemical space by combining Monte Carlo tree search and an RNN. In a benchmarking problem of optimizing the octanol-water partition coefficient and synthesizability, our algorithm showed superior efficiency in finding high-scoring molecules. ChemTS is available at https://github.com/tsudalab/ChemTS.

  10. Augmented Lagrange Programming Neural Network for Localization Using Time-Difference-of-Arrival Measurements.

    PubMed

    Han, Zifa; Leung, Chi Sing; So, Hing Cheung; Constantinides, Anthony George

    2017-08-15

    A commonly used measurement model for locating a mobile source is time-difference-of-arrival (TDOA). As each TDOA measurement defines a hyperbola, it is not straightforward to compute the mobile source position due to the nonlinear relationship in the measurements. This brief exploits the Lagrange programming neural network (LPNN), which provides a general framework to solve nonlinear constrained optimization problems, for the TDOA-based localization. The local stability of the proposed LPNN solution is also analyzed. Simulation results are included to evaluate the localization accuracy of the LPNN scheme by comparing with the state-of-the-art methods and the optimality benchmark of Cramér-Rao lower bound.
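
    The LPNN formulation itself is not reproduced here. For orientation, the hedged sketch below shows a conventional Gauss-Newton least-squares solver for the same TDOA geometry, making the hyperbolic (range-difference) residuals explicit; the sensor layout, noise level, and initial guess are illustrative.

```python
import numpy as np

def tdoa_gauss_newton(sensors, rd, x0, iters=20):
    """Least-squares TDOA localization with sensor 0 as the reference.

    sensors : (M, 2) sensor positions
    rd      : (M-1,) measured range differences ||x - s_i|| - ||x - s_0||
    x0      : (2,) initial guess for the source position
    """
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(sensors - x, axis=1)                 # distances to sensors
        r = (d[1:] - d[0]) - rd                                 # residuals
        J = (x - sensors[1:]) / d[1:, None] - (x - sensors[0]) / d[0]
        x -= np.linalg.lstsq(J, r, rcond=None)[0]               # Gauss-Newton step
    return x

sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
source = np.array([30.0, 70.0])
true_d = np.linalg.norm(sensors - source, axis=1)
rd = (true_d[1:] - true_d[0]) + np.random.normal(0.0, 0.1, size=3)   # noisy measurements
print(tdoa_gauss_newton(sensors, rd, x0=[50.0, 50.0]))
```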

  11. Off-lexicon online Arabic handwriting recognition using neural network

    NASA Astrophysics Data System (ADS)

    Yahia, Hamdi; Chaabouni, Aymen; Boubaker, Houcine; Alimi, Adel M.

    2017-03-01

    This paper presents a new method for online Arabic handwriting recognition based on grapheme segmentation. The main contribution of our work is to explore the utility of the Beta-elliptic model for segmentation and feature extraction in online handwriting recognition. Our method consists in decomposing the input signal into continuous parts called graphemes, based on the Beta-elliptic model, and classifying them according to their position in the pseudo-word. The segmented graphemes are then described by a combination of geometric features and trajectory shape modeling. The efficiency of the considered features has been evaluated using a feedforward neural network classifier. Experimental results on the benchmark ADAB database show the performance of the proposed method.

  12. HRSSA - Efficient hybrid stochastic simulation for spatially homogeneous biochemical reaction networks

    NASA Astrophysics Data System (ADS)

    Marchetti, Luca; Priami, Corrado; Thanh, Vo Hong

    2016-07-01

    This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating the performance and accuracy of HRSSA against other state-of-the-art algorithms.

  13. Efficiently passing messages in distributed spiking neural network simulation.

    PubMed

    Thibeault, Corey M; Minkovich, Kirill; O'Brien, Michael J; Harris, Frederick C; Srinivasa, Narayan

    2013-01-01

    Efficiently passing spiking messages in a neural model is an important aspect of high-performance simulation. As the scale of networks has increased so has the size of the computing systems required to simulate them. In addition, the information exchange of these resources has become more of an impediment to performance. In this paper we explore spike message passing using different mechanisms provided by the Message Passing Interface (MPI). A specific implementation, MVAPICH, designed for high-performance clusters with Infiniband hardware is employed. The focus is on providing information about these mechanisms for users of commodity high-performance spiking simulators. In addition, a novel hybrid method for spike exchange was implemented and benchmarked.
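
    The study's implementation uses C/MPI on MVAPICH and also evaluates a hybrid exchange scheme, none of which is reproduced here. As a hedged, minimal illustration of one of the mechanisms it discusses, the sketch below exchanges per-rank spike lists every time step with an allgather, using mpi4py; the neuron counts and firing probability are illustrative.

```python
# Run with, e.g.: mpiexec -n 4 python spike_exchange.py
import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

NEURONS_PER_RANK = 1000
random.seed(rank)

for step in range(5):
    # Global ids of neurons that fired on this rank during this time step
    local_spikes = [rank * NEURONS_PER_RANK + i
                    for i in range(NEURONS_PER_RANK) if random.random() < 0.02]
    # Every rank receives every other rank's spike list
    all_spikes = comm.allgather(local_spikes)
    incoming = [s for spikes in all_spikes for s in spikes]
    if rank == 0:
        print(f"step {step}: {len(incoming)} spikes exchanged across {size} ranks")
```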

  14. Secure and Authenticated Data Communication in Wireless Sensor Networks.

    PubMed

    Alfandi, Omar; Bochem, Arne; Kellner, Ansgar; Göge, Christian; Hogrefe, Dieter

    2015-08-10

    Securing communications in wireless sensor networks is increasingly important as the diversity of applications increases. However, even today, it is equally important for the measures employed to be energy efficient. For this reason, this publication analyzes the suitability of various cryptographic primitives for use in WSNs according to various criteria and, finally, describes a modular, PKI-based framework for confidential, authenticated, secure communications in which most suitable primitives can be employed. Due to the limited capabilities of common WSN motes, criteria for the selection of primitives are security, power efficiency and memory requirements. The implementation of the framework and the singular components have been tested and benchmarked in our testbed of IRISmotes.

  15. Secure and Authenticated Data Communication in Wireless Sensor Networks

    PubMed Central

    Alfandi, Omar; Bochem, Arne; Kellner, Ansgar; Göge, Christian; Hogrefe, Dieter

    2015-01-01

    Securing communications in wireless sensor networks is increasingly important as the diversity of applications increases. However, even today, it is equally important for the measures employed to be energy efficient. For this reason, this publication analyzes the suitability of various cryptographic primitives for use in WSNs according to various criteria and, finally, describes a modular, PKI-based framework for confidential, authenticated, secure communications in which most suitable primitives can be employed. Due to the limited capabilities of common WSN motes, criteria for the selection of primitives are security, power efficiency and memory requirements. The implementation of the framework and the singular components have been tested and benchmarked in our testbed of IRISmotes. PMID:26266413

  16. Internet-Based Partner Services in US Sexually Transmitted Disease Prevention Programs: 2009-2013.

    PubMed

    Moody, Victoria; Hogben, Matthew; Kroeger, Karen; Johnson, James

    2015-01-01

    Social networking sites have become increasingly popular venues for meeting sex partners. Today, some sexually transmitted disease (STD) programs conduct Internet-based partner services (IPS). The purpose of the study was to explore how the Internet is being used by STD prevention programs to perform partner services. We assessed US STD prevention programs receiving funds through the 2008-2013 Comprehensive STD Prevention Systems cooperative agreement. We (1) reviewed 2009 IPS protocols in 57 funding applications against a benchmark of national guidelines and (2) surveyed persons who conducted IPS in jurisdictions conducting IPS in 2012. Of the 57 project areas receiving Comprehensive STD Prevention Systems funds, 74% provided an IPS protocol. States with IPS protocols had larger populations and more gonorrhea and syphilis cases (t = 2.2-2.6; all Ps < .05), although not higher rates of infection. Most protocols included staffing (92%) and IPS documentation (87%) requirements, but fewer had evaluation plans (29%) or social networking site engagement strategies (16%). Authority to perform a complete range of IPS activities (send e-mail, use social networking sites) was associated with contacting more partners via IPSs (P < .05). This study provides a snapshot of IPS activities in STD programs in the United States. Further research is needed to move from assessment to generating data that can assist training efforts and program action and, finally, to enable efficient IPS programs that are integrated into STD prevention and control efforts.

  17. Recommendations for Benchmarking Preclinical Studies of Nanomedicines.

    PubMed

    Dawidczyk, Charlene M; Russell, Luisa M; Searson, Peter C

    2015-10-01

    Nanoparticle-based delivery systems provide new opportunities to overcome the limitations associated with traditional small-molecule drug therapy for cancer and to achieve both therapeutic and diagnostic functions in the same platform. Preclinical trials are generally designed to assess therapeutic potential and not to optimize the design of the delivery platform. Consequently, progress in developing design rules for cancer nanomedicines has been slow, hindering progress in the field. Despite the large number of preclinical trials, several factors restrict comparison and benchmarking of different platforms, including variability in experimental design, reporting of results, and the lack of quantitative data. To solve this problem, we review the variables involved in the design of preclinical trials and propose a protocol for benchmarking that we recommend be included in in vivo preclinical studies of drug-delivery platforms for cancer therapy. This strategy will contribute to building the scientific knowledge base that enables development of design rules and accelerates the translation of new technologies. ©2015 American Association for Cancer Research.

  18. False Positive and False Negative Effects on Network Attacks

    NASA Astrophysics Data System (ADS)

    Shang, Yilun

    2018-01-01

    Robustness against attacks serves as evidence for complex network structures and the failure mechanisms that lie behind them. Most often, due to limited detection capability or good disguises, attacks on networks are subject to false positives and false negatives, meaning that functional nodes may be falsely regarded as compromised by the attacker and vice versa. In this work, we initiate a study of false positive/negative effects on network robustness against three fundamental types of attack strategies, namely, random attacks (RA), localized attacks (LA), and targeted attacks (TA). By developing a general mathematical framework based upon the percolation model, we investigate, analytically and by numerical simulations, attack robustness under false positive/negative rates (FPR/FNR) on three benchmark models including Erdős-Rényi (ER) networks, random regular (RR) networks, and scale-free (SF) networks. We show that ER networks are equivalently robust against RA and LA only when FPR equals zero or the initial network is intact. We find several interesting crossovers in RR and SF networks when FPR is taken into consideration. By defining the cost of attack, we observe diminishing marginal attack efficiency for RA, LA, and TA. Our findings highlight the potential risk of underestimating or ignoring FPR in understanding attack robustness. The results may provide insights into ways of enhancing the robustness of network architecture and improving the level of protection of critical infrastructures.
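
    The paper's analytical percolation framework is not reproduced here. As a hedged numerical sketch of the setting, the code below applies a random attack to an ER network in which intended targets survive with probability FNR and non-targets are wrongly removed with probability FPR, and reports the surviving giant component; the network size, mean degree, and rates are illustrative.

```python
import random
import networkx as nx

def attack_with_errors(G, frac_target, fpr, fnr, seed=0):
    """Random attack with detection errors; returns the relative size of the
    largest connected component that survives."""
    rng = random.Random(seed)
    nodes = list(G.nodes())
    targets = set(rng.sample(nodes, int(frac_target * len(nodes))))
    removed = [u for u in nodes
               if (u in targets and rng.random() > fnr)        # hit despite FNR
               or (u not in targets and rng.random() < fpr)]   # false positive
    H = G.copy()
    H.remove_nodes_from(removed)
    if H.number_of_nodes() == 0:
        return 0.0
    giant = max(nx.connected_components(H), key=len)
    return len(giant) / G.number_of_nodes()

G = nx.erdos_renyi_graph(2000, 4 / 2000, seed=1)    # ER network, mean degree ~4
for fpr in (0.0, 0.1, 0.3):
    print(fpr, round(attack_with_errors(G, frac_target=0.4, fpr=fpr, fnr=0.1), 3))
```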

  19. Supercomputing '91; Proceedings of the 4th Annual Conference on High Performance Computing, Albuquerque, NM, Nov. 18-22, 1991

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Various papers on supercomputing are presented. The general topics addressed include: program analysis/data dependence, memory access, distributed memory code generation, numerical algorithms, supercomputer benchmarks, latency tolerance, parallel programming, applications, processor design, networks, performance tools, mapping and scheduling, characterization affecting performance, parallelism packaging, computing climate change, combinatorial algorithms, hardware and software performance issues, system issues. (No individual items are abstracted in this volume)

  20. Next Generation School Districts: What Capacities Do Districts Need to Create and Sustain Schools That Are Ready to Deliver on Common Core?

    ERIC Educational Resources Information Center

    Lake, Robin; Hill, Paul T.; Maas, Tricia

    2015-01-01

    Every sector of the U.S. economy is working on ways to deliver services in a more customized manner. If all goes well, education is headed in the same direction. Personalized learning and globally benchmarked academic standards (a.k.a. Common Core) are the focus of most major school districts and charter school networks. Educators and parents know…

  1. Efficient Constant-Time Complexity Algorithm for Stochastic Simulation of Large Reaction Networks.

    PubMed

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2017-01-01

    Exact stochastic simulation is an indispensable tool for the quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire, with probability proportional to the reaction propensity, and updating the system state accordingly. Two computationally expensive tasks in simulating large biochemical networks are the selection of next reaction firings and the update of reaction propensities due to state changes. We present in this work a new exact algorithm to optimize both of these simulation bottlenecks. Our algorithm employs composition-rejection on the propensity bounds of reactions to select the next reaction firing. The selection of next reaction firings is independent of the number of reactions, while the update of propensities is skipped and performed only when necessary. It therefore provides favorable scaling of the computational complexity in simulating large reaction networks. We benchmark our new algorithm against state-of-the-art algorithms from the literature to demonstrate its applicability and efficiency.
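
    The full algorithm maintains propensity bounds so that the groups rarely need to be rebuilt; the hedged sketch below shows only the composition-rejection selection step (grouping reactions by powers of two of their propensities, picking a group by its total, then accepting a uniformly drawn member by rejection), and it rebuilds the groups on every call purely for clarity.

```python
import math
import random

def select_reaction(propensities, rng=random):
    """Composition-rejection selection of the next reaction index."""
    groups = {}
    for j, a in enumerate(propensities):
        if a > 0:
            g = math.floor(math.log2(a))            # group g holds a in [2**g, 2**(g+1))
            groups.setdefault(g, []).append((j, a))
    totals = {g: sum(a for _, a in members) for g, members in groups.items()}
    # Composition step: choose a group with probability proportional to its total
    r = rng.random() * sum(totals.values())
    for g, t in totals.items():
        if r < t:
            break
        r -= t
    bound = 2.0 ** (g + 1)
    # Rejection step: uniform member, accepted with probability a / bound
    while True:
        j, a = rng.choice(groups[g])
        if rng.random() * bound < a:
            return j

props = [0.1, 3.2, 0.5, 7.9, 1.1, 0.02]
counts = [0] * len(props)
for _ in range(10000):
    counts[select_reaction(props)] += 1
print(counts)   # frequencies roughly proportional to the propensities
```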

  2. Inferring nonlinear gene regulatory networks from gene expression data based on distance correlation.

    PubMed

    Guo, Xiaobo; Zhang, Ye; Hu, Wenhao; Tan, Haizhu; Wang, Xueqin

    2014-01-01

    Nonlinear dependence is common in the regulatory mechanisms of gene regulatory networks (GRNs). It is vital to properly measure or test nonlinear dependence from real data for reconstructing GRNs and understanding the complex regulatory mechanisms within the cellular system. A recently developed measure called the distance correlation (DC) has been shown to be powerful and computationally effective for detecting nonlinear dependence in many situations. In this work, we incorporate the DC into inferring GRNs from gene expression data without any underlying distribution assumptions. We propose three DC-based GRN inference algorithms: CLR-DC, MRNET-DC and REL-DC, and then compare them with mutual information (MI)-based algorithms by analyzing two sets of simulated data, benchmark GRNs from the DREAM challenge and GRNs generated by the SynTReN network generator, and an experimentally determined SOS DNA repair network in Escherichia coli. According to both the receiver operating characteristic (ROC) curve and the precision-recall (PR) curve, our proposed algorithms significantly outperform the MI-based algorithms in GRN inference.
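
    For reference, a minimal (O(n^2) memory) implementation of the sample distance correlation used by the DC-based algorithms is sketched below on synthetic one-dimensional data; it is for illustration only and omits the unbiased and high-dimensional variants.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation between two 1-D variables."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    a = np.abs(x[:, None] - x[None, :])          # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()   # double centring
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    dvar_x = (A * A).mean()
    dvar_y = (B * B).mean()
    if dvar_x * dvar_y == 0:
        return 0.0
    return float(np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y)))

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = x ** 2 + 0.1 * rng.normal(size=500)          # purely nonlinear dependence
print("Pearson:", round(float(np.corrcoef(x, y)[0, 1]), 3),
      "dCor:", round(distance_correlation(x, y), 3))
```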

  3. Inferring Nonlinear Gene Regulatory Networks from Gene Expression Data Based on Distance Correlation

    PubMed Central

    Guo, Xiaobo; Zhang, Ye; Hu, Wenhao; Tan, Haizhu; Wang, Xueqin

    2014-01-01

    Nonlinear dependence is common in the regulatory mechanisms of gene regulatory networks (GRNs). It is vital to properly measure or test nonlinear dependence from real data for reconstructing GRNs and understanding the complex regulatory mechanisms within the cellular system. A recently developed measure called the distance correlation (DC) has been shown to be powerful and computationally effective for detecting nonlinear dependence in many situations. In this work, we incorporate the DC into inferring GRNs from gene expression data without any underlying distribution assumptions. We propose three DC-based GRN inference algorithms: CLR-DC, MRNET-DC and REL-DC, and then compare them with mutual information (MI)-based algorithms by analyzing two sets of simulated data, benchmark GRNs from the DREAM challenge and GRNs generated by the SynTReN network generator, and an experimentally determined SOS DNA repair network in Escherichia coli. According to both the receiver operating characteristic (ROC) curve and the precision-recall (PR) curve, our proposed algorithms significantly outperform the MI-based algorithms in GRN inference. PMID:24551058

  4. A Systems Approach to Scalable Transportation Network Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S

    2006-01-01

    Emerging needs in transportation network modeling and simulation are raising new challenges with respect to scalability of network size and vehicular traffic intensity, speed of simulation for simulation-based optimization, and fidelity of vehicular behavior for accurate capture of event phenomena. Parallel execution is warranted to sustain the required detail, size and speed. However, few parallel simulators exist for such applications, partly due to the challenges underlying their development. Moreover, many simulators are based on time-stepped models, which can be computationally inefficient for the purposes of modeling evacuation traffic. Here an approach is presented to designing a simulator with memory and speed efficiency as the goals from the outset, and, specifically, scalability via parallel execution. The design makes use of discrete event modeling techniques as well as parallel simulation methods. Our simulator, called SCATTER, is being developed, incorporating such design considerations. Preliminary performance results are presented on benchmark road networks, showing scalability to one million vehicles simulated on one processor.

  5. A new class of methods for functional connectivity estimation

    NASA Astrophysics Data System (ADS)

    Lin, Wutu

    Measuring functional connectivity from neural recordings is important in understanding processing in cortical networks. Covariance-based methods are the current gold standard for functional connectivity estimation. However, the link between the pair-wise correlations and the physiological connections inside the neural network is unclear. Therefore, the power to infer a physiological basis from functional connectivity estimates is limited. To build a stronger tie and better understand the relationship between functional connectivity and the physiological neural network, we need (1) a realistic model to simulate different types of neural recordings with known ground truth for benchmarking; and (2) a new functional connectivity method that produces estimates closely reflecting the physiological basis. In this thesis, I (1) tune a spiking neural network model to match human sleep EEG data, (2) introduce a new class of methods for estimating connectivity from different kinds of neural signals and provide a theoretical proof of its superiority, and (3) apply it to simulated fMRI data as an application.
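
    As a point of reference for the covariance-based baseline mentioned above, the short example below computes a correlation-matrix functional connectivity estimate on synthetic multichannel data in which only half of the channels share a common driver; the data and coupling strength are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 2000

# Synthetic recordings: channels 0-3 share a common driving signal,
# channels 4-7 are independent noise.
common = rng.normal(size=n_samples)
data = rng.normal(size=(n_channels, n_samples))
data[:4] += 0.8 * common

# Correlation-based functional connectivity: one value per channel pair
fc = np.corrcoef(data)
np.fill_diagonal(fc, 0.0)
print(np.round(fc[:4, :4], 2))   # coupled block: strong pairwise correlation
print(np.round(fc[4:, 4:], 2))   # independent block: near zero
```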

  6. Supervised Learning Using Spike-Timing-Dependent Plasticity of Memristive Synapses.

    PubMed

    Nishitani, Yu; Kaneko, Yukihiro; Ueda, Michihito

    2015-12-01

    We propose a supervised learning model that enables error backpropagation for spiking neural network hardware. The method is derived by modifying an existing model to suit hardware implementation. An example of a network circuit for the model is also presented. In this circuit, a three-terminal ferroelectric memristor (3T-FeMEM), which is a field-effect transistor with a gate insulator composed of ferroelectric materials, is used as an electric synapse device to store the analog synaptic weight. Our model can be implemented by reflecting the network error in the write voltage of the 3T-FeMEMs and introducing a spike-timing-dependent learning function to the device. An XOR problem was successfully demonstrated as a benchmark learning task by numerical simulations using the circuit properties to estimate the learning performance. In principle, the learning time per step of this supervised learning model and the circuit is independent of the number of neurons in each layer, promising high-speed and low-power computation in large-scale neural networks.

  7. A sparse structure learning algorithm for Gaussian Bayesian Network identification from high-dimensional data.

    PubMed

    Huang, Shuai; Li, Jing; Ye, Jieping; Fleisher, Adam; Chen, Kewei; Wu, Teresa; Reiman, Eric

    2013-06-01

    Structure learning of Bayesian Networks (BNs) is an important topic in machine learning. Driven by modern applications in genetics and brain sciences, accurate and efficient learning of large-scale BN structures from high-dimensional data becomes a challenging problem. To tackle this challenge, we propose a Sparse Bayesian Network (SBN) structure learning algorithm that employs a novel formulation involving one L1-norm penalty term to impose sparsity and another penalty term to ensure that the learned BN is a Directed Acyclic Graph (DAG), a required property of BNs. Through both theoretical analysis and extensive experiments on 11 moderate and large benchmark networks with various sample sizes, we show that SBN leads to improved learning accuracy, scalability, and efficiency as compared with 10 existing popular BN learning algorithms. We apply SBN to a real-world application of brain connectivity modeling for Alzheimer's disease (AD) and reveal findings that could lead to advancements in AD research.
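
    The core idea of L1-penalised structure learning can be illustrated with a neighbourhood-selection style sketch: each variable is regressed on the others with a Lasso penalty and nonzero coefficients are read as candidate edges. This is an assumed simplification only; the SBN algorithm described above adds a second penalty to enforce the DAG property, which this sketch omits, and the toy chain data are made up.

    ```python
    # Sketch: per-node Lasso regressions as a rough proxy for L1-penalised
    # structure learning. The DAG-constraint machinery of SBN is omitted.
    import numpy as np
    from sklearn.linear_model import Lasso

    def candidate_edges(X, alpha=0.1):
        """Return a {node: [candidate neighbours]} map from per-node Lasso fits."""
        n_samples, n_vars = X.shape
        edges = {}
        for j in range(n_vars):
            others = [k for k in range(n_vars) if k != j]
            model = Lasso(alpha=alpha).fit(X[:, others], X[:, j])
            edges[j] = [others[i] for i, w in enumerate(model.coef_) if abs(w) > 1e-6]
        return edges

    # Toy data following a chain x0 -> x1 -> x2 plus noise
    rng = np.random.default_rng(1)
    x0 = rng.normal(size=500)
    x1 = 0.8 * x0 + 0.3 * rng.normal(size=500)
    x2 = 0.8 * x1 + 0.3 * rng.normal(size=500)
    print(candidate_edges(np.column_stack([x0, x1, x2])))
    ```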

  8. A Sparse Structure Learning Algorithm for Gaussian Bayesian Network Identification from High-Dimensional Data

    PubMed Central

    Huang, Shuai; Li, Jing; Ye, Jieping; Fleisher, Adam; Chen, Kewei; Wu, Teresa; Reiman, Eric

    2014-01-01

    Structure learning of Bayesian Networks (BNs) is an important topic in machine learning. Driven by modern applications in genetics and brain sciences, accurate and efficient learning of large-scale BN structures from high-dimensional data becomes a challenging problem. To tackle this challenge, we propose a Sparse Bayesian Network (SBN) structure learning algorithm that employs a novel formulation involving one L1-norm penalty term to impose sparsity and another penalty term to ensure that the learned BN is a Directed Acyclic Graph (DAG)—a required property of BNs. Through both theoretical analysis and extensive experiments on 11 moderate and large benchmark networks with various sample sizes, we show that SBN leads to improved learning accuracy, scalability, and efficiency as compared with 10 existing popular BN learning algorithms. We apply SBN to a real-world application of brain connectivity modeling for Alzheimer’s disease (AD) and reveal findings that could lead to advancements in AD research. PMID:22665720

  9. Assessing potential health risks to fish and humans using mercury concentrations in inland fish from across western Canada and the United States

    USGS Publications Warehouse

    Lepak, Jesse M.; Hooten, Mevin B.; Eagles-Smith, Collin A.; Tate, Michael T.; Lutz, Michelle A.; Ackerman, Joshua T.; Willacker, James J.; Jackson, Allyson K.; Evers, David C.; Wiener, James G.; Pritz, Colleen Flanagan; Davis, Jay

    2016-01-01

    Fish represent high quality protein and nutrient sources, but Hg contamination is ubiquitous in aquatic ecosystems and can pose health risks to fish and their consumers. Potential health risks posed to fish and humans by Hg contamination in fish were assessed in western Canada and the United States. A large compilation of inland fish Hg concentrations was evaluated in terms of potential health risk to the fish themselves, health risk to predatory fish that consume Hg contaminated fish, and to humans that consume Hg contaminated fish. The probability that a fish collected from a given location would exceed a Hg concentration benchmark relevant to a health risk was calculated. These exceedance probabilities and their associated uncertainties were characterized for fish of multiple size classes at multiple health-relevant benchmarks. The approach was novel and allowed for the assessment of the potential for deleterious health effects in fish and humans associated with Hg contamination in fish across this broad study area. Exceedance probabilities were relatively common at low Hg concentration benchmarks, particularly for fish in larger size classes. Specifically, median exceedances for the largest size classes of fish evaluated at the lowest Hg concentration benchmarks were 0.73 (potential health risks to fish themselves), 0.90 (potential health risk to predatory fish that consume Hg contaminated fish), and 0.97 (potential for restricted fish consumption by humans), but diminished to essentially zero at the highest benchmarks and smallest fish size classes. Exceedances of benchmarks are likely to have deleterious health effects on fish and limit recommended amounts of fish humans consume in western Canada and the United States. Results presented here are not intended to subvert or replace local fish Hg data or consumption advice, but provide a basis for identifying areas of potential health risk and developing more focused future research and monitoring efforts.
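
    The central quantity reported above, an exceedance probability for a health-relevant Hg benchmark, can be illustrated with a plain empirical estimate; the study itself uses a hierarchical model with uncertainty quantification, and the concentrations and benchmark values below are hypothetical.

    ```python
    # Sketch: empirical probability that fish from one site/size class exceed
    # a health-relevant Hg benchmark. Values are made up for illustration.
    import numpy as np

    def exceedance_probability(hg_ppm, benchmark_ppm):
        """Fraction of sampled fish whose Hg concentration exceeds the benchmark."""
        hg_ppm = np.asarray(hg_ppm, dtype=float)
        return float((hg_ppm > benchmark_ppm).mean())

    # Hypothetical Hg concentrations (ppm wet weight) for one site and size class
    sample = np.array([0.12, 0.35, 0.41, 0.08, 0.55, 0.29, 0.61, 0.18])
    for benchmark in (0.2, 0.3, 1.0):   # low to high benchmarks (illustrative values)
        print(benchmark, exceedance_probability(sample, benchmark))
    ```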

  10. Synchronization unveils the organization of ecological networks with positive and negative interactions

    NASA Astrophysics Data System (ADS)

    Girón, Andrea; Saiz, Hugo; Bacelar, Flora S.; Andrade, Roberto F. S.; Gómez-Gardeñes, Jesús

    2016-06-01

    Network science has helped in understanding the organization principles of interactions among the constituents of large complex systems. Recently, however, the high resolution of collected data sets has made it possible to capture the different types of interactions coexisting within the same system. A particularly important example is that of systems with positive and negative interactions, a common feature in social, neural, and ecological systems. The interplay of links of opposite sign presents natural difficulties for generalizing typical concepts and tools applied to unsigned networks and, moreover, poses questions intrinsic to the signed nature of the network, such as: how are negative interactions balanced by positive ones so as to allow the coexistence and survival of competitors/foes within the same system? Here, we show that the synchronization phenomenon is an ideal benchmark for uncovering such balance and, as a byproduct, for assessing which nodes play a critical role in the overall organization of the system. We illustrate our findings with the analysis of synthetic and real ecological networks in which facilitative and competitive interactions coexist.
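
    A hedged sketch of how synchronization can probe a signed network is given below: Kuramoto-type phase oscillators coupled through a signed adjacency matrix (+1 facilitation, -1 competition), with the global order parameter summarizing how well the sign balance permits coherence. The dynamical model and toy network are assumptions for illustration, not the authors' exact formulation.

    ```python
    # Sketch: synchronization as a probe of a signed network. Kuramoto phase
    # oscillators, signed coupling, and the order parameter r in [0, 1].
    import numpy as np

    def order_parameter(theta):
        return float(np.abs(np.exp(1j * theta).mean()))

    def simulate(A, coupling=1.0, steps=2000, dt=0.01, seed=0):
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        omega = rng.normal(0.0, 0.1, n)          # natural frequencies
        theta = rng.uniform(0, 2 * np.pi, n)     # initial phases
        for _ in range(steps):
            phase_diff = theta[None, :] - theta[:, None]   # theta_j - theta_i
            dtheta = omega + (coupling / n) * (A * np.sin(phase_diff)).sum(axis=1)
            theta = theta + dt * dtheta
        return order_parameter(theta)

    # Toy signed network: two facilitating groups joined by competitive links
    A = np.array([[0,  1,  1, -1, -1],
                  [1,  0,  1, -1, -1],
                  [1,  1,  0, -1, -1],
                  [-1, -1, -1, 0,  1],
                  [-1, -1, -1, 1,  0]], dtype=float)
    print(simulate(A))   # negative inter-group links frustrate global coherence
    ```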

  11. Predicting Physical Time Series Using Dynamic Ridge Polynomial Neural Networks

    PubMed Central

    Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir

    2014-01-01

    Forecasting naturally occurring phenomena is a common problem in many domains of science and has been addressed and investigated by many scientists. The importance of time series prediction stems from its wide range of applications, including control systems, engineering processes, environmental systems and economics. From knowledge of some aspects of the previous behaviour of a system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, to the prediction of physical time series. In this study, four types of signals are used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results show good improvements in terms of the signal-to-noise ratio in comparison to a number of higher order and feedforward neural networks used as the benchmarked techniques. PMID:25157950
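
    As a small illustration of the evaluation metric, the sketch below computes a signal-to-noise-ratio style score (one common definition: target power over prediction-error power, in dB) for a naive one-step forecaster of a hypothetical noisy sine series; the DRPNN architecture itself is not reproduced.

    ```python
    # Sketch: an SNR-style forecasting metric applied to a naive baseline.
    import numpy as np

    def snr_db(target, prediction):
        """SNR in dB: power of the target over power of the prediction error."""
        target = np.asarray(target, dtype=float)
        error = target - np.asarray(prediction, dtype=float)
        return 10.0 * np.log10(np.sum(target ** 2) / np.sum(error ** 2))

    rng = np.random.default_rng(0)
    t = np.linspace(0, 4 * np.pi, 400)
    series = np.sin(t) + 0.1 * rng.normal(size=t.size)   # hypothetical signal
    naive_forecast = np.roll(series, 1)                   # "predict previous value"
    print(round(snr_db(series[1:], naive_forecast[1:]), 2))
    ```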

  12. From photons to big-data applications: terminating terabits

    PubMed Central

    2016-01-01

    Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. PMID:26809573

  13. Predicting physical time series using dynamic ridge polynomial neural networks.

    PubMed

    Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir

    2014-01-01

    Forecasting naturally occurring phenomena is a common problem in many domains of science and has been addressed and investigated by many scientists. The importance of time series prediction stems from its wide range of applications, including control systems, engineering processes, environmental systems and economics. From knowledge of some aspects of the previous behaviour of a system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, to the prediction of physical time series. In this study, four types of signals are used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results show good improvements in terms of the signal-to-noise ratio in comparison to a number of higher order and feedforward neural networks used as the benchmarked techniques.

  14. Delay and cost performance analysis of the diffie-hellman key exchange protocol in opportunistic mobile networks

    NASA Astrophysics Data System (ADS)

    Soelistijanto, B.; Muliadi, V.

    2018-03-01

    Diffie-Hellman (DH) provides an efficient key exchange system by reducing the number of cryptographic keys distributed in the network. In this method, a node broadcasts a single public key to all nodes in the network, and in turn each peer uses this key to establish a shared secret key, which can then be utilized to encrypt and decrypt traffic between the peer and the given node. In this paper, we evaluate the key transfer delay and cost performance of DH in opportunistic mobile networks, a specific scenario of MANETs where complete end-to-end paths rarely exist between sources and destinations; consequently, the end-to-end delays in these networks are much greater than in typical MANETs. Simulation results, driven by a random node movement model and real human mobility traces, showed that DH outperforms a typical key distribution scheme based on the RSA algorithm in terms of key transfer delay, measured by average key convergence time; however, DH performs as well as the benchmark in terms of key transfer cost, evaluated by total key (copies) forwards.
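
    For reference, a minimal sketch of the textbook Diffie-Hellman exchange being evaluated is shown below, with deliberately toy parameters (a real deployment would use a standardized large prime group or elliptic curves); the opportunistic-network delay/cost simulation from the paper is not reproduced.

    ```python
    # Sketch: classic Diffie-Hellman key agreement with toy parameters.
    import secrets

    p = (1 << 127) - 1      # a Mersenne prime; far too small for real security
    g = 5

    a_secret = secrets.randbelow(p - 2) + 2      # node A's private exponent
    b_secret = secrets.randbelow(p - 2) + 2      # node B's private exponent
    A_public = pow(g, a_secret, p)               # the single value A broadcasts
    B_public = pow(g, b_secret, p)

    shared_a = pow(B_public, a_secret, p)        # computed independently at A
    shared_b = pow(A_public, b_secret, p)        # and at B
    assert shared_a == shared_b                  # both peers hold the same secret
    print(hex(shared_a))
    ```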

  15. Brain-Inspired Constructive Learning Algorithms with Evolutionally Additive Nonlinear Neurons

    NASA Astrophysics Data System (ADS)

    Fang, Le-Heng; Lin, Wei; Luo, Qiang

    In this article, inspired in part by physiological evidence of the brain's growth and development, we develop a new type of constructive learning algorithm with evolutionally additive nonlinear neurons. The new algorithms have a remarkable ability to perform effective regression and accurate classification. In particular, they are able to sustain a reduction of the loss function when the dynamics of the trained network become bogged down in the vicinity of local minima. The algorithm augments the neural network by adding only a few connections as well as neurons whose activation functions are nonlinear, nonmonotonic, and self-adapted to the dynamics of the loss function. We analytically demonstrate the reduction dynamics of the algorithm for different problems, and further modify the algorithms to obtain improved generalization capability for the augmented neural networks. Finally, by comparison with classical algorithms and architectures for neural network construction, we show that our constructive learning algorithms and their modified versions achieve better performance, such as faster training speed and smaller network size, on several representative benchmark datasets, including the MNIST dataset of handwritten digits.

  16. From photons to big-data applications: terminating terabits.

    PubMed

    Zilberman, Noa; Moore, Andrew W; Crowcroft, Jon A

    2016-03-06

    Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. © 2016 The Authors.

  17. Robust visual tracking via multiscale deep sparse networks

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Hou, Zhiqiang; Yu, Wangsheng; Xue, Yang; Jin, Zefenfen; Dai, Bo

    2017-04-01

    In visual tracking, deep learning with offline pretraining can extract more intrinsic and robust features. It has achieved significant success in resolving tracking drift in complicated environments. However, offline pretraining requires numerous auxiliary training datasets and is considerably time-consuming for tracking tasks. To solve these problems, a multiscale sparse networks-based tracker (MSNT) under the particle filter framework is proposed. Based on stacked sparse autoencoders and rectified linear units, the tracker has a flexible and adjustable architecture without the offline pretraining process and exploits robust and powerful features effectively through online training on limited labeled data only. Meanwhile, the tracker builds four deep sparse networks of different scales, according to the target's profile type. During tracking, the tracker selects the matched tracking network adaptively in accordance with the initial target's profile type. It preserves the inherent structural information more efficiently than single-scale networks. Additionally, a corresponding update strategy is proposed to improve the robustness of the tracker. Extensive experimental results on a large-scale benchmark dataset show that the proposed method performs favorably against state-of-the-art methods in challenging environments.

  18. Exploring the topological sources of robustness against invasion in biological and technological networks.

    PubMed

    Alcalde Cuesta, Fernando; González Sequeiros, Pablo; Lozano Rojo, Álvaro

    2016-02-10

    For a network, the accomplishment of its functions despite perturbations is called robustness. Although this property has been extensively studied, in most cases, the network is modified by removing nodes. In our approach, it is no longer perturbed by site percolation, but evolves after site invasion. The process transforming resident/healthy nodes into invader/mutant/diseased nodes is described by the Moran model. We explore the sources of robustness (or its counterpart, the propensity to spread favourable innovations) of the US high-voltage power grid network, the Internet2 academic network, and the C. elegans connectome. We compare them to three modular and non-modular benchmark networks, and samples of one thousand random networks with the same degree distribution. It is found that, contrary to what happens with networks of small order, fixation probability and robustness are poorly correlated with most standard statistics, but they depend strongly on the degree distribution. While community detection techniques are able to detect the existence of a central core in Internet2, they are not effective in detecting hierarchical structures whose topological complexity arises from the repetition of a few rules. Box counting dimension and Rent's rule are applied to show a subtle trade-off between topological and wiring complexity.

  19. Exploring the topological sources of robustness against invasion in biological and technological networks

    PubMed Central

    Alcalde Cuesta, Fernando; González Sequeiros, Pablo; Lozano Rojo, Álvaro

    2016-01-01

    For a network, the accomplishment of its functions despite perturbations is called robustness. Although this property has been extensively studied, in most cases, the network is modified by removing nodes. In our approach, it is no longer perturbed by site percolation, but evolves after site invasion. The process transforming resident/healthy nodes into invader/mutant/diseased nodes is described by the Moran model. We explore the sources of robustness (or its counterpart, the propensity to spread favourable innovations) of the US high-voltage power grid network, the Internet2 academic network, and the C. elegans connectome. We compare them to three modular and non-modular benchmark networks, and samples of one thousand random networks with the same degree distribution. It is found that, contrary to what happens with networks of small order, fixation probability and robustness are poorly correlated with most standard statistics, but they depend strongly on the degree distribution. While community detection techniques are able to detect the existence of a central core in Internet2, they are not effective in detecting hierarchical structures whose topological complexity arises from the repetition of a few rules. Box counting dimension and Rent’s rule are applied to show a subtle trade-off between topological and wiring complexity. PMID:26861189

  20. Efficient self-organizing multilayer neural network for nonlinear system modeling.

    PubMed

    Han, Hong-Gui; Wang, Li-Dan; Qiao, Jun-Fei

    2013-07-01

    It has been shown extensively that the dynamic behaviors of a neural system are strongly influenced by the network architecture and learning process. To establish an artificial neural network (ANN) with a self-organizing architecture and a suitable learning algorithm for nonlinear system modeling, an automatic axon-neural network (AANN) is investigated in the following respects. First, the network architecture is constructed automatically, changing both the number of hidden neurons and the topology of the neural network during the training process. The adaptive connecting-and-pruning algorithm (ACP) introduced here is a mixed-mode operation, equivalent to pruning or adding connections between neurons as well as inserting required neurons directly. Second, the weights are adjusted using a feedforward computation (FC) to obtain the gradient information during learning. Unlike most previous studies, AANN is able to self-organize its architecture and weights and thereby improve network performance. The proposed AANN has been tested on a number of benchmark problems, ranging from nonlinear function approximation to nonlinear system modeling. The experimental results show that AANN can achieve better performance than some existing neural networks. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  1. The adenosine triphosphate test is a rapid and reliable audit tool to assess manual cleaning adequacy of flexible endoscope channels.

    PubMed

    Alfa, Michelle J; Fatima, Iram; Olson, Nancy

    2013-03-01

    The study objective was to verify that the adenosine triphosphate (ATP) benchmark of <200 relative light units (RLUs) was achievable in a busy endoscopy clinic that followed the manufacturer's manual cleaning instructions. All channels from patient-used colonoscopes (20) and duodenoscopes (20) in a tertiary care hospital endoscopy clinic were sampled after manual cleaning and tested for residual ATP. The ATP test benchmark for adequate manual cleaning was set at <200 RLUs. The benchmark for protein was <6.4 μg/cm(2), and, for bioburden, it was <4-log10 colony-forming units/cm(2). Our data demonstrated that 96% (115/120) of channels from 20 colonoscopes and 20 duodenoscopes evaluated met the ATP benchmark of <200 RLUs. The 5 channels that exceeded 200 RLUs were all elevator guide-wire channels. All 120 of the manually cleaned endoscopes tested had protein and bioburden levels that were compliant with accepted benchmarks for manual cleaning for suction-biopsy, air-water, and auxiliary water channels. Our data confirmed that, by following the endoscope manufacturer's manual cleaning recommendations, 96% of channels in gastrointestinal endoscopes would have <200 RLUs for the ATP test kit evaluated and would meet the accepted clean benchmarks for protein and bioburden. Copyright © 2013 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  2. Application of Benchmark Examples to Assess the Single and Mixed-Mode Static Delamination Propagation Capabilities in ANSYS

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The application of benchmark examples for the assessment of quasi-static delamination propagation capabilities is demonstrated for ANSYS. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation in commercial finite element codes based on the virtual crack closure technique (VCCT). The examples selected are based on two-dimensional finite element models of Double Cantilever Beam (DCB), End-Notched Flexure (ENF), Mixed-Mode Bending (MMB) and Single Leg Bending (SLB) specimens. First, the quasi-static benchmark examples were recreated for each specimen using the current implementation of VCCT in ANSYS. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in the finite element software. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall, the results are encouraging, but further assessment for three-dimensional solid models is required.

  3. Multisensor benchmark data for riot control

    NASA Astrophysics Data System (ADS)

    Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter

    2008-10-01

    Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and thus is responsible for excesses. Putting this single person out of action in most cases leads to a de-escalating situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data of well-known quality are needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Finally, once algorithm development has finished and marketing aspects emerge, conformance to specifications must be proved. This paper describes a multisensor benchmark which serves exactly this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup, and finally we give some insight into quality measures of the multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g., tracking of individuals in a crowd.

  4. Relationships between sources of acid mine drainage and the hydrochemistry of acid effluents during rainy season in the Iberian Pyrite Belt.

    PubMed

    Pérez-Ostalé, E; Grande, J A; Valente, T; de la Torre, M L; Santisteban, M; Fernández, P; Diaz-Curiel, J

    2016-01-01

    In the Iberian Pyrite Belt (IPB), southwest Spain, a prolonged and intense mining activity of more than 4,500 years has resulted in almost a hundred mines scattered through the region. After years of inactivity, these mines are still causing high levels of hydrochemical degradation in the fluvial network. This situation represents a unique scenario in the world, considering the magnitude and intensity of the contamination processes. In order to obtain a benchmark regarding the degree of acid mine drainage (AMD) pollution in the aquatic environment, the relationship between the areas occupied by the sulfide mines and the characteristics of the respective effluents after rainfall was analysed. The methodology developed, which includes the design of a sampling network, analytical treatment and cluster analysis, is a useful tool for diagnosing the level of contamination by AMD in an entire metallogenic province, at the scale of each mining group. The results show the relationship between sulfate, total dissolved solids and electrical conductivity, as well as other parameters that are typically associated with AMD and the major elements that compose the polymetallic sulfides of the IPB. The analysis also indicates a weak association between the affected area and the other variables.

  5. INFORMAS (International Network for Food and Obesity/non-communicable diseases Research, Monitoring and Action Support): summary and future directions.

    PubMed

    Kumanyika, S

    2013-10-01

    This supplement presents the foundational elements for INFORMAS (International Network for Food and Obesity/non-communicable diseases Research, Monitoring and Action Support). As explained in the overview article by Swinburn and colleagues, INFORMAS has a compelling rationale and has set forth clear objectives, outcomes, principles and frameworks for monitoring and benchmarking key aspects of food environments and the policies and actions that influence the healthiness of food environments. This summary highlights the proposed monitoring approaches for the 10 interrelated INFORMAS modules: public and private sector policies and actions; key aspects of food environments (food composition, labelling, promotion, provision, retail, prices, and trade and investment) and population outcomes (diet quality). This ambitious effort should be feasible when approached in a step-wise manner, taking into account existing monitoring efforts, data sources, country contexts and capacity, and when adequately resourced. After protocol development and pilot testing of the modules, INFORMAS aims to be a sustainable, low-cost monitoring framework. Future directions relate to institutionalization, implementation and, ultimately, to leveraging INFORMAS data in ways that will bring key drivers of food environments into alignment with public health goals. © 2013 The Authors. Obesity Reviews published by John Wiley & Sons Ltd on behalf of the International Association for the Study of Obesity.

  6. Cascading failures in complex networks with community structure

    NASA Astrophysics Data System (ADS)

    Lin, Guoqiang; di, Zengru; Fan, Ying

    2014-12-01

    Much empirical evidence shows that when attacked with cascading failures, scale-free or even random networks tend to collapse more extensively when the initially deleted node has higher betweenness. Meanwhile, in networks with strong community structure, high-betweenness nodes tend to be bridge nodes that link different communities, and the removal of such nodes will reduce only the connections among communities, leaving the networks fairly stable. Understanding what affects cascading failures and how to protect or attack networks with strong community structure is therefore of interest. In this paper, we have constructed scale-free Community Networks (SFCN) and Random Community Networks (RCN). We applied these networks, along with the Lancichinetti-Fortunato-Radicchi (LFR) benchmark, to the cascading-failure scenario to explore their vulnerability to attack and the relationship between cascading failures and the degree distribution and community structure of a network. The numerical results show that when the networks have a power-law degree distribution, a stronger community structure will result in the failure of fewer nodes. In addition, the initial removal of the node with the highest betweenness will not lead to the worst cascading, i.e. the largest avalanche size. The Betweenness Overflow (BOF), an index that we developed, is an effective indicator of this tendency. The RCN, however, display a different result. In addition, the avalanche size of each node can be adopted as an index to evaluate the importance of the node.
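
    A generic, hedged sketch of the attack scenario studied here is given below: a Motter-Lai style load-capacity cascade (load taken as betweenness) triggered by removing either the highest-betweenness node or a random node on a scale-free test graph. The cascade rule, tolerance value, and test network are illustrative assumptions, not the authors' SFCN/RCN construction or their BOF index.

    ```python
    # Sketch: load-capacity cascading failure triggered by a single removal.
    import random
    import networkx as nx

    def cascade_size(G, start_node, tolerance=0.3):
        """Number of nodes that fail after removing start_node (load = betweenness)."""
        G = G.copy()
        capacity = {n: (1 + tolerance) * b
                    for n, b in nx.betweenness_centrality(G).items()}
        failed = {start_node}
        G.remove_node(start_node)
        while True:
            load = nx.betweenness_centrality(G)
            over = [n for n, l in load.items() if l > capacity[n]]
            if not over:
                return len(failed)
            failed.update(over)
            G.remove_nodes_from(over)

    random.seed(0)
    G = nx.barabasi_albert_graph(200, 2, seed=0)        # scale-free test network
    bc = nx.betweenness_centrality(G)
    top = max(bc, key=bc.get)
    print("highest-betweenness trigger:", cascade_size(G, top))
    print("random trigger             :", cascade_size(G, random.choice(list(G.nodes))))
    ```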

  7. Interplanetary Overlay Network Bundle Protocol Implementation

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott C.

    2011-01-01

    The Interplanetary Overlay Network (ION) system's BP package, an implementation of the Delay-Tolerant Networking (DTN) Bundle Protocol (BP) and supporting services, has been specifically designed to be suitable for use on deep-space robotic vehicles. Although the ION BP implementation is unique in its use of zero-copy objects for high performance, and in its use of resource-sensitive rate control, it is fully interoperable with other implementations of the BP specification (Internet RFC 5050). The ION BP implementation is built using the same software infrastructure that underlies the implementation of the CCSDS (Consultative Committee for Space Data Systems) File Delivery Protocol (CFDP) built into the flight software of Deep Impact. It is designed to minimize resource consumption, while maximizing operational robustness. For example, no dynamic allocation of system memory is required. Like all the other ION packages, ION's BP implementation is designed to port readily between Linux and Solaris (for easy development and for ground system operations) and VxWorks (for flight systems operations). The exact same source code is exercised in both environments. Initially included in the ION BP implementation are the following: libraries of functions used in constructing bundle forwarders and convergence-layer (CL) input and output adapters; a simple prototype bundle forwarder and associated CL adapters designed to run over an IP-based local area network; administrative tools for managing a simple DTN infrastructure built from these components; a background daemon process that silently destroys bundles whose time-to-live intervals have expired; a library of functions exposed to applications, enabling them to issue and receive data encapsulated in DTN bundles; and some simple applications that can be used for system checkout and benchmarking.

  8. Design of hybrid radial basis function neural networks (HRBFNNs) realized with the aid of hybridization of fuzzy clustering method (FCM) and polynomial neural networks (PNNs).

    PubMed

    Huang, Wei; Oh, Sung-Kwun; Pedrycz, Witold

    2014-12-01

    In this study, we propose Hybrid Radial Basis Function Neural Networks (HRBFNNs) realized with the aid of a fuzzy clustering method (Fuzzy C-Means, FCM) and polynomial neural networks. Fuzzy clustering, used to form information granules, is employed to overcome a possible curse of dimensionality, while the polynomial neural network is utilized to build local models. Furthermore, a genetic algorithm (GA) is exploited to optimize the essential design parameters of the network (including the fuzzification coefficient, the number of input polynomial fuzzy neurons (PFNs), and a specific subset of input PFNs). To reduce the dimensionality of the input space, principal component analysis (PCA) is considered as a sound preprocessing vehicle. The performance of the HRBFNNs is quantified through a series of experiments, in which we use several modeling benchmarks of different levels of complexity (different numbers of input variables and amounts of available data). A comparative analysis reveals that the proposed HRBFNNs exhibit higher accuracy in comparison to the accuracy produced by some models reported previously in the literature. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Robust Analysis of Network-Based Real-Time Kinematic for GNSS-Derived Heights.

    PubMed

    Bae, Tae-Suk; Grejner-Brzezinska, Dorota; Mader, Gerald; Dennis, Michael

    2015-10-26

    New guidelines and procedures for real-time (RT) network-based solutions are required in order to support Global Navigation Satellite System (GNSS) derived heights. Two kinds of experiments were carried out to analyze the performance of the network-based real-time kinematic (RTK) solutions. New test marks were installed in different surrounding environments, and the existing GPS benchmarks were used for analyzing the effect of different factors, such as baseline lengths and antenna types, on the final accuracy and reliability of the height estimation. The RT solutions are categorized into three groups: single-base RTK, multiple-epoch network RTK (mRTN), and single-epoch network RTK (sRTN). The RTK solution can be biased by up to 9 mm depending on the surrounding environment, but there was no notable bias for a longer reference base station (about 30 km). In addition, the occupation time for the network RTK was investigated in various cases. There is no explicit bias in the solution for different durations, but smoother results were obtained for longer durations. Further investigation is needed into the effect of changing the occupation time between solutions and into the possibility of using single-epoch solutions in precise determination of heights by GNSS.

  10. Highball: A high speed, reserved-access, wide area network

    NASA Technical Reports Server (NTRS)

    Mills, David L.; Boncelet, Charles G.; Elias, John G.; Schragger, Paul A.; Jackson, Alden W.

    1990-01-01

    A network architecture called Highball and a preliminary design for a prototype, wide-area data network designed to operate at speeds of 1 Gbps and beyond are described. It is intended for applications requiring high speed burst transmissions where some latency between requesting a transmission and granting the request can be anticipated and tolerated. Examples include real-time video and disk-disk transfers, national filestore access, remote sensing, and similar applications. The network nodes include an intelligent crossbar switch, but have no buffering capabilities; thus, data must be queued at the end nodes. There are no restrictions on the network topology, link speeds, or end-end protocols. The end system, nodes, and links can operate at any speed up to the limits imposed by the physical facilities. An overview of an initial design approach is presented and is intended as a benchmark upon which a detailed design can be developed. It describes the network architecture and proposed access protocols, as well as functional descriptions of the hardware and software components that could be used in a prototype implementation. It concludes with a discussion of additional issues to be resolved in continuing stages of this project.

  11. A Fuzzy analytical hierarchy process approach in irrigation networks maintenance

    NASA Astrophysics Data System (ADS)

    Riza Permana, Angga; Rintis Hadiani, Rr.; Syafi'i

    2017-11-01

    Ponorogo Regency has 440 irrigation areas with a total area of 17,950 ha. A limited budget and lack of maintenance have caused decreased function of the irrigation networks. The aim of this study is to build an appropriate system to determine the weighted indices of the rank prioritization criteria for irrigation network maintenance using a fuzzy-based methodology. The criteria used are the physical condition of the irrigation networks, area of service, estimated maintenance cost, and efficiency of irrigation water distribution. Twenty-six experts in the field of water resources in the Dinas Pekerjaan Umum were asked to fill out the questionnaire, and the results are used as a benchmark to determine the rank of irrigation network maintenance priority. The results demonstrate that the physical condition of irrigation networks criterion (W1 = 0.279) has the greatest impact on the assessment process. The area of service (W2 = 0.270), efficiency of irrigation water distribution (W4 = 0.249), and estimated maintenance cost (W3 = 0.202) criteria rank next in effectiveness, respectively. The proposed methodology deals with uncertain and vague data using triangular fuzzy numbers and, moreover, provides a comprehensive decision-making technique to assess maintenance priority in irrigation networks.
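
    A hedged sketch of the weighting step follows: triangular fuzzy judgements are defuzzified (here by the centroid rule, one common choice), normalised into criterion weights, and used to rank schemes by a weighted score. The fuzzy numbers and scheme scores are hypothetical; the paper's weights W1..W4 come from its 26 expert questionnaires and its full fuzzy-AHP procedure.

    ```python
    # Sketch: triangular fuzzy weights -> crisp weights -> maintenance ranking.
    def defuzzify(tfn):
        """Centroid of a triangular fuzzy number (low, mode, high)."""
        low, mode, high = tfn
        return (low + mode + high) / 3.0

    # Hypothetical aggregated fuzzy judgements for the four criteria
    fuzzy_weights = {
        "physical condition": (0.22, 0.28, 0.34),
        "service area":       (0.21, 0.27, 0.33),
        "water efficiency":   (0.19, 0.25, 0.31),
        "maintenance cost":   (0.15, 0.20, 0.26),
    }
    crisp = {c: defuzzify(t) for c, t in fuzzy_weights.items()}
    total = sum(crisp.values())
    weights = {c: v / total for c, v in crisp.items()}      # normalised weights

    # Hypothetical criterion scores (0-1) for three irrigation areas
    schemes = {
        "Area A": {"physical condition": 0.4, "service area": 0.9, "water efficiency": 0.6, "maintenance cost": 0.7},
        "Area B": {"physical condition": 0.8, "service area": 0.5, "water efficiency": 0.7, "maintenance cost": 0.4},
        "Area C": {"physical condition": 0.3, "service area": 0.6, "water efficiency": 0.5, "maintenance cost": 0.9},
    }
    ranking = sorted(schemes, key=lambda s: sum(weights[c] * schemes[s][c] for c in weights), reverse=True)
    print(ranking)   # maintenance priority order under these assumed scores
    ```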

  12. Development and Applications of Benchmark Examples for Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2013-01-01

    The development and application of benchmark examples for the assessment of quasi-static delamination propagation capabilities is demonstrated for ANSYS® and Abaqus/Standard®. The examples selected are based on finite element models of Double Cantilever Beam (DCB) and Mixed-Mode Bending (MMB) specimens. First, quasi-static benchmark results were created based on an approach developed previously. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedures implemented in ANSYS® and Abaqus/Standard®. Input control parameters were varied to study the effect on the computed delamination propagation. Overall, the benchmarking procedure proved valuable by highlighting the issues associated with choosing the appropriate input parameters for the VCCT implementations in ANSYS® and Abaqus/Standard®. However, further assessment of mixed-mode delamination fatigue onset and growth is required. Additional studies should include the assessment of the propagation capabilities in more complex specimens and on a structural level.

  13. Data classification using metaheuristic Cuckoo Search technique for Levenberg Marquardt back propagation (CSLM) algorithm

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd.; Khan, Abdullah; Rehman, M. Z.

    2015-05-01

    Nature-inspired metaheuristic techniques provide derivative-free solutions to complex problems. One of the latest additions to this group of nature-inspired optimization procedures is the Cuckoo Search (CS) algorithm. Artificial Neural Network (ANN) training is an optimization task, since the training process seeks an optimal weight set for the network. Traditional training algorithms have limitations such as becoming trapped in local minima and slow convergence rates. This study proposes a new technique, CSLM, which combines the best features of two known algorithms, back-propagation (BP) and the Levenberg-Marquardt algorithm (LM), to improve the convergence speed of ANN training and avoid the local minima problem. Selected benchmark classification datasets are used for simulation. The experimental results show that the proposed Cuckoo Search with Levenberg-Marquardt algorithm performs better than the other algorithms used in this study.

  14. Improved mine blast algorithm for optimal cost design of water distribution systems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon

    2015-12-01

    The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.
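
    The objective such metaheuristics minimise can be sketched as total pipe cost plus a penalty for hydraulic constraint violations (normally evaluated through EPANET); the unit costs, diameters, lengths and pressures below are made-up illustrations, not data from the Hanoi, New York tunnels or Balerma networks.

    ```python
    # Sketch: penalised least-cost objective for pipe sizing (illustrative data).
    def design_cost(diameters_mm, lengths_m, unit_cost_per_m, min_pressures_m,
                    required_head_m=30.0, penalty=1e6):
        cost = sum(unit_cost_per_m[d] * L for d, L in zip(diameters_mm, lengths_m))
        violation = sum(max(0.0, required_head_m - p) for p in min_pressures_m)
        return cost + penalty * violation

    unit_cost_per_m = {100: 15.0, 150: 22.0, 200: 31.0, 250: 43.0}  # $/m (hypothetical)
    candidate = [150, 200, 100, 250]          # chosen diameters for four links (mm)
    lengths = [500.0, 800.0, 300.0, 650.0]    # link lengths in metres
    pressures = [33.1, 30.4, 29.2, 35.0]      # node pressures from a hydraulic solver (m)
    print(design_cost(candidate, lengths, unit_cost_per_m, pressures))
    ```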

  15. NASA Astrophysics Data System (ADS)

    Knosp, B.; Neely, S.; Zimdars, P.; Mills, B.; Vance, N.

    2007-12-01

    The Microwave Limb Sounder (MLS) Science Computing Facility (SCF) stores over 50 terabytes of data, has over 240 computer processing hosts, and 64 users from around the world. These resources are spread over three primary geographical locations - the Jet Propulsion Laboratory (JPL), Raytheon RIS, and New Mexico Institute of Mining and Technology (NMT). A need for a grid network system was identified and defined to solve the problem of users competing for finite, and increasingly scarce, MLS SCF computing resources. Using Sun's Grid Engine software, a grid network was successfully created in a development environment that connected the JPL and Raytheon sites, established master and slave hosts, and demonstrated that transfer queues for jobs can work among multiple clusters in the same grid network. This poster will first describe MLS SCF resources and the lessons that were learned in the design and development phase of this project. It will then go on to discuss the test environment and plans for deployment by highlighting benchmarks and user experiences.

  16. RBind: computational network method to predict RNA binding sites.

    PubMed

    Wang, Kaili; Jian, Yiren; Wang, Huiwen; Zeng, Chen; Zhao, Yunjie

    2018-04-26

    Non-coding RNA molecules play essential roles by interacting with other molecules to perform various biological functions. However, it is difficult to determine RNA structures due to their flexibility. At present, the number of experimentally solved RNA-ligand and RNA-protein structures is still insufficient. Therefore, binding site prediction for non-coding RNAs is required to understand their functions. Current RNA binding site prediction algorithms produce many false positive nucleotides that are distant from the binding sites. Here, we present a network approach, RBind, to predict RNA binding sites. We benchmarked RBind on RNA-ligand and RNA-protein datasets. The average accuracy of 0.82 in RNA-ligand and 0.63 in RNA-protein testing showed that this network strategy has reliable accuracy for binding site prediction. The codes and datasets are available at https://zhaolab.com.cn/RBind. yjzhaowh@mail.ccnu.edu.cn. Supplementary data are available at Bioinformatics online.

  17. Evolving neural networks for strategic decision-making problems.

    PubMed

    Kohl, Nate; Miikkulainen, Risto

    2009-04-01

    Evolution of neural networks, or neuroevolution, has been a successful approach to many low-level control problems such as pole balancing, vehicle control, and collision warning. However, certain types of problems, such as those involving strategic decision-making, have remained difficult for neuroevolution to solve. This paper evaluates the hypothesis that such problems are difficult because they are fractured: the correct action varies discontinuously as the agent moves from state to state. A method for measuring fracture using the concept of function variation is proposed and, based on this concept, two methods for dealing with fracture are examined: neurons with local receptive fields, and refinement based on a cascaded network architecture. Experiments in several benchmark domains are performed to evaluate how different levels of fracture affect the performance of neuroevolution methods, demonstrating that these two modifications improve performance significantly. These results form a promising starting point for expanding neuroevolution to strategic tasks.

  18. Strong systematicity through sensorimotor conceptual grounding: an unsupervised, developmental approach to connectionist sentence processing

    NASA Astrophysics Data System (ADS)

    Jansen, Peter A.; Watter, Scott

    2012-03-01

    Connectionist language modelling typically has difficulty with syntactic systematicity, or the ability to generalise language learning to untrained sentences. This work develops an unsupervised connectionist model of infant grammar learning. Following the semantic bootstrapping hypothesis, the network distils word categories using a developmentally plausible infant-scale database of grounded sensorimotor conceptual representations, as well as a biologically plausible semantic co-occurrence activation function. The network then uses this knowledge to acquire an early benchmark clausal grammar using correlational learning, and further acquires separate conceptual and grammatical category representations. The network displays strongly systematic behaviour indicative of the general acquisition of the combinatorial systematicity present in the grounded infant-scale language stream, outperforms previous contemporary models that contain primarily noun and verb word categories, and successfully generalises broadly to novel untrained sensorimotor grounded sentences composed of unfamiliar nouns and verbs. Limitations as well as implications for later grammar learning are discussed.

  19. Embarked electrical network robust control based on singular perturbation model.

    PubMed

    Abdeljalil Belhaj, Lamya; Ait-Ahmed, Mourad; Benkhoris, Mohamed Fouad

    2014-07-01

    This paper deals with an approach to modelling, in view of control, for embarked networks, which can be described as strongly coupled multi-source, multi-load systems with nonlinear and poorly known characteristics. This model has to be representative of the system behaviour and easy to handle for regulator synthesis. As a first step, each alternator is modelled and linearized around an operating point and then subdivided into two lower order systems according to singular perturbation theory. RST regulators are designed for each subsystem and tested by means of a software test-bench which allows predicting network behaviour in both steady and transient states. Finally, the designed controllers are implemented on an experimental benchmark consisting of two alternators supplying loads in order to test the dynamic performances in realistic conditions. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Spectral analysis of stellar light curves by means of neural networks

    NASA Astrophysics Data System (ADS)

    Tagliaferri, R.; Ciaramella, A.; Milano, L.; Barone, F.; Longo, G.

    1999-06-01

    Periodicity analysis of unevenly collected data is a relevant issue in several scientific fields. In astrophysics, for example, we have to find the fundamental period of light or radial velocity curves which are unevenly sampled observations of stars. Classical spectral analysis methods are unsatisfactory for solving this problem. In this paper we present a neural-network-based estimator system which performs frequency extraction well in unevenly sampled signals. It uses an unsupervised Hebbian nonlinear neural algorithm to extract, from the interpolated signal, the principal components which, in turn, are used by the MUSIC frequency estimator algorithm to extract the frequencies. The neural network is tolerant to noise and works well even with few points in the sequence. We benchmark the system on synthetic and real signals against the periodogram and the Cramer-Rao lower bound. This work was partially supported by IIASS, by MURST 40% and by the Italian Space Agency.
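
    For context, the classical baseline for unevenly sampled periodicity analysis can be sketched with a Lomb-Scargle periodogram (SciPy); this is an assumed stand-in for the periodogram comparison mentioned above and does not reproduce the Hebbian PCA plus MUSIC estimator proposed in the paper.

    ```python
    # Sketch: Lomb-Scargle periodogram of an unevenly sampled, noisy light curve.
    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0.0, 100.0, 300))          # uneven observation times
    true_freq = 0.7                                    # rad/s, hypothetical star
    flux = np.sin(true_freq * t) + 0.3 * rng.normal(size=t.size)

    freqs = np.linspace(0.05, 2.0, 2000)               # angular frequencies to scan
    power = lombscargle(t, flux - flux.mean(), freqs)
    print("estimated angular frequency:", freqs[np.argmax(power)])
    ```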

  1. Understanding Road Usage Patterns in Urban Areas

    NASA Astrophysics Data System (ADS)

    Wang, Pu; Hunter, Timothy; Bayen, Alexandre M.; Schechtner, Katja; González, Marta C.

    2012-12-01

    In this paper, we combine the most complete record of daily mobility, based on large-scale mobile phone data, with detailed Geographic Information System (GIS) data, uncovering previously hidden patterns in urban road usage. We find that the major usage of each road segment can be traced to its own, surprisingly few, driver sources. Based on this finding we propose a network of road usage by defining a bipartite network framework, demonstrating that in contrast to traditional approaches, which define road importance solely by topological measures, the role of a road segment depends on both its betweenness and its degree in the road usage network. Moreover, our ability to pinpoint the few driver sources contributing to the major traffic flow allows us to create a strategy that achieves a significant reduction of the travel time across the entire road system, compared to a benchmark approach.
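
    A hedged sketch of the bipartite road-usage construction is shown below: driver source zones on one side, road segments on the other, with a segment's degree counting its distinct driver sources. The zones, segments and trips are hypothetical.

    ```python
    # Sketch: a bipartite "driver source -> road segment" usage network.
    import networkx as nx

    trips = [          # (driver source zone, road segment used)
        ("zone_1", "seg_A"), ("zone_1", "seg_B"),
        ("zone_2", "seg_B"), ("zone_2", "seg_C"),
        ("zone_3", "seg_B"), ("zone_3", "seg_D"),
    ]
    B = nx.Graph()
    B.add_nodes_from({z for z, _ in trips}, bipartite="source")
    B.add_nodes_from({s for _, s in trips}, bipartite="segment")
    B.add_edges_from(trips)

    # A segment's degree in the usage network = number of distinct driver sources
    usage_degree = {n: d for n, d in B.degree() if B.nodes[n]["bipartite"] == "segment"}
    print(usage_degree)    # seg_B is used by the most driver sources
    ```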

  2. Performance of the High Sensitivity Open Source Multi-GNSS Assisted GNSS Reference Server.

    NASA Astrophysics Data System (ADS)

    Sarwar, Ali; Rizos, Chris; Glennon, Eamonn

    2015-06-01

    The Open Source GNSS Reference Server (OSGRS) exploits the GNSS Reference Interface Protocol (GRIP) to provide assistance data to GPS receivers. Assistance can be in terms of signal acquisition and in the processing of the measurement data. The data transfer protocol is based on an Extensible Markup Language (XML) schema. The first version of the OSGRS required a direct hardware connection to a GPS device to acquire the data necessary to generate the appropriate assistance. Scenarios of interest for OSGRS users are weak indoor signal strength, obstructed outdoor locations, or heavy multipath environments. This paper describes an improved version of OSGRS that provides alternative assistance support from a number of Global Navigation Satellite Systems (GNSS). The underlying protocol to transfer GNSS assistance data from global casters is the Networked Transport of RTCM (Radio Technical Commission for Maritime Services) over Internet Protocol (NTRIP), and/or the RINEX (Receiver Independent Exchange) format. This expands the assistance and support model of the OSGRS to globally available GNSS data servers connected via internet casters. A variety of formats and versions of RINEX and RTCM streams become available, which strengthens the assistance provisioning capability of the OSGRS platform. The prime motivation for this work was to enhance the system architecture of the OSGRS to take advantage of globally available GNSS data sources. Open source software architectures and assistance models provide acquisition and data processing assistance for GNSS receivers operating in weak signal environments. This paper describes test scenarios to benchmark the OSGRSv2 performance against other Assisted-GNSS solutions. Benchmarking devices include the SPOT satellite messenger, MS-Based & MS-Assisted GNSS, HSGNSS (SiRFstar-III) and Wireless Sensor Networks Assisted-GNSS. Benchmarked parameters include the number of tracked satellites, the Time To First Fix (TTFF), navigation availability and accuracy. Three different configurations of Multi-GNSS assistance servers were used, namely Cloud-Client-Server, the Demilitarized Zone (DMZ) Client-Server and PC-Client-Server, with respect to the connectivity location of client and server. The impact on performance of server and/or client initiation, hardware capability, network latency, processing delay and computation time, together with storage, scalability, processing and load-sharing capabilities, was analysed. The performance of the OSGRS is compared against commercial GNSS, Assisted-GNSS and WSN-enabled GNSS devices. The OSGRS system demonstrated lower TTFF and higher availability.

  3. Improving patient safety culture in Saudi Arabia (2012-2015): trending, improvement and benchmarking.

    PubMed

    Alswat, Khalid; Abdalla, Rawia Ahmad Mustafa; Titi, Maher Abdelraheim; Bakash, Maram; Mehmood, Faiza; Zubairi, Beena; Jamal, Diana; El-Jardali, Fadi

    2017-08-02

    Measuring patient safety culture can provide insight into areas for improvement and help monitor changes over time. This study details the findings of a re-assessment of patient safety culture in a multi-site Medical City in Riyadh, Kingdom of Saudi Arabia (KSA). Results were compared to an earlier assessment conducted in 2012 and benchmarked with regional and international studies. Such assessments can provide hospital leadership with insight on how their hospital is performing on patient safety culture composites as a result of quality improvement plans. This paper also explored the association between patient safety culture predictors and patient safety grade, perception of patient safety, frequency of events reported and number of events reported. We utilized a customized version of the patient safety culture survey developed by the Agency for Healthcare Research and Quality. The Medical City is a tertiary care teaching facility composed of two sites (total capacity of 904 beds). Data was analyzed using SPSS 24 at a significance level of 0.05. A t-Test was used to compare results from the 2012 survey to that conducted in 2015. Two adopted Generalized Estimating Equations in addition to two linear models were used to assess the association between composites and patient safety culture outcomes. Results were also benchmarked against similar initiatives in Lebanon, Palestine and USA. Areas of strength in 2015 included Teamwork within units, and Organizational Learning-Continuous Improvement; areas requiring improvement included Non-Punitive Response to Error, and Staffing. Comparing results to the 2012 survey revealed improvement on some areas but non-punitive response to error and Staffing remained the lowest scoring composites in 2015. Regression highlighted significant association between managerial support, organizational learning and feedback and improved survey outcomes. Comparison to international benchmarks revealed that the hospital is performing at or better than benchmark on several composites. The Medical City has made significant progress on several of the patient safety culture composites despite still having areas requiring additional improvement. Patient safety culture outcomes are evidently linked to better performance on specific composites. While results are comparable with regional and international benchmarks, findings confirm that regular assessment can allow hospitals to better understand and visualize changes in their performance and identify additional areas for improvement.

  4. Benchmark levels for the consumptive water footprint of crop production for different environmental conditions: a case study for winter wheat in China

    NASA Astrophysics Data System (ADS)

    Zhuo, La; Mekonnen, Mesfin M.; Hoekstra, Arjen Y.

    2016-11-01

    Meeting growing food demands while simultaneously shrinking the water footprint (WF) of agricultural production is one of the greatest societal challenges. Benchmarks for the WF of crop production can serve as a reference and be helpful in setting WF reduction targets. The consumptive WF of crops, that is, the consumption of rainwater stored in the soil (green WF) and of irrigation water (blue WF) over the crop growing period, varies spatially and temporally depending on environmental factors such as climate and soil. This study explores which environmental factors should be distinguished when determining benchmark levels for the consumptive WF of crops. To this end, we determine benchmark levels for the consumptive WF of winter wheat production in China for all separate years in the period 1961-2008, for rain-fed vs. irrigated croplands, for wet vs. dry years, for warm vs. cold years, for four different soil classes, and for two different climate zones. We simulate consumptive WFs of winter wheat production with the crop water productivity model AquaCrop at a 5 by 5 arcmin resolution, accounting for water stress only. The results show that (i) benchmark levels determined for individual years for the country as a whole remain within a range of ±20 % around long-term mean levels over 1961-2008, (ii) the WF benchmarks for irrigated winter wheat are 8-10 % larger than those for rain-fed winter wheat, (iii) WF benchmarks for wet years are 1-3 % smaller than those for dry years, (iv) WF benchmarks for warm years are 7-8 % smaller than those for cold years, (v) WF benchmarks differ by about 10-12 % across soil texture classes, and (vi) WF benchmarks for the humid zone are 26-31 % smaller than those for the arid zone, which generally has higher reference evapotranspiration and lower yields in rain-fed fields. We conclude that when determining benchmark levels for the consumptive WF of a crop, it is useful to primarily distinguish between different climate zones. If actual consumptive WFs of winter wheat throughout China were reduced to the benchmark levels set by the best 25 % of Chinese winter wheat production (1224 m3 t-1 for arid areas and 841 m3 t-1 for humid areas), the water saving in an average year would be 53 % of the current water consumption at winter wheat fields in China. The majority of the yield increase and associated improvement in water productivity can be achieved in southern China.
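
    The sketch below illustrates, on synthetic grid-cell data, how a benchmark set by the best 25 % of production and the associated water saving could be computed; the distributions and production weights are assumptions, not AquaCrop output.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative consumptive water footprints (m3 per tonne) of winter wheat for
    # many grid cells in one climate zone; values are synthetic, not AquaCrop output.
    wf = rng.lognormal(mean=7.2, sigma=0.35, size=5000)          # roughly 1000-2000 m3/t
    production = rng.uniform(0.5, 5.0, size=wf.size)             # tonnes per cell

    # Benchmark = WF level achieved by the best 25 % of production (production-weighted).
    order = np.argsort(wf)
    cum_prod = np.cumsum(production[order]) / production.sum()
    benchmark = wf[order][np.searchsorted(cum_prod, 0.25)]

    # Water saving if every cell above the benchmark were brought down to it.
    current_use = np.sum(wf * production)
    capped_use = np.sum(np.minimum(wf, benchmark) * production)
    print(f"benchmark WF = {benchmark:.0f} m3/t, "
          f"saving = {100 * (1 - capped_use / current_use):.1f} %")
    ```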

  5. Small drinking water systems under spatiotemporal water quality variability: a risk-based performance benchmarking framework.

    PubMed

    Bereskie, Ty; Haider, Husnain; Rodriguez, Manuel J; Sadiq, Rehan

    2017-08-23

    Traditional approaches for benchmarking drinking water systems are binary, based solely on the compliance and/or non-compliance of one or more water quality performance indicators against defined regulatory guidelines/standards. The consequence of water quality failure depends on the location within a water supply system as well as the time of year (i.e., season), owing to varying levels of water consumption. Conventional approaches used for water quality comparison fail to incorporate spatiotemporal variability and degrees of compliance and/or non-compliance. This can lead to misleading or inaccurate performance assessment data being used in the performance benchmarking process. In this research, a hierarchical risk-based water quality performance benchmarking framework is proposed to evaluate small drinking water systems (SDWSs) through cross-comparison amongst similar systems. The proposed framework (R WQI framework) is designed to quantify the consequence associated with seasonal and location-specific water quality issues in a given drinking water supply system, to facilitate more efficient decision-making for SDWSs striving for continuous performance improvement. Fuzzy rule-based modelling is used to address the imprecision associated with measuring performance against singular water quality guidelines/standards and the uncertainties present in SDWS operations and monitoring. The proposed R WQI framework has been demonstrated using data collected from 16 SDWSs in Newfoundland and Labrador and Quebec, Canada, and compared to the Canadian Council of Ministers of the Environment WQI, a traditional, guidelines/standards-based approach. The study found that the R WQI framework provides an in-depth picture of the state of water quality and benchmarks SDWSs more rationally, based on the frequency of occurrence and consequence of failure events.
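
    The following sketch illustrates the flavour of fuzzy rule-based grading used to move beyond binary compliance; the membership functions and the two-rule base are invented for illustration and are not the R WQI framework's actual rules.

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def risk_score(exceedance_ratio, season_demand):
        """Fuzzy rating of a water-quality failure (toy rule base).

        exceedance_ratio: measured value / guideline value (1.0 = exactly at limit).
        season_demand:    relative consumption for the season, 0 (low) to 1 (high).
        Returns a crisp risk score in [0, 1].
        """
        minor = tri(exceedance_ratio, 0.8, 1.0, 1.3)
        severe = tri(exceedance_ratio, 1.1, 1.8, 3.0)
        low_use = tri(season_demand, -0.1, 0.0, 0.6)
        high_use = tri(season_demand, 0.4, 1.0, 1.1)

        # Rule 1: severe exceedance AND high consumption -> high risk (0.9)
        # Rule 2: minor exceedance AND low consumption   -> low risk  (0.2)
        w1, w2 = min(severe, high_use), min(minor, low_use)
        return (0.9 * w1 + 0.2 * w2) / (w1 + w2 + 1e-9)   # weighted-average defuzzification

    print(f"summer, 1.6x guideline  -> risk {risk_score(1.6, 0.9):.2f}")
    print(f"winter, 1.05x guideline -> risk {risk_score(1.05, 0.1):.2f}")
    ```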

  6. Impact of quality circles for improvement of asthma care: results of a randomized controlled trial

    PubMed Central

    Schneider, Antonius; Wensing, Michel; Biessecker, Kathrin; Quinzler, Renate; Kaufmann-Kolle, Petra; Szecsenyi, Joachim

    2008-01-01

    Rationale and aims: Quality circles (QCs) are well established as a means of aiding doctors. New quality improvement strategies include benchmarking activities. The aim of this paper was to evaluate the efficacy of QCs for asthma care working either with general feedback or with an open benchmark. Methods: Twelve QCs, involving 96 general practitioners, were organized in a randomized controlled trial. Six worked with traditional anonymous feedback and six with an open benchmark; both had guided discussion from a trained moderator. Forty-three primary care practices agreed to give out questionnaires to patients to evaluate the efficacy of QCs. Results: A total of 256 patients participated in the survey, of whom 185 (72.3%) responded to the follow-up 1 year later. Use of inhaled steroids at baseline was high (69%) and self-management low (asthma education 27%, individual emergency plan 8%, and peak flow meter at home 21%). Guideline adherence in drug treatment increased (P = 0.19), and asthma steps improved (P = 0.02). Delivery of individual emergency plans increased (P = 0.008), and unscheduled emergency visits decreased (P = 0.064). There was no change in asthma education and peak flow meter usage. High medication guideline adherence was associated with reduced emergency visits (OR 0.24; 95% CI 0.07–0.89). Use of theophylline was associated with hospitalization (OR 7.1; 95% CI 1.5–34.3) and emergency visits (OR 4.9; 95% CI 1.6–14.7). There was no difference between traditional and benchmarking QCs. Conclusions: Quality circles working with individualized feedback are effective at improving asthma care. The trial may have been underpowered to detect specific benchmarking effects. Further research is necessary to evaluate strategies for improving the self-management of asthma patients. PMID:18093108

  7. Performance of MODIS satellite and mesoscale model based land surface temperature for soil moisture deficit estimation using Neural Network

    NASA Astrophysics Data System (ADS)

    Srivastava, Prashant K.; Petropoulos, George P.; Gupta, Manika; Islam, Tanvir

    2015-04-01

    Soil Moisture Deficit (SMD) is a key variable in the water and energy exchanges that occur at the land-surface/atmosphere interface. Monitoring SMD is an alternative basis for irrigation scheduling, helping to apply the right quantity of water at the right time. Past studies have found that Land Surface Temperature (LST) is strongly related to SMD, and LST can be estimated from MODIS or from a numerical weather prediction model such as WRF (Weather Research and Forecasting model). Given the importance of SMD, this work focuses on the application of an Artificial Neural Network (ANN) and evaluates its capability for SMD estimation using LST data from MODIS and the WRF mesoscale model. The benchmark SMD, estimated from the Probability Distribution Model (PDM) over the Brue catchment, southwest England, U.K., is used for all calibration and validation experiments. The agreement between observed and simulated SMD is assessed in terms of the Nash-Sutcliffe Efficiency (NSE), the Root Mean Square Error (RMSE), and the percentage bias (%Bias). The application of the ANN confirmed a high capability of WRF and MODIS LST for prediction of SMD. During ANN calibration and validation there was good agreement between benchmark and estimated SMD using MODIS LST, with significantly higher performance than for WRF-simulated LST. The work presented is the first comprehensive application of LST from MODIS and the WRF mesoscale model for hydrological SMD estimation, particularly for a maritime climate. More studies in this direction are recommended to the hydro-meteorological community, so that useful information is accumulated in the technical literature for different geographical locations and climatic conditions. Keywords: WRF, Land Surface Temperature, MODIS satellite, Soil Moisture Deficit, Neural Network
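
    The three verification metrics named above can be computed as in the sketch below; the SMD series are synthetic stand-ins for the PDM benchmark and the two ANN outputs.

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe Efficiency: 1 is perfect, 0 means no better than the mean."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def rmse(obs, sim):
        return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(sim)) ** 2)))

    def pbias(obs, sim):
        """Percentage bias: positive values indicate underestimation of the observations."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * np.sum(obs - sim) / np.sum(obs)

    # Synthetic benchmark SMD and two imperfect simulations (stand-ins for the
    # MODIS-LST- and WRF-LST-driven ANN outputs).
    rng = np.random.default_rng(1)
    benchmark = np.clip(rng.normal(0.4, 0.15, 365), 0, 1)
    sim_a = np.clip(benchmark + rng.normal(0.00, 0.03, 365), 0, 1)
    sim_b = np.clip(benchmark + rng.normal(0.02, 0.07, 365), 0, 1)
    for name, sim in [("sim_a", sim_a), ("sim_b", sim_b)]:
        print(f"{name}: NSE={nse(benchmark, sim):.3f} "
              f"RMSE={rmse(benchmark, sim):.3f} %Bias={pbias(benchmark, sim):+.1f}")
    ```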

  8. Under Construction: Benchmark Assessments and Common Core Math Implementation in Grades K-8. Formative Evaluation Cycle Report for the Math in Common Initiative, Volume 1

    ERIC Educational Resources Information Center

    Flaherty, John, Jr.; Sobolew-Shubin, Alexandria; Heredia, Alberto; Chen-Gaddini, Min; Klarin, Becca; Finkelstein, Neal D.

    2014-01-01

    Math in Common® (MiC) is a five-year initiative that supports a formal network of 10 California school districts as they implement the Common Core State Standards in mathematics (CCSS-M) across grades K-8. As the MiC initiative moves into its second year, one of the central activities that each of the districts is undergoing to support CCSS…

  9. Taming Wild Horses: The Need for Virtual Time-based Scheduling of VMs in Network Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J

    2012-01-01

    The next generation of scalable network simulators employs virtual machines (VMs) to act as high-fidelity models of traffic producer/consumer nodes in simulated networks. However, network simulations can be inaccurate if VMs are not scheduled according to virtual time, especially when many VMs are hosted per simulator core in a multi-core simulator environment. Since VMs are by default free-running, at the outset it is not clear if, and to what extent, their untamed execution affects the results of simulated scenarios. Here, we provide the first quantitative basis for establishing the need for generalized virtual-time scheduling of VMs in network simulators, based on an actual prototyped implementation. To exercise breadth, our system is tested with multiple disparate applications: (a) a set of message-passing parallel programs, (b) a computer worm propagation phenomenon, and (c) a mobile ad hoc wireless network simulation. We define and use error metrics and benchmarks in scaled tests to empirically report the poor match of traditional, fairness-based VM scheduling to VM-based network simulation, and also clearly show the better performance of our simulation-specific scheduler, with up to 64 VMs hosted on a 12-core simulator node.

  10. Analysis and Modeling of DIII-D Experiments With OMFIT and Neural Networks

    NASA Astrophysics Data System (ADS)

    Meneghini, O.; Luna, C.; Smith, S. P.; Lao, L. L.; GA Theory Team

    2013-10-01

    The OMFIT integrated modeling framework is designed to facilitate experimental data analysis and enable integrated simulations. This talk introduces the framework and presents a selection of its applications to the DIII-D experiment. Examples include kinetic equilibrium reconstruction analysis; evaluation of MHD stability in the core and in the edge; and self-consistent predictive steady-state transport modeling. The OMFIT framework also provides the platform for an innovative approach based on neural networks to predict electron and ion energy fluxes. In our study, a multi-layer feed-forward back-propagation neural network is built and trained over a database of DIII-D data. It is found that, given the same input parameters used by the highest-fidelity models, the neural network model is able to reproduce to a large degree the heat transport profiles observed in DIII-D experiments. Once the network is built, the numerical cost of evaluating the transport coefficients is virtually nonexistent, making the neural network model particularly well suited for plasma control and quick exploration of operational scenarios. The implementation of the neural network model and benchmarks against experimental results and gyrokinetic models will be discussed. Work supported in part by the US DOE under DE-FG02-95ER54309.

  11. Temporal neural networks and transient analysis of complex engineering systems

    NASA Astrophysics Data System (ADS)

    Uluyol, Onder

    A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control of spatio-temporal systems and allows different time scales to be represented through the incorporation of a gamma memory. It is initially applied to the benchmark tasks of sunspot and Mackey-Glass series prediction, then extended to the task of power-level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived, applicable to training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
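
    The short-term memory structure at the heart of the LOGF neuron is a digital gamma filter; the sketch below implements the standard gamma-memory tap recursion on a toy signal (the parameter names and values are assumptions).

    ```python
    import numpy as np

    def gamma_memory(signal, order=3, mu=0.5):
        """Run a digital gamma filter bank over a 1-D signal.

        Tap 0 is the raw input; tap k obeys the leaky-integrator recursion
            x_k[t] = (1 - mu) * x_k[t-1] + mu * x_{k-1}[t-1],
        so the memory depth is roughly order / mu samples.
        Returns an array of shape (len(signal), order + 1).
        """
        signal = np.asarray(signal, dtype=float)
        taps = np.zeros((len(signal), order + 1))
        taps[:, 0] = signal
        for t in range(1, len(signal)):
            for k in range(1, order + 1):
                taps[t, k] = (1 - mu) * taps[t - 1, k] + mu * taps[t - 1, k - 1]
        return taps

    # The tap outputs would feed the spatial (static) weights of a LOGF-style neuron.
    x = np.sin(np.linspace(0, 6 * np.pi, 200))
    memory = gamma_memory(x, order=3, mu=0.4)
    print(memory.shape, memory[-1].round(3))
    ```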

  12. ChemTS: an efficient python library for de novo molecular generation

    PubMed Central

    Yang, Xiufeng; Zhang, Jinzhe; Yoshizoe, Kazuki; Terayama, Kei; Tsuda, Koji

    2017-01-01

    Automatic design of organic materials requires black-box optimization in a vast chemical space. In conventional molecular design algorithms, a molecule is built as a combination of predetermined fragments. Recently, deep neural network models such as variational autoencoders and recurrent neural networks (RNNs) have been shown to be effective in de novo design of molecules without any predetermined fragments. This paper presents ChemTS, a novel Python library that explores the chemical space by combining Monte Carlo tree search and an RNN. In a benchmarking problem of optimizing the octanol-water partition coefficient and synthesizability, our algorithm showed superior efficiency in finding high-scoring molecules. ChemTS is available at https://github.com/tsudalab/ChemTS. PMID:29435094

  13. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.

  14. Algorithms for Lightweight Key Exchange †

    PubMed Central

    Santonja, Juan; Zamora, Antonio

    2017-01-01

    Public-key cryptography is too slow for general-purpose encryption, so most applications limit its use as much as possible. Some secure protocols, especially those that enable forward secrecy, make much heavier use of public-key cryptography, increasing the demand for lightweight cryptosystems that can be implemented on low-powered or mobile devices. These performance requirements are even more significant in critical infrastructure and emergency scenarios, where peer-to-peer networks are deployed for increased availability and resiliency. We benchmark several public-key key-exchange algorithms, determine those best suited to the requirements of critical infrastructure and emergency applications, propose a security framework based on these algorithms, and study its application to decentralized node and sensor networks. PMID:28654006
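
    A micro-benchmark in the spirit described above can be written with the Python cryptography package, as sketched below; the round count and the two algorithms chosen (X25519 and ECDH over P-256) are illustrative and do not reproduce the paper's benchmark suite.

    ```python
    import time
    from cryptography.hazmat.primitives.asymmetric import ec, x25519

    def bench(label, keygen, exchange, rounds=200):
        """Time full key-exchange rounds (two keypairs plus one shared-secret derivation)."""
        start = time.perf_counter()
        for _ in range(rounds):
            a, b = keygen(), keygen()
            exchange(a, b.public_key())
        elapsed_ms = (time.perf_counter() - start) / rounds * 1e3
        print(f"{label:12s} {elapsed_ms:7.3f} ms per exchange")

    bench("X25519",
          x25519.X25519PrivateKey.generate,
          lambda priv, pub: priv.exchange(pub))
    bench("ECDH P-256",
          lambda: ec.generate_private_key(ec.SECP256R1()),
          lambda priv, pub: priv.exchange(ec.ECDH(), pub))
    ```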

  15. HRSSA – Efficient hybrid stochastic simulation for spatially homogeneous biochemical reaction networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marchetti, Luca, E-mail: marchetti@cosbi.eu; Priami, Corrado, E-mail: priami@cosbi.eu; University of Trento, Department of Mathematics

    This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating the performance and accuracy of HRSSA against other state-of-the-art algorithms.
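
    HRSSA itself is not reproduced here, but the sketch below shows the exact direct-method SSA that rejection-based and hybrid variants build on, applied to an assumed two-reaction toy network.

    ```python
    import numpy as np

    def ssa(x0, stoich, rate_consts, propensity_fn, t_end, rng=None):
        """Gillespie direct-method SSA (the exact baseline that RSSA/HRSSA accelerate)."""
        rng = rng or np.random.default_rng()
        t, x = 0.0, np.array(x0, dtype=float)
        history = [(t, x.copy())]
        while t < t_end:
            a = propensity_fn(x, rate_consts)
            a0 = a.sum()
            if a0 <= 0:
                break
            t += rng.exponential(1.0 / a0)                 # time to next reaction
            j = rng.choice(len(a), p=a / a0)               # which reaction fires
            x += stoich[j]
            history.append((t, x.copy()))
        return history

    # Toy network (assumed for illustration): S1 -> S2 at rate c0*S1, S2 -> 0 at c1*S2.
    stoich = np.array([[-1, +1],
                       [ 0, -1]])
    props = lambda x, c: np.array([c[0] * x[0], c[1] * x[1]])
    trace = ssa([100, 0], stoich, (0.5, 0.3), props, t_end=20.0)
    print(f"{len(trace)} reaction events, final state {trace[-1][1]}")
    ```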

  16. Chiral topological phases from artificial neural networks

    NASA Astrophysics Data System (ADS)

    Kaubruegger, Raphael; Pastori, Lorenzo; Budich, Jan Carl

    2018-05-01

    Motivated by recent progress in applying techniques from the field of artificial neural networks (ANNs) to quantum many-body physics, we investigate to what extent the flexibility of ANNs can be used to efficiently study systems that host chiral topological phases such as fractional quantum Hall (FQH) phases. With benchmark examples, we demonstrate that training ANNs of restricted Boltzmann machine type in the framework of variational Monte Carlo can numerically solve FQH problems to good approximation. Furthermore, we show by explicit construction how n -body correlations can be kept at an exact level with ANN wave functions exhibiting polynomial scaling with power n in system size. Using this construction, we analytically represent the paradigmatic Laughlin wave function as an ANN state.
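
    The sketch below evaluates the standard (unnormalised) restricted-Boltzmann-machine wave-function amplitude used in such variational Monte Carlo studies; the random real parameters are purely illustrative (chiral phases generally require complex parameters).

    ```python
    import numpy as np

    def rbm_amplitude(spins, a, b, W):
        """Unnormalised RBM wave-function amplitude for one spin configuration.

        psi(s) = exp(a . s) * prod_j 2 cosh(b_j + sum_i W_ij s_i),
        the standard RBM ansatz used in variational Monte Carlo.
        """
        theta = b + spins @ W
        return np.exp(a @ spins) * np.prod(2.0 * np.cosh(theta))

    rng = np.random.default_rng(3)
    n_visible, n_hidden = 8, 16
    a = 0.01 * rng.standard_normal(n_visible)
    b = 0.01 * rng.standard_normal(n_hidden)
    W = 0.05 * rng.standard_normal((n_visible, n_hidden))

    spins = rng.choice([-1.0, 1.0], size=n_visible)
    print("psi(s) =", rbm_amplitude(spins, a, b, W))
    ```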

  17. Quality assurance and improvement: the Pediatric Regional Anesthesia Network.

    PubMed

    Polaner, David M; Martin, Lynn D

    2012-01-01

    Quality assurance and improvement (QA/QI) is a critical activity in medicine. The use of large-scale collaborative databases is increasingly essential to obtain enough reports with which to establish standards of practice and define the incidence of complications and risk/benefit ratios for rare events. Such projects can enhance local QA/QI endeavors by enabling institutions to obtain benchmark data against which to compare their performance and can be used for prospective analyses of inter-institutional differences to determine 'best practice'. The pediatric regional anesthesia network (PRAN) is such a project. The first data cohort is currently being analyzed and offers insight into how such data can be used to detect trends in adverse events and improve care. © 2011 Blackwell Publishing Ltd.

  18. Inference of time-delayed gene regulatory networks based on dynamic Bayesian network hybrid learning method

    PubMed Central

    Yu, Bin; Xu, Jia-Meng; Li, Shan; Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Zhang, Yan; Wang, Ming-Hui

    2017-01-01

    Gene regulatory network (GRN) research reveals complex life phenomena from the perspective of gene interaction and is an important research field in systems biology. Traditional Bayesian networks have a high computational complexity, and the network structure scoring model has a single feature. Information-based approaches cannot identify the direction of regulation. To make up for the shortcomings of the above methods, this paper presents a novel hybrid learning method (DBNCS) based on dynamic Bayesian networks (DBNs) to construct multiple time-delayed GRNs for the first time, combining the comprehensive score (CS) with the DBN model. The DBNCS algorithm first uses the CMI2NI (conditional mutual inclusive information-based network inference) algorithm to learn the network structure profiles, that is, to construct the search space. The redundant regulations are then removed using the recursive optimization (RO) algorithm, thereby reducing the false-positive rate. Next, the network structure profiles are decomposed without loss into a set of cliques, which significantly reduces the computational complexity. Finally, the DBN model is used to identify the direction of gene regulation within the cliques and to search for the optimal network structure. The performance of the DBNCS algorithm is evaluated on the benchmark GRN datasets from the DREAM challenge as well as the SOS DNA repair network in Escherichia coli, and compared with other state-of-the-art methods. The experimental results demonstrate the soundness of the algorithm design and its outstanding performance on the benchmark GRNs. PMID:29113310

  19. Inference of time-delayed gene regulatory networks based on dynamic Bayesian network hybrid learning method.

    PubMed

    Yu, Bin; Xu, Jia-Meng; Li, Shan; Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Zhang, Yan; Wang, Ming-Hui

    2017-10-06

    Gene regulatory network (GRN) research reveals complex life phenomena from the perspective of gene interaction and is an important research field in systems biology. Traditional Bayesian networks have a high computational complexity, and the network structure scoring model has a single feature. Information-based approaches cannot identify the direction of regulation. To make up for the shortcomings of the above methods, this paper presents a novel hybrid learning method (DBNCS) based on dynamic Bayesian networks (DBNs) to construct multiple time-delayed GRNs for the first time, combining the comprehensive score (CS) with the DBN model. The DBNCS algorithm first uses the CMI2NI (conditional mutual inclusive information-based network inference) algorithm to learn the network structure profiles, that is, to construct the search space. The redundant regulations are then removed using the recursive optimization (RO) algorithm, thereby reducing the false-positive rate. Next, the network structure profiles are decomposed without loss into a set of cliques, which significantly reduces the computational complexity. Finally, the DBN model is used to identify the direction of gene regulation within the cliques and to search for the optimal network structure. The performance of the DBNCS algorithm is evaluated on the benchmark GRN datasets from the DREAM challenge as well as the SOS DNA repair network in Escherichia coli, and compared with other state-of-the-art methods. The experimental results demonstrate the soundness of the algorithm design and its outstanding performance on the benchmark GRNs.
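
    The CMI2NI scoring used by DBNCS is not reproduced here, but the sketch below computes the underlying quantity, conditional mutual information, under a joint-Gaussian assumption on synthetic expression data for three genes.

    ```python
    import numpy as np

    def gaussian_cmi(x, y, z=None):
        """Conditional mutual information I(X;Y|Z) under a joint-Gaussian assumption.

        I(X;Y|Z) = 0.5 * ln( |C_xz| * |C_yz| / (|C_z| * |C_xyz|) ),
        with C_* the covariance matrices of the stacked variables.
        """
        def logdet(*cols):
            m = np.atleast_2d(np.cov(np.vstack(cols)))
            return np.linalg.slogdet(m)[1]

        if z is None or len(z) == 0:
            r = np.corrcoef(x, y)[0, 1]
            return -0.5 * np.log(1.0 - r ** 2)
        return 0.5 * (logdet(x, *z) + logdet(y, *z) - logdet(*z) - logdet(x, y, *z))

    rng = np.random.default_rng(5)
    g3 = rng.standard_normal(500)                      # common regulator
    g1 = 0.8 * g3 + 0.2 * rng.standard_normal(500)
    g2 = 0.8 * g3 + 0.2 * rng.standard_normal(500)
    print(f"I(g1;g2)    = {gaussian_cmi(g1, g2):.3f}")        # looks dependent
    print(f"I(g1;g2|g3) = {gaussian_cmi(g1, g2, [g3]):.3f}")  # conditioning removes it
    ```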

  20. Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS

    NASA Technical Reports Server (NTRS)

    Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Tucker, Deanne (Technical Monitor)

    1994-01-01

    We present views and analysis of the execution of several PVM codes for Computational Fluid Dynamics on a network of Sparcstations, including (a) NAS Parallel Benchmarks CG and MG (White, Alund and Sunderam 1993); (b) a multi-partitioning algorithm for NAS Parallel Benchmark SP (Wijngaart 1993); and (c) an overset grid flowsolver (Smith 1993). These views and analysis were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We describe the architecture, operation, and application of AIMS. The AIMS toolkit contains (a) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (b) Monitor, a library of run-time trace-collection routines; (c) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (d) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses X11R5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (a) the impact of long message latencies; (b) the impact of multiprogramming overheads and associated load imbalance; (c) cache and virtual-memory effects; and (d) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs. Features planned for the near future include: (a) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (b) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.

  1. Gaming in risk-adjusted mortality rates: effect of misclassification of risk factors in the benchmarking of cardiac surgery risk-adjusted mortality rates.

    PubMed

    Siregar, Sabrina; Groenwold, Rolf H H; Versteegh, Michel I M; Noyez, Luc; ter Burg, Willem Jan P P; Bots, Michiel L; van der Graaf, Yolanda; van Herwerden, Lex A

    2013-03-01

    Upcoding or undercoding of risk factors could affect the benchmarking of risk-adjusted mortality rates. The aim was to investigate the effect of misclassification of risk factors on the benchmarking of mortality rates after cardiac surgery. A prospective cohort was used comprising all adult cardiac surgery patients in all 16 cardiothoracic centers in The Netherlands from January 1, 2007, to December 31, 2009. A random effects model, including the logistic European system for cardiac operative risk evaluation (EuroSCORE) was used to benchmark the in-hospital mortality rates. We simulated upcoding and undercoding of 5 selected variables in the patients from 1 center. These patients were selected randomly (nondifferential misclassification) or by the EuroSCORE (differential misclassification). In the random patients, substantial misclassification was required to affect benchmarking: a 1.8-fold increase in prevalence of the 4 risk factors changed an underperforming center into an average performing one. Upcoding of 1 variable required even more. When patients with the greatest EuroSCORE were upcoded (ie, differential misclassification), a 1.1-fold increase was sufficient: moderate left ventricular function from 14.2% to 15.7%, poor left ventricular function from 8.4% to 9.3%, recent myocardial infarction from 7.9% to 8.6%, and extracardiac arteriopathy from 9.0% to 9.8%. Benchmarking using risk-adjusted mortality rates can be manipulated by misclassification of the EuroSCORE risk factors. Misclassification of random patients or of single variables will have little effect. However, limited upcoding of multiple risk factors in high-risk patients can greatly influence benchmarking. To minimize "gaming," the prevalence of all risk factors should be carefully monitored. Copyright © 2013 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.

  2. There is no one-size-fits-all product for InSAR; on the inclusion of contextual information for geodetically-proof InSAR data products

    NASA Astrophysics Data System (ADS)

    Hanssen, R. F.

    2017-12-01

    In traditional geodesy, one is interested in determining the coordinates, or the change in coordinates, of predefined benchmarks. These benchmarks are clearly identifiable and are established specifically to be representative of the signal of interest. This holds, e.g., for leveling benchmarks, for triangulation/trilateration benchmarks, and for GNSS benchmarks. The desired coordinates are not identical to the basic measurements, and need to be estimated using robust estimation procedures in which the stochastic nature of the measurements is taken into account. For InSAR, however, the 'benchmarks' are not predefined. In fact, usually we do not know where an effective benchmark is located, even though we can determine its dynamic behavior rather well. This poses several significant problems. First, we cannot describe the quality of the measurements unless we already know the dynamic behavior of the benchmark. Second, if we do not know the quality of the measurements, we cannot compute the quality of the estimated parameters. Third, rather harsh assumptions need to be made to produce a result. These (usually implicit) assumptions differ between processing operators and the software used, and are severely affected by the amount of available data. Fourth, the 'relative' nature of the final estimates is usually not explicitly stated, which is particularly problematic for non-expert users. Finally, whereas conventional geodesy applies rigorous testing to check for measurement or model errors, this is hardly ever done in InSAR geodesy. These problems make it all but impossible to provide a precise, reliable, repeatable, and 'universal' InSAR product or service. Here we evaluate the requirements and challenges involved in moving towards InSAR as a geodetically-proof product. In particular this involves the explicit inclusion of contextual information, as well as InSAR procedures, standards, and a technical protocol, supported by the International Association of Geodesy and the international scientific community.

  3. A New Performance Improvement Model: Adding Benchmarking to the Analysis of Performance Indicator Data.

    PubMed

    Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu

    2016-01-01

    A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within statistically accepted limits, that is, if they show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over a period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need for a more predictable process prompted the need to control variation through an action plan. The action plan was successful, as noted by the shift in the 2014 data compared to the historical average; in addition, the variation was reduced. The model is subject to limitations: for example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
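
    A minimal version of the "check stability, then benchmark" logic can be sketched with an individuals control chart, as below; the monthly rates and the external benchmark value are assumptions, not the hospital's data.

    ```python
    import numpy as np

    def individuals_chart(values):
        """Return center line and 3-sigma control limits for an individuals (I) chart."""
        values = np.asarray(values, float)
        mr_bar = np.mean(np.abs(np.diff(values)))       # average moving range
        sigma = mr_bar / 1.128                          # d2 constant for subgroups of 2
        center = values.mean()
        return center, center - 3 * sigma, center + 3 * sigma

    # Synthetic monthly infection rates per 1000 device-days (illustrative only).
    rates = [3.1, 2.8, 3.4, 3.0, 2.9, 3.3, 3.2, 2.7, 3.1, 3.0, 2.8, 3.2]
    center, lcl, ucl = individuals_chart(rates)
    special_cause = [(i + 1, r) for i, r in enumerate(rates) if not lcl <= r <= ucl]

    if special_cause:
        print("Process unstable; address special causes before benchmarking:", special_cause)
    else:
        benchmark = 2.5   # external benchmark rate (assumed)
        print(f"Stable process: mean={center:.2f}, limits=({lcl:.2f}, {ucl:.2f}); "
              f"{'above' if center > benchmark else 'at or below'} benchmark {benchmark}")
    ```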

  4. Benchmarking protein classification algorithms via supervised cross-validation.

    PubMed

    Kertész-Farkas, Attila; Dhir, Somdutta; Sonego, Paolo; Pacurar, Mircea; Netoteia, Sergiu; Nijveen, Harm; Kuzniar, Arnold; Leunissen, Jack A M; Kocsor, András; Pongor, Sándor

    2008-04-24

    Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates on how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and train sets according to the known subtypes within a database has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was selected to construct reduced size model datasets, suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection that currently includes protein sequence (including protein domains and entire proteins), protein structure and reading frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with comparison algorithms, BLAST, Smith-Waterman, Needleman-Wunsch, as well as 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic estimates of the classifier performance than do random cross-validation schemes. A combination of supervised and random sampling was used to construct model datasets, suitable for algorithm comparison.
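
    The idea of supervised (subtype-aware) cross-validation can be illustrated with scikit-learn's GroupKFold, as in the sketch below; the synthetic features, family/subtype structure, and nearest-neighbour classifier are assumptions, not the benchmark collection itself.

    ```python
    import numpy as np
    from sklearn.model_selection import GroupKFold
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(11)

    # Synthetic "protein" features: 4 superfamilies (labels), each with 3 subtypes (groups).
    X, y, groups = [], [], []
    for family in range(4):
        family_center = np.zeros(20)
        family_center[family * 5:(family + 1) * 5] = 4.0        # family signature
        for subtype in range(3):
            subtype_center = family_center + rng.normal(scale=1.0, size=20)
            X.append(subtype_center + rng.normal(scale=0.7, size=(40, 20)))
            y += [family] * 40
            groups += [family * 3 + subtype] * 40
    X, y, groups = np.vstack(X), np.array(y), np.array(groups)

    # Holding out whole subtypes (groups) estimates generalisation to unseen subtypes,
    # unlike plain random k-fold cross-validation.
    scores = []
    for train, test in GroupKFold(n_splits=3).split(X, y, groups):
        model = KNeighborsClassifier(n_neighbors=5).fit(X[train], y[train])
        scores.append(accuracy_score(y[test], model.predict(X[test])))
    print("subtype-held-out accuracy per fold:", np.round(scores, 3))
    ```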

  5. Global Positioning System surveys of storm-surge sensors deployed during Hurricane Ike, Seadrift, Texas, to Lake Charles, Louisiana, 2008

    USGS Publications Warehouse

    Payne, Jason; Woodward, Brenda K.; Storm, John B.

    2009-01-01

    The U.S. Geological Survey installed a network of pressure sensors at 65 sites along the Gulf Coast from Seadrift, Texas, northeast to Lake Charles, Louisiana, to record the timing, areal extent, and magnitude of inland storm surge and coastal flooding caused by Hurricane Ike in September 2008. A Global Positioning System was used to obtain elevations of reference marks near each sensor. A combination of real-time kinematic (RTK) and static Global Positioning System surveys was done to obtain elevations of reference marks. Leveling relative to reference marks was done to obtain elevations of sensor orifices above the reference marks. This report summarizes the Global Positioning System data collected and processed to obtain reference mark and storm-sensor-orifice elevations for the 59 storm-surge sensors recovered of the original 65 installed, as a necessary prelude to computation of storm-surge elevations. National Geodetic Survey benchmarks were used for RTK surveying. Where National Geodetic Survey benchmarks were not within 12 kilometers of a sensor site, static surveying was done. Additional control points for static surveying were in the form of newly established benchmarks or reestablished existing benchmarks. RTK surveying was used to obtain positions and elevations of reference marks for 29 sensor sites. Static surveying was used to obtain positions and elevations of reference marks for 34 sensor sites; four sites were surveyed using both methods. Multiple quality checks on the RTK-survey and static-survey data were applied. The results of all quality checks indicate that the desired elevation accuracy for the surveys of this report, less than 0.1-meter error, was achieved.

  6. A benchmark for vehicle detection on wide area motion imagery

    NASA Astrophysics Data System (ADS)

    Catrambone, Joseph; Amzovski, Ismail; Liang, Pengpeng; Blasch, Erik; Sheaff, Carolyn; Wang, Zhonghai; Chen, Genshe; Ling, Haibin

    2015-05-01

    Wide area motion imagery (WAMI) has been attracting an increased amount of research attention due to its large spatial and temporal coverage. An important application includes moving target analysis, where vehicle detection is often one of the first steps before advanced activity analysis. While there exist many vehicle detection algorithms, a thorough evaluation of them on WAMI data still remains a challenge mainly due to the lack of an appropriate benchmark data set. In this paper, we address a research need by presenting a new benchmark for wide area motion imagery vehicle detection data. The WAMI benchmark is based on the recently available Wright-Patterson Air Force Base (WPAFB09) dataset and the Temple Resolved Uncertainty Target History (TRUTH) associated target annotation. Trajectory annotations were provided in the original release of the WPAFB09 dataset, but detailed vehicle annotations were not available with the dataset. In addition, annotations of static vehicles, e.g., in parking lots, are also not identified in the original release. Addressing these issues, we re-annotated the whole dataset with detailed information for each vehicle, including not only a target's location, but also its pose and size. The annotated WAMI data set should be useful to community for a common benchmark to compare WAMI detection, tracking, and identification methods.

  7. Preemptive financial strategies help IPAs avoid insolvency.

    PubMed

    Karling, J; Silberman, L

    2000-11-01

    The 1999 collapse in California of practice management giants FPA Medical Management, Inc. and MedPartners, Inc. has caused healthcare provider organizations, particularly independent practice associations (IPAs), to examine critical issues related to financial solvency. Problems such as declining membership, ineffective management, weak contracting, and lack of strategic vision frequently are encountered by troubled provider organizations. The common thread that runs through IPA failures is a combination of unreliable accounting data and inadequate reporting systems. This lack of satisfactory financial and reporting information impairs the ability of the provider group to maintain sufficient funds to cover expenses and pay physicians. Successful, financially stable provider networks use well-defined reporting procedures based on fundamental accounting and financial concepts, as well as a sound methodology for measuring and calculating claims liability estimates. In California, new regulations aimed at encouraging provider organizations to assume preemptive financial strategies are in the process of being adopted. IPAs in every state should consider reviewing these regulations as benchmarks by which to assess their financial procedures.

  8. Predictors and variation of routine home discharge in critically ill adults with cystic fibrosis.

    PubMed

    Oud, Lavi; Chan, Yiu Ming

    2018-06-01

    The short-term outcomes of patients with cystic fibrosis (CF) surviving critical illness were not examined systematically. To determine the factors associated with and variation in rates of routine home discharge among ICU-managed adult CF patients. Predictors of routine home discharge and its hospital-level variation were examined in ICU-managed adults with cystic fibrosis in Texas during 2004-2013. Older age, rural residence, and severity of illness decreased odds of routine home discharge, while hospitalization in facilities accredited as part of the Cystic Fibrosis Foundation Care Center Network nearly doubled the odds of routine home discharge. The median (interquartile) adjusted rate of routine home discharge was 62.0% (31.5-82.5). The identified determinants of routine home discharge can inform clinical decision-making, while the demonstrated wide variation in adjusted across-hospital rates of routine home discharge of ICU-managed adults with CF can provide benchmark data for future quality improvement efforts. Published by Elsevier Inc.

  9. Personalized recommendation based on heat bidirectional transfer

    NASA Astrophysics Data System (ADS)

    Ma, Wenping; Feng, Xiang; Wang, Shanfeng; Gong, Maoguo

    2016-02-01

    Personalized recommendation has become an increasingly popular research topic, which aims to predict future likes and interests from users' past preferences. Traditional recommendation algorithms pay most attention to forecast accuracy by calculating first-order relevance, while ignoring the importance of diversity and novelty, which provide a comfortable experience for customers. There are some contradictions between these three metrics, so an algorithm based on bidirectional transfer is proposed in this paper to resolve this dilemma. We take the view that an object that is associated with a user's history records, or that has been purchased by similar users, should be introduced to that user, and we propose a recommendation approach based on heat bidirectional transfer. Compared with state-of-the-art approaches based on bipartite networks, experiments on two benchmark data sets, Movielens and Netflix, demonstrate that our algorithm performs better on accuracy, diversity, and novelty. Moreover, the method does better at exploiting long-tail commodities and alleviating the cold-start problem.
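
    The proposed bidirectional transfer operator is not reproduced here; the sketch below implements the classic heat-conduction/mass-diffusion hybrid on a tiny user-item bipartite matrix, representative of the family of bipartite-network methods the paper compares against.

    ```python
    import numpy as np

    def hybrid_scores(A, user, lam=0.5):
        """Score unseen items for one user on a user-item bipartite network.

        A[u, i] = 1 if user u collected item i. lam = 0 gives pure heat conduction
        (diversity-favouring), lam = 1 pure mass diffusion (accuracy-favouring);
        this is the standard hybrid operator, used here only as an illustration.
        """
        k_item = A.sum(axis=0)                   # item degrees
        k_user = A.sum(axis=1)                   # user degrees
        # W[i, j]: resource transferred from item j to item i through common users.
        W = (A / k_user[:, None]).T @ A          # sum_u A[u,i] * A[u,j] / k_user[u]
        W /= np.outer(k_item ** (1 - lam), k_item ** lam)
        scores = W @ A[user]
        scores[A[user] > 0] = -np.inf            # do not recommend items already collected
        return scores

    # Tiny synthetic collection matrix: 4 users x 6 items.
    A = np.array([[1, 1, 0, 0, 1, 0],
                  [1, 0, 1, 0, 0, 0],
                  [0, 1, 1, 1, 0, 0],
                  [0, 0, 0, 1, 1, 1]], dtype=float)
    print("ranked items for user 0:", np.argsort(-hybrid_scores(A, user=0)))
    ```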

  10. Automatic benchmarking of homogenization packages applied to synthetic monthly series within the frame of the MULTITEST project

    NASA Astrophysics Data System (ADS)

    Guijarro, José A.; López, José A.; Aguilar, Enric; Domonkos, Peter; Venema, Victor; Sigró, Javier; Brunet, Manola

    2017-04-01

    After the successful inter-comparison of homogenization methods carried out in the COST Action ES0601 (HOME), many methods kept improving their algorithms, suggesting the need to perform new inter-comparison exercises. However, manual application of the methodologies to a large number of testing networks cannot be afforded without involving the work of many researchers over an extended time. The alternative is to make the comparisons as automatic as possible, as in the MULTITEST project, which, funded by the Spanish Ministry of Economy and Competitiveness, tests homogenization methods by applying them to a large number of synthetic networks of monthly temperature and precipitation. One hundred networks of 10 series were sampled from different master networks containing 100 series of 720 values (60 years times 12 months). Three master temperature networks were built with different degrees of cross-correlation between the series in order to simulate conditions of different station densities or climatic heterogeneity. Three master synthetic networks were also developed for precipitation, mimicking the characteristics of three different climates: Atlantic temperate, Mediterranean, and monsoonal. Inhomogeneities were introduced into every network sampled from the master networks, and all publicly available homogenization methods that we could run in an automatic way were applied to them: ACMANT 3.0, Climatol 3.0, MASH 3.03, RHTestV4, USHCN v52d and HOMER 2.6. Most of them were tested with different settings, and their comparative results can be inspected in box-plot graphics of Root Mean Squared Errors and trend biases computed between the homogenized data and their original homogeneous series. In a first stage, inhomogeneities were applied to the synthetic homogeneous series with five different settings of increasing difficulty and realism: i) big shifts in half of the series; ii) the same with a strong seasonality; iii) short-term platforms and local trends; iv) a random number of shifts with random size and location in all series; and v) the same plus seasonality of random amplitude. The shifts were additive for temperature and multiplicative for precipitation. The second stage is dedicated to studying the impact of the number of series in the networks, seasonalities other than sinusoidal, and the occurrence of simultaneous shifts in a high number of series. Finally, tests will be performed on a longer and more realistic benchmark, with a varying number of missing data over time, similar to that used in the COST Action ES0601. These inter-comparisons will be valuable both to the users and to the developers of the tested packages, who can see how their algorithms behave under varied climate conditions.
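
    The two headline verification measures, RMSE and trend bias between homogenized and true series, can be computed as in the sketch below; the synthetic 60-year series and the residual break are assumptions for illustration.

    ```python
    import numpy as np

    def trend_per_decade(series, months_per_year=12):
        """Least-squares linear trend of a monthly series, in units per decade."""
        t = np.arange(len(series)) / months_per_year
        slope = np.polyfit(t, series, 1)[0]
        return slope * 10.0

    def verify_homogenization(truth, homogenized):
        rmse = float(np.sqrt(np.mean((homogenized - truth) ** 2)))
        trend_bias = trend_per_decade(homogenized) - trend_per_decade(truth)
        return rmse, trend_bias

    # Synthetic 60-year monthly temperature series: truth vs. an imperfectly
    # homogenized version with a small residual break (illustrative values only).
    rng = np.random.default_rng(42)
    n = 60 * 12
    truth = 10 + 0.015 * np.arange(n) / 12 + rng.normal(0, 0.8, n)
    homog = truth.copy()
    homog[400:] += 0.15                      # residual inhomogeneity left by the method
    rmse, bias = verify_homogenization(truth, homog)
    print(f"RMSE = {rmse:.3f} degC, trend bias = {bias:+.3f} degC/decade")
    ```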

  11. Developing Starlight connections with UNESCO sites through the Biosphere Smart

    NASA Astrophysics Data System (ADS)

    Marin, Cipriano

    2015-08-01

    The large number of UNESCO sites around the world, in outstanding locations ranging from small islands to cities, makes it possible to build and share a comprehensive knowledge base on good practices and policies for the preservation of night skies, consistent with the protection of the associated scientific, natural, and cultural values. In this context, the Starlight Initiative and other organizations such as IDA play a catalytic role in an essential international process to promote comprehensive, holistic approaches to dark sky preservation, astronomical observation, environmental protection, responsible lighting, sustainable energy, climate change, and global sustainability. Many of these places have the potential to become models of excellence, fostering the recovery of dark skies and their defence against light pollution, including some case studies mentioned in the Portal to the Heritage of Astronomy. Fighting light pollution and recovering the starry sky are already elements of a new emerging culture in biosphere reserves and world heritage sites committed to acting on climate change and sustainable development. Over thirty territories, including biosphere reserves and world heritage sites, have developed successful initiatives to ensure night sky quality and promote sustainable lighting. Clear night skies also provide sustainable income opportunities, as tourists and visitors are eagerly looking for sites with impressive night skies. Taking into account the high visibility and the ability of UNESCO sites to replicate network experiences, the Starlight Initiative has launched an action in cooperation with Biosphere Smart aimed at promoting the Benchmark sites. Biosphere Smart is a global observatory created in partnership with the UNESCO MaB Programme to share good practices and experiences among UNESCO sites. The Benchmark sites window allows access to information on the most relevant astronomical heritage sites, dark sky protected areas, and other places committed to the preservation of the values associated with the night sky. This is a new step ahead in our common task of protecting the starry skies at UNESCO sites.

  12. Benchmark Dose Modeling Estimates of the Concentrations of Inorganic Arsenic That Induce Changes to the Neonatal Transcriptome, Proteome, and Epigenome in a Pregnancy Cohort.

    PubMed

    Rager, Julia E; Auerbach, Scott S; Chappell, Grace A; Martin, Elizabeth; Thompson, Chad M; Fry, Rebecca C

    2017-10-16

    Prenatal inorganic arsenic (iAs) exposure influences the expression of critical genes and proteins associated with adverse outcomes in newborns, in part through epigenetic mediators. The doses at which these genomic and epigenomic changes occur have yet to be evaluated in the context of dose-response modeling. The goal of the present study was to estimate iAs doses that correspond to changes in transcriptomic, proteomic, epigenomic, and integrated multi-omic signatures in human cord blood through benchmark dose (BMD) modeling. Genome-wide DNA methylation, microRNA expression, mRNA expression, and protein expression levels in cord blood were modeled against total urinary arsenic (U-tAs) levels from pregnant women exposed to varying levels of iAs. Dose-response relationships were modeled in BMDExpress, and BMDs representing 10% response levels were estimated. Overall, DNA methylation changes were estimated to occur at lower exposure concentrations in comparison to other molecular endpoints. Multi-omic module eigengenes were derived through weighted gene co-expression network analysis, representing co-modulated signatures across transcriptomic, proteomic, and epigenomic profiles. One module eigengene was associated with decreased gestational age occurring alongside increased iAs exposure. Genes/proteins within this module eigengene showed enrichment for organismal development, including potassium voltage-gated channel subfamily Q member 1 (KCNQ1), an imprinted gene showing differential methylation and expression in response to iAs. Modeling of this prioritized multi-omic module eigengene resulted in a BMD(BMDL) of 58(45) μg/L U-tAs, which was estimated to correspond to drinking water arsenic concentrations of 51(40) μg/L. Results are in line with epidemiological evidence supporting effects of prenatal iAs occurring at levels <100 μg As/L urine. Together, findings present a variety of BMD measures to estimate doses at which prenatal iAs exposure influences neonatal outcome-relevant transcriptomic, proteomic, and epigenomic profiles.
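
    BMDExpress fits several model families per endpoint; the sketch below shows the core idea for a single endpoint, fitting one Hill curve to synthetic dose-response data and solving for the dose giving a 10 % change from the modeled background (BMD10). The data, model choice, and benchmark-response definition are assumptions, not the study's workflow.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit, brentq

    def hill(dose, y0, vmax, kd, n):
        """Hill dose-response model: background y0 plus a saturating increase."""
        return y0 + vmax * dose ** n / (kd ** n + dose ** n)

    # Synthetic expression of one gene vs. urinary arsenic (values are illustrative).
    dose = np.array([0, 5, 10, 25, 50, 100, 200], dtype=float)     # ug/L U-tAs
    resp = np.array([1.00, 1.02, 1.05, 1.14, 1.31, 1.52, 1.60])    # fold change
    params, _ = curve_fit(hill, dose, resp, p0=[1.0, 0.7, 60.0, 1.5],
                          bounds=(1e-6, [2.0, 2.0, 500.0, 5.0]))
    y0 = params[0]

    # BMD10: lowest dose producing a 10 % change relative to the modeled background.
    target = 1.10 * y0
    bmd10 = brentq(lambda d: hill(d, *params) - target, 1e-6, dose.max())
    print(f"fitted params {np.round(params, 3)}, BMD10 = {bmd10:.1f} ug/L U-tAs")
    ```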

  13. The phosphoproteome of Aspergillus nidulans reveals functional association with cellular processes involved in morphology and secretion.

    PubMed

    Ramsubramaniam, Nikhil; Harris, Steven D; Marten, Mark R

    2014-11-01

    We describe the first phosphoproteome of the model filamentous fungus Aspergillus nidulans. Phosphopeptides were enriched using titanium dioxide, separated using a convenient ultra-long reverse phase gradient, and identified using a "high-high" strategy (high mass accuracy on the parent and fragment ions) with higher-energy collisional dissociation. Using this approach 1801 phosphosites, from 1637 unique phosphopeptides, were identified. Functional classification revealed phosphoproteins were overrepresented under GO categories related to fungal morphogenesis: "sites of polar growth," "vesicle mediated transport," and "cytoskeleton organization." In these same GO categories, kinase-substrate analysis of phosphoproteins revealed the majority were target substrates of CDK and CK2 kinase families, indicating these kinase families play a prominent role in fungal morphogenesis. Kinase-substrate analysis also identified 57 substrates for kinases known to regulate secretion of hydrolytic enzymes (e.g. PkaA, SchA, and An-Snf1). Altogether this data will serve as a benchmark that can be used to elucidate regulatory networks functionally associated with fungal morphogenesis and secretion. All MS data have been deposited in the ProteomeXchange with identifier PXD000715 (http://proteomecentral.proteomexchange.org/dataset/PXD000715). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. The International Postal Network and Other Global Flows as Proxies for National Wellbeing.

    PubMed

    Hristova, Desislava; Rutherford, Alex; Anson, Jose; Luengo-Oroz, Miguel; Mascolo, Cecilia

    2016-01-01

    The digital exhaust left by flows of physical and digital commodities provides a rich measure of the nature, strength and significance of relationships between countries in the global network. With this work, we examine how these traces and the network structure can reveal the socioeconomic profile of different countries. We take into account multiple international networks of physical and digital flows, including the previously unexplored international postal network. By measuring the position of each country in the Trade, Postal, Migration, International Flights, IP and Digital Communications networks, we are able to build proxies for a number of crucial socioeconomic indicators such as GDP per capita and the Human Development Index ranking along with twelve other indicators used as benchmarks of national well-being by the United Nations and other international organisations. In this context, we have also proposed and evaluated a global connectivity degree measure applying multiplex theory across the six networks that accounts for the strength of relationships between countries. We conclude by showing how countries with shared community membership over multiple networks have similar socioeconomic profiles. Combining multiple flow data sources can help understand the forces which drive economic activity on a global level. Such an ability to infer proxy indicators in a context of incomplete information is extremely timely in light of recent discussions on measurement of indicators relevant to the Sustainable Development Goals.

  15. The International Postal Network and Other Global Flows as Proxies for National Wellbeing

    PubMed Central

    Hristova, Desislava; Rutherford, Alex; Anson, Jose; Luengo-Oroz, Miguel; Mascolo, Cecilia

    2016-01-01

    The digital exhaust left by flows of physical and digital commodities provides a rich measure of the nature, strength and significance of relationships between countries in the global network. With this work, we examine how these traces and the network structure can reveal the socioeconomic profile of different countries. We take into account multiple international networks of physical and digital flows, including the previously unexplored international postal network. By measuring the position of each country in the Trade, Postal, Migration, International Flights, IP and Digital Communications networks, we are able to build proxies for a number of crucial socioeconomic indicators such as GDP per capita and the Human Development Index ranking along with twelve other indicators used as benchmarks of national well-being by the United Nations and other international organisations. In this context, we have also proposed and evaluated a global connectivity degree measure applying multiplex theory across the six networks that accounts for the strength of relationships between countries. We conclude by showing how countries with shared community membership over multiple networks have similar socioeconomic profiles. Combining multiple flow data sources can help understand the forces which drive economic activity on a global level. Such an ability to infer proxy indicators in a context of incomplete information is extremely timely in light of recent discussions on measurement of indicators relevant to the Sustainable Development Goals. PMID:27248142
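
    One simple way to combine layer-wise degrees into a single connectivity score is sketched below on toy data; the normalisation and averaging scheme is an assumption and may differ from the paper's multiplex measure.

    ```python
    import numpy as np

    # Toy weight matrices for three layers (trade, postal, flights) over five
    # countries; values are illustrative, not the study's data.
    rng = np.random.default_rng(8)
    layers = {name: rng.integers(0, 50, size=(5, 5)) * (1 - np.eye(5, dtype=int))
              for name in ("trade", "postal", "flights")}

    def global_connectivity(layers):
        """Average of each country's min-max-normalised strength across all layers."""
        normalised = []
        for w in layers.values():
            strength = w.sum(axis=0) + w.sum(axis=1)        # in- plus out-strength
            span = strength.max() - strength.min()
            normalised.append((strength - strength.min()) / (span if span else 1.0))
        return np.mean(normalised, axis=0)

    for country, score in enumerate(global_connectivity(layers)):
        print(f"country {country}: multiplex connectivity {score:.2f}")
    ```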

  16. CMIP: a software package capable of reconstructing genome-wide regulatory networks using gene expression data.

    PubMed

    Zheng, Guangyong; Xu, Yaochen; Zhang, Xiujun; Liu, Zhi-Ping; Wang, Zhuo; Chen, Luonan; Zhu, Xin-Guang

    2016-12-23

    A gene regulatory network (GRN) represents interactions of genes inside a cell or tissue, in which vertices and edges stand for genes and their regulatory interactions, respectively. Reconstruction of gene regulatory networks, in particular genome-scale networks, is essential for comparative exploration of different species and mechanistic investigation of biological processes. Currently, most network inference methods are computationally intensive; they are usually effective for small-scale tasks (e.g., networks with a few hundred genes) but struggle to construct GRNs at genome scale. Here, we present a software package for gene regulatory network reconstruction at a genomic level, in which gene interaction is measured by conditional mutual information within a parallel computing framework (hence the package name, CMIP). The package is a greatly improved implementation of our previous PCA-CMI algorithm. In CMIP, we provide not only an automatic threshold determination method but also an effective parallel computing framework for network inference. Performance tests on benchmark datasets show that the accuracy of CMIP is comparable to most current network inference methods. Moreover, tests on synthetic datasets demonstrate that CMIP can handle large, even genome-wide, datasets within an acceptable time period. In addition, successful application on a real genomic dataset confirms the practical applicability of the package. This new software package provides a powerful tool for genomic network reconstruction to the biological community. The software can be accessed at http://www.picb.ac.cn/CMIP/.

  17. Decreased rates of nosocomial endometritis and urinary tract infection after vaginal delivery in a French surveillance network, 1997-2003.

    PubMed

    Ayzac, Louis; Caillat-Vallet, Emmanuelle; Girard, Raphaële; Chapuis, Catherine; Depaix, Florence; Dumas, Anne-Marie; Gignoux, Chantal; Haond, Catherine; Lafarge-Leboucher, Joëlle; Launay, Carine; Tissot-Guerraz, Françoise; Vincent, Agnès; Fabry, Jacques

    2008-06-01

    To identify independent risk factors for endometritis and urinary tract infection (UTI) after vaginal delivery, to monitor changes in nosocomial infection rates, and to derive benchmarks for prevention. Prospective study. We analyzed routine surveillance data for all vaginal deliveries between January 1997 and December 2003 at 66 maternity units participating in the Mater Sud-Est surveillance network. Adjusted odds ratios for the risk of endometritis or UTI were obtained using a logistic regression model. The overall incidence rates were 0.5% for endometritis and 0.3% for UTI. There was a significant decrease in the incidence and risk of endometritis, but not of UTI, during the 7-year period. Significant risk factors for endometritis were fever during labor, parity of 1, and instrumental delivery and/or manual removal of the placenta. Significant risk factors for UTI were urinary infection on admission, premature rupture of membranes (more than 12 hours before admission), blood loss of more than 800 mL, parity of 1, instrumental delivery, and more than 5 vaginal digital examinations. Each maternity unit received a poster showing graphs of the number of expected and observed cases of UTI and endometritis associated with vaginal deliveries, enabling each unit to determine its rank within the network and to initiate prevention programs. Although routine surveillance means additional work for maternity units, our results demonstrate the usefulness of regular targeted monitoring of risk factors and of the most common nosocomial infections in obstetrics. Most of the information needed for monitoring is already present in the patients' records.

  18. Convolutional Neural Network on Embedded Linux(trademark) System-on-Chip: A Methodology and Performance Benchmark

    DTIC Science & Technology

    2016-05-01

    A9 CPU and 15 W for the i7 CPU. A method of accelerating this computation is by using a customized hardware unit called a field-programmable gate... implementation of custom logic to accelerate computational workloads. This FPGA fabric, in addition to the standard programmable logic, contains 220...

  19. Convolutional Neural Network on Embedded Linux System-on-Chip: A Methodology and Performance Benchmark

    DTIC Science & Technology

    2016-05-01

    A9 CPU and 15 W for the i7 CPU. A method of accelerating this computation is by using a customized hardware unit called a field-programmable gate... implementation of custom logic to accelerate computational workloads. This FPGA fabric, in addition to the standard programmable logic, contains 220...

  20. Compiler-directed cache management in multiprocessors

    NASA Technical Reports Server (NTRS)

    Cheong, Hoichi; Veidenbaum, Alexander V.

    1990-01-01

    The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.

  1. High-resolution Self-Organizing Maps for advanced visualization and dimension reduction.

    PubMed

    Saraswati, Ayu; Nguyen, Van Tuc; Hagenbuchner, Markus; Tsoi, Ah Chung

    2018-05-04

    Kohonen's Self Organizing feature Map (SOM) provides an effective way to project high dimensional input features onto a low dimensional display space while preserving the topological relationships among the input features. Recent advances in algorithms that take advantage of modern computing hardware introduced the concept of high resolution SOMs (HRSOMs). This paper investigates the capabilities and applicability of the HRSOM as a visualization tool for cluster analysis and its suitability to serve as a pre-processor in ensemble learning models. The evaluation is conducted on a number of established benchmarks and real-world learning problems, namely, the policeman benchmark, two web spam detection problems, a network intrusion detection problem, and a malware detection problem. It is found that the visualization resulting from an HRSOM provides new insights concerning these learning problems. It is furthermore shown empirically that broad benefits can be expected from the use of HRSOMs in both clustering and classification problems. Copyright © 2018 Elsevier Ltd. All rights reserved.
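
    As a reference point for readers unfamiliar with SOMs, the sketch below shows a plain (low-resolution) SOM training loop in NumPy: find the best-matching unit, then pull its neighborhood toward the input. Map size, learning-rate schedule, and neighborhood width are assumptions for illustration; the HRSOM work relies on a far larger map and a hardware-accelerated implementation.

```python
# Sketch: a basic Self-Organizing Map training loop (plain SOM, not the
# high-resolution variant). Schedules and map size are illustrative assumptions.
import numpy as np

def train_som(data, rows=20, cols=20, epochs=10, lr0=0.5, sigma0=5.0):
    n_features = data.shape[1]
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(rows, cols, n_features))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Best-matching unit: node whose weight vector is closest to x.
            dist = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dist), dist.shape)
            # Decaying learning rate and neighborhood radius.
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 1e-3
            # Gaussian neighborhood around the BMU on the map grid.
            grid_dist = np.linalg.norm(grid - np.array(bmu), axis=-1)
            h = np.exp(-(grid_dist ** 2) / (2.0 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
            step += 1
    return weights

# Toy data: two Gaussian clusters in 3-D.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(5, 1, (100, 3))])
som = train_som(data)
```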

  2. Community-based benchmarking improves spike rate inference from two-photon calcium imaging data.

    PubMed

    Berens, Philipp; Freeman, Jeremy; Deneux, Thomas; Chenkov, Nikolay; McColgan, Thomas; Speiser, Artur; Macke, Jakob H; Turaga, Srinivas C; Mineault, Patrick; Rupprecht, Peter; Gerhard, Stephan; Friedrich, Rainer W; Friedrich, Johannes; Paninski, Liam; Pachitariu, Marius; Harris, Kenneth D; Bolte, Ben; Machado, Timothy A; Ringach, Dario; Stone, Jasmine; Rogerson, Luke E; Sofroniew, Nicolas J; Reimer, Jacob; Froudarakis, Emmanouil; Euler, Thomas; Román Rosón, Miroslav; Theis, Lucas; Tolias, Andreas S; Bethge, Matthias

    2018-05-01

    In recent years, two-photon calcium imaging has become a standard tool to probe the function of neural circuits and to study computations in neuronal populations. However, the acquired signal is only an indirect measurement of neural activity due to the comparatively slow dynamics of fluorescent calcium indicators. Different algorithms for estimating spike rates from noisy calcium measurements have been proposed in the past, but it is an open question how far performance can be improved. Here, we report the results of the spikefinder challenge, launched to catalyze the development of new spike rate inference algorithms through crowd-sourcing. We present ten of the submitted algorithms which show improved performance compared to previously evaluated methods. Interestingly, the top-performing algorithms are based on a wide range of principles from deep neural networks to generative models, yet provide highly correlated estimates of the neural activity. The competition shows that benchmark challenges can drive algorithmic developments in neuroscience.

  3. Establishment of National Laboratory Standards in Public and Private Hospital Laboratories

    PubMed Central

    ANJARANI, Soghra; SAFADEL, Nooshafarin; DAHIM, Parisa; AMINI, Rana; MAHDAVI, Saeed; MIRAB SAMIEE, Siamak

    2013-01-01

    In September 2007, the national standard manual was finalized and officially announced as the minimal quality requirement for all medical laboratories in the country. Apart from auditing individual laboratories, the Reference Health Laboratory has performed benchmarking audits (surveys) of the medical laboratory network in the provinces. The 12th benchmark was performed in Tehran and Alborz provinces, Iran, in 2010 in three stages. We compared different processes, their quality, and their accordance with national standard measures between public and private hospital laboratories. The assessment tool was a standardized checklist consisting of 164 questions. The analysis shows that although implementation of the standard requirements is in most cases more advanced in private laboratories, there is still a long way to go to full compliance, and it will take considerable effort. Differences between laboratories in the public and private sectors, especially in laboratory personnel and management processes, are significant. Lack of motivation probably plays a key role in the less desirable results obtained by public-sector laboratories. PMID:23514840

  4. Deep Visual Attention Prediction

    NASA Astrophysics Data System (ADS)

    Wang, Wenguan; Shen, Jianbing

    2018-05-01

    In this work, we aim to predict human eye fixations in view-free scenes based on an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have made substantial improvements in human attention prediction, CNN-based attention models still need to be improved by efficiently leveraging multi-scale features. Our visual attention network is proposed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency responses. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. Final saliency prediction is achieved via the cooperation of those global and local predictions. Our model is learned in a deep supervision manner, where supervision is fed directly into multi-level layers, instead of the previous approach of providing supervision only at the output layer and propagating it back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of previous approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
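
    A much-simplified sketch of a skip-layer architecture with deep supervision is given below: each stage emits a side saliency map that is upsampled to input resolution, the maps are fused by a 1x1 convolution, and the loss supervises every level. The channel sizes, depth, and fusion scheme are assumptions for illustration, not the network proposed in the paper.

```python
# Sketch: a much-simplified skip-layer saliency network with deep supervision.
# Layer sizes and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipLayerSaliency(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # One 1x1 side-prediction head per stage (local to global saliency).
        self.side = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (16, 32, 64)])
        self.fuse = nn.Conv2d(3, 1, 1)  # fuses the three upsampled side maps

    def forward(self, x):
        size = x.shape[-2:]
        feats, side_maps = x, []
        for stage, head in zip((self.stage1, self.stage2, self.stage3), self.side):
            feats = stage(feats)
            side_maps.append(F.interpolate(head(feats), size=size, mode="bilinear",
                                           align_corners=False))
        fused = self.fuse(torch.cat(side_maps, dim=1))
        return side_maps, fused

def deeply_supervised_loss(side_maps, fused, target):
    # Supervision is applied to every side output as well as the fused map.
    losses = [F.binary_cross_entropy_with_logits(m, target) for m in side_maps]
    losses.append(F.binary_cross_entropy_with_logits(fused, target))
    return sum(losses)

model = SkipLayerSaliency()
images = torch.rand(2, 3, 64, 64)
fixation_maps = torch.rand(2, 1, 64, 64)     # toy "ground-truth" saliency
side_maps, fused = model(images)
loss = deeply_supervised_loss(side_maps, fused, fixation_maps)
loss.backward()
```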

  5. Temporal stability in human interaction networks

    NASA Astrophysics Data System (ADS)

    Fabbri, Renato; Fabbri, Ricardo; Antunes, Deborah Christina; Pisani, Marilia Mello; de Oliveira, Osvaldo Novais

    2017-11-01

    This paper reports on stable (or invariant) properties of human interaction networks, with benchmarks derived from public email lists. Activity, measured through messages sent, was observed over time and topology in snapshots along a timeline and at different scales. Our analysis shows that activity is practically the same for all networks across timescales ranging from seconds to months. The principal components of the participants in the topological metrics space remain practically unchanged as different sets of messages are considered. The activity of participants follows the expected scale-free trace, thus yielding the hub, intermediary and peripheral classes of vertices by comparison against the Erdős-Rényi model. The relative sizes of these three sectors are essentially the same for all email lists and the same along time. Typically, < 15% of the vertices are hubs, 15%-45% are intermediary and > 45% are peripheral vertices. Similar results for the distribution of participants in the three sectors and for the relative importance of the topological metrics were obtained for 12 additional networks from Facebook, Twitter and ParticipaBR. These properties are consistent with the literature and may be general for human interaction networks, which has important implications for establishing a typology of participants based on quantitative criteria.
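
    One possible operationalization of the comparison against an Erdős-Rényi null model is sketched below: degrees in the empirical network are compared with those of a random graph having the same numbers of vertices and edges. The specific cut-offs (maximum and mean ER degree) are assumptions for illustration, not necessarily the thresholds used in the paper.

```python
# Sketch: splitting participants into hub / intermediary / peripheral classes by
# comparing degrees against an Erdős-Rényi null model with the same number of
# vertices and edges. Thresholds are illustrative assumptions.
import networkx as nx

def classify_vertices(G, seed=0):
    n, m = G.number_of_nodes(), G.number_of_edges()
    er = nx.gnm_random_graph(n, m, seed=seed)
    er_degrees = [d for _, d in er.degree()]
    hub_cut = max(er_degrees)
    peripheral_cut = sum(er_degrees) / len(er_degrees)
    classes = {}
    for node, d in G.degree():
        if d > hub_cut:
            classes[node] = "hub"
        elif d < peripheral_cut:
            classes[node] = "peripheral"
        else:
            classes[node] = "intermediary"
    return classes

# Toy scale-free-ish network
G = nx.barabasi_albert_graph(200, 2, seed=1)
classes = classify_vertices(G)
print({c: sum(1 for v in classes.values() if v == c) for c in set(classes.values())})
```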

  6. AlignNemo: a local network alignment method to integrate homology and topology.

    PubMed

    Ciriello, Giovanni; Mina, Marco; Guzzi, Pietro H; Cannataro, Mario; Guerra, Concettina

    2012-01-01

    Local network alignment is an important component of the analysis of protein-protein interaction networks that may lead to the identification of evolutionarily related complexes. We present AlignNemo, a new algorithm that, given the networks of two organisms, uncovers subnetworks of proteins that are related in biological function and topology of interactions. The discovered conserved subnetworks have a general topology and need not correspond to specific interaction patterns, so they more closely fit the models of functional complexes proposed in the literature. The algorithm is able to handle sparse interaction data with an expansion process that at each step explores the local topology of the networks beyond the proteins directly interacting with the current solution. To assess the performance of AlignNemo, we ran a series of benchmarks using statistical measures as well as biological knowledge. Based on reference datasets of protein complexes, AlignNemo shows better performance than other methods in terms of both precision and recall. We show our solutions to be biologically sound using the concept of semantic similarity applied to Gene Ontology vocabularies. The binaries of AlignNemo and supplementary details about the algorithms and the experiments are available at: sourceforge.net/p/alignnemo.

  7. Connectionist Architectures for Time Series Prediction of Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Weigend, Andreas Sebastian

    We investigate the effectiveness of connectionist networks for predicting the future continuation of temporal sequences. The problem of overfitting, particularly serious for short records of noisy data, is addressed by the method of weight-elimination: a term penalizing network complexity is added to the usual cost function in back-propagation. We describe the dynamics of the procedure and clarify the meaning of the parameters involved. From a Bayesian perspective, the complexity term can be usefully interpreted as an assumption about prior distribution of the weights. We analyze three time series. On the benchmark sunspot series, the networks outperform traditional statistical approaches. We show that the network performance does not deteriorate when there are more input units than needed. In the second example, the notoriously noisy foreign exchange rates series, we pick one weekday and one currency (DM vs. US). Given exchange rate information up to and including a Monday, the task is to predict the rate for the following Tuesday. Weight-elimination manages to extract a significant part of the dynamics and makes the solution interpretable. In the third example, the networks predict the resource utilization of a chaotic computational ecosystem for hundreds of steps forward in time.
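
    The weight-elimination penalty referred to above has the standard form lambda * sum_i (w_i/w_0)^2 / (1 + (w_i/w_0)^2), which pushes small weights toward zero while charging large weights a roughly constant cost. The sketch below adds such a term to an ordinary regression loss; the network size, lambda, and w_0 values are assumptions for illustration.

```python
# Sketch: the weight-elimination complexity penalty added to the training cost.
# Small weights are pushed toward zero; large weights incur a near-constant cost.
# Network size, lambda and w0 are illustrative assumptions.
import torch
import torch.nn as nn

def weight_elimination_penalty(model, w0=1.0):
    penalty = 0.0
    for p in model.parameters():
        scaled = (p / w0) ** 2
        penalty = penalty + (scaled / (1.0 + scaled)).sum()
    return penalty

net = nn.Sequential(nn.Linear(12, 8), nn.Tanh(), nn.Linear(8, 1))
x, y = torch.randn(64, 12), torch.randn(64, 1)
lam = 1e-3

pred = net(x)
loss = nn.functional.mse_loss(pred, y) + lam * weight_elimination_penalty(net)
loss.backward()
```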

  8. A Grey Wolf Optimizer for Modular Granular Neural Networks for Human Recognition

    PubMed Central

    Sánchez, Daniela; Melin, Patricia

    2017-01-01

    A grey wolf optimizer for modular neural networks (MNNs) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural network architectures for human recognition; to prove its effectiveness, benchmark databases of ear, iris, and face biometric measures are used for tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists of finding the optimal parameters of its architecture: the number of subgranules, the percentage of data for the training phase, the learning algorithm, the goal error, the number of hidden layers, and the number of neurons per layer. There is now a great variety of approaches and new techniques within evolutionary computing that have emerged to help find optimal solutions to problems or models; bioinspired algorithms are part of this area. In this work, a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm to determine which technique provides better results when applied to human recognition. PMID:28894461
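
    A minimal sketch of the grey wolf optimizer update is given below, applied to a simple continuous test function rather than to MGNN architecture search. Population size, bounds, and iteration count are assumptions for illustration.

```python
# Sketch: a minimal Grey Wolf Optimizer minimizing the sphere function, so the
# update equations are easy to follow. In the paper the optimizer searches over
# MGNN architecture parameters instead. Settings are illustrative assumptions.
import numpy as np

def gwo(objective, dim, bounds, n_wolves=20, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.apply_along_axis(objective, 1, wolves)
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2.0 - 2.0 * t / n_iter          # decreases linearly from 2 to 0
        new_positions = []
        for x in wolves:
            leaders = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                d = np.abs(C * leader - x)   # distance to the leader
                leaders.append(leader - A * d)
            new_positions.append(np.mean(leaders, axis=0))
        wolves = np.clip(np.array(new_positions), lo, hi)
    fitness = np.apply_along_axis(objective, 1, wolves)
    best = wolves[np.argmin(fitness)]
    return best, objective(best)

best, value = gwo(lambda x: np.sum(x ** 2), dim=5, bounds=(-10.0, 10.0))
print(best, value)
```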

  9. A Grey Wolf Optimizer for Modular Granular Neural Networks for Human Recognition.

    PubMed

    Sánchez, Daniela; Melin, Patricia; Castillo, Oscar

    2017-01-01

    A grey wolf optimizer for modular neural networks (MNNs) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural network architectures for human recognition; to prove its effectiveness, benchmark databases of ear, iris, and face biometric measures are used for tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists of finding the optimal parameters of its architecture: the number of subgranules, the percentage of data for the training phase, the learning algorithm, the goal error, the number of hidden layers, and the number of neurons per layer. There is now a great variety of approaches and new techniques within evolutionary computing that have emerged to help find optimal solutions to problems or models; bioinspired algorithms are part of this area. In this work, a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm to determine which technique provides better results when applied to human recognition.

  10. Optimizing a neural network for detection of moving vehicles in video

    NASA Astrophysics Data System (ADS)

    Fischer, Noëlle M.; Kruithof, Maarten C.; Bouma, Henri

    2017-10-01

    In the field of security and defense, it is extremely important to reliably detect moving objects, such as cars, ships, drones and missiles. Detection and analysis of moving objects in cameras near borders could help reduce illicit trading, drug trafficking, irregular border crossing, trafficking in human beings and smuggling. Many recent benchmarks have shown that convolutional neural networks perform well in the detection of objects in images. Most deep-learning research effort focuses on classification or detection in single images. However, the detection of dynamic changes (e.g., moving objects, actions and events) in streaming video is extremely relevant for surveillance and forensic applications. In this paper, we combine an end-to-end feedforward neural network for static detection with a recurrent Long Short-Term Memory (LSTM) network for multi-frame analysis. We present a practical guide with special attention to the selection of the optimizer and batch size. The end-to-end network is able to localize and recognize the vehicles in video from traffic cameras. We show an efficient way to collect relevant in-domain data for training with minimal manual labor. Our results show that the combination with LSTM improves performance for the detection of moving vehicles.
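
    The combination of a per-frame convolutional feature extractor with a recurrent network can be sketched roughly as below: a small CNN embeds each frame, an LSTM aggregates the sequence, and a linear head produces the decision. The backbone, feature sizes, and head are assumptions for illustration, not the networks used in the paper.

```python
# Sketch: per-frame CNN features fed to an LSTM for multi-frame analysis,
# a rough analogue of the static detector + LSTM combination described above.
# Backbone, feature size and head are illustrative assumptions.
import torch
import torch.nn as nn

class FrameCNN(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.proj = nn.Linear(32, feat_dim)

    def forward(self, x):                       # x: (batch, 3, H, W)
        return self.proj(self.features(x).flatten(1))

class VideoVehicleClassifier(nn.Module):
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.cnn = FrameCNN(feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)        # moving vehicle vs. background

    def forward(self, clips):                   # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])            # decision from the last time step

model = VideoVehicleClassifier()
clips = torch.rand(4, 8, 3, 64, 64)             # 4 clips of 8 frames
logits = model(clips)                            # shape: (4, 2)
```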

  11. DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.

    PubMed

    Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei

    2017-07-18

    Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task, e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to explore the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm could save up to 99.9% of memory and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to take full advantage of the algorithmic optimization. Different from the traditional Von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation into close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments on different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency compared with state-of-the-art approaches.

  12. Constructing Neuronal Network Models in Massively Parallel Environments.

    PubMed

    Ippen, Tammo; Eppler, Jochen M; Plesser, Hans E; Diesmann, Markus

    2017-01-01

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.

  13. Constructing Neuronal Network Models in Massively Parallel Environments

    PubMed Central

    Ippen, Tammo; Eppler, Jochen M.; Plesser, Hans E.; Diesmann, Markus

    2017-01-01

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers. PMID:28559808

  14. Benchmarking to Identify Practice Variation in Test Ordering: A Potential Tool for Utilization Management.

    PubMed

    Signorelli, Heather; Straseski, Joely A; Genzen, Jonathan R; Walker, Brandon S; Jackson, Brian R; Schmidt, Robert L

    2015-01-01

    Appropriate test utilization is usually evaluated by adherence to published guidelines. In many cases, medical guidelines are not available. Benchmarking has been proposed as a method to identify practice variations that may represent inappropriate testing. This study investigated the use of benchmarking to identify sites with inappropriate utilization of testing for a particular analyte. We used a Web-based survey to compare 2 measures of vitamin D utilization: overall testing intensity (ratio of total vitamin D orders to blood-count orders) and relative testing intensity (ratio of 1,25(OH)2D to 25(OH)D test orders). A total of 81 facilities contributed data. The average overall testing intensity index was 0.165, or approximately 1 vitamin D test for every 6 blood-count tests. The average relative testing intensity index was 0.055, or one 1,25(OH)2D test for every 18 of the 25(OH)D tests. Both indexes varied considerably. Benchmarking can be used as a screening tool to identify outliers that may be associated with inappropriate test utilization. Copyright© by the American Society for Clinical Pathology (ASCP).
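
    The two indexes are simple ratios, as the sketch below makes explicit for one hypothetical facility; the order counts are invented for illustration.

```python
# Sketch: the two utilization indexes described above, computed from
# hypothetical monthly order counts for one facility (counts are made up).
vitamin_d_orders = 1650          # all vitamin D test orders (25(OH)D + 1,25(OH)2D)
blood_count_orders = 10000
orders_1_25_oh2d = 90
orders_25_ohd = 1560

overall_testing_intensity = vitamin_d_orders / blood_count_orders     # ~0.165
relative_testing_intensity = orders_1_25_oh2d / orders_25_ohd         # ~0.058

print(f"overall index: {overall_testing_intensity:.3f}")
print(f"relative index: {relative_testing_intensity:.3f}")
```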

  15. The Interaction Network Ontology-supported modeling and mining of complex interactions represented with multiple keywords in biomedical literature.

    PubMed

    Özgür, Arzucan; Hur, Junguk; He, Yongqun

    2016-01-01

    The Interaction Network Ontology (INO) logically represents biological interactions, pathways, and networks. INO has been demonstrated to be valuable in providing a set of structured ontological terms and associated keywords to support literature mining of gene-gene interactions from biomedical literature. However, previous work using INO focused on single keyword matching, while many interactions are represented with two or more interaction keywords used in combination. This paper reports our extension of INO to include combinatory patterns of two or more literature mining keywords co-existing in one sentence to represent specific INO interaction classes. Such keyword combinations and the related INO interaction type information can be obtained automatically via SPARQL queries, exported in Excel format, and used in INO-supported SciMiner, an in-house literature mining program. We studied the gene interaction sentences from the commonly used benchmark Learning Logic in Language (LLL) dataset and one internally generated vaccine-related dataset to identify and analyze interaction types containing multiple keywords. Patterns obtained from the dependency parse trees of the sentences were used to identify the interaction keywords that are related to each other and collectively represent an interaction type. The INO ontology currently has 575 terms, including 202 terms under the interaction branch. The relations between the INO interaction types and associated keywords are represented using the INO annotation relations 'has literature mining keywords' and 'has keyword dependency pattern'. The keyword dependency patterns were generated by running the Stanford Parser to obtain dependency relation types. Of the 107 interactions in the LLL dataset represented with two-keyword interaction types, 86 were identified using the direct dependency relations. The LLL dataset contained 34 gene regulation interaction types, each of which is associated with multiple keywords. A hierarchical display of these 34 interaction types and their ancestor terms in INO resulted in the identification of specific gene-gene interaction patterns from the LLL dataset. The phenomenon of having multi-keyword interaction types was also frequently observed in the vaccine dataset. By modeling and representing multiple textual keywords for interaction types, the extended INO enabled the identification of complex biological gene-gene interactions represented with multiple keywords.
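
    Retrieving keyword annotations from an ontology via SPARQL can be sketched with rdflib as below. The local file name and the annotation-property IRI are hypothetical placeholders; the real INO property IRIs should be taken from the ontology itself.

```python
# Sketch: querying an OWL/RDF ontology for interaction classes and their
# literature-mining keyword annotations via SPARQL. The file name and the
# annotation-property IRI below are hypothetical placeholders, not INO's
# actual IRIs; the RDF/XML serialization is also an assumption.
from rdflib import Graph

g = Graph()
g.parse("ino.owl", format="xml")  # hypothetical local copy of the ontology

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?term ?label ?keywords
WHERE {
  ?term rdfs:label ?label .
  ?term <http://example.org/has_literature_mining_keywords> ?keywords .
}
"""

for term, label, keywords in g.query(query):
    print(label, "->", keywords)
```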

  16. Representational Distance Learning for Deep Neural Networks

    PubMed Central

    McClure, Patrick; Kriegeskorte, Nikolaus

    2016-01-01

    Deep neural networks (DNNs) provide useful models of visual representational transformations. We present a method that enables a DNN (student) to learn from the internal representational spaces of a reference model (teacher), which could be another DNN or, in the future, a biological brain. Representational spaces of the student and the teacher are characterized by representational distance matrices (RDMs). We propose representational distance learning (RDL), a stochastic gradient descent method that drives the RDMs of the student to approximate the RDMs of the teacher. We demonstrate that RDL is competitive with other transfer learning techniques for two publicly available benchmark computer vision datasets (MNIST and CIFAR-100), while allowing for architectural differences between student and teacher. By pulling the student's RDMs toward those of the teacher, RDL significantly improved visual classification performance when compared to baseline networks that did not use transfer learning. In the future, RDL may enable combined supervised training of deep neural networks using task constraints (e.g., images and category labels) and constraints from brain-activity measurements, so as to build models that replicate the internal representational spaces of biological brains. PMID:28082889
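
    The core objects are representational distance matrices and a loss that pulls the student's RDM toward the teacher's. The sketch below uses squared Euclidean distances and a plain mean-squared-error objective as a simplified stand-in for the RDL procedure; the activation shapes are invented for illustration.

```python
# Sketch: representational distance matrices (RDMs) and a loss that matches the
# student's RDM to the teacher's. Squared Euclidean distance and plain MSE are
# illustrative simplifications of the method described above.
import torch

def rdm(activations):
    """activations: (n_stimuli, n_units) -> (n_stimuli, n_stimuli) distance matrix."""
    return torch.cdist(activations, activations, p=2) ** 2

def rdl_loss(student_acts, teacher_acts):
    # Architectures may differ; only the pairwise distance structure is matched.
    return torch.mean((rdm(student_acts) - rdm(teacher_acts)) ** 2)

teacher_acts = torch.randn(32, 512)              # e.g. activations of a reference model
student_acts = torch.randn(32, 128, requires_grad=True)
loss = rdl_loss(student_acts, teacher_acts)
loss.backward()
```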

  17. Representational Distance Learning for Deep Neural Networks.

    PubMed

    McClure, Patrick; Kriegeskorte, Nikolaus

    2016-01-01

    Deep neural networks (DNNs) provide useful models of visual representational transformations. We present a method that enables a DNN (student) to learn from the internal representational spaces of a reference model (teacher), which could be another DNN or, in the future, a biological brain. Representational spaces of the student and the teacher are characterized by representational distance matrices (RDMs). We propose representational distance learning (RDL), a stochastic gradient descent method that drives the RDMs of the student to approximate the RDMs of the teacher. We demonstrate that RDL is competitive with other transfer learning techniques for two publicly available benchmark computer vision datasets (MNIST and CIFAR-100), while allowing for architectural differences between student and teacher. By pulling the student's RDMs toward those of the teacher, RDL significantly improved visual classification performance when compared to baseline networks that did not use transfer learning. In the future, RDL may enable combined supervised training of deep neural networks using task constraints (e.g., images and category labels) and constraints from brain-activity measurements, so as to build models that replicate the internal representational spaces of biological brains.

  18. Self-organising mixture autoregressive model for non-stationary time series modelling.

    PubMed

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented, and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  19. Operation of remote mobile sensors for security of drinking water distribution systems.

    PubMed

    Perelman, Lina; Ostfeld, Avi

    2013-09-01

    The deployment of fixed online water quality sensors in water distribution systems has been recognized as one of the key components of contamination warning systems for securing public health. This study proposes to explore how the inclusion of mobile sensors for inline monitoring of various water quality parameters (e.g., residual chlorine, pH) can enhance water distribution system security. Mobile sensors equipped with sampling, sensing, data acquisition, wireless transmission and power generation systems are being designed, fabricated, and tested, and prototypes are expected to be released in the very near future. This study initiates the development of a theoretical framework for modeling mobile sensor movement in water distribution systems and integrating the sensory data collected from stationary and non-stationary sensor nodes to increase system security. The methodology is applied and demonstrated on two benchmark networks. Performance of different sensor network designs are compared for fixed and combined fixed and mobile sensor networks. Results indicate that complementing online sensor networks with inline monitoring can increase detection likelihood and decrease mean time to detection. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. GeNN: a code generation framework for accelerated brain simulations

    NASA Astrophysics Data System (ADS)

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.
