Sample records for integrating multiple sources

  1. Content Integration across Multiple Documents Reduces Memory for Sources

    ERIC Educational Resources Information Center

    Braasch, Jason L. G.; McCabe, Rebecca M.; Daniel, Frances

    2016-01-01

    The current experiments systematically examined semantic content integration as a mechanism for explaining source inattention and forgetting when reading-to-remember multiple texts. For all 3 experiments, degree of semantic overlap was manipulated amongst messages provided by various information sources. In Experiment 1, readers' source…

  2. Multiple-source spatial data fusion and integration research in the region unified planning management information system

    NASA Astrophysics Data System (ADS)

    Liu, Zhijun; Zhang, Liangpei; Liu, Zhenmin; Jiao, Hongbo; Chen, Liqun

    2008-12-01

    In order to manage the internal resources of the Gulf of Tonkin and to integrate multiple-source spatial data, a unified regional planning management information system must be established. Research on data fusion and integration is required because the system's establishment faces several difficulties: the various planning and project data come in different formats, data standards are not unified, the data are strongly time-dependent, and spatial references are inconsistent. In this article ArcGIS Engine is introduced as the development platform, and key technologies are investigated, such as multiple-source data transformation and fusion, fusion and integration of remote sensing data and DEMs, and the integration of planning and project data. Practice shows that the system significantly improves the working efficiency of the Guangxi Gulf of Tonkin Economic Zone Management Committee and remarkably promotes the planning and construction work of the economic zone.

  3. Method and system of integrating information from multiple sources

    DOEpatents

    Alford, Francine A [Livermore, CA]; Brinkerhoff, David L [Antioch, CA]

    2006-08-15

    A system and method of integrating information from multiple sources in a document centric application system. A plurality of application systems are connected through an object request broker to a central repository. The information may then be posted on a webpage. An example of an implementation of the method and system is an online procurement system.

  4. FIA: An Open Forensic Integration Architecture for Composing Digital Evidence

    NASA Astrophysics Data System (ADS)

    Raghavan, Sriram; Clark, Andrew; Mohay, George

    The analysis and value of digital evidence in an investigation has been a domain of discourse in the digital forensic community for several years. While many works have considered different approaches to modeling digital evidence, a comprehensive understanding of the process of merging different evidence items recovered during a forensic analysis is still a distant dream. With the advent of modern technologies, pro-active measures are integral to keeping abreast of all forms of cyber crimes and attacks. This paper motivates the need to formalize the process of analyzing digital evidence from multiple sources simultaneously. We present the Forensic Integration Architecture (FIA), which provides a framework for abstracting evidence-source and storage-format information from digital evidence, and we explore the concept of integrating evidence information from multiple sources. FIA identifies evidence information from multiple sources, enabling an investigator to build theories to reconstruct the past. FIA is hierarchically composed of multiple layers and adopts a technology-independent approach. It is also open and extensible, making it simple to adapt to technological changes. We present a case study using a hypothetical car theft case to demonstrate the concepts and illustrate the value the architecture brings to the field.

  5. An integral equation formulation for the diffraction from convex plates and polyhedra.

    PubMed

    Asheim, Andreas; Svensson, U Peter

    2013-06-01

    A formulation of the problem of scattering from obstacles with edges is presented. The formulation is based on decomposing the field into geometrical acoustics, first-order, and multiple-order edge diffraction components. An existing secondary-source model for edge diffraction from finite edges is extended to handle multiple diffraction of all orders. It is shown that the multiple-order diffraction component can be found via the solution to an integral equation formulated on pairs of edge points. This gives what can be called an edge source signal. In a subsequent step, this edge source signal is propagated to yield a multiple-order diffracted field, taking all diffraction orders into account. Numerical experiments demonstrate accurate response for frequencies down to 0 Hz for thin plates and a cube. No problems with irregular frequencies, as occur with the Kirchhoff-Helmholtz integral equation, are observed for this formulation. For the axisymmetric scattering from a circular disc, a highly effective symmetric formulation results, and its results agree with reference solutions across the entire frequency range.

  6. The performance of integrated transconductance amplifiers as variable current sources for bio-electric impedance measurements.

    PubMed

    Smith, D N

    1992-01-01

    Multiple-applied-current impedance measurement systems require a number of current sources that operate simultaneously at the same frequency and phase but at variable amplitudes. Investigations into the performance of some integrated operational transconductance amplifiers as variable current sources are described. Measurements of breakthrough, non-linearity and common-mode output levels for the LM13600, NE5517 and CA3280 were carried out. The effects of such errors on the overall performance and stability of multiple-current systems when driving floating loads are considered.

  7. Learning to Integrate Divergent Information Sources: The Interplay of Epistemic Cognition and Epistemic Metacognition

    ERIC Educational Resources Information Center

    Barzilai, Sarit; Ka'adan, Ibtisam

    2017-01-01

    Learning to integrate multiple information sources is vital for advancing learners' digital literacy. Previous studies have found that learners' epistemic metacognitive knowledge about the nature of knowledge and knowing is related to their strategic integration performance. The purpose of this study was to understand how these relations come into…

  8. Using Commercially available Tools for multi-faceted health assessment: Data Integration Lessons Learned

    PubMed Central

    Wilamowska, Katarzyna; Le, Thai; Demiris, George; Thompson, Hilaire

    2013-01-01

    Health monitoring data collected from multiple available intake devices provide a rich resource to support older adult health and wellness. Though large amounts of data can be collected, there is currently a lack of understanding of how to integrate these various data sources using commercially available products. This article describes an inexpensive approach to integrating data from multiple sources from a recently completed pilot project that assessed older adult wellness, and demonstrates challenges and benefits in pursuing data integration using commercially available products. The data in this project were sourced from a) electronically captured participant intake surveys, and existing commercial software output for b) vital signs and c) cognitive function. All the software used for data integration in this project was freeware and was chosen because of its ease of comprehension by novice database users. The methods and results of this approach provide a model for researchers with similar data integration needs to easily replicate this effort at a low cost. PMID:23728444

  9. Ensemble positive unlabeled learning for disease gene identification.

    PubMed

    Yang, Peng; Li, Xiaoli; Chua, Hon-Nian; Kwoh, Chee-Keong; Ng, See-Kiong

    2014-01-01

    An increasing number of genes have been experimentally confirmed in recent years as causative genes of various human diseases. This newly available knowledge can be exploited by machine learning methods to discover additional unknown genes that are likely to be associated with diseases. In particular, positive-unlabeled learning (PU learning) methods, which require only a positive training set P (confirmed disease genes) and an unlabeled set U (the unknown candidate genes) instead of a negative training set N, have been shown to be effective in uncovering new disease genes in this scenario. However, using only a single source of data for prediction is susceptible to bias, due to the incompleteness and noise in genomic data, and a single machine learning predictor is prone to bias caused by the inherent limitations of individual methods. In this paper, we propose an effective PU learning framework that integrates multiple biological data sources and an ensemble of powerful machine learning classifiers for disease gene identification. Our proposed method integrates data from multiple biological sources for training PU learning classifiers. A novel ensemble-based PU learning method, EPU, is then used to integrate multiple PU learning classifiers to achieve accurate and robust disease gene predictions. Our evaluation experiments across six disease groups showed that EPU achieved significantly better results compared with various state-of-the-art prediction methods as well as ensemble learning classifiers. Through integrating multiple biological data sources for training and the outputs of an ensemble of PU learning classifiers for prediction, we are able to minimize the potential bias and errors in individual data sources and machine learning algorithms, achieving more accurate and robust disease gene predictions. Going forward, our EPU method provides an effective framework for integrating additional biological and computational resources for better disease gene prediction.
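
    A minimal sketch of the bagging-style PU learning idea, in the spirit of (but not identical to) the EPU framework described above: random subsets of the unlabeled pool are treated as provisional negatives, a base classifier is trained against the positives, and out-of-bag scores are averaged. The arrays X, pos_idx and unl_idx are hypothetical placeholders.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def pu_bagging_scores(X, pos_idx, unl_idx, n_rounds=50, seed=0):
          """Average PU scores for unlabeled examples over bootstrap rounds."""
          rng = np.random.default_rng(seed)
          pos_idx, unl_idx = np.asarray(pos_idx), np.asarray(unl_idx)
          scores, counts = np.zeros(len(unl_idx)), np.zeros(len(unl_idx))
          for _ in range(n_rounds):
              # draw provisional negatives from the unlabeled pool
              neg = rng.choice(len(unl_idx), size=len(pos_idx), replace=False)
              train = np.concatenate([pos_idx, unl_idx[neg]])
              y = np.concatenate([np.ones(len(pos_idx)), np.zeros(len(neg))])
              clf = RandomForestClassifier(n_estimators=100).fit(X[train], y)
              # score only the out-of-bag unlabeled examples this round
              oob = np.setdiff1d(np.arange(len(unl_idx)), neg)
              scores[oob] += clf.predict_proba(X[unl_idx[oob]])[:, 1]
              counts[oob] += 1
          return scores / np.maximum(counts, 1)   # high scores = likely disease genes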

  10. Intelligent multi-sensor integrations

    NASA Technical Reports Server (NTRS)

    Volz, Richard A.; Jain, Ramesh; Weymouth, Terry

    1989-01-01

    Growth in the intelligence of space systems requires the use and integration of data from multiple sensors. Generic tools are being developed for extracting and integrating information obtained from multiple sources. The full spectrum of issues is addressed, ranging from data acquisition, to characterization of sensor data, to adaptive systems for utilizing the data. In particular, there are three major aspects to the project: multisensor processing, an adaptive approach to object recognition, and distributed sensor system integration.

  11. New opportunities of real-world data from clinical routine settings in life-cycle management of drugs: example of an integrative approach in multiple sclerosis.

    PubMed

    Rothenbacher, Dietrich; Capkun, Gorana; Uenal, Hatice; Tumani, Hayrettin; Geissbühler, Yvonne; Tilson, Hugh

    2015-05-01

    The assessment and demonstration of a positive benefit-risk balance of a drug is a life-long process and includes specific data from preclinical studies, clinical development and post-launch experience. However, new integrative approaches are needed to enrich evidence from clinical trials and sponsor-initiated observational studies with information from multiple additional sources, including registry information and other existing observational data and, more recently, health-related administrative claims and medical records databases. To illustrate the value of this approach, this paper exemplifies such a cross-package approach in the area of multiple sclerosis, also exploring possible analytic strategies when using these multiple sources of information.

  12. Analysis in Motion Initiative – Summarization Capability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arendt, Dustin; Pirrung, Meg; Jasper, Rob

    2017-06-22

    Analysts are tasked with integrating information from multiple data sources for important and timely decision making. What if sense making and overall situation awareness could be improved through visualization techniques? The Analysis in Motion initiative is advancing the ability to summarize and abstract multiple streams and static data sources over time.

  13. ToxPi GUI: An interactive visualization tool for transparent integration of data from diverse sources of evidence

    EPA Science Inventory

    Motivation: Scientists and regulators are often faced with complex decisions, where use of scarce resources must be prioritized using collections of diverse information. The Toxicological Prioritization Index (ToxPi™) was developed to enable integration of multiple sources of evi...

  14. An Applied Framework for Incorporating Multiple Sources of Uncertainty in Fisheries Stock Assessments.

    PubMed

    Scott, Finlay; Jardim, Ernesto; Millar, Colin P; Cerviño, Santiago

    2016-01-01

    Estimating fish stock status is very challenging given the many sources and high levels of uncertainty surrounding the biological processes (e.g. natural variability in the demographic rates), model selection (e.g. choosing growth or stock assessment models) and parameter estimation. Incorporating multiple sources of uncertainty in a stock assessment allows advice to better account for the risks associated with proposed management options, promoting decisions that are more robust to such uncertainty. However, a typical assessment only reports the model fit and variance of estimated parameters, thereby underreporting the overall uncertainty. Additionally, although multiple candidate models may be considered, only one is selected as the 'best' result, effectively rejecting the plausible assumptions behind the other models. We present an applied framework to integrate multiple sources of uncertainty in the stock assessment process. The first step is the generation and conditioning of a suite of stock assessment models that contain different assumptions about the stock and the fishery. The second step is the estimation of parameters, including fitting of the stock assessment models. The final step integrates across all of the results to reconcile the multi-model outcome. The framework is flexible enough to be tailored to particular stocks and fisheries and can draw on information from multiple sources to implement a broad variety of assumptions, making it applicable to stocks with varying levels of data availability. The Iberian hake stock in International Council for the Exploration of the Sea (ICES) Divisions VIIIc and IXa is used to demonstrate the framework, starting from length-based stock and index data. Process and model uncertainty are considered through the growth, natural mortality, fishing mortality, survey catchability and stock-recruitment relationship. Estimation uncertainty is included as part of the fitting process. Simple model averaging is used to integrate across the results and produce a single assessment that considers the multiple sources of uncertainty.
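
    A minimal sketch of the final integration step under simple model averaging, as the abstract describes; the per-model estimates and the optional Akaike weighting are illustrative placeholders, not the paper's conditioning procedure.

      import numpy as np

      def model_average(estimates, aic=None):
          """Combine per-model estimates (rows, e.g. SSB per year) into one result;
          equal weights by default, Akaike weights if AIC values are supplied."""
          estimates = np.asarray(estimates, dtype=float)
          if aic is None:
              w = np.full(len(estimates), 1.0 / len(estimates))
          else:
              delta = np.asarray(aic, dtype=float) - np.min(aic)   # Akaike weights
              w = np.exp(-0.5 * delta)
              w /= w.sum()
          return w @ estimates   # weighted average across candidate models

      print(model_average([[100, 90], [120, 80], [110, 95]], aic=[210.0, 212.5, 211.0]))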

  15. Understanding and Integrating Multiple Science Texts: Summary Tasks Are Sometimes Better than Argument Tasks

    ERIC Educational Resources Information Center

    Gil, Laura; Braten, Ivar; Vidal-Abarca, Eduardo; Stromso, Helge I.

    2010-01-01

    One of the major challenges of a knowledge society is that students as well as other citizens must learn to understand and integrate information from multiple textual sources. Still, task and reader characteristics that may facilitate or constrain such intertextual processes are not well understood by researchers. In this study, we compare the…

  16. Integrating data from multiple sources for data completeness in a web-based registry for pediatric renal transplantation--the CERTAIN Registry.

    PubMed

    Köster, Lennart; Krupka, Kai; Höcker, Britta; Rahmel, Axel; Samuel, Undine; Zanen, Wouter; Opelz, Gerhard; Süsal, Caner; Döhler, Bernd; Plotnicki, Lukasz; Kohl, Christian D; Knaup, Petra; Tönshoff, Burkhard

    2015-01-01

    Patient registries are a useful tool to measure outcomes and compare the effectiveness of therapies in a specific patient population. High data quality and completeness are therefore advantageous for registry analysis. Data integration from multiple sources may increase completeness of the data. The pediatric renal transplantation registry CERTAIN identified Eurotransplant (ET) and the Collaborative Transplant Study (CTS) as possible partners for data exchange. Import and export interfaces with CTS and ET were implemented. All parties reached their projected goals and benefit from the exchange.

  17. Integration of Schemas on the Pre-Design Level Using the KCPM-Approach

    NASA Astrophysics Data System (ADS)

    Vöhringer, Jürgen; Mayr, Heinrich C.

    Integration is a central research and operational issue in information system design and development. It can be conducted on the system, schema, view or data level. On the system level, integration deals with the progressive linking and testing of system components to merge their functional and technical characteristics and behavior into a comprehensive, interoperable system. Schema integration comprises the comparison and merging of two or more schemas, usually conceptual database schemas. The integration of data deals with merging the contents of multiple sources of related data. View integration is similar to schema integration but focuses on views and the queries defined on them rather than on schemas. All these types of integration have in common that two or more sources are first compared, in order to identify matches and mismatches as well as conflicts and inconsistencies, and then merged. The sources may stem from heterogeneous companies, organizational units or projects. Integration enables the reuse and combined use of source components.

  18. Combining data from multiple sources using the CUAHSI Hydrologic Information System

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Ames, D. P.; Horsburgh, J. S.; Goodall, J. L.

    2012-12-01

    The Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) has developed a Hydrologic Information System (HIS) to provide better access to data by enabling the publication, cataloging, discovery, retrieval, and analysis of hydrologic data using web services. The CUAHSI HIS is an Internet-based system composed of hydrologic databases and servers connected through web services, as well as software for data publication, discovery and access. The HIS metadata catalog lists close to 100 web services registered to provide data through this system, ranging from large federal agency data sets to experimental watersheds managed by university investigators. The system's flexibility in storing and enabling public access to similarly formatted data and metadata has created a community data resource from governmental and academic data that might otherwise remain private or be analyzed only in isolation. Comprehensive understanding of hydrology requires integration of this information from multiple sources. HydroDesktop is the client application developed as part of HIS to support data discovery and access through this system. HydroDesktop is founded on an open-source GIS client and has a plug-in architecture that has enabled the integration of modeling and analysis capability with the functionality for data discovery and access. Model integration is possible through a plug-in built on the OpenMI standard, and data visualization and analysis are supported by an R plug-in. This presentation will demonstrate HydroDesktop, showing how it provides an analysis environment within which data from multiple sources can be discovered, accessed and integrated.

  19. Methods for radiation detection and characterization using a multiple detector probe

    DOEpatents

    Akers, Douglas William; Roybal, Lyle Gene

    2014-11-04

    Apparatuses, methods, and systems relating to radiological characterization of environments are disclosed. Multi-detector probes with a plurality of detectors in a common housing may be used to substantially concurrently detect a plurality of different radiation activities and types. Multiple multi-detector probes may be used in a down-hole environment to substantially concurrently detect radioactive activity and contents of a buried waste container. Software may process, analyze, and integrate the data from the different multi-detector probes and the different detector types therein to provide source location and integrated analysis as to the source types and activity in the measured environment. Further, the integrated data may be used to compensate for differential density effects and the effects of radiation shielding materials within the volume being measured.

  20. Audiovisual Speech Integration in Pervasive Developmental Disorder: Evidence from Event-Related Potentials

    ERIC Educational Resources Information Center

    Magnee, Maurice J. C. M.; de Gelder, Beatrice; van Engeland, Herman; Kemner, Chantal

    2008-01-01

    Background: Integration of information from multiple sensory sources is an important prerequisite for successful social behavior, especially during face-to-face conversation. It has been suggested that communicative impairments among individuals with pervasive developmental disorders (PDD) might be caused by an inability to integrate synchronously…

  1. Modeling Environment for Total Risk-4M

    EPA Science Inventory

    MENTOR-4M uses an integrated, mechanistically consistent, source-to-dose modeling framework to quantify simultaneous exposures and doses of individuals and populations to multiple contaminants. It is an implementation of the MENTOR system for exposures to Multiple contaminants fr...

  2. Multisource Data Integration in Remote Sensing

    NASA Technical Reports Server (NTRS)

    Tilton, James C. (Editor)

    1991-01-01

    Papers presented at the workshop on Multisource Data Integration in Remote Sensing are compiled. The full text of these papers is included. New instruments and new sensors are discussed that can provide us with a large variety of new views of the real world. This huge amount of data has to be combined and integrated in a (computer) model of this world. Multiple sources may give complementary views of the world: consistent observations from different (and independent) data sources support each other and increase their credibility, while contradictions may be caused by noise, errors during processing, or misinterpretations, and can be identified as such. As a consequence, integration results are very reliable and represent a valid source of information for any geographical information system.

  3. Privacy preserving integration of health care data.

    PubMed

    Adam, Nabil; White, Tom; Shafiq, Basit; Vaidya, Jaideep; He, Xiaoyun

    2007-10-11

    For health care related research studies the medical records of patients may need to be retrieved from multiple sites with different regulations on the disclosure of health information. Given the sensitive nature of health care information, privacy is a major concern when patients' health care data is used for research purposes. In this paper, we propose an approach for integration and querying of health care data from multiple sources in a secure and privacy preserving manner.

  4. EnRICH: Extraction and Ranking using Integration and Criteria Heuristics.

    PubMed

    Zhang, Xia; Greenlee, M Heather West; Serb, Jeanne M

    2013-01-15

    High-throughput screening technologies enable biologists to generate candidate genes at a rate that, given time and cost constraints, far outpaces what can be studied by experimental approaches in the laboratory. Thus, it has become increasingly important to prioritize candidate genes for experiments. To accomplish this, researchers need to apply selection requirements based on their knowledge, which necessitates qualitative integration of heterogeneous data sources and filtration using multiple criteria. A similar approach can also be applied to putative candidate gene relationships. While automation can assist in this routine and imperative procedure, flexibility of data sources and criteria must not be sacrificed. A tool that can optimize the trade-off between automation and flexibility to simultaneously filter and qualitatively integrate data is needed to prioritize candidate genes and generate composite networks from heterogeneous data sources. We developed the Java application EnRICH (Extraction and Ranking using Integration and Criteria Heuristics) to address this need. Here we present a case study in which we used EnRICH to integrate and filter multiple candidate gene lists in order to identify potential retinal disease genes. As a result of this procedure, a candidate pool of several hundred genes was narrowed down to five candidate genes, of which four are confirmed retinal disease genes and one is associated with a retinal disease state. We developed a platform-independent tool that is able to qualitatively integrate multiple heterogeneous datasets and use different selection criteria to filter each of them, provided the datasets are tables that have distinct identifiers (required) and attributes (optional). With the flexibility to specify data sources and filtering criteria, EnRICH automatically prioritizes candidate genes or gene relationships for biologists based on their specific requirements. We also demonstrate that this tool can be effectively and easily used to apply highly specific user-defined criteria and can efficiently identify high-quality candidate genes from relatively sparse datasets.
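
    A toy pandas sketch of the filter-then-integrate idea (EnRICH itself is a Java application); the tables, criteria and thresholds below are hypothetical.

      import pandas as pd

      # Hypothetical candidate-gene tables from two sources, keyed by gene symbol.
      microarray = pd.DataFrame({"gene": ["A", "B", "C"], "fold_change": [2.5, 1.2, 3.1]})
      gwas = pd.DataFrame({"gene": ["B", "C", "D"], "p_value": [1e-6, 4e-4, 0.2]})

      # Apply per-source selection criteria, then integrate on the shared identifier.
      hits1 = microarray[microarray["fold_change"] >= 2.0]
      hits2 = gwas[gwas["p_value"] <= 1e-3]
      ranked = hits1.merge(hits2, on="gene").sort_values("p_value")
      print(ranked)   # genes passing both filters, ranked for follow-up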

  5. An application of the Aggregate Exposure Pathway (AEP) and Adverse Outcome Pathway (AOP) frameworks to mechanistically integrate data sources across multiple species into cumulative risk assessment (CRA)

    EPA Science Inventory

    Toxicologists use dose-response data from both in vivo and in vitro experiments to evaluate the effects of chemical contaminants on organisms. Cumulative risk assessments (CRAs) consider the effects of multiple stressors on multiple endpoints, and utilize environmental exposure ...

  6. Epistemic Beliefs and Their Relation to Multiple-Text Comprehension: A Norwegian Program of Research

    ERIC Educational Resources Information Center

    Ferguson, Leila E.

    2015-01-01

    Nowadays, students are required to use multiple information sources to complete tasks, both in and out of school. The beliefs that students hold about knowledge and knowing--their epistemic beliefs-- have been linked to successful integration of information across multiple texts. Framed by literature on epistemic belief research from an…

  7. Dempster-Shafer theory applied to regulatory decision process for selecting safer alternatives to toxic chemicals in consumer products.

    PubMed

    Park, Sung Jin; Ogunseitan, Oladele A; Lejano, Raul P

    2014-01-01

    Regulatory agencies often face a dilemma when regulating chemicals in consumer products, namely that of making decisions in the face of multiple, and sometimes conflicting, lines of evidence. We present an integrative approach for dealing with uncertainty and multiple pieces of evidence in toxics regulation. The integrative risk analytic framework is grounded in Dempster-Shafer (D-S) theory, which allows the analyst to combine multiple pieces of evidence and judgments from independent sources of information. We apply the integrative approach to the comparative risk assessment of bisphenol-A (BPA)-based polycarbonate and the functionally equivalent alternative, Eastman Tritan copolyester (ETC). Our results show that according to cumulative empirical evidence, the estimated probability of toxicity of BPA is 0.034, whereas the toxicity probability for ETC is 0.097. However, when we combine extant evidence with strength of confidence in the source (or expert judgment), we are guided by a richer interval measure, (Bel(t), Pl(t)). With the D-S derived measure, we arrive at various intervals, with the low-range estimate at (0.034, 0.250) for BPA and (0.097, 0.688) for ETC. These new measures allow a reasonable basis for comparison and a justifiable procedure for decision making that takes advantage of multiple sources of evidence. Through the application of D-S theory to toxicity risk assessment, we show how a multiplicity of scientific evidence can be converted into a unified risk estimate, and how this information can be effectively used for comparative assessments to select potentially less toxic alternative chemicals. © 2013 SETAC.
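
    A worked toy combination under Dempster's rule for a two-hypothesis frame {toxic, not toxic}; the mass assignments are hypothetical and are not the paper's BPA/ETC numbers.

      def combine(m1, m2):
          """Dempster's rule for masses over {"T"} (toxic), {"N"} (not), {"TN"} (ignorance)."""
          k = m1["T"] * m2["N"] + m1["N"] * m2["T"]   # conflicting mass
          m = {"T": (m1["T"] * m2["T"] + m1["T"] * m2["TN"] + m1["TN"] * m2["T"]) / (1 - k),
               "N": (m1["N"] * m2["N"] + m1["N"] * m2["TN"] + m1["TN"] * m2["N"]) / (1 - k)}
          m["TN"] = 1.0 - m["T"] - m["N"]
          return m

      # Two independent lines of evidence about the same chemical.
      m = combine({"T": 0.3, "N": 0.1, "TN": 0.6}, {"T": 0.2, "N": 0.3, "TN": 0.5})
      bel_t, pl_t = m["T"], m["T"] + m["TN"]   # belief/plausibility interval (Bel(t), Pl(t))
      print(round(bel_t, 3), round(pl_t, 3))   # -> 0.371 0.708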

  8. Towards establishing a human fecal contamination index in microbial source tracking

    EPA Science Inventory

    There have been significant advances in development of PCR-based methods to detect source associated DNA sequences (markers), but method evaluation has focused on performance with individual challenge samples. Little attention has been given to integration of multiple samples fro...

  9. A Versatile Integrated Ambient Ionization Source Platform.

    PubMed

    Ai, Wanpeng; Nie, Honggang; Song, Shiyao; Liu, Xiaoyun; Bai, Yu; Liu, Huwei

    2018-04-30

    The pursuit of high-throughput sample analysis from complex matrices demands the development of multiple ionization techniques with complementary specialties. A versatile integrated ambient ionization source (iAmIS) platform is proposed in this work, based on the idea of integrating multiple functions, enhancing the efficiency of current ionization techniques, extending the applications, and decreasing the cost of the instrument. The design of the iAmIS platform combines the flowing atmospheric pressure afterglow (FAPA) source/direct analysis in real time (DART), dielectric barrier discharge ionization (DBDI)/low-temperature plasma (LTP), desorption electrospray ionization (DESI), and the laser desorption (LD) technique. All individual and combined ionization modes can be easily attained by modulating parameters. In particular, the FAPA/DART&DESI mode can realize the detection of polar and nonpolar compounds at the same time with two different ionization mechanisms: proton transfer and charge transfer. The introduction of LD contributes to mass spectrometry imaging and surface-assisted laser desorption ionization (SALDI) under ambient conditions. Compared with other individual or multi-mode ion sources, the iAmIS platform provides the flexibility of choosing different ionization modes, broadens the scope of analyte detection, and facilitates the analysis of complex samples.

  10. A Versatile Integrated Ambient Ionization Source Platform

    NASA Astrophysics Data System (ADS)

    Ai, Wanpeng; Nie, Honggang; Song, Shiyao; Liu, Xiaoyun; Bai, Yu; Liu, Huwei

    2018-04-01

    The pursuit of high-throughput sample analysis from complex matrices demands the development of multiple ionization techniques with complementary specialties. A versatile integrated ambient ionization source (iAmIS) platform is proposed in this work, based on the idea of integrating multiple functions, enhancing the efficiency of current ionization techniques, extending the applications, and decreasing the cost of the instrument. The design of the iAmIS platform combines the flowing atmospheric pressure afterglow (FAPA) source/direct analysis in real time (DART), dielectric barrier discharge ionization (DBDI)/low-temperature plasma (LTP), desorption electrospray ionization (DESI), and the laser desorption (LD) technique. All individual and combined ionization modes can be easily attained by modulating parameters. In particular, the FAPA/DART&DESI mode can realize the detection of polar and nonpolar compounds at the same time with two different ionization mechanisms: proton transfer and charge transfer. The introduction of LD contributes to mass spectrometry imaging and surface-assisted laser desorption ionization (SALDI) under ambient conditions. Compared with other individual or multi-mode ion sources, the iAmIS platform provides the flexibility of choosing different ionization modes, broadens the scope of analyte detection, and facilitates the analysis of complex samples.

  11. Large-scale adverse effects related to treatment evidence standardization (LAERTES): an open scalable system for linking pharmacovigilance evidence sources with clinical data.

    PubMed

    2017-03-07

    Integrating multiple sources of pharmacovigilance evidence has the potential to advance the science of safety signal detection and evaluation. In this regard, there is a need for more research on how to integrate multiple disparate evidence sources while making the evidence computable from a knowledge representation perspective (i.e., semantic enrichment). Existing frameworks suggest promising outcomes for such integration but employ a rather limited number of sources. In particular, none have been specifically designed to support both regulatory and clinical use cases, nor have any been designed to add new resources and use cases through an open architecture. This paper discusses the architecture and functionality of a system called Large-scale Adverse Effects Related to Treatment Evidence Standardization (LAERTES) that aims to address these shortcomings. LAERTES provides a standardized, open, and scalable architecture for linking evidence sources relevant to the association of drugs with health outcomes of interest (HOIs). Standard terminologies are used to represent different entities; for example, drugs and HOIs are represented in RxNorm and the Systematized Nomenclature of Medicine Clinical Terms, respectively. At the time of this writing, six evidence sources have been loaded into the LAERTES evidence base and are accessible through a prototype evidence-exploration user interface and a set of Web application programming interface services. This system operates within a larger software stack provided by the Observational Health Data Sciences and Informatics clinical research framework, including the relational Common Data Model for observational patient data created by the Observational Medical Outcomes Partnership. Elements of the Linked Data paradigm facilitate the systematic and scalable integration of relevant evidence sources. The prototype LAERTES system provides useful functionality while creating opportunities for further research. Future work will involve improving the method for normalizing drug and HOI concepts across the integrated sources, aggregating evidence at different levels of a hierarchy of HOI concepts, and developing a more advanced user interface for drug-HOI investigations.

  12. A comparison of graph- and kernel-based -omics data integration algorithms for classifying complex traits.

    PubMed

    Yan, Kang K; Zhao, Hongyu; Pang, Herbert

    2017-12-06

    High-throughput sequencing data are widely collected and analyzed in the study of complex diseases in the quest to improve human health. Well-studied algorithms mostly deal with a single data source and cannot fully utilize the potential of these multi-omics data sources. In order to provide a holistic understanding of human health and diseases, it is necessary to integrate multiple data sources. Several algorithms have been proposed so far; however, a comprehensive comparison of data integration algorithms for classification of binary traits is currently lacking. In this paper, we focus on two common classes of integration algorithms: graph-based algorithms, which depict relationships by denoting subjects as nodes and relationships as edges, and kernel-based algorithms, which can generate a classifier in feature space. Our paper provides a comprehensive comparison of their performance in terms of various measurements of classification accuracy and computation time. Seven different integration algorithms, including graph-based semi-supervised learning, graph sharpening integration, composite association network, Bayesian network, semi-definite programming-support vector machine (SDP-SVM), relevance vector machine (RVM) and Ada-boost relevance vector machine, are compared and evaluated on hypertension and two cancer data sets in our study. In general, kernel-based algorithms create more complex models and require longer computation time, but they tend to perform better than graph-based algorithms; graph-based algorithms have the advantage of being computationally faster. The empirical results demonstrate that composite association network, relevance vector machine, and Ada-boost RVM are the better performers. We provide recommendations on how to choose an appropriate algorithm for integrating data from multiple sources.
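
    A minimal kernel-based sketch: per-source kernels are summed and fed to an SVM with a precomputed kernel. SDP-SVM additionally learns the kernel weights; the feature blocks here are random placeholders for two omics platforms.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.metrics.pairwise import rbf_kernel

      rng = np.random.default_rng(0)
      X_expr = rng.normal(size=(60, 100))   # e.g. expression features
      X_meth = rng.normal(size=(60, 50))    # e.g. methylation features
      y = rng.integers(0, 2, size=60)

      # Unweighted sum of the per-source kernels, then a kernel classifier.
      K = rbf_kernel(X_expr) + rbf_kernel(X_meth)
      clf = SVC(kernel="precomputed").fit(K, y)
      print(clf.score(K, y))   # training accuracy on this toy data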

  13. System and method for integrating and accessing multiple data sources within a data warehouse architecture

    DOEpatents

    Musick, Charles R [Castro Valley, CA]; Critchlow, Terence [Livermore, CA]; Ganesh, Madhaven [San Jose, CA]; Slezak, Tom [Livermore, CA]; Fidelis, Krzysztof [Brentwood, CA]

    2006-12-19

    A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.
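
    A toy Python analogue of the generated translation-library classes: the "get" method derives an attribute value when it is missing, as the patent describes. The attribute and derivation below are hypothetical; the actual generated code is not shown in the record.

      class SequenceRecord:
          def __init__(self, sequence, gc_content=None):
              self.sequence = sequence
              self._gc_content = gc_content

          def get_gc_content(self):
              if self._gc_content is None:   # derive the missing value on demand
                  s = self.sequence.upper()
                  self._gc_content = (s.count("G") + s.count("C")) / len(s)
              return self._gc_content

          def set_gc_content(self, value):
              self._gc_content = value

      print(SequenceRecord("ACGTGC").get_gc_content())   # 4 of 6 bases -> ~0.667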

  14. Field testing of two prototype air-source integrated heat pumps for net zero energy home (nZEH) application

    DOE PAGES

    Baxter, Van D.; Munk, Jeffrey D.

    2017-11-08

    Integrating multiple functions into a single system offers potential efficiency and cost reduction benefits. Oak Ridge National Laboratory (ORNL) and its partners have designed, developed, and tested two air-source heat pump designs that not only provide space heating and cooling, but also water heating, dehumidification, and ventilation functions. Details on the design, simulated performance, prototype field test, measured performance, and lessons learned are provided.

  15. Field testing of two prototype air-source integrated heat pumps for net zero energy home (nZEH) application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baxter, Van D.; Munk, Jeffrey D.

    Integrating multiple functions into a single system offers potential efficiency and cost reduction benefits. Oak Ridge National Laboratory (ORNL) and its partners have designed, developed, and tested two air-source heat pump designs that not only provide space heating and cooling, but also water heating, dehumidification, and ventilation functions. Details on the design, simulated performance, prototype field test, measured performance, and lessons learned are provided.

  16. NoSQL data model for semi-automatic integration of ethnomedicinal plant data from multiple sources.

    PubMed

    Ningthoujam, Sanjoy Singh; Choudhury, Manabendra Dutta; Potsangbam, Kumar Singh; Chetia, Pankaj; Nahar, Lutfun; Sarker, Satyajit D; Basar, Norazah; Das Talukdar, Anupam

    2014-01-01

    Sharing traditional knowledge with the scientific community could refine scientific approaches to phytochemical investigation and conservation of ethnomedicinal plants. As such, integration of traditional knowledge with scientific data using a single platform for sharing is greatly needed. However, ethnomedicinal data are available in heterogeneous formats, which depend on cultural aspects, survey methodology and focus of the study. Phytochemical and bioassay data are also available from many open sources in various standards and customised formats. To design a flexible data model that could integrate both primary and curated ethnomedicinal plant data from multiple sources. The current model is based on MongoDB, one of the Not only Structured Query Language (NoSQL) databases. Although it does not contain schema, modifications were made so that the model could incorporate both standard and customised ethnomedicinal plant data format from different sources. The model presented can integrate both primary and secondary data related to ethnomedicinal plants. Accommodation of disparate data was accomplished by a feature of this database that supported a different set of fields for each document. It also allowed storage of similar data having different properties. The model presented is scalable to a highly complex level with continuing maturation of the database, and is applicable for storing, retrieving and sharing ethnomedicinal plant data. It can also serve as a flexible alternative to a relational and normalised database. Copyright © 2014 John Wiley & Sons, Ltd.
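
    A minimal pymongo sketch of the schema-free storage the model relies on: documents with different field sets coexist in one collection. It assumes a MongoDB server on localhost; the database, collection and field names are hypothetical.

      from pymongo import MongoClient

      col = MongoClient()["ethnomed"]["plants"]

      # One survey record and one phytochemical record, with disparate fields.
      col.insert_many([
          {"species": "Cassia tora", "local_name": "Chakunda",
           "uses": ["skin disease"], "survey": {"district": "Cachar", "year": 2012}},
          {"species": "Cassia tora", "phytochemicals": ["emodin"],
           "bioassay": {"target": "antioxidant", "ic50_ug_ml": 41.0}},
      ])
      print(col.count_documents({"species": "Cassia tora"}))   # both are retrievable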

  17. Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study

    PubMed Central

    Hosseinyalamdary, Siavash

    2018-01-01

    Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations has remained a challenge. In this paper, we developed deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy. PMID:29695119
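
    A one-dimensional toy of adding a modelling step to the Kalman predict/update cycle: an additive IMU bias is adapted from the innovations. The paper learns a far richer error model; this sketch only mirrors the structural idea, and all signals and gains are hypothetical.

      import numpy as np

      def kf_with_bias_learning(z_gnss, u_imu, q=1e-3, r=1e-2, lr=0.05):
          x, p, bias = 0.0, 1.0, 0.0
          track = []
          for u, z in zip(u_imu, z_gnss):
              x, p = x + (u - bias), p + q        # predict with bias-corrected IMU
              k = p / (p + r)                     # Kalman gain
              innov = z - x
              x, p = x + k * innov, (1 - k) * p   # update with the GNSS fix
              bias -= lr * innov                  # modelling step: adapt the IMU bias
              track.append(x)
          return np.asarray(track)

      # Unit velocity, IMU increments corrupted by a constant +0.2 bias.
      est = kf_with_bias_learning(z_gnss=np.arange(1, 101), u_imu=np.ones(100) + 0.2)
      print(round(est[-1], 1))   # tracks the true position once the bias is learned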

  18. Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study.

    PubMed

    Hosseinyalamdary, Siavash

    2018-04-24

    Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations has remained a challenge. In this paper, we developed deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy.

  19. Graph-Based Weakly-Supervised Methods for Information Extraction & Integration

    ERIC Educational Resources Information Center

    Talukdar, Partha Pratim

    2010-01-01

    The variety and complexity of potentially-related data resources available for querying--webpages, databases, data warehouses--has been growing ever more rapidly. There is a growing need to pose integrative queries "across" multiple such sources, exploiting foreign keys and other means of interlinking data to merge information from diverse…

  20. Multiple fingerprinting analyses in quality control of Cassiae Semen polysaccharides.

    PubMed

    Cheng, Jing; He, Siyu; Wan, Qiang; Jing, Pu

    2018-03-01

    Quality control issues overshadow the potential health benefits of Cassiae Semen due to analytic limitations. In this study, multiple-fingerprint analysis integrated with several chemometrics methods was performed to assess the polysaccharide quality of Cassiae Semen harvested from different locations. FT-IR, HPLC, and GC fingerprints of polysaccharide extracts from the authentic source were established as standard profiles and applied to assess the quality of foreign sources. Analyses of FT-IR fingerprints of polysaccharide extracts using either Pearson correlation analysis or principal component analysis (PCA), or of HPLC fingerprints of partially hydrolyzed polysaccharides with PCA, distinguished the foreign sources from the authentic source. However, HPLC or GC fingerprints of completely hydrolyzed polysaccharides could not identify all foreign sources, and the methodology using GC is quite limited in determining the monosaccharide composition. This indicates that FT-IR and HPLC fingerprints of non- and partially hydrolyzed polysaccharides, respectively, accompanied by multiple chemometrics methods, might potentially be applied in detecting and differentiating sources of Cassiae Semen. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Searching Across the International Space Station Databases

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; McDermott, William J.; Smith, Ernest E.; Bell, David G.; Gurram, Mohana

    2007-01-01

    Data access in the enterprise generally requires combining data from different sources and different formats. It is thus advantageous to focus on the intersection of the knowledge across sources and domains; keeping irrelevant knowledge around only serves to make the integration more unwieldy and complicated than necessary. A context search over multiple domains is proposed in this paper, using context-sensitive queries to support disciplined manipulation of domain knowledge resources. The objective of a context search is to provide the capability for interrogating many domain knowledge resources, which are largely semantically disjoint. The search formally supports the tasks of selecting, combining, extending, specializing, and modifying components from a diverse set of domains. This paper demonstrates a new paradigm in the composition of information for enterprise applications. In particular, it discusses an approach to achieving data integration across multiple sources in a manner that does not require heavy investment in database and middleware maintenance. This lean approach to integration leads to cost-effectiveness and scalability of data integration with an underlying schemaless object-relational database management system. This highly scalable, information-on-demand framework, called NX-Search, is an implementation of an information system built on NETMARK. NETMARK is a flexible, high-throughput open database integration framework for managing, storing, and searching unstructured or semi-structured arbitrary XML and HTML, used widely at the National Aeronautics and Space Administration (NASA) and in industry.

  2. Bayesian multiple-source localization in an uncertain ocean environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2011-06-01

    This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America
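
    A sketch of the kind of closed-form maximum-likelihood strength estimate the paper exploits: for data d = S a + n at one frequency, the complex source strengths are the least-squares solution a = (S^H S)^{-1} S^H d. The replica matrix here is a random placeholder, not an acoustic propagation model.

      import numpy as np

      rng = np.random.default_rng(1)
      S = rng.normal(size=(32, 2)) + 1j * rng.normal(size=(32, 2))   # 32 sensors, 2 sources
      a_true = np.array([1.0 + 0.5j, 0.3 - 0.2j])                    # complex strengths
      d = S @ a_true + 0.01 * (rng.normal(size=32) + 1j * rng.normal(size=32))

      a_ml, *_ = np.linalg.lstsq(S, d, rcond=None)   # ML estimate under Gaussian noise
      print(np.round(a_ml, 3))                       # close to a_true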

  3. Issues in Humanoid Audition and Sound Source Localization by Active Audition

    NASA Astrophysics Data System (ADS)

    Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki

    In this paper, we present an active audition system which is implemented on the humanoid robot "SIG the humanoid". The audition system for highly intelligent humanoids localizes sound sources and recognizes auditory events in the auditory scene. Active audition reported in this paper enables SIG to track sources by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning its microphones orthogonal to a sound source and by capturing possible sound sources by vision. However, such active head movement inevitably creates motor noise. The system adaptively cancels motor noise using motor control signals and the cover acoustics. The experimental results demonstrate that active audition by integration of audition, vision, and motor control attains sound source tracking in a variety of conditions.

  4. ToxPi Graphical User Interface 2.0: Dynamic exploration, visualization, and sharing of integrated data models.

    PubMed

    Marvel, Skylar W; To, Kimberly; Grimm, Fabian A; Wright, Fred A; Rusyn, Ivan; Reif, David M

    2018-03-05

    Drawing integrated conclusions from diverse source data requires synthesis across multiple types of information. The ToxPi (Toxicological Prioritization Index) is an analytical framework that was developed to enable integration of multiple sources of evidence by transforming data into integrated, visual profiles. Methodological improvements have advanced ToxPi and expanded its applicability, necessitating a new, consolidated software platform to provide functionality while preserving flexibility for future updates. We detail the implementation of a new graphical user interface for ToxPi that provides interactive visualization, analysis, reporting, and portability. The interface is deployed as a stand-alone, platform-independent Java application, with a modular design to accommodate inclusion of future analytics. The new ToxPi interface introduces several features, from flexible data import formats (including legacy formats that permit backward compatibility) to similarity-based clustering to options for high-resolution graphical output. We present the new ToxPi interface for dynamic exploration, visualization, and sharing of integrated data models. The ToxPi interface is freely available as a single compressed download that includes the main Java executable, all libraries, example data files, and a complete user manual from http://toxpi.org.

  5. Cognitively Based Assessment of Research and Inquiry Skills: Defining a Key Practice in the English Language Arts. Research Report. ETS RR-15-35

    ERIC Educational Resources Information Center

    Sparks, Jesse R.; Deane, Paul

    2015-01-01

    Current educational standards call for students to engage in the skills of research and inquiry, with a focus on gathering evidence from multiple information sources, evaluating the credibility of those sources, and writing an integrated synthesis that cites evidence from those sources. Opportunities to build strong research skills are critical,…

  6. Data integration of structured and unstructured sources for assigning clinical codes to patient stays

    PubMed Central

    Luyckx, Kim; Luyten, Léon; Daelemans, Walter; Van den Bulcke, Tim

    2016-01-01

    Objective: Enormous amounts of healthcare data are becoming increasingly accessible through the large-scale adoption of electronic health records. In this work, structured and unstructured (textual) data are combined to assign clinical diagnostic and procedural codes (specifically ICD-9-CM) to patient stays. We investigate whether integrating these heterogeneous data types improves prediction strength compared to using the data types in isolation. Methods: Two separate data integration approaches were evaluated. Early data integration combines features of several sources within a single model, and late data integration learns a separate model per data source and combines these predictions with a meta-learner. This is evaluated on data sources and clinical codes from a broad set of medical specialties. Results: When compared with the best individual prediction source, late data integration leads to improvements in predictive power (eg, overall F-measure increased from 30.6% to 38.3% for International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnostic codes), while early data integration is less consistent. The predictive strength strongly differs between medical specialties, both for ICD-9-CM diagnostic and procedural codes. Discussion: Structured data provides complementary information to unstructured data (and vice versa) for predicting ICD-9-CM codes. This can be captured most effectively by the proposed late data integration approach. Conclusions: We demonstrated that models using multiple electronic health record data sources systematically outperform models using data sources in isolation in the task of predicting ICD-9-CM codes over a broad range of medical specialties. PMID:26316458
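
    A minimal sketch of the late data integration scheme the abstract describes: out-of-fold probabilities from one model per source become the inputs of a meta-learner. The feature blocks are random placeholders for the structured and textual data.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(0)
      X_struct = rng.normal(size=(200, 20))    # stand-in for structured features
      X_text = rng.normal(size=(200, 300))     # stand-in for text features
      y = rng.integers(0, 2, size=200)         # stand-in for one ICD-9-CM code

      # One model per source; out-of-fold predictions avoid leaking labels.
      meta_X = np.column_stack([
          cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                            cv=5, method="predict_proba")[:, 1]
          for X in (X_struct, X_text)
      ])
      meta = LogisticRegression().fit(meta_X, y)   # the meta-learner
      print(meta.score(meta_X, y))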

  7. Bayesian Integrated Microbial Forensics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarman, Kristin H.; Kreuzer-Martin, Helen W.; Wunschel, David S.

    2008-06-01

    In the aftermath of the 2001 anthrax letters, researchers have been exploring ways to predict the production environment of unknown source microorganisms. Different mass spectral techniques are being developed to characterize components of a microbe’s culture medium including water, carbon and nitrogen sources, metal ions added, and the presence of agar. Individually, each technique has the potential to identify one or two ingredients in a culture medium recipe. However, by integrating data from multiple mass spectral techniques, a more complete characterization is possible. We present a Bayesian statistical approach to integrated microbial forensics and illustrate its application on spores grown in different culture media.
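
    A toy illustration of Bayesian fusion of independent evidence: a uniform prior over three candidate culture media is updated with hypothetical likelihoods from two mass-spectral signatures (the numbers are made up, not the report's data).

      import numpy as np

      prior = np.array([1 / 3, 1 / 3, 1 / 3])      # three candidate media
      lik_carbon = np.array([0.7, 0.2, 0.1])       # P(carbon-source signal | medium)
      lik_agar = np.array([0.6, 0.5, 0.1])         # P(agar signal | medium)

      post = prior * lik_carbon * lik_agar         # independence across techniques
      post /= post.sum()
      print(np.round(post, 3))                     # posterior over the media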

  8. Integration and Beyond

    PubMed Central

    Stead, William W.; Miller, Randolph A.; Musen, Mark A.; Hersh, William R.

    2000-01-01

    The vision of integrating information—from a variety of sources, into the way people work, to improve decisions and process—is one of the cornerstones of biomedical informatics. Thoughts on how this vision might be realized have evolved as improvements in information and communication technologies, together with discoveries in biomedical informatics, have changed the art of the possible. This review identified three distinct generations of “integration” projects. First-generation projects create a database and use it for multiple purposes. Second-generation projects integrate by bringing information from various sources together through an enterprise information architecture. Third-generation projects inter-relate disparate but accessible information sources to provide the appearance of integration. The review suggests that the ideas developed in the earlier generations have not been supplanted by ideas from subsequent generations. Instead, the ideas represent a continuum of progress along the three dimensions of workflow, structure, and extraction. PMID:10730596

  9. Unipro UGENE: a unified bioinformatics toolkit.

    PubMed

    Okonechnikov, Konstantin; Golosova, Olga; Fursov, Mikhail

    2012-04-15

    Unipro UGENE is a multiplatform open-source software with the main goal of assisting molecular biologists without much expertise in bioinformatics to manage, analyze and visualize their data. UGENE integrates widely used bioinformatics tools within a common user interface. The toolkit supports multiple biological data formats and allows the retrieval of data from remote data sources. It provides visualization modules for biological objects such as annotated genome sequences, Next Generation Sequencing (NGS) assembly data, multiple sequence alignments, phylogenetic trees and 3D structures. Most of the integrated algorithms are tuned for maximum performance by the usage of multithreading and special processor instructions. UGENE includes a visual environment for creating reusable workflows that can be launched on local resources or in a High Performance Computing (HPC) environment. UGENE is written in C++ using the Qt framework. The built-in plugin system and structured UGENE API make it possible to extend the toolkit with new functionality. UGENE binaries are freely available for MS Windows, Linux and Mac OS X at http://ugene.unipro.ru/download.html. UGENE code is licensed under the GPLv2; the information about the code licensing and copyright of integrated tools can be found in the LICENSE.3rd_party file provided with the source bundle.

  10. Integrating open-source software applications to build molecular dynamics systems.

    PubMed

    Allen, Bruce M; Predecki, Paul K; Kumosa, Maciej

    2014-04-05

    Three open-source applications, NanoEngineer-1, packmol, and msi2lmp, are integrated using an open-source file format to quickly create molecular dynamics (MD) cells for simulation. The three software applications collectively make up the open-source software (OSS) suite known as MD Studio (MDS). The software is validated through software engineering practices and is verified through simulation of the diglycidyl ether of bisphenol-a and isophorone diamine (DGEBA/IPD) system. Multiple simulations are run using the MDS software to create MD cells, and the data generated are used to calculate density, bulk modulus, and glass transition temperature of the DGEBA/IPD system. Simulation results compare well with published experimental and numerical results. The MDS software prototype confirms that OSS applications can be analyzed against real-world research requirements and integrated to create a new capability. Copyright © 2014 Wiley Periodicals, Inc.

  11. Federated querying architecture with clinical & translational health IT application.

    PubMed

    Livne, Oren E; Schultz, N Dustin; Narus, Scott P

    2011-10-01

    We present a software architecture that federates data from multiple heterogeneous health informatics data sources owned by multiple organizations. The architecture builds upon state-of-the-art open-source Java and XML frameworks in innovative ways. It consists of (a) a federated query engine, which manages federated queries and result set aggregation via a patient identification service, and (b) data source facades, which translate the physical data models into a common model on-the-fly and handle large result set streaming. System modules are connected via reusable Apache Camel integration routes and deployed to an OSGi enterprise service bus. We present an application of our architecture that allows users to construct queries via the i2b2 web front-end and federates patient data from the University of Utah Enterprise Data Warehouse and the Utah Population Database. Our system can be easily adopted, extended and integrated with existing SOA Healthcare and HL7 frameworks such as i2b2 and caGrid.

  12. KaBOB: ontology-based semantic integration of biomedical databases.

    PubMed

    Livingston, Kevin M; Bada, Michael; Baumgartner, William A; Hunter, Lawrence E

    2015-04-23

    The ability to query many independent biological databases using a common ontology-based semantic model would facilitate deeper integration and more effective utilization of these diverse and rapidly growing resources. Despite ongoing work moving toward shared data formats and linked identifiers, significant problems persist in semantic data integration, specifically in establishing shared identity and shared meaning across heterogeneous biomedical data sources. We present five processes for semantic data integration that, when applied collectively, solve seven key problems. These processes include making explicit the differences between biomedical concepts and database records, aggregating sets of identifiers denoting the same biomedical concepts across data sources, and using declaratively represented forward-chaining rules to take information that is variably represented in source databases and integrate it into a consistent biomedical representation. We demonstrate these processes and solutions by presenting KaBOB (the Knowledge Base Of Biomedicine), a knowledge base of semantically integrated data from 18 prominent biomedical databases using common representations grounded in Open Biomedical Ontologies. An instance of KaBOB with data about humans and seven major model organisms can be built using on the order of 500 million RDF triples. All source code for building KaBOB is available under an open-source license. KaBOB is an integrated knowledge base of biomedical data representationally based in prominent, actively maintained Open Biomedical Ontologies, thus enabling queries of the underlying data in terms of biomedical concepts (e.g., genes and gene products, interactions and processes) rather than features of source-specific data schemas or file formats. KaBOB resolves many of the issues that routinely plague biomedical researchers intending to work with data from multiple data sources and provides a platform for ongoing data integration and development and for formal reasoning over a wealth of integrated biomedical data.

  13. HRATIS first year evaluation report

    DOT National Transportation Integrated Search

    2001-09-01

    The ITS integration project, the Hampton Roads Advanced Traveler Information System (HRATIS), is a public-private partnership. The service collects information from multiple sources, fuses the data elements, and distributes the information through va...

  14. Micropropagation and in vitro flowering of Rauvolfia tetraphylla; a potent source of anti-hypertension drugs.

    PubMed

    Sarma, D; Sarma, S; Baruah, A

    1999-04-01

    A simple protocol for in vitro mass multiplication of Rauvolfia tetraphylla (Apocynaceae) has been developed. The endophytic microflora was controlled by adopting integrated measures. Multiple shoot development was achieved on MS + Kin (0.1-0.2 mg/l) + BAP (0.4-0.5 mg/l) media. Rooting from in vitro shoots occurred on NAA containing media. In vitro flowering was induced in shoot multiplication media.

  15. Developing CCUS system models to handle the complexity of multiple sources and sinks: An update on Tasks 5.3 and 5.4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Middleton, Richard Stephen

    2017-05-22

    This presentation is part of the US-China Clean Coal project and describes the impact of power plant cycling, techno-economic modeling of combined IGCC and CCS, integrated capacity generation decision making for power utilities, and a new decision support tool for integrated assessment of CCUS.

  16. Vehicle-to-Grid Integration | Energy Systems Integration Facility | NREL

    Science.gov Websites

    NREL's Energy Systems Integration Facility works with automakers, charging station manufacturers, and utilities on vehicle-to-grid integration with multiple energy sources. Listed capabilities include an electrolyzer stack test bed (up to 1 megawatt) and multiple hydrogen compression and storage stages.

  17. Integration and Optimization of Alternative Sources of Energy in a Remote Region

    NASA Astrophysics Data System (ADS)

    Berberi, Pellumb; Inodnorjani, Spiro; Aleti, Riza

    2010-01-01

    In a remote coastal region, the supply of energy from the national grid is insufficient for sustainable development. Integration and optimization of local alternative renewable energy sources is one possible solution to the problem. In this paper we study the energetic potential of local sources of renewable energy (water, solar, wind and biomass). A bottom-up energy system optimization model is proposed in order to support planning policies for promoting the use of renewable energy sources. Software based on the analysis of multiple factors and constraints for optimizing energy flow is proposed, which provides detailed information on the exploitation of each source of energy, power and heat generation, GHG emissions and end-use sectors. Economic analysis shows that with existing technologies both stand-alone and regional facilities may be feasible. Improving specific legislation will foster investments from central or local governments as well as from individuals, private companies and small families. The study is carried out in the framework of the FP6 project "Integrated Renewable Energy System."

  18. Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework

    PubMed Central

    Talluto, Matthew V.; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C. Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A.; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique

    2016-01-01

    Aim Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Location Eastern North America (as an example). Methods Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. Results For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. Main conclusions We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making. PMID:27499698
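
    The uncertainty behavior the abstract describes (reduced where sub-models agree, inflated where they diverge) can be mimicked with a simple precision-weighted pooling that adds a between-model variance term. This is a toy sketch, not the paper's hierarchical Bayesian metamodel; all numbers are invented.

```python
# Toy sketch (not the paper's metamodel): combine two source models' logit-scale
# predictions of presence probability, letting disagreement inflate uncertainty.
import numpy as np

# Hypothetical per-cell predictions (logit of presence) and standard errors
# from a correlative SDM and a physiological model.
mu = np.array([[0.9, 1.1],     # cell 1: models agree
               [0.2, 2.5]])    # cell 2: models diverge
se = np.array([[0.4, 0.5],
               [0.4, 0.5]])

w = 1.0 / se**2                                   # precision weights
pooled_mean = (w * mu).sum(axis=1) / w.sum(axis=1)
within_var = 1.0 / w.sum(axis=1)                  # uncertainty if models agreed
between_var = (w * (mu - pooled_mean[:, None])**2).sum(axis=1) / w.sum(axis=1)
total_sd = np.sqrt(within_var + between_var)      # disagreement inflates spread

for i, (m, s) in enumerate(zip(pooled_mean, total_sd), start=1):
    prob = 1 / (1 + np.exp(-m))                   # back to probability scale
    print(f"cell {i}: consensus presence prob ~ {prob:.2f}, sd(logit) = {s:.2f}")
```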

  19. Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework.

    PubMed

    Talluto, Matthew V; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique

    2016-02-01

    Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Eastern North America (as an example). Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making.

  20. Lean Middleware

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Bell, David G.; Ashish, Naveen

    2005-01-01

    This paper describes an approach to achieving data integration across multiple sources in an enterprise, in a manner that is cost efficient and economically scalable. We present an approach that does not rely on major investment in structured, heavy-weight database systems for data storage or heavy-weight middleware responsible for integrated access. The approach is centered around pushing any required data structure and semantics functionality (schema) to application clients, as well as pushing integration specification and functionality to clients where integration can be performed on-the-fly.

  1. Fusion or confusion: knowledge or nonsense?

    NASA Astrophysics Data System (ADS)

    Rothman, Peter L.; Denton, Richard V.

    1991-08-01

    The terms 'data fusion,' 'sensor fusion,' 'multi-sensor integration,' and 'multi-source integration' have been used widely in the technical literature to refer to a variety of techniques, technologies, systems, and applications which employ and/or combine data derived from multiple information sources. Applications of data fusion range from real-time fusion of sensor information for the navigation of mobile robots to the off-line fusion of both human and technical strategic intelligence data. The Department of Defense Critical Technologies Plan lists data fusion in the highest priority group of critical technologies, but just what is data fusion? The DoD Critical Technologies Plan states that data fusion involves 'the acquisition, integration, filtering, correlation, and synthesis of useful data from diverse sources for the purposes of situation/environment assessment, planning, detecting, verifying, diagnosing problems, aiding tactical and strategic decisions, and improving system performance and utility.' More simply stated, sensor fusion refers to the combination of data from multiple sources to provide enhanced information quality and availability over that which is available from any individual source alone. This paper presents a survey of the state-of-the-art in data fusion technologies, system components, and applications. A set of characteristics which can be utilized to classify data fusion systems is presented. Additionally, a unifying mathematical and conceptual framework within which to understand and organize fusion technologies is described. A discussion of often overlooked issues in the development of sensor fusion systems is also presented.
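
    The survey's working definition (combining data from multiple sources to improve information quality over any single source) is easy to demonstrate numerically. The sketch below fuses two noisy sensors by inverse-variance weighting, the minimum-variance linear rule; the sensor names, noise levels, and truth value are invented.

```python
# Minimal numeric illustration of the core claim: fusing two noisy
# estimates yields better quality than either alone (values are invented).
import numpy as np

rng = np.random.default_rng(1)
truth = 10.0
sigma_radar, sigma_ir = 2.0, 1.0                 # per-sensor noise levels

z_radar = truth + rng.normal(0, sigma_radar, 10000)
z_ir = truth + rng.normal(0, sigma_ir, 10000)

# Inverse-variance weighting: the minimum-variance linear fusion rule.
w_radar = 1 / sigma_radar**2
w_ir = 1 / sigma_ir**2
fused = (w_radar * z_radar + w_ir * z_ir) / (w_radar + w_ir)

print("radar RMSE:", np.sqrt(np.mean((z_radar - truth)**2)))
print("IR    RMSE:", np.sqrt(np.mean((z_ir - truth)**2)))
print("fused RMSE:", np.sqrt(np.mean((fused - truth)**2)))   # smallest of the three
```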

  2. Semi-automatic Data Integration using Karma

    NASA Astrophysics Data System (ADS)

    Garijo, D.; Kejriwal, M.; Pierce, S. A.; Houser, P. I. Q.; Peckham, S. D.; Stanko, Z.; Hardesty Lewis, D.; Gil, Y.; Pennington, D. D.; Knoblock, C.

    2017-12-01

    Data integration applications are ubiquitous in scientific disciplines. A state-of-the-art data integration system accepts both a set of data sources and a target ontology as input, and semi-automatically maps the data sources in terms of concepts and relationships in the target ontology. Mappings can be both complex and highly domain-specific. Once such a semantic model, expressing the mapping using community-wide standard, is acquired, the source data can be stored in a single repository or database using the semantics of the target ontology. However, acquiring the mapping is a labor-prone process, and state-of-the-art artificial intelligence systems are unable to fully automate the process using heuristics and algorithms alone. Instead, a more realistic goal is to develop adaptive tools that minimize user feedback (e.g., by offering good mapping recommendations), while at the same time making it intuitive and easy for the user to both correct errors and to define complex mappings. We present Karma, a data integration system that has been developed over multiple years in the information integration group at the Information Sciences Institute, a research institute at the University of Southern California's Viterbi School of Engineering. Karma is a state-of-the-art data integration tool that supports an interactive graphical user interface, and has been featured in multiple domains over the last five years, including geospatial, biological, humanities and bibliographic applications. Karma allows a user to import their own ontology and datasets using widely used formats such as RDF, XML, CSV and JSON, can be set up either locally or on a server, supports a native backend database for prototyping queries, and can even be seamlessly integrated into external computational pipelines, including those ingesting data via streaming data sources, Web APIs and SQL databases. We illustrate a Karma workflow at a conceptual level, along with a live demo, and show use cases of Karma specifically for the geosciences. In particular, we show how Karma can be used intuitively to obtain the mapping model between case study data sources and a publicly available and expressive target ontology that has been designed to capture a broad set of concepts in geoscience with standardized, easily searchable names.

  3. Intelligent Information Fusion in the Aviation Domain: A Semantic-Web based Approach

    NASA Technical Reports Server (NTRS)

    Ashish, Naveen; Goforth, Andre

    2005-01-01

    Information fusion from multiple sources is a critical requirement for System Wide Information Management in the National Airspace (NAS). NASA and the FAA envision creating an "integrated pool" of information originally coming from different sources, which users, intelligent agents and NAS decision support tools can tap into. In this paper we present the results of our initial investigations into the requirements and prototype development of such an integrated information pool for the NAS. We have attempted to ascertain key requirements for such an integrated pool based on a survey of DSS tools that will benefit from this integrated pool. We then advocate key technologies from computer science research areas such as the semantic web, information integration, and intelligent agents that we believe are well suited to achieving the envisioned system wide information management capabilities.

  4. Acoustic 3D modeling by the method of integral equations

    NASA Astrophysics Data System (ADS)

    Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.

    2018-02-01

    This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. Tolerable memory consumption and numerical complexity were achieved by applying an iterative solver, accompanied by an effective matrix-vector multiplication operation based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against the FD time-domain solution. It was demonstrated that the method could accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system equations, and also for parallelizing across multiple sources. The practical examples and efficiency tests are presented as well.
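
    The FFT-accelerated matrix-vector product at the heart of this kind of IE solver exploits the translation invariance of the kernel: the dense product becomes a convolution. A 1D toy version, with an invented kernel, is sketched below.

```python
# Toy 1D demonstration of the FFT-accelerated matrix-vector product used by
# integral-equation (IE) solvers: for a translation-invariant kernel the dense
# O(N^2) product equals a convolution computable in O(N log N). Kernel invented.
import numpy as np

n = 512
x = np.random.default_rng(2).normal(size=n)            # source/contrast vector
offsets = np.arange(-(n - 1), n)                       # all pairwise offsets i - j
kernel = np.exp(-np.abs(offsets) / 50.0)               # toy translation-invariant kernel

# Dense reference: A[i, j] = kernel(i - j), then y = A x at O(N^2) cost.
A = kernel[(np.arange(n)[:, None] - np.arange(n)[None, :]) + n - 1]
y_dense = A @ x

# FFT route: zero-pad so circular convolution equals linear convolution.
m = 3 * n - 2                                          # full linear-convolution length
conv = np.fft.ifft(np.fft.fft(kernel, m) * np.fft.fft(x, m)).real
y_fft = conv[n - 1 : 2 * n - 1]                        # rows i = 0 .. n-1

print("max abs difference:", np.max(np.abs(y_dense - y_fft)))  # round-off level
```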

  5. ZnO-based multiple channel and multiple gate FinMOSFETs

    NASA Astrophysics Data System (ADS)

    Lee, Ching-Ting; Huang, Hung-Lin; Tseng, Chun-Yen; Lee, Hsin-Ying

    2016-02-01

    In recent years, zinc oxide (ZnO)-based metal-oxide-semiconductor field-effect transistors (MOSFETs) have attracted much attention because ZnO-based semiconductors possess several advantages, including large exciton binding energy, nontoxicity, biocompatibility, low material cost, and a wide direct bandgap. Moreover, the ZnO-based MOSFET is one of the most promising devices, owing to its applications in microwave power amplifiers, logic circuits, large-scale integrated circuits, and logic swing. In this study, to enhance the performance of ZnO-based MOSFETs, ZnO-based multiple-channel and multiple-gate FinMOSFETs were fabricated using a simple laser interference photolithography method and a self-aligned photolithography method. The multiple-channel structure provided additional sidewall depletion-width control to improve channel controllability, because the channel sidewall portions were surrounded by the gate electrode. Furthermore, the multiple-gate structure had a shorter distance between source and gate and a shorter gate length between the two gates, enhancing gate operating performance. In addition, the shorter source-gate distance could enhance the electron velocity in the channel fin structure. In this work, ninety-one channels and four gates were used in the FinMOSFETs. Consequently, the drain-source saturation current (IDSS) and maximum transconductance (gm) of the ZnO-based multiple-channel and multiple-gate FinFETs, operated at a drain-source voltage (VDS) of 10 V and a gate-source voltage (VGS) of 0 V, were improved from 11.5 mA/mm to 13.7 mA/mm and from 4.1 mS/mm to 6.9 mS/mm, respectively, in comparison with the conventional ZnO-based single-channel and single-gate MOSFETs.

  6. Transparent mediation-based access to multiple yeast data sources using an ontology driven interface.

    PubMed

    Briache, Abdelaali; Marrakchi, Kamar; Kerzazi, Amine; Navas-Delgado, Ismael; Rossi Hassani, Badr D; Lairini, Khalid; Aldana-Montes, José F

    2012-01-25

    Saccharomyces cerevisiae is recognized as a model system representing a simple eukaryote whose genome can be easily manipulated. Information solicited by scientists on its biological entities (Proteins, Genes, RNAs...) is scattered within several data sources like SGD, Yeastract, CYGD-MIPS, BioGrid, PhosphoGrid, etc. Because of the heterogeneity of these sources, querying them separately and then manually combining the returned results is a complex and time-consuming task for biologists, most of whom are not bioinformatics experts. It also reduces and limits the use that can be made of the available data. To provide transparent and simultaneous access to yeast sources, we have developed YeastMed: an XML and mediator-based system. In this paper, we present our approach in developing this system, which takes advantage of SB-KOM to perform the needed query transformation and a set of Data Services to reach the integrated data sources. The system is composed of a set of modules that depend heavily on XML and Semantic Web technologies. User queries are expressed in terms of a domain ontology through a simple form-based web interface. YeastMed is the first mediation-based system specifically for integrating yeast data sources. It was conceived mainly to help biologists find relevant data simultaneously from multiple data sources. It has a biologist-friendly interface that is easy to use. The system is available at http://www.khaos.uma.es/yeastmed/.

  7. Inverse random source scattering for the Helmholtz equation in inhomogeneous media

    NASA Astrophysics Data System (ADS)

    Li, Ming; Chen, Chuchu; Li, Peijun

    2018-01-01

    This paper is concerned with an inverse random source scattering problem in an inhomogeneous background medium. The wave propagation is modeled by the stochastic Helmholtz equation with the source driven by additive white noise. The goal is to reconstruct the statistical properties of the random source such as the mean and variance from the boundary measurement of the radiated random wave field at multiple frequencies. Both the direct and inverse problems are considered. We show that the direct problem has a unique mild solution by a constructive proof. For the inverse problem, we derive Fredholm integral equations, which connect the boundary measurement of the radiated wave field with the unknown source function. A regularized block Kaczmarz method is developed to solve the ill-posed integral equations. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
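
    A regularized Kaczmarz iteration of the general kind mentioned above can be sketched in a few lines: sweep the rows of the discretized integral operator and apply damped projections. The kernel, source, noise level, and damping parameter below are invented, and this is a plain (not block) variant rather than the authors' method.

```python
# Sketch of a damped (regularized) Kaczmarz iteration for a discretized
# first-kind Fredholm equation g = K f. All problem data are invented.
import numpy as np

n = 100
t = np.linspace(0, 1, n)
K = np.exp(-25 * (t[:, None] - t[None, :])**2) / n       # discretized smoothing kernel
f_true = np.sin(2 * np.pi * t) + 0.5                     # unknown source function
g = K @ f_true + 1e-4 * np.random.default_rng(3).normal(size=n)  # noisy data

f = np.zeros(n)
lam = 1e-3                                               # damping/regularization
for sweep in range(50):
    for i in range(n):                                   # cycle through the rows
        k_i = K[i]
        # Damped row projection; lam keeps noisy rows from dominating.
        f += k_i * (g[i] - k_i @ f) / (k_i @ k_i + lam)

print("relative error:", np.linalg.norm(f - f_true) / np.linalg.norm(f_true))
```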

  8. Semantic integration of data on transcriptional regulation

    PubMed Central

    Baitaluk, Michael; Ponomarenko, Julia

    2010-01-01

    Motivation: Experimental and predicted data concerning gene transcriptional regulation are distributed among many heterogeneous sources. However, there are no resources to integrate these data automatically or to provide a ‘one-stop shop’ experience for users seeking information essential for deciphering and modeling gene regulatory networks. Results: IntegromeDB, a semantic graph-based ‘deep-web’ data integration system that automatically captures, integrates and manages publicly available data concerning transcriptional regulation, as well as other relevant biological information, is proposed in this article. The problems associated with data integration are addressed by ontology-driven data mapping, multiple data annotation and heterogeneous data querying, also enabling integration of the user's data. IntegromeDB integrates over 100 experimental and computational data sources relating to genomics, transcriptomics, genetics, and functional and interaction data concerning gene transcriptional regulation in eukaryotes and prokaryotes. Availability: IntegromeDB is accessible through the integrated research environment BiologicalNetworks at http://www.BiologicalNetworks.org. Contact: baitaluk@sdsc.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20427517

  9. Semantic integration of data on transcriptional regulation.

    PubMed

    Baitaluk, Michael; Ponomarenko, Julia

    2010-07-01

    Experimental and predicted data concerning gene transcriptional regulation are distributed among many heterogeneous sources. However, there are no resources to integrate these data automatically or to provide a 'one-stop shop' experience for users seeking information essential for deciphering and modeling gene regulatory networks. IntegromeDB, a semantic graph-based 'deep-web' data integration system that automatically captures, integrates and manages publicly available data concerning transcriptional regulation, as well as other relevant biological information, is proposed in this article. The problems associated with data integration are addressed by ontology-driven data mapping, multiple data annotation and heterogeneous data querying, also enabling integration of the user's data. IntegromeDB integrates over 100 experimental and computational data sources relating to genomics, transcriptomics, genetics, and functional and interaction data concerning gene transcriptional regulation in eukaryotes and prokaryotes. IntegromeDB is accessible through the integrated research environment BiologicalNetworks at http://www.BiologicalNetworks.org. Contact: baitaluk@sdsc.edu. Supplementary data are available at Bioinformatics online.

  10. Sensor Fusion Techniques for Phased-Array Eddy Current and Phased-Array Ultrasound Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arrowood, Lloyd F.

    Sensor (or Data) fusion is the process of integrating multiple data sources to produce more consistent, accurate and comprehensive information than is provided by a single data source. Sensor fusion may also be used to combine multiple signals from a single modality to improve the performance of a particular inspection technique. Industrial nondestructive testing may utilize multiple sensors to acquire inspection data depending upon the object under inspection and the anticipated types of defects that can be identified. Sensor fusion can be performed at various levels of signal abstraction with each having its strengths and weaknesses. A multimodal data fusion strategy first proposed by Heideklang and Shokouhi that combines spatially scattered detection locations to improve detection performance of surface-breaking and near-surface cracks in ferromagnetic metals is shown using a surface inspection example and is then extended for volumetric inspections. Utilizing data acquired from an Olympus Omniscan MX2 from both phased array eddy current and ultrasound probes on test phantoms, single and multilevel fusion techniques are employed to integrate signals from the two modalities. Preliminary results demonstrate how confidence in defect identification and interpretation benefit from sensor fusion techniques. Lastly, techniques for integrating data into radiographic and volumetric imagery from computed tomography are described and results are presented.

  11. Integration of Multiple Data Sources to Simulate the Dynamics of Land Systems

    PubMed Central

    Deng, Xiangzheng; Su, Hongbo; Zhan, Jinyan

    2008-01-01

    In this paper we present and develop a new model, which we have called Dynamics of Land Systems (DLS). The DLS model is capable of integrating multiple data sources to simulate the dynamics of a land system. Three main modules are incorporated in DLS: a spatial regression module, which explores the relationships between land uses and influencing factors; a scenario analysis module, which projects the land uses of a region during the simulation period; and a spatial disaggregation module, which allocates land-use changes from the regional level to disaggregated grid cells. A case study on Taips County in North China is incorporated in this paper to test the functionality of DLS. The simulation results under the baseline, economic priority and environmental scenarios help to understand the land system dynamics and project near-future land-use trajectories of a region, in order to focus management decisions on land uses and land use planning. PMID:27879726
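
    The disaggregation step can be pictured as proportional allocation: a regional change total is spread over grid cells according to a suitability score produced by the regression module. The sketch below is illustrative only; the suitability values and change total are invented, not DLS outputs.

```python
# Illustrative-only version of the spatial disaggregation idea: allocate a
# regional land-use change total to grid cells in proportion to suitability
# predicted by a (here, pre-fitted) regression on driving factors.
import numpy as np

rng = np.random.default_rng(9)
n_cells = 1000
suitability = rng.random(n_cells)            # stand-in for regression output
regional_change_ha = 5000.0                  # scenario-level change to allocate

weights = suitability / suitability.sum()
cell_change = regional_change_ha * weights   # disaggregate to grid cells

print("allocated total:", cell_change.sum())           # equals the regional total
print("top cell gets:", cell_change.max().round(2), "ha")
```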

  12. Representation and Integration of Scientific Information

    NASA Technical Reports Server (NTRS)

    1999-01-01

    The objective of this Joint Research Interchange with NASA-Ames was to investigate how the Tsimmis technology could be used to represent and integrate scientific information. The funding allowed us to work with researchers within NAS at the NASA Ames Research Center, to understand their information needs, and to work with them on integration strategies. Most organizations have a need to access and integrate information from multiple, disparate information sources that may include both structured as well as semi-structured information. At Stanford we have been working on an information integration project called Tsimmis, supported by DARPA. The main goal of the Tsimmis project is to allow a decision maker to find information of interest from such sources, fuse it, and process it (e.g., summarize it, visualize it, discover trends). Another important goal is the easy incorporation of new sources, as well as the ability to deal with sources whose structure or services evolve. During the Interchange we had research meetings approximately every month or two. The participants from NASA included Michael Cox and Peter Vanderbilt. The Stanford PI, various students, and Stanford staff members also participated. NASA researchers also participated in some of our regular Tsimmis meetings. As planned, our meetings discussed problems and solutions to various information integration problems.

  13. Accounting for multiple sources of uncertainty in impact assessments: The example of the BRACE study

    NASA Astrophysics Data System (ADS)

    O'Neill, B. C.

    2015-12-01

    Assessing climate change impacts often requires the use of multiple scenarios, types of models, and data sources, leading to a large number of potential sources of uncertainty. For example, a single study might require a choice of a forcing scenario, climate model, bias correction and/or downscaling method, societal development scenario, model (typically several) for quantifying elements of societal development such as economic and population growth, biophysical model (such as for crop yields or hydrology), and societal impact model (e.g. economic or health model). Some sources of uncertainty are reduced or eliminated by the framing of the question. For example, it may be useful to ask what an impact outcome would be conditional on a given societal development pathway, forcing scenario, or policy. However many sources of uncertainty remain, and it is rare for all or even most of these sources to be accounted for. I use the example of a recent integrated project on the Benefits of Reduced Anthropogenic Climate changE (BRACE) to explore useful approaches to uncertainty across multiple components of an impact assessment. BRACE comprises 23 papers that assess the differences in impacts between two alternative climate futures: those associated with Representative Concentration Pathways (RCPs) 4.5 and 8.5. It quantifies difference in impacts in terms of extreme events, health, agriculture, tropical cyclones, and sea level rise. Methodologically, it includes climate modeling, statistical analysis, integrated assessment modeling, and sector-specific impact modeling. It employs alternative scenarios of both radiative forcing and societal development, but generally uses a single climate model (CESM), partially accounting for climate uncertainty by drawing heavily on large initial condition ensembles. Strengths and weaknesses of the approach to uncertainty in BRACE are assessed. Options under consideration for improving the approach include the use of perturbed physics ensembles of CESM, employing results from multiple climate models, and combining the results from single impact models with statistical representations of uncertainty across multiple models. A key consideration is the relationship between the question being addressed and the uncertainty approach.
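
    The combinatorial burden described above is easy to make concrete: each independent choice multiplies the number of analysis paths. The sketch below enumerates the cross-product of a few hypothetical choice sets (the option names are invented, apart from the two RCPs named in the abstract).

```python
# Back-of-the-envelope illustration of how choices multiply: enumerating one
# analysis path per combination of uncertainty sources (most names invented).
from itertools import product

choices = {
    "forcing":      ["RCP4.5", "RCP8.5"],
    "climate_runs": ["ens01", "ens02", "ens03"],   # initial-condition ensemble members
    "downscaling":  ["BCSD", "delta"],
    "society":      ["SSP3", "SSP5"],
    "impact_model": ["crop_A", "crop_B"],
}

paths = list(product(*choices.values()))
print(f"{len(paths)} distinct analysis paths")       # 2*3*2*2*2 = 48
for path in paths[:3]:
    print(dict(zip(choices, path)))
```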

  14. A control system for a powered prosthesis using positional and myoelectric inputs from the shoulder complex.

    PubMed

    Losier, Y; Englehart, K; Hudgins, B

    2007-01-01

    The integration of multiple input sources within a control strategy for powered upper limb prostheses could provide smoother, more intuitive multi-joint reaching movements based on the user's intended motion. This paper presents the results of using myoelectric signals (MES) from the shoulder area in combination with the position of the shoulder as input sources to multiple linear discriminant analysis classifiers. Such an approach may provide users with control signals capable of controlling three degrees of freedom (DOF). This work is another important step in the development of hybrid systems that will enable simultaneous control of multiple degrees of freedom used for reaching tasks in a prosthetic limb.
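
    A minimal sketch of the classification step, assuming synthetic data: myoelectric and shoulder-position features are concatenated and fed to a linear discriminant analysis classifier, in the spirit of the paper's approach. The channel counts and class structure are invented.

```python
# Illustrative only: an LDA classifier fed concatenated myoelectric (MES)
# features and shoulder-position features; all data here are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
n_per_class, n_classes = 60, 3                  # e.g., three reaching motions
X, y = [], []
for c in range(n_classes):
    mes = rng.normal(loc=c, scale=1.0, size=(n_per_class, 8))        # 8 MES channels
    pos = rng.normal(loc=c * 0.5, scale=0.3, size=(n_per_class, 2))  # shoulder angles
    X.append(np.hstack([mes, pos]))             # combine the two input sources
    y += [c] * n_per_class

X = np.vstack(X)
clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```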

  15. Multisensory processing of naturalistic objects in motion: a high-density electrical mapping and source estimation study.

    PubMed

    Senkowski, Daniel; Saint-Amour, Dave; Kelly, Simon P; Foxe, John J

    2007-07-01

    In everyday life, we continuously and effortlessly integrate the multiple sensory inputs from objects in motion. For instance, the sound and the visual percept of vehicles in traffic provide us with complementary information about the location and motion of vehicles. Here, we used high-density electrical mapping and local auto-regressive average (LAURA) source estimation to study the integration of multisensory objects in motion as reflected in event-related potentials (ERPs). A randomized stream of naturalistic multisensory-audiovisual (AV), unisensory-auditory (A), and unisensory-visual (V) "splash" clips (i.e., a drop falling and hitting a water surface) was presented among non-naturalistic abstract motion stimuli. The visual clip onset preceded the "splash" onset by 100 ms for multisensory stimuli. For naturalistic objects early multisensory integration effects beginning 120-140 ms after sound onset were observed over posterior scalp, with distributed sources localized to occipital cortex, temporal lobule, insular, and medial frontal gyrus (MFG). These effects, together with longer latency interactions (210-250 and 300-350 ms) found in a widespread network of occipital, temporal, and frontal areas, suggest that naturalistic objects in motion are processed at multiple stages of multisensory integration. The pattern of integration effects differed considerably for non-naturalistic stimuli. Unlike naturalistic objects, no early interactions were found for non-naturalistic objects. The earliest integration effects for non-naturalistic stimuli were observed 210-250 ms after sound onset including large portions of the inferior parietal cortex (IPC). As such, there were clear differences in the cortical networks activated by multisensory motion stimuli as a consequence of the semantic relatedness (or lack thereof) of the constituent sensory elements.

  16. SNPConvert: SNP Array Standardization and Integration in Livestock Species.

    PubMed

    Nicolazzi, Ezequiel Luis; Marras, Gabriele; Stella, Alessandra

    2016-06-09

    One of the main advantages of single nucleotide polymorphism (SNP) array technology is providing genotype calls for a specific number of SNP markers at a relatively low cost. Since its first application in animal genetics, the number of available SNP arrays for each species has been constantly increasing. However, unlike whole-genome sequence data analysis, SNP array data does not have a common set of file formats or coding conventions for allele calling. Therefore, the standardization and integration of SNP array data from multiple sources have become an obstacle, especially for users with basic or no programming skills. Here, we describe the difficulties related to handling SNP array data, focusing on file formats, SNP allele coding, and mapping. We also present the SNPConvert suite, a multi-platform, open-source, and user-friendly set of tools to overcome these issues. This tool, which can be integrated with open-source and open-access tools already available, is a first step towards an integrated system to standardize and integrate any type of raw SNP array data. The tool is available at: https://github.com/nicolazzie/SNPConvert.git.

  17. NASA/BLM Applications Pilot Test (APT), phase 2. Volume 3: Technology transfer

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Techniques used and materials presented at a planning session and two workshops held to provide hands-on training in the integration of quantitatively based remote sensing data are described, as well as the methods used to enhance understanding of approaches to inventories that integrate multiple data sources given various resource information objectives. Significant results from each of the technology transfer sessions are examined.

  18. Implementing bioinformatic workflows within the bioextract server

    USDA-ARS?s Scientific Manuscript database

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...

  19. Hypermedia (Multimedia).

    ERIC Educational Resources Information Center

    Byrom, Elizabeth

    1990-01-01

    Hypermedia allows students to follow associative links among elements of nonsequential information, by combining information from multiple sources into one microcomputer-controlled system. Hypermedia products help teachers create lessons integrating text, motion film, color graphics, speech, and music, by linking such electronic devices as…

  20. Michigan Department of Transportation (MDOT) weather responsive traveler information (Wx-TINFO) system.

    DOT National Transportation Integrated Search

    2016-01-01

    FHWA's Road Weather Management Program partnered with MDOT to develop a weather responsive traveler information system called Wx-TINFO. The system integrates multiple weather data sources into one program, enabling Transportation Oper...

  1. The OLI Radiometric Scale Realization Round Robin Measurement Campaign

    NASA Technical Reports Server (NTRS)

    Cutlip, Hansford; Cole, Jerold; Johnson, B. Carol; Maxwell, Stephen; Markham, Brian; Ong, Lawrence; Hom, Milton; Biggar, Stuart

    2011-01-01

    A round robin radiometric scale realization was performed at the Ball Aerospace Radiometric Calibration Laboratory in January/February 2011 in support of the Operational Land Imager (OLI) Program. Participants included Ball Aerospace, NIST, NASA Goddard Space Flight Center, and the University of Arizona. The eight-day campaign included multiple observations of three integrating sphere sources by nine radiometers. The objective of the campaign was to validate the radiance calibration uncertainty ascribed to the integrating sphere used to calibrate the OLI instrument. The instrument-level calibration source uncertainty was validated by quantifying: (1) the long-term stability of the NIST-calibrated radiance artifact, (2) the responsivity scale of the Ball Aerospace transfer radiometer, and (3) the operational characteristics of the large integrating sphere.

  2. A framework for characterizing drug information sources.

    PubMed

    Sharp, Mark; Bodenreider, Olivier; Wacholder, Nina

    2008-11-06

    Drug information is complex, voluminous, heterogeneous, and dynamic. Multiple sources are available, each providing some elements of information about drugs (usually for a given purpose), but there exists no integrated view or directory that could be used to locate sources appropriate to a given purpose. We examined 23 sources that provide drug information in the pharmacy, chemistry, biology, and clinical medicine domains. Their drug information content could be categorized with 39 dimensions. We propose this list of dimensions as a framework for characterizing drug information sources. As an evaluation, we show that this framework is useful for comparing drug information sources and selecting sources most relevant to a given use case.
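
    One way to operationalize such a framework is to encode each source as the set of dimensions it covers and rank sources against a use case by set overlap. The sketch below is a toy encoding; the source names, dimensions, and Jaccard ranking are assumptions, not the paper's method.

```python
# Toy encoding of the framework idea: each source is the set of content
# dimensions it covers (source names and dimensions here are invented).
sources = {
    "source_A": {"indications", "dosage", "interactions", "chemistry"},
    "source_B": {"chemistry", "targets", "pathways"},
    "source_C": {"indications", "adverse_events", "dosage"},
}

def jaccard(a, b):
    # Overlap between two dimension sets, 0 (disjoint) to 1 (identical).
    return len(a & b) / len(a | b)

# Select sources most relevant to a use case expressed as needed dimensions.
need = {"indications", "dosage", "adverse_events"}
ranked = sorted(sources, key=lambda s: jaccard(sources[s], need), reverse=True)
for s in ranked:
    print(s, round(jaccard(sources[s], need), 2))
```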

  3. Multiple-input multiple-output visible light communication system based on disorder dispersion components

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Zhang, Qi; Hao, Yue; Zhou, Xin-hui; Yi, Ming-dong; Wei, Wei; Huang, Wei; Li, Xing-ao

    2017-10-01

    A multiple-input multiple-output visible light communication (VLC) system based on disorder dispersion components is presented. Instead of the monochromatic sources and large-size photodetectors used in traditional VLC systems, broadband sources with different spectra act as the transmitters, and a compact imaging chip sensor accompanied by a disorder dispersion component and a calculating component serves as the receiver in the proposed system. This system has the merits of small size, more channels, simple structure, easy integration, and low cost. At the same time, the broadband sources are suitable as illumination sources because of their white color. A regularized procedure is designed to solve a matrix equation for decoding the signals at the receivers. A proof-of-concept experiment using on-off keying modulation was performed to prove the feasibility of the design. The experimental results show that the signals decoded by the receivers fit well with those generated by the transmitters, but the bit error ratio increases with the number of signal channels. The results could be further improved by using a high-speed charge-coupled device, decreasing noise, and increasing the distance between the transmitters and the receivers.
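
    The decoding step described above amounts to solving a regularized matrix equation. A minimal sketch, assuming a calibrated mixing matrix A and on-off keyed symbols, is shown below with invented dimensions and noise level; it uses plain Tikhonov (ridge) regularization rather than the authors' specific procedure.

```python
# Sketch of a regularized decoding step: recover per-channel signals x from
# dispersed detector readings y = A x + noise (A known from calibration).
import numpy as np

rng = np.random.default_rng(5)
n_pix, n_src = 64, 4                       # detector pixels, broadband sources
A = rng.random((n_pix, n_src))             # dispersion/mixing matrix (invented)
x_true = rng.integers(0, 2, n_src).astype(float)   # on-off keyed symbols
y = A @ x_true + 0.01 * rng.normal(size=n_pix)

lam = 1e-2                                 # Tikhonov regularization weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_src), A.T @ y)
bits = (x_hat > 0.5).astype(int)           # threshold back to on-off symbols

print("decoded:", bits, " true:", x_true.astype(int))
```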

  4. Building a Predictive Capability for Decision-Making that Supports MultiPEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carmichael, Joshua Daniel

    Multi-phenomenological explosion monitoring (multiPEM) is a developing science that uses multiple geophysical signatures of explosions to better identify and characterize their sources. MultiPEM researchers seek to integrate explosion signatures together to provide stronger detection, parameter estimation, or screening capabilities between different sources or processes. This talk will address forming a predictive capability for screening waveform explosion signatures to support multiPEM.

  5. Integrating multiple data sources for malware classification

    DOEpatents

    Anderson, Blake Harrell; Storlie, Curtis B; Lane, Terran

    2015-04-28

    Disclosed herein are representative embodiments of tools and techniques for classifying programs. According to one exemplary technique, at least one graph representation of at least one dynamic data source of at least one program is generated. Also, at least one graph representation of at least one static data source of the at least one program is generated. Additionally, at least using the at least one graph representation of the at least one dynamic data source and the at least one graph representation of the at least one static data source, the at least one program is classified.

  6. Integrating multiple satellite data for crop monitoring

    USDA-ARS?s Scientific Manuscript database

    Remote sensing provides a valuable data source for detecting crop types, monitoring crop condition and predicting crop yields from space. Routine and continuous remote sensing data are critical for agricultural research and operational applications. Since crop field dimensions tend to be relatively ...

  7. Multiple Kernel Learning with Random Effects for Predicting Longitudinal Outcomes and Data Integration

    PubMed Central

    Chen, Tianle; Zeng, Donglin

    2015-01-01

    Summary Predicting disease risk and progression is one of the main goals in many clinical research studies. Cohort studies on the natural history and etiology of chronic diseases span years and data are collected at multiple visits. Although kernel-based statistical learning methods are proven to be powerful for a wide range of disease prediction problems, these methods are only well studied for independent data but not for longitudinal data. It is thus important to develop time-sensitive prediction rules that make use of the longitudinal nature of the data. In this paper, we develop a novel statistical learning method for longitudinal data by introducing subject-specific short-term and long-term latent effects through a designed kernel to account for within-subject correlation of longitudinal measurements. Since the presence of multiple sources of data is increasingly common, we embed our method in a multiple kernel learning framework and propose a regularized multiple kernel statistical learning with random effects to construct effective nonparametric prediction rules. Our method allows easy integration of various heterogeneous data sources and takes advantage of correlation among longitudinal measures to increase prediction power. We use different kernels for each data source taking advantage of the distinctive feature of each data modality, and then optimally combine data across modalities. We apply the developed methods to two large epidemiological studies, one on Huntington's disease and the other on Alzheimer's Disease (Alzheimer's Disease Neuroimaging Initiative, ADNI) where we explore a unique opportunity to combine imaging and genetic data to study prediction of mild cognitive impairment, and show a substantial gain in performance while accounting for the longitudinal aspect of the data. PMID:26177419
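
    The kernel-combination idea (without the paper's subject-specific random effects) can be sketched compactly: build one kernel per modality, take a weighted sum, and solve a kernel ridge system. The data, kernel parameters, and fixed weights below are invented; in full multiple kernel learning the weights would be learned.

```python
# Minimal multiple-kernel sketch (omitting the paper's random effects): one RBF
# kernel per data modality, combined with weights, then kernel ridge regression.
import numpy as np

def rbf(X, gamma):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix.
    sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(6)
n = 120
X_imaging = rng.normal(size=(n, 10))       # modality 1 (synthetic)
X_genetic = rng.normal(size=(n, 50))       # modality 2 (synthetic)
y = X_imaging[:, 0] + 0.5 * X_genetic[:, 0] + 0.1 * rng.normal(size=n)

# Convex combination of per-source kernels; weights could be learned (MKL).
K = 0.6 * rbf(X_imaging, gamma=0.1) + 0.4 * rbf(X_genetic, gamma=0.02)

alpha = np.linalg.solve(K + 1.0 * np.eye(n), y)    # kernel ridge coefficients
y_fit = K @ alpha
print("training R^2:", 1 - np.var(y - y_fit) / np.var(y))
```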

  8. Integrating Multiple Data Sources for Combinatorial Marker Discovery: A Study in Tumorigenesis.

    PubMed

    Bandyopadhyay, Sanghamitra; Mallik, Saurav

    2018-01-01

    Identification of combinatorial markers from multiple data sources is a challenging task in bioinformatics. Here, we propose a novel computational framework for identifying significant combinatorial markers using both gene expression and methylation data. The gene expression and methylation data are integrated into a single continuous dataset as well as a (post-discretized) boolean dataset based on their intrinsic (i.e., inverse) relationship. A novel combined score of methylation and expression data is introduced, which is computed on the integrated continuous data to identify an initial non-redundant set of genes. Thereafter, (maximal) frequent closed homogeneous genesets are identified using a well-known biclustering algorithm applied to the integrated boolean data of the determined non-redundant set of genes. A novel sample-based weighted support is then proposed, computed on the integrated boolean data of the determined non-redundant set of genes in order to identify the non-redundant significant genesets. The top few resulting genesets are identified as potential combinatorial markers. Since our proposed method generates a smaller number of significant non-redundant genesets than those by other popular methods, the method is much faster than the others. Application of the proposed technique to an expression and a methylation dataset for uterine tumor or prostate carcinoma produces a set of significant combinations of markers. We expect that such a combination of markers will produce lower false positives than individual markers.
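
    Since the paper's score symbols were lost in extraction, the sketch below shows only the general shape of such a combined score: z-scored expression minus z-scored methylation, exploiting their inverse relationship, followed by discretization to a boolean matrix. All data and the threshold are invented.

```python
# Hypothetical illustration of a combined expression/methylation score that
# exploits their inverse relation; not the paper's exact score definition.
import numpy as np

rng = np.random.default_rng(7)
n_genes, n_samples = 5, 30
expr = rng.normal(size=(n_genes, n_samples))
meth = -0.7 * expr + rng.normal(scale=0.5, size=(n_genes, n_samples))  # inverse trend

z = lambda a: (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)
combined = z(expr) - z(meth)          # high when expression is up AND methylation down

boolean = combined > 1.0              # post-discretized matrix for biclustering
print("combined score, gene 0:", np.round(combined[0, :5], 2))
print("boolean pattern, gene 0:", boolean[0, :5])
```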

  9. Auditing the multiply-related concepts within the UMLS

    PubMed Central

    Mougin, Fleur; Grabar, Natalia

    2014-01-01

    Objective This work focuses on multiply-related Unified Medical Language System (UMLS) concepts, that is, concepts associated through multiple relations. The relations involved in such situations are audited to determine whether they are provided by source vocabularies or result from the integration of these vocabularies within the UMLS. Methods We study the compatibility of the multiple relations which associate the concepts under investigation and try to explain the reason why they co-occur. Towards this end, we analyze the relations both at the concept and term levels. In addition, we randomly select 288 concepts associated through contradictory relations and manually analyze them. Results At the UMLS scale, only 0.7% of combinations of relations are contradictory, while homogeneous combinations are observed in one-third of situations. At the scale of source vocabularies, one-third do not contain more than one relation between the concepts under investigation. Among the remaining source vocabularies, seven of them mainly present multiple non-homogeneous relations between terms. Analysis at the term level also shows that only in a quarter of cases are the source vocabularies responsible for the presence of multiply-related concepts in the UMLS. These results are available at: http://www.isped.u-bordeaux2.fr/ArticleJAMIA/results_multiply_related_concepts.aspx. Discussion Manual analysis was useful to explain the conceptualization difference in relations between terms across source vocabularies. The exploitation of source relations was helpful for understanding why some source vocabularies describe multiple relations between a given pair of terms. PMID:24464853

  10. IT/IS plus E: exploring the need for e-integration

    NASA Astrophysics Data System (ADS)

    Miele, Renato; Gunasekaran, Angappa; Yusuf, Yahaya Y.

    2000-10-01

    The change in IT/IS strategy is about the Internet becoming a major part of the corporate environment and driving decisions more and more. Companies of all sizes and industries can fully engage employees, customers and partners to capitalize upon the new Internet economy. They can optimize supply chains, manage strategic relationships, reduce time to market, share vital information, and increase productivity and shareholder value. Remaining competitive in today's rapidly changing global marketplace requires fast action. The problem is now how much, how soon, and what kind of Internet-based components are essential for companies to be successful, and how the adoption of E-Integration can become a critical component of a company's survival in an increasingly competitive environment. How information, knowledge and innovation processes can drive business success are fundamental notions for the information-based economy, which have been extensively researched and confirmed throughout the IT revolution. The new capability to use the Internet to supply large amounts of relevant information from multiple internal and external sources makes it possible to move from isolated information systems toward an integrated environment in every business organization. The article addresses how E-Integration must link together data from multiple sources, providing a seamless system, fully interoperable with the pre-existing IT environment, totally scalable and upgradeable.

  11. Discovering perturbation of modular structure in HIV progression by integrating multiple data sources through non-negative matrix factorization.

    PubMed

    Ray, Sumanta; Maulik, Ujjwal

    2016-12-20

    Detecting perturbation in modular structure during HIV-1 disease progression is an important step in understanding the stage-specific infection pattern of the HIV-1 virus in human cells. In this article, we propose a novel methodology for integrating multiple sources of biological information to identify such disruption in human gene modules during different stages of HIV-1 infection. We integrate three different types of biological information: gene expression information, protein-protein interaction information and gene ontology information, into single gene meta-modules, through non-negative matrix factorization (NMF). As the identified meta-modules inherit this information, detecting their perturbation reflects the changes in expression pattern, in PPI structure and in functional similarity of genes during infection progression. To integrate modules of different data sources into strong meta-modules, NMF-based clustering is utilized here. Perturbation in meta-modular structure is identified by investigating the topological and intramodular properties and ranking those meta-modules using a rank aggregation algorithm. We have also analyzed the preservation structure of significant GO terms in which the human proteins of the meta-modules participate. Moreover, we have performed an analysis to show the change of coregulation pattern of identified transcription factors (TFs) over the HIV progression stages.
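
    A compact stand-in for the integration step: normalize a feature matrix per source, concatenate, and factorize with NMF so that genes with consistent patterns across sources load on the same meta-modules. The matrices below are random placeholders, and this is a simplification of the paper's NMF-based clustering.

```python
# Conceptual sketch: stack per-source gene feature matrices and factorize
# with NMF so genes sharing patterns across sources co-cluster.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(8)
n_genes = 200
expr_sim = np.abs(rng.normal(size=(n_genes, 40)))   # expression-derived features
ppi_sim = np.abs(rng.normal(size=(n_genes, 40)))    # PPI-derived features
go_sim = np.abs(rng.normal(size=(n_genes, 40)))     # GO-similarity features

# Normalize each source, then concatenate so no source dominates the loss.
norm = lambda a: a / np.linalg.norm(a)
X = np.hstack([norm(expr_sim), norm(ppi_sim), norm(go_sim)])

model = NMF(n_components=10, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(X)                  # gene-by-metamodule loadings
meta_module = W.argmax(axis=1)              # assign each gene to a meta-module
print("genes per meta-module:", np.bincount(meta_module, minlength=10))
```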

  12. Scaling to diversity: The DERECHOS distributed infrastructure for analyzing and sharing data

    NASA Astrophysics Data System (ADS)

    Rilee, M. L.; Kuo, K. S.; Clune, T.; Oloso, A.; Brown, P. G.

    2016-12-01

    Integrating Earth Science data from diverse sources such as satellite imagery and simulation output can be expensive and time-consuming, limiting scientific inquiry and the quality of our analyses. Reducing these costs will improve innovation and quality in science. The current Earth Science data infrastructure focuses on downloading data based on requests formed from the search and analysis of associated metadata. While the data products provided by archives may use the best available data sharing technologies, scientist end-users generally do not have such resources (including staff) available to them. Furthermore, only once an end-user has received the data from multiple diverse sources and has integrated them can the actual analysis and synthesis begin. The cost of getting from idea to the point where synthesis can start dramatically slows progress. In this presentation we discuss a distributed computational and data storage framework that eliminates much of the aforementioned cost. The SciDB distributed array database is central, as it is optimized for scientific computing involving very large arrays and performs better than less specialized frameworks like Spark. Adding spatiotemporal functions to SciDB creates a powerful platform for analyzing and integrating massive, distributed datasets. SciDB allows Big Earth Data analysis to be performed "in place" without the need for expensive downloads and end-user resources. Spatiotemporal indexing technologies such as the hierarchical triangular mesh enable the compute and storage affinity needed to efficiently perform co-located and conditional analyses while minimizing data transfers. These technologies automate the integration of diverse data sources using the framework, a critical step beyond current metadata search and analysis. Instead of downloading data into their idiosyncratic local environments, end-users can generate and share data products integrated from multiple diverse sources using a common shared environment, turning distributed active archive centers (DAACs) from warehouses into distributed active analysis centers.
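
    A toy sketch of the compute/storage-affinity idea: once two datasets share a spatiotemporal index, co-located comparison becomes a key lookup rather than a download-and-regrid step. The integerized lat/lon/time binning below is illustrative only (DERECHOS uses a hierarchical triangular mesh):

      from collections import defaultdict

      def st_index(lat, lon, t, res=1.0, dt=3600):
          # Map (lat, lon, unix time) to a coarse integer bin key.
          return (int(lat // res), int(lon // res), int(t // dt))

      # Hypothetical observations: (lat, lon, time, value).
      satellite = [(10.2, 20.7, 7200.0, 281.5), (10.8, 20.1, 7400.0, 282.0)]
      model_out = [(10.5, 20.5, 7500.0, 280.9), (40.0, 5.0, 7200.0, 275.3)]

      bins = defaultdict(list)
      for lat, lon, t, v in model_out:
          bins[st_index(lat, lon, t)].append(v)

      # Conditional, co-located comparison: only bins present in both sources.
      for lat, lon, t, v in satellite:
          key = st_index(lat, lon, t)
          if key in bins:
              print(key, [v - m for m in bins[key]])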

  13. Implementation and performance evaluation of an open-source controller for precision control of a gripper

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Yong; Ham, Un-Hyeong; Park, Young-Woo; Jung, Hak-Sang; Jung, Il-Kyun; Lim, Sun

    2017-12-01

    This paper proposes an integrated gripper-embedded operating system with an external interface structure for sophisticated gripper control. The system has multiple functions that control the gripping module and measure the pose of the gripper body with respect to the contact environment. A dedicated open-source gripper controller is developed, and an external communication interface between the robot controller and the gripper controller is designed. An experimental environment for the fixed-cycle test integrates the gripper software system with commercial hardware. As a result, a cycle deviation of approximately 2% is measured and the system is verified for gripper control.
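
    A minimal sketch of a fixed-cycle test of the kind such an evaluation implies, assuming a hypothetical 10 ms control period; it reports the relative cycle-time deviation, the metric behind the roughly 2% figure:

      import time

      PERIOD = 0.01  # 10 ms target control cycle (illustrative)

      def control_step():
          pass  # placeholder for gripper command/feedback exchange

      deviations = []
      next_deadline = time.perf_counter()
      for _ in range(100):
          start = time.perf_counter()
          control_step()
          next_deadline += PERIOD
          sleep = next_deadline - time.perf_counter()
          if sleep > 0:
              time.sleep(sleep)
          cycle = time.perf_counter() - start
          deviations.append(abs(cycle - PERIOD) / PERIOD)

      print(f"mean cycle deviation: {100 * sum(deviations) / len(deviations):.2f}%")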

  14. Multiple Frequency Contrast Source Inversion Method for Vertical Electromagnetic Profiling: 2D Simulation Results and Analyses

    NASA Astrophysics Data System (ADS)

    Li, Jinghe; Song, Linping; Liu, Qing Huo

    2016-02-01

    A simultaneous multiple frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions. It simulates the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver of the 2D volume integral equation for the forward computation. The inversion technique combines the efficient FFT algorithm, which speeds up the matrix-vector multiplication, with the stable convergence of the simultaneous multiple frequency CSI in the iteration process. As a result, this method is capable of effective quantitative conductivity image reconstruction for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples demonstrate the effectiveness and capacity of the simultaneous multiple frequency CSI method for a limited array view in VEP.
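
    The forward solver's core trick can be shown in one dimension: because the integral operator's matvec is a convolution, it can be applied in O(n log n) with FFTs inside a stabilized biconjugate-gradient iteration. A toy sketch (real-valued, 1D, with an invented kernel, far simpler than the 2D electromagnetic problem):

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, bicgstab

      n = 256
      kernel = np.exp(-np.arange(n) / 10.0)  # illustrative Green's-function decay
      K = np.fft.fft(kernel)

      def matvec(x):
          # (I + G*) x, with the convolution G*x applied via FFT in O(n log n).
          return x + np.real(np.fft.ifft(K * np.fft.fft(x)))

      A = LinearOperator((n, n), matvec=matvec)
      b = np.zeros(n)
      b[n // 2] = 1.0                        # point excitation
      x, info = bicgstab(A, b)
      print(info, np.linalg.norm(matvec(x) - b))  # 0 and a small residual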

  15. AN INTEGRATED FRAMEWORK FOR WATERSHED ASSESSMENT AND MANAGEMENT

    EPA Science Inventory

    Watershed approaches to water quality management have become popular, because they can address multiple point and non-point sources and the influences of land use. Developing technically-sound watershed management strategies can be challenging due to the need to 1) account for mu...

  16. Radiative flux from a planar multiple point source within a cylindrical enclosure reaching a coaxial circular plane

    NASA Astrophysics Data System (ADS)

    Tryka, Stanislaw

    2007-04-01

    A general formula and some special integral formulas were presented for calculating radiative fluxes incident on a circular plane from a planar multiple point source within a coaxial cylindrical enclosure perpendicular to the source. These formulas were obtained for radiation propagating in a homogeneous isotropic medium, assuming that the lateral surface of the enclosure completely absorbs the incident radiation. Example results were computed numerically and illustrated with three-dimensional surface plots. The formulas presented are suitable for determining fluxes of radiation reaching planar circular detectors, collectors or other planar circular elements from systems of laser diodes, light-emitting diodes and fiber lamps within cylindrical enclosures, as well as from small biological emitters (bacteria, fungi, yeast, etc.) distributed on the planar bases of open nontransparent cylindrical containers.
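
    A small numerical sketch consistent with the stated assumptions (homogeneous medium, fully absorbing lateral wall, so only direct rays contribute): midpoint quadrature of the flux reaching a coaxial circular plane from isotropic point sources. All dimensions and intensities are invented for illustration:

      import numpy as np

      h = 0.05                 # axial source-to-detector distance [m]
      R = 0.02                 # detector radius [m]
      sources = [(0.0, 0.0, 1.0), (0.01, 0.0, 1.0)]  # (x, y, intensity [W/sr])

      nr, nphi = 200, 200
      r = (np.arange(nr) + 0.5) * (R / nr)
      phi = (np.arange(nphi) + 0.5) * (2 * np.pi / nphi)
      rr, pp = np.meshgrid(r, phi, indexing="ij")
      x, y = rr * np.cos(pp), rr * np.sin(pp)
      dA = rr * (R / nr) * (2 * np.pi / nphi)        # polar area element

      flux = 0.0
      for sx, sy, inten in sources:
          d2 = (x - sx) ** 2 + (y - sy) ** 2 + h ** 2  # squared distance
          # Irradiance E = I cos(theta) / d^2 with cos(theta) = h / d = I h / d^3.
          flux += np.sum(inten * h / d2 ** 1.5 * dA)

      print(f"incident flux: {flux:.4f} W")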

  17. An integrative framework to reevaluate the Neotropical catfish genus Guyanancistrus (Siluriformes: Loricariidae) with particular emphasis on the Guyanancistrus brevispinis complex.

    PubMed

    Fisch-Muller, Sonia; Mol, Jan H A; Covain, Raphaël

    2018-01-01

    Characterizing and naming species becomes more and more challenging due to the increasing difficulty of accurately delineating specific boundaries. In this context, integrative taxonomy aims to delimit taxonomic units by leveraging the complementarity of multiple data sources (geography, morphology, genetics, etc.). However, while the theoretical framework of integrative taxonomy has been explicitly stated, methods for the simultaneous analysis of multiple data sets are poorly developed, and in many cases different information sources are still explored successively. Multi-table methods developed in the field of community ecology provide such an integrative framework. In particular, multiple co-inertia analysis is flexible enough to allow the integration of morphological, distributional, and genetic data in the same analysis. We have applied this powerful approach to delimit species boundaries in a group of poorly differentiated catfishes belonging to the genus Guyanancistrus from the Guianas region of northeastern South America. Because the species G. brevispinis has been claimed to be a species complex consisting of five species, particular attention was paid to this taxon. Separate analyses indicated the presence of eight distinct species of Guyanancistrus, including five new species and one new genus. However, whereas none of the preliminary analyses revealed different lineages within G. brevispinis, the multi-table analysis revealed three intraspecific lineages. After taxonomic clarifications and description of the new genus, species and subspecies, a reappraisal of the biogeography of Guyanancistrus members was performed. This analysis revealed three distinct dispersals from the upper reaches of Amazonian tributaries toward coastal rivers of the Eastern Guianas Ecoregion. The central role played by the Maroni River, as a gateway from the Amazon basin, was confirmed. The Maroni River was also found to be a center of speciation for Guyanancistrus (with three species and two subspecies), as well as a source of dispersal of G. brevispinis toward the other main basins of the Eastern Guianas.
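
    A two-table co-inertia sketch on synthetic data (the study applies multiple co-inertia analysis across more tables; this pairwise version shows only the core idea): the SVD of the cross-covariance yields axes that maximize the covariance between projections of the two views of the same specimens:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 30                                   # specimens
      X = rng.normal(size=(n, 8))              # e.g. morphometric measurements
      Y = rng.normal(size=(n, 5))              # e.g. genetic distance coordinates

      Xc = X - X.mean(axis=0)
      Yc = Y - Y.mean(axis=0)

      # SVD of the cross-covariance: axes (u, v) maximize cov(Xc @ v, Yc @ u).
      U, s, Vt = np.linalg.svd(Yc.T @ Xc / n, full_matrices=False)

      x_scores = Xc @ Vt[0]                    # specimen scores, table X, axis 1
      y_scores = Yc @ U[:, 0]                  # specimen scores, table Y, axis 1
      print("axis-1 covariance:", s[0], np.cov(x_scores, y_scores)[0, 1])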

  18. An integrative framework to reevaluate the Neotropical catfish genus Guyanancistrus (Siluriformes: Loricariidae) with particular emphasis on the Guyanancistrus brevispinis complex

    PubMed Central

    Fisch-Muller, Sonia; Mol, Jan H. A.

    2018-01-01

    Characterizing and naming species becomes more and more challenging due to the increasing difficulty of accurately delineating specific boundaries. In this context, integrative taxonomy aims to delimit taxonomic units by leveraging the complementarity of multiple data sources (geography, morphology, genetics, etc.). However, while the theoretical framework of integrative taxonomy has been explicitly stated, methods for the simultaneous analysis of multiple data sets are poorly developed, and in many cases different information sources are still explored successively. Multi-table methods developed in the field of community ecology provide such an integrative framework. In particular, multiple co-inertia analysis is flexible enough to allow the integration of morphological, distributional, and genetic data in the same analysis. We have applied this powerful approach to delimit species boundaries in a group of poorly differentiated catfishes belonging to the genus Guyanancistrus from the Guianas region of northeastern South America. Because the species G. brevispinis has been claimed to be a species complex consisting of five species, particular attention was paid to this taxon. Separate analyses indicated the presence of eight distinct species of Guyanancistrus, including five new species and one new genus. However, whereas none of the preliminary analyses revealed different lineages within G. brevispinis, the multi-table analysis revealed three intraspecific lineages. After taxonomic clarifications and description of the new genus, species and subspecies, a reappraisal of the biogeography of Guyanancistrus members was performed. This analysis revealed three distinct dispersals from the upper reaches of Amazonian tributaries toward coastal rivers of the Eastern Guianas Ecoregion. The central role played by the Maroni River, as a gateway from the Amazon basin, was confirmed. The Maroni River was also found to be a center of speciation for Guyanancistrus (with three species and two subspecies), as well as a source of dispersal of G. brevispinis toward the other main basins of the Eastern Guianas. PMID:29298344

  19. Validation of a Sensor-Driven Modeling Paradigm for Multiple Source Reconstruction with FFT-07 Data

    DTIC Science & Technology

    2009-05-01

    ...operational warning and reporting (information) systems that combine automated data acquisition, analysis, source reconstruction, display and distribution of ... report and to incorporate this operational capability into the integrative multiscale urban modeling system implemented in the computational ...

  20. Capturing domain knowledge from multiple sources: the rare bone disorders use case.

    PubMed

    Groza, Tudor; Tudorache, Tania; Robinson, Peter N; Zankl, Andreas

    2015-01-01

    Lately, ontologies have become a fundamental building block in the process of formalising and storing complex biomedical information. The community-driven ontology curation process, however, ignores the possibility of multiple communities building, in parallel, conceptualisations of the same domain, and thus providing slightly different perspectives on the same knowledge. The individual nature of this effort leads to the need for a mechanism that enables us to create an overarching and comprehensive overview of the different perspectives on the domain knowledge. We introduce an approach that enables the loose integration of knowledge emerging from diverse sources under a single coherent interoperable resource. To accurately track the original knowledge statements, we record provenance at very granular levels. We exemplify the approach in the rare bone disorders domain by proposing the Rare Bone Disorders Ontology (RBDO). Using RBDO, researchers are able to answer queries such as: "What phenotypes describe a particular disorder and are common to all sources?" or to understand similarities between disorders based on divergent groupings (classifications) provided by the underlying sources. RBDO is available at http://purl.org/skeletome/rbdo. In order to support lightweight query and integration, the knowledge captured by RBDO has also been made available as a SPARQL endpoint at http://bio-lark.org/se_skeldys.html.
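
    A hypothetical sketch of the competency question above as a SPARQL query; the endpoint URL is taken from the abstract, but the rbdo: prefix and predicate names below are placeholders, not the actual RBDO vocabulary:

      from SPARQLWrapper import SPARQLWrapper, JSON

      sparql = SPARQLWrapper("http://bio-lark.org/se_skeldys.html")
      sparql.setQuery("""
      PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
      PREFIX rbdo: <http://purl.org/skeletome/rbdo#>
      SELECT ?phenotype (COUNT(DISTINCT ?source) AS ?nSources)
      WHERE {
        ?disorder rdfs:label "Achondroplasia"@en .     # example disorder
        ?disorder rbdo:hasPhenotype ?phenotype .       # placeholder predicate
        ?phenotype rbdo:assertedBy ?source .           # placeholder predicate
      }
      GROUP BY ?phenotype
      """)
      sparql.setReturnFormat(JSON)
      for row in sparql.query().convert()["results"]["bindings"]:
          print(row["phenotype"]["value"], row["nSources"]["value"])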

  1. Charge-pump voltage converter

    DOEpatents

    Brainard, John P [Albuquerque, NM; Christenson, Todd R [Albuquerque, NM

    2009-11-03

    A charge-pump voltage converter for converting a low voltage provided by a low-voltage source to a higher voltage. Charge is inductively generated on a transfer rotor electrode during its transit past an inductor stator electrode and subsequently transferred by the rotating rotor to a collector stator electrode for storage or use. Repetition of the charge transfer process leads to a build-up of voltage on a charge-receiving device. Connection of multiple charge-pump voltage converters in series can generate higher voltages, and connection of multiple charge-pump voltage converters in parallel can generate higher currents. Microelectromechanical (MEMS) embodiments of this invention provide a small and compact high-voltage (several hundred V) voltage source starting with a few-V initial voltage source. The microscale size of many embodiments of this invention makes it ideally suited for MEMS- and other micro-applications where integration of the voltage or charge source in a small package is highly desirable.

  2. Curating and Integrating Data from Multiple Sources to Support Healthcare Analytics.

    PubMed

    Ng, Kenney; Kakkanatt, Chris; Benigno, Michael; Thompson, Clay; Jackson, Margaret; Cahan, Amos; Zhu, Xinxin; Zhang, Ping; Huang, Paul

    2015-01-01

    As the volume and variety of healthcare related data continues to grow, the analysis and use of this data will increasingly depend on the ability to appropriately collect, curate and integrate disparate data from many different sources. We describe our approach to and highlight our experiences with the development of a robust data collection, curation and integration infrastructure that supports healthcare analytics. This system has been successfully applied to the processing of a variety of data types including clinical data from electronic health records and observational studies, genomic data, microbiomic data, self-reported data from surveys and self-tracked data from wearable devices from over 600 subjects. The curated data is currently being used to support healthcare analytic applications such as data visualization, patient stratification and predictive modeling.

  3. A Semantic Web Management Model for Integrative Biomedical Informatics

    PubMed Central

    Deus, Helena F.; Stanislaus, Romesh; Veiga, Diogo F.; Behrens, Carmen; Wistuba, Ignacio I.; Minna, John D.; Garner, Harold R.; Swisher, Stephen G.; Roth, Jack A.; Correa, Arlene M.; Broom, Bradley; Coombes, Kevin; Chang, Allen; Vogel, Lynn H.; Almeida, Jonas S.

    2008-01-01

    Background Data, data everywhere. The diversity and magnitude of the data generated in the Life Sciences defies automated articulation among complementary efforts. The additional need in this field for managing property and access permissions compounds the difficulty very significantly. This is particularly the case when the integration involves multiple domains and disciplines, even more so when it includes clinical and high throughput molecular data. Methodology/Principal Findings The emergence of Semantic Web technologies brings the promise of meaningful interoperation between data and analysis resources. In this report we identify a core model for biomedical Knowledge Engineering applications and demonstrate how this new technology can be used to weave a management model where multiple intertwined data structures can be hosted and managed by multiple authorities in a distributed management infrastructure. Specifically, the demonstration is performed by linking data sources associated with the Lung Cancer SPORE awarded to The University of Texas MD Anderson Cancer Center at Houston and the Southwestern Medical Center at Dallas. A software prototype, available as open source at www.s3db.org, was developed, and its proposed design has been made publicly available as an open source instrument for shared, distributed data management. Conclusions/Significance Semantic Web technologies have the potential to address the need for distributed and evolvable representations that are critical for systems biology and translational biomedical research. As this technology is incorporated into application development we can expect that both general purpose productivity software and domain specific software installed on our personal computers will become increasingly integrated with the relevant remote resources. In this scenario, the acquisition of a new dataset should automatically trigger the delegation of its analysis. PMID:18698353

  4. Enabling a systems biology knowledgebase with gaggle and firegoose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baliga, Nitin S.

    The overall goal of this project was to extend the existing Gaggle and Firegoose systems to develop an open-source technology that runs over the web and links desktop applications with many databases and software applications. This technology would enable researchers to incorporate workflows for data analysis that can be executed from this interface to other online applications. The four specific aims were to (1) provide one-click mapping of genes, proteins, and complexes across databases and species; (2) enable multiple simultaneous workflows; (3) expand sophisticated data analysis for online resources; and (4) enhance open-source development of the Gaggle-Firegoose infrastructure. Gaggle is an open-source Java software system that integrates existing bioinformatics programs and data sources into a user-friendly, extensible environment to allow interactive exploration, visualization, and analysis of systems biology data. Firegoose is an extension to the Mozilla Firefox web browser that enables data transfer between websites and desktop tools including Gaggle. In the last phase of this funding period, we made substantial progress on development and application of the Gaggle integration framework. We implemented the workspace in the Network Portal. Users can capture data from Firegoose and save them to the workspace. Users can create workflows to start multiple software components programmatically and pass data between them. Results of analysis can be saved to the cloud so that they can be easily restored on any machine. We also developed the Gaggle Chrome Goose, a plugin for the Google Chrome browser, in tandem with an OpenCPU server in the Amazon EC2 cloud. This allows users to interactively perform data analysis on a single web page using the R packages deployed on the OpenCPU server. The cloud-based framework facilitates collaboration between researchers from multiple organizations. We have made a number of enhancements to the cmonkey2 application to enable and improve the integration within different environments, and we have created a new tools pipeline for generating EGRIN2 models in a largely automated way.

  5. DASMiner: discovering and integrating data from DAS sources

    PubMed Central

    2009-01-01

    Background DAS is a widely adopted protocol for providing syntactic interoperability among biological databases. The popularity of DAS is due to a simplified and elegant mechanism for data exchange that consists of sources exposing their RESTful interfaces for data access. As a growing number of DAS services are available for molecular biology resources, there is an incentive to explore this protocol in order to advance data discovery and integration among these resources. Results We developed DASMiner, a Matlab toolkit for querying DAS data sources that enables creation of integrated biological models using the information available in DAS-compliant repositories. DASMiner is composed of a browser application and an API that work together to facilitate gathering of data from different DAS sources, which can be used for creating enriched datasets from multiple sources. The browser is used to formulate queries and navigate data contained in DAS sources. Users can execute queries against these sources in an intuitive fashion, without needing to know the specific DAS syntax for the particular source. Using the source's metadata provided by the DAS Registry, the browser's layout adapts to expose only the set of commands and coordinate systems supported by the specific source. For this reason, the browser can interrogate any DAS source, independently of the type of data being served. The API component of DASMiner may be used for programmatic access of DAS sources by programs in Matlab. Once the desired data is found during navigation, the query is exported in the format of an API call to be used within any Matlab application. We illustrate the use of DASMiner by creating integrative models of histone modification maps and protein-protein interaction networks. These enriched datasets were built by retrieving and integrating distributed genomic and proteomic DAS sources using the API. Conclusion The support of the DAS protocol allows hundreds of molecular biology databases to be treated as a federated, online collection of resources. DASMiner enables full exploration of these resources, and can be used to deploy applications and create integrated views of biological systems using the information deposited in DAS repositories. PMID:19919683
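
    DASMiner itself is a Matlab toolkit, but the DAS 'features' command it wraps is plain HTTP; a sketch in Python against a hypothetical DAS source (the server name is invented; the URL scheme and DASGFF element names follow the DAS specification):

      import urllib.request
      import xml.etree.ElementTree as ET

      base = "http://example.org/das/hg_source"         # hypothetical DAS source
      url = f"{base}/features?segment=1:100000,200000"  # DAS features command

      with urllib.request.urlopen(url) as resp:
          tree = ET.parse(resp)

      # DASGFF responses nest FEATURE elements under SEGMENT.
      for feat in tree.iter("FEATURE"):
          print(feat.get("id"), feat.findtext("TYPE"), feat.findtext("START"))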

  6. An integrated network of Arabidopsis growth regulators and its use for gene prioritization.

    PubMed

    Sabaghian, Ehsan; Drebert, Zuzanna; Inzé, Dirk; Saeys, Yvan

    2015-12-01

    Elucidating the molecular mechanisms that govern plant growth has been an important topic in plant research, and current advances in large-scale data generation call for computational tools that efficiently combine these different data sources to generate novel hypotheses. In this work, we present a novel, integrated network that combines multiple large-scale data sources to characterize growth regulatory genes in Arabidopsis, one of the main plant model organisms. The contributions of this work are twofold: first, we characterized a set of carefully selected growth regulators with respect to their connectivity patterns in the integrated network, and, subsequently, we explored to what extent these connectivity patterns can be used to suggest new growth regulators. Using a large-scale comparative study, we designed new supervised machine learning methods to prioritize growth regulators. Our results show that these methods significantly improve current state-of-the-art prioritization techniques, and are able to suggest meaningful new growth regulators. In addition, the integrated network is made available to the scientific community, providing a rich data source that will be useful for many biological processes, not necessarily restricted to plant growth.
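
    A minimal prioritization sketch under assumptions the abstract does not spell out (hypothetical per-gene connectivity features, known regulators as positive labels, and a random forest as the supervised learner):

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      n_genes = 500
      # Hypothetical per-gene connectivity features in the integrated network,
      # e.g. degree, weighted degree, fraction of neighbors that are known GRs.
      features = rng.random((n_genes, 3))
      known_gr = rng.choice(n_genes, size=40, replace=False)

      y = np.zeros(n_genes, dtype=int)
      y[known_gr] = 1   # positives: curated growth regulators

      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, y)
      scores = clf.predict_proba(features)[:, 1]

      # Rank unlabeled genes by predicted probability of being a regulator.
      candidates = np.argsort(scores)[::-1]
      print("top candidate genes:", [g for g in candidates if y[g] == 0][:10])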

  7. III-V quantum light source and cavity-QED on silicon.

    PubMed

    Luxmoore, I J; Toro, R; Del Pozo-Zamudio, O; Wasley, N A; Chekhovich, E A; Sanchez, A M; Beanland, R; Fox, A M; Skolnick, M S; Liu, H Y; Tartakovskii, A I

    2013-01-01

    Non-classical light sources offer a myriad of possibilities in both fundamental science and commercial applications. Single photons are the most robust carriers of quantum information and can be exploited for linear optics quantum information processing. Scale-up requires miniaturisation of the waveguide circuit and multiple single photon sources. Silicon photonics, driven by the incentive of optical interconnects, is a highly promising platform for the passive optical components, but integrated light sources are limited by silicon's indirect band-gap. III-V semiconductor quantum-dots, on the other hand, are proven quantum emitters. Here we demonstrate single-photon emission from quantum-dots coupled to photonic crystal nanocavities fabricated from III-V material grown directly on silicon substrates. The high quality of the III-V material and photonic structures is emphasized by observation of the strong-coupling regime. This work opens up the advantages of silicon photonics to the integration and scale-up of solid-state quantum optical systems.

  8. A scalable architecture for extracting, aligning, linking, and visualizing multi-Int data

    NASA Astrophysics Data System (ADS)

    Knoblock, Craig A.; Szekely, Pedro

    2015-05-01

    An analyst today has a tremendous amount of data available, but each of the various data sources typically exists in its own silo, so an analyst has limited ability to see an integrated view of the data and has little or no access to contextual information that could help in understanding the data. We have developed the Domain-Insight Graph (DIG) system, an innovative architecture for extracting, aligning, linking, and visualizing massive amounts of domain-specific content from unstructured sources. Under the DARPA Memex program we have already successfully applied this architecture to multiple application domains, including the enormous international problem of human trafficking, where we extracted, aligned and linked data from 50 million online Web pages. DIG builds on our Karma data integration toolkit, which makes it easy to rapidly integrate structured data from a variety of sources, including databases, spreadsheets, XML, JSON, and Web services. The ability to integrate Web services allows Karma to pull in live data from various social media sites, such as Twitter, Instagram, and OpenStreetMap. DIG then indexes the integrated data and provides an easy-to-use interface for query, visualization, and analysis.

  9. Virtual Plant Tissue: Building Blocks for Next-Generation Plant Growth Simulation

    PubMed Central

    De Vos, Dirk; Dzhurakhalov, Abdiravuf; Stijven, Sean; Klosiewicz, Przemyslaw; Beemster, Gerrit T. S.; Broeckhove, Jan

    2017-01-01

    Motivation: Computational modeling of plant developmental processes is becoming increasingly important. Cellular resolution plant tissue simulators have been developed, yet they are typically describing physiological processes in an isolated way, strongly delimited in space and time. Results: With plant systems biology moving toward an integrative perspective on development we have built the Virtual Plant Tissue (VPTissue) package to couple functional modules or models in the same framework and across different frameworks. Multiple levels of model integration and coordination enable combining existing and new models from different sources, with diverse options in terms of input/output. Besides the core simulator the toolset also comprises a tissue editor for manipulating tissue geometry and cell, wall, and node attributes in an interactive manner. A parameter exploration tool is available to study parameter dependence of simulation results by distributing calculations over multiple systems. Availability: Virtual Plant Tissue is available as open source (EUPL license) on Bitbucket (https://bitbucket.org/vptissue/vptissue). The project has a website https://vptissue.bitbucket.io. PMID:28523006

  10. Passing messages between biological networks to refine predicted interactions.

    PubMed

    Glass, Kimberly; Huttenhower, Curtis; Quackenbush, John; Yuan, Guo-Cheng

    2013-01-01

    Regulatory network reconstruction is a fundamental problem in computational biology. There are significant limitations to such reconstruction using individual datasets, and increasingly people attempt to construct networks using multiple, independent datasets obtained from complementary sources, but methods for this integration are lacking. We developed PANDA (Passing Attributes between Networks for Data Assimilation), a message-passing model using multiple sources of information to predict regulatory relationships, and used it to integrate protein-protein interaction, gene expression, and sequence motif data to reconstruct genome-wide, condition-specific regulatory networks in yeast as a model. The resulting networks were not only more accurate than those produced using individual data sets and other existing methods, but they also captured information regarding specific biological mechanisms and pathways that were missed using other methodologies. PANDA is scalable to higher eukaryotes, applicable to specific tissue or cell type data and conceptually generalizable to include a variety of regulatory, interaction, expression, and other genome-scale data. An implementation of the PANDA algorithm is available at www.sourceforge.net/projects/panda-net.
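
    A deliberately simplified sketch of the message-passing idea, not PANDA's actual update rules (which use similarity-based responsibility and availability scores): a motif-prior TF-by-gene network is nudged toward agreement with evidence passed from the gene side (coexpression) and the TF side (protein interactions):

      import numpy as np

      rng = np.random.default_rng(0)
      n_tfs, n_genes = 20, 100
      W = rng.random((n_tfs, n_genes))                   # motif prior network
      C = np.corrcoef(rng.normal(size=(n_genes, 30)))    # gene coexpression
      P = rng.random((n_tfs, n_tfs)); P = (P + P.T) / 2  # TF-TF interactions

      alpha = 0.1   # learning rate balancing prior against passed messages
      for step in range(20):
          gene_msg = W @ C / n_genes    # evidence from co-expressed gene targets
          tf_msg = P @ W / n_tfs        # evidence from interacting-TF targets
          W = (1 - alpha) * W + alpha * (gene_msg + tf_msg) / 2
      print(W.shape, W.mean())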

  11. Structurally Integrated Photoluminescent Chemical and Biological Sensors: An Organic Light-Emitting Diode-Based Platform

    NASA Astrophysics Data System (ADS)

    Shinar, J.; Shinar, R.

    The chapter describes the development, advantages, challenges, and potential of an emerging, compact photoluminescence-based sensing platform for chemical and biological analytes, including multiple analytes. In this platform, the excitation source is an array of organic light-emitting device (OLED) pixels that is structurally integrated with the sensing component. Steps towards advanced integration with additionally a thin-film-based photodetector are also described. The performance of the OLED-based sensing platform is examined for gas-phase and dissolved oxygen, glucose, lactate, ethanol, hydrazine, and anthrax lethal factor.

  12. Hydrologic and geochemical data assimilation at the Hanford 300 Area

    NASA Astrophysics Data System (ADS)

    Chen, X.; Hammond, G. E.; Murray, C. J.; Zachara, J. M.

    2012-12-01

    In modeling the uranium migration within the Integrated Field Research Challenge (IFRC) site at the Hanford 300 Area, uncertainties arise from both hydrologic and geochemical sources. The hydrologic uncertainty includes the transient flow boundary conditions induced by dynamic variations in Columbia River stage and the underlying heterogeneous hydraulic conductivity field, while the geochemical uncertainty is a result of limited knowledge of the geochemical reaction processes and parameters, as well as heterogeneity in uranium source terms. In this work, multiple types of data, including the results from constant-injection tests, borehole flowmeter profiling, and conservative tracer tests, are sequentially assimilated across scales within a Bayesian framework to reduce the hydrologic uncertainty. The hydrologic data assimilation is then followed by geochemical data assimilation, where the goal is to infer the heterogeneous distribution of uranium sources using uranium breakthrough curves from a desorption test that took place at high spring water table. We demonstrate in our study that Ensemble-based data assimilation techniques (Ensemble Kalman filter and smoother) are efficient in integrating multiple types of data sequentially for uncertainty reduction. The computational demand is managed by using the multi-realization capability within the parallel PFLOTRAN simulator.
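
    A minimal sketch of the stochastic ensemble Kalman filter analysis step at the heart of such assimilation, on toy dimensions (the actual study uses PFLOTRAN ensembles and field observations):

      import numpy as np

      rng = np.random.default_rng(0)
      n_state, n_obs, n_ens = 50, 5, 100

      Xf = rng.normal(size=(n_state, n_ens))    # forecast ensemble (e.g. ln K)
      H = np.zeros((n_obs, n_state))            # observe 5 state locations
      H[np.arange(n_obs), [3, 10, 20, 30, 44]] = 1.0
      R = 0.1 * np.eye(n_obs)                   # observation-error covariance
      d = rng.normal(size=n_obs)                # observed data (tracer, heads)

      # Ensemble-estimated forecast covariance.
      A = Xf - Xf.mean(axis=1, keepdims=True)
      Pf = A @ A.T / (n_ens - 1)

      K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain

      # Perturb observations per member (stochastic EnKF) and update.
      D = d[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
      Xa = Xf + K @ (D - H @ Xf)
      print(Xa.shape)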

  13. VISUALIZATION FROM INTRAOPERATIVE SWEPT-SOURCE MICROSCOPE-INTEGRATED OPTICAL COHERENCE TOMOGRAPHY IN VITRECTOMY FOR COMPLICATIONS OF PROLIFERATIVE DIABETIC RETINOPATHY.

    PubMed

    Gabr, Hesham; Chen, Xi; Zevallos-Carrasco, Oscar M; Viehland, Christian; Dandrige, Alexandria; Sarin, Neeru; Mahmoud, Tamer H; Vajzovic, Lejla; Izatt, Joseph A; Toth, Cynthia A

    2018-01-10

    To evaluate the use of live volumetric (4D) intraoperative swept-source microscope-integrated optical coherence tomography in vitrectomy for proliferative diabetic retinopathy complications. In this prospective study, we analyzed a subgroup of patients with proliferative diabetic retinopathy complications who required vitrectomy and who were imaged by the research swept-source microscope-integrated optical coherence tomography system. In near real time, images were displayed on a stereo heads-up display, facilitating intraoperative surgeon feedback. Postoperative review included scoring image quality, identifying different diabetic retinopathy-associated pathologies, and reviewing the intraoperatively documented surgeon feedback. Twenty eyes were included. Indications for vitrectomy were tractional retinal detachment (16 eyes), combined tractional-rhegmatogenous retinal detachment (2 eyes), and vitreous hemorrhage (2 eyes). Useful, good-quality 2D (B-scans) and 4D images were obtained in 16/20 eyes (80%). In these eyes, multiple diabetic retinopathy complications could be imaged. Swept-source microscope-integrated optical coherence tomography provided surgical guidance, e.g., in identifying dissection planes under fibrovascular membranes and in determining residual membranes and traction that would benefit from additional peeling. In 4/20 eyes (20%), acceptable images were captured, but they were not useful due to high tractional retinal detachment elevation, which made imaging challenging. Swept-source microscope-integrated optical coherence tomography can provide important guidance during surgery for proliferative diabetic retinopathy complications through intraoperative identification of different complications and facilitation of intraoperative decision making.

  14. Role of data warehousing in healthcare epidemiology.

    PubMed

    Wyllie, D; Davies, J

    2015-04-01

    Electronic storage of healthcare data, including individual-level risk factors for both infectious and other diseases, is increasing. These data can be integrated at hospital, regional and national levels. Data sources that contain risk factor and outcome information for a wide range of conditions offer the potential for efficient epidemiological analysis of multiple diseases. Opportunities may also arise for monitoring healthcare processes. Integrating diverse data sources presents epidemiological, practical, and ethical challenges. For example, diagnostic criteria, outcome definitions, and ascertainment methods may differ across the data sources. Data volumes may be very large, requiring sophisticated computing technology. Given the large populations involved, perhaps the most challenging aspect is how informed consent can be obtained for the development of integrated databases, particularly when it is not easy to demonstrate their potential. In this article, we discuss some of the ups and downs of recent projects as well as the potential of data warehousing for antimicrobial resistance monitoring. Copyright © 2015. Published by Elsevier Ltd.

  15. Registration and Fusion of Multiple Source Remotely Sensed Image Data

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline

    2004-01-01

    Earth and Space Science often involve the comparison, fusion, and integration of multiple types of remotely sensed data at various temporal, radiometric, and spatial resolutions. Results of this integration may be utilized for global change analysis, global coverage of an area at multiple resolutions, map updating or validation of new instruments, as well as integration of data provided by multiple instruments carried on multiple platforms, e.g. in spacecraft constellations or fleets of planetary rovers. Our focus is on developing methods to perform fast, accurate and automatic image registration and fusion. General methods for automatic image registration are being reviewed and evaluated. Various choices for feature extraction, feature matching and similarity measurements are being compared, including wavelet-based algorithms, mutual information and statistically robust techniques. Our work also involves studies related to image fusion and investigates dimension reduction and co-kriging for application-dependent fusion. All methods are being tested using several multi-sensor datasets, acquired at EOS Core Sites, and including multiple sensors such as IKONOS, Landsat-7/ETM+, EO1/ALI and Hyperion, MODIS, and SeaWIFS instruments. Issues related to the coregistration of data from the same platform (i.e., AIRS and MODIS from Aqua) or from several platforms of the A-train (i.e., MLS, HIRDLS, OMI from Aura with AIRS and MODIS from Terra and Aqua) will also be considered.
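
    One of the similarity measures mentioned above, mutual information, can be sketched directly from a joint histogram; a registration loop would maximize this score over transform parameters. The images below are synthetic stand-ins:

      import numpy as np

      def mutual_information(img_a, img_b, bins=32):
          # Joint histogram -> joint and marginal probability estimates.
          joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0                          # avoid log(0)
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

      rng = np.random.default_rng(0)
      a = rng.random((128, 128))
      b = 0.5 * a + 0.5 * rng.random((128, 128))   # partially dependent image
      print(mutual_information(a, b), mutual_information(a, rng.random((128, 128))))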

  16. Toward generalized human factors taxonomy for classifying ASAP incident reports, AQP performance ratings, and FOQA output

    DOT National Transportation Integrated Search

    2003-01-01

    Over the years, the FAA has partnered with industry to develop a number of programs for reporting, classifying, and analyzing safety-related data. Despite their successes, none of these programs has been able to integrate data from multiple sources. ...

  17. Bayesian source tracking via focalization and marginalization in an uncertain Mediterranean Sea environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L

    2010-07-01

    This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied for towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.

  18. Visual Feature Integration Indicated by Phase-Locked Frontal-Parietal EEG Signals

    PubMed Central

    Phillips, Steven; Takeda, Yuji; Singh, Archana

    2012-01-01

    The capacity to integrate multiple sources of information is a prerequisite for complex cognitive ability, such as finding a target uniquely identifiable by the conjunction of two or more features. Recent studies identified greater frontal-parietal synchrony during conjunctive than non-conjunctive (feature) search. Whether this difference also reflects greater information integration, rather than just differences in cognitive strategy (e.g., top-down versus bottom-up control of attention), or task difficulty is uncertain. Here, we examine the first possibility by parametrically varying the number of integrated sources from one to three and measuring phase-locking values (PLV) of frontal-parietal EEG electrode signals, as indicators of synchrony. Linear regressions, under hierarchical false-discovery rate control, indicated significant positive slopes for number of sources on PLV in the 30–38 Hz, 175–250 ms post-stimulus frequency-time band for pairs in the sagittal plane (i.e., F3-P3, Fz-Pz, F4-P4), after equating conditions for behavioural performance (to exclude effects due to task difficulty). No such effects were observed for pairs in the transverse plane (i.e., F3-F4, C3-C4, P3-P4). These results provide support for the idea that anterior-posterior phase-locking in the lower gamma-band mediates integration of visual information. They also provide a potential window into cognitive development, seen as developing the capacity to integrate more sources of information. PMID:22427847
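
    A minimal sketch of the phase-locking value (PLV) computation implied above, on synthetic trials for one frontal-parietal channel pair: band-pass to the 30-38 Hz band, extract instantaneous phase with the Hilbert transform, and average phase differences across trials:

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      fs = 500.0
      b, a = butter(4, [30 / (fs / 2), 38 / (fs / 2)], btype="band")

      rng = np.random.default_rng(0)
      trials = rng.normal(size=(40, 2, 512))   # 40 trials x (F3, P3) x samples

      phase_diffs = []
      for trial in trials:
          f3 = np.angle(hilbert(filtfilt(b, a, trial[0])))
          p3 = np.angle(hilbert(filtfilt(b, a, trial[1])))
          phase_diffs.append(f3 - p3)

      # PLV per time point: 1 = perfectly locked across trials, 0 = random.
      plv = np.abs(np.mean(np.exp(1j * np.array(phase_diffs)), axis=0))
      print(plv.max())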

  19. Visual feature integration indicated by phase-locked frontal-parietal EEG signals.

    PubMed

    Phillips, Steven; Takeda, Yuji; Singh, Archana

    2012-01-01

    The capacity to integrate multiple sources of information is a prerequisite for complex cognitive ability, such as finding a target uniquely identifiable by the conjunction of two or more features. Recent studies identified greater frontal-parietal synchrony during conjunctive than non-conjunctive (feature) search. Whether this difference also reflects greater information integration, rather than just differences in cognitive strategy (e.g., top-down versus bottom-up control of attention), or task difficulty is uncertain. Here, we examine the first possibility by parametrically varying the number of integrated sources from one to three and measuring phase-locking values (PLV) of frontal-parietal EEG electrode signals, as indicators of synchrony. Linear regressions, under hierarchical false-discovery rate control, indicated significant positive slopes for number of sources on PLV in the 30-38 Hz, 175-250 ms post-stimulus frequency-time band for pairs in the sagittal plane (i.e., F3-P3, Fz-Pz, F4-P4), after equating conditions for behavioural performance (to exclude effects due to task difficulty). No such effects were observed for pairs in the transverse plane (i.e., F3-F4, C3-C4, P3-P4). These results provide support for the idea that anterior-posterior phase-locking in the lower gamma-band mediates integration of visual information. They also provide a potential window into cognitive development, seen as developing the capacity to integrate more sources of information.

  20. Design and Implementation of a Nine-Level Multilevel Inverter

    NASA Astrophysics Data System (ADS)

    Dhineshkumar, K.; Subramani, C.

    2018-04-01

    This paper presents a solar-based nine-level multilevel inverter with an integrated boost converter. It uses 7 switches to produce a nine-level stepped output waveform. The aim of the work is to produce the nine-level waveform using a solar source and a boost converter. A conventional inverter requires multiple sources and 16 switches, as well as a larger number of voltage sources. The proposed inverter requires a single solar panel and a reduced number of switches, with an integrated boost converter that increases the input voltage of the inverter. The proposed inverter was simulated with an R load using MATLAB and experimentally verified with a prototype model. The proposed inverter can be used in a wide range of solar applications.

  1. Integrating multiple immunogenetic data sources for feature extraction and mining somatic hypermutation patterns: the case of "towards analysis" in chronic lymphocytic leukaemia.

    PubMed

    Kavakiotis, Ioannis; Xochelli, Aliki; Agathangelidis, Andreas; Tsoumakas, Grigorios; Maglaveras, Nicos; Stamatopoulos, Kostas; Hadzidimitriou, Anastasia; Vlahavas, Ioannis; Chouvarda, Ioanna

    2016-06-06

    Somatic Hypermutation (SHM) refers to the introduction of mutations within rearranged V(D)J genes, a process that increases the diversity of Immunoglobulins (IGs). The analysis of SHM has offered critical insight into the physiology and pathology of B cells, leading to strong prognostication markers for clinical outcome in chronic lymphocytic leukaemia (CLL), the most frequent adult B-cell malignancy. In this paper we present a methodology for integrating multiple immunogenetic and clinicobiological data sources in order to extract features and create high quality datasets for SHM analysis in IG receptors of CLL patients. This dataset is used as the basis for a higher-level integration procedure, inspired by social choice theory. This is applied in the Towards Analysis, our attempt to investigate the potential ontogenetic transformation of genes belonging to specific stereotyped CLL subsets towards other genes or gene families, through SHM. The data integration process, followed by feature extraction, resulted in the generation of a dataset containing information about mutations occurring through SHM. The Towards analysis performed on the integrated dataset, applying voting techniques, revealed the distinct behaviour of subset #201 compared to other subsets, as regards SHM-related movements among gene clans, both in allele-conserved and non-conserved gene areas. With respect to movement between genes, a high percentage of movements towards pseudogenes was found in all CLL subsets. This data integration and feature extraction process can set the basis for exploratory analysis or a fully automated computational data mining approach on many as yet unanswered, clinically relevant biological questions.
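
    A minimal rank-aggregation sketch in the social-choice spirit described above, using a Borda count over hypothetical per-criterion rankings (the paper's actual voting scheme may differ):

      from collections import defaultdict

      # Hypothetical rankings of candidate gene clans from three criteria.
      rankings = [
          ["clan_I", "clan_III", "clan_II"],
          ["clan_III", "clan_I", "clan_II"],
          ["clan_I", "clan_II", "clan_III"],
      ]

      scores = defaultdict(int)
      for ranking in rankings:
          n = len(ranking)
          for pos, item in enumerate(ranking):
              scores[item] += n - 1 - pos   # top rank earns the most points

      consensus = sorted(scores, key=scores.get, reverse=True)
      print(consensus, dict(scores))        # ['clan_I', 'clan_III', 'clan_II']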

  2. Evaluating IAIMS at Yale: information access.

    PubMed

    Grajek, S E; Calarco, P; Frawley, S J; McKay, J; Miller, P L; Paton, J A; Roderer, N K; Sullivan, J E

    1997-01-01

    To evaluate use of information resources during the first year of IAIMS implementation at the Yale-New Haven Medical Center. The evaluation asked: (1) Which information resources are being used? (2) Who uses information resources? (3) Where are information resources used? (4) Are multiple sources of information being integrated? Measures included monthly usage data for resources delivered network-wide, in the Medical Library, and in the Hospital; online surveys of library workstation users; an annual survey of a random, stratified sample of Medical Center faculty, postdoctoral trainees, students, nurses, residents, and managerial and professional staff; and user comments. Eighty-three percent of the Medical Center community use networked information resources, and use of resources is increasing. Both status (faculty, student, nurse, etc.) and mission (teaching, research, patient care) affect use of individual resources. Eighty-eight percent of people use computers in more than one location, and increases in usage of traditional library resources such as MEDLINE are due to increased access from outside the Library. Both survey and usage data suggest that people are using multiple resources during the same information seeking session. Almost all of the Medical Center community is using networked information resources in more settings. It is necessary to support increased demand for information access from remote locations and to specific populations, such as nurses. People are integrating information from multiple sources, but true integration within information systems is just beginning. Other institutions are advised to incorporate pragmatic evaluation into their IAIMS activities and to share evaluation results with decision-makers.

  3. Improving the interoperability of biomedical ontologies with compound alignments.

    PubMed

    Oliveira, Daniela; Pesquita, Catia

    2018-01-09

    Ontologies are commonly used to annotate and help process life sciences data. Although their original goal is to facilitate integration and interoperability among heterogeneous data sources, when these sources are annotated with distinct ontologies, bridging this gap can be challenging. In the last decade, ontology matching systems have been evolving and are now capable of producing high-quality mappings for life sciences ontologies, usually limited to the equivalence between two ontologies. However, life sciences research is becoming increasingly transdisciplinary and integrative, fostering the need to develop matching strategies that are able to handle multiple ontologies and more complex relations between their concepts. We have developed ontology matching algorithms that are able to find compound mappings between multiple biomedical ontologies, in the form of ternary mappings, finding for instance that "aortic valve stenosis"(HP:0001650) is equivalent to the intersection between "aortic valve"(FMA:7236) and "constricted" (PATO:0001847). The algorithms take advantage of search space filtering based on partial mappings between ontology pairs, to be able to handle the increased computational demands. The evaluation of the algorithms has shown that they are able to produce meaningful results, with precision in the range of 60-92% for new mappings. The algorithms were also applied to the potential extension of logical definitions of the OBO and the matching of several plant-related ontologies. This work is a first step towards finding more complex relations between multiple ontologies. The evaluation shows that the results produced are significant and that the algorithms could satisfy specific integration needs.

  4. CHiCP: a web-based tool for the integrative and interactive visualization of promoter capture Hi-C datasets.

    PubMed

    Schofield, E C; Carver, T; Achuthan, P; Freire-Pritchett, P; Spivakov, M; Todd, J A; Burren, O S

    2016-08-15

    Promoter capture Hi-C (PCHi-C) allows the genome-wide interrogation of physical interactions between distal DNA regulatory elements and gene promoters in multiple tissue contexts. Visual integration of the resultant chromosome interaction maps with other sources of genomic annotations can provide insight into underlying regulatory mechanisms. We have developed Capture HiC Plotter (CHiCP), a web-based tool that allows interactive exploration of PCHi-C interaction maps and integration with both public and user-defined genomic datasets. CHiCP is freely accessible from www.chicp.org and supports most major HTML5 compliant web browsers. Full source code and installation instructions are available from http://github.com/D-I-L/django-chicp. Contact: ob219@cam.ac.uk. © The Author 2016. Published by Oxford University Press. All rights reserved.

  5. Silicon photonic integrated circuit swept-source optical coherence tomography receiver with dual polarization, dual balanced, in-phase and quadrature detection.

    PubMed

    Wang, Zhao; Lee, Hsiang-Chieh; Vermeulen, Diedrik; Chen, Long; Nielsen, Torben; Park, Seo Yeon; Ghaemi, Allan; Swanson, Eric; Doerr, Chris; Fujimoto, James

    2015-07-01

    Optical coherence tomography (OCT) is a widely used three-dimensional (3D) optical imaging method with many biomedical and non-medical applications. Miniaturization, cost reduction, and increased functionality of OCT systems will be critical for future emerging clinical applications. We present a silicon photonic integrated circuit swept-source OCT (SS-OCT) coherent receiver with dual polarization, dual balanced, in-phase and quadrature (IQ) detection. We demonstrate multiple functional capabilities of IQ polarization resolved detection including: complex-conjugate suppressed full-range OCT, polarization diversity detection, and polarization-sensitive OCT. To our knowledge, this is the first demonstration of a silicon photonic integrated receiver for OCT. The integrated coherent receiver provides a miniaturized, low-cost solution for SS-OCT, and is also a key step towards a fully integrated high speed SS-OCT system with good performance and multi-functional capabilities. With further performance improvement and cost reduction, photonic integrated technology promises to greatly increase penetration of OCT systems in existing applications and enable new applications.

  6. Silicon photonic integrated circuit swept-source optical coherence tomography receiver with dual polarization, dual balanced, in-phase and quadrature detection

    PubMed Central

    Wang, Zhao; Lee, Hsiang-Chieh; Vermeulen, Diedrik; Chen, Long; Nielsen, Torben; Park, Seo Yeon; Ghaemi, Allan; Swanson, Eric; Doerr, Chris; Fujimoto, James

    2015-01-01

    Optical coherence tomography (OCT) is a widely used three-dimensional (3D) optical imaging method with many biomedical and non-medical applications. Miniaturization, cost reduction, and increased functionality of OCT systems will be critical for future emerging clinical applications. We present a silicon photonic integrated circuit swept-source OCT (SS-OCT) coherent receiver with dual polarization, dual balanced, in-phase and quadrature (IQ) detection. We demonstrate multiple functional capabilities of IQ polarization resolved detection including: complex-conjugate suppressed full-range OCT, polarization diversity detection, and polarization-sensitive OCT. To our knowledge, this is the first demonstration of a silicon photonic integrated receiver for OCT. The integrated coherent receiver provides a miniaturized, low-cost solution for SS-OCT, and is also a key step towards a fully integrated high speed SS-OCT system with good performance and multi-functional capabilities. With further performance improvement and cost reduction, photonic integrated technology promises to greatly increase penetration of OCT systems in existing applications and enable new applications. PMID:26203382

  7. Assessing District Energy Systems Performance Integrated with Multiple Thermal Energy Storages

    NASA Astrophysics Data System (ADS)

    Rezaie, Behnaz

    The goal of this study is to examine various energy resources in district energy (DE) systems and then to develop DE system performance by means of multiple thermal energy storage (TES) applications. This study sheds light on areas not yet investigated in detail. Throughout the research, the major components of the heat plant, the energy suppliers of DE systems, and TES characteristics are examined separately; the integration of various configurations of multiple TESs in the DE system is then analysed. In the first part of the study, various sources of energy are compared, in a consistent manner, financially and environmentally. TES performance is then assessed from various aspects. Then, TESs and DE systems with several sources of energy are integrated and investigated as a heat process centre. The most efficient configurations of multiple TESs integrated with the DE system are identified. Some of the findings of this study are applied to an actual DE system. The outcomes of this study provide insight for researchers and engineers who work in this field, as well as for policy makers and project managers. The accomplishments of the study are original developments in TESs and DE systems. The Enviro-Economic Function is developed to balance the economic and environmental aspects of energy resource technologies in DE systems, and various configurations of multiple TESs, including series, parallel, and general grid, are developed. Related functions are derived for the discharge temperature and energy of the TES, and for the energy and exergy efficiencies of the TES. The instantaneous charging and discharging behavior of the TES is also investigated to obtain functions for the charging temperature, the maximum charging temperature, the charging energy flow, the maximum heat flow capacity, the discharging temperature, the minimum charging temperature, the discharging energy flow, and the performance cycle time of the TES. Expanding beyond the analysis of one TES integrated with the DE system, the characteristics of various configurations of TESs integrated with DE systems are obtained as functions of known properties, together with the energy and exergy balances of the DE system including the TES(s) and the energy and exergy efficiencies of the DE system. The energy, exergy, economics, and CO2 emissions of various energy options for the DE system are investigated in a consistent manner. The sources of energy considered include natural gas, solar energy, ground source heat pumps (GSHP), and municipal solid waste. The economic and environmental aspects, prioritization, and advantages of each technology are reported. A community-based DE system is considered as a case study, for which various existing sizing methods are applied and then compared. The energy sources are natural gas, solar thermal, geothermal, and solid waste. The technologies are sized for each energy option, and the CO2 emissions and economic characteristics of each technology are analysed. The parallel configuration of the TESs delivers more energy to the DE system than other configurations when the stored energy is the same; increasing the number of parallel TESs results in a higher energy supply to the DE system. The efficiency of the set of TESs is also improved by increasing the number of parallel TESs.
    Tax policy, including tax benefits and a carbon tax, is a strong tool that influences the overall cost of the energy supplier's technology for DE systems. The Enviro-Economic Function for the TESs is proposed and integrated with the DE system, suggesting the number of TESs required. Energy and exergy analyses are applied to the charging and discharging stages of an actual TES in the Friedrichshafen DE system, whose performance is analysed using an energy and exergy analysis approach. Furthermore, using the functions developed in the present study, some modifications are suggested for the Friedrichshafen DE system for better performance.

  8. Tomographic reconstruction of ionospheric electron density during the storm of 5-6 August 2011 using multi-source data

    PubMed Central

    Tang, Jun; Yao, Yibin; Zhang, Liang; Kong, Jian

    2015-01-01

    The insufficiency of data is the essential reason for the ill-posed problem in the computerized ionospheric tomography (CIT) technique. Therefore, a method of integrating multi-source data is proposed. Currently, multiple satellite navigation systems and various ionospheric observing instruments provide abundant data which can be employed to reconstruct ionospheric electron density (IED). To improve the vertical resolution of the IED, we investigate IED reconstruction by integrating ground-based GPS data, occultation data from LEO satellites, satellite altimetry data from Jason-1 and Jason-2, and ionosonde data. We compared the CIT results with incoherent scatter radar (ISR) observations and found that the multi-source data fusion was effective and reliable for reconstructing electron density, showing its superiority over CIT with GPS data alone. PMID:26266764
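
    The fusion step can be pictured as stacking each source's observation equations into one linear system over the voxel electron densities and regularizing the inversion. Below is a minimal numerical sketch of that idea (illustrative only; the matrices, damping value, and solver choice are assumptions, not the authors' algorithm):

        import numpy as np

        def reconstruct_ied(systems, n_voxels, damping=1e-2):
            """Solve the stacked linear system from several data sources with
            Tikhonov (damped least-squares) regularization."""
            # Stack geometry matrices and observations from all sources.
            A = np.vstack([A_s for A_s, _ in systems])
            b = np.concatenate([b_s for _, b_s in systems])
            # Minimize ||A x - b||^2 + damping * ||x||^2.
            lhs = A.T @ A + damping * np.eye(n_voxels)
            rhs = A.T @ b
            return np.linalg.solve(lhs, rhs)

        # Toy example: 4 voxels observed by GPS-like slant rays and an
        # ionosonde-like direct sample of one voxel (all numbers illustrative).
        rng = np.random.default_rng(0)
        x_true = np.array([1.0, 2.0, 3.0, 2.5])          # electron densities
        A_gps = rng.random((6, 4))                       # ray path lengths
        A_sonde = np.array([[0.0, 0.0, 1.0, 0.0]])       # direct sounding
        systems = [(A_gps, A_gps @ x_true), (A_sonde, A_sonde @ x_true)]
        print(reconstruct_ied(systems, n_voxels=4))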

  9. Tomographic reconstruction of ionospheric electron density during the storm of 5-6 August 2011 using multi-source data.

    PubMed

    Tang, Jun; Yao, Yibin; Zhang, Liang; Kong, Jian

    2015-08-12

    The insufficiency of data is the essential reason for the ill-posed problem in the computerized ionospheric tomography (CIT) technique. Therefore, a method of integrating multi-source data is proposed. Currently, multiple satellite navigation systems and various ionospheric observing instruments provide abundant data which can be employed to reconstruct ionospheric electron density (IED). To improve the vertical resolution of the IED, we investigate IED reconstruction by integrating ground-based GPS data, occultation data from LEO satellites, satellite altimetry data from Jason-1 and Jason-2, and ionosonde data. We compared the CIT results with incoherent scatter radar (ISR) observations and found that the multi-source data fusion was effective and reliable for reconstructing electron density, showing its superiority over CIT with GPS data alone.

  10. Modeling Photo-multiplier Gain and Regenerating Pulse Height Data for Application Development

    NASA Astrophysics Data System (ADS)

    Aspinall, Michael D.; Jones, Ashley R.

    2018-01-01

    Systems that adopt organic scintillation detector arrays often require a calibration process prior to the intended measurement campaign to correct for significant performance variances between detectors within the array. These differences exist because of low tolerances associated with photo-multiplier tube technology and environmental influences. Differences in detector response can be corrected for by adjusting the supplied photo-multiplier tube voltage to control its gain, and thereby the pulse height spectra obtained from a gamma-only calibration source with a defined photo-peak. Automated methods that analyze these spectra and adjust the photo-multiplier tube bias accordingly are emerging for hardware that integrates acquisition electronics and high-voltage control. However, development of such algorithms requires access to the hardware, multiple detectors, and a calibration source for prolonged periods, all with associated constraints and risks. In this work, we report on a software function and related models developed to rescale and regenerate pulse height data acquired from a single scintillation detector. Such a function could be used to generate significant and varied pulse height data for integration-testing algorithms that automatically response-match multiple detectors using pulse height spectra analysis. Furthermore, a function of this sort removes the dependence on multiple detectors, digital analyzers, and a calibration source. Results show a good match between the real and regenerated pulse height data. The function has also been used successfully to develop auto-calibration algorithms.
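
    The rescaling idea can be sketched by assuming the common power-law dependence of photo-multiplier gain on bias voltage, G proportional to V**k; the exponent, voltages, and spectrum below are illustrative assumptions, not the authors' published model:

        import numpy as np

        def rescale_pulse_heights(pulse_heights, v_old, v_new, k=7.0):
            """Rescale recorded pulse heights to emulate a different PMT bias,
            assuming the power-law gain model G ~ V**k (k is assumed here; in
            practice it depends on dynode count and material)."""
            gain_ratio = (v_new / v_old) ** k
            return pulse_heights * gain_ratio

        # Illustrative data: pulses recorded at 900 V, regenerated at 920 V.
        rng = np.random.default_rng(1)
        recorded = rng.gamma(shape=3.0, scale=50.0, size=10000)  # arbitrary units
        regenerated = rescale_pulse_heights(recorded, v_old=900.0, v_new=920.0)
        hist, edges = np.histogram(regenerated, bins=128, range=(0, 1500))
        print(hist.max())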

  11. Multimodal integration of micro-Doppler sonar and auditory signals for behavior classification with convolutional networks.

    PubMed

    Dura-Bernal, Salvador; Garreau, Guillaume; Georgiou, Julius; Andreou, Andreas G; Denham, Susan L; Wennekers, Thomas

    2013-10-01

    The ability to recognize the behavior of individuals is of great interest in the general field of safety (e.g. building security, crowd control, transport analysis, independent living for the elderly). Here we report a new real-time acoustic system for human action and behavior recognition that integrates passive audio and active micro-Doppler sonar signatures over multiple time scales. The system architecture is based on a six-layer convolutional neural network, trained and evaluated using a dataset of 10 subjects performing seven different behaviors. Probabilistic combination of system output through time for each modality separately yields 94% (passive audio) and 91% (micro-Doppler sonar) correct behavior classification; probabilistic multimodal integration increases classification performance to 98%. This study supports the efficacy of micro-Doppler sonar systems in characterizing human actions, which can then be efficiently classified using ConvNets. It also demonstrates that the integration of multiple sources of acoustic information can significantly improve the system's performance.
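
    The probabilistic combination described above can be sketched as a naive product rule: per-frame class posteriors are combined within a modality by summing log-probabilities over time, and modalities are fused by adding their log-scores. The sketch below is an illustrative simplification; the paper's exact combination rule may differ:

        import numpy as np

        def fuse_frames(frame_probs):
            """Combine per-frame class posteriors of one modality over time by
            summing log-probabilities (product rule under independence)."""
            return np.log(np.clip(frame_probs, 1e-12, 1.0)).sum(axis=0)

        def fuse_modalities(*per_modality_logps):
            """Fuse modalities by adding their log-scores, then pick a class."""
            total = np.sum(per_modality_logps, axis=0)
            return int(np.argmax(total))

        # Toy example: 7 behaviors, 5 frames per modality (softmax-like outputs).
        rng = np.random.default_rng(2)
        audio = rng.dirichlet(np.ones(7), size=5)   # passive audio posteriors
        sonar = rng.dirichlet(np.ones(7), size=5)   # micro-Doppler posteriors
        label = fuse_modalities(fuse_frames(audio), fuse_frames(sonar))
        print("predicted behavior:", label)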

  12. GLOBAL ASSESSMENT OF WASTEWATER IRRIGATION: UNDERSTANDING HEALTH RISKS AND CONTRIBUTIONS TO FOOD SECURITY USING AN ENVIRONMENTAL SYSTEMS APPROACH

    EPA Science Inventory

    This research will quantify the extent of de facto reuse of untreated wastewater at the global scale. Through the integration of multiple existing spatial data sources, this project will produce rigorous analyses assessing the relationship between wastewater irrigation, hea...

  13. Stakeholders' Conceptions of Connecting Learning at Different Sites in Two National VET Systems

    ERIC Educational Resources Information Center

    Sappa, Viviana; Choy, Sarojni; Aprea, Carmela

    2016-01-01

    Learning through active participation and engagement in education and workplace settings is a prerequisite for effective professional competence development through Vocational Education and Training (VET). Equally important is that learning from multiple sites and sources needs to be purposefully connected and integrated to construct meaningful…

  14. AN ACCURACY ASSESSMENT OF MULTIPLE MID-ATLANTIC SUB-PIXEL IMPERVIOUS SURFACE MAPS

    EPA Science Inventory

    Anthropogenic impervious surfaces have an important relationship with non-point source pollution (NPS) in urban watersheds. The amount of impervious surface area in a watershed is a key indicator of landscape change. As a single variable, it serves to integrate a number of conc...

  15. The Characterization of Brain Behavior Relationships via Cognitive Neuroinformatic Approaches

    ERIC Educational Resources Information Center

    Kalar, Donald James, II

    2009-01-01

    The scope, breadth, and volume of data characterizing our current understanding of how the brain functions is growing at an increasingly rapid pace. What is more, theories are becoming increasingly complex and nuanced, integrating knowledge from multiple previously independent sources of scientific inquiry. The research described within this…

  16. A Study of Holocaust Survivors: Implications for Curriculum

    ERIC Educational Resources Information Center

    Greene, Roberta R.

    2010-01-01

    This article presents an approach to human behavior curriculum that requires students to achieve the purpose outlined in the Council on Social Work Education's 2008 Educational Policy and Accreditation Standards to "distinguish, appraise, and integrate multiple sources of knowledge, including research-based knowledge." It emphasizes and allows for…

  17. The Nature of Phoneme Representation in Spoken Word Recognition

    ERIC Educational Resources Information Center

    Gaskell, M. Gareth; Quinlan, Philip T.; Tamminen, Jakke; Cleland, Alexandra A.

    2008-01-01

    Four experiments used the psychological refractory period logic to examine whether integration of multiple sources of phonemic information has a decisional locus. All experiments made use of a dual-task paradigm in which participants made forced-choice color categorization (Task 1) and phoneme categorization (Task 2) decisions at varying stimulus…

  18. shinyGISPA: A web application for characterizing phenotype by gene sets using multiple omics data combinations.

    PubMed

    Dwivedi, Bhakti; Kowalski, Jeanne

    2018-01-01

    While many methods exist for integrating multi-omics data or defining gene sets, no single tool defines gene sets based on the merging of multiple omics data sets. We present shinyGISPA, an open-source application with a user-friendly web-based interface to define genes according to their similarity in several molecular changes that are driving a disease phenotype. This tool was developed to help researchers with limited computer-programming skills use a previously published method, Gene Integrated Set Profile Analysis (GISPA). The GISPA method allows the identification of multiple gene sets that may play a role in the characterization, clinical application, or functional relevance of a disease phenotype. The tool provides an automated workflow that is highly scalable and adaptable to applications that go beyond genomic data merging analysis. It is available at http://shinygispa.winship.emory.edu/shinyGISPA/.

  19. shinyGISPA: A web application for characterizing phenotype by gene sets using multiple omics data combinations

    PubMed Central

    Dwivedi, Bhakti

    2018-01-01

    While many methods exist for integrating multi-omics data or defining gene sets, no single tool defines gene sets based on the merging of multiple omics data sets. We present shinyGISPA, an open-source application with a user-friendly web-based interface to define genes according to their similarity in several molecular changes that are driving a disease phenotype. This tool was developed to help researchers with limited computer-programming skills use a previously published method, Gene Integrated Set Profile Analysis (GISPA). The GISPA method allows the identification of multiple gene sets that may play a role in the characterization, clinical application, or functional relevance of a disease phenotype. The tool provides an automated workflow that is highly scalable and adaptable to applications that go beyond genomic data merging analysis. It is available at http://shinygispa.winship.emory.edu/shinyGISPA/. PMID:29415010

  20. Design of Xen Hybrid Multiple Policy Model

    NASA Astrophysics Data System (ADS)

    Sun, Lei; Lin, Renhao; Zhu, Xianwei

    2017-10-01

    Virtualization technology has attracted more and more attention. As a popular open-source virtualization tool, Xen is used more and more frequently, and XSM, the Xen security model, has also drawn widespread attention. XSM does not establish a security status classification, and it manages virtual machines in a way that makes Dom0 a single all-powerful administrative domain, violating the principle of least privilege. To address these issues, we design a hybrid multiple policy model named SV_HMPMD that organically integrates multiple single security policy models, including DTE, RBAC, and BLP. It can fulfill the confidentiality and integrity requirements of a security model and apply different granularity to different domains. To improve BLP's practicability, the model introduces multi-level security labels. To divide privileges in detail, we combine DTE with RBAC. To prevent oversized privileges, we limit the privileges of Dom0.

  1. Microfabricated injectable drug delivery system

    DOEpatents

    Krulevitch, Peter A.; Wang, Amy W.

    2002-01-01

    A microfabricated, fully integrated drug delivery system capable of secreting controlled dosages of multiple drugs over long periods of time (up to a year). The device includes a long, narrow implant with a sharp leading edge for implantation under the skin of a human in a manner analogous to a sliver. The implant includes: 1) one or more micromachined, integrated, zero-power, high- and constant-pressure-generating osmotic engines; 2) low-power addressable one-shot shape memory polymer (SMP) valves for switching on the osmotic engine and for opening drug outlet ports; 3) microfabricated polymer pistons for isolating the pressure source from drug-filled microchannels; 4) multiple-drug/multiple-dosage capacity; and 5) an anisotropically-etched, atomically-sharp silicon leading edge for penetrating the skin during implantation. The device includes an externally mounted controller for controlling on-board electronics, which activate the SMP microvalves and other elements of the implant.

  2. Cross-contact chain

    NASA Technical Reports Server (NTRS)

    Lieneweg, Udo (Inventor)

    1988-01-01

    A system is provided for use with wafers that include multiple integrated circuits that include two conductive layers in contact at multiple interfaces. Contact chains are formed beside the integrated circuits, each contact chain formed of the same two layers as the circuits, in the form of conductive segments alternating between the upper and lower layers and with the ends of the segments connected in series through interfaces. A current source passes a current through the series-connected segments, by way of a pair of current tabs connected to opposite ends of the series of segments. While the current flows, voltage measurements are taken between each of a plurality of pairs of voltage tabs, the two tabs of each pair connected to opposite ends of an interface that lies along the series-connected segments. A plot of interface conductances on a normal probability chart enables prediction of the yield of good integrated circuits from the wafer.
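
    The measurement arithmetic reduces to G = I/V per interface, after which a normal fit to the conductances gives a yield proxy. A minimal sketch (the spec limit and readings below are invented for illustration):

        import numpy as np
        from scipy import stats

        def interface_conductances(current_a, voltages_v):
            """Conductance of each interface from the forced current and the
            voltage measured across that interface: G = I / V."""
            return current_a / np.asarray(voltages_v)

        # Illustrative measurements: 1 mA forced through the chain.
        v = np.array([1.1e-3, 1.0e-3, 0.9e-3, 1.3e-3, 5.0e-3])  # volts
        g = interface_conductances(1e-3, v)                      # siemens

        # Normal fit to the conductances; yield proxy = P(G > spec).
        mu, sigma = g.mean(), g.std(ddof=1)
        spec = 0.5                                               # siemens, assumed
        print("estimated good-interface fraction:",
              1 - stats.norm.cdf(spec, mu, sigma))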

  3. Cross-contact chain

    NASA Technical Reports Server (NTRS)

    Lieneweg, U. (Inventor)

    1986-01-01

    A system is provided for use with wafers that include multiple integrated circuits that include two conductive layers in contact at multiple interfaces. Contact chains are formed beside the integrated circuits, each contact chain formed of the same two layers as the circuits, in the form of conductive segments alternating between the upper and lower layers and with the ends of the segments connected in series through interfaces. A current source passes a current through the series-connected segments, by way of a pair of current tabs connected to opposite ends of the series of segments. While the current flows, voltage measurements are taken between each of a plurality of pairs of voltage tabs, the two tabs of each pair connected to opposite ends of an interface that lies along the series-connected segments. A plot of interface conductances on a normal probability chart enables prediction of the yield of good integrated circuits from the wafer.

  4. Integrated photoacoustic, ultrasound and fluorescence platform for diagnostic medical imaging-proof of concept study with a tissue mimicking phantom.

    PubMed

    James, Joseph; Murukeshan, Vadakke Matham; Woh, Lye Sun

    2014-07-01

    The structural and molecular heterogeneities of biological tissues demand the interrogation of samples with multiple energy sources and visualization capabilities at varying spatial resolution and depth scales to obtain complementary diagnostic information. A novel multi-modal imaging approach that uses optical and acoustic energies to perform photoacoustic, ultrasound and fluorescence imaging at multiple resolution scales from the tissue surface and depth is proposed in this paper. The system comprises two distinct forms of hardware-level integration so as to provide an integrated imaging system under a single instrumentation set-up. The experimental studies show that the system is capable of mapping high-resolution fluorescence signatures from the surface, and optical absorption and acoustic heterogeneities along the depth (>2 cm) of the tissue at multi-scale resolution (<1 µm to <0.5 mm).

  5. Refining the aggregate exposure pathway.

    PubMed

    Tan, Yu-Mei; Leonard, Jeremy A; Edwards, Stephen; Teeguarden, Justin; Egeghy, Peter

    2018-03-01

    Advancements in measurement technologies and modeling capabilities continue to result in an abundance of exposure information, adding to that currently in existence. However, fragmentation within the exposure science community acts as an obstacle to realizing the vision set forth in the National Research Council's report on Exposure Science in the 21st Century: to consider exposures from source to dose, on multiple levels of integration, and to multiple stressors. The concept of an Aggregate Exposure Pathway (AEP) was proposed as a framework for organizing and integrating diverse exposure information that exists across numerous repositories and among multiple scientific fields. A workshop held in May 2016 followed the introduction of the AEP concept, allowing members of the exposure science community to provide extensive evaluation and feedback regarding the framework's structure, key components, and applications. The current work briefly introduces topics discussed at the workshop and attempts to address key challenges involved in refining this framework. The resulting evolution of the AEP framework's features facilitates the acquisition, integration, organization, and transparent application and communication of exposure knowledge in a manner that is independent of its ultimate use, thereby enabling reuse of such information in many applications.

  6. Bi-level Multi-Source Learning for Heterogeneous Block-wise Missing Data

    PubMed Central

    Xiang, Shuo; Yuan, Lei; Fan, Wei; Wang, Yalin; Thompson, Paul M.; Ye, Jieping

    2013-01-01

    Bio-imaging technologies allow scientists to collect large amounts of high-dimensional data from multiple heterogeneous sources for many biomedical applications. In the study of Alzheimer's Disease (AD), neuroimaging data, gene/protein expression data, etc., are often analyzed together to improve predictive power. Joint learning from multiple complementary data sources is advantageous, but feature-pruning and data source selection are critical to learn interpretable models from high-dimensional data. Often, the data collected has block-wise missing entries. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), most subjects have MRI and genetic information, but only half have cerebrospinal fluid (CSF) measures, a different half has FDG-PET; only some have proteomic data. Here we propose how to effectively integrate information from multiple heterogeneous data sources when data is block-wise missing. We present a unified “bi-level” learning model for complete multi-source data, and extend it to incomplete data. Our major contributions are: (1) our proposed models unify feature-level and source-level analysis, including several existing feature learning approaches as special cases; (2) the model for incomplete data avoids imputing missing data and offers superior performance; it generalizes to other applications with block-wise missing data sources; (3) we present efficient optimization algorithms for modeling complete and incomplete data. We comprehensively evaluate the proposed models including all ADNI subjects with at least one of four data types at baseline: MRI, FDG-PET, CSF and proteomics. Our proposed models compare favorably with existing approaches. PMID:23988272

  7. Bi-level multi-source learning for heterogeneous block-wise missing data.

    PubMed

    Xiang, Shuo; Yuan, Lei; Fan, Wei; Wang, Yalin; Thompson, Paul M; Ye, Jieping

    2014-11-15

    Bio-imaging technologies allow scientists to collect large amounts of high-dimensional data from multiple heterogeneous sources for many biomedical applications. In the study of Alzheimer's Disease (AD), neuroimaging data, gene/protein expression data, etc., are often analyzed together to improve predictive power. Joint learning from multiple complementary data sources is advantageous, but feature-pruning and data source selection are critical to learn interpretable models from high-dimensional data. Often, the data collected has block-wise missing entries. In the Alzheimer's Disease Neuroimaging Initiative (ADNI), most subjects have MRI and genetic information, but only half have cerebrospinal fluid (CSF) measures, a different half has FDG-PET; only some have proteomic data. Here we propose how to effectively integrate information from multiple heterogeneous data sources when data is block-wise missing. We present a unified "bi-level" learning model for complete multi-source data, and extend it to incomplete data. Our major contributions are: (1) our proposed models unify feature-level and source-level analysis, including several existing feature learning approaches as special cases; (2) the model for incomplete data avoids imputing missing data and offers superior performance; it generalizes to other applications with block-wise missing data sources; (3) we present efficient optimization algorithms for modeling complete and incomplete data. We comprehensively evaluate the proposed models including all ADNI subjects with at least one of four data types at baseline: MRI, FDG-PET, CSF and proteomics. Our proposed models compare favorably with existing approaches. © 2013 Elsevier Inc. All rights reserved.

  8. Exploring the Meaning and Use of Science Content Integration

    NASA Astrophysics Data System (ADS)

    Garner, Jason L.

    Science content integration, or the simultaneous teaching of science with other subjects during learning activities, has been explored by multiple studies. However, due to a lack of consensus on its definition, it was difficult for educators in a local school district to discuss and evaluate the effectiveness of this instructional technique. This qualitative collective case study, based on a constructivist theoretical foundation, centered on the questions of how teachers defined and used science content integration and their perceptions of impediments to its use. Participants were five teachers in a suburban elementary school. The sources of data for this study were interviews, audio recordings of lessons, and teacher documents in the form of lesson plans. Data analysis was conducted through multiple coding procedures, allowing themes to emerge. Data analysis showed that participants' beliefs and practices differed according to the age levels and developmental needs of their students. Implications for positive social change include building from this study to provide content-integration-based professional development, common planning time, and suitable materials to improve teachers' capacity to integrate science content into instruction.

  9. Recent Advances in Registration, Integration and Fusion of Remotely Sensed Data: Redundant Representations and Frames

    NASA Technical Reports Server (NTRS)

    Czaja, Wojciech; Le Moigne-Stewart, Jacqueline

    2014-01-01

    In recent years, sophisticated mathematical techniques have been successfully applied to the field of remote sensing to produce significant advances in applications such as registration, integration and fusion of remotely sensed data. Registration, integration and fusion of multiple source imagery are the most important issues when dealing with Earth Science remote sensing data where information from multiple sensors, exhibiting various resolutions, must be integrated. Issues ranging from different sensor geometries, different spectral responses, differing illumination conditions, different seasons, and various amounts of noise need to be dealt with when designing an image registration, integration or fusion method. This tutorial will first define the problems and challenges associated with these applications and then will review some mathematical techniques that have been successfully utilized to solve them. In particular, we will cover topics on geometric multiscale representations, redundant representations and fusion frames, graph operators, diffusion wavelets, as well as spatial-spectral and operator-based data fusion. All the algorithms will be illustrated using remotely sensed data, with an emphasis on current and operational instruments.

  10. Two-particle Bose–Einstein correlations in pp collisions at √s=0.9 and 7 TeV measured with the ATLAS detector

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2015-10-01

    The paper presents studies of Bose–Einstein Correlations (BEC) for pairs of like-sign charged particles measured in the kinematic range pT > 100 MeV and |η| < 2.5 in proton collisions at centre-of-mass energies of 0.9 and 7 TeV with the ATLAS detector at the CERN Large Hadron Collider. The integrated luminosities are approximately 7 μb⁻¹, 190 μb⁻¹ and 12.4 nb⁻¹ for the 0.9 TeV, 7 TeV minimum-bias and 7 TeV high-multiplicity data samples, respectively. The multiplicity dependence of the BEC parameters characterizing the correlation strength and the correlation source size is investigated for charged-particle multiplicities of up to 240. A saturation effect in the multiplicity dependence of the correlation source size parameter is observed using the high-multiplicity 7 TeV data sample. Finally, the dependence of the BEC parameters on the average transverse momentum of the particle pair is also investigated.

  11. Two-particle Bose–Einstein correlations in pp collisions at √s=0.9 and 7 TeV measured with the ATLAS detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aad, G.; Abbott, B.; Abdallah, J.

    The paper presents studies of Bose–Einstein Correlations (BEC) for pairs of like-sign charged particles measured in the kinematic range pT > 100 MeV and |η| < 2.5 in proton collisions at centre-of-mass energies of 0.9 and 7 TeV with the ATLAS detector at the CERN Large Hadron Collider. The integrated luminosities are approximately 7 μb⁻¹, 190 μb⁻¹ and 12.4 nb⁻¹ for the 0.9 TeV, 7 TeV minimum-bias and 7 TeV high-multiplicity data samples, respectively. The multiplicity dependence of the BEC parameters characterizing the correlation strength and the correlation source size is investigated for charged-particle multiplicities of up to 240. A saturation effect in the multiplicity dependence of the correlation source size parameter is observed using the high-multiplicity 7 TeV data sample. Finally, the dependence of the BEC parameters on the average transverse momentum of the particle pair is also investigated.

  12. Estimation of the sensitive volume for gravitational-wave source populations using weighted Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Tiwari, Vaibhav

    2018-07-01

    The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search to a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. Subsequently, the injections, which are simulated gravitational waveforms, are searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. The sensitive volume is estimated, by Monte Carlo (MC) integration, from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR, and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analysis broadens in terms of the methodologies and source properties considered, due to an increase in the number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the output probabilities, determined by the population model and standard cosmology, to the injection probability, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate the sensitive volume using single-detector sensitivity, the method is accurate within statistical errors, comes at no added cost, and requires minimal computational resources.
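
    The weighted estimator described above amounts to V ≈ V_astro × (1/N) Σ_found p_pop(θ)/p_inj(θ). A toy one-parameter sketch follows (the detection model, distributions, and normalization are illustrative assumptions, not the paper's setup):

        import numpy as np

        def sensitive_volume(found, theta, p_pop, p_inj, v_astro, n_total):
            """Weighted Monte Carlo estimate of the population-averaged sensitive
            volume: reweight generic injections to a target population model."""
            w = p_pop(theta[found]) / p_inj(theta[found])
            return v_astro * w.sum() / n_total

        # Toy setup: masses drawn uniformly on [5, 50] Msun (the injection
        # distribution); detection more likely for heavier systems; the target
        # population falls off as m**-2.
        rng = np.random.default_rng(3)
        n = 100000
        m = rng.uniform(5.0, 50.0, size=n)
        found = rng.random(n) < np.clip(m / 50.0, 0, 1)      # toy detection prob.

        p_inj = lambda x: np.full_like(x, 1.0 / 45.0)        # uniform density
        norm = 1.0 / (1.0 / 5.0 - 1.0 / 50.0)                # normalizes m**-2
        p_pop = lambda x: norm * x ** -2.0

        print(sensitive_volume(found, m, p_pop, p_inj, v_astro=1.0, n_total=n))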

  13. GIS-Based Route Finding Using Ant Colony Optimization and Urban Traffic Data from Different Sources

    NASA Astrophysics Data System (ADS)

    Davoodi, M.; Mesgari, M. S.

    2015-12-01

    Nowadays traffic data is obtained from multiple sources including GPS, Video Vehicle Detectors (VVD), Automatic Number Plate Recognition (ANPR), Floating Car Data (FCD), VANETs, etc. All such data can be used for route finding. This paper proposes a model for finding the optimum route based on the integration of traffic data from different sources. Ant Colony Optimization is applied in this paper because the concept of this method, the movement of ants in a network, is analogous to the movement of cars in an urban road network. The results indicate that this model is capable of incorporating data from different sources, which may even be inconsistent.
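
    A compact sketch of the approach on a toy road graph, where each edge cost is fused by averaging travel-time estimates from several sources before the ants search the network (the graph, fusion rule, and ACO parameters are all illustrative):

        import random

        # Toy road network: edge -> travel-time estimates from several sources
        # (e.g., GPS probes, video detectors, floating car data), fused here by
        # simple averaging.
        raw = {('A', 'B'): [4, 5], ('A', 'C'): [2, 3], ('C', 'B'): [1, 2],
               ('B', 'D'): [5, 4], ('C', 'D'): [7, 8], ('B', 'E'): [3, 3],
               ('D', 'E'): [1, 1]}
        cost = {e: sum(v) / len(v) for e, v in raw.items()}
        nbrs = {}
        for (u, w) in cost:
            nbrs.setdefault(u, []).append(w)

        def ant_walk(src, dst, tau, alpha=1.0, beta=2.0):
            """One ant builds a path, preferring high pheromone and low cost."""
            path, node, seen = [], src, {src}
            while node != dst:
                options = [w for w in nbrs.get(node, []) if w not in seen]
                if not options:
                    return None  # dead end
                weights = [tau[(node, w)] ** alpha * (1.0 / cost[(node, w)]) ** beta
                           for w in options]
                nxt = random.choices(options, weights=weights)[0]
                path.append((node, nxt)); seen.add(nxt); node = nxt
            return path

        def aco(src, dst, n_ants=20, n_iters=30, rho=0.3, q=10.0):
            tau = {e: 1.0 for e in cost}
            best, best_len = None, float('inf')
            for _ in range(n_iters):
                paths = [p for p in (ant_walk(src, dst, tau)
                                     for _ in range(n_ants)) if p]
                for e in tau:                        # pheromone evaporation
                    tau[e] *= (1 - rho)
                for p in paths:                      # pheromone deposit
                    length = sum(cost[e] for e in p)
                    for e in p:
                        tau[e] += q / length
                    if length < best_len:
                        best, best_len = p, length
            return best, best_len

        random.seed(4)
        print(aco('A', 'E'))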

  14. Teaming up to crack innovation and enterprise integration.

    PubMed

    Cash, James I; Earl, Michael J; Morison, Robert

    2008-11-01

    In the continuing quest for business growth, many CEOs are turning to their CIOs and IT organizations because technology is essential to two compelling sources of growth: innovation and integration. Innovation, of course, is doing new things that customers ultimately appreciate and value--not only developing new generations of products, services, channels, and customer experience but also conceiving new business processes and models. Integration is making the multiple units, functions, and sites of large organizations work together to increase capacity, improve performance, lower cost structure, and discover opportunities for improvement that don't appear until you look across functions.

  15. High-speed photodiodes for InP-based photonic integrated circuits.

    PubMed

    Rouvalis, E; Chtioui, M; Tran, M; Lelarge, F; van Dijk, F; Fice, M J; Renaud, C C; Carpintero, G; Seeds, A J

    2012-04-09

    We demonstrate the feasibility of monolithic integration of evanescently coupled Uni-Traveling Carrier Photodiodes (UTC-PDs) having a bandwidth exceeding 100 GHz with Multimode Interference (MMI) couplers. This platform is suitable for active-passive, butt-joint monolithic integration with various Multiple Quantum Well (MQW) devices for narrow linewidth millimeter-wave photomixing sources. The fabricated devices achieved a high 3-dB bandwidth of up to 110 GHz and a generated output power of more than 0 dBm (1 mW) at 120 GHz with a flat frequency response over the microwave F-band (90-140 GHz).

  16. Data Foundry: Data Warehousing and Integration for Scientific Data Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musick, R.; Critchlow, T.; Ganesh, M.

    2000-02-29

    Data warehousing is an approach for managing data from multiple sources by representing them with a single, coherent point of view. Commercial data warehousing products have been produced by companies such as Red Brick, IBM, Brio, Andyne, Ardent, NCR, Information Advantage, Informatica, and others. Other companies have chosen to develop their own in-house data warehousing solution using relational databases, such as those sold by Oracle, IBM, Informix and Sybase. The typical approaches include federated systems and mediated data warehouses, each of which, to some extent, makes use of a series of source-specific wrapper and mediator layers to integrate the data into a consistent format which is then presented to users as a single virtual data store. These approaches are successful when applied to traditional business data because the data format used by the individual data sources tends to be rather static. Therefore, once a data source has been integrated into a data warehouse, there is relatively little work required to maintain that connection. However, that is not the case for all data sources. Data sources from scientific domains tend to regularly change their data model, format and interface. This is problematic because each change requires the warehouse administrator to update the wrapper, mediator, and warehouse interfaces to properly read, interpret, and represent the modified data source. Furthermore, the data that scientists require to carry out research is continuously changing as their understanding of a research question develops, or as their research objectives evolve. The difficulty and cost of these updates effectively limits the number of sources that can be integrated into a single data warehouse, or makes an approach based on warehousing too expensive to consider.

  17. Spatio-Temporal Data Model for Integrating Evolving Nation-Level Datasets

    NASA Astrophysics Data System (ADS)

    Sorokine, A.; Stewart, R. N.

    2017-10-01

    The ability to easily combine data from diverse sources in a single analytical workflow is one of the greatest promises of Big Data technologies. However, such integration is often challenging, as datasets originate from different vendors, governments, and research communities, which results in multiple incompatibilities of data representations, formats, and semantics. Semantic differences are the hardest to handle: different communities often use different attribute definitions and associate records with different sets of evolving geographic entities. Analysis of global socioeconomic variables across multiple datasets over prolonged periods is often complicated by differences in how the boundaries and histories of countries or other geographic entities are represented. Here we propose an event-based data model for depicting and tracking the histories of evolving geographic units (countries, provinces, etc.) and their representations in disparate data. The model addresses the semantic challenge of preserving the identity of geographic entities over time by defining criteria for an entity's existence, a set of events that may affect its existence, and rules for mapping between different representations (datasets). The proposed model is used for maintaining an evolving compound database of global socioeconomic and environmental data harvested from multiple sources. A practical implementation of our model is demonstrated using a PostgreSQL object-relational database with temporal, geospatial, and NoSQL database extensions.
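
    The event-based idea can be sketched with entities whose existence is bounded by creation and dissolution events, plus a resolver that maps a (name, date) record from any dataset to the entity valid on that date. A minimal illustration (the structures and example history are simplified assumptions, not the authors' schema):

        from dataclasses import dataclass, field
        from datetime import date
        from typing import Optional

        @dataclass
        class GeoEntity:
            """A geographic unit whose identity is bounded by events."""
            name: str
            start: date                       # creation event
            end: Optional[date] = None        # dissolution event, if any
            aliases: set = field(default_factory=set)

            def exists_on(self, d: date) -> bool:
                return self.start <= d and (self.end is None or d < self.end)

        def resolve(entities, name, d):
            """Map a (name, date) record from some dataset to the entity that
            was valid on that date, matching names or dataset-specific aliases."""
            for e in entities:
                if (name == e.name or name in e.aliases) and e.exists_on(d):
                    return e
            return None

        # Illustrative history: one dissolution event creating two successors.
        entities = [
            GeoEntity("Czechoslovakia", date(1918, 10, 28), date(1993, 1, 1)),
            GeoEntity("Czech Republic", date(1993, 1, 1), aliases={"Czechia"}),
            GeoEntity("Slovakia", date(1993, 1, 1)),
        ]
        print(resolve(entities, "Czechoslovakia", date(1990, 6, 1)))
        print(resolve(entities, "Czechia", date(2020, 6, 1)))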

  18. An Integrated Nursing Management Information System: From Concept to Reality

    PubMed Central

    Pinkley, Connie L.; Sommer, Patricia K.

    1988-01-01

    This paper addresses the transition from the conceptualization of a Nursing Management Information System (NMIS), integrated and interdependent with the Hospital Information System (HIS), to its realization. Concepts of input, throughput, and output are presented to illustrate developmental strategies used to achieve nursing information products. Essential processing capabilities include: 1) the ability to interact with multiple data sources; 2) database management, statistical, and graphics software packages; 3) online and batch reporting; and 4) interactive data analysis. Challenges encountered in system construction are examined.

  19. Auditing the multiply-related concepts within the UMLS.

    PubMed

    Mougin, Fleur; Grabar, Natalia

    2014-10-01

    This work focuses on multiply-related Unified Medical Language System (UMLS) concepts, that is, concepts associated through multiple relations. The relations involved in such situations are audited to determine whether they are provided by source vocabularies or result from the integration of these vocabularies within the UMLS. We study the compatibility of the multiple relations which associate the concepts under investigation and try to explain why they co-occur. Towards this end, we analyze the relations both at the concept and term levels. In addition, we randomly select 288 concepts associated through contradictory relations and manually analyze them. At the UMLS scale, only 0.7% of combinations of relations are contradictory, while homogeneous combinations are observed in one-third of situations. At the scale of source vocabularies, one-third do not contain more than one relation between the concepts under investigation. Among the remaining source vocabularies, seven mainly present multiple non-homogeneous relations between terms. Analysis at the term level also shows that only in a quarter of cases are the source vocabularies responsible for the presence of multiply-related concepts in the UMLS. These results are available at: http://www.isped.u-bordeaux2.fr/ArticleJAMIA/results_multiply_related_concepts.aspx. Manual analysis was useful for explaining the conceptualization differences in relations between terms across source vocabularies. The exploitation of source relations was helpful for understanding why some source vocabularies describe multiple relations between a given pair of terms.

  20. MPHASYS: a mouse phenotype analysis system

    PubMed Central

    Calder, R Brent; Beems, Rudolf B; van Steeg, Harry; Mian, I Saira; Lohman, Paul HM; Vijg, Jan

    2007-01-01

    Background Systematic, high-throughput studies of mouse phenotypes have been hampered by the inability to analyze individual animal data from a multitude of sources in an integrated manner. Studies generally make comparisons at the level of genotype or treatment thereby excluding associations that may be subtle or involve compound phenotypes. Additionally, the lack of integrated, standardized ontologies and methodologies for data exchange has inhibited scientific collaboration and discovery. Results Here we introduce a Mouse Phenotype Analysis System (MPHASYS), a platform for integrating data generated by studies of mouse models of human biology and disease such as aging and cancer. This computational platform is designed to provide a standardized methodology for working with animal data; a framework for data entry, analysis and sharing; and ontologies and methodologies for ensuring accurate data capture. We describe the tools that currently comprise MPHASYS, primarily ones related to mouse pathology, and outline its use in a study of individual animal-specific patterns of multiple pathology in mice harboring a specific germline mutation in the DNA repair and transcription-specific gene Xpd. Conclusion MPHASYS is a system for analyzing multiple data types from individual animals. It provides a framework for developing data analysis applications, and tools for collecting and distributing high-quality data. The software is platform independent and freely available under an open-source license [1]. PMID:17553167

  1. The Digital Ageing Atlas: integrating the diversity of age-related changes into a unified resource.

    PubMed

    Craig, Thomas; Smelick, Chris; Tacutu, Robi; Wuttke, Daniel; Wood, Shona H; Stanley, Henry; Janssens, Georges; Savitskaya, Ekaterina; Moskalev, Alexey; Arking, Robert; de Magalhães, João Pedro

    2015-01-01

    Multiple studies characterizing the human ageing phenotype have been conducted for decades. However, there is no centralized resource in which data on multiple age-related changes are collated. Currently, researchers must consult several sources, including primary publications, in order to obtain age-related data at various levels. To address this and facilitate integrative, system-level studies of ageing, we developed the Digital Ageing Atlas (DAA). The DAA is a one-stop collection of human age-related data covering different biological levels (molecular, cellular, physiological, psychological and pathological) that is freely available online (http://ageing-map.org/). Each of the >3000 age-related changes is associated with a specific tissue and has its own page displaying a variety of information, including at least one reference. Age-related changes can also be linked to each other in hierarchical trees to represent different types of relationships. In addition, we developed an intuitive and user-friendly interface that allows searching, browsing and retrieving information in an integrated and interactive fashion. Overall, the DAA offers a new approach to systemizing ageing resources, providing a manually-curated and readily accessible source of age-related changes. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. PopHR: a knowledge-based platform to support integration, analysis, and visualization of population health data.

    PubMed

    Shaban-Nejad, Arash; Lavigne, Maxime; Okhmatovskaia, Anya; Buckeridge, David L

    2017-01-01

    Population health decision makers must consider complex relationships between multiple concepts measured with differential accuracy from heterogeneous data sources. Population health information systems are currently limited in their ability to integrate data and present a coherent portrait of population health. Consequently, these systems can provide only basic support for decision makers. The Population Health Record (PopHR) is a semantic web application that automates the integration and extraction of massive amounts of heterogeneous data from multiple distributed sources (e.g., administrative data, clinical records, and survey responses) to support the measurement and monitoring of population health and health system performance for a defined population. The design of the PopHR draws on the theories of the determinants of health and evidence-based public health to harmonize and explicitly link information about a population with evidence about the epidemiology and control of chronic diseases. Organizing information in this manner and linking it explicitly to evidence is expected to improve decision making related to the planning, implementation, and evaluation of population health and health system interventions. In this paper, we describe the PopHR platform and discuss the architecture, design, key modules, and its implementation and use. © 2016 New York Academy of Sciences.

  3. A reproducible approach to high-throughput biological data acquisition and integration

    PubMed Central

    Rahnavard, Gholamali; Waldron, Levi; McIver, Lauren; Shafquat, Afrah; Franzosa, Eric A.; Miropolsky, Larissa; Sweeney, Christopher

    2015-01-01

    Modern biological research requires rapid, complex, and reproducible integration of multiple experimental results generated both internally and externally (e.g., from public repositories). Although large systematic meta-analyses are among the most effective approaches both for clinical biomarker discovery and for computational inference of biomolecular mechanisms, identifying, acquiring, and integrating relevant experimental results from multiple sources for a given study can be time-consuming and error-prone. To enable efficient and reproducible integration of diverse experimental results, we developed a novel approach for standardized acquisition and analysis of high-throughput and heterogeneous biological data. This allowed, first, novel biomolecular network reconstruction in human prostate cancer, which correctly recovered and extended the NFκB signaling pathway. Next, we investigated host-microbiome interactions. In less than an hour of analysis time, the system retrieved data and integrated six germ-free murine intestinal gene expression datasets to identify the genes most influenced by the gut microbiota, which comprised a set of immune-response and carbohydrate metabolism processes. Finally, we constructed integrated functional interaction networks to compare connectivity of peptide secretion pathways in the model organisms Escherichia coli, Bacillus subtilis, and Pseudomonas aeruginosa. PMID:26157642

  4. Space Shuttle Propulsion Systems Plume Modeling and Simulation for the Lift-Off Computational Fluid Dynamics Model

    NASA Technical Reports Server (NTRS)

    Strutzenberg, L. L.; Dougherty, N. S.; Liever, P. A.; West, J. S.; Smith, S. D.

    2007-01-01

    This paper details advances being made in the development of Reynolds-Averaged Navier-Stokes numerical simulation tools, models, and methods for the integrated Space Shuttle Vehicle at launch. The conceptual model and modeling approach described include the development of multiple computational models to appropriately analyze the potential debris transport for critical debris sources at Lift-Off. The conceptual model described herein involves the integration of propulsion analysis for the nozzle/plume flow with the overall 3D vehicle flowfield at Lift-Off. Debris Transport Analyses are being performed using the Shuttle Lift-Off models to assess the risk to the vehicle from Lift-Off debris and to appropriately prioritize mitigation of potential debris sources to continue reducing vehicle risk. These integrated simulations are being used to evaluate plume-induced debris environments where the multi-plume interactions with the launch facility can potentially accelerate debris particles toward the vehicle.

  5. Integrated system for automated financial document processing

    NASA Astrophysics Data System (ADS)

    Hassanein, Khaled S.; Wesolkowski, Slawo; Higgins, Ray; Crabtree, Ralph; Peng, Antai

    1997-02-01

    A system was developed that integrates intelligent document analysis with multiple character/numeral recognition engines in order to achieve high accuracy automated financial document processing. In this system, images are accepted in both their grayscale and binary formats. A document analysis module starts by extracting essential features from the document to help identify its type (e.g. personal check, business check, etc.). These features are also utilized to conduct a full analysis of the image to determine the location of interesting zones such as the courtesy amount and the legal amount. These fields are then made available to several recognition knowledge sources such as courtesy amount recognition engines and legal amount recognition engines through a blackboard architecture. This architecture allows all the available knowledge sources to contribute incrementally and opportunistically to the solution of the given recognition query. Performance results on a test set of machine printed business checks using the integrated system are also reported.
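
    A blackboard of this kind can be sketched as a shared store to which knowledge sources post hypotheses with confidences, with a controller reconciling them. The sketch below is an illustrative toy, not the described system's implementation:

        # A minimal blackboard: knowledge sources read what is on the board and
        # post hypotheses with confidences; a controller reconciles them.
        blackboard = {"zones": {"courtesy": "125.00",
                                "legal": "one hundred twenty five"},
                      "hypotheses": []}

        def courtesy_recognizer(bb):
            text = bb["zones"]["courtesy"]
            bb["hypotheses"].append(("amount", float(text), 0.90))  # engine A

        def legal_recognizer(bb):
            words_to_value = {"one hundred twenty five": 125.0}     # toy lexicon
            value = words_to_value.get(bb["zones"]["legal"])
            if value is not None:
                bb["hypotheses"].append(("amount", value, 0.75))    # engine B

        def controller(bb):
            """Accept the amount when independent sources agree; otherwise keep
            the highest-confidence hypothesis for manual review."""
            amounts = [(v, c) for kind, v, c in bb["hypotheses"] if kind == "amount"]
            values = {v for v, _ in amounts}
            if len(values) == 1:
                return values.pop(), "agreed"
            return max(amounts, key=lambda vc: vc[1])[0], "needs review"

        for source in (courtesy_recognizer, legal_recognizer):
            source(blackboard)            # opportunistic, incremental posts
        print(controller(blackboard))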

  6. PICKLE 2.0: A human protein-protein interaction meta-database employing data integration via genetic information ontology

    PubMed Central

    Gioutlakis, Aris; Klapa, Maria I.

    2017-01-01

    It has been acknowledged that source databases recording experimentally supported human protein-protein interactions (PPIs) exhibit limited overlap. Thus, the reconstruction of a comprehensive PPI network requires appropriate integration of multiple heterogeneous primary datasets, presenting the PPIs at various genetic reference levels. Existing PPI meta-databases perform integration via normalization; namely, PPIs are merged after being converted to a certain target level. Hence, the node set of the integrated network depends each time on the number and type of the combined datasets. Moreover, the irreversible a priori normalization process hinders the identification of normalization artifacts in the integrated network, which originate from the nonlinearity characterizing the genetic information flow. PICKLE (Protein InteraCtion KnowLedgebasE) 2.0 implements a new architecture for this recently introduced human PPI meta-database. Its main novel feature over the existing meta-databases is its approach to primary PPI dataset integration via genetic information ontology. Building upon the PICKLE principles of using the reviewed human complete proteome (RHCP) of UniProtKB/Swiss-Prot as the reference protein interactor set, and filtering out protein interactions with a low probability of being direct based on the available evidence, PICKLE 2.0 first assembles the RHCP genetic information ontology network by connecting the corresponding genes, nucleotide sequences (mRNAs) and proteins (UniProt entries), and then integrates PPI datasets by superimposing them on the ontology network without any a priori transformations. Importantly, this process allows the resulting heterogeneous integrated network to be reversibly normalized to any level of genetic reference without loss of the original information, the latter being used for identification of normalization biases, and enables the appraisal of potential false positive interactions through PPI source database cross-checking. The PICKLE web-based interface (www.pickle.gr) allows for the simultaneous query of multiple entities and provides integrated human PPI networks at either the protein (UniProt) or the gene level, at three PPI filtering modes. PMID:29023571

  7. III–V quantum light source and cavity-QED on Silicon

    PubMed Central

    Luxmoore, I. J.; Toro, R.; Pozo-Zamudio, O. Del; Wasley, N. A.; Chekhovich, E. A.; Sanchez, A. M.; Beanland, R.; Fox, A. M.; Skolnick, M. S.; Liu, H. Y.; Tartakovskii, A. I.

    2013-01-01

    Non-classical light sources offer a myriad of possibilities in both fundamental science and commercial applications. Single photons are the most robust carriers of quantum information and can be exploited for linear optics quantum information processing. Scale-up requires miniaturisation of the waveguide circuit and multiple single photon sources. Silicon photonics, driven by the incentive of optical interconnects, is a highly promising platform for the passive optical components, but integrated light sources are limited by silicon's indirect band-gap. III–V semiconductor quantum-dots, on the other hand, are proven quantum emitters. Here we demonstrate single-photon emission from quantum-dots coupled to photonic crystal nanocavities fabricated from III–V material grown directly on silicon substrates. The high quality of the III–V material and photonic structures is emphasized by observation of the strong-coupling regime. This work opens up the advantages of silicon photonics to the integration and scale-up of solid-state quantum optical systems. PMID:23393621

  8. Efficient algorithms for fast integration on large data sets from multiple sources.

    PubMed

    Mi, Tian; Rajasekaran, Sanguthevar; Aseltine, Robert

    2012-06-28

    Recent large-scale deployments of health information technology have created opportunities for the integration of patient medical records with disparate public health, human service, and educational databases to provide comprehensive information related to health and development. Data integration techniques, which identify records belonging to the same individual that reside in multiple data sets, are essential to these efforts. Several algorithms have been proposed in the literature that are adept at integrating records from two different datasets. Our algorithms are aimed at efficiently integrating multiple (in particular, more than two) datasets. Hierarchical clustering based solutions are used to integrate multiple (in particular, more than two) datasets. Edit distance is used as the basic distance calculation, while distance calculation for common input errors is also studied. Several techniques have been applied to improve the algorithms in terms of both time and space: 1) Partial Construction of the Dendrogram (PCD), which ignores the level above the threshold; 2) Ignoring the Dendrogram Structure (IDS); 3) Faster Computation of the Edit Distance (FCED), which compares the distance against the threshold using upper bounds on the edit distance; and 4) a pre-processing blocking phase that limits dynamic computation within each block. We have experimentally validated our algorithms on large simulated as well as real data. Accuracy and completeness are defined stringently to show the performance of our algorithms. In addition, we employ a four-category analysis. Comparison with FEBRL shows the robustness of our approach. In the experiments we conducted, the accuracy we observed exceeded 90% for the simulated data in most cases. Accuracies of 97.7% and 98.1% were achieved for the constant and proportional threshold, respectively, on a real dataset of 1,083,878 records.
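
    The FCED and blocking ideas can be sketched as follows: the length difference between two strings lower-bounds their edit distance, so pairs exceeding the threshold skip the dynamic-programming computation entirely, and a blocking key limits which pairs are compared at all. A minimal illustration (the blocking key and threshold are assumptions, not the authors' exact scheme):

        from collections import defaultdict

        def edit_distance(a, b):
            """Standard dynamic-programming Levenshtein distance."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                                   prev[j - 1] + (ca != cb)))
                prev = cur
            return prev[-1]

        def within_threshold(a, b, t):
            """Cheap lower bound first: |len(a)-len(b)| <= edit distance, so
            pairs that fail it skip the O(len(a)*len(b)) computation."""
            if abs(len(a) - len(b)) > t:
                return False
            return edit_distance(a, b) <= t

        def block_and_link(records, t=1):
            """Blocking phase: only compare records sharing a block key (here,
            the first character), then link pairs within the threshold."""
            blocks = defaultdict(list)
            for r in records:
                blocks[r[:1].lower()].append(r)
            links = []
            for block in blocks.values():
                for i in range(len(block)):
                    for j in range(i + 1, len(block)):
                        if within_threshold(block[i], block[j], t):
                            links.append((block[i], block[j]))
            return links

        print(block_and_link(["jon smith", "john smith", "jane smith", "bob roe"]))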

  9. Research on Heterogeneous Data Exchange based on XML

    NASA Astrophysics Data System (ADS)

    Li, Huanqin; Liu, Jinfeng

    Integration of multiple data sources is becoming increasingly important for enterprises that cooperate closely with their partners for e-commerce. OLAP gives analysts and decision makers fast access to various materialized views from data warehouses. However, many corporations have internal business applications deployed on different platforms. This paper introduces a model for heterogeneous data exchange based on XML. The system can exchange and share data among the different sources. The method used to realize the heterogeneous data exchange is given in this paper.
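
    The exchange idea can be sketched as mapping heterogeneous source records onto one agreed XML vocabulary that both sides can parse. A minimal Python illustration (the element names are invented for the example, not a published schema):

        import xml.etree.ElementTree as ET

        def to_exchange_xml(records):
            """Map heterogeneous source rows onto one agreed XML vocabulary
            (element names here are illustrative, not a published schema)."""
            root = ET.Element("customers")
            for rec in records:
                c = ET.SubElement(root, "customer", id=str(rec["id"]))
                ET.SubElement(c, "name").text = rec["name"]
                ET.SubElement(c, "city").text = rec["city"]
            return ET.tostring(root, encoding="unicode")

        # Source A (relational-style dicts) and source B (CSV-style rows) are
        # normalized to the common shape before serialization.
        source_a = [{"id": 1, "name": "ACME GmbH", "city": "Berlin"}]
        source_b = [dict(zip(("id", "name", "city"), row))
                    for row in [("2", "Globex", "Tokyo")]]
        xml_doc = to_exchange_xml(source_a + source_b)
        print(xml_doc)
        # The receiving side parses the shared format back into its own model.
        for cust in ET.fromstring(xml_doc).findall("customer"):
            print(cust.get("id"), cust.findtext("name"), cust.findtext("city"))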

  10. Finite-Length Line Source Superposition Model (FLLSSM)

    NASA Astrophysics Data System (ADS)

    1980-03-01

    A linearized thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high level waste or spent fuel assemblies were represented as finite length line sources in a continuous media. The combined effects of multiple canisters in a representative storage pattern were established at selected points of interest by superposition of the temperature rises calculated for each canister. The methodology is outlined and the computer code FLLSSM which performs required numerical integrations and superposition operations is described.
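
    The superposition idea can be sketched in a steady-state simplification: the temperature rise from one finite line source is obtained by integrating the point-source kernel q'/(4*pi*k*R) along the canister axis, and the contributions of all canisters are then summed. The values and the steady-state assumption below are illustrative; FLLSSM itself treats the transient problem:

        import numpy as np

        def line_source_dT(point, p0, p1, q_per_m, k=2.5, n=200):
            """Steady temperature rise at `point` from a finite line source
            running from p0 to p1, by numerically integrating the point-source
            kernel dT = q' dl / (4*pi*k*R) along the axis."""
            p0, p1, point = map(np.asarray, (p0, p1, point))
            s = np.linspace(0.0, 1.0, n)[:, None]
            samples = p0 + s * (p1 - p0)                  # points on the line
            R = np.linalg.norm(samples - point, axis=1)
            vals = q_per_m / (4 * np.pi * k * R)
            dl = np.linalg.norm(p1 - p0) / (n - 1)
            return float(((vals[:-1] + vals[1:]) / 2).sum() * dl)  # trapezoid rule

        # Superposition over a 3x3 canister pattern, 10 m pitch, 4 m tall sources
        # (q' in W/m, k in W/m-K; all values illustrative).
        canisters = [((x, y, 0.0), (x, y, 4.0))
                     for x in (0, 10, 20) for y in (0, 10, 20)]
        observation = (5.0, 5.0, 2.0)                     # point of interest (m)
        dT = sum(line_source_dT(observation, p0, p1, q_per_m=100.0)
                 for p0, p1 in canisters)
        print(f"temperature rise: {dT:.2f} K")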

  11. Hemispherical reflectance model for passive images in an outdoor environment.

    PubMed

    Kim, Charles C; Thai, Bea; Yamaoka, Neil; Aboutalib, Omar

    2015-05-01

    We present a hemispherical reflectance model for simulating passive images in an outdoor environment where illumination is provided by natural sources such as the sun and the clouds. While the bidirectional reflectance distribution function (BRDF) accurately produces the radiance from any object under such illumination, using the BRDF to calculate radiance requires a double integration. Replacing the BRDF by the hemispherical reflectance under natural sources transforms the double integration into a multiplication. This reduces both storage space and computation time. We present the formalism for the radiance of the scene using hemispherical reflectance instead of the BRDF. This enables us to generate passive images in an outdoor environment while taking advantage of the computational and storage efficiencies. We show some examples for illustration.
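
    For a Lambertian-like surface the simplification is easy to see: the BRDF double integral over the illumination hemisphere collapses to L = rho * E / pi, one multiplication. A toy comparison under a uniform sky (all values illustrative):

        import numpy as np

        def radiance_brdf(brdf, sky_radiance, n=64):
            """Reflected radiance via the full BRDF double integral of
            f_r * L_i * cos(theta) * sin(theta) over the hemisphere."""
            theta = np.linspace(0, np.pi / 2, n)
            phi = np.linspace(0, 2 * np.pi, n)
            t, p = np.meshgrid(theta, phi, indexing="ij")
            integrand = brdf(t, p) * sky_radiance * np.cos(t) * np.sin(t)
            dt, dp = theta[1] - theta[0], phi[1] - phi[0]
            return integrand.sum() * dt * dp

        def radiance_hemispherical(rho, irradiance):
            """Same quantity with the hemispherical-reflectance shortcut:
            one multiplication instead of a double integral."""
            return rho * irradiance / np.pi

        rho = 0.3                                 # hemispherical reflectance
        lambertian = lambda t, p: rho / np.pi     # constant BRDF
        L_sky = 1.0                               # uniform sky radiance (toy)
        E = np.pi * L_sky                         # irradiance from a uniform sky
        print(radiance_brdf(lambertian, L_sky))           # ~ rho * L_sky
        print(radiance_hemispherical(rho, E))             # rho * L_sky exactly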

  12. From Retention to Satisfaction: New Outcomes for Assessing the Freshman Experience. AIR 1994 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Sanders, Liz; And Others

    To meet accountability challenges from a customer-satisfaction perspective, an urban institution of higher education has developed an integrated approach to studying the freshman year experience in order to develop comprehensive outcomes measures for assessing freshman success. Multiple sources of data (freshman satisfaction survey data,…

  13. Leveraging non-targeted metabolite profiling via statistical genomics

    USDA-ARS?s Scientific Manuscript database

    One of the challenges of systems biology is to integrate multiple sources of data in order to build a cohesive view of the system of study. Here we describe the mass spectrometry based profiling of maize kernels, a model system for genomic studies and a cornerstone of the agroeconomy. Using a networ...

  14. Popular Culture and Academic Literacies Situated in a Pedagogical Third Space

    ERIC Educational Resources Information Center

    Buelow, Stephanie

    2016-01-01

    This critical participatory action research study sought to understand what happens when students' interest and experiences with popular culture are integrated into a standards-based sixth grade English language arts curriculum. Multiple data sources were analyzed using the theoretical concept of third space. Findings showed that (a) a democratic,…

  15. Analyzing Student and Employer Satisfaction with Cooperative Education through Multiple Data Sources

    ERIC Educational Resources Information Center

    Jiang, Yuheng Helen; Lee, Sally Wai Yin; Golab, Lukasz

    2015-01-01

    This paper reports on the analysis of three years research of undergraduate cooperative work term postings and employer and employee evaluations. The objective of the analysis was to determine the factors affecting student and employer success and satisfaction with the work-integrated learning experience. It was found that students performed…

  16. Reflective Practice: Using Focus Groups to Determine Family Priorities and Guide Social Pragmatic Program Development

    ERIC Educational Resources Information Center

    Theadore, Geraldine; Laurent, Amy; Kovarsky, Dana; Weiss, Amy L.

    2011-01-01

    Reflective practice requires that professionals carefully examine and integrate multiple sources of information when designing intervention and evaluating its effectiveness. This article describes the use of focus group discussion as a form of qualitative research for understanding parents' perspectives of a university-based intervention program…

  17. Evidence-Based Assessment of Attention-Deficit/Hyperactivity Disorder: Using Multiple Sources of Information

    ERIC Educational Resources Information Center

    Frazier, Thomas W.; Youngstrom, Eric A.

    2006-01-01

    In this article, the authors illustrate a step-by-step process of acquiring and integrating information according to the recommendations of evidence-based practices. A case example models the process, leading to specific recommendations regarding instruments and strategies for evidence-based assessment (EBA) of attention-deficit/hyperactivity…

  18. Characterizing use-phase chemical releases, fate, and disposal for modeling longitudinal human exposures to consumer products

    EPA Science Inventory

    The US EPA’s Human Exposure Model (HEM) is an integrated modeling system to estimate human exposure to chemicals in household consumer products. HEM consists of multiple modules, which may be run either together, or independently. The Source-to-Dose (S2D) module in HEM use...

  19. Passing Messages between Biological Networks to Refine Predicted Interactions

    PubMed Central

    Glass, Kimberly; Huttenhower, Curtis; Quackenbush, John; Yuan, Guo-Cheng

    2013-01-01

    Regulatory network reconstruction is a fundamental problem in computational biology. There are significant limitations to such reconstruction using individual datasets, and researchers increasingly attempt to construct networks using multiple, independent datasets obtained from complementary sources, but methods for this integration are lacking. We developed PANDA (Passing Attributes between Networks for Data Assimilation), a message-passing model using multiple sources of information to predict regulatory relationships, and used it to integrate protein-protein interaction, gene expression, and sequence motif data to reconstruct genome-wide, condition-specific regulatory networks in yeast as a model. The resulting networks were not only more accurate than those produced using individual data sets and other existing methods, but they also captured information regarding specific biological mechanisms and pathways that were missed using other methodologies. PANDA is scalable to higher eukaryotes, applicable to specific tissue or cell type data, and conceptually generalizable to include a variety of regulatory, interaction, expression, and other genome-scale data. An implementation of the PANDA algorithm is available at www.sourceforge.net/projects/panda-net. PMID:23741402
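
    The published PANDA update rules are considerably more elaborate; the toy sketch below (all matrices, scalings, and the update rule are invented for illustration, not taken from the paper) only conveys the flavor of message passing between networks, nudging a prior regulatory network toward agreement with TF-cooperativity and gene co-expression evidence.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative inputs: a motif-based prior (TFs x genes), a TF-TF
        # cooperativity network (e.g. from PPI), and gene-gene co-expression.
        n_tf, n_gene = 3, 5
        W = rng.random((n_tf, n_gene))             # prior regulatory network
        P = np.corrcoef(rng.random((n_tf, 20)))    # TF cooperativity
        C = np.corrcoef(rng.random((n_gene, 20)))  # gene co-expression

        alpha = 0.2  # update (learning) rate
        for _ in range(10):
            # Message 1: does the gene behave like the TF's other targets?
            R = W @ C / n_gene
            # Message 2: do cooperating TFs also point at this gene?
            A = P @ W / n_tf
            # Move the network toward the average of both messages.
            W = (1 - alpha) * W + alpha * 0.5 * (R + A)

        print(np.round(W, 3))  # refined edge scores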

  20. Interaction of wave with a body submerged below an ice sheet with multiple arbitrarily spaced cracks

    NASA Astrophysics Data System (ADS)

    Li, Z. F.; Wu, G. X.; Ji, C. Y.

    2018-05-01

    The problem of wave interaction with a body submerged below an ice sheet with multiple arbitrarily spaced cracks is considered, based on the linearized velocity potential theory together with the boundary element method. The ice sheet is modeled as a thin elastic plate with uniform properties, and zero bending moment and shear force conditions are enforced at the cracks. The Green function satisfying all the boundary conditions, including those at the cracks, apart from that on the body surface, is derived and expressed in an explicit integral form. The boundary integral equation for the velocity potential is constructed with an unknown source distribution over the body surface only. The wave/crack interaction problem without the body is first solved directly, without the need for a source distribution. Convergence and comparison studies are undertaken to show the accuracy and reliability of the solution procedure. Detailed numerical results for the hydrodynamic coefficients and wave exciting forces are provided for a body submerged below double cracks and below an array of cracks. Some unique features are observed, and their mechanisms are analyzed.

  1. A computational- and storage-cloud for integration of biodiversity collections

    USGS Publications Warehouse

    Matsunaga, A.; Thompson, A.; Figueiredo, R. J.; Germain-Aubrey, C.C; Collins, M.; Beeman, R.S; Macfadden, B.J.; Riccardi, G.; Soltis, P.S; Page, L. M.; Fortes, J.A.B

    2013-01-01

    A core mission of the Integrated Digitized Biocollections (iDigBio) project is the building and deployment of a cloud computing environment customized to support the digitization workflow and integration of data from all U.S. nonfederal biocollections. iDigBio chose to use cloud computing technologies to deliver a cyberinfrastructure that is flexible, agile, resilient, and scalable to meet the needs of the biodiversity community. In this context, this paper describes the integration of open source cloud middleware, applications, and third party services using standard formats, protocols, and services. In addition, this paper demonstrates the value of the digitized information from collections in a broader scenario involving multiple disciplines.

  2. MBAT: a scalable informatics system for unifying digital atlasing workflows.

    PubMed

    Lee, Daren; Ruffins, Seth; Ng, Queenie; Sane, Nikhil; Anderson, Steve; Toga, Arthur

    2010-12-22

    Digital atlases provide a common semantic and spatial coordinate system that can be leveraged to compare, contrast, and correlate data from disparate sources. As the quality and amount of biological data continues to advance and grow, searching, referencing, and comparing this data with a researcher's own data is essential. However, the integration process is cumbersome and time-consuming due to misaligned data, implicitly defined associations, and incompatible data sources. This work addresses these challenges by providing a unified and adaptable environment to accelerate the workflow to gather, align, and analyze the data. The MouseBIRN Atlasing Toolkit (MBAT) project was developed as a cross-platform, free open-source application that unifies and accelerates the digital atlas workflow. A tiered, plug-in architecture was designed for the neuroinformatics and genomics goals of the project to provide a modular and extensible design. MBAT provides the ability to use a single query to search and retrieve data from multiple data sources, align image data using the user's preferred registration method, composite data from multiple sources in a common space, and link relevant informatics information to the current view of the data or atlas. The workspaces leverage tool plug-ins to extend the basic workspace functionality and allow future extensions. A wide variety of tool plug-ins were developed that integrate pre-existing as well as newly created technology into each workspace. Novel atlasing features were also developed, such as support for multiple label sets, dynamic selection and grouping of labels, and synchronized, context-driven display of ontological data. MBAT empowers researchers to discover correlations among disparate data by providing a unified environment for bringing together distributed reference resources, a user's image data, and biological atlases into the same spatial or semantic context. Through its extensible tiered plug-in architecture, MBAT allows researchers to customize all platform components to quickly achieve personalized workflows.

  3. Generation of open biomedical datasets through ontology-driven transformation and integration processes.

    PubMed

    Carmen Legaz-García, María Del; Miñarro-Giménez, José Antonio; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2016-06-03

    Biomedical research usually requires combining large volumes of data from multiple heterogeneous sources, which makes the integrated exploitation of such data difficult. The Semantic Web paradigm offers a natural technological space for data integration and exploitation by generating content readable by machines. Linked Open Data is a Semantic Web initiative that promotes the publication and sharing of data in machine-readable semantic formats. We present an approach for the transformation and integration of heterogeneous biomedical data with the objective of generating open biomedical datasets in Semantic Web formats. The transformation of the data is based on the mappings between the entities of the data schema and the ontological infrastructure that provides the meaning of the content. Our approach permits different types of mappings and includes the possibility of defining complex transformation patterns. Once the mappings are defined, they can be automatically applied to datasets to generate logically consistent content, and the mappings can be reused in further transformation processes. The results of our research are (1) a common transformation and integration process for heterogeneous biomedical data; (2) the application of Linked Open Data principles to generate interoperable, open, biomedical datasets; (3) a software tool, called SWIT, that implements the approach. In this paper we also describe how we have applied SWIT in different biomedical scenarios and some lessons learned. We have presented an approach that is able to generate open biomedical repositories in Semantic Web formats. SWIT is able to apply the Linked Open Data principles in the generation of the datasets, allowing their content to be linked to external repositories and creating linked open datasets. SWIT datasets may contain data from multiple sources and schemas, thus becoming integrated datasets.

  4. Implementation of Single Source Based Hospital Information System for the Catholic Medical Center Affiliated Hospitals

    PubMed Central

    Choi, Inyoung; Choi, Ran; Lee, Jonghyun

    2010-01-01

    Objectives The objective of this research is to introduce the unique approach of the Catholic Medical Center (CMC) to integrating network hospitals, together with the organizational and technical methodologies adopted for seamless implementation. Methods The Catholic Medical Center has developed a new hospital information system to connect network hospitals and adopted a new information technology architecture which uses a single source for multiple distributed hospital systems. Results The hospital information system of the CMC was developed to integrate network hospitals by adopting new system development principles: one source, one route, and one management. This information architecture has reduced the cost of system development and operation, and has enhanced the efficiency of the management process. Conclusions Integrating network hospitals through an information system was not simple; it was much more complicated than a single-organization implementation. We are still looking for a more efficient communication channel and decision-making process, and we believe that our new system architecture will improve the CMC health care system and provide much better quality of health care service to patients and customers. PMID:21818432

  5. DasPy – Open Source Multivariate Land Data Assimilation Framework with High Performance Computing

    NASA Astrophysics Data System (ADS)

    Han, Xujun; Li, Xin; Montzka, Carsten; Kollet, Stefan; Vereecken, Harry; Hendricks Franssen, Harrie-Jan

    2015-04-01

    Data assimilation has become a popular method to integrate observations from multiple sources with land surface models to improve predictions of the water and energy cycles of the soil-vegetation-atmosphere continuum. In recent years, several land data assimilation systems have been developed in different research agencies. Because of limited software availability or adaptability, these systems are not easy to apply to multivariate land data assimilation research. Multivariate data assimilation refers to the simultaneous assimilation of observation data for multiple model state variables into a simulation model. Our main motivation was to develop an open source multivariate land data assimilation framework (DasPy), implemented in the Python scripting language mixed with C++ and Fortran. The system has been evaluated in several soil moisture, L-band brightness temperature and land surface temperature assimilation studies. The implementation also allows parameter estimation (soil properties and/or leaf area index) on the basis of the joint state and parameter estimation approach. LETKF (Local Ensemble Transform Kalman Filter) is implemented as the main data assimilation algorithm, and uncertainties in the data assimilation can be represented by perturbed atmospheric forcings, perturbed soil and vegetation properties, and model initial conditions. CLM4.5 (Community Land Model) was integrated as the model operator. CMEM (Community Microwave Emission Modelling Platform), COSMIC (COsmic-ray Soil Moisture Interaction Code) and the two-source formulation were integrated as observation operators for the assimilation of L-band passive microwave, cosmic-ray soil moisture probe and land surface temperature measurements, respectively. DasPy is parallelized using hybrid MPI (Message Passing Interface) and OpenMP (Open Multi-Processing) techniques. All input and output data flow is organized efficiently using the widely used NetCDF file format. Online 1D and 2D visualization of data assimilation results is also implemented to facilitate post-simulation analysis. In summary, DasPy is a ready-to-use, open source, parallel multivariate land data assimilation framework.
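
    DasPy's LETKF is far more elaborate (localization, transform-based updates, parallel execution); as a minimal illustration of the ensemble Kalman idea underlying it, the sketch below performs one stochastic, perturbed-observation update of a small soil-moisture-like state vector. All numbers and names are illustrative.

        import numpy as np

        rng = np.random.default_rng(42)

        n_ens, n_state = 20, 3
        X = 0.25 + 0.05 * rng.standard_normal((n_state, n_ens))  # prior ensemble
        H = np.array([[1.0, 0.0, 0.0]])  # observation operator: observe layer 1
        y, r = 0.30, 0.02**2             # observation and its error variance

        # Ensemble covariance and Kalman gain
        A = X - X.mean(axis=1, keepdims=True)
        Pf = A @ A.T / (n_ens - 1)
        K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + np.array([[r]]))

        # Perturbed-observation update of every ensemble member
        Y = y + np.sqrt(r) * rng.standard_normal((1, n_ens))
        X = X + K @ (Y - H @ X)
        print("analysis mean:", X.mean(axis=1))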

  6. Power control apparatus and methods for electric vehicles

    DOEpatents

    Gadh, Rajit; Chung, Ching-Yen; Chu, Chi-Cheng; Qiu, Li

    2016-03-22

    Electric vehicle (EV) charging apparatus and methods are described which allow the sharing of charge current between multiple vehicles connected to a single source of charging energy. In addition, this charge sharing can be performed in a grid-friendly manner by lowering the current supplied to EVs when necessary in order to satisfy the needs of the grid or building operator. The apparatus and methods can be integrated into charging stations or can be implemented with a middle-man approach in which a multiple-EV charging box, which includes an EV emulator and multiple pilot signal generation circuits, is coupled to a single EV charge station.
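
    A minimal sketch (vehicle names and limits hypothetical, and not the patented circuitry) of the allocation problem described: share a site's available charging current among connected EVs, capping each vehicle at its pilot-signal maximum and scaling everyone back when the grid or building limit drops.

        def allocate_current(site_limit_a, vehicle_max_a):
            """Proportionally share site current, never exceeding a vehicle's max.
            Iterates because capping one vehicle frees current for the others."""
            alloc = {ev: 0.0 for ev in vehicle_max_a}
            remaining = site_limit_a
            active = dict(vehicle_max_a)
            while active and remaining > 1e-9:
                share = remaining / len(active)
                capped = {ev: mx for ev, mx in active.items() if mx <= share}
                if not capped:
                    for ev in active:
                        alloc[ev] += share
                    break
                for ev, mx in capped.items():
                    alloc[ev] += mx
                    remaining -= mx
                    del active[ev]
            return alloc

        # Grid-friendly scenario: the building limit drops from 80 A to 40 A.
        evs = {"ev1": 32, "ev2": 16, "ev3": 8}
        print(allocate_current(80, evs))  # every vehicle gets its maximum
        print(allocate_current(40, evs))  # {'ev1': 16.0, 'ev2': 16.0, 'ev3': 8.0}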

  7. Insights and Challenges to Integrating Data from Diverse Ecological Networks

    NASA Astrophysics Data System (ADS)

    Peters, D. P. C.

    2014-12-01

    Many of the most dramatic and surprising effects of global change occur across large spatial extents, from regions to continents, and impact multiple ecosystem types across a range of interacting spatial and temporal scales. The ability of ecologists and interdisciplinary scientists to understand and predict these dynamics depends, in large part, on existing site-based research infrastructures that developed in response to historic events. Integrating these diverse sources of data is critical to addressing these broad-scale questions. A conceptual approach is presented to synthesize and integrate diverse sources and types of data from different networks of research sites. This approach focuses on developing derived data products through spatial and temporal aggregation that allow datasets collected with different methods to be compared. The approach is illustrated through the integration, analysis, and comparison of hundreds of long-term datasets from 50 ecological sites in the US that represent ecosystem types commonly found globally. New insights were found by comparing multiple sites using common derived data. In addition to "bringing to light" much dark data in a standardized, open access, easy-to-use format, a suite of lessons were learned that can be applied to up-and-coming research networks in the US and internationally. These lessons are described along with the challenges, including cyber-infrastructure, cultural, and behavioral constraints associated with the use of big and little data, that may keep ecologists and interdisciplinary scientists from taking full advantage of the vast amounts of existing and yet-to-be-exposed data.

  8. In Silico Gene Prioritization by Integrating Multiple Data Sources

    PubMed Central

    Zhou, Yingyao; Shields, Robert; Chanda, Sumit K.; Elston, Robert C.; Li, Jing

    2011-01-01

    Identifying disease genes is crucial to the understanding of disease pathogenesis, and to the improvement of disease diagnosis and treatment. In recent years, many researchers have proposed approaches to prioritize candidate genes by considering the relationship of candidate genes and existing known disease genes, as reflected in other data sources. In this paper, we propose an expandable framework for gene prioritization that can integrate multiple heterogeneous data sources by taking advantage of a unified graphic representation. Gene-gene relationships and gene-disease relationships are then defined based on the overall topology of each network using a diffusion kernel measure. These relationship measures are in turn normalized to derive an overall measure across all networks, which is utilized to rank all candidate genes. Based on the informativeness of available data sources with respect to each specific disease, we also propose an adaptive threshold score to select a small subset of candidate genes for further validation studies. We performed large-scale cross-validation analysis on 110 disease families using three data sources. Results show that our approach consistently outperforms two other state-of-the-art programs. A case study using Parkinson disease (PD) identified four candidate genes (UBB, SEPT5, GPR37 and TH) that ranked higher than our adaptive threshold, all of which are involved in the PD pathway. In particular, a very recent study observed a deletion of TH in a patient with PD, which supports the importance of the TH gene in PD pathogenesis. A web tool has been implemented to assist scientists in their genetic studies. PMID:21731658
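
    A minimal sketch of the diffusion-kernel measure on a single toy network (the framework integrates several networks and normalizes across them; the gene names, edges, and the value of beta below are illustrative):

        import numpy as np
        from scipy.linalg import expm

        genes = ["g1", "g2", "g3", "g4", "g5"]
        edges = [("g1", "g2"), ("g2", "g3"), ("g3", "g4"), ("g4", "g5")]

        # Graph Laplacian L = D - A of the gene-gene network
        idx = {g: i for i, g in enumerate(genes)}
        A = np.zeros((5, 5))
        for u, v in edges:
            A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0
        L = np.diag(A.sum(axis=1)) - A

        # Diffusion kernel K = exp(-beta * L); K[i, j] measures network proximity
        beta = 0.5
        K = expm(-beta * L)

        # Rank candidates by their kernel similarity to known disease genes
        known, candidates = ["g1"], ["g3", "g4", "g5"]
        scores = {c: sum(K[idx[c], idx[k]] for k in known) for c in candidates}
        print(sorted(scores.items(), key=lambda kv: -kv[1]))  # g3 ranks highest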

  9. A method and software framework for enriching private biomedical sources with data from public online repositories.

    PubMed

    Anguita, Alberto; García-Remesal, Miguel; Graf, Norbert; Maojo, Victor

    2016-04-01

    Modern biomedical research relies on the semantic integration of heterogeneous data sources to find data correlations. Researchers access multiple datasets of disparate origin and identify elements (e.g. genes, compounds, pathways) that lead to interesting correlations. Normally, they must refer to additional public databases (e.g. scientific literature, published clinical trial results) to enrich the information about the identified entities. While semantic integration techniques have traditionally focused on providing homogeneous access to private datasets, thus helping automate the first part of the research, and different solutions exist for browsing public data, there is still a need for tools that facilitate merging public repositories with private datasets. This paper presents a framework that automatically locates public data of interest to the researcher and semantically integrates it with existing private datasets. The framework has been designed as an extension of traditional data integration systems, and has been validated with an existing data integration platform from a European research project by integrating a private biological dataset with data from the National Center for Biotechnology Information (NCBI).

  10. Explosion localization and characterization via infrasound using numerical modeling

    NASA Astrophysics Data System (ADS)

    Fee, D.; Kim, K.; Iezzi, A. M.; Matoza, R. S.; Jolly, A. D.; De Angelis, S.; Diaz Moreno, A.; Szuberla, C.

    2017-12-01

    Numerous methods have been applied to locate, detect, and characterize volcanic and anthropogenic explosions using infrasound. Far-field localization techniques typically use back-azimuths from multiple arrays (triangulation) or Reverse Time Migration (RTM, or back-projection). At closer ranges, networks surrounding a source may use Time Difference of Arrival (TDOA), semblance, station-pair double difference, etc. However, at volcanoes and in regions with topography or obstructions that block the direct path of sound, recent studies have shown that numerical modeling is necessary to provide an accurate source location. A heterogeneous and moving atmosphere (winds) may also affect the location. The time reversal mirror (TRM) application of Kim et al. (2015) back-propagates the wavefield using a Finite Difference Time Domain (FDTD) algorithm, with the source corresponding to the location of peak convergence. Although it provides high-resolution source localization and can account for complex wave propagation, TRM is computationally expensive and limited to individual events. Here we present a new technique, termed RTM-FDTD, which integrates RTM and FDTD. Travel time and transmission loss information is computed from each station to the entire potential source grid from 3-D Green's functions derived via FDTD. The wave energy is then back-projected and stacked at each grid point, with the maximum corresponding to the likely source. We apply our method to detect and characterize thousands of explosions from Yasur Volcano, Vanuatu, and Etna Volcano, Italy, both of which feature complex wave propagation and multiple source locations. We compare our results with those from more traditional methods (e.g. semblance), and suggest that our method is preferable as it is computationally less expensive than TRM but still integrates numerical modeling. RTM-FDTD could be applied to volcanic and anthropogenic sources at a wide variety of ranges and scenarios. Kim, K., Lees, J.M., 2015. Imaging volcanic infrasound sources using time reversal mirror algorithm. Geophysical Journal International 202, 1663-1676.
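
    Schematically (not the authors' code, and with straight-line travel times at a constant sound speed standing in for the FDTD-derived Green's functions and transmission losses), the back-project-and-stack step looks like this:

        import numpy as np

        c = 340.0   # m/s; stand-in for FDTD-derived travel times
        fs = 50.0   # sample rate (Hz)
        stations = np.array([[0.0, 0.0], [800.0, 0.0], [0.0, 800.0]])
        grid = np.array([[x, y] for x in range(0, 801, 100)
                                for y in range(0, 801, 100)])

        # Synthetic test: impulse emitted at (300, 500), recorded with delays
        true_src = np.array([300.0, 500.0])
        n = 512
        traces = np.zeros((len(stations), n))
        for s, pos in enumerate(stations):
            delay = int(round(np.linalg.norm(true_src - pos) / c * fs))
            traces[s, delay] = 1.0

        # Back-project: undo each candidate node's predicted delay, then stack
        stack = np.zeros(len(grid))
        for g, node in enumerate(grid):
            total = np.zeros(n)
            for s, pos in enumerate(stations):
                shift = int(round(np.linalg.norm(node - pos) / c * fs))
                total += np.roll(traces[s], -shift)
            stack[g] = total.max()

        print("best node:", grid[np.argmax(stack)])  # near (300, 500)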

  11. COHeRE: Cross-Ontology Hierarchical Relation Examination for Ontology Quality Assurance.

    PubMed

    Cui, Licong

    Biomedical ontologies play a vital role in healthcare information management, data integration, and decision support. Ontology quality assurance (OQA) is an indispensable part of the ontology engineering cycle. Most existing OQA methods are based on the knowledge provided within the targeted ontology. This paper proposes a novel cross-ontology analysis method, Cross-Ontology Hierarchical Relation Examination (COHeRE), to detect inconsistencies and possible errors in hierarchical relations across multiple ontologies. COHeRE leverages the Unified Medical Language System (UMLS) knowledge source and the MapReduce cloud computing technique for systematic, large-scale ontology quality assurance work. COHeRE consists of three main steps, with UMLS concepts and relations as the input. First, the relations claimed in source vocabularies are filtered and aggregated for each pair of concepts. Second, inconsistent relations are detected if a concept pair is related by different types of relations in different source vocabularies. Finally, the uncovered inconsistent relations are voted on according to their number of occurrences across the source vocabularies. The voting result, together with the inconsistent relations, serves as the output of COHeRE for possible ontological change, with the highest vote providing an initial suggestion of how such inconsistencies might be fixed. In UMLS, 138,987 concept pairs were found to have inconsistent relationships across multiple source vocabularies. 40 inconsistent concept pairs involving hierarchical relationships were randomly selected and manually reviewed by a human expert. 95.8% of the inconsistent relations involved in these concept pairs indeed exist in their source vocabularies rather than being introduced by mistake in the UMLS integration process, and the expert agreed with the suggested relationship for 73.7% of the concept pairs. The effectiveness of COHeRE indicates that UMLS provides a promising environment for enhancing the quality of biomedical ontologies through cross-ontology examination.
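
    A toy sketch of the three steps (illustrative assertions only, without the UMLS scale or the MapReduce machinery): aggregate each concept pair's relations across source vocabularies, flag pairs asserted with different relation types, and let the majority relation serve as the initial repair suggestion.

        from collections import Counter, defaultdict

        # (concept_a, concept_b, relation, source_vocabulary) -- invented examples
        assertions = [
            ("C1", "C2", "is_a",    "VOCAB_X"),
            ("C1", "C2", "is_a",    "VOCAB_Y"),
            ("C1", "C2", "part_of", "VOCAB_Z"),
            ("C3", "C4", "is_a",    "VOCAB_X"),
        ]

        # Step 1: aggregate relations claimed for each concept pair
        by_pair = defaultdict(list)
        for a, b, rel, vocab in assertions:
            by_pair[(a, b)].append((rel, vocab))

        # Steps 2-3: detect inconsistent pairs and vote by occurrence count
        for pair, claims in by_pair.items():
            votes = Counter(rel for rel, _ in claims)
            if len(votes) > 1:
                suggestion, _ = votes.most_common(1)[0]
                print(f"inconsistent {pair}: {dict(votes)}; suggest '{suggestion}'")
        # -> inconsistent ('C1', 'C2'): {'is_a': 2, 'part_of': 1}; suggest 'is_a'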

  12. Measurement-device-independent quantum key distribution with multiple crystal heralded source with post-selection

    NASA Astrophysics Data System (ADS)

    Chen, Dong; Shang-Hong, Zhao; MengYi, Deng

    2018-03-01

    The multiple crystal heralded source with post-selection (MHPS), originally introduced to improve the single-photon character of the heralded source, has specific applications for quantum information protocols. In this paper, by combining decoy-state measurement-device-independent quantum key distribution (MDI-QKD) with the spontaneous parametric downconversion process, we present a modified MDI-QKD scheme with the MHPS, for which two architectures are proposed: a symmetric scheme and an asymmetric scheme. The symmetric scheme, linked by photon switches in a log-tree structure, is adopted to overcome the limitation of the currently low efficiency of m-to-1 optical switches. The asymmetric scheme, which has a chained structure, is used to cope with the scalability issue arising from the increase in the number of crystals in the symmetric scheme. Numerical simulations show that our modified scheme has clear advantages in both transmission distance and key generation rate compared to the original MDI-QKD with a weak coherent source or a traditional heralded source with post-selection. Furthermore, recent advances in integrated photonics suggest that, if built into a single chip, the MHPS might be a practical alternative source for quantum key distribution tasks requiring single photons.

  13. PySE: Python Source Extractor for radio astronomical images

    NASA Astrophysics Data System (ADS)

    Spreeuw, Hanno; Swinbank, John; Molenaar, Gijs; Staley, Tim; Rol, Evert; Sanders, John; Scheers, Bart; Kuiack, Mark

    2018-05-01

    PySE finds and measures sources in radio telescope images. It is run with several options, such as the detection threshold (a multiple of the local noise), grid size, and the forced clean beam fit, followed by a list of input image files in standard FITS or CASA format. From these, PySE provides a list of found sources; information such as the calculated background image, the source list in different formats (e.g. text, region files importable in DS9), and other data may be saved. PySE can be integrated into a pipeline; it was originally written as part of the LOFAR Transient Detection Pipeline (TraP, ascl:1412.011).
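
    Schematically, and much simplified relative to PySE itself, detection against a multiple of the local noise can be sketched as: estimate background and RMS maps, mask pixels above the threshold, and label connected islands. The 32-pixel grid size and 5-sigma threshold below are illustrative.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(1)

        # Synthetic image: Gaussian noise plus two point sources
        img = rng.normal(0.0, 1.0, (128, 128))
        img[30, 40] += 12.0
        img[90, 100] += 9.0

        # Local background and noise maps (crude stand-ins for PySE's
        # background/RMS grids)
        bg = ndimage.uniform_filter(img, size=32)
        rms = np.sqrt(ndimage.uniform_filter((img - bg) ** 2, size=32))

        # Detection threshold: a multiple of the local noise
        mask = (img - bg) > 5.0 * rms
        labels, n_src = ndimage.label(mask)
        for i in range(1, n_src + 1):
            y, x = np.argwhere(labels == i)[0]
            print(f"source {i} near ({y}, {x}), peak {img[y, x]:.1f}")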

  14. Liquid metal ion source and alloy for ion emission of multiple ionic species

    DOEpatents

    Clark, Jr., William M.; Utlaut, Mark W.; Wysocki, Joseph A.; Storms, Edmund K.; Szklarz, Eugene G.; Behrens, Robert G.; Swanson, Lynwood W.; Bell, Anthony E.

    1987-06-02

    A liquid metal ion source and alloy for the simultaneous ion evaporation of arsenic and boron, arsenic and phosphorus, or arsenic, boron and phosphorus. The ionic species to be evaporated are contained in palladium-arsenic-boron and palladium-arsenic-boron-phosphorus alloys. The ion source, including an emitter means such as a needle emitter and a source means such as a U-shaped heater element, is preferably constructed of rhenium and tungsten, both of which are readily fabricated. The ion sources emit continuous beams of ions having sufficiently high currents of the desired species to be useful in the ion implantation of semiconductor wafers for preparing integrated circuit devices. The sources are stable in operation, experience little corrosion during operation, and have long operating lifetimes.

  15. The Unidata Integrated Data Viewer

    NASA Astrophysics Data System (ADS)

    Weber, W. J.; Ho, Y.

    2016-12-01

    The Unidata Integrated Data Viewer (IDV) is a free and open source virtual-globe software application that enables three-dimensional viewing of earth science data. The Unidata IDV is data agnostic and can display and analyze disparate data in a single view. This capability facilitates cross-discipline research and allows multiple observation platforms to be displayed simultaneously for any given event. The Unidata IDV is a mature application, written in Java, and has been serving the earth science community for over 15 years. This demonstration will focus on near-real-time global satellite observations, the integration of the COSMIC radio occultation data set that profiles the atmosphere, and high resolution numerical weather prediction.

  16. A VGI data integration framework based on linked data model

    NASA Astrophysics Data System (ADS)

    Wan, Lin; Ren, Rongrong

    2015-12-01

    This paper addresses geographic data integration and sharing for multiple online VGI data sets. We propose a semantic-enabled framework for an online VGI-source cooperative application environment to solve a target class of geospatial problems. Based on linked data technologies, one of the core components of the semantic web, we construct relationship links among geographic features distributed across diverse VGI platforms using linked data modeling methods, deploy these semantic-enabled entities on the web, and eventually form an interconnected geographic data network to support cooperative geospatial applications across multiple VGI data sources. The mapping and transformation from VGI sources to the RDF linked data model is presented to guarantee a unique data representation model among the different online social geographic data sources. We propose a mixed strategy that combines spatial-distance similarity and feature-name attribute similarity as the measure for comparing and matching geographic features across VGI data sets. Our work focuses on how to apply Markov logic networks to achieve interlinks of the same linked data in different VGI-based linked data sets; in particular, the automatic generation of a co-reference object identification model from geographic linked data is discussed in detail. The result is a large geographic linked data network across loosely coupled VGI web sites. Experiments built on our framework and the evaluation of our method show that the framework is reasonable and practicable.
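
    A minimal sketch of the mixed matching strategy described above, combining a spatial-distance score with a name-string score (the weights, radius, and use of difflib are illustrative; the paper itself layers Markov logic networks on top of such evidence):

        from difflib import SequenceMatcher
        from math import asin, cos, radians, sin, sqrt

        def haversine_m(lon1, lat1, lon2, lat2):
            """Great-circle distance in metres between two lon/lat points."""
            lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
            h = (sin((lat2 - lat1) / 2) ** 2
                 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
            return 2 * 6371000 * asin(sqrt(h))

        def match_score(f1, f2, radius_m=200.0, w_spatial=0.5, w_name=0.5):
            """Combined similarity of two features from different VGI data sets."""
            d = haversine_m(f1["lon"], f1["lat"], f2["lon"], f2["lat"])
            spatial = max(0.0, 1.0 - d / radius_m)              # 1 when co-located
            name = SequenceMatcher(None, f1["name"].lower(),
                                   f2["name"].lower()).ratio()  # 1 when identical
            return w_spatial * spatial + w_name * name

        osm  = {"name": "Central Station", "lon": 114.057,  "lat": 22.543}
        wiki = {"name": "central station", "lon": 114.0575, "lat": 22.5432}
        print(round(match_score(osm, wiki), 3))  # high score -> likely same feature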

  17. Hawks and Baby Chickens: Cultivating the Sources of Indigenous Science Education

    ERIC Educational Resources Information Center

    Easton, Peter B.

    2011-01-01

    In this response to Hewson and Ogunniyi's paper on indigenous knowledge (IK) and science teaching in South Africa, I seek to broaden the debate by setting the enterprise of integrating IK into science education in its cultural and socio-political context. I begin by exploring the multiple meanings of indigenous knowledge in Africa, next consider…

  18. D-Move: A Mobile Communication Based Delphi for Digital Natives to Support Embedded Research

    ERIC Educational Resources Information Center

    Petrovic, Otto

    2017-01-01

    Digital Natives are raised with computers and the Internet, which are a familiar part of their daily life. To gain insights into their attitude and behavior, methods and media for empirical research face new challenges like gamification, context oriented embedded research, integration of multiple data sources, and the increased importance of…

  19. Resources and Practices to Help Graduate Students and Postdoctoral Fellows Write Statements of Teaching Philosophy

    ERIC Educational Resources Information Center

    Kearns, Katherine D.; Sullivan, Carol Subino

    2011-01-01

    Students and postdoctoral fellows currently encounter requests for a statement of teaching philosophy in at least half of academic job announcements in the United States. A systematic process for the development of a teaching statement is required that integrates multiple sources of support, informs writers of the document's purpose and audience,…

  20. Collaborative Action Research on Technology Integration for Science Learning

    ERIC Educational Resources Information Center

    Wang, Chien-hsing; Ke, Yi-Ting; Wu, Jin-Tong; Hsu, Wen-Hua

    2012-01-01

    This paper briefly reports the outcomes of an action research inquiry on the use of blogs, MS PowerPoint [PPT], and the Internet as learning tools with a science class of sixth graders for project-based learning. Multiple sources of data were essential to triangulate the key findings articulated in this paper. Corresponding to previous studies,…

  1. Complex within complex: integrative taxonomy reveals hidden diversity in Cicadetta brevipennis (Hemiptera: Cicadidae) and unexpected relationships with a song divergent relative

    USDA-ARS?s Scientific Manuscript database

    Multiple sources of data in combination are essential for species delimitation and classification of difficult taxonomic groups. Here we investigate a cicada taxon with unusual cryptic diversity and we attempt to resolve seemingly contradictory data sets. Cicada songs act as species-specific premati...

  2. When Neurons Meet Electrons: Three Trends That Are Sparking Change in Computer Publishing.

    ERIC Educational Resources Information Center

    Cranney, Charles

    1992-01-01

    Three important trends in desktop publishing include (1) use of multiple media in presentation of information; (2) networking; and (3) "hot links" (integrated file-exchange formats). It is also important for college publications professionals to be familiar with sources of information about technological change and to be able to sort out the…

  3. Misconceptions and Biases in German Students' Perception of Multiple Energy Sources: Implications for Science Education

    ERIC Educational Resources Information Center

    Lee, Roh Pin

    2016-01-01

    Misconceptions and biases in energy perception could influence people's support for developments integral to the success of restructuring a nation's energy system. Science education, in equipping young adults with the cognitive skills and knowledge necessary to navigate in the confusing energy environment, could play a key role in paving the way…

  4. Wanted dead or alive: A state-space mark-recapture-recovery model incorporating multiple recovery types and state uncertainty

    USGS Publications Warehouse

    Hostetter, Nathan; Gardner, Beth; Evans, Allen F.; Cramer, Bradley M.; Payton, Quinn; Collis, Ken; Roby, Daniel D.

    2017-01-01

    We developed a state-space mark-recapture-recovery model that incorporates multiple recovery types and state uncertainty to estimate survival of an anadromous fish species. We apply the model to a dataset of out-migrating juvenile steelhead trout (Oncorhynchus mykiss) tagged with passive integrated transponders, recaptured during outmigration, and recovered on bird colonies in the Columbia River basin (2008-2014). Recoveries on bird colonies are often ignored in survival studies because the river reach of mortality is often unknown, which we model as a form of state uncertainty. Median outmigration survival from release to the lower river (river kilometer 729 to 75) ranged from 0.27 to 0.35, depending on year. Recovery probabilities were frequently >0.20 in the first river reach following tagging, indicating that one out of five fish that died in that reach was recovered on a bird colony. Integrating dead recovery data provided increased parameter precision, estimation of where birds consumed fish, and survival estimates across larger spatial scales. More generally, these modeling approaches provide a flexible framework to integrate multiple sources of tag recovery data into mark-recapture studies.

  5. Two-particle Bose-Einstein correlations in pp collisions at √s = 0.9 and 7 TeV measured with the ATLAS detector.

    PubMed

    Aad, G; Abbott, B; Abdallah, J; et al. (ATLAS Collaboration)
J; Schillo, C; Schioppa, M; Schlenker, S; Schmidt, E; Schmieden, K; Schmitt, C; Schmitt, S; Schneider, B; Schnellbach, Y J; Schnoor, U; Schoeffel, L; Schoening, A; Schoenrock, B D; Schorlemmer, A L S; Schott, M; Schouten, D; Schovancova, J; Schramm, S; Schreyer, M; Schroeder, C; Schuh, N; Schultens, M J; Schultz-Coulon, H-C; Schulz, H; Schumacher, M; Schumm, B A; Schune, Ph; Schwanenberger, C; Schwartzman, A; Schwarz, T A; Schwegler, Ph; Schwemling, Ph; Schwienhorst, R; Schwindling, J; Schwindt, T; Schwoerer, M; Sciacca, F G; Scifo, E; Sciolla, G; Scott, W G; Scuri, F; Scutti, F; Searcy, J; Sedov, G; Sedykh, E; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Sekula, S J; Selbach, K E; Seliverstov, D M; Sellers, G; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Serre, T; Seuster, R; Severini, H; Sfiligoj, T; Sforza, F; Sfyrla, A; Shabalina, E; Shamim, M; Shan, L Y; Shang, R; Shank, J T; Shapiro, M; Shatalov, P B; Shaw, K; Shehu, C Y; Sherwood, P; Shi, L; Shimizu, S; Shimmin, C O; Shimojima, M; Shiyakova, M; Shmeleva, A; Shochet, M J; Short, D; Shrestha, S; Shulga, E; Shupe, M A; Shushkevich, S; Sicho, P; Sidiropoulou, O; Sidorov, D; Sidoti, A; Siegert, F; Sijacki, Dj; Silva, J; Silver, Y; Silverstein, D; Silverstein, S B; Simak, V; Simard, O; Simic, Lj; Simion, S; Simioni, E; Simmons, B; Simoniello, R; Simonyan, M; Sinervo, P; Sinev, N B; Sipica, V; Siragusa, G; Sircar, A; Sisakyan, A N; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skottowe, H P; Skovpen, K Yu; Skubic, P; Slater, M; Slavicek, T; Sliwa, K; Smakhtin, V; Smart, B H; Smestad, L; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smith, K M; Smizanska, M; Smolek, K; Snesarev, A A; Snidero, G; Snyder, S; Sobie, R; Socher, F; Soffer, A; Soh, D A; Solans, C A; Solar, M; Solc, J; Soldatov, E Yu; Soldevila, U; Solodkov, A A; Soloshenko, A; Solovyanov, O V; Solovyev, V; Sommer, P; Song, H Y; Soni, N; Sood, A; Sopczak, A; Sopko, B; Sopko, V; Sorin, V; Sosebee, M; Soualah, R; Soueid, P; Soukharev, A M; South, D; Spagnolo, S; Spanò, F; Spearman, W R; Spettel, F; Spighi, R; Spigo, G; Spiller, L A; Spousta, M; Spreitzer, T; Spurlock, B; St Denis, R D; Staerz, S; Stahlman, J; Stamen, R; Stamm, S; Stanecka, E; Stanek, R W; Stanescu, C; Stanescu-Bellu, M; Stanitzki, M M; Stapnes, S; Starchenko, E A; Stark, J; Staroba, P; Starovoitov, P; Staszewski, R; Stavina, P; Steinberg, P; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stern, S; Stewart, G A; Stillings, J A; Stockton, M C; Stoebe, M; Stoicea, G; Stolte, P; Stonjek, S; Stradling, A R; Straessner, A; Stramaglia, M E; Strandberg, J; Strandberg, S; Strandlie, A; Strauss, E; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Stroynowski, R; Strubig, A; Stucci, S A; Stugu, B; Styles, N A; Su, D; Su, J; Subramaniam, R; Succurro, A; Sugaya, Y; Suhr, C; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, S; Sun, X; Sundermann, J E; Suruliz, K; Susinno, G; Sutton, M R; Suzuki, Y; Svatos, M; Swedish, S; Swiatlowski, M; Sykora, I; Sykora, T; Ta, D; Taccini, C; Tackmann, K; Taenzer, J; Taffard, A; Tafirout, R; Taiblum, N; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A A; Tam, J Y C; Tan, K G; Tanaka, J; Tanaka, R; Tanaka, S; Tanaka, S; Tanasijczuk, A J; Tannenwald, B B; Tannoury, N; Tapprogge, S; Tarem, S; Tarrade, F; Tartarelli, G F; Tas, P; Tasevsky, M; Tashiro, T; Tassi, E; Tavares Delgado, A; Tayalati, Y; Taylor, F E; Taylor, G N; Taylor, W; Teischinger, F A; Teixeira Dias Castanheira, M; Teixeira-Dias, P; Temming, K K; Ten Kate, 
H; Teng, P K; Teoh, J J; Terada, S; Terashi, K; Terron, J; Terzo, S; Testa, M; Teuscher, R J; Therhaag, J; Theveneaux-Pelzer, T; Thomas, J P; Thomas-Wilsker, J; Thompson, E N; Thompson, P D; Thompson, P D; Thompson, R J; Thompson, A S; Thomsen, L A; Thomson, E; Thomson, M; Thong, W M; Thun, R P; Tian, F; Tibbetts, M J; Tikhomirov, V O; Tikhonov, Yu A; Timoshenko, S; Tiouchichine, E; Tipton, P; Tisserant, S; Todorov, T; Todorova-Nova, S; Toggerson, B; Tojo, J; Tokár, S; Tokushuku, K; Tollefson, K; Tolley, E; Tomlinson, L; Tomoto, M; Tompkins, L; Toms, K; Topilin, N D; Torrence, E; Torres, H; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Tran, H L; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Trischuk, W; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trovatelli, M; True, P; Trzebinski, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsirintanis, N; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsukerman, I I; Tsulaia, V; Tsuno, S; Tsybychev, D; Tudorache, A; Tudorache, V; Tuna, A N; Tupputi, S A; Turchikhin, S; Turecek, D; Turra, R; Tuts, P M; Tykhonov, A; Tylmad, M; Tyndel, M; Uchida, K; Ueda, I; Ueno, R; Ughetto, M; Ugland, M; Uhlenbrock, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Ungaro, F C; Unno, Y; Unverdorben, C; Urbaniec, D; Urquijo, P; Usai, G; Usanova, A; Vacavant, L; Vacek, V; Vachon, B; Valencic, N; Valentinetti, S; Valero, A; Valery, L; Valkar, S; Valladolid Gallego, E; Vallecorsa, S; Valls Ferrer, J A; Van Den Wollenberg, W; Van Der Deijl, P C; van der Geer, R; van der Graaf, H; Van Der Leeuw, R; van der Ster, D; van Eldik, N; van Gemmeren, P; Van Nieuwkoop, J; van Vulpen, I; van Woerden, M C; Vanadia, M; Vandelli, W; Vanguri, R; Vaniachine, A; Vannucci, F; Vardanyan, G; Vari, R; Varnes, E W; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vazeille, F; Vazquez Schroeder, T; Veatch, J; Veloso, F; Velz, T; Veneziano, S; Ventura, A; Ventura, D; Venturi, M; Venturi, N; Venturini, A; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Viazlo, O; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Vigne, R; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinogradov, V B; Virzi, J; Vivarelli, I; Vives Vaque, F; Vlachos, S; Vladoiu, D; Vlasak, M; Vogel, A; Vogel, M; Vokac, P; Volpi, G; Volpi, M; von der Schmitt, H; von Radziewski, H; von Toerne, E; Vorobel, V; Vorobev, K; Vos, M; Voss, R; Vossebeld, J H; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vu Anh, T; Vuillermet, R; Vukotic, I; Vykydal, Z; Wagner, P; Wagner, W; Wahlberg, H; Wahrmund, S; Wakabayashi, J; Walder, J; Walker, R; Walkowiak, W; Wall, R; Waller, P; Walsh, B; Wang, C; Wang, C; Wang, F; Wang, H; Wang, H; Wang, J; Wang, J; Wang, K; Wang, R; Wang, S M; Wang, T; Wang, X; Wanotayaroj, C; Warburton, A; Ward, C P; Wardrope, D R; Warsinsky, M; Washbrook, A; Wasicki, C; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, B M; Webb, S; Weber, M S; Weber, S W; Webster, J S; Weidberg, A R; Weigell, P; Weinert, B; Weingarten, J; Weiser, C; Weits, H; Wells, P S; Wenaus, T; Wendland, D; Weng, Z; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Wessels, M; Wetter, J; Whalen, K; White, A; White, M J; White, R; White, S; Whiteson, D; Wicke, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wijeratne, P A; Wildauer, A; Wildt, M A; Wilkens, H G; Will, J Z; Williams, H H; Williams, S; Willis, C; 
Willocq, S; Wilson, A; Wilson, J A; Wingerter-Seez, I; Winklmeier, F; Winter, B T; Wittgen, M; Wittig, T; Wittkowski, J; Wollstadt, S J; Wolter, M W; Wolters, H; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wright, M; Wu, M; Wu, S L; Wu, X; Wu, Y; Wulf, E; Wyatt, T R; Wynne, B M; Xella, S; Xiao, M; Xu, D; Xu, L; Yabsley, B; Yacoob, S; Yakabe, R; Yamada, M; Yamaguchi, H; Yamaguchi, Y; Yamamoto, A; Yamamoto, K; Yamamoto, S; Yamamura, T; Yamanaka, T; Yamauchi, K; Yamazaki, Y; Yan, Z; Yang, H; Yang, H; Yang, U K; Yang, Y; Yanush, S; Yao, L; Yao, W-M; Yasu, Y; Yatsenko, E; Yau Wong, K H; Ye, J; Ye, S; Yeletskikh, I; Yen, A L; Yildirim, E; Yilmaz, M; Yoosoofmiya, R; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J S; Youssef, S; Yu, D R; Yu, J; Yu, J M; Yu, J; Yuan, L; Yurkewicz, A; Yusuff, I; Zabinski, B; Zaidan, R; Zaitsev, A M; Zaman, A; Zambito, S; Zanello, L; Zanzi, D; Zeitnitz, C; Zeman, M; Zemla, A; Zengel, K; Zenin, O; Ženiš, T; Zerwas, D; Zevi Della Porta, G; Zhang, D; Zhang, F; Zhang, H; Zhang, J; Zhang, L; Zhang, X; Zhang, Z; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, L; Zhou, N; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zhukov, K; Zibell, A; Zieminska, D; Zimine, N I; Zimmermann, C; Zimmermann, R; Zimmermann, S; Zimmermann, S; Zinonos, Z; Ziolkowski, M; Zobernig, G; Zoccoli, A; Zur Nedden, M; Zurzolo, G; Zutshi, V; Zwalinski, L

    The paper presents studies of Bose-Einstein Correlations (BEC) for pairs of like-sign charged particles measured in the kinematic range p_T > 100 MeV and |η| < 2.5 in proton-proton collisions at centre-of-mass energies of 0.9 and 7 TeV with the ATLAS detector at the CERN Large Hadron Collider. The integrated luminosities are approximately 7 μb⁻¹, 190 μb⁻¹ and 12.4 nb⁻¹ for the 0.9 TeV, 7 TeV minimum-bias and 7 TeV high-multiplicity data samples, respectively. The multiplicity dependence of the BEC parameters characterizing the correlation strength and the correlation source size is investigated for charged-particle multiplicities of up to 240. A saturation effect in the multiplicity dependence of the correlation source size parameter is observed using the high-multiplicity 7 TeV data sample. The dependence of the BEC parameters on the average transverse momentum of the particle pair is also investigated.
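
    For orientation, the two standard parameterizations of the two-particle correlation function used in BEC analyses are sketched below. This is a generic form, not necessarily the exact fit function of this paper; Q is the invariant four-momentum difference of the pair, λ the correlation strength and R the source-size parameter:

        % Hedged sketch: conventional BEC fit forms, not quoted from the paper.
        \[
          C_2(Q) \;=\; \frac{\rho(p_1,p_2)}{\rho_{\text{ref}}(p_1,p_2)}
          \;\approx\; 1 + \lambda\, e^{-R^2 Q^2} \quad \text{(Gaussian)}
          \qquad \text{or} \qquad
          1 + \lambda\, e^{-R Q} \quad \text{(exponential)} .
        \]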

  6. The risk assessment of sudden water pollution for river network system under multi-source random emission

    NASA Astrophysics Data System (ADS)

    Li, D.

    2016-12-01

    Sudden water pollution accidents are unavoidable risk events that water managers must learn to live with. In China's Taihu River Basin, river flow conditions are complicated by frequent artificial interference. Sudden water pollution accidents occur mainly as large abnormal discharges of wastewater and are characterized by sudden onset, uncontrollable extent, uncertain affected objects and a concentrated distribution of many risk sources. Effective prevention of such accidents is of great significance for water quality safety management. Bayesian networks can represent the relationship between pollution sources and river water quality intuitively. Using a time-sequential Monte Carlo algorithm, the pollution-source state-switching model, a water quality model for the river network and Bayesian reasoning are integrated, and a sudden water pollution risk assessment model for the river network is developed to quantify water quality risk under the collective influence of multiple pollution sources. Based on the isotope water transport mechanism, a dynamic tracing model of multiple pollution sources is established, which describes the relationship between the system's exceedance risk and the multiple risk sources. Finally, a diagnostic reasoning algorithm based on the Bayesian network is coupled with the multi-source tracing model to identify the contribution of each risk source to the system risk under complex flow conditions. Taking the Taihu Lake water system as the study area, the model is applied to three typical years and yields reasonable results. The studies show that water quality risk at critical sections is influenced by the pollution risk sources, the boundary water quality, the hydrological conditions and the self-purification capacity, and that multiple pollution sources have a marked effect on the water quality risk of the receiving water body. The water quality risk assessment approach developed in this study offers an effective tool for systematically quantifying random uncertainty in a plain river network system, and it provides technical support for decision-making on controlling sudden water pollution through identification of critical pollution sources.
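
    As a rough illustration of the diagnostic reasoning described above, the Python sketch below inverts a tiny noisy-OR Bayesian network to attribute an observed water-quality exceedance to candidate pollution sources. It is a toy under invented priors and likelihoods, not the authors' model; all source names and probabilities are hypothetical:

        # Hedged sketch (not the paper's model): diagnostic reasoning over a tiny
        # Bayesian network that links random pollution-source states to a
        # water-quality exceedance at a downstream section. All names, priors and
        # likelihoods are invented for illustration.
        from itertools import product

        p_on = {"plant_A": 0.10, "plant_B": 0.05}   # prior P(source discharging)
        q    = {"plant_A": 0.60, "plant_B": 0.80}   # P(exceedance | that source on)
        leak = 0.01                                 # baseline exceedance probability

        def p_exceed(state):
            """Noisy-OR likelihood of an exceedance given the source states."""
            p_none = 1.0 - leak
            for s, on in state.items():
                if on:
                    p_none *= 1.0 - q[s]
            return 1.0 - p_none

        # Posterior P(source on | exceedance observed), by full enumeration.
        sources = list(p_on)
        joint = {}
        for bits in product([0, 1], repeat=len(sources)):
            state = dict(zip(sources, bits))
            prior = 1.0
            for s, on in state.items():
                prior *= p_on[s] if on else 1.0 - p_on[s]
            joint[bits] = prior * p_exceed(state)

        evidence = sum(joint.values())
        for i, s in enumerate(sources):
            post = sum(p for bits, p in joint.items() if bits[i]) / evidence
            print(f"P({s} discharging | exceedance) = {post:.3f}")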

  7. Subsurface Hydrology: Data Integration for Properties and Processes

    NASA Astrophysics Data System (ADS)

    Hyndman, David W.; Day-Lewis, Frederick D.; Singha, Kamini

    Groundwater is a critical resource and the principal source of drinking water for over 1.5 billion people. In 2001, the National Research Council cited as a "grand challenge" our need to understand the processes that control water movement in the subsurface. This volume faces that challenge in terms of data integration between complex, multi-scale hydrologic processes, and their links to other physical, chemical, and biological processes at multiple scales. Subsurface Hydrology: Data Integration for Properties and Processes presents the current state of the science in four aspects: • Approaches to hydrologic data integration • Data integration for characterization of hydrologic properties • Data integration for understanding hydrologic processes • Meta-analysis of current interpretations. Scientists and researchers in the field, the laboratory, and the classroom will find this work an important resource in advancing our understanding of subsurface water movement.

  8. The impact of relative intensity noise on the signal in multiple reference optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Neuhaus, Kai; Subhash, Hrebesh; Alexandrov, Sergey; Dsouza, Roshan; Hogan, Josh; Wilson, Carol; Leahy, Martin; Slepneva, Svetlana; Huyet, Guillaume

    2016-03-01

    Multiple reference optical coherence tomography (MR-OCT) applies a unique low-cost solution to enhance the scanning depth of standard time-domain OCT by inserting a partial mirror into the reference arm of the interferometric system. This novel approach achieves multiple reflections for different layers and depths of a sample with minimal engineering effort and provides an excellent platform for low-cost OCT systems based on well-understood production methods for micro-mechanical systems such as CD/DVD pick-up systems. The direct integration of a superluminescent light-emitting diode (SLED) is a preferable solution to reduce the form factor of an MR-OCT system. Such direct integration exposes the light source to environmental conditions that can increase fluctuations in heat dissipation and vibrations and affect the noise characteristics of the output spectrum. This work describes the impact of relative intensity noise (RIN) on the quality of the interference signal of MR-OCT under a variety of environmental conditions, such as temperature.
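
    For readers unfamiliar with the metric, RIN is conventionally the power spectral density of the intensity fluctuations normalized by the squared mean optical power, quoted in dB/Hz. The sketch below estimates it from a (here synthetic) sampled photodetector trace; the sample rate, power level and noise amplitude are illustrative, and this is not the authors' analysis code:

        # Hedged sketch (not the authors' code): estimate relative intensity noise
        # from a sampled photodetector trace, RIN(f) = S_dP(f) / <P>^2 in dB/Hz.
        # The trace is synthetic; sample rate and noise levels are illustrative.
        import numpy as np

        rng = np.random.default_rng(1)
        fs = 10e6                         # sample rate [Hz]
        n = 100_000                       # number of samples
        mean_power = 1.0e-3               # mean optical power [W]
        power = mean_power + 2e-6 * rng.standard_normal(n)

        dp = power - power.mean()
        psd = np.abs(np.fft.rfft(dp)) ** 2 / (fs * n)    # one-sided PSD [W^2/Hz]
        psd[1:-1] *= 2.0                                 # fold negative frequencies
        rin_db = 10.0 * np.log10(psd / power.mean() ** 2 + 1e-300)

        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        band = (freqs > 1e4) & (freqs < 1e6)
        print(f"mean RIN, 10 kHz-1 MHz: {rin_db[band].mean():.1f} dB/Hz")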

  9. 50-GHz-spaced comb of high-dimensional frequency-bin entangled photons from an on-chip silicon nitride microresonator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imany, Poolad; Jaramillo-Villegas, Jose A.; Odele, Ogaga D.

    Quantum frequency combs from chip-scale integrated sources are promising candidates for scalable and robust quantum information processing (QIP). However, to use these quantum combs for frequency domain QIP, demonstration of entanglement in the frequency basis, showing that the entangled photons are in a coherent superposition of multiple frequency bins, is required. We present a verification of qubit and qutrit frequency-bin entanglement using an on-chip quantum frequency comb with 40 mode pairs, through a two-photon interference measurement that is based on electro-optic phase modulation. Our demonstrations provide an important contribution in establishing integrated optical microresonators as a source for high-dimensional frequency-bin encoded quantum computing, as well as dense quantum key distribution.

  10. 50-GHz-spaced comb of high-dimensional frequency-bin entangled photons from an on-chip silicon nitride microresonator

    DOE PAGES

    Imany, Poolad; Jaramillo-Villegas, Jose A.; Odele, Ogaga D.; ...

    2018-01-18

    Quantum frequency combs from chip-scale integrated sources are promising candidates for scalable and robust quantum information processing (QIP). However, to use these quantum combs for frequency domain QIP, demonstration of entanglement in the frequency basis, showing that the entangled photons are in a coherent superposition of multiple frequency bins, is required. We present a verification of qubit and qutrit frequency-bin entanglement using an on-chip quantum frequency comb with 40 mode pairs, through a two-photon interference measurement that is based on electro-optic phase modulation. Our demonstrations provide an important contribution in establishing integrated optical microresonators as a source for high-dimensional frequency-bin encoded quantum computing, as well as dense quantum key distribution.

  11. Conference proceedings of Helmet Mounted Displays and Night Vision Goggles (Visuels Montes sur le Casque et Equipments de Vision Nocturne). Held in Pensacola, Florida, on May 2, 1991

    DTIC Science & Technology

    1991-12-01

    integration. Three papers considered the ergonomics of helmet design and the snugness of fit to the head and the integration of new helmet mounted devices...with existing equipment. Two papers considered the effects of novel helmet designs on the pilot's ability to control head position and avoid fatigue. Two...the nature of information displayed, including data fused from multiple sources and the design of abstract symbologies that present parameters of flight

  12. An integrated modelling framework for neural circuits with multiple neuromodulators.

    PubMed

    Joshi, Alok; Youssofzadeh, Vahab; Vemana, Vinith; McGinnity, T M; Prasad, Girijesh; Wong-Lin, KongFatt

    2017-01-01

    Neuromodulators are endogenous neurochemicals that regulate biophysical and biochemical processes, which control brain function and behaviour, and are often the targets of neuropharmacological drugs. Neuromodulator effects are generally complex, partly owing to the involvement of broad innervation, co-release of neuromodulators, complex intra- and extrasynaptic mechanisms, the existence of multiple receptor subtypes and high interconnectivity within the brain. In this work, we propose an efficient yet sufficiently realistic computational neural modelling framework to study some of these complex behaviours. Specifically, we propose a novel dynamical neural circuit model that integrates the effective neuromodulator-induced currents based on various experimental data (e.g. electrophysiology, neuropharmacology and voltammetry). The model can incorporate multiple interacting brain regions, including neuromodulator sources, simulates efficiently and is easily extendable to large-scale brain models, e.g. for neuroimaging purposes. As an example, we model a network of mutually interacting neural populations in the lateral hypothalamus, dorsal raphe nucleus and locus coeruleus, which are major sources of the neuromodulators orexin/hypocretin, serotonin and norepinephrine/noradrenaline, respectively, and which play significant roles in regulating many physiological functions. We demonstrate that such a model can provide predictions of systemic drug effects of the popular antidepressants (e.g. reuptake inhibitors), neuromodulator antagonists or their combinations. Finally, we developed user-friendly graphical user interface software for model simulation and visualization for both fundamental sciences and pharmacological studies.
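
    A minimal sketch of the kind of mutually coupled firing-rate model the abstract describes is given below; the three populations stand in for the orexin, serotonin and norepinephrine source regions, but all coupling weights, time constants and drives are invented for illustration and are not the published model's fitted values:

        # Hedged sketch: three mutually coupled firing-rate populations standing in
        # for the orexin (LHA), serotonin (DRN) and norepinephrine (LC) sources.
        # All weights, time constants and drives are illustrative, not fitted values.
        import numpy as np

        tau = np.array([0.1, 0.1, 0.1])         # population time constants [s]
        W = np.array([[ 0.0,  0.4,  0.3],       # W[i, j]: influence of j on i
                      [ 0.5,  0.0, -0.2],
                      [ 0.6,  0.1,  0.0]])
        I_ext = np.array([1.0, 0.5, 0.4])       # tonic external drive

        def f(x):
            """Threshold-linear population activation."""
            return np.maximum(x, 0.0)

        dt, T = 1e-3, 5.0
        r = np.zeros(3)                         # population firing rates
        for _ in range(int(T / dt)):            # forward-Euler integration
            r = r + dt * (-r + f(W @ r + I_ext)) / tau

        # A crude "reuptake inhibitor" could be mimicked by scaling a column of W.
        print("steady-state rates (LHA, DRN, LC):", np.round(r, 3))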

  13. An integrated modelling framework for neural circuits with multiple neuromodulators

    PubMed Central

    Vemana, Vinith

    2017-01-01

    Neuromodulators are endogenous neurochemicals that regulate biophysical and biochemical processes, which control brain function and behaviour, and are often the targets of neuropharmacological drugs. Neuromodulator effects are generally complex, partly owing to the involvement of broad innervation, co-release of neuromodulators, complex intra- and extrasynaptic mechanisms, the existence of multiple receptor subtypes and high interconnectivity within the brain. In this work, we propose an efficient yet sufficiently realistic computational neural modelling framework to study some of these complex behaviours. Specifically, we propose a novel dynamical neural circuit model that integrates the effective neuromodulator-induced currents based on various experimental data (e.g. electrophysiology, neuropharmacology and voltammetry). The model can incorporate multiple interacting brain regions, including neuromodulator sources, simulates efficiently and is easily extendable to large-scale brain models, e.g. for neuroimaging purposes. As an example, we model a network of mutually interacting neural populations in the lateral hypothalamus, dorsal raphe nucleus and locus coeruleus, which are major sources of the neuromodulators orexin/hypocretin, serotonin and norepinephrine/noradrenaline, respectively, and which play significant roles in regulating many physiological functions. We demonstrate that such a model can provide predictions of systemic drug effects of the popular antidepressants (e.g. reuptake inhibitors), neuromodulator antagonists or their combinations. Finally, we developed user-friendly graphical user interface software for model simulation and visualization for both fundamental sciences and pharmacological studies. PMID:28100828

  14. RSAT matrix-clustering: dynamic exploration and redundancy reduction of transcription factor binding motif collections

    PubMed Central

    Jaeger, Sébastien; Thieffry, Denis

    2017-01-01

    Transcription factor (TF) databases contain multitudes of binding motifs (TFBMs) from various sources, from which non-redundant collections are derived by manual curation. The advent of high-throughput methods stimulated the production of novel collections with increasing numbers of motifs. Meta-databases, built by merging these collections, contain redundant versions, because available tools are not suited to automatically identify and explore biologically relevant clusters among thousands of motifs. Motif discovery from genome-scale data sets (e.g. ChIP-seq) also produces redundant motifs, hampering the interpretation of results. We present matrix-clustering, a versatile tool that clusters similar TFBMs into multiple trees, and automatically creates non-redundant TFBM collections. A feature unique to matrix-clustering is its dynamic visualisation of aligned TFBMs, and its capability to simultaneously treat multiple collections from various sources. We demonstrate that matrix-clustering considerably simplifies the interpretation of combined results from multiple motif discovery tools, and highlights biologically relevant variations of similar motifs. We also ran a large-scale application to cluster ∼11 000 motifs from 24 entire databases, showing that matrix-clustering correctly groups motifs belonging to the same TF families and drastically reduces motif redundancy. matrix-clustering is integrated within the RSAT suite (http://rsat.eu/), accessible through a user-friendly web interface or command line for its integration in pipelines. PMID:28591841
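
    As a rough sketch of the underlying idea (not the RSAT implementation, which aligns motifs with offsets and uses richer similarity metrics), the Python fragment below clusters toy position weight matrices by hierarchical clustering of a simple per-position correlation similarity:

        # Hedged sketch (not the RSAT implementation): group similar position
        # weight matrices (PWMs) by hierarchical clustering of a pairwise
        # similarity; the matrices and the similarity measure are simplified toys.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        pwms = {   # rows are positions, columns are A/C/G/T probabilities
            "motif_a": np.array([[.80, .10, .05, .05], [.10, .70, .10, .10], [.30, .30, .20, .20]]),
            "motif_b": np.array([[.75, .10, .10, .05], [.15, .65, .10, .10], [.35, .25, .20, .20]]),
            "motif_c": np.array([[.05, .05, .10, .80], [.10, .10, .70, .10], [.20, .20, .30, .30]]),
        }

        def similarity(p, q):
            """Mean Pearson correlation over aligned positions (no offsets)."""
            n = min(len(p), len(q))
            return float(np.mean([np.corrcoef(p[i], q[i])[0, 1] for i in range(n)]))

        names = list(pwms)
        m = len(names)
        dist = np.zeros((m, m))
        for i in range(m):
            for j in range(i + 1, m):
                d = max(0.0, 1.0 - similarity(pwms[names[i]], pwms[names[j]]))
                dist[i, j] = dist[j, i] = d

        labels = fcluster(linkage(squareform(dist), method="average"),
                          t=0.5, criterion="distance")
        print(dict(zip(names, labels)))   # motif_a and motif_b should co-cluster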

  15. Rectangular beam (5 x 40 cm) multipole ion source. M.S. Thesis - Nov. 1979; [applications to electron bombardment in materials processing]

    NASA Technical Reports Server (NTRS)

    Haynes, C. M.

    1980-01-01

    A 5 x 40 cm rectangular-beam ion source was designed and fabricated. A multipole field configuration was used to facilitate design of the modular rectangular chamber, while a three-grid ion optics system was used for increased ion current densities. For the multipole chamber, a magnetic integral of 0.000056 Tesla-m was used to contain the primary electrons. This integral value was reduced from the initial design value, with the reduction found necessary for discharge stability. The final value of magnetic integral resulted in discharge losses at typical operating conditions which ranged from 600 to 1000 eV/ion, in good agreement with the design value of 800 eV/ion. The beam current density at the ion optics was limited to about 3.2 mA/sq cm at 500 eV and to about 3.5 mA/sq cm at 1000 eV. The effects of nonuniform ion current, dimension tolerance, and grid thermal warping were considered. The use of multiple rectangular-beam ion sources to process wider areas than would be possible with a single source (approx. 40 cm) was also studied. Beam profiles were surveyed at a variety of operating conditions and the results for various amounts of beam overlap were calculated.

  16. The World Spatiotemporal Analytics and Mapping Project (WSTAMP): Discovering, Exploring, and Mapping Spatiotemporal Patterns Across Heterogenous Space-Time Data

    NASA Astrophysics Data System (ADS)

    Morton, A.; Stewart, R.; Held, E.; Piburn, J.; Allen, M. R.; McManamay, R.; Sanyal, J.; Sorokine, A.; Bhaduri, B. L.

    2017-12-01

    Spatiotemporal (ST) analytics applied to major spatio-temporal data sources from providers such as USGS, NOAA, the World Bank and the World Health Organization has tremendous value in shedding light on the evolution of physical, cultural, and geopolitical landscapes at local and global levels. Especially powerful is the integration of these physical and cultural datasets across multiple and disparate formats, facilitating new interdisciplinary analytics and insights. Realizing this potential first requires an ST data model that addresses challenges in properly merging data from multiple authors, with evolving ontological perspectives, semantic differences, changing attributes, and content that is textual, numeric, categorical, and hierarchical. Equally challenging is the development of analytical and visualization approaches that provide a serious exploration of this integrated data while remaining accessible to practitioners with varied backgrounds. The WSTAMP project at the Oak Ridge National Laboratory has yielded two major results in addressing these challenges: 1) development of the WSTAMP database, a significant advance in ST data modeling that integrates 16,000+ attributes covering 200+ countries for over 50 years from over 30 major sources and 2) a novel online ST exploratory and analysis tool providing an array of modern statistical and visualization techniques for analyzing these data temporally, spatially, and spatiotemporally under a standard analytic workflow. We report on these advances, provide an illustrative case study, and describe how others may freely access the tool.

  17. FALDO: a semantic standard for describing the location of nucleotide and protein feature annotation.

    PubMed

    Bolleman, Jerven T; Mungall, Christopher J; Strozzi, Francesco; Baran, Joachim; Dumontier, Michel; Bonnal, Raoul J P; Buels, Robert; Hoehndorf, Robert; Fujisawa, Takatomo; Katayama, Toshiaki; Cock, Peter J A

    2016-06-13

    Nucleotide and protein sequence feature annotations are essential to understand biology on the genomic, transcriptomic, and proteomic level. Although Semantic Web technologies can be used to query biological annotations, there was no standard that described this potentially complex location information as subject-predicate-object triples. We have developed an ontology, the Feature Annotation Location Description Ontology (FALDO), to describe the positions of annotated features on linear and circular sequences. FALDO can be used to describe nucleotide features in sequence records, protein annotations, and glycan binding sites, among other features in coordinate systems of the aforementioned "omics" areas. Using the same data format to represent sequence positions that are independent of file formats allows us to integrate sequence data from multiple sources and data types. The genome browser JBrowse is used to demonstrate accessing multiple SPARQL endpoints to display genomic feature annotations, as well as protein annotations from UniProt mapped to genomic locations. Our ontology allows users to uniformly describe - and potentially merge - sequence annotations from multiple sources. Data sources using FALDO can prospectively be retrieved using federated SPARQL queries against public SPARQL endpoints and/or local private triple stores.
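
    A small sketch of what FALDO-style triples look like in practice is given below, using Python's rdflib. The class and property names follow the FALDO ontology; the gene, chromosome, coordinates and the ex:location predicate are invented for illustration:

        # Hedged sketch: FALDO-style triples for one gene location via rdflib.
        # Class/property names follow the FALDO ontology; the gene, chromosome,
        # coordinates and the ex:location predicate are invented for illustration.
        from rdflib import Graph, Namespace, Literal
        from rdflib.namespace import RDF, XSD

        FALDO = Namespace("http://biohackathon.org/resource/faldo#")
        EX = Namespace("http://example.org/")

        g = Graph()
        g.bind("faldo", FALDO)
        g.bind("ex", EX)

        region, begin, end = EX.gene42_region, EX.gene42_begin, EX.gene42_end
        g.add((EX.gene42, EX.location, region))
        g.add((region, RDF.type, FALDO.Region))
        g.add((region, FALDO.begin, begin))
        g.add((region, FALDO.end, end))
        for node, pos in ((begin, 1203405), (end, 1207899)):
            g.add((node, RDF.type, FALDO.ExactPosition))
            g.add((node, RDF.type, FALDO.ForwardStrandPosition))
            g.add((node, FALDO.position, Literal(pos, datatype=XSD.integer)))
            g.add((node, FALDO.reference, EX.chr1))

        print(g.serialize(format="turtle"))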

  18. FALDO: a semantic standard for describing the location of nucleotide and protein feature annotation

    DOE PAGES

    Bolleman, Jerven T.; Mungall, Christopher J.; Strozzi, Francesco; ...

    2016-06-13

    Nucleotide and protein sequence feature annotations are essential to understand biology on the genomic, transcriptomic, and proteomic level. Although Semantic Web technologies can be used to query biological annotations, there was no standard that described this potentially complex location information as subject-predicate-object triples. In this paper, we have developed an ontology, the Feature Annotation Location Description Ontology (FALDO), to describe the positions of annotated features on linear and circular sequences. FALDO can be used to describe nucleotide features in sequence records, protein annotations, and glycan binding sites, among other features in coordinate systems of the aforementioned “omics” areas. Using the same data format to represent sequence positions that are independent of file formats allows us to integrate sequence data from multiple sources and data types. The genome browser JBrowse is used to demonstrate accessing multiple SPARQL endpoints to display genomic feature annotations, as well as protein annotations from UniProt mapped to genomic locations. Our ontology allows users to uniformly describe – and potentially merge – sequence annotations from multiple sources. Finally, data sources using FALDO can prospectively be retrieved using federated SPARQL queries against public SPARQL endpoints and/or local private triple stores.

  19. FALDO: a semantic standard for describing the location of nucleotide and protein feature annotation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolleman, Jerven T.; Mungall, Christopher J.; Strozzi, Francesco

    Nucleotide and protein sequence feature annotations are essential to understand biology on the genomic, transcriptomic, and proteomic level. Although Semantic Web technologies can be used to query biological annotations, there was no standard that described this potentially complex location information as subject-predicate-object triples. In this paper, we have developed an ontology, the Feature Annotation Location Description Ontology (FALDO), to describe the positions of annotated features on linear and circular sequences. FALDO can be used to describe nucleotide features in sequence records, protein annotations, and glycan binding sites, among other features in coordinate systems of the aforementioned “omics” areas. Using the same data format to represent sequence positions that are independent of file formats allows us to integrate sequence data from multiple sources and data types. The genome browser JBrowse is used to demonstrate accessing multiple SPARQL endpoints to display genomic feature annotations, as well as protein annotations from UniProt mapped to genomic locations. Our ontology allows users to uniformly describe – and potentially merge – sequence annotations from multiple sources. Finally, data sources using FALDO can prospectively be retrieved using federated SPARQL queries against public SPARQL endpoints and/or local private triple stores.

  20. Integrated Millimeter-Wave Frequency Multipliers

    NASA Astrophysics Data System (ADS)

    Schoenthal, Gerhard S.; Deaver, B. S.; Crowe, T. W.; Bishop, W. L.; Saini, K.; Bradley, R. F.

    2001-11-01

    Many of the molecules of interest to radio astronomers and atmospheric chemists resonate at frequencies in the millimeter and submillimeter wavelength bands. To measure the spectra of these molecules, scientists rely on heterodyne receivers that convert the high-frequency signal to the GHz band, where it is readily amplified and analyzed. One of the challenges of developing suitable receiver systems is obtaining compact, reliable and affordable sources of local oscillator power at frequencies in excess of 100 GHz. One useful solution is to use GaAs Schottky diodes, in their varactor mode, to generate high-frequency harmonics of lower-frequency sources such as Gunn oscillators. As part of a multi-national radio astronomy project, the Atacama Large Millimeter Array (ALMA), we have designed and fabricated a broadband frequency tripler with an output centered at 240 GHz. It is integrated on a quartz substrate to greatly reduce the parasitic capacitance and thereby improve electrical performance. The integrated circuit was designed to require no oxides or ohmic contacts, thereby easing fabrication. This talk will discuss the novel millimeter-wave integrated circuit fabrication process and the initial results.

  1. Comparing and Combining Data across Multiple Sources via Integration of Paired-sample Data to Correct for Measurement Error

    PubMed Central

    Huang, Yunda; Huang, Ying; Moodie, Zoe; Li, Sue; Self, Steve

    2014-01-01

    In biomedical research such as the development of vaccines for infectious diseases or cancer, measures from the same assay are often collected from multiple sources or laboratories. Measurement error that may vary between laboratories needs to be adjusted for when combining samples across laboratories. We incorporate such adjustment in comparing and combining independent samples from different labs via integration of external data, collected on paired samples from the same two laboratories. We propose: 1) normalization of individual-level data from two laboratories to the same scale via the expectation of true measurements conditional on the observed; 2) comparison of mean assay values between two independent samples in the main study, accounting for inter-source measurement error; and 3) sample size calculations for the paired-sample study so that hypothesis testing error rates are appropriately controlled in the main study comparison. Because the goal is not to estimate the true underlying measurements but to combine data on the same scale, our proposed methods do not require that the true values for the error-prone measurements are known in the external data. Simulation results under a variety of scenarios demonstrate satisfactory finite sample performance of our proposed methods when measurement errors vary. We illustrate our methods using real ELISpot assay data generated by two HIV vaccine laboratories. PMID:22764070
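
    A toy sketch of the general idea (not the authors' estimator) is given below: paired samples measured in both laboratories are used to fit a calibration, lab B's independent measurements are mapped onto lab A's scale, and only then are the main-study samples compared. All data are simulated, and the linear calibration form is an assumption:

        # Hedged sketch (not the paper's estimator): fit a linear calibration on
        # external paired samples measured in both labs, map lab B's independent
        # main-study measurements onto lab A's scale, then compare means.
        # All data are simulated; the linear form is an assumption.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)

        # External study: the same 60 specimens measured in both laboratories.
        truth = rng.normal(10.0, 2.0, 60)
        lab_a = truth + rng.normal(0.0, 0.5, 60)
        lab_b = 1.3 * truth - 2.0 + rng.normal(0.0, 0.8, 60)
        slope, intercept, *_ = stats.linregress(lab_b, lab_a)   # B -> A scale

        # Main study: independent samples, each measured in only one lab.
        main_a = rng.normal(10.0, 2.0, 80) + rng.normal(0.0, 0.5, 80)
        main_b = 1.3 * rng.normal(10.0, 2.0, 80) - 2.0 + rng.normal(0.0, 0.8, 80)
        main_b_cal = intercept + slope * main_b                 # normalized to A

        t, p = stats.ttest_ind(main_a, main_b_cal, equal_var=False)
        print(f"Welch t = {t:.2f}, p = {p:.3f} (means agree after calibration)")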

  2. A New Architecture for Visualization: Open Mission Control Technologies

    NASA Technical Reports Server (NTRS)

    Trimble, Jay

    2017-01-01

    Open Mission Control Technologies (MCT) is a new architecture for visualisation of mission data. Driven by requirements for new mission capabilities, including distributed mission operations, access to data anywhere, customization by users, synthesis of multiple data sources, and flexibility for multi-mission adaptation, Open MCT provides users with an integrated, customizable environment. Developed at NASA's Ames Research Center (ARC), in collaboration with NASA's Advanced Multimission Operations System (AMMOS) and NASA's Jet Propulsion Laboratory (JPL), Open MCT is getting its first mission use on the Jason 3 Mission, and is also available in the testbed for the Mars 2020 Rover and for development use for NASA's Resource Prospector Lunar Rover. The open-source nature of the project provides for use outside of space missions, including open-source contributions from a community of users. The defining features of Open MCT for mission users are data integration, end-user composition and multiple views. Data integration provides access to mission data across domains in one place, making data such as activities, timelines, telemetry, imagery, event timers and procedures available without application switching. End-user composition provides users with layouts, which act as a canvas to assemble visualisations. Multiple views provide the capability to view the same data in different ways, with live switching of data views in place. Open MCT is browser based, and works on the desktop as well as tablets and phones, providing access to data anywhere. An early use case for mobile data access took place on the Resource Prospector (RP) Mission Distributed Operations Test, in which rover engineers in the field were able to view telemetry on their phones. We envision this capability providing decision support to on-console operators from off-duty personnel. The plug-in architecture also allows for adaptation for different mission capabilities. Different data types and capabilities may be added or removed using plugins. An API provides a means to write new capabilities and to create data adaptors. Data plugins exist for mission data sources for NASA missions. Adaptors have been written by international and commercial users. Open MCT is open source. Open source enables collaborative development across organizations and also makes the product available outside of the space community, providing a potential source of usage and ideas to drive product design and development. The combination of open source with an Apache 2 license, and distribution on GitHub, has enabled an active community of users and contributors. The spectrum of users for Open MCT is, to our knowledge, unprecedented for mission software. In addition to our NASA users, we have, through open source, had users and inquiries on projects ranging from the Internet of Things, to radio hobbyists, to farming projects. We have an active community of contributors, enabling a flow of ideas inside and outside of the space community.

  3. Unsupervised multiple kernel learning for heterogeneous data integration.

    PubMed

    Mariette, Jérôme; Villa-Vialaneix, Nathalie

    2018-03-15

    Recent high-throughput sequencing advances have expanded the breadth of available omics datasets, and the integrated analysis of multiple datasets obtained on the same samples has yielded important insights in a wide range of applications. However, the integration of various sources of information remains a challenge for systems biology, since the produced datasets are often of heterogeneous types, creating the need for generic methods that take their different specificities into account. We propose a multiple kernel framework that allows the integration of multiple datasets of various types into a single exploratory analysis. Several solutions are provided to learn either a consensus meta-kernel or a meta-kernel that preserves the original topology of the datasets. We applied our framework to analyse two public multi-omics datasets. First, the multiple metagenomic datasets collected during the TARA Oceans expedition were explored to demonstrate that our method is able to retrieve previous findings in a single kernel PCA as well as to provide a new image of the sample structures when a larger number of datasets are included in the analysis. To perform this analysis, a generic procedure is also proposed to improve the interpretability of the kernel PCA with respect to the original data. Second, the multi-omics breast cancer datasets provided by The Cancer Genome Atlas are analysed using kernel Self-Organizing Maps with both single- and multi-omics strategies. The comparison of these two approaches demonstrates the benefit of our integration method for improving the representation of the studied biological system. The proposed methods are available in the R package mixKernel, released on CRAN. It is fully compatible with the mixOmics package and a tutorial describing the approach can be found on the mixOmics web site http://mixomics.org/mixkernel/. jerome.mariette@inra.fr or nathalie.villa-vialaneix@inra.fr. Supplementary data are available at Bioinformatics online.
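
    The released package is in R, so the Python fragment below is only a compact sketch of the consensus meta-kernel idea under simplified choices (RBF kernels, unit-trace normalization, plain averaging): each dataset is turned into a centered, scaled kernel on the same samples, the kernels are averaged, and kernel PCA is run on the result:

        # Hedged sketch of consensus multiple-kernel integration (the actual
        # mixKernel package is R; this Python toy only illustrates the idea).
        import numpy as np

        rng = np.random.default_rng(0)
        n = 50
        view1 = rng.normal(size=(n, 10))    # e.g. metagenomic abundances
        view2 = rng.normal(size=(n, 5))     # e.g. environmental covariates

        def rbf_kernel(X):
            sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            return np.exp(-sq / X.shape[1])

        def center_and_scale(K):
            J = np.eye(n) - np.ones((n, n)) / n     # double centering
            Kc = J @ K @ J
            return Kc / np.trace(Kc)                # unit-trace normalization

        kernels = [center_and_scale(rbf_kernel(v)) for v in (view1, view2)]
        K_meta = sum(kernels) / len(kernels)        # consensus meta-kernel

        # Kernel PCA: leading eigenpairs of the meta-kernel give the coordinates.
        vals, vecs = np.linalg.eigh(K_meta)
        coords = vecs[:, -2:][:, ::-1] * np.sqrt(np.clip(vals[-2:][::-1], 0, None))
        print("first two kernel-PC coordinates of three samples:", coords[:3])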

  4. Common Calibration Source for Monitoring Long-term Ozone Trends

    NASA Technical Reports Server (NTRS)

    Kowalewski, Matthew

    2004-01-01

    Accurate long-term satellite measurements are crucial for monitoring the recovery of the ozone layer. The slow pace of the recovery and the limited lifetimes of satellite monitoring instruments demand that datasets from multiple observation systems be combined to provide the long-term accuracy needed. A fundamental component of accurately monitoring long-term trends is the calibration of these various instruments. NASA's Radiometric Calibration and Development Facility at the Goddard Space Flight Center has provided resources to minimize calibration biases between multiple instruments through the use of a common calibration source and standardized procedures traceable to national standards. The Facility's 50 cm barium sulfate integrating sphere has been used as a common calibration source for both US and international satellite instruments, including the Total Ozone Mapping Spectrometer (TOMS), Solar Backscatter Ultraviolet 2 (SBUV/2) instruments, Shuttle SBUV (SSBUV), Ozone Monitoring Instrument (OMI), Global Ozone Monitoring Experiment (GOME) (ESA), Scanning Imaging SpectroMeter for Atmospheric ChartographY (SCIAMACHY) (ESA), and others. We will discuss the advantages of using a common calibration source and its effects on long-term ozone data sets. In addition, sphere calibration results from various instruments will be presented to demonstrate the accuracy of the long-term characterization of the source itself.

  5. Attenuation and bit error rate for four co-propagating spatially multiplexed optical communication channels of exactly same wavelength in step index multimode fibers

    NASA Astrophysics Data System (ADS)

    Murshid, Syed H.; Chakravarty, Abhijit

    2011-06-01

    Spatial domain multiplexing (SDM) utilizes co-propagation of exactly the same wavelength in optical fibers to increase the bandwidth by integer multiples. Input signals from multiple independent single-mode pigtail laser sources are launched at different input angles into a single multimode carrier fiber. The SDM channels follow helical paths and traverse the carrier fiber without interfering with each other. The optical energy from the different sources is spatially distributed and takes the form of concentric circular donut-shaped rings, where each ring corresponds to an independent laser source. At the output end of the fiber these donut-shaped independent channels can be separated either with the help of bulk optics or integrated concentric optical detectors. This paper presents the experimental setup and results for a four-channel SDM system. The attenuation and bit error rate for individual channels of such a system are also presented.

  6. JBioWH: an open-source Java framework for bioinformatics data integration

    PubMed Central

    Vera, Roberto; Perez-Riverol, Yasset; Perez, Sonia; Ligeti, Balázs; Kertész-Farkas, Attila; Pongor, Sándor

    2013-01-01

    The Java BioWareHouse (JBioWH) project is an open-source platform-independent programming framework that allows a user to build his/her own integrated database from the most popular data sources. JBioWH can be used for intensive querying of multiple data sources and the creation of streamlined task-specific data sets on local PCs. JBioWH is based on a MySQL relational database scheme and includes JAVA API parser functions for retrieving data from 20 public databases (e.g. NCBI, KEGG, etc.). It also includes a client desktop application for (non-programmer) users to query data. In addition, JBioWH can be tailored for use in specific circumstances, including the handling of massive queries for high-throughput analyses or CPU intensive calculations. The framework is provided with complete documentation and application examples and it can be downloaded from the Project Web site at http://code.google.com/p/jbiowh. A MySQL server is available for demonstration purposes at hydrax.icgeb.trieste.it:3307. Database URL: http://code.google.com/p/jbiowh PMID:23846595

  7. JBioWH: an open-source Java framework for bioinformatics data integration.

    PubMed

    Vera, Roberto; Perez-Riverol, Yasset; Perez, Sonia; Ligeti, Balázs; Kertész-Farkas, Attila; Pongor, Sándor

    2013-01-01

    The Java BioWareHouse (JBioWH) project is an open-source platform-independent programming framework that allows a user to build his/her own integrated database from the most popular data sources. JBioWH can be used for intensive querying of multiple data sources and the creation of streamlined task-specific data sets on local PCs. JBioWH is based on a MySQL relational database scheme and includes JAVA API parser functions for retrieving data from 20 public databases (e.g. NCBI, KEGG, etc.). It also includes a client desktop application for (non-programmer) users to query data. In addition, JBioWH can be tailored for use in specific circumstances, including the handling of massive queries for high-throughput analyses or CPU intensive calculations. The framework is provided with complete documentation and application examples and it can be downloaded from the Project Web site at http://code.google.com/p/jbiowh. A MySQL server is available for demonstration purposes at hydrax.icgeb.trieste.it:3307. Database URL: http://code.google.com/p/jbiowh.

  8. Improving sensor data analysis through diverse data source integration

    NASA Astrophysics Data System (ADS)

    Casper, Jennifer; Albuquerque, Ronald; Hyland, Jeremy; Leveille, Peter; Hu, Jing; Cheung, Eddy; Mauer, Dan; Couture, Ronald; Lai, Barry

    2009-05-01

    Daily sensor data volumes are increasing from gigabytes to multiple terabytes. The manpower and resources needed to analyze the increasing amount of data are not growing at the same rate. Current volumes of diverse data, both live streaming and historical, are not fully analyzed. Analysts are left mostly to analyze individual data sources manually. This is both time consuming and mentally exhausting. Expanding data collections only exacerbate this problem. Improved data management techniques and analysis methods are required to process the increasing volumes of historical and live streaming data sources simultaneously. Improved techniques are also needed to reduce an analyst's decision response time and to enable more intelligent and immediate situation awareness. This paper describes the Sensor Data and Analysis Framework (SDAF) system, built to provide analysts with the ability to pose integrated queries on diverse live and historical data sources, and to plug in needed algorithms for upstream processing and filtering. The SDAF system was inspired by input and feedback from field analysts and experts. This paper presents SDAF's capabilities, implementation, and the reasoning behind implementation decisions. Finally, lessons learned from preliminary tests and deployments are captured for future work.

  9. Boosting probabilistic graphical model inference by incorporating prior knowledge from multiple sources.

    PubMed

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal-to-noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior for use in graphical model inference. Our first model, called the Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for the use of diverse information sources, such as pathway databases, GO terms and protein domain data, and is flexible enough to integrate new sources, if available.
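
    The Noisy-OR combination mentioned above is compact enough to sketch directly: if source i reports support s_i in [0, 1] for a candidate edge and has reliability q_i, the consensus prior confidence is 1 - prod_i (1 - q_i * s_i). The reliabilities and scores below are illustrative, not learned values:

        # Hedged sketch of a Noisy-OR consensus prior for one candidate edge.
        # Reliabilities and support scores are illustrative, not learned values.
        def noisy_or(supports, reliabilities, leak=0.0):
            """P(edge) = 1 - (1 - leak) * prod_i (1 - q_i * s_i)."""
            p_off = 1.0 - leak
            for s, q in zip(supports, reliabilities):
                p_off *= 1.0 - q * s
            return 1.0 - p_off

        # Support for an edge A -> B from three sources: a pathway database,
        # GO-term co-annotation and shared protein domains (scores in [0, 1]).
        supports = [0.9, 0.4, 0.0]
        reliabilities = [0.8, 0.5, 0.6]
        print(f"consensus prior P(A->B) = {noisy_or(supports, reliabilities):.3f}")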

  10. Preschoolers' Use of Morphosyntactic Cues to Identify Generic Sentences: Indefinite Singular Noun Phrases, Tense, and Aspect

    ERIC Educational Resources Information Center

    Cimpian, Andrei; Meltzer, Trent J.; Markman, Ellen M.

    2011-01-01

    Generic sentences (e.g., "Birds lay eggs") convey generalizations about entire categories and may thus be an important source of knowledge for children. However, these sentences cannot be identified by a simple rule, requiring instead the integration of multiple cues. The present studies focused on 3- to 5-year-olds' (N = 91) use of…

  11. The Integrated Epidemiologic Profile: Using Multiple Data Sources in Developing Profiles to Inform HIV Prevention and Care Planning

    ERIC Educational Resources Information Center

    Whitmore, Suzanne K.; Zaidi, Irum F.; Dean, Hazel D.

    2005-01-01

    HIV/AIDS epidemiologic profiles describe the HIV/AIDS epidemic among state and local populations. The Centers for Disease Control and Prevention and the Health Resources Services Administration collaborated to develop one set of guidelines for developing epidemiologic profiles that would serve as the basis for both prevention and care planning.…

  12. Air Force Technical Objective Document, FY89.

    DTIC Science & Technology

    1988-04-01

    threat warning; multimegawatt stand-off jammers; a family of new, broadband, active decoy expendables; E4? subsystems and EW suites for Military...and monolithic integrated circuits. (3) Microwave TWTs: Develop microwave tube technology and selected thermionic power sources and amplifiers for ECM...Improved design reliability and multiple application of tube technology are stressed. Improve Traveling Wave Tube (TWT) reliability by instrumenting a TWT

  13. Integrating Multiple Knowledge Sources for Utterance-Level Confidence Annotation in the CMU Communicator Spoken Dialog System

    DTIC Science & Technology

    2002-11-01

    Wilson, Rong Zhang for their collaboration on the first part of this work. We would also like to thank Tania Liebowitz and Tina Bennett for their help in...Regression”, Wiley Series in Probability and Statistics, 2000 [32] Walker M.A., Litman D.J., Kamm C.A., Abella A. “PARADISE: A Framework for Evaluating

  14. A Comparison of Methods to Screen Middle School Students for Reading and Math Difficulties

    ERIC Educational Resources Information Center

    Nelson, Peter M.; Van Norman, Ethan R.; Lackner, Stacey K.

    2016-01-01

    The current study explored multiple ways in which middle schools can use and integrate data sources to predict proficiency on future high-stakes state achievement tests. The diagnostic accuracy of (a) prior achievement data, (b) teacher rating scale scores, (c) a composite score combining state test scores and rating scale responses, and (d) two…

  15. Total-dose radiation effects data for semiconductor devices, volume 3

    NASA Technical Reports Server (NTRS)

    Price, W. E.; Martin, K. E.; Nichols, D. K.; Gauthier, M. K.; Brown, S. F.

    1982-01-01

    Volume 3 of this three-volume set provides a detailed analysis of the data in Volumes 1 and 2, most of which was generated for the Galileo Orbiter Program in support of NASA space programs. Volume 1 includes total ionizing dose radiation test data on diodes, bipolar transistors, field effect transistors, and miscellaneous discrete solid-state devices. Volume 2 includes similar data on integrated circuits and a few large-scale integrated circuits. The data of Volumes 1 and 2 are combined in graphic format in Volume 3 to provide a comparison of radiation sensitivities of devices of a given type from different manufacturers, a comparison of multiple tests for a single date code, a comparison of multiple tests for a single lot, and a comparison of radiation sensitivities vs. time (date codes). All data were generated using a steady-state 2.5-MeV electron source (Dynamitron) or a Cobalt-60 gamma ray source. The data that compose Volume 3 represent 26 different device types, 224 tests, and a total of 1040 devices. A comparison of the effects of steady-state electrons and Cobalt-60 gamma rays is also presented.

  16. A single-sided homogeneous Green's function representation for holographic imaging, inverse scattering, time-reversal acoustics and interferometric Green's function retrieval

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; Thorbecke, Jan; van der Neut, Joost

    2016-04-01

    Green's theorem plays a fundamental role in a diverse range of wavefield imaging applications, such as holographic imaging, inverse scattering, time-reversal acoustics and interferometric Green's function retrieval. In many of those applications, the homogeneous Green's function (i.e. the Green's function of the wave equation without a singularity on the right-hand side) is represented by a closed boundary integral. In practical applications, sources and/or receivers are usually present only on an open surface, which implies that a significant part of the closed boundary integral is by necessity ignored. Here we derive a homogeneous Green's function representation for the common situation that sources and/or receivers are present on an open surface only. We modify the integrand in such a way that it vanishes on the part of the boundary where no sources and receivers are present. As a consequence, the remaining integral along the open surface is an accurate single-sided representation of the homogeneous Green's function. This single-sided representation accounts for all orders of multiple scattering. The new representation significantly improves the aforementioned wavefield imaging applications, particularly in situations where the first-order scattering approximation breaks down.
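
    For reference, one commonly used acoustic form of the classical closed-boundary representation reads as follows (hedged: notation and sign conventions vary between papers; here G_h = G + G* is the homogeneous Green's function, rho the mass density, and the derivative is taken along the outward normal of the closed boundary):

      G_h(x_A, x_B, \omega)
        = G(x_A, x_B, \omega) + G^{*}(x_A, x_B, \omega)
        = \oint_{\partial\mathbb{D}} \frac{-1}{j\omega\rho(x)}
          \left( \frac{\partial G(x, x_A, \omega)}{\partial n}\, G^{*}(x, x_B, \omega)
               - G(x, x_A, \omega)\, \frac{\partial G^{*}(x, x_B, \omega)}{\partial n} \right) \mathrm{d}^{2}x

    The single-sided representation derived in the paper modifies the integrand so that this integral can be restricted to the open part of the boundary where sources and receivers actually exist.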

  17. Latent Heating Retrieval from TRMM Observations Using a Simplified Thermodynamic Model

    NASA Technical Reports Server (NTRS)

    Grecu, Mircea; Olson, William S.

    2003-01-01

    A procedure for the retrieval of hydrometeor latent heating from TRMM active and passive observations is presented. The procedure is based on current methods for estimating multiple-species hydrometeor profiles from TRMM observations. The species include: cloud water, cloud ice, rain, and graupel (or snow). A three-dimensional wind field is prescribed based on the retrieved hydrometeor profiles, and, assuming a steady state, the sources and sinks in the hydrometeor conservation equations are determined. Then, the momentum and thermodynamic equations, in which the heating and cooling are derived from the hydrometeor sources and sinks, are integrated one step forward in time. The hydrometeor sources and sinks are reevaluated based on the new wind field, and the momentum and thermodynamic equations are integrated one more step. The reevaluation-integration process is repeated until a steady state is reached. The procedure is tested using cloud model simulations. Cloud-model-derived fields are used to synthesize TRMM observations, from which hydrometeor profiles are derived. The procedure is applied to the retrieved hydrometeor profiles, and the latent heating estimates are compared to the actual latent heating produced by the cloud model. Examples of the procedure's application to real TRMM data are also provided.
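
    The reevaluation-integration loop is a fixed-point iteration. The toy sketch below (illustrative stand-ins only; the real procedure steps full three-dimensional momentum and thermodynamic equations) shows the control flow: recompute sources and sinks from the current state, derive heating, integrate one step, and stop when the state no longer changes:

      def sources_and_sinks(q, w):
          # Toy stand-in: net condensation grows with water content and updraft.
          return 0.2 * q + 0.1 * w

      def step_dynamics(w, heating, dt):
          # Toy stand-in: heating drives the updraft, drag damps it.
          return w + dt * (heating - 0.5 * w)

      def retrieve_heating(q, w0, dt=0.5, tol=1e-9, max_iter=10000):
          w = w0
          for _ in range(max_iter):
              heating = sources_and_sinks(q, w)       # reevaluate sources/sinks
              w_new = step_dynamics(w, heating, dt)   # integrate one step
              if abs(w_new - w) < tol:                # steady state reached
                  return heating, w_new
              w = w_new
          raise RuntimeError("no steady state reached")

      print(retrieve_heating(q=1.0, w0=0.0))  # converges to (0.25, 0.5)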

  18. CarthaGene: multipopulation integrated genetic and radiation hybrid mapping.

    PubMed

    de Givry, Simon; Bouchez, Martin; Chabrier, Patrick; Milan, Denis; Schiex, Thomas

    2005-04-15

    CarthaGene is an integrated genetic and radiation hybrid (RH) mapping tool which can deal with multiple populations, including mixtures of genetic and RH data. CarthaGene performs multipoint maximum likelihood estimation with accelerated expectation-maximization algorithms for some pedigrees and has sophisticated algorithms for marker ordering. Dedicated heuristics for framework mapping are also included. CarthaGene can be used as a C++ library, through a shell command and a graphical interface. XML output for companion tools is integrated. The program is available free of charge from www.inra.fr/bia/T/CarthaGene for Linux, Windows and Solaris machines (with open source). tschiex@toulouse.inra.fr.

  19. Function of basal ganglia in bridging cognitive and motor modules to perform an action

    PubMed Central

    Nagano-Saito, Atsuko; Martinu, Kristina; Monchi, Oury

    2014-01-01

    The basal ganglia (BG) are thought to be involved in the integration of multiple sources of information, and their dysfunction can lead to disorders such as Parkinson's disease (PD). PD patients show motor and cognitive dysfunction, with specific impairments in the internal generation of motor actions and executive deficits, respectively. The role of the BG, then, would be to integrate information from several sources in order to make a decision on a resulting action adequate for the required task. Reanalyzing the data set from our previous study (Martinu et al., 2012), we investigated this hypothesis by applying a graph theory method to a series of fMRI data obtained from healthy volunteers (HV) and early-stage PD patients during the performance of self-initiated (SI) finger movement tasks. Dorsally, connectivity strength between the medial prefrontal areas (mPFC) and cortical regions including the primary motor area (M1), the extrastriate visual cortex, and the associative cortex was reduced in the PD patients. The connectivity strengths were positively correlated with activity in the striatum in both groups. Ventrally, all connectivity between the striatum, the thalamus, and the extrastriate visual cortex decreased in strength in the PD patients, as did the connectivity between the striatum and the ventrolateral PFC (VLPFC). Individual response time (RT) was negatively correlated with connectivity strength between the dorsolateral PFC (DLPFC) and the striatum and positively correlated with connectivity between the VLPFC and the striatum in the HV. These results indicate that the BG, with the mPFC and thalamus, are involved in integrating multiple sources of information from areas such as the DLPFC and VLPFC, connecting to M1, thereby determining a network that leads to the adequate decision and performance of the resulting action. PMID:25071432

  20. Integrating heterogeneous earth observation data for assessment of high-resolution inundation boundaries generated during flood emergencies.

    NASA Astrophysics Data System (ADS)

    Sava, E.; Cervone, G.; Kalyanapu, A. J.; Sampson, K. M.

    2017-12-01

    The increasing trend in flooding events, paired with rapid urbanization and an aging infrastructure, is projected to enhance the risk of catastrophic losses and increase the frequency of both flash and large-area floods. During such events, it is critical for decision makers and emergency responders to have access to timely, actionable knowledge regarding preparedness, emergency response, and recovery before, during, and after a disaster. Large volumes of data derived from sophisticated sensors, mobile phones, and social media feeds are increasingly being used to improve citizen services and provide clues to the best way to respond to emergencies through the use of visualization and GIS mapping. Such data, coupled with recent advancements in techniques for fusing remote sensing with near-real-time heterogeneous datasets, have allowed decision makers to more efficiently extract precise and relevant knowledge and better understand how damage caused by disasters has real-time effects on urban populations. This research assesses the feasibility of integrating multiple sources of contributed data into hydrodynamic models for flood inundation simulation and damage assessment. It integrates multiple sources of high-resolution physiographic data, such as satellite remote sensing imagery, coupled with non-authoritative data such as Civil Air Patrol (CAP) and 'during-event' social media observations of flood inundation, in order to improve flood mapping. The goal is to augment remote sensing imagery with new open-source datasets to generate flood extent maps at higher temporal and spatial resolution. The proposed methodology is applied to two test cases: the 2013 Boulder, Colorado flood and the 2015 floods in Texas.

  1. Continuous integration for concurrent MOOSE framework and application development on GitHub

    DOE PAGES

    Slaughter, Andrew E.; Peterson, John W.; Gaston, Derek R.; ...

    2015-11-20

    For the past several years, Idaho National Laboratory’s MOOSE framework team has employed modern software engineering techniques (continuous integration, joint application/framework source code repositories, automated regression testing, etc.) in developing closed-source multiphysics simulation software (Gaston et al., Journal of Open Research Software vol. 2, article e10, 2014). In March 2014, the MOOSE framework was released under an open source license on GitHub, significantly expanding and diversifying the pool of current active and potential future contributors on the project. Despite this recent growth, the same philosophy of concurrent framework and application development continues to guide the project’s development roadmap. Several specific practices, including techniques for managing multiple repositories, conducting automated regression testing, and implementing a cascading build process are discussed in this short paper. Furthermore, special attention is given to describing the manner in which these practices naturally synergize with the GitHub API and GitHub-specific features such as issue tracking, Pull Requests, and project forks.

  2. Continuous integration for concurrent MOOSE framework and application development on GitHub

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaughter, Andrew E.; Peterson, John W.; Gaston, Derek R.

    For the past several years, Idaho National Laboratory’s MOOSE framework team has employed modern software engineering techniques (continuous integration, joint application/framework source code repositories, automated regression testing, etc.) in developing closed-source multiphysics simulation software (Gaston et al., Journal of Open Research Software vol. 2, article e10, 2014). In March 2014, the MOOSE framework was released under an open source license on GitHub, significantly expanding and diversifying the pool of current active and potential future contributors on the project. Despite this recent growth, the same philosophy of concurrent framework and application development continues to guide the project’s development roadmap. Several specific practices, including techniques for managing multiple repositories, conducting automated regression testing, and implementing a cascading build process are discussed in this short paper. Furthermore, special attention is given to describing the manner in which these practices naturally synergize with the GitHub API and GitHub-specific features such as issue tracking, Pull Requests, and project forks.

  3. Developing a Domain Ontology: the Case of Water Cycle and Hydrology

    NASA Astrophysics Data System (ADS)

    Gupta, H.; Pozzi, W.; Piasecki, M.; Imam, B.; Houser, P.; Raskin, R.; Ramachandran, R.; Martinez Baquero, G.

    2008-12-01

    A semantic web ontology enables semantic data integration and semantic smart searching. Several organizations have attempted to implement smart registration, integration, or searching using ontologies, including the NOESIS (NSF project: LEAD) and HydroSeek (NSF project: CUAHSI HIS) data discovery engines and the NSF project GEON. All three applications use ontologies to discover data from multiple sources and projects. The NASA WaterNet project was established to identify creative, innovative ways to bridge NASA research results to real-world applications, linking decision support needs to available data, observations, and modeling capability. WaterNet utilized the smart query tool Noesis as a testbed to test whether different ontologies (and different catalog searches) could be combined to match resources with user needs. NOESIS contains the upper-level SWEET ontology, which accepts plug-in domain ontologies to refine user search queries, reducing the burden of multiple keyword searches. Another smart search interface is HydroSeek, developed for CUAHSI, which uses a multi-layered concept search ontology, tagging variable names from any number of data sources to specific leaf and higher-level concepts on which the search is executed. This approach has proven quite successful in mitigating semantic heterogeneity, as the user does not need to know the semantic specifics of each data source system but just uses a set of common keywords to discover the data for a specific temporal and geospatial domain. This presentation will show how tests with Noesis and HydroSeek lead to the conclusion that the construction of a complex and highly heterogeneous water cycle ontology requires multiple ontology modules. To illustrate the complexity and heterogeneity of a water cycle ontology, HydroSeek successfully utilizes WaterOneFlow to integrate data across multiple different data collections, such as USGS NWIS. However, different methodologies are employed by the Earth Science, Hydrological, and Hydraulic Engineering communities, and each community employs models that require different input data. If a sub-domain ontology is created for each of these, describing water balance calculations, then the resulting structure of the semantic network describing these various terms can be rather complex, heterogeneous, and overlapping, and will require "mapping" between equivalent terms in the ontologies, along with the development of an upper-level conceptual or domain ontology to utilize and link to those already in existence.

  4. Analysis and Application of Microgrids

    NASA Astrophysics Data System (ADS)

    Yue, Lu

    New trends of generating electricity locally and utilizing non-conventional or renewable energy sources have attracted increasing interest due to the gradual depletion of conventional fossil fuel energy sources. This new type of power generation is called Distributed Generation (DG), and the energy sources utilized by Distributed Generation are termed Distributed Energy Sources (DERs). With DGs embedded in the distribution networks, they evolve from passive distribution networks to active distribution networks enabling bidirectional power flows. Further incorporating flexible and intelligent controllers and employing future technologies, active distribution networks will turn into a Microgrid. A Microgrid is a small-scale, low-voltage Combined Heat and Power (CHP) supply network designed to supply electrical and heat loads for a small community. To further implement Microgrids, a sophisticated Microgrid Management System must be integrated. However, because a Microgrid has multiple DERs integrated and is likely to be deregulated, the ability to perform real-time OPF and economic dispatch over a fast, advanced communication network is necessary. In this thesis, first, problems such as power system modelling, power flow solving, and power system optimization are studied. Then, Distributed Generation and Microgrids are studied and reviewed, including a comprehensive review of current distributed generation technologies and Microgrid Management Systems. Finally, a computer-based AC optimization method which minimizes the total transmission loss and generation cost of a Microgrid is proposed, along with a wireless communication scheme based on synchronized Code Division Multiple Access (sCDMA). The algorithm is tested with a 6-bus power system and a 9-bus power system.

  5. NOAA's Approach to Community Building and Governance for Data Integration and Standards Within IOOS

    NASA Astrophysics Data System (ADS)

    Willis, Z.; Shuford, R.

    2007-12-01

    This presentation will review NOAA's current approach to the Integrated Ocean Observing System (IOOS) at a national and regional level within the context of our United States Federal and Non-Federal partners. Further, it will discuss the context of integrating data and the necessary standards definition that must be done not only within the United States but in a larger global context. IOOS is the U.S. contribution to the Global Ocean Observing System (GOOS), which itself is the ocean contribution to the Global Earth Observation System of Systems (GEOSS). IOOS is a nationally important network of distributed systems that forms an infrastructure providing many different users with the diverse information they require to characterize, understand, predict, and monitor changes in dynamic coastal and open ocean environments. NOAA recently established an IOOS Program Office to provide a focal point for its ocean observation programs and assist with coordination of regional and national IOOS activities. One of the Program's initial priorities is the development of a data integration framework (DIF) proof-of-concept for IOOS data. The initial effort will focus on NOAA sources of data and be implemented incrementally over the course of three years. The first phase will focus on the integration of five core IOOS variables being collected, and disseminated, for independent purposes and goals by multiple NOAA observing sources. The goal is to ensure that data from different sources is interoperable to enable rapid and routine use by multiple NOAA decision-support tool developers and other end users. During the second phase we expect to ingest these integrated variables into four specific NOAA data products used for decision-support. Finally, we will systematically test and evaluate enhancements to these products, and verify, validate, and benchmark new performance specifications. The outcome will be an extensible product for operational use that allows for broader community applicability to include additional variables, applications, and non-NOAA sources of data. NOAA is working with Ocean.US to implement an interagency process for the submission, proposal, and recommendation of IOOS data standards. In order to achieve the broader goals of data interoperability of GEOSS, communication of this process and the identified standards needs to be coordinated at the international level. NOAA is participating in the development of a series of IODE workshops with the objective to achieve broad agreement and commitment to ocean data management and exchange standards. The first of these meetings will use the five core variables identified by the NOAA DIF as a focus.

  6. Measurement and Analysis of Multiple Output Transient Propagation in BJT Analog Circuits

    NASA Astrophysics Data System (ADS)

    Roche, Nicolas J.-H.; Khachatrian, A.; Warner, J. H.; Buchner, S. P.; McMorrow, D.; Clymer, D. A.

    2016-08-01

    The propagation of Analog Single Event Transients (ASETs) to multiple outputs of Bipolar Junction Transistor (BJT) Integrated Circuits (ICs) is reported for the first time. The results demonstrate that ASETs can appear at several outputs of a BJT amplifier or comparator as a result of a single ion or single laser pulse strike at a single physical location on the chip of a large-scale integrated BJT analog circuit. This is independent of interconnect cross-talk or charge-sharing effects. Laser experiments, together with SPICE simulations and analysis of the ASET's propagation in the s-domain, are used to explain how multiple-output transients (MOTs) are generated and propagate in the device. This study demonstrates that both the charge collection associated with an ASET and the ASET's shape, commonly used to characterize the propagation of SETs in devices and systems, are unable to explain quantitatively how MOTs propagate through an integrated analog circuit. The analysis methodology adopted here combines the Fourier transform of the propagating signal with the current-source transfer function in the s-domain. This approach reveals the mechanisms involved in the transient signal propagation from its point of generation to one or more outputs without the signal following a continuous interconnect path.
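
    The frequency-domain step of such an analysis can be illustrated in a few lines (assumed pulse shape, sample rate, and a toy one-pole transfer function; the paper's transfer functions come from device-level SPICE models): the output transient is the inverse transform of the input spectrum multiplied by the transfer function evaluated at s = j*omega:

      import numpy as np

      # Assumed pulse shape, sample rate, and a toy one-pole transfer function;
      # the device transfer functions in the paper are extracted from SPICE.
      fs = 1e9                                   # 1 GHz sampling (assumption)
      t = np.arange(4096) / fs
      pulse = np.exp(-t / 2e-9) - np.exp(-t / 0.2e-9)   # double-exponential SET

      f = np.fft.rfftfreq(t.size, d=1 / fs)
      s = 2j * np.pi * f                         # evaluate H(s) on s = j*omega
      H = 1.0 / (1.0 + s * 5e-9)                 # one-pole path to a second output

      out = np.fft.irfft(np.fft.rfft(pulse) * H, n=t.size)
      print(pulse.max(), out.max())              # propagated transient is
                                                 # attenuated and stretched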

  7. Unified double- and single-sided homogeneous Green’s function representations

    PubMed Central

    van der Neut, Joost; Slob, Evert

    2016-01-01

    In wave theory, the homogeneous Green’s function consists of the impulse response to a point source, minus its time-reversal. It can be represented by a closed boundary integral. In many practical situations, the closed boundary integral needs to be approximated by an open boundary integral because the medium of interest is often accessible from one side only. The inherent approximations are acceptable as long as the effects of multiple scattering are negligible. However, in case of strongly inhomogeneous media, the effects of multiple scattering can be severe. We derive double- and single-sided homogeneous Green’s function representations. The single-sided representation applies to situations where the medium can be accessed from one side only. It correctly handles multiple scattering. It employs a focusing function instead of the backward propagating Green’s function in the classical (double-sided) representation. When reflection measurements are available at the accessible boundary of the medium, the focusing function can be retrieved from these measurements. Throughout the paper, we use a unified notation which applies to acoustic, quantum-mechanical, electromagnetic and elastodynamic waves. We foresee many interesting applications of the unified single-sided homogeneous Green’s function representation in holographic imaging and inverse scattering, time-reversed wave field propagation and interferometric Green’s function retrieval. PMID:27436983

  8. Unified double- and single-sided homogeneous Green's function representations

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; van der Neut, Joost; Slob, Evert

    2016-06-01

    In wave theory, the homogeneous Green's function consists of the impulse response to a point source, minus its time-reversal. It can be represented by a closed boundary integral. In many practical situations, the closed boundary integral needs to be approximated by an open boundary integral because the medium of interest is often accessible from one side only. The inherent approximations are acceptable as long as the effects of multiple scattering are negligible. However, in case of strongly inhomogeneous media, the effects of multiple scattering can be severe. We derive double- and single-sided homogeneous Green's function representations. The single-sided representation applies to situations where the medium can be accessed from one side only. It correctly handles multiple scattering. It employs a focusing function instead of the backward propagating Green's function in the classical (double-sided) representation. When reflection measurements are available at the accessible boundary of the medium, the focusing function can be retrieved from these measurements. Throughout the paper, we use a unified notation which applies to acoustic, quantum-mechanical, electromagnetic and elastodynamic waves. We foresee many interesting applications of the unified single-sided homogeneous Green's function representation in holographic imaging and inverse scattering, time-reversed wave field propagation and interferometric Green's function retrieval.

  9. DasPy 1.0 - the Open Source Multivariate Land Data Assimilation Framework in combination with the Community Land Model 4.5

    NASA Astrophysics Data System (ADS)

    Han, X.; Li, X.; He, G.; Kumbhar, P.; Montzka, C.; Kollet, S.; Miyoshi, T.; Rosolem, R.; Zhang, Y.; Vereecken, H.; Franssen, H.-J. H.

    2015-08-01

    Data assimilation has become a popular method to integrate observations from multiple sources with land surface models to improve predictions of the water and energy cycles of the soil-vegetation-atmosphere continuum. Multivariate data assimilation refers to the simultaneous assimilation of observation data for multiple model state variables into a simulation model. In recent years, several land data assimilation systems have been developed at different research agencies. Because of software availability or adaptability constraints, these systems are not easy to apply for multivariate land data assimilation research. We developed an open source multivariate land data assimilation framework (DasPy), implemented in the Python scripting language mixed with the C++ and Fortran programming languages. LETKF (Local Ensemble Transform Kalman Filter) is implemented as the main data assimilation algorithm; uncertainties in the data assimilation are introduced through perturbed atmospheric forcing data and represented by perturbed soil and vegetation parameters and model initial conditions. The Community Land Model (CLM) was integrated as the model operator. The implementation also allows parameter estimation (soil properties and/or leaf area index) on the basis of the joint state and parameter estimation approach. The Community Microwave Emission Modelling platform (CMEM), COsmic-ray Soil Moisture Interaction Code (COSMIC) and the Two-Source Formulation (TSF) were integrated as observation operators for the assimilation of L-band passive microwave, cosmic-ray soil moisture probe and land surface temperature measurements, respectively. DasPy has been evaluated in several assimilation studies of neutron count intensity (soil moisture), L-band brightness temperature and land surface temperature. DasPy is parallelized using hybrid Message Passing Interface and Open Multi-Processing techniques. All input and output data flows are organized efficiently using the commonly used NetCDF file format. Online 1-D and 2-D visualization of data assimilation results is also implemented to facilitate post-simulation analysis. In summary, DasPy is a ready-to-use, open source, parallel multivariate land data assimilation framework.
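
    The analysis step at the heart of an LETKF can be sketched as a global (non-localized) ensemble transform Kalman filter update, following Hunt et al. (2007); this is a minimal illustration, not DasPy's actual implementation:

      import numpy as np

      def etkf_update(X, y, H, R):
          """X: n x m ensemble; y: p observations; H: p x n; R: p x p."""
          n, m = X.shape
          xm = X.mean(axis=1, keepdims=True)
          Xp = X - xm                                  # state perturbations
          Y = H @ X
          ym = Y.mean(axis=1, keepdims=True)
          Yp = Y - ym                                  # obs-space perturbations
          Rinv = np.linalg.inv(R)
          A = (m - 1) * np.eye(m) + Yp.T @ Rinv @ Yp
          evals, evecs = np.linalg.eigh(A)
          Pa = evecs @ np.diag(1.0 / evals) @ evecs.T  # cov in ensemble space
          Wa = evecs @ np.diag(np.sqrt((m - 1) / evals)) @ evecs.T
          wa = Pa @ Yp.T @ Rinv @ (y.reshape(-1, 1) - ym)
          return xm + Xp @ (wa + Wa)                   # analysis ensemble

      # Toy use: 3 state variables, 10 members, one observation of variable 0.
      rng = np.random.default_rng(0)
      X = 1.0 + 0.3 * rng.standard_normal((3, 10))
      Xa = etkf_update(X, y=np.array([1.5]),
                       H=np.array([[1.0, 0.0, 0.0]]), R=np.array([[0.1]]))
      print(Xa.mean(axis=1))     # analysis mean pulled toward the observation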

  10. Matching Alternative Addresses: a Semantic Web Approach

    NASA Astrophysics Data System (ADS)

    Ariannamazi, S.; Karimipour, F.; Hakimpour, F.

    2015-12-01

    Rapid development of crowd-sourcing or volunteered geographic information (VGI) provides opportunities for authorities that deal with geospatial information. Heterogeneity of multiple data sources and inconsistency of data types are key characteristics of VGI datasets. The expansion of cities has resulted in a growing number of POIs in OpenStreetMap, a well-known VGI source, which causes the datasets to become outdated in short periods of time. Changes made to spatial and aspatial attributes of features, such as names and addresses, can cause confusion or ambiguity in processes that rely on a feature's literal information, such as addressing and geocoding. VGI sources will neither conform to specific vocabularies nor remain in a specific schema for long periods of time. As a result, the integration of VGI sources is crucial and inevitable in order to avoid duplication and the waste of resources. Information integration can be used to match features and qualify different annotation alternatives for disambiguation. This study enhances the search capabilities of geospatial tools with applications able to understand user terminology, pursuing an efficient way of finding desired results. The semantic web is a capable tool for developing technologies that deal with lexical and numerical calculations and estimations. There is a vast amount of literal-spatial data representing the capability of linguistic information in knowledge modeling, but these resources need to be harmonized based on Semantic Web standards. The process of making addresses homogeneous generates a helpful tool based on spatial data integration and lexical annotation matching and disambiguation.

  11. A unified structural/terminological interoperability framework based on LexEVS: application to TRANSFoRm.

    PubMed

    Ethier, Jean-François; Dameron, Olivier; Curcin, Vasa; McGilchrist, Mark M; Verheij, Robert A; Arvanitis, Theodoros N; Taweel, Adel; Delaney, Brendan C; Burgun, Anita

    2013-01-01

    Biomedical research increasingly relies on the integration of information from multiple heterogeneous data sources. Despite the fact that structural and terminological aspects of interoperability are interdependent and rely on a common set of requirements, current efforts typically address them in isolation. We propose a unified ontology-based knowledge framework to facilitate interoperability between heterogeneous sources, and investigate if using the LexEVS terminology server is a viable implementation method. We developed a framework based on an ontology, the general information model (GIM), to unify structural models and terminologies, together with relevant mapping sets. This allowed a uniform access to these resources within LexEVS to facilitate interoperability by various components and data sources from implementing architectures. Our unified framework has been tested in the context of the EU Framework Program 7 TRANSFoRm project, where it was used to achieve data integration in a retrospective diabetes cohort study. The GIM was successfully instantiated in TRANSFoRm as the clinical data integration model, and necessary mappings were created to support effective information retrieval for software tools in the project. We present a novel, unifying approach to address interoperability challenges in heterogeneous data sources, by representing structural and semantic models in one framework. Systems using this architecture can rely solely on the GIM that abstracts over both the structure and coding. Information models, terminologies and mappings are all stored in LexEVS and can be accessed in a uniform manner (implementing the HL7 CTS2 service functional model). The system is flexible and should reduce the effort needed from data sources personnel for implementing and managing the integration.

  12. A unified structural/terminological interoperability framework based on LexEVS: application to TRANSFoRm

    PubMed Central

    Ethier, Jean-François; Dameron, Olivier; Curcin, Vasa; McGilchrist, Mark M; Verheij, Robert A; Arvanitis, Theodoros N; Taweel, Adel; Delaney, Brendan C; Burgun, Anita

    2013-01-01

    Objective Biomedical research increasingly relies on the integration of information from multiple heterogeneous data sources. Despite the fact that structural and terminological aspects of interoperability are interdependent and rely on a common set of requirements, current efforts typically address them in isolation. We propose a unified ontology-based knowledge framework to facilitate interoperability between heterogeneous sources, and investigate if using the LexEVS terminology server is a viable implementation method. Materials and methods We developed a framework based on an ontology, the general information model (GIM), to unify structural models and terminologies, together with relevant mapping sets. This allowed a uniform access to these resources within LexEVS to facilitate interoperability by various components and data sources from implementing architectures. Results Our unified framework has been tested in the context of the EU Framework Program 7 TRANSFoRm project, where it was used to achieve data integration in a retrospective diabetes cohort study. The GIM was successfully instantiated in TRANSFoRm as the clinical data integration model, and necessary mappings were created to support effective information retrieval for software tools in the project. Conclusions We present a novel, unifying approach to address interoperability challenges in heterogeneous data sources, by representing structural and semantic models in one framework. Systems using this architecture can rely solely on the GIM that abstracts over both the structure and coding. Information models, terminologies and mappings are all stored in LexEVS and can be accessed in a uniform manner (implementing the HL7 CTS2 service functional model). The system is flexible and should reduce the effort needed from data sources personnel for implementing and managing the integration. PMID:23571850

  13. SZDB: A Database for Schizophrenia Genetic Research

    PubMed Central

    Wu, Yong; Yao, Yong-Gang

    2017-01-01

    Abstract Schizophrenia (SZ) is a debilitating brain disorder with a complex genetic architecture. Genetic studies, especially recent genome-wide association studies (GWAS), have identified multiple variants (loci) conferring risk to SZ. However, how to efficiently extract meaningful biological information from bulk genetic findings of SZ remains a major challenge. There is a pressing need to integrate multiple layers of data from various sources, eg, genetic findings from GWAS, copy number variations (CNVs), association and linkage studies, gene expression, protein–protein interaction (PPI), co-expression, expression quantitative trait loci (eQTL), and Encyclopedia of DNA Elements (ENCODE) data, to provide a comprehensive resource to facilitate the translation of genetic findings into SZ molecular diagnosis and mechanism study. Here we developed the SZDB database (http://www.szdb.org/), a comprehensive resource for SZ research. SZ genetic data, gene expression data, network-based data, brain eQTL data, and SNP function annotation information were systematically extracted, curated and deposited in SZDB. In-depth analyses and systematic integration were performed to identify top prioritized SZ genes and enriched pathways. Multiple types of data from various layers of SZ research were systematically integrated and deposited in SZDB. In-depth data analyses and integration identified top prioritized SZ genes and enriched pathways. We further showed that genes implicated in SZ are highly co-expressed in human brain and proteins encoded by the prioritized SZ risk genes are significantly interacted. The user-friendly SZDB provides high-confidence candidate variants and genes for further functional characterization. More important, SZDB provides convenient online tools for data search and browse, data integration, and customized data analyses. PMID:27451428

  14. Integrative data analysis in clinical psychology research.

    PubMed

    Hussong, Andrea M; Curran, Patrick J; Bauer, Daniel J

    2013-01-01

    Integrative data analysis (IDA), a novel framework for conducting the simultaneous analysis of raw data pooled from multiple studies, offers many advantages including economy (i.e., reuse of extant data), power (i.e., large combined sample sizes), the potential to address new questions not answerable by a single contributing study (e.g., combining longitudinal studies to cover a broader swath of the lifespan), and the opportunity to build a more cumulative science (i.e., examining the similarity of effects across studies and potential reasons for dissimilarities). There are also methodological challenges associated with IDA, including the need to account for sampling heterogeneity across studies, to develop commensurate measures across studies, and to account for multiple sources of study differences as they impact hypothesis testing. In this review, we outline potential solutions to these challenges and describe future avenues for developing IDA as a framework for studies in clinical psychology.

  15. Integrative Data Analysis in Clinical Psychology Research

    PubMed Central

    Hussong, Andrea M.; Curran, Patrick J.; Bauer, Daniel J.

    2013-01-01

    Integrative Data Analysis (IDA), a novel framework for conducting the simultaneous analysis of raw data pooled from multiple studies, offers many advantages including economy (i.e., reuse of extant data), power (i.e., large combined sample sizes), the potential to address new questions not answerable by a single contributing study (e.g., combining longitudinal studies to cover a broader swath of the lifespan), and the opportunity to build a more cumulative science (i.e., examining the similarity of effects across studies and potential reasons for dissimilarities). There are also methodological challenges associated with IDA, including the need to account for sampling heterogeneity across studies, to develop commensurate measures across studies, and to account for multiple sources of study differences as they impact hypothesis testing. In this review, we outline potential solutions to these challenges and describe future avenues for developing IDA as a framework for studies in clinical psychology. PMID:23394226

  16. Dealing with Uncertainty: Readers' Memory for and Use of Conflicting Information from Science Texts as Function of Presentation Format and Source Expertise

    ERIC Educational Resources Information Center

    Stadtler, Marc; Scharrer, Lisa; Brummernhenrich, Benjamin; Bromme, Rainer

    2013-01-01

    Past research has shown that readers often fail to notice conflicts in text. In our present study we investigated whether accessing information from multiple documents instead of a single document might alleviate this problem by motivating readers to integrate information. We further tested whether this effect would be moderated by source…

  17. Integration of enabling methods for the automated flow preparation of piperazine-2-carboxamide.

    PubMed

    Ingham, Richard J; Battilocchio, Claudio; Hawkins, Joel M; Ley, Steven V

    2014-01-01

    Here we describe the use of a new open-source software package and a Raspberry Pi(®) computer for the simultaneous control of multiple flow chemistry devices and its application to a machine-assisted, multi-step flow preparation of pyrazine-2-carboxamide - a component of Rifater(®), used in the treatment of tuberculosis - and its reduced derivative piperazine-2-carboxamide.

  18. Investigating Early Childhood Teachers' Views on Science Teaching Practices: The Integration of Science with Visual Art in Early Childhood Settings

    ERIC Educational Resources Information Center

    Öztürk Yilmaztekin, Elif; Erden, Feyza Tantekin

    2017-01-01

    This study investigates early childhood teachers' views about science teaching practices in an early childhood settings. It was conducted in a preschool located in Ankara, Turkey. The data of the study were collected through multiple sources of information such as interviews with early childhood teachers and observations of their practices in the…

  19. Advanced Modeling and Uncertainty Quantification for Flight Dynamics; Interim Results and Challenges

    NASA Technical Reports Server (NTRS)

    Hyde, David C.; Shweyk, Kamal M.; Brown, Frank; Shah, Gautam

    2014-01-01

    As part of the NASA Vehicle Systems Safety Technologies (VSST), Assuring Safe and Effective Aircraft Control Under Hazardous Conditions (Technical Challenge #3), an effort is underway within Boeing Research and Technology (BR&T) to address Advanced Modeling and Uncertainty Quantification for Flight Dynamics (VSST1-7). The scope of the effort is to develop and evaluate advanced multidisciplinary flight dynamics modeling techniques, including integrated uncertainties, to facilitate higher-fidelity response characterization of current and future aircraft configurations approaching and during loss-of-control conditions. The approach is to incorporate multiple flight dynamics modeling methods for aerodynamics, structures, and propulsion, including experimental, computational, and analytical methods, together with techniques for data integration and uncertainty characterization and quantification. This research shall introduce new and updated multidisciplinary modeling and simulation technologies designed to improve the ability to characterize airplane response in off-nominal flight conditions. The research shall also introduce new techniques for uncertainty modeling that will provide a unified database model composed of multiple sources, as well as an uncertainty bounds database for each data source, such that a full vehicle uncertainty analysis is possible even when approaching or beyond loss-of-control boundaries. Methodologies developed as part of this research shall be instrumental in predicting and mitigating loss-of-control precursors and events directly linked to causal and contributing factors, such as stall, failures, damage, or icing. The tasks include utilizing the BR&T Water Tunnel to collect static and dynamic data to be compared to the GTM extended WT database, characterizing flight dynamics in off-nominal conditions, developing tools for structural load estimation under dynamic conditions, devising methods for integrating various modeling elements into a real-time simulation capability, generating techniques for uncertainty modeling that draw data from multiple modeling sources, and providing a unified database model that includes nominal plus increments for each flight condition. This paper presents the status of testing in the BR&T water tunnel, analysis of the resulting data, and efforts to characterize these data using alternative modeling methods. Program challenges and issues are also presented.

  20. RSAT matrix-clustering: dynamic exploration and redundancy reduction of transcription factor binding motif collections.

    PubMed

    Castro-Mondragon, Jaime Abraham; Jaeger, Sébastien; Thieffry, Denis; Thomas-Chollier, Morgane; van Helden, Jacques

    2017-07-27

    Transcription factor (TF) databases contain multitudes of binding motifs (TFBMs) from various sources, from which non-redundant collections are derived by manual curation. The advent of high-throughput methods stimulated the production of novel collections with increasing numbers of motifs. Meta-databases, built by merging these collections, contain redundant versions, because available tools are not suited to automatically identify and explore biologically relevant clusters among thousands of motifs. Motif discovery from genome-scale data sets (e.g. ChIP-seq) also produces redundant motifs, hampering the interpretation of results. We present matrix-clustering, a versatile tool that clusters similar TFBMs into multiple trees and automatically creates non-redundant TFBM collections. Features unique to matrix-clustering are its dynamic visualisation of aligned TFBMs and its capability to simultaneously treat multiple collections from various sources. We demonstrate that matrix-clustering considerably simplifies the interpretation of combined results from multiple motif discovery tools and highlights biologically relevant variations of similar motifs. We also ran a large-scale application to cluster ∼11 000 motifs from 24 entire databases, showing that matrix-clustering correctly groups motifs belonging to the same TF families and drastically reduces motif redundancy. matrix-clustering is integrated within the RSAT suite (http://rsat.eu/), accessible through a user-friendly web interface or command line for integration in pipelines. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
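
    A greatly simplified sketch of the underlying clustering idea follows (matrix-clustering itself aligns motifs of different widths and offers several similarity metrics; here equal-width position frequency matrices and plain Pearson correlation are assumed):

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from scipy.spatial.distance import squareform

      rng = np.random.default_rng(3)
      base = rng.dirichlet(np.ones(4), size=8)          # an 8-column motif
      motifs = [base,
                base + 0.01 * rng.random(base.shape),   # near-duplicate
                rng.dirichlet(np.ones(4), size=8)]      # unrelated motif

      n = len(motifs)
      dist = np.zeros((n, n))
      for i in range(n):
          for j in range(i + 1, n):
              r = np.corrcoef(motifs[i].ravel(), motifs[j].ravel())[0, 1]
              dist[i, j] = dist[j, i] = 1.0 - r         # correlation distance

      tree = linkage(squareform(dist), method="average")
      print(fcluster(tree, t=0.2, criterion="distance"))  # duplicates co-cluster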

  1. Column-parallel correlated multiple sampling circuits for CMOS image sensors and their noise reduction effects.

    PubMed

    Suh, Sungho; Itoh, Shinya; Aoyama, Satoshi; Kawahito, Shoji

    2010-01-01

    For low-noise complementary metal-oxide-semiconductor (CMOS) image sensors, the reduction of pixel source follower noise is becoming very important. Column-parallel high-gain readout circuits are useful for low-noise CMOS image sensors. This paper presents column-parallel high-gain signal readout circuits, correlated multiple sampling (CMS) circuits, and their noise reduction effects. In the CMS, the gain of the noise cancelling is controlled by the number of samplings. It has an effect similar to that of an amplified CDS for thermal noise but is somewhat more effective for 1/f and RTS noises. Two types of CMS, with simple integration and folding integration, are proposed. In the folding integration, the output signal swing is suppressed by negative feedback using a comparator and a one-bit D-to-A converter. The CMS circuit using the folding integration technique makes it possible to realize a very low noise level while maintaining a wide dynamic range. The noise reduction effects of these circuits have been investigated with a noise analysis and an implementation in a 1-Mpixel pinned photodiode CMOS image sensor. Using 16 samplings, a dynamic range of 59.4 dB and a noise level of 1.9 e(-) are obtained for the simple integration CMS, and 75 dB and 2.2 e(-), respectively, for the folding integration CMS.
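
    The expected noise scaling for white (thermal) noise can be checked with a quick behavioral simulation (not a circuit model; 1/f and RTS noise benefit less because their successive samples are correlated):

      import numpy as np

      # Behavioral check only: with white thermal noise, CMS averages M reset
      # samples and M signal samples, so the residual read noise scales like
      # sigma * sqrt(2 / M).
      rng = np.random.default_rng(1)
      sigma, trials = 1.0, 200_000

      def cms_noise(M):
          reset = rng.normal(0, sigma, (trials, M)).mean(axis=1)
          signal = rng.normal(0, sigma, (trials, M)).mean(axis=1)
          return (signal - reset).std()

      for M in (1, 4, 16):
          print(M, cms_noise(M), sigma * np.sqrt(2 / M))  # measured vs expected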

  2. PyPanda: a Python package for gene regulatory network reconstruction

    PubMed Central

    van IJzendoorn, David G.P.; Glass, Kimberly; Quackenbush, John; Kuijjer, Marieke L.

    2016-01-01

    Summary: PANDA (Passing Attributes between Networks for Data Assimilation) is a gene regulatory network inference method that uses message-passing to integrate multiple sources of ‘omics data. PANDA was originally coded in C++. In this application note we describe PyPanda, the Python version of PANDA. PyPanda runs considerably faster than the C++ version and includes additional features for network analysis. Availability and implementation: The open source PyPanda Python package is freely available at http://github.com/davidvi/pypanda. Contact: mkuijjer@jimmy.harvard.edu or d.g.p.van_ijzendoorn@lumc.nl PMID:27402905

  3. PyPanda: a Python package for gene regulatory network reconstruction.

    PubMed

    van IJzendoorn, David G P; Glass, Kimberly; Quackenbush, John; Kuijjer, Marieke L

    2016-11-01

    PANDA (Passing Attributes between Networks for Data Assimilation) is a gene regulatory network inference method that uses message-passing to integrate multiple sources of 'omics data. PANDA was originally coded in C++. In this application note we describe PyPanda, the Python version of PANDA. PyPanda runs considerably faster than the C++ version and includes additional features for network analysis. The open source PyPanda Python package is freely available at http://github.com/davidvi/pypanda. Contact: mkuijjer@jimmy.harvard.edu or d.g.p.van_ijzendoorn@lumc.nl. © The Author 2016. Published by Oxford University Press.

  4. A Roadmap for using Agile Development in a Traditional System

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara; Starbird, Thomas

    2006-01-01

    I. Ensemble Development Group: a) Produces activity planning software for spacecraft; b) Built on the Eclipse Rich Client Platform (open source development and runtime software); c) Funded by multiple sources including the Mars Technology Program; d) Incorporated the use of Agile Development. II. Next Generation Uplink Planning System: a) Researches the Activity Planning and Sequencing Subsystem (APSS) for Mars Science Laboratory; b) APSS includes Ensemble, Activity Modeling, Constraint Checking, Command Editing and Sequencing tools plus other uplink generation utilities; c) Funded by the Mars Technology Program; d) Integrates all of the tools for APSS.

  5. A method for classification of multisource data using interval-valued probabilities and its application to HIRIS data

    NASA Technical Reports Server (NTRS)

    Kim, H.; Swain, P. H.

    1991-01-01

    A method of classifying multisource data in remote sensing is presented. The proposed method considers each data source as an information source providing a body of evidence, represents statistical evidence by interval-valued probabilities, and uses Dempster's rule to integrate information from multiple data sources. The method is applied to the problem of ground-cover classification of multispectral data combined with digital terrain data such as elevation, slope, and aspect. The method is then applied to simulated 201-band High Resolution Imaging Spectrometer (HIRIS) data by dividing the dimensionally huge data source into smaller and more manageable pieces based on global statistical correlation information. It produces higher classification accuracy than the Maximum Likelihood (ML) classification method when the Hughes phenomenon is apparent.
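
    Dempster's rule itself is compact enough to state as code. The sketch below implements the textbook point-valued rule (the paper extends this machinery to interval-valued probabilities): focal elements are sets of classes, and the masses from two sources are combined with conflict renormalization:

      from itertools import product

      # Textbook point-valued Dempster's rule of combination. Focal elements
      # are frozensets of classes; a mass function maps focal elements to mass.
      def combine(m1, m2):
          combined, conflict = {}, 0.0
          for (a, x), (b, y) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + x * y
              else:
                  conflict += x * y
          k = 1.0 - conflict                  # renormalize over non-conflict
          return {s: v / k for s, v in combined.items()}

      # Two sources (e.g. spectral vs. terrain data) over classes {w1, w2}:
      m_spectral = {frozenset({"w1"}): 0.6, frozenset({"w1", "w2"}): 0.4}
      m_terrain = {frozenset({"w2"}): 0.3, frozenset({"w1", "w2"}): 0.7}
      print(combine(m_spectral, m_terrain))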

  6. Enhancements to the MCNP6 background source

    DOE PAGES

    McMath, Garrett E.; McKinney, Gregg W.

    2015-10-19

    The particle transport code MCNP has been used to produce a background radiation data file on a worldwide grid that can easily be sampled as a source in the code. Location-dependent cosmic showers were modeled by Monte Carlo methods to produce the resulting neutron and photon background flux at 2054 locations around Earth. An improved galactic-cosmic-ray feature was used to model the source term, along with data from multiple sources to model the transport environment through atmosphere, soil, and seawater. A new elevation scaling feature was also added to the code to increase the accuracy of the cosmic neutron background for user locations with off-grid elevations. Furthermore, benchmarking has shown the neutron integral flux values to be within experimental error.

  7. Flood extent and water level estimation from SAR using data-model integration

    NASA Astrophysics Data System (ADS)

    Ajadi, O. A.; Meyer, F. J.

    2017-12-01

    Synthetic Aperture Radar (SAR) images have long been recognized as a valuable data source for flood mapping. Compared to other sources, SAR's weather and illumination independence and large-area coverage at high spatial resolution support reliable, frequent, and detailed observations of developing flood events. Accordingly, SAR has the potential to greatly aid in the near-real-time monitoring of natural hazards, such as flood detection, if combined with automated image processing. This research works towards increasing the reliability and temporal sampling of SAR-derived flood hazard information by integrating information from multiple SAR sensors and SAR modalities (images and Interferometric SAR (InSAR) coherence) and by combining SAR-derived change detection information with hydrologic and hydraulic flood forecast models. First, the combination of multi-temporal SAR intensity images and coherence information for generating flood extent maps is introduced. Least-squares estimation integrates flood information from multiple SAR sensors, thus increasing the temporal sampling. SAR-based flood extent information is combined with a Digital Elevation Model (DEM) to reduce false alarms and to estimate water depth and flood volume. The SAR-based flood extent map is assimilated into the Hydrologic Engineering Center River Analysis System (HEC-RAS) model to aid in hydraulic model calibration. The developed technology improves the accuracy of flood information by exploiting information from both data and models. It also provides enhanced flood information to decision-makers, supporting the response to flood events and improving emergency relief efforts.
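
    The least-squares integration step can be illustrated with inverse-variance weighting of per-sensor flood-likelihood maps (synthetic data and assumed error variances; not the paper's actual configuration):

      import numpy as np

      # Synthetic flood mask and assumed per-sensor error variances; the
      # inverse-variance weighted mean is the least-squares estimate.
      rng = np.random.default_rng(2)
      truth = (rng.random((50, 50)) > 0.7).astype(float)

      variances = np.array([0.02, 0.05, 0.10])          # assumed sensor noise
      obs = [truth + rng.normal(0, np.sqrt(v), truth.shape) for v in variances]

      w = 1.0 / variances
      fused = sum(wk * ob for wk, ob in zip(w, obs)) / w.sum()
      flood_extent = fused > 0.5                        # binary flood map
      print((flood_extent == truth.astype(bool)).mean())  # agreement with truth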

  8. A Systems Biology Approach for Identifying Hepatotoxicant Groups Based on Similarity in Mechanisms of Action and Chemical Structure.

    PubMed

    Hebels, Dennie G A J; Rasche, Axel; Herwig, Ralf; van Westen, Gerard J P; Jennen, Danyel G J; Kleinjans, Jos C S

    2016-01-01

    When evaluating compound similarity, addressing multiple sources of information to reach conclusions about common pharmaceutical and/or toxicological mechanisms of action is a crucial strategy. In this chapter, we describe a systems biology approach that incorporates analyses of hepatotoxicant data for 33 compounds from three different sources: a chemical structure similarity analysis based on the 3D Tanimoto coefficient, a chemical structure-based protein target prediction analysis, and a cross-study/cross-platform meta-analysis of in vitro and in vivo human and rat transcriptomics data derived from public resources (i.e., the diXa data warehouse). Hierarchical clustering of the outcome scores of the separate analyses did not result in a satisfactory grouping of compounds considering their known toxic mechanisms as described in the literature. However, a combined analysis of multiple data types may compensate for missing or unreliable information in any single data type. We therefore performed an integrated clustering analysis of all three data sets using the R-based tool iClusterPlus, which indeed improved the grouping results. The compound clusters formed by iClusterPlus represent groups that show similar gene expression while simultaneously integrating similarity in structure and protein targets, which corresponds much better with the known mechanisms of action of these toxicants. Using an integrative systems biology approach may thus overcome the limitations of the separate analyses when grouping liver toxicants sharing a similar mechanism of toxicity.

  9. MRMer, an interactive open source and cross-platform system for data extraction and visualization of multiple reaction monitoring experiments.

    PubMed

    Martin, Daniel B; Holzman, Ted; May, Damon; Peterson, Amelia; Eastham, Ashley; Eng, Jimmy; McIntosh, Martin

    2008-11-01

    Multiple reaction monitoring (MRM) mass spectrometry identifies and quantifies specific peptides in a complex mixture with very high sensitivity and speed and thus has promise for the high throughput screening of clinical samples for candidate biomarkers. We have developed an interactive software platform, called MRMer, for managing highly complex MRM-MS experiments, including quantitative analyses using heavy/light isotopic peptide pairs. MRMer parses and extracts information from MS files encoded in the platform-independent mzXML data format. It extracts and infers precursor-product ion transition pairings, computes integrated ion intensities, and permits rapid visual curation for analyses exceeding 1000 precursor-product pairs. Results can be easily output for quantitative comparison of consecutive runs. Additionally MRMer incorporates features that permit the quantitative analysis experiments including heavy and light isotopic peptide pairs. MRMer is open source and provided under the Apache 2.0 license.

  10. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and retrieve the required data, and their ability to integrate the data into environmental models using the FRAMES environment.

  11. Effect of Aggregation Operators on Network-Based Disease Gene Prioritization: A Case Study on Blood Disorders.

    PubMed

    Grewal, Nivit; Singh, Shailendra; Chand, Trilok

    2017-01-01

    Owing to the innate noise in biological data sources, no single source or single measure suffices for effective disease gene prioritization; the integration of multiple data sources, or the aggregation of multiple measures, is therefore required. Aggregation operators combine multiple related data values into a single value that reflects the effect of all the individual values. In this paper, fuzzy aggregation is applied to network-based disease gene prioritization and its effect is investigated under noise conditions. The study was conducted for a set of 15 blood disorders by fusing four different network measures, computed from the protein interaction network, using a selected set of aggregation operators and ranking the genes on the basis of the aggregated value. The aggregation operator-based rankings were compared with the "Random walk with restart" gene prioritization method. The impact of noise was also investigated by adding varying proportions of noise to the seed set. The results reveal that, for all the selected blood disorders, the Mean of Maximal operator relatively outperformed the other aggregation operators on noisy as well as non-noisy data.
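
    One plausible reading of the Mean of Maximal operator (the paper's exact definition may differ) is to average the largest few of the per-measure scores for each gene; the gene names and scores below are hypothetical.

        import numpy as np

        def mean_of_maximal(scores, k=2):
            """Average the k largest per-measure scores for a candidate gene."""
            return np.sort(np.asarray(scores))[-k:].mean()

        measures = {                    # four network measures per gene
            "geneA": [0.90, 0.20, 0.80, 0.10],
            "geneB": [0.50, 0.50, 0.50, 0.50],
            "geneC": [0.10, 0.95, 0.20, 0.30],
        }
        ranked = sorted(measures, key=lambda g: mean_of_maximal(measures[g]),
                        reverse=True)
        print(ranked)                   # -> ['geneA', 'geneC', 'geneB']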

  12. Boosting Probabilistic Graphical Model Inference by Incorporating Prior Knowledge from Multiple Sources

    PubMed Central

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal-to-noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for the use of diverse information sources, such as pathway databases, GO terms and protein domain data, and is flexible enough to integrate new sources, if available. PMID:23826291
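
    The Noisy-OR combination can be written down compactly: an interaction receives a high consensus prior if at least one information source strongly supports it. A minimal sketch, with an assumed leak term for edges unsupported by any source:

        import numpy as np

        def noisy_or_prior(support, leak=0.01):
            """Consensus prior for one edge. `support` holds per-source
            probabilities that the source would report the interaction;
            `leak` is an assumed baseline for unsupported edges."""
            support = np.asarray(support, dtype=float)
            return 1.0 - (1.0 - leak) * np.prod(1.0 - support)

        print(noisy_or_prior([0.2, 0.7]))   # one strong source -> ~0.76
        print(noisy_or_prior([0.05, 0.1]))  # weak support -> ~0.15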

  13. U.S. Seismic Design Maps Web Application

    NASA Astrophysics Data System (ADS)

    Martinez, E.; Fee, J.

    2015-12-01

    The application computes earthquake ground motion design parameters compatible with the International Building Code and other seismic design provisions, and it is the primary method for design engineers across the country to obtain ground motion parameters for multiple building codes when designing new buildings and other structures. Users specify the design code of interest, location, and other parameters to obtain the necessary ground motion information, consisting of a high-level executive summary as well as detailed information including maps, data, and graphs. Results are formatted so that they can be included directly in a final engineering report. In addition to single-site analysis, the application supports a batch mode for the simultaneous consideration of multiple locations. Finally, an application programming interface (API) allows other developers to integrate the application's results into larger applications for additional processing. Development on the application has proceeded iteratively, working with engineers through email, meetings, and workshops; each iteration provided new features, improved performance, and usability enhancements. This development approach positioned the application to be integral to the structural design process, and it is now used to produce over 1800 reports daily. Recent efforts have made the application a data-driven, mobile-first, responsive web application. Development is ongoing, and the source code has recently been published to the open-source community on GitHub. Open-sourcing the code facilitates the incorporation of user feedback and the addition of new features, ensuring the application's continued success.
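
    A client call to such an API might look like the following; the endpoint URL and parameter names here are placeholders for illustration, not the documented USGS service contract.

        import requests

        # Placeholder endpoint and parameters -- consult the USGS web service
        # documentation for the actual contract.
        URL = "https://example.org/ws/designmaps/asce7.json"
        params = {
            "latitude": 34.05,     # site latitude, degrees
            "longitude": -118.25,  # site longitude, degrees
            "riskCategory": "II",
            "siteClass": "D",
        }
        resp = requests.get(URL, params=params, timeout=30)
        resp.raise_for_status()
        print(resp.json())         # ground motion design parameters for the site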

  14. A fast and high performance multiple data integration algorithm for identifying human disease genes

    PubMed Central

    2015-01-01

    Background Integrating multiple data sources is indispensable for improving disease gene identification, both because disease genes associated with similar genetic diseases tend to lie close to each other in various biological networks and because gene-disease associations are complex. Although various algorithms have been proposed to identify disease genes, their prediction performance and computational time can still be improved. Results In this study, we propose a fast and high-performance multiple data integration algorithm for identifying human disease genes. A posterior probability of each candidate gene being associated with individual diseases is calculated using a Bayesian analysis method and a binary logistic regression model. Two prior probability estimation strategies and two feature vector construction methods are developed to test the performance of the proposed algorithm. Conclusions The proposed algorithm not only generates predictions with high AUC scores but also runs very fast. When only a single PPI network is employed, the AUC score is 0.769 using F2 as feature vectors, and the average running time for each leave-one-out experiment is only around 1.5 seconds. When three biological networks are integrated, the AUC score using F3 as feature vectors increases to 0.830, and the average running time for each leave-one-out experiment is only about 12.54 seconds. This performance is better than that of many existing algorithms. PMID:26399620
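
    The scoring step can be pictured as fitting a binary logistic regression to feature vectors of known disease genes versus other candidates and reading off posterior probabilities; the synthetic features below merely stand in for the paper's F2/F3 network-derived vectors.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        # Synthetic feature vectors: rows are genes, columns are (for example)
        # proximity scores in three biological networks.
        X = np.vstack([rng.normal(1.0, 1.0, (50, 3)),    # known disease genes
                       rng.normal(0.0, 1.0, (50, 3))])   # other candidates
        y = np.array([1] * 50 + [0] * 50)

        model = LogisticRegression().fit(X, y)
        candidate = np.array([[0.8, 1.2, 0.5]])
        print(model.predict_proba(candidate)[0, 1])      # posterior probability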

  15. The Chandra Source Catalog: Source Variability

    NASA Astrophysics Data System (ADS)

    Nowak, Michael; Rots, A. H.; McCollough, M. L.; Primini, F. A.; Glotfelty, K. J.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    The Chandra Source Catalog (CSC) contains fields of view that have been studied with individual, uninterrupted observations that span integration times ranging from 1 ksec to 160 ksec, a large number of which have received (multiple) repeat observations days to years later. The CSC thus offers an unprecedented look at the variability of the X-ray sky over a broad range of time scales, and across a wide diversity of variable X-ray sources: stars in the local galactic neighborhood, galactic and extragalactic X-ray binaries, Active Galactic Nuclei, etc. Here we describe the methods used to identify and quantify source variability within a single observation, and the methods used to assess the variability of a source when detected in multiple, individual observations. Three tests are used to detect source variability within a single observation: the Kolmogorov-Smirnov test and its variant, the Kuiper test, and a Bayesian approach originally suggested by Gregory and Loredo. The latter test not only provides an indicator of variability, but is also used to create a best estimate of the variable lightcurve shape. We assess the performance of these tests via simulation of statistically stationary, variable processes with arbitrary input power spectral densities (here we concentrate on results of red noise simulations) at a variety of mean count rates and fractional root mean square variabilities relevant to CSC sources. We also assess the false positive rate via simulations of constant sources whose sole source of fluctuation is Poisson noise. We compare these simulations to a preliminary assessment of the variability found in real CSC sources, and estimate the variability sensitivities of the CSC.
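
    The within-observation Kolmogorov-Smirnov test has a compact form: for a steady source, photon arrival times are uniformly distributed over the exposure, so the normalized arrival times can be tested against a standard uniform distribution. A sketch with synthetic arrival times:

        import numpy as np
        from scipy import stats

        def ks_variability(arrival_times, exposure):
            """KS test against a constant-rate model: for a steady source,
            normalized photon arrival times are standard-uniform."""
            return stats.kstest(np.asarray(arrival_times) / exposure, "uniform")

        rng = np.random.default_rng(1)
        steady = rng.uniform(0, 10_000, 200)                   # 10 ks exposure
        flaring = np.concatenate([steady, rng.normal(6_000, 50, 100)])
        print(ks_variability(steady, 10_000).pvalue)           # large p-value
        print(ks_variability(flaring, 10_000).pvalue)          # tiny p-value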

  16. The Chandra Source Catalog: Source Variability

    NASA Astrophysics Data System (ADS)

    Nowak, Michael; Rots, A. H.; McCollough, M. L.; Primini, F. A.; Glotfelty, K. J.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Evans, I.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    The Chandra Source Catalog (CSC) contains fields of view that have been studied with individual, uninterrupted observations that span integration times ranging from 1 ksec to 160 ksec, a large number of which have received (multiple) repeat observations days to years later. The CSC thus offers an unprecedented look at the variability of the X-ray sky over a broad range of time scales, and across a wide diversity of variable X-ray sources: stars in the local galactic neighborhood, galactic and extragalactic X-ray binaries, Active Galactic Nuclei, etc. Here we describe the methods used to identify and quantify source variability within a single observation, and the methods used to assess the variability of a source when detected in multiple, individual observations. Three tests are used to detect source variability within a single observation: the Kolmogorov-Smirnov test and its variant, the Kuiper test, and a Bayesian approach originally suggested by Gregory and Loredo. The latter test not only provides an indicator of variability, but is also used to create a best estimate of the variable lightcurve shape. We assess the performance of these tests via simulation of statistically stationary, variable processes with arbitrary input power spectral densities (here we concentrate on results of red noise simulations) at a variety of mean count rates and fractional root mean square variabilities relevant to CSC sources. We also assess the false positive rate via simulations of constant sources whose sole source of fluctuation is Poisson noise. We compare these simulations to an assessment of the variability found in real CSC sources, and estimate the variability sensitivities of the CSC.

  17. Integrating distributed multimedia systems and interactive television networks

    NASA Astrophysics Data System (ADS)

    Shvartsman, Alex A.

    1996-01-01

    Recent advances in networks, storage and video delivery systems are about to make commercial deployment of interactive multimedia services over digital television networks a reality. The emerging components individually have the potential to satisfy the technical requirements in the near future. However, no single vendor is offering a complete, end-to-end, commercially deployable and scalable interactive multimedia application system over digital/analog television systems. Integrating a large set of maturing sub-assemblies and interactive multimedia applications is a major task in deploying such systems. Here we deal with integration issues, requirements and trade-offs in building delivery platforms and applications for interactive television services. Such integration efforts must overcome a lack of standards, and deal with unpredictable development cycles and quality problems of leading-edge technology. There are also the conflicting goals of optimizing systems for video delivery while enabling highly interactive distributed applications. It is becoming possible to deliver continuous video streams from specific sources, but it is difficult and expensive to provide the ability to rapidly switch among multiple sources of video and data. Finally, there is the ever-present challenge of integrating and deploying expensive systems whose scalability and extensibility is limited, while ensuring some resiliency in the face of inevitable changes. This proceedings version of the paper is an extended abstract.

  18. Opportunities and Challenges in Supply-Side Simulation: Physician-Based Models

    PubMed Central

    Gresenz, Carole Roan; Auerbach, David I; Duarte, Fabian

    2013-01-01

    Objective To provide a conceptual framework and to assess the availability of empirical data for supply-side microsimulation modeling in the context of health care. Data Sources Multiple secondary data sources, including the American Community Survey, Health Tracking Physician Survey, and SK&A physician database. Study Design We apply our conceptual framework to one entity in the health care market—physicians—and identify, assess, and compare data available for physician-based simulation models. Principal Findings Our conceptual framework describes three broad types of data required for supply-side microsimulation modeling. Our assessment of available data for modeling physician behavior suggests broad comparability across various sources on several dimensions and highlights the need for significant integration of data across multiple sources to provide a platform adequate for modeling. A growing literature provides potential estimates for use as behavioral parameters that could serve as the models' engines. Sources of data for simulation modeling that account for the complex organizational and financial relationships among physicians and other supply-side entities are limited. Conclusions A key challenge for supply-side microsimulation modeling is optimally combining available data to harness their collective power. Several possibilities also exist for novel data collection. These have the potential to serve as catalysts for the next generation of supply-side-focused simulation models to inform health policy. PMID:23347041

  19. Planarian shows decision-making behavior in response to multiple stimuli by integrative brain function.

    PubMed

    Inoue, Takeshi; Hoshino, Hajime; Yamashita, Taiga; Shimoyama, Seira; Agata, Kiyokazu

    2015-01-01

    Planarians belong to an evolutionarily early group of organisms that possess a central nervous system including a well-organized brain with a simple architecture but many types of neurons. Planarians display a number of behaviors, such as phototaxis and thermotaxis, in response to external stimuli, and it has been shown that various molecules and neural pathways in the brain are involved in controlling these behaviors. However, due to the lack of combinatorial assay methods, it remains obscure whether planarians possess higher brain functions, including integration in the brain, in which multiple signals coming from outside are coordinated and used in determining behavioral strategies. In the present study, we designed chemotaxis and thigmotaxis/kinesis tracking assays to measure several planarian behaviors in addition to those measured by phototaxis and thermotaxis assays previously established by our group, and used these tests to analyze planarian chemotactic and thigmotactic/kinetic behaviors. We found that headless planarian body fragments and planarians that had specifically lost neural activity following regeneration-dependent conditional gene knockdown (Readyknock) of synaptotagmin in the brain lost both chemotactic and thigmotactic behaviors, suggesting that neural activity in the brain is required for the planarian's chemotactic and thigmotactic behaviors. Furthermore, we compared the strength of phototaxis, chemotaxis, thigmotaxis/kinesis, and thermotaxis by presenting simultaneous binary stimuli to planarians. We found that planarians showed a clear order of predominance of these behaviors. For example, when planarians were simultaneously exposed to 400 lux of light and a chemoattractant, they showed chemoattractive behavior irrespective of the direction of the light source, although exposure to light of this intensity alone induces evasive behavior away from the light source. In contrast, when the light intensity was increased to 800 or 1600 lux and the same dose of chemoattractant was presented, planarian behaviors were gradually shifted to negative phototaxis rather than chemoattraction. These results suggest that planarians may be capable of selecting behavioral strategies via the integration of discrete brain functions when exposed to multiple stimuli. The planarian brain processes external signals received through the respective sensory neurons, thereby resulting in the production of appropriate behaviors. In addition, planarians can adjust behavioral features in response to stimulus conditions by integrating multiple external signals in the brain.

  20. SPARQL-enabled identifier conversion with Identifiers.org

    PubMed Central

    Wimalaratne, Sarala M.; Bolleman, Jerven; Juty, Nick; Katayama, Toshiaki; Dumontier, Michel; Redaschi, Nicole; Le Novère, Nicolas; Hermjakob, Henning; Laibe, Camille

    2015-01-01

    Motivation: On the semantic web, in life sciences in particular, data is often distributed via multiple resources. Each of these sources is likely to use its own International Resource Identifier for conceptually the same resource or database record. The lack of correspondence between identifiers introduces a barrier when executing federated SPARQL queries across life science data. Results: We introduce a novel SPARQL-based service to enable on-the-fly integration of life science data. This service uses the identifier patterns defined in the Identifiers.org Registry to generate a plurality of identifier variants, which can then be used to match source identifiers with target identifiers. We demonstrate the utility of this identifier integration approach by answering queries across major producers of life science Linked Data. Availability and implementation: The SPARQL-based identifier conversion service is available without restriction at http://identifiers.org/services/sparql. Contact: sarala@ebi.ac.uk PMID:25638809

  1. SPARQL-enabled identifier conversion with Identifiers.org.

    PubMed

    Wimalaratne, Sarala M; Bolleman, Jerven; Juty, Nick; Katayama, Toshiaki; Dumontier, Michel; Redaschi, Nicole; Le Novère, Nicolas; Hermjakob, Henning; Laibe, Camille

    2015-06-01

    On the semantic web, in life sciences in particular, data is often distributed via multiple resources. Each of these sources is likely to use its own International Resource Identifier for conceptually the same resource or database record. The lack of correspondence between identifiers introduces a barrier when executing federated SPARQL queries across life science data. We introduce a novel SPARQL-based service to enable on-the-fly integration of life science data. This service uses the identifier patterns defined in the Identifiers.org Registry to generate a plurality of identifier variants, which can then be used to match source identifiers with target identifiers. We demonstrate the utility of this identifier integration approach by answering queries across major producers of life science Linked Data. The SPARQL-based identifier conversion service is available without restriction at http://identifiers.org/services/sparql. © The Author 2015. Published by Oxford University Press.
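
    Using the endpoint named above, a federated client might request identifier variants as follows; the owl:sameAs query shape is illustrative, and the UniProt record is an arbitrary example.

        from SPARQLWrapper import JSON, SPARQLWrapper

        sparql = SPARQLWrapper("http://identifiers.org/services/sparql")
        sparql.setQuery("""
            PREFIX owl: <http://www.w3.org/2002/07/owl#>
            SELECT ?variant WHERE {
                <http://identifiers.org/uniprot/P12345> owl:sameAs ?variant .
            }
        """)
        sparql.setReturnFormat(JSON)
        results = sparql.query().convert()
        for row in results["results"]["bindings"]:
            print(row["variant"]["value"])   # equivalent identifier URIs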

  2. Integrating the human element into the systems engineering process and MBSE methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tadros, Michael Samir

    In response to the challenges related to the increasing size and complexity of systems, organizations have recognized the need to integrate human considerations in the beginning stages of systems development. Human Systems Integration (HSI) seeks to accomplish this objective by incorporating human factors within systems engineering (SE) processes and methodologies, which is the focus of this paper. A representative set of HSI methods from multiple sources is organized, analyzed, and mapped to the systems engineering Vee-model. These methods are then consolidated and evaluated against the SE process and Model-Based Systems Engineering (MBSE) methodology to determine where and how they could integrate within systems development activities in the form of specific enhancements. Overall conclusions based on these evaluations are presented and future research areas are proposed.

  3. External Contamination Control of Attached Payloads on the International Space Station

    NASA Technical Reports Server (NTRS)

    Soares, Carlos E.; Mikatarian, Ronald R.; Olsen, Randy L.; Huang, Alvin Y.; Steagall, Courtney A.; Schmidl, William D.; Wright, Bruce D.; Koontz, Steven

    2012-01-01

    The International Space Station (ISS) is an on-orbit platform for science utilization in low Earth orbit with multiple sites for external payloads with exposure to the natural and induced environments. Contamination is one of the induced environments that can impact performance, mission success and science utilization on the vehicle. This paper describes the external contamination control requirements and integration process for externally mounted payloads on the ISS. The external contamination control requirements are summarized and a description of the integration and verification process is detailed to guide payload developers in the certification process of attached payloads on the vehicle. A description of the required data certification deliverables covers the characterization of contamination sources. Such characterization includes identification, usage and operational data for each class of contamination source. Classes of external contamination sources covered are vacuum exposed materials, sources of leakage, vacuum venting and thrusters. ISS system level analyses are conducted by the ISS Space Environments Team to certify compliance with external contamination control requirements. This paper also addresses the ISS induced contamination environment at attached payload sites, both at the requirements level as well as measurements made on ISS.

  4. Clinical results of HIS, RIS, PACS integration using data integration CASE tools

    NASA Astrophysics Data System (ADS)

    Taira, Ricky K.; Chan, Hing-Ming; Breant, Claudine M.; Huang, Lu J.; Valentino, Daniel J.

    1995-05-01

    Current infrastructure research in PACS is dominated by the development of communication networks (local area networks, teleradiology, ATM networks, etc.), multimedia display workstations, and hierarchical image storage architectures. However, limited work has been performed on developing flexible, expansible, and intelligent information processing architectures for the vast decentralized image and text data repositories prevalent in healthcare environments. Patient information is often distributed among multiple data management systems. Current large-scale efforts to integrate medical information and knowledge sources have been costly and have yielded limited retrieval functionality. Software integration strategies to unify distributed data and knowledge sources are still lacking commercially. Systems heterogeneity (i.e., differences in hardware platforms, communication protocols, database management software, nomenclature, etc.) is at the heart of the problem and is unlikely to be standardized in the near future. In this paper, we demonstrate the use of newly available CASE (computer-aided software engineering) tools to rapidly integrate HIS, RIS, and PACS information systems. The advantages of these tools include fast development time (low-level code is generated from graphical specifications) and easy system maintenance (excellent documentation, easy changes, and a centralized code repository in an object-oriented database). The CASE tools are used to develop and manage the 'middle-ware' in our client-mediator-server architecture for systems integration. Our architecture is scalable and can accommodate heterogeneous database and communication protocols.

  5. Combining multiple sources of data to inform conservation of Lesser Prairie-Chicken populations

    USGS Publications Warehouse

    Ross, Beth; Haukos, David A.; Hagen, Christian A.; Pitman, James

    2018-01-01

    Conservation of small populations is often based on limited data from spatially and temporally restricted studies, resulting in management actions based on an incomplete assessment of the population drivers. If fluctuations in abundance are related to changes in weather, proper management is especially important, because extreme weather events could disproportionately affect population abundance. Conservation assessments, especially for vulnerable populations, are aided by a knowledge of how extreme events influence population status and trends. Although important for conservation efforts, data may be limited for small or vulnerable populations. Integrated population models maximize information from various sources of data to yield population estimates that fully incorporate uncertainty from multiple data sources while allowing for the explicit incorporation of environmental covariates of interest. Our goal was to assess the relative influence of population drivers for the Lesser Prairie-Chicken (Tympanuchus pallidicinctus) in the core of its range, western and southern Kansas, USA. We used data from roadside lek count surveys, nest monitoring surveys, and survival data from telemetry monitoring combined with climate (Palmer drought severity index) data in an integrated population model. Our results indicate that variability in population growth rate was most influenced by variability in juvenile survival. The Palmer drought severity index had no measurable direct effects on adult survival or mean number of offspring per female; however, there were declines in population growth rate following severe drought. Because declines in population growth rate occurred at a broad spatial scale, declines in response to drought were likely due to decreases in chick and juvenile survival rather than emigration outside of the study area. Overall, our model highlights the importance of accounting for environmental and demographic sources of variability, and provides a thorough method for simultaneously evaluating population demography in response to long-term climate effects.

  6. Linking genes to diseases with a SNPedia-Gene Wiki mashup

    PubMed Central

    2012-01-01

    Background A variety of topic-focused wikis are used in the biomedical sciences to enable the mass-collaborative synthesis and distribution of diverse bodies of knowledge. To address complex problems such as defining the relationships between genes and disease, it is important to bring the knowledge from many different domains together. Here we show how advances in wiki technology and natural language processing can be used to automatically assemble ‘meta-wikis’ that present integrated views over the data collaboratively created in multiple source wikis. Results We produced a semantic meta-wiki called the Gene Wiki+ that automatically mirrors and integrates data from the Gene Wiki and SNPedia. The Gene Wiki+, available at (http://genewikiplus.org/), captures 8,047 distinct gene-disease relationships. SNPedia accounts for 4,149 of the gene-disease pairs, the Gene Wiki provides 4,377 and only 479 appear independently in both sources. All of this content is available to query and browse and is provided as linked open data. Conclusions Wikis contain increasing amounts of diverse, biological information useful for elucidating the connections between genes and disease. The Gene Wiki+ shows how wiki technology can be used in concert with natural language processing to provide integrated views over diverse underlying data sources. PMID:22541597

  7. 22 W coherent GaAlAs amplifier array with 400 emitters

    NASA Technical Reports Server (NTRS)

    Krebs, D.; Herrick, R.; No, K.; Harting, W.; Struemph, F.

    1991-01-01

    Greater than 22 W of optical power has been demonstrated from a multiple-emitter, traveling-wave semiconductor amplifier, with approximately 87 percent of the output at the frequency of the injection source. The device integrates, in AlGaAs graded-index separate-confinement heterostructure single quantum well (GRINSCH-SQW) epitaxy, 400 ridge waveguide amplifiers with a coherent optical signal distribution circuit on a 12 x 6 mm chip.

  8. Force on Force Modeling with Formal Task Structures and Dynamic Geometry

    DTIC Science & Technology

    2017-03-24

    task framework, derived using the MMF methodology to structure a complex mission. It further demonstrated the integration of effects from a range of... application methodology was intended to support a combined developmental testing (DT) and operational testing (OT) strategy for selected systems under test... methodology to develop new or modify existing Models and Simulations (M&S) to: • Apply data from multiple, distributed sources (including test

  9. Identifying Students for Secondary and Tertiary Prevention Efforts: How Do We Determine Which Students Have Tier 2 and Tier 3 Needs?

    ERIC Educational Resources Information Center

    Lane, Kathleen Lynne; Oakes, Wendy Peia; Ennis, Robin Parks; Hirsch, Shanna Eisner

    2014-01-01

    In comprehensive, integrated, three-tiered models, it is essential to have a systematic method for identifying students who need supports at Tier 2 or Tier 3. This article provides explicit information on how to use multiple sources of data to determine which students might benefit from these supports. First, the authors provide an overview of how…

  10. Opportunities to utilize traditional phenological knowledge to support adaptive management of social-ecological systems vulnerable to changes in climate and fire regimes

    Treesearch

    Christopher A. Armatas; Tyron J. Venn; Brooke B. McBride; Alan E. Watson; Steve J. Carver

    2016-01-01

    The field of adaptive management has been embraced by researchers and managers in the United States as an approach to improve natural resource stewardship in the face of uncertainty and complex environmental problems. Integrating multiple knowledge sources and feedback mechanisms is an important step in this approach. Our objective is to contribute to the...

  11. Integration of enabling methods for the automated flow preparation of piperazine-2-carboxamide

    PubMed Central

    Ingham, Richard J; Battilocchio, Claudio; Hawkins, Joel M

    2014-01-01

    Summary Here we describe the use of a new open-source software package and a Raspberry Pi® computer for the simultaneous control of multiple flow chemistry devices and its application to a machine-assisted, multi-step flow preparation of pyrazine-2-carboxamide – a component of Rifater®, used in the treatment of tuberculosis – and its reduced derivative piperazine-2-carboxamide. PMID:24778715

  12. iGC-an integrated analysis package of gene expression and copy number alteration.

    PubMed

    Lai, Yi-Pin; Wang, Liang-Bo; Wang, Wei-An; Lai, Liang-Chuan; Tsai, Mong-Hsun; Lu, Tzu-Pin; Chuang, Eric Y

    2017-01-14

    With the advancement in high-throughput technologies, researchers can simultaneously investigate gene expression and copy number alteration (CNA) data from individual patients at a lower cost. Traditional analysis methods analyze each type of data individually and integrate their results using Venn diagrams. Challenges arise, however, when the results are irreproducible and inconsistent across multiple platforms. To address these issues, one possible approach is to concurrently analyze both gene expression profiling and CNAs in the same individual. We have developed an open-source R/Bioconductor package (iGC). Multiple input formats are supported and users can define their own criteria for identifying differentially expressed genes driven by CNAs. The analysis of two real microarray datasets demonstrated that the CNA-driven genes identified by the iGC package showed significantly higher Pearson correlation coefficients with their gene expression levels and copy numbers than those genes located in a genomic region with CNA. Compared with the Venn diagram approach, the iGC package showed better performance. The iGC package is effective and useful for identifying CNA-driven genes. By simultaneously considering both comparative genomic and transcriptomic data, it can provide better understanding of biological and medical questions. The iGC package's source code and manual are freely available at https://www.bioconductor.org/packages/release/bioc/html/iGC.html.
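
    The correlation criterion reported here is easy to reproduce in outline: a gene is a CNA-driven candidate when its expression tracks its copy number across samples. The cutoff values below are hypothetical, and iGC itself lets users supply their own criteria.

        import numpy as np
        from scipy import stats

        def cna_driven(expression, copy_number, r_min=0.5, p_max=0.05):
            """Flag a gene as CNA-driven when expression correlates with copy
            number across samples (illustrative cutoffs)."""
            r, p = stats.pearsonr(copy_number, expression)
            return (r >= r_min and p <= p_max), r

        rng = np.random.default_rng(2)
        cn = rng.choice([-1, 0, 1, 2], size=40)           # per-sample CNA calls
        expr = 2.0 + 0.8 * cn + rng.normal(0.0, 0.5, 40)  # dosage-driven gene
        print(cna_driven(expr, cn))                       # -> (True, r ~ 0.8)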

  13. Environmental and Genetic Determinants of Colony Morphology in Yeast

    PubMed Central

    Granek, Joshua A.; Magwene, Paul M.

    2010-01-01

    Nutrient stresses trigger a variety of developmental switches in the budding yeast Saccharomyces cerevisiae. One of the least understood of such responses is the development of complex colony morphology, characterized by intricate, organized, and strain-specific patterns of colony growth and architecture. The genetic bases of this phenotype and the key environmental signals involved in its induction have heretofore remained poorly understood. By surveying multiple strain backgrounds and a large number of growth conditions, we show that limitation for fermentable carbon sources coupled with a rich nitrogen source is the primary trigger for the colony morphology response in budding yeast. Using knockout mutants and transposon-mediated mutagenesis, we demonstrate that two key signaling networks regulating this response are the filamentous growth MAP kinase cascade and the Ras-cAMP-PKA pathway. We further show synergistic epistasis between Rim15, a kinase involved in integration of nutrient signals, and other genes in these pathways. Ploidy, mating-type, and genotype-by-environment interactions also appear to play a role in controlling colony morphology. Our study highlights the high degree of network reuse in this model eukaryote; yeast use the same core signaling pathways in multiple contexts to integrate information about environmental and physiological states and generate diverse developmental outputs. PMID:20107600

  14. Symmetrical group theory for mathematical complexity reduction of digital holograms

    NASA Astrophysics Data System (ADS)

    Perez-Ramirez, A.; Guerrero-Juk, J.; Sanchez-Lara, R.; Perez-Ramirez, M.; Rodriguez-Blanco, M. A.; May-Alarcon, M.

    2017-10-01

    This work presents the use of mathematical group theory through an algorithm to reduce the multiplicative computational complexity of creating digital holograms. An object is considered as a set of point sources, using the mathematical symmetry properties of both the kernel of the Fresnel integral and the image, where the image is modeled using group theory. The algorithm has multiplicative complexity equal to zero and additive complexity (k − 1) × N for the case of sparse matrices and binary images, where k is the number of nonzero pixels and N is the total number of points in the image.

  15. An integrated data model to estimate spatiotemporal occupancy, abundance, and colonization dynamics

    USGS Publications Warehouse

    Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Esslinger, George G.; Bower, Michael R.; Hefley, Trevor J.

    2017-01-01

    Ecological invasions and colonizations occur dynamically through space and time. Estimating the distribution and abundance of colonizing species is critical for efficient management or conservation. We describe a statistical framework for simultaneously estimating spatiotemporal occupancy and abundance dynamics of a colonizing species. Our method accounts for several issues that are common when modeling spatiotemporal ecological data including multiple levels of detection probability, multiple data sources, and computational limitations that occur when making fine-scale inference over a large spatiotemporal domain. We apply the model to estimate the colonization dynamics of sea otters (Enhydra lutris) in Glacier Bay, in southeastern Alaska.

  16. An environmental-level, real-time, pulsed photon dosemeter.

    PubMed

    Olsher, R H; Frymire, A; Gregoire, T

    2005-01-01

    Radiation sources producing short pulses of photon radiation are widespread. Such sources include electron linear accelerators and field emission impulse generators. It is often desirable to measure leakage and skyshine radiation for these sources in real time and at environmental levels as low as 0.02 microSv per pulse. This note provides an overview of the design and performance of a commercial, real-time, pulsed photon dosemeter (PPD) capable of single-pulse dose measurements over the range from 0.02 to 20 microSv. The PPD may also be operated in a multiple-pulse mode that integrates the dose from a train of pulses over a 3 s period. A pulse repetition rate of up to 300 Hz is accommodated.

  17. A high-order relaxation method with projective integration for solving nonlinear systems of hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Lafitte, Pauline; Melis, Ward; Samaey, Giovanni

    2017-07-01

    We present a general, high-order, fully explicit relaxation scheme which can be applied to any system of nonlinear hyperbolic conservation laws in multiple dimensions. The scheme consists of two steps. In a first (relaxation) step, the nonlinear hyperbolic conservation law is approximated by a kinetic equation with a stiff BGK source term. Then, this kinetic equation is integrated in time using a projective integration method. After taking a few small (inner) steps with a simple, explicit method (such as direct forward Euler) to damp out the stiff components of the solution, the time derivative is estimated and used in an (outer) Runge-Kutta method of arbitrary order. We show that, with an appropriate choice of inner step size, the time step restriction on the outer time step is similar to the CFL condition for the hyperbolic conservation law. Moreover, the number of inner time steps is also independent of the stiffness of the BGK source term. We discuss stability and consistency, and illustrate with numerical results (linear advection, Burgers' equation and the shallow water and Euler equations) in one and two spatial dimensions.
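
    The two-step construction can be sketched in a few lines: a handful of small inner forward Euler steps damp the stiff components, after which the chord through the last two inner iterates is extrapolated over the remainder of the large outer step. The sketch below uses a linear test system rather than a relaxed conservation law.

        import numpy as np

        def projective_euler_step(f, y, dt_in, k, dt_out):
            """One projective forward Euler step: damp stiff modes with k + 1
            small inner steps, then extrapolate the last chord outward."""
            for _ in range(k):
                y = y + dt_in * f(y)                 # inner damping steps
            y_last = y + dt_in * f(y)                # final inner step
            slope = (y_last - y) / dt_in             # estimated time derivative
            return y_last + (dt_out - (k + 1) * dt_in) * slope

        f = lambda y: np.array([-1.0, -100.0]) * y   # slow and stiff modes
        y = np.array([1.0, 1.0])
        for _ in range(50):                          # integrate to t = 1
            y = projective_euler_step(f, y, dt_in=0.002, k=4, dt_out=0.02)
        print(y)                 # exact solution: exp(-1) ~ 0.368 and ~0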

  18. A boundary element approach to optimization of active noise control sources on three-dimensional structures

    NASA Technical Reports Server (NTRS)

    Cunefare, K. A.; Koopmann, G. H.

    1991-01-01

    This paper presents the theoretical development of an approach to active noise control (ANC) applicable to three-dimensional radiators. The active noise control technique, termed ANC Optimization Analysis, is based on minimizing the total radiated power by adding secondary acoustic sources on the primary noise source. ANC Optimization Analysis determines the optimum magnitude and phase at which to drive the secondary control sources in order to achieve the best possible reduction in the total radiated power from the noise source/control source combination. For example, ANC Optimization Analysis predicts a 20 dB reduction in the total power radiated from a sphere of radius a at a dimensionless wavenumber ka of 0.125, for a single control source representing 2.5 percent of the total area of the sphere. ANC Optimization Analysis is based on a boundary element formulation of the Helmholtz Integral Equation, and thus the optimization analysis applies to a single frequency, while multiple frequencies can be treated through repeated analyses.
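
    The optimization itself reduces to linear algebra once the radiated power is written as a Hermitian quadratic form in the complex control-source strengths; the coupling matrices below are illustrative placeholders, not output from a boundary element model.

        import numpy as np

        # Radiated power as a Hermitian quadratic form in the complex control
        # drives q:  P(q) = q^H A q + 2 Re(b^H q) + c.  The optimum solves
        # the linear system A q = -b.
        A = np.array([[2.0 + 0.0j, 0.3 + 0.1j],
                      [0.3 - 0.1j, 1.5 + 0.0j]])   # control-control coupling
        b = np.array([1.0 + 0.5j, -0.4 + 0.2j])    # control-primary coupling
        c = 3.0                                    # primary source power alone

        q_opt = np.linalg.solve(A, -b)             # optimal magnitudes and phases
        P_opt = np.real(q_opt.conj() @ A @ q_opt) + 2 * np.real(b.conj() @ q_opt) + c
        print(q_opt, 10 * np.log10(c / P_opt))     # drives and dB reduction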

  19. Advances in the computation of the Sjöstrand, Rossi, and Feynman distributions

    DOE PAGES

    Talamo, A.; Gohar, Y.; Gabrielli, F.; ...

    2017-02-01

    This study illustrates recent computational advances in the application of the Sjöstrand (area), Rossi, and Feynman methods to estimate the effective multiplication factor of a subcritical system driven by an external neutron source. The methodologies introduced in this study have been validated with the experimental results from the KUCA facility of Japan by Monte Carlo (MCNP6 and MCNPX) and deterministic (ERANOS, VARIANT, and PARTISN) codes. When the assembly is driven by a pulsed neutron source generated by a particle accelerator and delayed neutrons are at equilibrium, the Sjöstrand method becomes extremely fast if the integral of the reaction rate from a single pulse is split into two parts, distinguishing between the neutron counts during and after the pulse period. To conclude, when the facility is driven by a spontaneous fission neutron source, the timestamps of the detector neutron counts can be obtained with up to nanosecond precision using MCNP6, which allows obtaining the Rossi and Feynman distributions.
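
    The area-method split described above can be sketched numerically: approximate the delayed-neutron contribution by the tail plateau of the pulse response, then form the prompt-to-delayed area ratio. The pulse shape and plateau level below are synthetic.

        import numpy as np

        def sjostrand_rho_dollars(t, counts, t_tail):
            """Sjöstrand (area) method sketch for one pulse period: estimate
            the delayed level from the tail plateau, then form
            rho($) = -A_prompt / A_delayed."""
            delayed_level = counts[t >= t_tail].mean()   # delayed plateau
            area_delayed = delayed_level * (t[-1] - t[0])
            area_prompt = np.trapz(counts, t) - area_delayed
            return -area_prompt / area_delayed

        # Synthetic pulse response: prompt decay on top of a delayed plateau
        t = np.linspace(0.0, 0.05, 500)              # one 50 ms pulse period (s)
        counts = 1e4 * np.exp(-t / 0.002) + 50.0     # prompt + delayed counts
        print(f"rho = {sjostrand_rho_dollars(t, counts, t_tail=0.03):.2f} $")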

  20. Integrated genome browser: visual analytics platform for genomics.

    PubMed

    Freese, Nowlan H; Norris, David C; Loraine, Ann E

    2016-07-15

    Genome browsers that support fast navigation through vast datasets and provide interactive visual analytics functions can help scientists achieve deeper insight into biological systems. Toward this end, we developed Integrated Genome Browser (IGB), a highly configurable, interactive and fast open source desktop genome browser. Here we describe multiple updates to IGB, including all-new capabilities to display and interact with data from high-throughput sequencing experiments. To demonstrate, we describe example visualizations and analyses of datasets from RNA-Seq, ChIP-Seq and bisulfite sequencing experiments. Understanding results from genome-scale experiments requires viewing the data in the context of reference genome annotations and other related datasets. To facilitate this, we enhanced IGB's ability to consume data from diverse sources, including Galaxy, Distributed Annotation and IGB-specific Quickload servers. To support future visualization needs as new genome-scale assays enter wide use, we transformed the IGB codebase into a modular, extensible platform for developers to create and deploy all-new visualizations of genomic data. IGB is open source and is freely available from http://bioviz.org/igb aloraine@uncc.edu. © The Author 2016. Published by Oxford University Press.

  1. Integration of optical imaging with a small animal irradiator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weersink, Robert A., E-mail: robert.weersink@rmp.uhn.on.ca; Ansell, Steve; Wang, An

    Purpose: The authors describe the integration of optical imaging with a targeted small animal irradiator device, focusing on design, instrumentation, 2D to 3D image registration, 2D targeting, and the accuracy of recovering and mapping the optical signal to a 3D surface generated from the cone-beam computed tomography (CBCT) imaging. The integration of optical imaging will improve targeting of the radiation treatment and offer longitudinal tracking of tumor response of small animal models treated using the system. Methods: The existing image-guided small animal irradiator consists of a variable kilovolt (peak) x-ray tube mounted opposite an aSi flat panel detector, both mounted on a c-arm gantry. The tube is used for both CBCT imaging and targeted irradiation. The optical component employs a CCD camera perpendicular to the x-ray treatment/imaging axis with a computer controlled filter for spectral decomposition. Multiple optical images can be acquired at any angle as the gantry rotates. The optical to CBCT registration, which uses a standard pinhole camera model, was modeled and tested using phantoms with markers visible in both optical and CBCT images. Optically guided 2D targeting in the anterior/posterior direction was tested on an anthropomorphic mouse phantom with embedded light sources. The accuracy of the mapping of optical signal to the CBCT surface was tested using the same mouse phantom. A surface mesh of the phantom was generated based on the CBCT image and optical intensities projected onto the surface. The measured surface intensity was compared to the calculated surface intensity for a point source at the actual source position. The point-source position was also optimized to provide the closest match between measured and calculated intensities, and the distance between the optimized and actual source positions was then calculated. This process was repeated for multiple wavelengths and sources. Results: The optical to CBCT registration error was 0.8 mm. Two-dimensional targeting of a light source in the mouse phantom based on optical imaging along the anterior/posterior direction was accurate to 0.55 mm. The mean square residual error in the normalized measured projected surface intensities versus the calculated normalized intensities ranged between 0.0016 and 0.006. Optimizing the position reduced this error to between 0.00016 and 0.0004, with distances ranging between 0.7 and 1 mm between the actual and calculated source positions. Conclusions: The integration of optical imaging on an existing small animal irradiation platform has been accomplished. A targeting accuracy of 1 mm can be achieved in rigid, homogeneous phantoms. The combination of optical imaging with a CBCT image-guided small animal irradiator offers the potential to deliver functionally targeted dose distributions, as well as monitor spatial and temporal functional changes that occur with radiation therapy.
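
    The registration rests on the standard pinhole camera model, which maps CBCT-frame 3D points into optical image pixels; the intrinsics and pose below are illustrative placeholders, not the authors' calibration.

        import numpy as np

        def project(K, R, t, points3d):
            """Pinhole model: x ~ K [R | t] X, mapping CBCT-frame points (mm)
            into optical image pixel coordinates."""
            X = R @ points3d.T + t[:, None]     # camera-frame coordinates
            x = K @ X
            return (x[:2] / x[2]).T             # perspective divide -> pixels

        K = np.array([[1500.0, 0.0, 320.0],     # focal lengths and principal
                      [0.0, 1500.0, 240.0],     # point, in pixels (illustrative)
                      [0.0, 0.0, 1.0]])
        R, t = np.eye(3), np.array([0.0, 0.0, 300.0])  # camera 300 mm from isocenter
        markers = np.array([[0.0, 0.0, 0.0], [10.0, -5.0, 2.0]])
        print(project(K, R, t, markers))        # predicted marker pixels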

  2. Evaluating the effect of integrated microfinance and health interventions: an updated review of the evidence.

    PubMed

    Lorenzetti, Lara M J; Leatherman, Sheila; Flax, Valerie L

    2017-06-01

    Solutions delivered within firm sectoral boundaries are inadequate for achieving income security and better health for poor populations. Integrated microfinance and health interventions leverage networks of women to promote financial inclusion, build livelihoods, and safeguard against high-cost illnesses. Our understanding of the effect of integrated interventions has been limited by variability in intervention, outcome, design, and methodological rigour. This systematic review synthesises the literature through 2015 to understand the effect of integrated microfinance and health programs. We searched PubMed, Scopus, Embase, EconLit, and Global Health databases and sourced bibliographies, identifying 964 articles exclusive of duplicates. Title, abstract, and full text review yielded 35 articles. Articles evaluated the effect of intentionally integrated microfinance and health programs on client outcomes. We rated the quality of evidence for each article. Most interventions combined microfinance with health education, which demonstrated positive effects on health knowledge and behaviours, though not health status. Among programs that integrated microfinance with other health components (i.e., health micro-insurance, linkages to health providers, and access to health products), results were generally positive but mixed due to the smaller number and lower quality of studies. Interventions combining multiple health components in a given study demonstrated positive effects, though it was unclear which component was driving the effect. Most articles (57%) were moderate in quality. Integrated microfinance and health education programs were effective, though longer intervention periods are necessary to measure more complex pathways to health status. The effect of microfinance combined with other health components was less clear. Stronger randomized research designs with multiple study arms are required to improve the evidence and disentangle the effects of multi-component microfinance and health interventions. Few studies attempted to understand changes in economic outcomes, limiting our understanding of the relationship between health and income effects. © The Author 2017. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  3. Integrative Analysis of the Physical Transport Network into Australia.

    PubMed

    Cope, Robert C; Ross, Joshua V; Wittmann, Talia A; Prowse, Thomas A A; Cassey, Phillip

    2016-01-01

    Effective biosecurity is necessary to protect nations and their citizens from a variety of threats, including emerging infectious diseases, agricultural or environmental pests and pathogens, and illegal wildlife trade. The physical pathways by which these threats are transported internationally, predominantly shipping and air traffic, have undergone significant growth and changes in spatial distributions in recent decades. An understanding of the specific pathways and donor-traffic hotspots created by this integrated physical transport network is vital for the development of effective biosecurity strategies into the future. In this study, we analysed the physical transport network into Australia over the period 1999-2012. Seaborne and air traffic were weighted to calculate a "weighted cumulative impact" score for each source region worldwide, each year. High risk source regions, and those source regions that underwent substantial changes in risk over the study period, were determined. An overall risk ranking was calculated by integrating across all possible weighting combinations. The source regions having greatest overall physical connectedness with Australia were Singapore, which is a global transport hub, and the North Island of New Zealand, a close regional trading partner with Australia. Both those regions with large amounts of traffic across multiple vectors (e.g., Hong Kong), and those with high levels of traffic of only one type (e.g., Bali, Indonesia with respect to passenger flights), were represented among high risk source regions. These data provide a baseline model for the transport of individuals and commodities against which the effectiveness of biosecurity controls may be assessed, and are a valuable tool in the development of future biosecurity policy.

  4. Integrative Analysis of the Physical Transport Network into Australia

    PubMed Central

    Cope, Robert C.; Ross, Joshua V.; Wittmann, Talia A.; Prowse, Thomas A. A.; Cassey, Phillip

    2016-01-01

    Effective biosecurity is necessary to protect nations and their citizens from a variety of threats, including emerging infectious diseases, agricultural or environmental pests and pathogens, and illegal wildlife trade. The physical pathways by which these threats are transported internationally, predominantly shipping and air traffic, have undergone significant growth and changes in spatial distributions in recent decades. An understanding of the specific pathways and donor-traffic hotspots created by this integrated physical transport network is vital for the development of effective biosecurity strategies into the future. In this study, we analysed the physical transport network into Australia over the period 1999–2012. Seaborne and air traffic were weighted to calculate a “weighted cumulative impact” score for each source region worldwide, each year. High risk source regions, and those source regions that underwent substantial changes in risk over the study period, were determined. An overall risk ranking was calculated by integrating across all possible weighting combinations. The source regions having greatest overall physical connectedness with Australia were Singapore, which is a global transport hub, and the North Island of New Zealand, a close regional trading partner with Australia. Both those regions with large amounts of traffic across multiple vectors (e.g., Hong Kong), and those with high levels of traffic of only one type (e.g., Bali, Indonesia with respect to passenger flights), were represented among high risk source regions. These data provide a baseline model for the transport of individuals and commodities against which the effectiveness of biosecurity controls may be assessed, and are a valuable tool in the development of future biosecurity policy. PMID:26881782
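
    The weighting-and-integration step described in both versions of this record can be sketched as follows; the normalized traffic values and the linear weighting form are assumptions for illustration.

        import numpy as np

        def weighted_cumulative_impact(sea, air, weights):
            """Score regions for each seaborne/air weighting, then average
            across weightings for an overall risk ranking."""
            scores = np.array([w * sea + (1.0 - w) * air for w in weights])
            return scores.mean(axis=0)

        regions = ["Singapore", "NZ North Island", "Hong Kong"]
        sea = np.array([0.90, 0.70, 0.80])   # normalized seaborne traffic
        air = np.array([0.95, 0.80, 0.60])   # normalized air traffic
        overall = weighted_cumulative_impact(sea, air, np.linspace(0, 1, 11))
        for score, region in sorted(zip(overall, regions), reverse=True):
            print(f"{region}: {score:.3f}")  # overall risk ranking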

  5. Mining large heterogeneous data sets in drug discovery.

    PubMed

    Wild, David J

    2009-10-01

    Increasingly, effective drug discovery involves the searching and data mining of large volumes of information from many sources covering the domains of chemistry, biology and pharmacology, amongst others. This has led to a proliferation of databases and data sources relevant to drug discovery. This paper provides a review of the publicly available large-scale databases relevant to drug discovery, describes the kinds of data mining approaches that can be applied to them, and discusses recent work in integrative data mining that looks for associations that span multiple sources, including the use of Semantic Web techniques. The future of mining large data sets for drug discovery requires intelligent, semantic aggregation of information from all of the data sources described in this review, along with the application of advanced methods such as intelligent agents and inference engines in client applications.

  6. MetaMeta: integrating metagenome analysis tools to improve taxonomic profiling.

    PubMed

    Piro, Vitor C; Matschkowski, Marcel; Renard, Bernhard Y

    2017-08-14

    Many metagenome analysis tools are presently available to classify sequences and profile environmental samples. In particular, taxonomic profiling and binning methods are commonly used for such tasks. Tools available in these two categories make use of several techniques, e.g., read mapping, k-mer alignment, and composition analysis. Variations on the construction of the corresponding reference sequence databases are also common. In addition, different tools provide good results in different datasets and configurations. All this variation creates a complicated scenario for researchers deciding which methods to use. Installation, configuration, and execution can also be difficult, especially when dealing with multiple datasets and tools. We propose MetaMeta: a pipeline to execute and integrate results from metagenome analysis tools. MetaMeta provides an easy workflow to run multiple tools with multiple samples, producing a single enhanced output profile for each sample. MetaMeta includes database generation, pre-processing, execution, and integration steps, allowing easy execution and parallelization. The integration relies on the co-occurrence of organisms across different methods as the main feature to improve community profiling while accounting for differences in their databases. In a controlled case with simulated and real data, we show that the integrated profiles of MetaMeta outperform the best single profile. Using the same input data, MetaMeta provides more sensitive and reliable results, with the presence of each organism being supported by several methods. MetaMeta uses Snakemake and has six pre-configured tools, all available at the BioConda channel for easy installation (conda install -c bioconda metameta). The MetaMeta pipeline is open-source and can be downloaded at: https://gitlab.com/rki_bioinformatics.
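
    The co-occurrence idea can be reduced to a toy merge rule: keep a taxon when enough tools report it, and average its relative abundance over those tools. MetaMeta's actual scoring is more elaborate; the profiles below are made up.

        def merge_profiles(profiles, min_tools=2):
            """Co-occurrence integration sketch: keep a taxon when at least
            `min_tools` profilers report it, averaging its relative abundance
            over the tools that did."""
            taxa = {taxon for p in profiles for taxon in p}
            merged = {}
            for taxon in taxa:
                hits = [p[taxon] for p in profiles if taxon in p]
                if len(hits) >= min_tools:
                    merged[taxon] = sum(hits) / len(hits)
            return merged

        tool_a = {"E. coli": 0.50, "B. subtilis": 0.30, "phantom sp.": 0.20}
        tool_b = {"E. coli": 0.55, "B. subtilis": 0.25}
        tool_c = {"E. coli": 0.60}
        print(merge_profiles([tool_a, tool_b, tool_c]))  # drops "phantom sp."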

  7. Semantic Web meets Integrative Biology: a survey.

    PubMed

    Chen, Huajun; Yu, Tong; Chen, Jake Y

    2013-01-01

    Integrative Biology (IB) uses experimental or computational quantitative technologies to characterize biological systems at the molecular, cellular, tissue and population levels. IB typically involves the integration of the data, knowledge and capabilities across disciplinary boundaries in order to solve complex problems. We identify a series of bioinformatics problems posed by interdisciplinary integration: (i) data integration that interconnects structured data across related biomedical domains; (ii) ontology integration that brings jargons, terminologies and taxonomies from various disciplines into a unified network of ontologies; (iii) knowledge integration that integrates disparate knowledge elements from multiple sources; (iv) service integration that builds applications out of services provided by different vendors. We argue that IB can benefit significantly from the integration solutions enabled by Semantic Web (SW) technologies. The SW enables scientists to share content beyond the boundaries of applications and websites, resulting in a web of data that is meaningful and understandable to computers. In this review, we provide insight into how SW technologies can be used to build open, standardized and interoperable solutions for interdisciplinary integration on a global basis. We present a rich set of case studies in systems biology, integrative neuroscience, bio-pharmaceutics and translational medicine, to highlight the technical features and benefits of SW applications in IB.

  8. A semantic problem solving environment for integrative parasite research: identification of intervention targets for Trypanosoma cruzi.

    PubMed

    Parikh, Priti P; Minning, Todd A; Nguyen, Vinh; Lalithsena, Sarasi; Asiaee, Amir H; Sahoo, Satya S; Doshi, Prashant; Tarleton, Rick; Sheth, Amit P

    2012-01-01

    Research on the biology of parasites requires a sophisticated and integrated computational platform to query and analyze large volumes of data, representing both unpublished (internal) and public (external) data sources. Effective analysis of an integrated data resource using knowledge discovery tools would significantly aid biologists in conducting their research, for example, through identifying various intervention targets in parasites and in deciding the future direction of ongoing as well as planned projects. A key challenge in achieving this objective is the heterogeneity between the internal lab data, usually stored as flat files, Excel spreadsheets or custom-built databases, and the external databases. Reconciling the different forms of heterogeneity and effectively integrating data from disparate sources is a nontrivial task for biologists and requires a dedicated informatics infrastructure. Thus, we developed an integrated environment using Semantic Web technologies that may provide biologists with the tools for managing and analyzing their data, without the need for acquiring in-depth computer science knowledge. We developed a semantic problem-solving environment (SPSE) that uses ontologies to integrate internal lab data with external resources in a Parasite Knowledge Base (PKB), which has the ability to query across these resources in a unified manner. The SPSE includes Web Ontology Language (OWL)-based ontologies, experimental data with provenance information represented using the Resource Description Framework (RDF), and a visual querying tool, Cuebee, that features integrated use of Web services. We demonstrate the use and benefit of SPSE using example queries for identifying gene knockout targets of Trypanosoma cruzi for vaccine development. Answers to these queries involve looking up multiple sources of data, linking them together, and presenting the results. The SPSE helps parasitologists leverage the growing, but disparate, parasite data resources by offering an integrative platform that utilizes Semantic Web techniques, while keeping the increase in their workload minimal.

  9. Integrating data and mashup concepts in Hydro-Meteorological Research: the torrential rainfall event in Genoa (4th November 2011) case study.

    NASA Astrophysics Data System (ADS)

    Bedrina, T.; Parodi, A.; Quarati, A.; Clematis, A.; Rebora, N.; Laiosa, D.

    2012-04-01

    One of the critical issues in Hydro-Meteorological Research (HMR) is better exploitation of data archives from a multidisciplinary perspective. Different Earth science databases offer a huge amount of observational data, which often need to be assembled, processed, and combined according to HM scientists' needs. Cooperation between scientists active in HMR and Information and Communication Technologies (ICT) is essential for developing innovative tools and applications for manipulating, aggregating and re-arranging heterogeneous information in a flexible way. This paper describes an application devoted to the collection and integration of HM datasets, originated by public or private sources and freely exposed via Web service APIs. The application uses mashup technology concepts, which have recently become very popular in many fields (Chow S.-W., 2007). This methodology combines data and/or programs published by external online sources into an integrated experience. Mashups appear to be a promising way to support the many data-related activities in which HM researchers are involved daily (e.g. finding and retrieving high volume data; learning formats and developing readers; extracting parameters; performing filtering and masking; developing analysis and visualization tools). The specific case study of the recent extreme rainfall event that occurred over Genoa, Italy on 4 November 2011 is presented through the integration of semi-professional weather observational networks as a freely available data source in addition to official weather networks.
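
    A mashup of this kind reduces, at its core, to fetching records from independent web APIs and joining them on a shared key such as a timestamp. The sketch below illustrates this with the requests library; the URLs and JSON layouts are invented for illustration.

    ```python
    # Hedged sketch of the mashup idea: pull rain observations from two
    # hypothetical web APIs and combine them on a common timestamp.
    import requests

    official = requests.get("https://example.org/api/official/rain").json()
    amateur = requests.get("https://example.org/api/amateur-network/rain").json()

    # Assume both endpoints return [{"time": "...", "mm": float}, ...]
    by_time = {obs["time"]: {"official_mm": obs["mm"]} for obs in official}
    for obs in amateur:
        by_time.setdefault(obs["time"], {})["amateur_mm"] = obs["mm"]

    for time, values in sorted(by_time.items()):
        print(time, values)
    ```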

  10. Integrating Instrumental Data Provides the Full Science in 3D

    NASA Astrophysics Data System (ADS)

    Turrin, M.; Boghosian, A.; Bell, R. E.; Frearson, N.

    2017-12-01

    Looking at data sparks questions, discussion and insights. By integrating multiple data sets we deepen our understanding of how cryosphere processes operate. Field collected data provide measurements from multiple instruments, supporting rapid insights. Icepod provides a platform focused on the integration of multiple instruments. Over the last three seasons, the ROSETTA-Ice project has deployed Icepod to comprehensively map the Ross Ice Shelf, Antarctica. This integrative data collection, along with new methods of data visualization, allows us to answer questions about ice shelf structure and evolution that arise during data processing and review. While data are vetted and archived in the field to confirm instruments are operating, upon return to the lab the data are again reviewed for accuracy before full analysis. A recent review of shallow ice radar data from the Beardmore Glacier, an outlet glacier into the Ross Ice Shelf, revealed an abrupt discontinuity in the ice surface. This sharp 8 m surface elevation drop was originally interpreted as a processing error. The data were reexamined, integrating the simultaneously collected shallow and deep ice radar with lidar data. All the data sources showed the surface discontinuity, confirming the abrupt 8 m drop in surface elevation. Examining high resolution WorldView satellite imagery revealed a persistent source for these elevation drops. The satellite imagery showed that this tear in the ice surface was only one piece of a larger pattern of "chatter marks" in ice that flows at a rate of 300 m/yr. The markings are buried over a distance of 30 km, or after 100 years of travel down Beardmore Glacier towards the front of the Ross Ice Shelf. Using Icepod's lidar and cameras we map this chatter mark feature in 3D to reveal its full structure. We use digital elevation models from WorldView to map the other along-flow chatter marks. To investigate the relationship between these surface features and basal crevasses, we use the deep ice radar to build a 3D model of the base of the ice shelf. The high resolution imagery and radar echograms, along with a VR experience of our 3D models, allow viewers to fully explore the dataset and gain insight into the processes producing these features.

  11. PACOM: A Versatile Tool for Integrating, Filtering, Visualizing, and Comparing Multiple Large Mass Spectrometry Proteomics Data Sets.

    PubMed

    Martínez-Bartolomé, Salvador; Medina-Aunon, J Alberto; López-García, Miguel Ángel; González-Tejedo, Carmen; Prieto, Gorka; Navajas, Rosana; Salazar-Donate, Emilio; Fernández-Costa, Carolina; Yates, John R; Albar, Juan Pablo

    2018-04-06

    Mass-spectrometry-based proteomics has evolved into a high-throughput technology in which numerous large-scale data sets are generated from diverse analytical platforms. Furthermore, several scientific journals and funding agencies have emphasized the storage of proteomics data in public repositories to facilitate its evaluation, inspection, and reanalysis. (1) As a consequence, public proteomics data repositories are growing rapidly. However, tools are needed to integrate multiple proteomics data sets to compare different experimental features or to perform quality control analysis. Here, we present a new Java stand-alone tool, Proteomics Assay COMparator (PACOM), that is able to import, combine, and simultaneously compare numerous proteomics experiments to check the integrity of the proteomic data as well as verify data quality. With PACOM, the user can detect sources of error that may have been introduced at any step of a proteomics workflow and that influence the final results. Data sets can be easily compared and integrated, and data quality and reproducibility can be visually assessed through a rich set of graphical representations of proteomics data features as well as a wide variety of data filters. Its flexibility and easy-to-use interface make PACOM a unique tool for daily use in a proteomics laboratory. PACOM is available at https://github.com/smdb21/pacom.

  12. OpenLMD, multimodal monitoring and control of LMD processing

    NASA Astrophysics Data System (ADS)

    Rodríguez-Araújo, Jorge; García-Díaz, Antón

    2017-02-01

    This paper presents OpenLMD, a novel open-source solution for on-line multimodal monitoring of Laser Metal Deposition (LMD). The solution is also applicable to a wider range of laser-based applications that require on-line control (e.g. laser welding). OpenLMD is a middleware that enables the orchestration and virtualization of a LMD robot cell, using several open-source frameworks (e.g. ROS, OpenCV, PCL). The solution also allows reconfiguration by easy integration of multiple sensors and processing equipment. As a result, OpenLMD delivers significant advantages over existing monitoring and control approaches, such as improved scalability, and multimodal monitoring and data sharing capabilities.

  13. A multiple hypotheses uncertainty analysis in hydrological modelling: about model structure, landscape parameterization, and numerical integration

    NASA Astrophysics Data System (ADS)

    Pilz, Tobias; Francke, Till; Bronstert, Axel

    2016-04-01

    To date, a large number of competing computer models have been developed to understand hydrological processes and to simulate and predict the streamflow dynamics of rivers. This is primarily the result of the lack of a unified theory in catchment hydrology, due to insufficient process understanding and uncertainties related to model development and application. The goal of this study is therefore to analyze the uncertainty structure of a process-based hydrological catchment model employing a multiple hypotheses approach. The study focuses on three major problems that have received only little attention in previous investigations. First, it estimates the impact of model structural uncertainty by employing several alternative representations for each simulated process. Second, it explores the influence of landscape discretization and parameterization from multiple datasets and user decisions. Third, it employs several numerical solvers for the integration of the governing ordinary differential equations to study their effect on simulation results. The generated ensemble of model hypotheses is then analyzed and the three sources of uncertainty compared against each other. To ensure consistency and comparability, all model structures and numerical solvers are implemented within a single simulation environment. First results suggest that the selection of a sophisticated numerical solver for the differential equations positively affects simulation outcomes. However, some simple and easy-to-implement explicit methods already perform surprisingly well and need less computational effort than more advanced but time-consuming implicit techniques. There is general evidence that ambiguous and subjective user decisions form a major source of uncertainty and can greatly influence model development and application at all stages.

  14. Quantifying methane emission from fugitive sources by combining tracer release and downwind measurements - a sensitivity analysis based on multiple field surveys.

    PubMed

    Mønster, Jacob G; Samuelsson, Jerker; Kjeldsen, Peter; Rella, Chris W; Scheutz, Charlotte

    2014-08-01

    Using a dual species methane/acetylene instrument based on cavity ring down spectroscopy (CRDS), the dynamic plume tracer dispersion method for quantifying the emission rate of methane was successfully tested in four measurement campaigns: (1) controlled methane and trace gas release with different trace gas configurations, (2) a landfill with unknown emission source locations, (3) a landfill with closely located emission sources, and (4) comparison with a Fourier transform infrared spectroscopy (FTIR) instrument using multiple trace gases for source separation. The new real-time, high precision instrument can measure methane plumes more than 1.2 km away from small sources (about 5 kg h(-1)) in urban areas with a measurement frequency allowing plume crossing at normal driving speed. The method can be used for quantification of total methane emissions from diffuse area sources down to 1 kg per hour and can be used to quantify individual sources with the right choice of wind direction and road distance. The placement of the trace gas is important for obtaining correct quantification, and uncertainty of up to 36% can be incurred when the trace gas is not co-located with the methane source. Measurements made at greater distances are less sensitive to errors in trace gas placement, and model calculations showed an uncertainty of less than 5% in both urban and open-country settings when the trace gas was placed 100 m from the source and measurements were made more than 3 km away. Using the ratio of the integrated plume concentrations of tracer gas and methane gives the most reliable results for measurements at various distances to the source, compared to the ratio of the highest concentrations in the plume, the direct concentration ratio, and using a Gaussian plume model. Under suitable weather and road conditions, the CRDS system can quantify the emission from different sources located close to each other using only one kind of trace gas due to its high time resolution, while the FTIR system can measure multiple trace gases but with a lower time resolution. Copyright © 2014 Elsevier Ltd. All rights reserved.
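
    The tracer-ratio calculation itself is compact: the methane emission rate follows from the known tracer release rate multiplied by the ratio of the integrated plume concentrations and by the molar-mass ratio. The sketch below shows this with an invented plume transect; it illustrates the ratio method, not the authors' processing code.

    ```python
    # Tracer-ratio emission estimate on a hypothetical plume transect.
    import numpy as np

    x = np.linspace(0.0, 500.0, 200)                # metres along the transect
    ch4 = 0.08 * np.exp(-((x - 250) / 60.0) ** 2)   # CH4 above background (ppm)
    c2h2 = 0.05 * np.exp(-((x - 250) / 60.0) ** 2)  # acetylene tracer (ppm)

    q_tracer = 2.0                                  # tracer release rate, kg/h
    M_CH4, M_C2H2 = 16.04, 26.04                    # molar masses, g/mol

    def integrate(y):
        # trapezoidal integration along the transect
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    q_ch4 = q_tracer * integrate(ch4) / integrate(c2h2) * (M_CH4 / M_C2H2)
    print(f"estimated CH4 emission: {q_ch4:.2f} kg/h")
    ```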

  15. BIAS: Bioinformatics Integrated Application Software.

    PubMed

    Finak, G; Godin, N; Hallett, M; Pepin, F; Rajabi, Z; Srivastava, V; Tang, Z

    2005-04-15

    We introduce a development platform especially tailored to Bioinformatics research and software development. BIAS (Bioinformatics Integrated Application Software) provides the tools necessary for carrying out integrative Bioinformatics research requiring multiple datasets and analysis tools. It follows an object-relational strategy for providing persistent objects, allows third-party tools to be easily incorporated within the system, and supports standards and data-exchange protocols common to Bioinformatics. BIAS is an Open Source project and is freely available to all interested users at http://www.mcb.mcgill.ca/~bias/. This website also contains a paper with a more detailed description of BIAS and a sample implementation of a Bayesian network approach for the simultaneous prediction of gene regulation events and of mRNA expression from combinations of gene regulation events. Contact: hallett@mcb.mcgill.ca.

  16. Defining disease phenotypes using national linked electronic health records: a case study of atrial fibrillation.

    PubMed

    Morley, Katherine I; Wallace, Joshua; Denaxas, Spiros C; Hunter, Ross J; Patel, Riyaz S; Perel, Pablo; Shah, Anoop D; Timmis, Adam D; Schilling, Richard J; Hemingway, Harry

    2014-01-01

    National electronic health records (EHR) are increasingly used for research, but identifying disease cases is challenging due to differences in the information captured between sources (e.g. primary and secondary care). Our objective was to provide a transparent, reproducible model for integrating these data using atrial fibrillation (AF), a chronic condition diagnosed and managed in multiple ways in different healthcare settings, as a case study. Potentially relevant codes for AF screening, diagnosis, and management were identified in four coding systems: Read (primary care diagnoses and procedures), British National Formulary (BNF; primary care prescriptions), ICD-10 (secondary care diagnoses) and OPCS-4 (secondary care procedures). From these we developed a phenotype algorithm via expert review and analysis of linked EHR data from 1998 to 2010 for a cohort of 2.14 million UK patients aged ≥ 30 years. The cohort was also used to evaluate the phenotype by examining associations between incident AF and known risk factors. The phenotype algorithm incorporated 286 codes: 201 Read, 63 BNF, 18 ICD-10, and four OPCS-4. Incident AF diagnoses were recorded for 72,793 patients, but only 39.6% (N = 28,795) were recorded in both primary care and secondary care. An additional 7,468 potential cases were inferred from data on treatment and pre-existing conditions. The proportion of cases identified from each source differed by diagnosis age; inferred diagnoses contributed a greater proportion of younger cases (≤ 60 years), while older patients (≥ 80 years) were mainly diagnosed in secondary care. Associations of risk factors (hypertension, myocardial infarction, heart failure) with incident AF defined using different EHR sources were comparable in magnitude to those from traditional consented cohorts. A single EHR source is not sufficient to identify all patients, nor will it provide a representative sample. Combining multiple data sources and integrating information on treatment and comorbid conditions can substantially improve case identification.
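
    The core of such a phenotype algorithm is a lookup of patient records against curated code lists from each coding system. The sketch below is schematic: the code sets and records are hypothetical (only I48 is the well-known ICD-10 code for atrial fibrillation), and the real algorithm's 286 codes and treatment-based inference rules are far richer.

    ```python
    # Schematic multi-source case finding: a patient counts as an AF
    # case if any linked record carries a relevant code from any of the
    # four coding systems. Codes and records below are hypothetical.
    AF_CODES = {
        "read":  {"G573."},   # primary care diagnoses (hypothetical subset)
        "bnf":   {"2.3.2"},   # prescriptions suggesting AF management
        "icd10": {"I48"},     # secondary care diagnoses
        "opcs4": {"K622"},    # secondary care procedures
    }

    records = [
        {"patient": 1, "system": "icd10", "code": "I48"},
        {"patient": 2, "system": "read",  "code": "H33.."},
        {"patient": 3, "system": "bnf",   "code": "2.3.2"},
    ]

    cases = {r["patient"] for r in records
             if r["code"] in AF_CODES.get(r["system"], set())}
    print(f"AF cases identified: {sorted(cases)}")
    ```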

  17. Integrating Remote and Social Sensing Data for a Scenario on Secure Societies in Big Data Platform

    NASA Astrophysics Data System (ADS)

    Albani, Sergio; Lazzarini, Michele; Koubarakis, Manolis; Taniskidou, Efi Karra; Papadakis, George; Karkaletsis, Vangelis; Giannakopoulos, George

    2016-08-01

    In the framework of the Horizon 2020 project BigDataEurope (Integrating Big Data, Software & Communities for Addressing Europe's Societal Challenges), a pilot for the Secure Societies Societal Challenge was designed considering the requirements coming from relevant stakeholders. The pilot focuses on the integration in a Big Data platform of data coming from remote and social sensing. The information on land changes coming from the Copernicus Sentinel 1A sensor (Change Detection workflow) is integrated with information coming from selected Twitter and news agency accounts (Event Detection workflow) in order to provide the user with multiple sources of information. The Change Detection workflow implements a processing chain in a distributed parallel manner, exploiting the Big Data capabilities in place; the Event Detection workflow implements parallel and distributed social media and news agency monitoring as well as suitable mechanisms to detect and geo-annotate the related events.

  18. Proximal design for a multimodality endoscope with multiphoton microscopy, optical coherence microscopy and visual modalities

    NASA Astrophysics Data System (ADS)

    Kiekens, Kelli C.; Talarico, Olivia; Barton, Jennifer K.

    2018-02-01

    A multimodality endoscope system has been designed for early detection of ovarian cancer. Multiple illumination and detection systems must be integrated in a compact, stable, transportable configuration to meet the requirements of a clinical setting. The proximal configuration presented here supports visible light navigation with a large field of view and low resolution, high resolution multiphoton microscopy (MPM), and high resolution optical coherence microscopy (OCM). All modalities are integrated into a single optical system in the endoscope. The system requires two light sources: a green laser for visible light navigation and a compact fiber based femtosecond laser for MPM and OCM. Using an inline wavelength division multiplexer, the two sources are combined into a single mode fiber. To accomplish OCM, a fiber coupler is used to separate the femtosecond laser into a reference arm and a signal arm. The reflected reference arm and the signal from the sample are interfered, wavelength separated by a reflection grating, and detected using a linear array. The MPM signal is collimated and passes through a series of filters to separate the 2nd and 3rd harmonics as well as two-photon excitation fluorescence (2PEF) and 3PEF. Each signal is independently detected on a photomultiplier tube and amplified. The visible light is collected by multiple high numerical aperture fibers at the endoscope tip, which are bundled into one SMA adapter at the proximal end and connected to a photodetector. This integrated system design is compact, efficient, and meets both optical and mechanical requirements for clinical applications.

  19. Online health information seeking: how people with multiple sclerosis find, assess and integrate treatment information to manage their health.

    PubMed

    Synnot, Anneliese J; Hill, Sophie J; Garner, Kerryn A; Summers, Michael P; Filippini, Graziella; Osborne, Richard H; Shapland, Sue D P; Colombo, Cinzia; Mosconi, Paola

    2016-06-01

    The Internet is increasingly prominent as a source of health information for people with multiple sclerosis (MS). But there has been little exploration of the needs, experiences and preferences of people with MS for integrating treatment information into decision making, in the context of searching on the Internet. This was the aim of our study. Sixty participants (51 people with MS; nine family members) took part in a focus group or online forum. They were asked to describe how they find and assess reliable treatment information (particularly online) and how this changes over time. Thematic analysis was underpinned by a coding frame. Participants described that there was both too much information online and too little that applied to them. They spoke of wariness and scepticism but also empowerment. The availability of up-to-date and unbiased treatment information, including practical and lifestyle-related information, was important to many. Many participants were keen to engage in a 'research partnership' with health professionals and developed a range of strategies to enhance the trustworthiness of online information. We use the term 'self-regulation' to capture the variations in information seeking behaviour that participants described over time, as they responded to their changing information needs, their emotional state and growing expertise about MS. People with MS have developed a number of strategies to both find and integrate treatment information from a range of sources. Their reflections informed the development of an evidence-based consumer web site based on summaries of MS Cochrane reviews. © 2014 John Wiley & Sons Ltd.

  20. The Unresourced Burden on United States Navy Sailors at Sea

    DTIC Science & Technology

    2018-03-01

    through multiple teaching sources, whether through a computer, peers, or an instructor. However, the learning model will likely fail when only one piece...What situations cause the ship to fail to meet maintenance requirements? a. What are the effects when a ship is unable to meet maintenance...In D. A. Boehm-Davis, F. T. Durso, & J. D. Lee (Eds.), Handbook of Human Systems Integration (pp. 277–292). Washington, DC: American Psychological

  1. Comparing top-down and bottom-up estimates of methane emissions across multiple U.S. oil and gas basins provides insights into national O&G emissions, mitigation strategies, and research priorities

    NASA Astrophysics Data System (ADS)

    Lyon, D. R.; Alvarez, R.; Zavala Araiza, D.; Hamburg, S.

    2017-12-01

    We develop a county-level inventory of U.S. anthropogenic methane emissions by integrating multiple data sources including the Drillinginfo oil and gas (O&G) production database, Environmental Protection Agency (EPA) Greenhouse Gas Reporting Program, a previously published gridded EPA Greenhouse Gas Inventory (Maasakkers et al 2016), and recent measurement studies of O&G pneumatic devices, equipment leaks, abandoned wells, and midstream facilities. Our bottom-up estimates of total and O&G methane emissions are consistently lower than top-down, aerial mass balance estimates in ten O&G production areas. We evaluate several hypotheses for the top-down/bottom-up discrepancy including potential bias of the aerial mass balance method, temporal mismatch of top-down and bottom-up emission estimates, and source attribution errors. In most basins, the top-down/bottom-up gap cannot be explained fully without additional O&G emissions from sources not included in traditional inventories, such as super-emitters caused by malfunctions or abnormal process conditions. Top-down/bottom-up differences across multiple basins are analyzed to estimate the magnitude of these additional emissions and constrain total methane emissions from the U.S. O&G supply chain. We discuss the implications for mitigating O&G methane emissions and suggest research priorities for increasing the accuracy of future emission inventories.

  2. Optimizing Irrigation Water Allocation under Multiple Sources of Uncertainty in an Arid River Basin

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Tang, D.; Gao, H.; Ding, Y.

    2015-12-01

    Population growth and climate change add additional pressures affecting water resources management strategies for meeting demands from different economic sectors. This is especially challenging in arid regions where fresh water is limited. For instance, in the Tailanhe River Basin (Xinjiang, China), a compromise must be made between water suppliers and users during drought years. This study presents a multi-objective irrigation water allocation model to cope with water scarcity in arid river basins. To deal with the uncertainties from multiple sources in the water allocation system (e.g., variations of available water amount, crop yield, crop prices, and water price), the model employs an interval linear programming approach. The multi-objective optimization model developed in this study is characterized by integrating ecosystem service theory into water-saving measures. For evaluation purposes, the model is used to construct an optimal allocation system for irrigation areas fed by the Tailan River (Xinjiang Province, China). The objective functions to be optimized are formulated based on these irrigation areas' economic, social, and ecological benefits. The optimal irrigation water allocation plans are made under different hydroclimatic conditions (wet year, normal year, and dry year), with multiple sources of uncertainty represented. The modeling tool and results are valuable for advising decision making by the local water authority and the agricultural community, especially on measures for coping with water scarcity by incorporating uncertain factors associated with crop production planning.
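
    Interval linear programming, in its simplest reading, brackets the optimum by solving the deterministic LP at the endpoints of each uncertain interval. The toy sketch below does this with scipy's linprog for a two-area allocation under an uncertain water supply; the coefficients are invented, and the paper's full multi-objective formulation is considerably richer.

    ```python
    # Toy interval LP: maximize benefit = 3.0*x1 + 2.0*x2 subject to
    # x1 + x2 <= available water, solved at both interval endpoints.
    from scipy.optimize import linprog

    c = [-3.0, -2.0]                 # linprog minimizes, so negate benefits
    for available in (80.0, 120.0):  # water supply interval [80, 120]
        res = linprog(c, A_ub=[[1.0, 1.0]], b_ub=[available],
                      bounds=[(0, 100), (0, 100)])
        print(f"supply={available}: allocation={res.x}, benefit={-res.fun:.1f}")
    ```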

  3. Resolving z ~2 galaxy using adaptive coadded source plane reconstruction

    NASA Astrophysics Data System (ADS)

    Sharma, Soniya; Richard, Johan; Kewley, Lisa; Yuan, Tiantian

    2018-06-01

    Natural magnification provided by gravitational lensing, coupled with integral field spectroscopy (IFS) and adaptive optics (AO) imaging techniques, has become the frontier of spatially resolved studies of high redshift galaxies (z>1). Mass models of gravitational lenses hold the key to understanding the spatially resolved source-plane (unlensed) physical properties of background lensed galaxies. Lensing mass models very sensitively control the accuracy and precision of source-plane reconstructions of observed lensed arcs. The effective source-plane resolution, defined by the image-plane (observed) point spread function (PSF), makes it challenging to recover the unlensed (source-plane) surface brightness distribution. We conduct a detailed study to recover the source-plane physical properties of a z=2 lensed galaxy using spatially resolved observations from two different multiple images of the lensed target. To deal with PSFs from two data sets on different multiple images of the galaxy, we employ a forward (source to image) approach to merge these independent observations. Using our novel technique, we present a detailed analysis of the source-plane dynamics at scales much better than previously attainable through traditional image inversion methods. Moreover, our technique is adapted to magnification, allowing us to achieve higher resolution in highly magnified regions of the source. We find that this lensed system shows strong evidence of a minor merger. In my talk, I present this case study of a z=2 lensed galaxy and also discuss the applications of our algorithm to study a plethora of lensed systems, which will become available through future telescopes like JWST and GMT.

  4. Layout-aware simulation of soft errors in sub-100 nm integrated circuits

    NASA Astrophysics Data System (ADS)

    Balbekov, A.; Gorbunov, M.; Bobkov, S.

    2016-12-01

    Single Event Transients (SETs) caused by a charged particle traveling through the sensitive volume of an integrated circuit (IC) may in some cases lead to different errors in digital circuits. In technologies below 180 nm, a single particle can affect multiple devices, causing multiple SETs. This adds complexity to the design of fault-tolerant devices, because schematic design techniques become useless without considering the layout. The most common layout mitigation technique is spatial separation of the sensitive nodes of hardened circuits. Spatial separation decreases circuit performance and increases power consumption. Spacing should thus be reasonable, and its scaling follows the scaling trend of device dimensions. This paper presents the development of a SET simulation approach comprising SPICE simulation with a "double exponent" current source as the SET model. The technique uses layout in GDSII format to locate nearby devices that can be affected by a single particle and that can share the generated charge. The developed software tool automates multiple simulations and gathers the produced data to present it as a sensitivity map. Examples of conducted simulations of fault-tolerant cells and their sensitivity maps are presented in this paper.

  5. A Query Integrator and Manager for the Query Web

    PubMed Central

    Brinkley, James F.; Detwiler, Landon T.

    2012-01-01

    We introduce two concepts: the Query Web as a layer of interconnected queries over the document web and the semantic web, and a Query Web Integrator and Manager (QI) that enables the Query Web to evolve. QI permits users to write, save and reuse queries over any web accessible source, including other queries saved in other installations of QI. The saved queries may be in any language (e.g. SPARQL, XQuery); the only condition for interconnection is that the queries return their results in some form of XML. This condition allows queries to chain off each other, and to be written in whatever language is appropriate for the task. We illustrate the potential use of QI for several biomedical use cases, including ontology view generation using a combination of graph-based and logical approaches, value set generation for clinical data management, image annotation using terminology obtained from an ontology web service, ontology-driven brain imaging data integration, small-scale clinical data integration, and wider-scale clinical data integration. Such use cases illustrate the current range of applications of QI and lead us to speculate about the potential evolution from smaller groups of interconnected queries into a larger query network that layers over the document and semantic web. The resulting Query Web could greatly aid researchers and others who now have to manually navigate through multiple information sources in order to answer specific questions. PMID:22531831
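
    The single interoperability condition, that queries return XML, is what makes chaining possible. The sketch below mimics it with two stand-in local functions in place of saved web queries: the second query is parameterized by values parsed from the first query's XML output. Function names and XML layouts are invented for illustration.

    ```python
    # Hypothetical illustration of query chaining on XML results.
    import xml.etree.ElementTree as ET

    def query_regions():
        # Stand-in for a saved query returning its results as XML.
        return "<results><region id='amygdala'/><region id='thalamus'/></results>"

    def query_images_for(region_id):
        # Stand-in for a second query parameterized by upstream results.
        return f"<results><image region='{region_id}' n='3'/></results>"

    root = ET.fromstring(query_regions())
    for region in root.findall("region"):
        downstream = ET.fromstring(query_images_for(region.get("id")))
        for img in downstream.findall("image"):
            print(img.get("region"), img.get("n"))
    ```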

  6. Tracing catchment fine sediment sources using the new SIFT (SedIment Fingerprinting Tool) open source software.

    PubMed

    Pulley, S; Collins, A L

    2018-09-01

    The mitigation of diffuse sediment pollution requires reliable provenance information so that measures can be targeted. Sediment source fingerprinting represents one approach for supporting these needs, but recent methodological developments have resulted in an increasing complexity of data processing methods, rendering the approach less accessible to non-specialists. A comprehensive new software programme (SIFT; SedIment Fingerprinting Tool) has therefore been developed which guides the user through critical data analysis decisions and automates all calculations. Multiple source group configurations and composite fingerprints are identified and tested using multiple methods of uncertainty analysis. This aims to explore the sediment provenance information provided by the tracers more comprehensively than a single model, and allows model configurations with high uncertainties to be rejected. This paper provides an overview of its application to an agricultural catchment in the UK, to determine if the approach used can provide a reduction in uncertainty and an increase in precision. Five source group classifications were used: three formed using k-means cluster analysis with 2, 3 and 4 clusters, and two a priori groups based upon catchment geology. Three different composite fingerprints were used for each classification, and bi-plots, range tests, tracer variability ratios and virtual mixtures tested the reliability of each model configuration. Some model configurations performed poorly when apportioning the composition of virtual mixtures, and different model configurations could produce different sediment provenance results despite using composite fingerprints able to discriminate robustly between the source groups. Despite this uncertainty, dominant sediment sources were identified, and those in close proximity to each sediment sampling location were found to be of greatest importance. This new software, by integrating recent methodological developments in tracer data processing, guides users through key steps. Critically, by applying multiple model configurations and uncertainty assessment, it delivers more robust solutions for informing catchment management of the sediment problem than many previously used approaches. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
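
    Under the hood, fingerprinting tools of this kind fit an un-mixing model: find non-negative source proportions, summing to one, that best reproduce the tracer signature of a sediment sample. The sketch below (invented tracer values; a generic model, not SIFT's code) does this with scipy's SLSQP optimizer and a squared-relative-error objective.

    ```python
    # Generic sediment un-mixing sketch with hypothetical tracer data.
    import numpy as np
    from scipy.optimize import minimize

    S = np.array([[12.0, 0.8, 35.0],   # source 1 tracer means (invented)
                  [ 7.0, 2.1, 20.0],   # source 2
                  [ 3.0, 4.0,  9.0]])  # source 3
    mix = np.array([7.5, 2.0, 21.0])   # tracer signature of the sample

    def objective(p):
        pred = p @ S                              # mixture predicted by p
        return np.sum(((mix - pred) / mix) ** 2)  # squared relative errors

    res = minimize(objective, x0=np.full(3, 1 / 3), method="SLSQP",
                   bounds=[(0.0, 1.0)] * 3,
                   constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
    print("estimated source proportions:", np.round(res.x, 3))
    ```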

  7. Entanglement enhancement in multimode integrated circuits

    NASA Astrophysics Data System (ADS)

    Léger, Zacharie M.; Brodutch, Aharon; Helmy, Amr S.

    2018-06-01

    The faithful distribution of entanglement in continuous-variable systems is essential to many quantum information protocols. As such, entanglement distillation and enhancement schemes are a cornerstone of many applications. The photon subtraction scheme offers enhancement with a relatively simple setup and has been studied in various scenarios. Motivated by recent advances in integrated optics, particularly the ability to build stable multimode interferometers with squeezed input states, a multimodal extension to the enhancement via photon subtraction protocol is studied. States generated with multiple squeezed input states, rather than a single input source, are shown to be more sensitive to the enhancement protocol, leading to increased entanglement at the output. Numerical results show the gain in entanglement is not monotonic with the number of modes or the degree of squeezing in the additional modes. Consequently, the advantage due to having multiple squeezed input states can be maximized when the number of modes is still relatively small (e.g., four). The requirement for additional squeezing is within the current realm of implementation, making this scheme achievable with present technologies.

  8. Modelling Concentrating Solar Power with Thermal Energy Storage for Integration Studies (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hummon, M.; Jorgenson, J.; Denholm, P.

    2013-10-01

    Concentrating solar power with thermal energy storage (CSP-TES) can provide multiple benefits to the grid, including low marginal cost energy and the ability to levelize load, provide operating reserves, and provide firm capacity. It is challenging to properly value the integration of CSP because of the complicated nature of this technology. Unlike completely dispatchable fossil sources, CSP is a limited energy resource, depending on the hourly and daily supply of solar energy. To optimize the use of this limited energy, CSP-TES must be implemented in a production cost model with multiple decision variables for the operation of the CSP-TES plant. We develop and implement a CSP-TES plant in a production cost model that accurately characterizes the three main components of the plant: solar field, storage tank, and power block. We show the effect of various modelling simplifications on the value of CSP, including scheduled versus optimized dispatch from the storage tank and energy-only operation versus co-optimization with ancillary services.

  9. Modelling Concentrating Solar Power with Thermal Energy Storage for Integration Studies: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hummon, M.; Denholm, P.; Jorgenson, J.

    2013-10-01

    Concentrating solar power with thermal energy storage (CSP-TES) can provide multiple benefits to the grid, including low marginal cost energy and the ability to levelize load, provide operating reserves, and provide firm capacity. It is challenging to properly value the integration of CSP because of the complicated nature of this technology. Unlike completely dispatchable fossil sources, CSP is a limited energy resource, depending on the hourly and daily supply of solar energy. To optimize the use of this limited energy, CSP-TES must be implemented in a production cost model with multiple decision variables for the operation of the CSP-TES plant. We develop and implement a CSP-TES plant in a production cost model that accurately characterizes the three main components of the plant: solar field, storage tank, and power block. We show the effect of various modelling simplifications on the value of CSP, including scheduled versus optimized dispatch from the storage tank and energy-only operation versus co-optimization with ancillary services.

  10. Misconceptions and biases in German students' perception of multiple energy sources: implications for science education

    NASA Astrophysics Data System (ADS)

    Lee, Roh Pin

    2016-04-01

    Misconceptions and biases in energy perception could influence people's support for developments integral to the success of restructuring a nation's energy system. Science education, in equipping young adults with the cognitive skills and knowledge necessary to navigate in the confusing energy environment, could play a key role in paving the way for informed decision-making. This study examined German students' knowledge of the contribution of diverse energy sources to their nation's energy mix as well as their affective energy responses so as to identify implications for science education. Specifically, the study investigated whether and to what extent students hold mistaken beliefs about the role of multiple energy sources in their nation's energy mix, and assessed how misconceptions could act as self-generated reference points to underpin support/resistance of proposed developments. An in-depth analysis of spontaneous affective associations with five key energy sources also enabled the identification of underlying concerns driving people's energy responses and facilitated an examination of how affective perception, in acting as a heuristic, could lead to biases in energy judgment and decision-making. Finally, subgroup analysis differentiated by education and gender supported insights into a 'two culture' effect on energy perception and the challenge it poses to science education.

  11. Multiple description distributed image coding with side information for mobile wireless transmission

    NASA Astrophysics Data System (ADS)

    Wu, Min; Song, Daewon; Chen, Chang Wen

    2005-03-01

    Multiple description coding (MDC) is a source coding technique that involves coding the source information into multiple descriptions and then transmitting them over different channels in a packet network or error-prone wireless environment, to achieve graceful degradation if parts of the descriptions are lost at the receiver. In this paper, we propose a multiple description distributed wavelet zerotree image coding system for mobile wireless transmission. We provide two innovations to achieve an excellent error resilient capability. First, when MDC is applied to wavelet subband based image coding, it is possible to introduce correlation between the descriptions in each subband. We consider using such correlation, as well as a potentially error-corrupted description, as side information in the decoding, formulating MDC decoding as a Wyner-Ziv decoding problem. If part of the descriptions is lost but their correlation information is still available, the proposed Wyner-Ziv decoder can recover the lost description by using the correlation information and the error-corrupted description as side information. Secondly, within each description, single-bitstream wavelet zerotree coding is very vulnerable to channel errors: the first bit error may cause the decoder to discard all subsequent bits, whether or not they are correctly received. Therefore, we integrate multiple description scalar quantization (MDSQ) with a multiple wavelet tree image coding method to reduce error propagation. We first group wavelet coefficients into multiple trees according to parent-child relationships and then code them separately by the SPIHT algorithm to form multiple bitstreams. Such decomposition reduces error propagation and therefore improves the error correcting capability of the Wyner-Ziv decoder. Experimental results show that the proposed scheme not only exhibits excellent error resilience but also demonstrates graceful degradation with increasing packet loss rate.
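
    The grouping step can be sketched briefly. In the toy example below, coefficients of an 8x8 transform are gathered into parent-child trees (children of (r, c) sit at (2r, 2c)..(2r+1, 2c+1); this glosses over SPIHT's special-cased coarsest band) and whole trees are dealt alternately to two descriptions, so a bit error in one bitstream stays confined to that description's trees.

    ```python
    # Toy grouping of wavelet coefficients into trees, dealt alternately
    # to two descriptions; not the paper's actual tree layout.
    import numpy as np

    coeffs = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 transform
    roots = [(0, 1), (1, 0), (1, 1)]                   # coarsest-band tree roots

    def tree(root, size=8):
        # collect the root plus all descendants inside the size x size grid
        nodes, frontier = [root], [root]
        while frontier:
            frontier = [(2 * r + dr, 2 * c + dc) for (r, c) in frontier
                        for dr in (0, 1) for dc in (0, 1)
                        if 2 * r + dr < size and 2 * c + dc < size]
            nodes += frontier
        return nodes

    descriptions = [[], []]
    for i, root in enumerate(roots):
        descriptions[i % 2].append([coeffs[n] for n in tree(root)])
    print([len(d) for d in descriptions], "trees per description")
    ```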

  12. An integrated data model to estimate spatiotemporal occupancy, abundance, and colonization dynamics.

    PubMed

    Williams, Perry J; Hooten, Mevin B; Womble, Jamie N; Esslinger, George G; Bower, Michael R; Hefley, Trevor J

    2017-02-01

    Ecological invasions and colonizations occur dynamically through space and time. Estimating the distribution and abundance of colonizing species is critical for efficient management or conservation. We describe a statistical framework for simultaneously estimating spatiotemporal occupancy and abundance dynamics of a colonizing species. Our method accounts for several issues that are common when modeling spatiotemporal ecological data including multiple levels of detection probability, multiple data sources, and computational limitations that occur when making fine-scale inference over a large spatiotemporal domain. We apply the model to estimate the colonization dynamics of sea otters (Enhydra lutris) in Glacier Bay, in southeastern Alaska. © 2016 by the Ecological Society of America.

  13. Challenges and trends in magnetic sensor integration with microfluidics for biomedical applications

    NASA Astrophysics Data System (ADS)

    Cardoso, S.; Leitao, D. C.; Dias, T. M.; Valadeiro, J.; Silva, M. D.; Chicharo, A.; Silverio, V.; Gaspar, J.; Freitas, P. P.

    2017-06-01

    Magnetoresistive (MR) sensors have been successfully applied in many technologies, in particular readout electronics and smart systems for multiple signal addressing and readout. When single sensors are used, the requirements relate to spatial resolution and localized field sources. The integration of MR sensors in adaptable media (e.g. flexible, stretchable substrates) offers the possibility to merge the magnetic detection with mechanical functionalities. In addition, the precision of a micrometric needle can benefit greatly from the integration of MR sensors with submicrometric resolution. In this paper, we demonstrate through several detailed examples how advanced MR sensors can be integrated with the systems described above, and also with microfluidic technologies. Here, the challenges of handling liquids over a chip combine with those for miniaturization of microelectronics for MR readout. However, when these are overcome, the result is an integrated system with added functionalities, capable of answering the demand in biomedicine and biochemistry for lab-on-a-chip devices.

  14. Deterministic Integration of Quantum Dots into on-Chip Multimode Interference Beamsplitters Using in Situ Electron Beam Lithography.

    PubMed

    Schnauber, Peter; Schall, Johannes; Bounouar, Samir; Höhne, Theresa; Park, Suk-In; Ryu, Geun-Hwan; Heindel, Tobias; Burger, Sven; Song, Jin-Dong; Rodt, Sven; Reitzenstein, Stephan

    2018-04-11

    The development of multinode quantum optical circuits has attracted great attention in recent years. In particular, interfacing quantum-light sources, gates, and detectors on a single chip is highly desirable for the realization of large networks. In this context, fabrication techniques that enable the deterministic integration of preselected quantum-light emitters into nanophotonic elements play a key role when moving forward to circuits containing multiple emitters. Here, we present the deterministic integration of an InAs quantum dot into a 50/50 multimode interference beamsplitter via in situ electron beam lithography. We demonstrate the combined emitter-gate interface functionality by measuring triggered single-photon emission on-chip with g(2)(0) = 0.13 ± 0.02. Due to its high patterning resolution as well as spectral and spatial control, in situ electron beam lithography allows for integration of preselected quantum emitters into complex photonic systems. Being a scalable single-step approach, it paves the way toward multinode, fully integrated quantum photonic chips.

  15. SPRAI: coupling of radiative feedback and primordial chemistry in moving mesh hydrodynamics

    NASA Astrophysics Data System (ADS)

    Jaura, O.; Glover, S. C. O.; Klessen, R. S.; Paardekooper, J.-P.

    2018-04-01

    In this paper, we introduce a new radiative transfer code SPRAI (Simplex Photon Radiation in the Arepo Implementation) based on the SIMPLEX radiation transfer method. This method, originally used only for post-processing, is now directly integrated into the AREPO code and takes advantage of its adaptive unstructured mesh. Radiated photons are transferred from the sources through the series of Voronoi gas cells within a specific solid angle. From the photon attenuation, we derive corresponding photon fluxes and ionization rates and feed them to a primordial chemistry module. This gives us a self-consistent method for studying dynamical and chemical processes caused by ionizing sources in primordial gas. Since the computational cost of the SIMPLEX method does not scale directly with the number of sources, it is convenient for studying systems such as primordial star-forming haloes that may form multiple ionizing sources.

  16. Data integration for inference about spatial processes: A model-based approach to test and account for data inconsistency

    PubMed Central

    Pedrini, Paolo; Bragalanti, Natalia; Groff, Claudio

    2017-01-01

    Recently-developed methods that integrate multiple data sources arising from the same ecological processes have typically utilized structured data from well-defined sampling protocols (e.g., capture-recapture and telemetry). Despite this new methodological focus, the value of opportunistic data for improving inference about spatial ecological processes is unclear and, perhaps more importantly, no procedures are available to formally test whether parameter estimates are consistent across data sources and whether they are suitable for integration. Using data collected on the reintroduced brown bear population in the Italian Alps, a population of conservation importance, we combined data from three sources: traditional spatial capture-recapture data, telemetry data, and opportunistic data. We developed a fully integrated spatial capture-recapture (SCR) model that included a model-based test for data consistency, first comparing model estimates using different combinations of data and then, by acknowledging data-type differences, evaluating parameter consistency. We demonstrate that opportunistic data lend themselves naturally to integration within the SCR framework and highlight the value of opportunistic data for improving inference about space use and population size. This is particularly relevant in studies of rare or elusive species, where the number of spatial encounters is usually small and where additional observations are of high value. In addition, our results highlight the importance of testing and accounting for inconsistencies in spatial information from structured and unstructured data so as to avoid the risk of spurious or averaged estimates of space use and, consequently, of population size. Our work supports the use of a single modeling framework to combine spatially-referenced data while also accounting for parameter consistency. PMID:28973034

  17. Integrated Data & Analysis in Support of Informed and Transparent Decision Making

    NASA Astrophysics Data System (ADS)

    Guivetchi, K.

    2012-12-01

    The California Water Plan includes a framework for improving water reliability, environmental stewardship, and economic stability through two initiatives - integrated regional water management to make better use of local water sources by integrating multiple aspects of managing water and related resources; and maintaining and improving statewide water management systems. The Water Plan promotes ways to develop a common approach for data standards and for understanding, evaluating, and improving regional and statewide water management systems, and for common ways to evaluate and select from alternative management strategies and projects. The California Water Plan acknowledges that planning for the future is uncertain and that change will continue to occur. It is not possible to know for certain how population growth, land use decisions, water demand patterns, environmental conditions, the climate, and many other factors that affect water use and supply may change by 2050. To anticipate change, our approach to water management and planning for the future needs to consider and quantify uncertainty, risk, and sustainability. There is a critical need for information sharing and information management to support over-arching and long-term water policy decisions that cross-cut multiple programs across many organizations and provide a common and transparent understanding of water problems and solutions. Achieving integrated water management with multiple benefits requires a transparent description of dynamic linkages between water supply, flood management, water quality, land use, environmental water, and many other factors. Water Plan Update 2013 will include an analytical roadmap for improving data, analytical tools, and decision-support to advance integrated water management at statewide and regional scales. It will include recommendations for linking collaborative processes with technical enhancements, providing effective analytical tools, and improving and sharing data and information. Specifically, this includes achieving better integration and consistency with other planning activities; obtaining consensus on quantitative deliverables; building a common conceptual understanding of the water management system; developing common schematics of the water management system; establishing modeling protocols and standards; and improving transparency and exchange of Water Plan information.

  18. Water sources and mixing in riparian wetlands revealed by tracers and geospatial analysis.

    PubMed

    Lessels, Jason S; Tetzlaff, Doerthe; Birkel, Christian; Dick, Jonathan; Soulsby, Chris

    2016-01-01

    Mixing of waters within riparian zones has been identified as an important influence on runoff generation and water quality. Improved understanding of the controls on the spatial and temporal variability of water sources and how they mix in riparian zones is therefore of both fundamental and applied interest. In this study, we combined topographic indices derived from a high-resolution Digital Elevation Model (DEM) with repeated, spatially high-resolution synoptic sampling of multiple tracers to investigate such dynamics of source water mixing. We use geostatistics to estimate concentrations of three different tracers (deuterium, alkalinity, and dissolved organic carbon) across an extended riparian zone in a headwater catchment in NE Scotland, to identify spatial and temporal influences on the mixing of source waters. The various biogeochemical tracers and stable isotopes helped constrain the sources of runoff and their temporal dynamics. Results show that spatial variability in all three tracers was evident in all sampling campaigns, but more pronounced in warmer, drier periods. The extent of mixing areas within the riparian zone reflected strong hydroclimatic controls and showed large degrees of expansion and contraction that were not strongly related to topographic indices. The integrated approach of using multiple tracers, geospatial statistics, and topographic analysis allowed us to classify three main riparian source areas and mixing zones. This study underlines the importance of riparian zones for mixing soil water and groundwater, and introduces a novel approach to quantifying this mixing and assessing its effect on downstream chemistry.
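
    The interpolation step that underlies the source-area mapping can be approximated simply. The sketch below substitutes inverse distance weighting for the study's geostatistical (kriging) estimates of tracer concentrations; coordinates and alkalinity values are hypothetical.

    ```python
    # Inverse-distance-weighting stand-in for geostatistical tracer
    # interpolation; sample locations and values are invented.
    import numpy as np

    xy = np.array([[0.0, 0.0], [50.0, 10.0], [20.0, 40.0], [60.0, 60.0]])
    alkalinity = np.array([120.0, 80.0, 150.0, 60.0])  # values at samples

    def idw(target, points, values, power=2.0):
        d = np.linalg.norm(points - target, axis=1)
        if np.any(d < 1e-9):               # target sits exactly on a sample
            return float(values[np.argmin(d)])
        w = 1.0 / d ** power
        return float(np.sum(w * values) / np.sum(w))

    print(idw(np.array([30.0, 30.0]), xy, alkalinity))
    ```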

  19. Detecting misinformation and knowledge conflicts in relational data

    NASA Astrophysics Data System (ADS)

    Levchuk, Georgiy; Jackobsen, Matthew; Riordan, Brian

    2014-06-01

    Information fusion is required for many mission-critical intelligence analysis tasks. Using knowledge extracted from various sources, including entities, relations, and events, intelligence analysts respond to commanders' information requests, integrate facts into summaries about current situations, augment existing knowledge with inferred information, make predictions about the future, and develop action plans. However, information fusion solutions often fail because of conflicting and redundant knowledge contained in multiple sources. Most knowledge conflicts in the past were due to translation errors and reporter bias, and thus could be managed. Current and future intelligence analysis, especially in denied areas, must deal with open source data processing, where there is a much greater presence of intentional misinformation. In this paper, we describe a model for detecting conflicts in multi-source textual knowledge. Our model is based on constructing semantic graphs representing patterns of multi-source knowledge conflicts and anomalies, and detecting these conflicts by matching pattern graphs against a data graph constructed using soft co-reference between entities and events in multiple sources. The conflict detection process maintains the uncertainty throughout all phases, providing full traceability and enabling incremental updates of the detection results as new knowledge or modifications to previously analyzed information are obtained. Detected conflicts are presented to analysts for further investigation. In an experimental study with the SYNCOIN dataset, our algorithms achieved perfect conflict detection in the ideal situation (no missing data), while producing 82% recall and 90% precision in a realistic noise situation (15% of attributes missing).
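
    Stripped of the graph machinery, the elementary check is: once entities from different reports are co-referenced to the same node, compare their asserted attribute values and flag disagreements. The sketch below illustrates that kernel with invented assertions; the paper's pattern-graph matching and uncertainty propagation go well beyond it.

    ```python
    # Minimal conflict check over co-referenced multi-source assertions.
    from collections import defaultdict

    assertions = [  # (source, entity, attribute, value, confidence) - invented
        ("report_A", "person_17", "location", "Kabul",    0.9),
        ("report_B", "person_17", "location", "Kandahar", 0.8),
        ("report_C", "person_17", "role",     "courier",  0.7),
    ]

    by_key = defaultdict(list)
    for src, ent, attr, val, conf in assertions:
        by_key[(ent, attr)].append((src, val, conf))

    for (ent, attr), claims in by_key.items():
        if len({val for _, val, _ in claims}) > 1:
            print(f"conflict on {ent}.{attr}: {claims}")
    ```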

  20. Bayesian networks improve causal environmental ...

    EPA Pesticide Factsheets

    Rule-based weight of evidence approaches to ecological risk assessment may not account for uncertainties and generally lack probabilistic integration of lines of evidence. Bayesian networks allow causal inferences to be made from evidence by including causal knowledge about the problem, using this knowledge with probabilistic calculus to combine multiple lines of evidence, and minimizing biases in predicting or diagnosing causal relationships. Sources of uncertainty that are too often ignored in conventional weight-of-evidence approaches can be accounted for with Bayesian networks. Specifying and propagating uncertainties improves the ability of models to incorporate the strength of the evidence in the risk management phase of an assessment. Probabilistic inference from a Bayesian network allows evaluation of changes in uncertainty for variables from the evidence. The network structure and probabilistic framework of a Bayesian approach provide advantages over qualitative approaches in weight of evidence for capturing the impacts of multiple sources of quantifiable uncertainty on predictions of ecological risk. Bayesian networks can facilitate the development of evidence-based policy under conditions of uncertainty by incorporating analytical inaccuracies or the implications of imperfect information, structuring and communicating causal issues through qualitative directed graph formulations, and quantitatively comparing the causal power of multiple stressors on value
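
    The probabilistic combination of lines of evidence can be illustrated with a tiny network. The sketch below uses invented probabilities and plain enumeration; it is not the EPA's model, only a minimal demonstration of how Bayes' rule integrates two lines of evidence about a single cause.

      # A minimal sketch (not EPA's model) of combining two lines of evidence
      # with Bayes' rule; all probabilities are made-up illustrative values.
      p_cause = 0.30                       # prior P(stressor is the cause)
      p_e1 = {True: 0.80, False: 0.20}     # P(lab evidence | cause / not cause)
      p_e2 = {True: 0.60, False: 0.30}     # P(field evidence | cause / not cause)

      def posterior(e1_observed: bool, e2_observed: bool) -> float:
          """P(cause | evidence) by enumerating the two states of 'cause'."""
          def lik(cause: bool) -> float:
              l1 = p_e1[cause] if e1_observed else 1 - p_e1[cause]
              l2 = p_e2[cause] if e2_observed else 1 - p_e2[cause]
              return l1 * l2  # evidence assumed conditionally independent given cause
          num = lik(True) * p_cause
          den = num + lik(False) * (1 - p_cause)
          return num / den

      print(posterior(True, True))   # both lines of evidence support causation
      print(posterior(True, False))  # conflicting evidence lowers the posterior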

  1. Exploring uncertainty in the Earth Sciences - the potential field perspective

    NASA Astrophysics Data System (ADS)

    Saltus, R. W.; Blakely, R. J.

    2013-12-01

    Interpretation of gravity and magnetic anomalies is mathematically non-unique because multiple theoretical solutions are possible. The mathematical label of 'non-uniqueness' can lead to the erroneous impression that no single interpretation is better in a geologic sense than any other. The purpose of this talk is to present a practical perspective on the theoretical non-uniqueness of potential field interpretation in geology. There are multiple ways to approach and constrain potential field studies to produce significant, robust, and definitive results. For example, a smooth, bell-shaped gravity profile, in theory, could be caused by an infinite set of physical density bodies, ranging from a deep, compact, circular source to a shallow, smoothly varying, inverted bell-shaped source. In practice, however, we can use independent geologic or geophysical information to limit the range of possible source densities and rule out many of the theoretical solutions. We can further reduce the theoretical uncertainty by careful attention to subtle anomaly details. For example, short-wavelength anomalies are a well-known and theoretically established characteristic of shallow geologic sources. The 'non-uniqueness' of potential field studies is closely related to the more general topic of scientific uncertainty in the Earth sciences and beyond. Nearly all results in the Earth sciences are subject to significant uncertainty because problems are generally addressed with incomplete and imprecise data. The increasing need to combine results from multiple disciplines into integrated solutions in order to address complex global issues requires special attention to the appreciation and communication of uncertainty in geologic interpretation.

  2. Integrating Remote Sensing, Field Observations, and Models to Understand Disturbance and Climate Effects on the Carbon Balance of the West Coast U.S.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, Warren

    2014-07-03

    As an element of NACP research, the proposed investigation is a two-pronged approach that derives and evaluates a regional carbon (C) budget for Oregon, Washington, and California. Objectives are (1) Use multiple data sources, including AmeriFlux data, inventories, and multispectral remote sensing data to investigate trends in carbon storage and exchanges of CO2 and water with variation in climate and disturbance history; (2) Develop and apply regional modeling that relies on these multiple data sources to reduce uncertainty in spatial estimates of carbon storage and NEP, and relative contributions of terrestrial ecosystems and anthropogenic emissions to atmospheric CO2 in the region; (3) Model terrestrial carbon processes across the region, using the Biome-BGC terrestrial ecosystem model, and an atmospheric inverse modeling approach to estimate variation in rate and timing of terrestrial uptake and feedbacks to the atmosphere in response to climate and disturbance.

  3. Integrating Remote Sensing, Field Observations, and Models to Understand Disturbance and Climate Effects on the Carbon Balance of the West Coast U.S., Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beverly E. Law

    2011-10-05

    As an element of NACP research, the proposed investigation is a two-pronged approach that derives and evaluates a regional carbon (C) budget for Oregon, Washington, and California. Objectives are (1) Use multiple data sources, including AmeriFlux data, inventories, and multispectral remote sensing data to investigate trends in carbon storage and exchanges of CO2 and water with variation in climate and disturbance history; (2) Develop and apply regional modeling that relies on these multiple data sources to reduce uncertainty in spatial estimates of carbon storage and NEP, and relative contributions of terrestrial ecosystems and anthropogenic emissions to atmospheric CO2 in the region; (3) Model terrestrial carbon processes across the region, using the Biome-BGC terrestrial ecosystem model, and an atmospheric inverse modeling approach to estimate variation in rate and timing of terrestrial uptake and feedbacks to the atmosphere in response to climate and disturbance.

  4. Flex fuel polygeneration: Integrating renewable natural gas

    NASA Astrophysics Data System (ADS)

    Kieffer, Matthew

    Flex Fuel Polygeneration (FFPG) is the use of multiple primary energy sources for the production of multiple energy carriers to achieve increased market opportunities. FFPG allows for adjustments in energy supply to meet market fluctuations and increases resiliency to contingencies such as weather disruptions, technological changes, and variations in the supply of energy resources. In this study, a FFPG plant is examined that uses a combination of two primary energy sources, natural gas and renewable natural gas (RNG) derived from municipal solid waste (MSW) and livestock manure, and converts them into the energy carriers electricity and fuels through anaerobic digestion (AD), Fischer-Tropsch synthesis (FTS), and gas turbine cycles. Previous techno-economic analyses of conventional energy production plants are combined to obtain equipment and operating costs, and then the 20-year NPVs of the FFPG plant designs are evaluated by static and stochastic simulations. The effects of changing operating parameters are investigated, as well as the effect of the number of anaerobic digestion plants on the 20-year NPV of the FTS and FFPG systems.
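
    The stochastic NPV evaluation can be sketched as a simple Monte Carlo loop. All cash-flow figures, the discount rate, and the price volatility below are invented placeholders rather than the study's inputs.

      # A minimal sketch of a stochastic 20-year NPV evaluation; the capital
      # cost, revenues, and volatility are invented, not the study's numbers.
      import numpy as np

      rng = np.random.default_rng(42)
      capex = 250e6            # assumed up-front capital cost ($)
      discount = 0.08          # assumed discount rate
      years = np.arange(1, 21) # 20-year horizon

      def npv_draw() -> float:
          # Annual revenue follows an assumed lognormal fuel/electricity price path.
          revenue = 60e6 * rng.lognormal(mean=0.0, sigma=0.15, size=years.size)
          opex = 25e6 * np.ones_like(revenue)
          cash = revenue - opex
          return float(np.sum(cash / (1 + discount) ** years) - capex)

      draws = np.array([npv_draw() for _ in range(10_000)])
      print(f"mean NPV: ${draws.mean()/1e6:.1f}M, "
            f"P(NPV < 0) = {(draws < 0).mean():.2%}")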

  5. Charting a Path to Location Intelligence for STD Control.

    PubMed

    Gerber, Todd M; Du, Ping; Armstrong-Brown, Janelle; McNutt, Louise-Anne; Coles, F Bruce

    2009-01-01

    This article describes the New York State Department of Health's GeoDatabase project, which developed new methods and techniques for designing and building a geocoding and mapping data repository for sexually transmitted disease (STD) control. The GeoDatabase development was supported through the Centers for Disease Control and Prevention's Outcome Assessment through Systems of Integrated Surveillance workgroup. The design and operation of the GeoDatabase relied upon commercial off-the-shelf tools that other public health programs may also use for disease-control systems. This article provides a blueprint of the structure and software used to build the GeoDatabase and integrate location data from multiple data sources into the everyday activities of STD control programs.

  6. Shared Medical Imaging Repositories.

    PubMed

    Lebre, Rui; Bastião, Luís; Costa, Carlos

    2018-01-01

    This article describes the implementation of a solution for integrating an ownership concept and access control over medical imaging resources, making possible the centralization of multiple repository instances. The proposed architecture allows the association of permissions with repository resources and the delegation of rights to third entities. It includes a programmatic interface for management of the proposed services, made available through web services, with the ability to create, read, update and remove all components resulting from the architecture. The resulting work is a role-based access control mechanism that was integrated with the Dicoogle Open-Source Project. The solution has several application scenarios, such as collaborative platforms for research and tele-radiology services deployed in the cloud.
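
    The access-control idea can be sketched in a few lines. The following is a minimal role-based check under assumed names (Resource, grants); it illustrates ownership plus delegated rights, not Dicoogle's actual API.

      # A minimal sketch of role-based access control over repository resources,
      # in the spirit of the mechanism described above (not Dicoogle's API).
      from dataclasses import dataclass, field

      @dataclass
      class Resource:
          name: str
          owner: str
          # permissions delegated to roles, e.g. {"researcher": {"read"}}
          grants: dict = field(default_factory=dict)

      def can(user_roles: set, user: str, resource: Resource, action: str) -> bool:
          if user == resource.owner:
              return True  # owners hold all rights on their resources
          return any(action in resource.grants.get(role, set()) for role in user_roles)

      study = Resource("CT-study-001", owner="alice",
                       grants={"researcher": {"read"}, "admin": {"read", "update"}})
      print(can({"researcher"}, "bob", study, "read"))    # True: delegated right
      print(can({"researcher"}, "bob", study, "update"))  # False: not granted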

  7. Open-Source, Distributed Computational Environment for Virtual Materials Exploration

    DTIC Science & Technology

    2015-01-01

    compromising structural integrity. For example, advanced designs could specify advanced materials processing techniques such as heat treatments in specific...orchestration of execution of multiple standalone codes at varying length scales will need advanced high-performance computing (HPC) integration in...possible hooks that could be used to coordinate larger workflows spanning tools developed by different groups. The high-level approach explored

  8. Registration and rectification needs of geology

    NASA Technical Reports Server (NTRS)

    Chavez, P. S., Jr.

    1982-01-01

    Geologic applications of remotely sensed imagery encompass five areas of interest: (1) enhancement and analysis of individual images; (2) work with small-area mosaics of imagery which have been map-projection rectified to individual quadrangles; (3) development of large-area mosaics of multiple images for several counties or states; (4) registration of multitemporal images; and (5) integration of data from several sensors and map sources. Examples of each of these types of applications are summarized.

  9. Power conversion distribution system using a resonant high-frequency AC link

    NASA Technical Reports Server (NTRS)

    Sood, P. K.; Lipo, T. A.

    1986-01-01

    Static power conversion systems based on a resonant high-frequency (HF) link offer a significant reduction in the size and weight of the equipment over those achieved with conventional approaches, especially when multiple sources and loads are to be integrated. Faster system response and the absence of audible noise are the other principal characteristics of such systems. A conversion configuration based on a HF link which is suitable for applications requiring distributed power is proposed.

  10. ToxPi GUI: an interactive visualization tool for transparent integration of data from diverse sources of evidence.

    PubMed

    Reif, David M; Sypa, Myroslav; Lock, Eric F; Wright, Fred A; Wilson, Ander; Cathey, Tommy; Judson, Richard R; Rusyn, Ivan

    2013-02-01

    Scientists and regulators are often faced with complex decisions, where use of scarce resources must be prioritized using collections of diverse information. The Toxicological Prioritization Index (ToxPi™) was developed to enable integration of multiple sources of evidence on exposure and/or safety, transformed into transparent visual rankings to facilitate decision making. The rankings and associated graphical profiles can be used to prioritize resources in various decision contexts, such as testing chemical toxicity or assessing similarity of predicted compound bioactivity profiles. The amount and types of information available to decision makers are increasing exponentially, while the complex decisions must rely on specialized domain knowledge across multiple criteria of varying importance. Thus, the ToxPi bridges a gap, combining rigorous aggregation of evidence with ease of communication to stakeholders. An interactive ToxPi graphical user interface (GUI) application has been implemented to allow straightforward decision support across a variety of decision-making contexts in environmental health. The GUI allows users to easily import and recombine data, then analyze, visualize, highlight, export and communicate ToxPi results. It also provides a statistical metric of stability for both individual ToxPi scores and relative prioritized ranks. The ToxPi GUI application, complete user manual and example data files are freely available from http://comptox.unc.edu/toxpi.php.
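
    The kind of weighted aggregation behind a ToxPi score can be sketched as follows; the slice values and weights are invented, and this is an illustration of weighted-slice ranking rather than the tool's exact algorithm.

      # A minimal sketch of weighted slice aggregation for prioritization;
      # weights and data are illustrative, not the ToxPi GUI's algorithm.
      import numpy as np

      # rows = chemicals, columns = evidence "slices" already scaled to [0, 1]
      slices = np.array([[0.9, 0.2, 0.7],
                         [0.4, 0.8, 0.1],
                         [0.6, 0.6, 0.6]])
      weights = np.array([2.0, 1.0, 1.0])        # relative importance of each slice
      weights = weights / weights.sum()

      scores = slices @ weights                  # one composite score per chemical
      ranks = np.argsort(-scores)                # prioritize highest scores first
      for r in ranks:
          print(f"chemical {r}: score {scores[r]:.3f}")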

  11. Fast two-stream method for computing diurnal-mean actinic flux in vertically inhomogeneous atmospheres

    NASA Technical Reports Server (NTRS)

    Filyushkin, V. V.; Madronich, S.; Brasseur, G. P.; Petropavlovskikh, I. V.

    1994-01-01

    Based on a derivation of the two-stream daytime-mean equations of radiative flux transfer, a method for computing daytime-mean actinic fluxes in an absorbing and scattering, vertically inhomogeneous atmosphere is suggested. The method applies direct daytime integration of the particular solutions of the two-stream approximations or of the source functions. It is valid for any duration of the averaging period. The merit of the method is that the multiple scattering computation is carried out only once for the whole averaging period. It can be implemented with a number of widely used two-stream approximations. The method agrees with the results obtained with 200-point multiple scattering calculations. The method was also tested in runs with a 1-km cloud layer with an optical depth of 10, as well as with an aerosol background. Comparison of the results obtained for a cloud subdivided into 20 layers with those obtained for a one-layer cloud with the same optical parameters showed that direct integration of particular solutions possesses 'analytical' accuracy. In the case of source function interpolation, the actinic fluxes calculated above the one-layer and 20-layer clouds agreed within 1%-1.5%, while below the cloud they may differ by up to 5% (in the worst case). Ways of enhancing the accuracy (in a 'two-stream sense') and computational efficiency of the method are discussed.

  12. On the efficiency of multiple media family planning promotion campaigns.

    PubMed

    1999-01-01

    This article presents the results of a study conducted by Miriam N. Jato on the impact of multimedia family planning communication campaigns on contraceptive use. The study was conducted in Tanzania, where a government program integrated family planning into maternal and child health care services in 1988, while in 1992 a private-sector condom-marketing program began and a national population policy for wider distribution of family planning information was adopted by the government. In less than 3 years, contraceptive use was found to have doubled to a level of 11.3%, and the total fertility rate declined from an average of 6.3 to 5.8 live births. The results of the study indicate that exposure to media sources of family planning messages was directly associated with increased contraceptive use. Moreover, the use of modern methods increased among women who were exposed to a greater number of media sources, as did discussion of family planning with spouses and attendance at health facilities. The programmatic implications of the results confirm that utilization of multiple media channels in the promotion of family planning and other reproductive issues must be continued, with emphasis on media sources that reach large audiences.

  13. Integrated Multidisciplinary Optimization Objects

    NASA Technical Reports Server (NTRS)

    Alston, Katherine

    2014-01-01

    OpenMDAO is an open-source MDAO framework. It is used to develop an integrated analysis and design environment for engineering challenges. This Phase II project integrated additional modules and design tools into OpenMDAO to perform discipline-specific analysis across multiple flight regimes at varying levels of fidelity. It also showcased a refined system architecture that allows the system to be less customized to a specific configuration (i.e., system and configuration separation). By delivering a capable and validated MDAO system along with a set of example applications to be used as a template for future users, this work greatly expands NASA's high-fidelity, physics-based MDAO capabilities and enables the design of revolutionary vehicles in a cost-effective manner. This proposed work complements M4 Engineering's expertise in developing modeling and simulation toolsets that solve relevant subsonic, supersonic, and hypersonic demonstration applications.

  14. Integrated NTP Vehicle Radiation Design

    NASA Technical Reports Server (NTRS)

    Caffrey, Jarvis A.; Rodriquez, Mitchell A.

    2018-01-01

    The development of a nuclear thermal propulsion stage requires consideration for radiation emitted from the nuclear reactor core. Applying shielding mass is an effective mitigating solution, but a better alternative is to incorporate some mitigation strategies into the propulsion stage and crew habitat. In this way, the required additional mass is minimized and the mass that must be applied may in some cases be able to serve multiple purposes. Strategies for crew compartment shielding are discussed that reduce dose from both engine and cosmic sources, and in some cases may also serve to reduce life support risks by permitting abundant water reserves. Early consideration for integrated mitigation solutions in a crewed nuclear thermal propulsion (NTP) vehicle will enable reduced radiation burden from both cosmic and nuclear sources, improved thrust-to-weight ratio or payload capacity by reducing 'dead mass' of shielding, and generally support a more robust risk posture for a NTP-powered Mars mission by permitting shorter trip times and increased water reserves.

  15. Integrating Multiple On-line Knowledge Bases for Disease-Lab Test Relation Extraction.

    PubMed

    Zhang, Yaoyun; Soysal, Ergin; Moon, Sungrim; Wang, Jingqi; Tao, Cui; Xu, Hua

    2015-01-01

    A computable knowledge base containing relations between diseases and lab tests would be a great resource for many biomedical informatics applications. This paper describes our initial step towards establishing a comprehensive knowledge base of disease-lab test relations utilizing three public on-line resources. LabTestsOnline, MedlinePlus and Wikipedia are integrated to create a freely available, computable disease-lab test knowledge base. Disease and lab test concepts are identified using MetaMap, and relations between diseases and lab tests are determined based on source-specific rules. Experimental results demonstrate a high precision for relation extraction, with Wikipedia achieving the highest precision of 87%. Combining the three sources achieved a recall of 51.40%, when compared with a subset of disease-lab test relations extracted from a reference book. Moreover, we found additional disease-lab test relations from on-line resources, indicating they are complementary to existing reference books for building a comprehensive disease and lab test relation knowledge base.

  16. Data mining: childhood injury control and beyond.

    PubMed

    Tepas, Joseph J

    2009-08-01

    Data mining is defined as the automatic extraction of useful, often previously unknown information from large databases or data sets. It has become a major part of modern life and is extensively used in industry, banking, government, and health care delivery. The process requires a data collection system that integrates input from multiple sources containing critical elements that define outcomes of interest. Appropriately designed data mining processes identify and adjust for confounding variables. The statistical modeling used to manipulate accumulated data may involve any number of techniques. As predicted results are periodically analyzed against those observed, the model is consistently refined to optimize precision and accuracy. Whether applying integrated sources of clinical data to inferential probabilistic prediction of risk of ventilator-associated pneumonia or population surveillance for signs of bioterrorism, it is essential that modern health care providers have at least a rudimentary understanding of what the concept means, how it basically works, and what it means to current and future health care.

  17. Electrically driven quantum light emission in electromechanically tuneable photonic crystal cavities

    NASA Astrophysics Data System (ADS)

    Petruzzella, M.; Pagliano, F. M.; Zobenica, Ž.; Birindelli, S.; Cotrufo, M.; van Otten, F. W. M.; van der Heijden, R. W.; Fiore, A.

    2017-12-01

    A single quantum dot deterministically coupled to a photonic crystal environment constitutes an indispensable elementary unit to both generate and manipulate single photons in next-generation quantum photonic circuits. To date, the scaling of the number of these quantum nodes on a fully integrated chip has been prevented by the use of optical pumping strategies that require a bulky off-chip laser, along with the lack of methods to control the energies of nano-cavities and emitters. Here, we concurrently overcome these limitations by demonstrating electrical injection of single excitonic lines within a nano-electro-mechanically tuneable photonic crystal cavity. When an electrically driven dot line is brought into resonance with a photonic crystal mode, its emission rate is enhanced. Anti-bunching experiments reveal the quantum nature of these on-demand sources emitting in the telecom range. These results represent an important step forward in the realization of integrated quantum optics experiments featuring multiple electrically triggered Purcell-enhanced single-photon sources embedded in a reconfigurable semiconductor architecture.

  18. ADEpedia: a scalable and standardized knowledge base of Adverse Drug Events using semantic web technology.

    PubMed

    Jiang, Guoqian; Solbrig, Harold R; Chute, Christopher G

    2011-01-01

    A source of semantically coded Adverse Drug Event (ADE) data can be useful for identifying common phenotypes related to ADEs. We proposed a comprehensive framework for building a standardized ADE knowledge base (called ADEpedia) by combining an ontology-based approach with semantic web technology. The framework comprises four primary modules: 1) an XML2RDF transformation module; 2) a data normalization module based on the NCBO Open Biomedical Annotator; 3) an RDF-store-based persistence module; and 4) a front-end module based on a Semantic Wiki for review and curation. A prototype is successfully implemented to demonstrate the capability of the system to integrate multiple drug data and ontology resources and open web services for ADE data standardization. A preliminary evaluation is performed to demonstrate the usefulness of the system, including the performance of the NCBO annotator. In conclusion, semantic web technology provides a highly scalable framework for ADE data source integration and standard query services.
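
    The XML2RDF step can be sketched with rdflib. The element names, namespace URI, and predicate below are invented for illustration; this is not the ADEpedia transformation module itself.

      # A minimal sketch of an XML-to-RDF transformation step, assuming a toy
      # ADE report format; URIs and element names are invented, and this is
      # not the ADEpedia pipeline itself.
      import xml.etree.ElementTree as ET
      from rdflib import Graph, Literal, Namespace, URIRef

      xml_doc = """<reports>
        <report drug="D001" event="rash"/>
        <report drug="D002" event="nausea"/>
      </reports>"""

      ADE = Namespace("http://example.org/ade#")
      g = Graph()
      g.bind("ade", ADE)

      for report in ET.fromstring(xml_doc).findall("report"):
          drug = URIRef(ADE + report.get("drug"))
          g.add((drug, ADE.hasAdverseEvent, Literal(report.get("event"))))

      print(g.serialize(format="turtle"))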

  19. Integrated NTP Vehicle Radiation Design

    NASA Technical Reports Server (NTRS)

    Caffrey, Jarvis; Rodriquez, Mitchell

    2018-01-01

    The development of a nuclear thermal propulsion stage requires consideration for radiation emitted from the nuclear reactor core. Applying shielding mass is an effective mitigating solution, but a better alternative is to incorporate some mitigation strategies into the propulsion stage and crew habitat. In this way, the required additional mass is minimized and the mass that must be applied may in some cases be able to serve multiple purposes. Strategies for crew compartment shielding are discussed that reduce dose from both engine and cosmic sources, and in some cases may also serve to reduce life support risks by permitting abundant water reserves. Early consideration for integrated mitigation solutions in a crewed nuclear thermal propulsion (NTP) vehicle will enable reduced radiation burden from both cosmic and nuclear sources, improved thrust-to-weight ratio or payload capacity by reducing 'dead mass' of shielding, and generally support a more robust risk posture for a NTP-powered Mars mission by permitting shorter trip times and increased water reserves.

  20. Combined mining: discovering informative knowledge in complex data.

    PubMed

    Cao, Longbing; Zhang, Huaifeng; Zhao, Yanchang; Luo, Dan; Zhang, Chengqi

    2011-06-01

    Enterprise data mining applications often involve complex data such as multiple large heterogeneous data sources, user preferences, and business impact. In such situations, a single method or one-step mining is often limited in discovering informative knowledge. It would also be very time- and space-consuming, if not impossible, to join relevant large data sources for mining patterns consisting of multiple aspects of information. It is crucial to develop effective approaches for mining patterns that combine the necessary information from multiple relevant business lines, catering for real business settings and decision-making actions rather than just providing a single line of patterns. Recent years have seen increasing efforts on mining more informative patterns, e.g., integrating frequent pattern mining with classification to generate frequent-pattern-based classifiers. Rather than presenting a specific algorithm, this paper builds on our existing work and proposes combined mining as a general approach to mining for informative patterns that combine components from either multiple data sets or multiple features or by multiple methods on demand. We summarize general frameworks, paradigms, and basic processes for multifeature combined mining, multisource combined mining, and multimethod combined mining. Novel types of combined patterns, such as incremental cluster patterns, can result from such frameworks, and cannot be directly produced by the existing methods. A set of real-world case studies has been conducted to test the frameworks, with some of them briefed in this paper. They identify combined patterns for informing government debt prevention and improving government service objectives, which shows the flexibility and instantiation capability of combined mining in discovering informative knowledge in complex data.

  1. Warehousing re-annotated cancer genes for biomarker meta-analysis.

    PubMed

    Orsini, M; Travaglione, A; Capobianco, E

    2013-07-01

    Translational research in cancer genomics assigns a fundamental role to bioinformatics in support of candidate gene prioritization with regard to both biomarker discovery and target identification for drug development. Efforts in both such directions rely on the existence and constant update of large repositories of gene expression data and omics records obtained from a variety of experiments. Users who interactively interrogate such repositories may have problems in retrieving sample fields that present limited associated information, due for instance to incomplete entries or sometimes unusable files. Cancer-specific data sources present similar problems. Given that source integration usually improves data quality, one of the objectives is keeping the computational complexity sufficiently low to allow an optimal assimilation and mining of all the information. In particular, the scope of integrating intraomics data can be to improve the exploration of gene co-expression landscapes, while the scope of integrating interomics sources can be that of establishing genotype-phenotype associations. Both integrations are relevant to cancer biomarker meta-analysis, as the proposed study demonstrates. Our approach is based on re-annotating cancer-specific data available at the EBI's ArrayExpress repository and building a data warehouse aimed at biomarker discovery and validation studies. Cancer genes are organized by tissue, with biomedical and clinical evidence combined to increase the reproducibility and consistency of results. For better comparative evaluation, multiple queries have been designed to efficiently address all types of experiments and platforms, and allow for retrieval of sample-related information, such as cell line, disease state and clinical aspects. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  2. Design-based online teacher professional development to introduce integration of STEM in Pakistan

    NASA Astrophysics Data System (ADS)

    Anwar, Tasneem

    In today's global society, where innovations spread rapidly, the escalating focus on science, technology, engineering and mathematics (STEM) has quickly intensified in the United States, East Asia and much of Western Europe. Our ever-changing, increasingly global society faces many multidisciplinary problems, and many of the solutions require the integration of multiple STEM concepts. Thus, there is a critical need to explore the integration of STEM subjects in international education contexts. This dissertation study examined the integration of STEM in the unique context of Pakistan. The study used a three-phase design-based methodological framework derived from McKenney and Reeves (2012) to explore the development of a STEM-focused online teacher professional development (oTPD-STEM) program and to identify the design features that facilitate teacher learning. The oTPD-STEM program was designed to facilitate eight Pakistani elementary school teachers' exploration of the new idea of STEM integration through both practical and theoretical considerations. This design-based study employed inductive analysis (Strauss and Corbin, 1998) to analyze multiple data sources: interviews, STEM perception responses, reflective learning team conversations, pre-post surveys and artifacts produced in oTPD-STEM. Findings of this study are presented as: (1) design-based decisions for oTPD-STEM, and (2) evolution in understanding of STEM, illustrated by the participant teachers' STEM model for the Pakistani context. This study advocates for the potential of school-wide oTPD for interdisciplinary collaboration through support for learner-centered practices.

  3. Metadata management for high content screening in OMERO

    PubMed Central

    Li, Simon; Besson, Sébastien; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Lindner, Dominik; Linkert, Melissa; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Allan, Chris; Burel, Jean-Marie; Moore, Josh; Swedlow, Jason R.

    2016-01-01

    High content screening (HCS) experiments create a classic data management challenge—multiple, large sets of heterogeneous structured and unstructured data, that must be integrated and linked to produce a set of “final” results. These different data include images, reagents, protocols, analytic output, and phenotypes, all of which must be stored, linked and made accessible for users, scientists, collaborators and where appropriate the wider community. The OME Consortium has built several open source tools for managing, linking and sharing these different types of data. The OME Data Model is a metadata specification that supports the image data and metadata recorded in HCS experiments. Bio-Formats is a Java library that reads recorded image data and metadata and includes support for several HCS screening systems. OMERO is an enterprise data management application that integrates image data, experimental and analytic metadata and makes them accessible for visualization, mining, sharing and downstream analysis. We discuss how Bio-Formats and OMERO handle these different data types, and how they can be used to integrate, link and share HCS experiments in facilities and public data repositories. OME specifications and software are open source and are available at https://www.openmicroscopy.org. PMID:26476368

  4. Metadata management for high content screening in OMERO.

    PubMed

    Li, Simon; Besson, Sébastien; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Lindner, Dominik; Linkert, Melissa; Moore, William J; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Allan, Chris; Burel, Jean-Marie; Moore, Josh; Swedlow, Jason R

    2016-03-01

    High content screening (HCS) experiments create a classic data management challenge: multiple, large sets of heterogeneous structured and unstructured data, that must be integrated and linked to produce a set of "final" results. These different data include images, reagents, protocols, analytic output, and phenotypes, all of which must be stored, linked and made accessible for users, scientists, collaborators and where appropriate the wider community. The OME Consortium has built several open source tools for managing, linking and sharing these different types of data. The OME Data Model is a metadata specification that supports the image data and metadata recorded in HCS experiments. Bio-Formats is a Java library that reads recorded image data and metadata and includes support for several HCS screening systems. OMERO is an enterprise data management application that integrates image data, experimental and analytic metadata and makes them accessible for visualization, mining, sharing and downstream analysis. We discuss how Bio-Formats and OMERO handle these different data types, and how they can be used to integrate, link and share HCS experiments in facilities and public data repositories. OME specifications and software are open source and are available at https://www.openmicroscopy.org. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  5. The Human Physiome: how standards, software and innovative service infrastructures are providing the building blocks to make it achievable

    PubMed Central

    2016-01-01

    Reconstructing and understanding the Human Physiome virtually is a complex mathematical problem, and a highly demanding computational challenge. Mathematical models spanning from the molecular level through to whole populations of individuals must be integrated, then personalized. This requires interoperability with multiple disparate and geographically separated data sources, and myriad computational software tools. Extracting and producing knowledge from such sources, even when the databases and software are readily available, is a challenging task. Despite the difficulties, researchers must frequently perform these tasks so that available knowledge can be continually integrated into the common framework required to realize the Human Physiome. Software and infrastructures that support the communities that generate these, together with their underlying standards to format, describe and interlink the corresponding data and computer models, are pivotal to the Human Physiome being realized. They provide the foundations for integrating, exchanging and re-using data and models efficiently, and correctly, while also supporting the dissemination of growing knowledge in these forms. In this paper, we explore the standards, software tooling, repositories and infrastructures that support this work, and detail what makes them vital to realizing the Human Physiome. PMID:27051515

  6. A Location Method Using Sensor Arrays for Continuous Gas Leakage in Integrally Stiffened Plates Based on the Acoustic Characteristics of the Stiffener

    PubMed Central

    Bian, Xu; Li, Yibo; Feng, Hao; Wang, Jiaqiang; Qi, Lei; Jin, Shijiu

    2015-01-01

    This paper proposes a continuous leakage location method based on an ultrasonic array sensor, which is specific to continuous gas leakage in a pressure container with an integral stiffener. This method collects the ultrasonic signals generated from the leakage hole through a piezoelectric ultrasonic sensor array, and analyzes the space-time correlation of every collected signal in the array. Meanwhile, it combines the method of frequency compensation and superposition in the time domain (SITD), based on the acoustic characteristics of the stiffener, to obtain a high-accuracy location result on the stiffener wall. According to the experimental results, the method successfully solves the orientation problem concerning continuous ultrasonic signals generated from leakage sources, and acquires high-accuracy location information on the leakage source using a combination of multiple sets of orienting results. The mean value of the location absolute error is 13.51 mm on a one-square-meter plate with an integral stiffener (4 mm width; 20 mm height; 197 mm spacing), and the maximum location absolute error is generally within a ±25 mm interval. PMID:26404316
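
    A common building block for such array-based localization is estimating the inter-sensor time delay by cross-correlation. The sketch below uses a synthetic burst and invented sampling parameters; it is a generic delay estimator, not the paper's SITD method.

      # A minimal sketch of time-delay estimation between two array sensors via
      # cross-correlation: a common building block for array localization, not
      # the paper's SITD method itself.
      import numpy as np

      fs = 1_000_000                    # assumed 1 MHz sampling rate
      t = np.arange(2048) / fs
      rng = np.random.default_rng(0)
      burst = np.exp(-((t - 3e-4) / 5e-5) ** 2) * np.sin(2 * np.pi * 150e3 * t)

      true_delay = 25                   # samples between sensor arrivals
      s1 = burst + 0.05 * rng.standard_normal(t.size)
      s2 = np.roll(burst, true_delay) + 0.05 * rng.standard_normal(t.size)

      xc = np.correlate(s2, s1, mode="full")
      lag = int(np.argmax(xc)) - (t.size - 1)   # lag in samples
      print(f"estimated delay: {lag} samples ({lag / fs * 1e6:.1f} us)")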

  7. The Human Physiome: how standards, software and innovative service infrastructures are providing the building blocks to make it achievable.

    PubMed

    Nickerson, David; Atalag, Koray; de Bono, Bernard; Geiger, Jörg; Goble, Carole; Hollmann, Susanne; Lonien, Joachim; Müller, Wolfgang; Regierer, Babette; Stanford, Natalie J; Golebiewski, Martin; Hunter, Peter

    2016-04-06

    Reconstructing and understanding the Human Physiome virtually is a complex mathematical problem, and a highly demanding computational challenge. Mathematical models spanning from the molecular level through to whole populations of individuals must be integrated, then personalized. This requires interoperability with multiple disparate and geographically separated data sources, and myriad computational software tools. Extracting and producing knowledge from such sources, even when the databases and software are readily available, is a challenging task. Despite the difficulties, researchers must frequently perform these tasks so that available knowledge can be continually integrated into the common framework required to realize the Human Physiome. Software and infrastructures that support the communities that generate these, together with their underlying standards to format, describe and interlink the corresponding data and computer models, are pivotal to the Human Physiome being realized. They provide the foundations for integrating, exchanging and re-using data and models efficiently, and correctly, while also supporting the dissemination of growing knowledge in these forms. In this paper, we explore the standards, software tooling, repositories and infrastructures that support this work, and detail what makes them vital to realizing the Human Physiome.

  8. Landsat-8 Operational Land Imager On-Orbit Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Markham, Brian L.; Barsi, Julia A.

    2017-01-01

    The Operational Land Imager (OLI), the VIS/NIR/SWIR sensor on Landsat-8, has been successfully acquiring Earth imagery for more than four years. The OLI incorporates two on-board radiometric calibration systems, one diffuser based and one lamp based, each with multiple sources. For each system, one source is treated as primary and used frequently, and the other source(s) are used less frequently to assist in tracking any degradation in the primary sources. In addition, via a spacecraft maneuver, the OLI instrument views the moon once a lunar cycle (approx. 29 days). The integrated lunar irradiances from these acquisitions are compared to the output of a lunar irradiance model. The results from all these techniques, combined with cross calibrations with other sensors and ground-based vicarious measurements, are used to monitor the OLI's stability and correct for any changes observed. To date, the various techniques have only detected significant changes in the shortest-wavelength OLI band, centered at 443 nm, and these are currently being adjusted for in the operational processing.

  9. A systematic examination of a random sampling strategy for source apportionment calculations.

    PubMed

    Andersson, August

    2011-12-15

    Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as the atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is to set up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such a methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. The method is also compared to a numerical integration solution for a two-source situation where source variability is also included. A general observation from this examination is that the variability of the source profiles affects not only the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
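
    The random sampling strategy can be sketched for the N+1 = 3 source, N = 2 marker case: draw source profiles from their assumed distributions, solve the linear system for the fractions, and keep the physical solutions. The endmember means, variability, and mixture values below are invented.

      # A minimal sketch of the random-sampling idea for a 3-source, 2-marker
      # mixing problem; all endmember values are invented for illustration.
      import numpy as np

      rng = np.random.default_rng(1)
      # Marker means (rows: marker 1, marker 2) for the three sources.
      means = np.array([[5.0, 1.0, 3.0],
                        [0.2, 0.9, 0.5]])
      sigma = 0.1 * means                      # assumed source variability
      mix = np.array([2.8, 0.57, 1.0])         # observed markers + unit-sum constant

      fractions = []
      for _ in range(20_000):
          # Draw one realization of each source profile, then solve for fractions.
          A = np.vstack([rng.normal(means, sigma), np.ones(3)])
          f = np.linalg.solve(A, mix)
          if np.all(f >= 0) and np.all(f <= 1):  # keep only physical solutions
              fractions.append(f)

      fractions = np.array(fractions)
      print("median fractions:", np.median(fractions, axis=0))
      print("2.5-97.5% bounds:", np.percentile(fractions, [2.5, 97.5], axis=0))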

  10. Baking a mass-spectrometry data PIE with McMC and simulated annealing: predicting protein post-translational modifications from integrated top-down and bottom-up data.

    PubMed

    Jefferys, Stuart R; Giddings, Morgan C

    2011-03-15

    Post-translational modifications are vital to the function of proteins, but are hard to study, especially since several modified isoforms of a protein may be present simultaneously. Mass spectrometers are a great tool for investigating modified proteins, but the data they provide is often incomplete, ambiguous and difficult to interpret. Combining data from multiple experimental techniques, especially bottom-up and top-down mass spectrometry, provides complementary information. When integrated with background knowledge this allows a human expert to interpret what modifications are present and where on a protein they are located. However, the process is arduous and for high-throughput applications needs to be automated. This article explores a data integration methodology based on Markov chain Monte Carlo and simulated annealing. Our software, the Protein Inference Engine (the PIE), applies these algorithms using a modular approach, allowing multiple types of data to be considered simultaneously and for new data types to be added as needed. Even for complicated data representing multiple modifications and several isoforms, the PIE generates accurate modification predictions, including location. When applied to experimental data collected on the L7/L12 ribosomal protein, the PIE was able to make predictions consistent with manual interpretation for several different L7/L12 isoforms using a combination of bottom-up data with experimentally identified intact masses. Software, demo projects and source code can be downloaded from http://pie.giddingslab.org/
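
    The simulated-annealing component can be sketched as a search over candidate modification placements whose total mass should match an observed intact mass. The masses, sites, and cooling schedule below are invented, and the cost function is far simpler than the PIE's actual scoring.

      # A minimal sketch of the simulated-annealing idea: search over candidate
      # modification placements so the implied intact mass matches an observed
      # mass. Masses and sites are invented; this is not the PIE's scoring.
      import math, random

      random.seed(7)
      base_mass = 12_164.0                         # assumed unmodified protein mass
      site_deltas = [42.01, 42.01, 79.97, 14.02]   # acetyl, acetyl, phospho, methyl
      observed = 12_286.0                          # assumed measured intact mass

      def cost(state):
          mass = base_mass + sum(d for d, on in zip(site_deltas, state) if on)
          return abs(mass - observed)

      state = [False] * len(site_deltas)
      temp = 50.0
      for step in range(5_000):
          i = random.randrange(len(state))         # propose toggling one site
          cand = state.copy()
          cand[i] = not cand[i]
          delta = cost(cand) - cost(state)
          if delta < 0 or random.random() < math.exp(-delta / temp):
              state = cand                         # accept better / sometimes worse
          temp *= 0.999                            # cool the temperature schedule

      print("predicted modified sites:", [i for i, on in enumerate(state) if on],
            "residual mass error:", round(cost(state), 2))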

  11. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    NASA Astrophysics Data System (ADS)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
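
    The model class analyzed above can be sketched as a rate-based ring network with an adaptation variable. The parameters below are illustrative rather than those of the paper; the sketch only shows the qualitative ingredients (ring connectivity, external input, SFA feedback).

      # A minimal rate-model sketch of a ring network with spike-frequency
      # adaptation (SFA); parameters are illustrative, not the paper's.
      import numpy as np

      N = 64
      theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
      # Ring connectivity: local excitation, broad inhibition.
      W = 0.9 * np.cos(theta[:, None] - theta[None, :]) / N - 0.2 / N

      tau_r, tau_a, g_sfa, dt = 10.0, 200.0, 0.5, 0.5   # ms; adaptation gain
      r = np.zeros(N)                                    # firing rates
      a = np.zeros(N)                                    # adaptation variable

      stim = np.exp(np.cos(theta - np.pi))               # external input at pi
      for step in range(4000):
          inp = W @ r + stim - g_sfa * a
          r += dt / tau_r * (-r + np.maximum(inp, 0.0))  # rectified rate dynamics
          a += dt / tau_a * (-a + r)                     # SFA tracks recent activity

      print("peak of activity bump at theta =", theta[np.argmax(r)])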

  12. Photochemical grid model implementation and application of ...

    EPA Pesticide Factsheets

    For the purposes of developing optimal emissions control strategies, efficient approaches are needed to identify the major sources or groups of sources that contribute to elevated ozone (O3) concentrations. Source-based apportionment techniques implemented in photochemical grid models track sources through the physical and chemical processes important to the formation and transport of air pollutants. Photochemical model source apportionment has been used to track source impacts of specific sources, groups of sources (sectors), sources in specific geographic areas, and stratospheric and lateral boundary inflow on O3. The implementation and application of a source apportionment technique for O3 and its precursors, nitrogen oxides (NOx) and volatile organic compounds (VOCs), for the Community Multiscale Air Quality (CMAQ) model are described here. The Integrated Source Apportionment Method (ISAM) O3 approach is a hybrid of source apportionment and source sensitivity in that O3 production is attributed to precursor sources based on O3 formation regime (e.g., for a NOx-sensitive regime, O3 is apportioned to participating NOx emissions). This implementation is illustrated by tracking multiple emissions source sectors and lateral boundary inflow. NOx, VOC, and O3 attribution to tracked sectors in the application are consistent with spatial and temporal patterns of precursor emissions. The O3 ISAM implementation is further evaluated through comparisons of apportioned am
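
    The regime-dependent attribution rule can be illustrated with a toy calculation; the tracked sectors and numbers below are invented, and this is not the CMAQ/ISAM implementation.

      # A toy sketch of regime-dependent O3 attribution: in a NOx-sensitive
      # grid cell, O3 production is apportioned to tracked NOx tracers in
      # proportion to their share; in a VOC-sensitive cell, to VOC tracers.
      def apportion_o3(prod_o3, nox_tracers, voc_tracers, nox_sensitive):
          pool = nox_tracers if nox_sensitive else voc_tracers
          total = sum(pool.values())
          return {src: prod_o3 * amount / total for src, amount in pool.items()}

      # 12 ppb of O3 produced in a NOx-sensitive cell (all numbers invented).
      print(apportion_o3(12.0,
                         nox_tracers={"mobile": 3.0, "power_plants": 1.0},
                         voc_tracers={"biogenic": 5.0, "solvents": 1.0},
                         nox_sensitive=True))
      # -> {'mobile': 9.0, 'power_plants': 3.0}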

  13. Multiple point statistical simulation using uncertain (soft) conditional data

    NASA Astrophysics Data System (ADS)

    Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou

    2018-05-01

    Geostatistical simulation methods have been used to quantify spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have changed from being based on covariance-based 2-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditioned to uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not account properly for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, where more informed model parameters are visited preferentially to less informed ones. The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data, and hence provide a computationally attractive approach for integrating information about a reservoir model.
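
    The preferential-path idea can be sketched directly: rank cells by the entropy of their soft probabilities and visit the most informed first. The soft data below are random toy values, and the conditional draw is a placeholder for the actual MPS step.

      # A minimal sketch of the "preferential path" idea: visit the most
      # informed cells (lowest entropy of the soft probabilities) first.
      # The soft data are invented; this is not SNESIM/DS itself.
      import numpy as np

      rng = np.random.default_rng(3)
      n_cells, n_facies = 10, 2
      # Soft data: per-cell probability of each facies (rows sum to 1).
      p = rng.dirichlet(alpha=[1.0, 1.0], size=n_cells)

      entropy = -np.sum(p * np.log(p + 1e-12), axis=1)   # 0 = fully informed
      path = np.argsort(entropy)                         # most informed first

      sim = np.full(n_cells, -1)
      for cell in path:
          # Placeholder for the MPS conditional draw: here we just sample
          # the soft probabilities directly.
          sim[cell] = rng.choice(n_facies, p=p[cell])
      print("simulation path (most to least informed):", path)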

  14. High resolution crustal image of South California Continental Borderland: Reverse time imaging including multiples

    NASA Astrophysics Data System (ADS)

    Bian, A.; Gantela, C.

    2014-12-01

    Strong multiples were observed in marine seismic data of the Los Angeles Regional Seismic Experiment (LARSE). It is crucial to eliminate these multiples in conventional ray-based or one-way wave-equation based depth imaging methods. Because multiples carry information about the target zone along their travel path, it is possible to use them as signal to improve the illumination coverage and thus enhance the image quality of structural boundaries. Reverse time migration including multiples is a two-way wave-equation based prestack depth imaging method that uses both primaries and multiples to map structural boundaries. Several factors, including the source wavelet, velocity model, background noise, data acquisition geometry and preprocessing workflow, may influence the quality of the image. The source wavelet is estimated from the direct arrival of the marine seismic data. The migration velocity model is derived from an integrated model building workflow, and the sharp velocity interfaces near the sea bottom need to be preserved in order to generate multiples in the forward and backward propagation steps. The strong-amplitude, low-frequency marine background noise needs to be removed before the final imaging process. High-resolution reverse time image sections of LARSE Line 1 and Line 2 show five interfaces: the depth of the sea bottom, the base of sedimentary basins, the top of the Catalina Schist, a deep layer and a possible pluton boundary. The Catalina Schist shows highs in the San Clemente Ridge, Emery Knoll and Catalina Ridge, under Catalina Basin on both lines, and a minor high under Avalon Knoll. The high of the anticlinal fold in Line 1 is under the north edge of Emery Knoll and under the San Clemente fault zone. An area devoid of any reflection features is interpreted as the sides of an igneous pluton.

  15. An integrated GIS-based data model for multimodal urban public transportation analysis and management

    NASA Astrophysics Data System (ADS)

    Chen, Shaopei; Tan, Jianjun; Ray, C.; Claramunt, C.; Sun, Qinqin

    2008-10-01

    Diversity is one of the main characteristics of transportation data collected from multiple sources or in multiple formats, which can be extremely complex and disparate. Moreover, these multimodal transportation data are usually characterised by spatial and temporal properties. Multimodal transportation network data modelling is both an engineering and a research domain that has motivated the design of a number of spatio-temporal data models in geographic information systems (GIS). However, the application of these models to multimodal transportation networks is still a challenging task. This research addresses the challenge from both integrated multimodal data organization and object-oriented modelling perspectives, that is, how a complex urban transportation network should be organized, represented and modelled appropriately from a multimodal point of view, using object-oriented modelling methods. We propose an integrated GIS-based data model for multimodal urban transportation networks that lays a foundation for enhanced multimodal transportation network analysis and management. This modelling method organizes and integrates multimodal transit network data, and supports multiple representations of spatio-temporal objects and relationships as both visual and graphic views. The data model is expressed using a spatio-temporal object-oriented modelling method, i.e., the unified modelling language (UML) extended with spatial and temporal plug-ins for visual languages (PVLs), which provides essential support for spatio-temporal data modelling in transportation GIS.
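
    In code form, the object-oriented model might look like the following minimal sketch (the paper specifies it in UML); class and field names are invented for illustration.

      # A minimal sketch of an object-oriented multimodal network model in
      # code form; class and field names are illustrative, not the paper's.
      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Stop:
          stop_id: str
          lon: float
          lat: float
          modes: List[str]                 # e.g. ["bus", "metro"] for a transfer hub

      @dataclass
      class Link:
          from_stop: str
          to_stop: str
          mode: str
          # Temporal property: scheduled travel time per departure window.
          travel_min: dict = field(default_factory=dict)

      hub = Stop("S1", 113.26, 23.13, modes=["bus", "metro"])
      leg = Link("S1", "S2", mode="metro", travel_min={"peak": 6.0, "offpeak": 4.5})
      print(hub, leg, sep="\n")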

  16. Real-time trace gas sensor using a multimode diode laser and multiple-line integrated cavity enhanced absorption spectroscopy.

    PubMed

    Karpf, Andreas; Rao, Gottipaty N

    2015-07-01

    We describe and demonstrate a highly sensitive trace gas sensor, based on a simplified design, that is capable of measuring sub-ppb concentrations of NO2 in tens of milliseconds. The sensor makes use of a relatively inexpensive Fabry-Perot diode laser to conduct off-axis cavity enhanced spectroscopy. The broad frequency range of a multimode Fabry-Perot diode laser spans a large number of absorption lines, thereby removing the need for a single-frequency tunable laser source. The use of cavity enhanced absorption spectroscopy enhances the sensitivity of the sensor by providing a pathlength on the order of 1 km in a small volume. Off-axis alignment excites a large number of cavity modes simultaneously, thereby reducing the sensor's susceptibility to vibration. Multiple-line integrated absorption spectroscopy (where one integrates the absorption spectra over a large number of rovibronic transitions of the molecular species) further improves the sensitivity of detection. Relatively high laser power (∼400 mW) is used to compensate for the low coupling efficiency of a broad-linewidth laser to the optical cavity. The approach was demonstrated using a 407 nm diode laser to detect trace quantities of NO2 in zero air. Sensitivities of 750 ppt, 110 ppt, and 65 ppt were achieved using integration times of 50 ms, 5 s, and 20 s, respectively.

  17. Enabling the Integrated Assessment of Large Marine Ecosystems: Informatics to the Forefront of Science-Based Decision Support

    NASA Astrophysics Data System (ADS)

    Di Stefano, M.; Fox, P. A.; Beaulieu, S. E.; Maffei, A. R.; West, P.; Hare, J. A.

    2012-12-01

    Integrated assessments of large marine ecosystems require the understanding of interactions between environmental, ecological, and socio-economic factors that affect the production and utilization of marine natural resources. Assessing the functioning of complex coupled natural-human systems calls for collaboration between natural and social scientists across disciplinary and national boundaries. We are developing a platform to implement and sustain informatics solutions for these applications, providing interoperability among very diverse and heterogeneous data and information sources, as well as multi-disciplinary organizations and people. We have partnered with NOAA NMFS scientists to facilitate the deployment of an integrated ecosystem approach to management in the Northeast U.S. (NES) and California Current Large Marine Ecosystems (LMEs). Our platform will facilitate collaboration and knowledge sharing among NMFS natural and social scientists, promoting community participation in integrating data, models, and knowledge. Here, we present collaborative software tools developed to aid the production of the Ecosystem Status Report (ESR) for the NES LME. The ESR addresses the D-P-S portion of the DPSIR (Driver-Pressure-State-Impact-Response) management framework: reporting data, indicators, and information products for climate drivers, physical and human (fisheries) pressures, and ecosystem state (primary and secondary production and higher trophic levels). We are developing our tools in open-source software, with the main tool based on a web application capable of working with multiple data types from a variety of sources; it provides an effective way to share the source code used to generate data products and associated metadata, as well as to track workflow provenance to aid the reproducibility of a data product. Our platform retrieves data, conducts standard analyses, reports data quality and other standardized metadata, provides iterative and interactive visualization, and enables the download of data plotted in the ESR. Data, indicators, and information products include time series, geographic maps, and uni-variate and multi-variate analyses. Also central to the success of this initiative is the commitment to accommodate and train scientists of multiple disciplines who will learn to interact effectively with this new integrated and interoperable ecosystem assessment capability. Traceability, repeatability, explanation, verification, and validation of data, indicators, and information products are important for cross-disciplinary understanding and sharing with managers, policymakers, and the public. We are also developing an ontology to support the implementation of the DPSIR framework. These new capabilities will serve as the essential foundation for the formal synthesis and quantitative analysis of information on relevant natural and socio-economic factors in relation to specified ecosystem management goals, which can be applied in other LMEs.

  18. Traffic handling capability of a broadband indoor wireless network using CDMA multiple access

    NASA Astrophysics Data System (ADS)

    Zhang, Chang G.; Hafez, H. M.; Falconer, David D.

    1994-05-01

    CDMA (code division multiple access) may be an attractive technique for wireless access to broadband services because of its multiple-access simplicity and other appealing features. In order to investigate the traffic handling capabilities of a future network providing a variety of integrated services, this paper presents a study of a broadband indoor wireless network supporting high-speed traffic using CDMA multiple access. The results are obtained through simulation of an indoor environment and of the traffic capabilities of wireless access to broadband 155.5 Mb/s ATM-SONET networks using the mm-wave band. A distributed system architecture is employed, and system performance is measured in terms of call blocking probability and dropping probability. The impacts of base station density, traffic load, average holding time, and variable traffic sources on system performance are examined. The improvement of system performance by implementing various techniques such as handoff, admission control, power control, and sectorization is also investigated.
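
    The call blocking probability that such a simulation measures is often sanity-checked against the classical Erlang-B formula for a single cell with a fixed number of channels. The snippet below is that generic baseline, not the paper's simulation; the offered load and channel count are arbitrary example values:

    ```python
    def erlang_b(traffic_erlangs: float, channels: int) -> float:
        """Blocking probability via the standard Erlang-B recursion:
        B(0) = 1;  B(m) = A*B(m-1) / (m + A*B(m-1))."""
        b = 1.0
        for m in range(1, channels + 1):
            b = traffic_erlangs * b / (m + traffic_erlangs * b)
        return b

    # Example: 10 Erlangs of offered traffic competing for 15 channels.
    print(f"blocking probability: {erlang_b(10.0, 15):.3f}")
    ```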

  19. Similarity-based prediction for Anatomical Therapeutic Chemical classification of drugs by integrating multiple data sources.

    PubMed

    Liu, Zhongyang; Guo, Feifei; Gu, Jiangyong; Wang, Yong; Li, Yang; Wang, Dan; Lu, Liang; Li, Dong; He, Fuchu

    2015-06-01

    The Anatomical Therapeutic Chemical (ATC) classification system, widely applied in almost all drug utilization studies, is currently the most widely recognized classification system for drugs. Currently, new drug entries are added into the system only on users' requests, which leads to seriously incomplete drug coverage of the system, and bioinformatics prediction is helpful during this process. Here we propose a novel prediction model of drug-ATC code associations, using logistic regression to integrate multiple heterogeneous data sources including chemical structures, target proteins, gene expression, side-effects and chemical-chemical associations. The model obtains good performance for the prediction not only of ATC codes of unclassified drugs but also of new ATC codes of classified drugs, as assessed by cross-validation and independent test sets, and its performance exceeds that of previous methods. Further, to facilitate its use, the model has been developed into a user-friendly web service, SPACE (Similarity-based Predictor of ATC CodE), which, for each submitted compound, gives candidate ATC codes (ranked according to the decreasing probability_score predicted by the model) together with the corresponding supporting evidence. This work not only contributes to knowledge of drugs' therapeutic, pharmacological and chemical properties, but also provides clues for drug repositioning and side-effect discovery. In addition, the construction of the prediction model provides a general framework for similarity-based data integration which is suitable for other drug-related studies such as target and side-effect prediction. The web service SPACE is available at http://www.bprc.ac.cn/space. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
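
    The integration step described above amounts to fitting a logistic model over several per-pair similarity scores. The sketch below shows the shape of such a model on synthetic data; the five features stand in for the real data sources (structure, targets, expression, side-effects, chemical-chemical), and the weights and threshold are arbitrary:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    # Five similarity scores per drug-ATC pair (synthetic stand-ins).
    X = rng.random((n, 5))
    y = (X @ np.array([1.5, 1.0, 0.5, 1.2, 0.8]) + 0.3 * rng.standard_normal(n) > 2.5).astype(int)

    model = LogisticRegression().fit(X, y)
    probability_score = model.predict_proba(X[:3])[:, 1]  # ranking score for candidate codes
    print(probability_score)
    ```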

  20. A general framework for multivariate multi-index drought prediction based on Multivariate Ensemble Streamflow Prediction (MESP)

    NASA Astrophysics Data System (ADS)

    Hao, Zengchao; Hao, Fanghua; Singh, Vijay P.

    2016-08-01

    Drought is among the costliest natural hazards worldwide, and extreme drought events in recent years have caused huge losses to various sectors. Drought prediction is therefore critically important for providing early warning information to aid decision making to cope with drought. Due to the complicated nature of drought, it has been recognized that a univariate drought indicator may not be sufficient for drought characterization, and hence multivariate drought indices have been developed for drought monitoring. Alongside the substantial effort in drought monitoring with multivariate drought indices, it is of equal importance to develop a drought prediction method with multivariate drought indices to integrate drought information from various sources. This study proposes a general framework for multivariate multi-index drought prediction that is capable of integrating complementary prediction skills from multiple drought indices. The Multivariate Ensemble Streamflow Prediction (MESP) is employed to sample from historical records to obtain statistical predictions of multiple variables, which are then used as inputs to achieve multivariate prediction. The framework is illustrated with a linearly combined drought index (LDI), which is a commonly used multivariate drought index, based on climate division data in California and New York in the United States with different seasonality of precipitation. The predictive skill of LDI (represented with persistence) is assessed by comparison with the univariate drought index, and results show that the LDI prediction skill is less affected by seasonality than the meteorological drought prediction based on the Standardized Precipitation Index (SPI). Prediction results from the case study show that the proposed multivariate drought prediction outperforms the persistence prediction, implying a satisfactory performance of multivariate drought prediction. The proposed method would be useful for drought prediction, integrating drought information from various sources for early drought warning.
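
    A minimal sketch of the LDI idea follows (illustrative only: real LDIs are typically derived from principal components of the component indices, and the weights and series here are hypothetical). Each drought index is standardized and the results are linearly combined into a single series:

    ```python
    import numpy as np

    def linearly_combined_drought_index(indices: np.ndarray, weights: np.ndarray) -> np.ndarray:
        """Combine standardized drought indices (rows = time, cols = index) into one LDI."""
        z = (indices - indices.mean(axis=0)) / indices.std(axis=0)  # standardize each index
        return z @ (weights / weights.sum())

    # Hypothetical monthly SPI-like and soil-moisture-like series, as might be
    # resampled from historical records in an MESP-style ensemble.
    history = np.random.default_rng(1).standard_normal((120, 2))
    ldi = linearly_combined_drought_index(history, np.array([0.6, 0.4]))
    print(ldi[:5])
    ```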

  1. Integration of Multiple Genomic and Phenotype Data to Infer Novel miRNA-Disease Associations

    PubMed Central

    Zhou, Meng; Cheng, Liang; Yang, Haixiu; Wang, Jing; Sun, Jie; Wang, Zhenzhen

    2016-01-01

    MicroRNAs (miRNAs) play an important role in the development and progression of human diseases. The identification of disease-associated miRNAs will be helpful for understanding the molecular mechanisms of diseases at the post-transcriptional level. Based on different types of genomic data sources, computational methods for miRNA-disease association prediction have been proposed. However, individual sources of genomic data tend to be incomplete and noisy; therefore, the integration of various types of genomic data for inferring reliable miRNA-disease associations is urgently needed. In this study, we present a computational framework, CHNmiRD, for identifying miRNA-disease associations by integrating multiple genomic and phenotype data, including protein-protein interaction data, gene ontology data, experimentally verified miRNA-target relationships, disease phenotype information and known miRNA-disease connections. The performance of CHNmiRD was evaluated against experimentally verified miRNA-disease associations, achieving an area under the ROC curve (AUC) of 0.834 for 5-fold cross-validation. In particular, CHNmiRD displayed excellent performance for diseases without any known related miRNAs. The results of case studies for three human diseases (glioblastoma, myocardial infarction and type 1 diabetes) showed that all of the top 10 ranked miRNAs having no known associations with these three diseases in existing miRNA-disease databases were directly or indirectly confirmed by our latest literature mining. All these results demonstrate the reliability and efficiency of CHNmiRD, and it is anticipated that CHNmiRD will serve as a powerful bioinformatics method for mining novel disease-related miRNAs and providing a new perspective into the molecular mechanisms underlying human diseases at the post-transcriptional level. CHNmiRD is freely available at http://www.bio-bigdata.com/CHNmiRD. PMID:26849207
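
    The evaluation protocol quoted above (AUC under 5-fold cross-validation) is a standard one; the sketch below reproduces its shape on synthetic stand-in features rather than the actual CHNmiRD heterogeneous-network scores:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((500, 10))            # stand-in features for miRNA-disease pairs
    y = (X[:, 0] + X[:, 1] + 0.5 * rng.standard_normal(500) > 1.0).astype(int)

    # 5-fold cross-validated area under the ROC curve.
    aucs = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc")
    print(f"mean 5-fold AUC: {aucs.mean():.3f}")
    ```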

  2. Integration of Multiple Genomic and Phenotype Data to Infer Novel miRNA-Disease Associations.

    PubMed

    Shi, Hongbo; Zhang, Guangde; Zhou, Meng; Cheng, Liang; Yang, Haixiu; Wang, Jing; Sun, Jie; Wang, Zhenzhen

    2016-01-01

    MicroRNAs (miRNAs) play an important role in the development and progression of human diseases. The identification of disease-associated miRNAs will be helpful for understanding the molecular mechanisms of diseases at the post-transcriptional level. Based on different types of genomic data sources, computational methods for miRNA-disease association prediction have been proposed. However, individual sources of genomic data tend to be incomplete and noisy; therefore, the integration of various types of genomic data for inferring reliable miRNA-disease associations is urgently needed. In this study, we present a computational framework, CHNmiRD, for identifying miRNA-disease associations by integrating multiple genomic and phenotype data, including protein-protein interaction data, gene ontology data, experimentally verified miRNA-target relationships, disease phenotype information and known miRNA-disease connections. The performance of CHNmiRD was evaluated against experimentally verified miRNA-disease associations, achieving an area under the ROC curve (AUC) of 0.834 for 5-fold cross-validation. In particular, CHNmiRD displayed excellent performance for diseases without any known related miRNAs. The results of case studies for three human diseases (glioblastoma, myocardial infarction and type 1 diabetes) showed that all of the top 10 ranked miRNAs having no known associations with these three diseases in existing miRNA-disease databases were directly or indirectly confirmed by our latest literature mining. All these results demonstrate the reliability and efficiency of CHNmiRD, and it is anticipated that CHNmiRD will serve as a powerful bioinformatics method for mining novel disease-related miRNAs and providing a new perspective into the molecular mechanisms underlying human diseases at the post-transcriptional level. CHNmiRD is freely available at http://www.bio-bigdata.com/CHNmiRD.

  3. A WebGIS to Support GPR 3D Data Acquisition: A First Step for the Integration of Underground Utility Networks in 3D City Models

    NASA Astrophysics Data System (ADS)

    Tabarro, P. G.; Pouliot, J.; Fortier, R.; Losier, L.-M.

    2017-10-01

    For the planning and sustainable development of large cities, it is critical to accurately locate and map, in 3D, existing underground utility networks (UUN) such as pipelines, cables, ducts, and channels. An emerging non-invasive instrument for collecting underground data such as UUN is the ground-penetrating radar (GPR). Despite its capabilities, handling GPR and extracting relevant information from its data are not trivial tasks. For instance, both GPR and its complementary software stack provide very few capabilities to co-visualize GPR-collected data and other sources of spatial data such as orthophotography, DEM or road maps. Furthermore, the GPR interface lacks functionalities for adding annotations, editing geometric objects or querying attributes. A new approach to support GPR surveys is proposed in this paper. This approach is based on the integration of multiple sources of geospatial datasets and the use of a Web-GIS system with relevant functionalities adapted to interoperable GPR data acquisition. The Web-GIS is developed as an improved module in an existing platform called GVX. The GVX-GPR module provides an interactive visualization of multiple layers of structured spatial data, including GPR profiles. This module offers new features compared to traditional GPR surveys, such as geo-annotated points of interest for identifying spatial clues in the GPR profiles, integration of city contextual data, high-definition drone and satellite pictures, as-built plans, and more. The paper explains the engineering approach used to design and develop the Web-GIS and tests of this survey approach, mapping and recording UUN as part of a 3D city model.

  4. BioContainers: an open-source and community-driven framework for software standardization.

    PubMed

    da Veiga Leprevost, Felipe; Grüning, Björn A; Alves Aflitos, Saulo; Röst, Hannes L; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I; Perez-Riverol, Yasset

    2017-08-15

    BioContainers (biocontainers.pro) is an open-source and community-driven framework that provides platform-independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software, and combine tools into powerful analysis pipelines. BioContainers is based on the popular open-source Docker and rkt container frameworks, which allow software to be installed and executed in an isolated and controlled environment. It also provides infrastructure and basic guidelines to create, manage and distribute bioinformatics containers with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). The software is freely available at github.com/BioContainers/. Contact: yperez@ebi.ac.uk. © The Author(s) 2017. Published by Oxford University Press.
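
    In practice, running a containerized tool from a pipeline amounts to invoking an image from a registry with the data directory mounted in. The sketch below only illustrates that isolation/mount pattern; the image name, tool flags, and paths are hypothetical placeholders, not a real BioContainers image:

    ```python
    import subprocess

    # Hypothetical image and file names; BioContainers images are published
    # under a <namespace>/<tool>:<version> scheme on public registries.
    cmd = [
        "docker", "run", "--rm",
        "-v", "/data:/data",                    # mount host data into the container
        "biocontainers/example-tool:v1.0.0",    # hypothetical container image
        "example-tool", "--in", "/data/input.fa", "--out", "/data/result.txt",
    ]
    subprocess.run(cmd, check=True)             # raises if the container exits non-zero
    ```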

  5. BioContainers: an open-source and community-driven framework for software standardization

    PubMed Central

    da Veiga Leprevost, Felipe; Grüning, Björn A.; Alves Aflitos, Saulo; Röst, Hannes L.; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C.; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I.; Perez-Riverol, Yasset

    2017-01-01

    Motivation: BioContainers (biocontainers.pro) is an open-source and community-driven framework that provides platform-independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software, and combine tools into powerful analysis pipelines. BioContainers is based on the popular open-source Docker and rkt container frameworks, which allow software to be installed and executed in an isolated and controlled environment. It also provides infrastructure and basic guidelines to create, manage and distribute bioinformatics containers with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). Availability and implementation: The software is freely available at github.com/BioContainers/. Contact: yperez@ebi.ac.uk. PMID:28379341

  6. Hurricane Harvey Riverine Flooding: Part 2: Integration of Heterogeneous Earth Observation Data for Comparative Analysis with High-Resolution Inundation Boundaries Reconstructed from Flood2D-GPU Model

    NASA Astrophysics Data System (ADS)

    Jackson, C.; Sava, E.; Cervone, G.

    2017-12-01

    Hurricane Harvey has been noted as the wettest cyclone on record for the US, as well as the most destructive (so far) of the 2017 hurricane season. An entire year's worth of rainfall occurred over the course of a few days. The city of Houston was greatly impacted as the storm lingered over the city for five days, causing a record-breaking 50+ inches of rain as well as severe damage from flooding. Flood model simulations were performed to reconstruct the event in order to better understand, assess, and predict flooding dynamics for the future. Additionally, a number of remote sensing platforms and on-the-ground instruments that provide near-real-time data have been used for flood identification, monitoring, and damage assessment. Although both flood models and remote sensing techniques are able to identify inundated areas, rapid and accurate flood prediction at a high spatio-temporal resolution remains a challenge. Thus, a methodological approach that fuses the two techniques can help to better validate what is being modeled and observed. Recent advancements in techniques for fusing remote sensing data with near-real-time heterogeneous datasets have allowed emergency responders to more efficiently extract increasingly precise and relevant knowledge from the available information. In this work, the use of multiple sources of contributed data, coupled with remotely sensed and open-source geospatial datasets, is demonstrated to generate an understanding of potential damage assessment for the floods after Hurricane Harvey in Harris County, Texas. The feasibility of integrating multiple sources at different temporal and spatial resolutions into hydrodynamic models for flood inundation simulations is assessed. Furthermore, the contributed datasets are compared against a reconstructed flood extent generated from the Flood2D-GPU model.

  7. Optimization of fixture layouts of glass laser optics using multiple kernel regression.

    PubMed

    Su, Jianhua; Cao, Enhua; Qiao, Hong

    2014-05-10

    We aim to build an integrated fixturing model to describe the structural and thermal properties of the support frame of glass laser optics. Therefore, (a) a near-globally-optimal set of clamps can be computed to minimize the surface shape error of the glass laser optic based on the proposed model, and (b) a desired surface shape error can be obtained by adjusting the clamping forces under various environmental temperatures based on the model. To construct the model, we develop a new multiple kernel learning method, which we call multiple kernel support vector functional regression. The proposed method uses two regression layers to group and order the data sources by the weights of the kernels and the factors of the layers. As a result, the influences of the clamps and of temperature can be evaluated by grouping them into different layers.
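
    The core idea, a weighted combination of kernels feeding a support vector regressor, can be sketched as follows; the kernel weights, bandwidths, and synthetic data below are hypothetical stand-ins for the learned two-layer grouping described above:

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.random((80, 4))                  # clamp forces + ambient temperature (synthetic)
    y = np.sin(X[:, 0] * 3) + 0.1 * X[:, 3]  # stand-in for surface shape error

    # Weighted sum of RBF kernels at different scales; the weights stand in
    # for the learned kernel weights of the two-layer method.
    weights, gammas = [0.7, 0.3], [1.0, 10.0]
    K = sum(w * rbf_kernel(X, X, gamma=g) for w, g in zip(weights, gammas))

    model = SVR(kernel="precomputed").fit(K, y)
    print(model.predict(K[:3]))              # kernel rows between test and training points
    ```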

  8. Managing data from multiple disciplines, scales, and sites to support synthesis and modeling

    USGS Publications Warehouse

    Olson, R. J.; Briggs, J. M.; Porter, J.H.; Mah, Grant R.; Stafford, S.G.

    1999-01-01

    The synthesis and modeling of ecological processes at multiple spatial and temporal scales involves bringing together and sharing data from numerous sources. This article describes a data and information system model that facilitates assembling, managing, and sharing diverse data from multiple disciplines, scales, and sites to support integrated ecological studies. Cross-site scientific-domain working groups coordinate the development of data associated with their particular scientific domain, including decisions about data requirements, data to be compiled, data formats, derived data products, and schedules across the sites. The Web-based data and information system consists of nodes for each working group plus a central node that provides data access, project information, data query, and other functionality. The approach incorporates scientists and computer experts in the working groups and provides incentives for individuals to submit documented data to the data and information system.

  9. SparkClouds: visualizing trends in tag clouds.

    PubMed

    Lee, Bongshin; Riche, Nathalie Henry; Karlson, Amy K; Carpendale, Sheelagh

    2010-01-01

    Tag clouds have proliferated over the web over the last decade. They provide a visual summary of a collection of texts by depicting tag frequency by font size. In use, tag clouds can evolve as the associated data source changes over time. Interesting discussions around tag clouds often include a series of tag clouds and consider how they evolve over time. However, since tag clouds do not explicitly represent trends or support comparisons, the cognitive demands placed on a person perceiving trends across multiple tag clouds are high. In this paper, we introduce SparkClouds, which integrate sparklines into a tag cloud to convey trends between multiple tag clouds. We present results from a controlled study that compares SparkClouds with two traditional trend visualizations (multiple line graphs and stacked bar charts) as well as Parallel Tag Clouds. Results show that SparkClouds' ability to show trends compares favourably to the alternative visualizations.
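
    A crude rendering of the SparkClouds idea (font size encodes current frequency, a small sparkline above each tag shows its history) can be put together in a few lines; the tags, counts, and layout constants below are made up:

    ```python
    import matplotlib.pyplot as plt

    tags = {"ocean": [4, 6, 9, 12], "data": [10, 8, 7, 9], "model": [2, 5, 8, 14]}

    fig = plt.figure(figsize=(6, 2))
    x = 0.05
    for word, counts in tags.items():
        size = 10 + 2 * counts[-1]                # font size encodes current frequency
        fig.text(x, 0.3, word, fontsize=size)
        ax = fig.add_axes([x, 0.55, 0.12, 0.3])   # small sparkline above the word
        ax.plot(counts, lw=1)
        ax.axis("off")
        x += 0.3
    plt.savefig("sparkcloud_demo.png")
    ```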

  10. Fissile material detector

    DOEpatents

    Ivanov, Alexander I.; Lushchikov, Vladislav I.; Shabalin, Eugeny P.; Maznyy, Nikita G.; Khvastunov, Michael M.; Rowland, Mark

    2002-01-01

    A detector for fissile materials that provides integrity monitoring and can be used for nondestructive assay to confirm the presence of a stable content of fissile material in items. The detector has a sample cavity large enough to enable assay of large items of arbitrary configuration, utilizes neutron sources fabricated in spatially extended shapes mounted on the endcaps of the sample cavity, and incorporates a thermal neutron filter insert with reflector properties; the electronics module includes a neutron multiplicity coincidence counter.

  11. A source with a 10^13 DT neutron yield on the basis of a spherical plasma focus chamber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zavyalov, N. V.; Maslov, V. V.; Rumyantsev, V. G., E-mail: rumyantsev@expd.vniief.ru

    2013-03-15

    Results from preliminary experimental research of neutron emission generated by a spherical plasma focus chamber filled with an equal-component deuterium-tritium mixture are presented. At a maximum current amplitude in the discharge chamber of ~1.5 MA, neutron pulses with a full width at half-maximum of 75-80 ns and an integral yield of ~1.3 × 10^13 DT neutrons have been recorded.

  12. “Gestaltomics”: Systems Biology Schemes for the Study of Neuropsychiatric Diseases

    PubMed Central

    Gutierrez Najera, Nora A.; Resendis-Antonio, Osbaldo; Nicolini, Humberto

    2017-01-01

    The integration of different sources of biological information about what defines a behavioral phenotype is difficult to unify in an entity that reflects the arithmetic sum of its individual parts. In this sense, the challenge of Systems Biology for understanding the “psychiatric phenotype” is to provide an improved vision of the shape of the phenotype as it is visualized by “Gestalt” psychology, whose fundamental axiom is that the observed phenotype (behavior or mental disorder) will be the result of the integrative composition of every part. Therefore, we propose the term “Gestaltomics” as a term from Systems Biology to integrate data coming from different sources of information (such as the genome, transcriptome, proteome, epigenome, metabolome, phenome, and microbiome). In addition to this biological complexity, the mind is integrated through multiple brain functions that receive and process complex information through channels and perception networks (e.g., sight, hearing, smell, memory, and attention) that in turn are programmed by genes and influenced by environmental processes (epigenetics). Today, the approach of medical research in human diseases is to isolate one disease for study; however, the presence of an additional disease (co-morbidity) or more than one disease (multimorbidity) adds complexity to the study of these conditions. This review will present the challenge of integrating psychiatric disorders at different levels of information (Gestaltomics). The implications of increasing the level of complexity, for example, studying co-morbidity with another disease such as cancer, will also be discussed. PMID:28536537

  13. Development of DKB ETL module in case of data conversion

    NASA Astrophysics Data System (ADS)

    Kaida, A. Y.; Golosova, M. V.; Grigorieva, M. A.; Gubin, M. Y.

    2018-05-01

    Modern scientific experiments involve producing huge volumes of data, which requires new approaches to data processing and storage. These data, as well as their processing and storage, are accompanied by a substantial amount of additional information, called metadata, distributed over multiple information systems and repositories and having a complicated, heterogeneous structure. Gathering these metadata for experiments in the field of high energy nuclear physics (HENP) is a complex issue, requiring solutions outside the box. One of the tasks is to integrate metadata from different repositories into a central storage. During the integration process, metadata taken from the original source repositories go through several processing steps: aggregation, transformation according to the current data model, and loading into the general storage in a standardized form. The Data Knowledge Base (DKB), an R&D project of the ATLAS experiment at the LHC, aims to provide fast and easy access to significant information about LHC experiments for the scientific community. The data integration subsystem being developed for the DKB project can be represented as a number of particular pipelines arranging data flow from data sources to the main DKB storage. The data transformation process, represented by a single pipeline, can be considered as a number of successive data transformation steps, where each step is implemented as an individual program module. This article outlines the specifics of the program modules used in the dataflow and describes one of the modules developed and integrated into the data integration subsystem of the DKB.
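
    The pipeline-of-modules design described above can be sketched as plain function composition; the step names below (aggregation, model mapping, standardization) follow the abstract, while the field names and values are illustrative only:

    ```python
    from typing import Callable, Iterable

    Step = Callable[[dict], dict]

    def pipeline(steps: Iterable[Step]) -> Step:
        """Compose independent transformation modules into one dataflow."""
        def run(record: dict) -> dict:
            for step in steps:
                record = step(record)
            return record
        return run

    # Hypothetical steps: aggregate metadata, map to the target data model, standardize.
    def aggregate(r): return {**r, "sources": ["source-a", "source-b"]}  # names illustrative
    def transform(r): return {**r, "model": "dkb-v1"}
    def load_form(r): return {k: str(v) for k, v in r.items()}

    etl = pipeline([aggregate, transform, load_form])
    print(etl({"dataset": "example.0001"}))
    ```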

  14. LOCALIZING INTEGRAL SOURCES WITH CHANDRA: X-RAY AND MULTI-WAVELENGTH IDENTIFICATIONS AND ENERGY SPECTRA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomsick, John A.; Bodaghee, Arash; Chaty, Sylvain

    2012-08-01

    We report on Chandra observations of 18 hard X-ray (>20 keV) sources discovered with the INTEGRAL satellite near the Galactic plane. For 14 of the INTEGRAL sources, we have uncovered one or two potential Chandra counterparts per source. These provide soft X-ray (0.3-10 keV) spectra and subarcsecond localizations, which we use to identify counterparts at other wavelengths, providing information about the nature of each source. Despite the fact that all of the sources are within 5° of the plane, four of the IGR sources are active galactic nuclei (AGNs; IGR J01545+6437, IGR J15391-5307, IGR J15415-5029, and IGR J21565+5948) and four others are likely AGNs (IGR J03103+5706, IGR J09189-4418, IGR J16413-4046, and IGR J16560-4958) based on each of them having a strong IR excess and/or extended optical or near-IR emission. We compare the X-ray and near-IR fluxes of this group of sources to those of AGNs selected by their 2-10 keV emission in previous studies and find that these IGR AGNs are in the range of typical values. There is evidence in favor of four of the sources being Galactic (IGR J12489-6243, IGR J15293-5609, IGR J16173-5023, and IGR J16206-5253), but only IGR J15293-5609 is confirmed as a Galactic source as it has a unique Chandra counterpart and a parallax measurement from previous optical observations that puts its distance at 1.56 ± 0.12 kpc. The 0.3-10 keV luminosity for this source is (1.4 +1.0/-0.4) × 10^32 erg s^-1, and its optical/IR spectral energy distribution is well described by a blackbody with a temperature of 4200-7000 K and a radius of 12.0-16.4 R_Sun. These values suggest that IGR J15293-5609 is a symbiotic binary with an early K-type giant and a white dwarf accretor. We also obtained likely Chandra identifications for IGR J13402-6428 and IGR J15368-5102, but follow-up observations are required to constrain their source types.

  15. A service-oriented distributed semantic mediator: integrating multiscale biomedical information.

    PubMed

    Mora, Oscar; Engelbrecht, Gerhard; Bisbal, Jesus

    2012-11-01

    Biomedical research continuously generates large amounts of heterogeneous and multimodal data spread over multiple data sources. These data, if appropriately shared and exploited, could dramatically improve the research practice itself, and ultimately the quality of health care delivered. This paper presents DISMED (DIstributed Semantic MEDiator), an open source semantic mediator that provides a unified view of a federated environment of multiscale biomedical data sources. DISMED is a Web-based software application to query and retrieve information distributed over a set of registered data sources, using semantic technologies. It also offers a user-friendly interface specifically designed to simplify the usage of these technologies by non-expert users. Although the architecture of the software mediator is generic and domain independent, in the context of this paper DISMED has been evaluated for managing biomedical environments and facilitating research with respect to the handling of scientific data distributed in multiple heterogeneous data sources. As part of this contribution, a quantitative evaluation framework has been developed. It consists of a benchmarking scenario and the definition of five realistic use-cases. This framework, created entirely with public datasets, has been used to compare the performance of DISMED against other available mediators. It is also available to the scientific community in order to evaluate progress in the domain of semantic mediation in a systematic and comparable manner. The results show an average improvement in execution time by DISMED of 55% compared to the second-best alternative in four out of the five use-cases of the experimental evaluation.
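
    Architecturally, a mediator of this kind fans one query out to every registered source adapter and merges the normalized results into a unified view. The sketch below is a generic illustration of that pattern, not DISMED's implementation; the adapter is a stub that returns placeholder records:

    ```python
    import concurrent.futures

    def query_source(source_name: str, query: str) -> list[dict]:
        # Stand-in for translating `query` to the source's native interface
        # (SPARQL, SQL, REST, ...) and normalizing results to a common schema.
        return [{"source": source_name, "match": query}]

    def mediate(query: str, sources: list[str]) -> list[dict]:
        """Fan a query out to all registered sources and merge the unified view."""
        with concurrent.futures.ThreadPoolExecutor() as pool:
            partials = pool.map(lambda s: query_source(s, query), sources)
        merged: list[dict] = []
        for rows in partials:
            merged.extend(rows)
        return merged

    print(mediate("gene:BRCA1", ["biobank-a", "imaging-b", "literature-c"]))
    ```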

  16. Application of Molecular Typing Results in Source Attribution Models: The Case of Multiple Locus Variable Number Tandem Repeat Analysis (MLVA) of Salmonella Isolates Obtained from Integrated Surveillance in Denmark.

    PubMed

    de Knegt, Leonardo V; Pires, Sara M; Löfström, Charlotta; Sørensen, Gitte; Pedersen, Karl; Torpdahl, Mia; Nielsen, Eva M; Hald, Tine

    2016-03-01

    Salmonella is an important cause of bacterial foodborne infections in Denmark. To identify the main animal-food sources of human salmonellosis, risk managers have relied on routine application of a microbial subtyping-based source attribution model since 1995. In 2013, multiple locus variable number tandem repeat analysis (MLVA) replaced phage typing as the subtyping method for surveillance of S. Enteritidis and S. Typhimurium isolated from animals, food, and humans in Denmark. The purpose of this study was to develop a modeling approach applying a combination of serovars, MLVA types, and antibiotic resistance profiles for Salmonella source attribution, and to assess the utility of the results for food safety decision-makers. Full and simplified MLVA schemes from surveillance data were tested, and model fit and consistency of results were assessed using statistical measures. We conclude that the loci schemes STTR5/STTR10/STTR3 for S. Typhimurium and SE9/SE5/SE2/SE1/SE3 for S. Enteritidis can be used in microbial subtyping-based source attribution models. Based on the results, we discuss that an adjustment of the discriminatory level of the subtyping method applied will often be required to fit the purpose of the study and the available data. The issues discussed are also considered highly relevant when applying, e.g., extended multi-locus sequence typing or next-generation sequencing techniques. © 2015 Society for Risk Analysis.
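
    At its simplest, microbial subtyping-based attribution apportions human cases of each subtype across sources in proportion to how often the subtype occurs in each source. The sketch below shows only that frequency-matching logic; the model actually used in Denmark is a Bayesian model with source- and type-specific factors, and all numbers here are hypothetical:

    ```python
    import numpy as np

    subtypes = ["MLVA-A", "MLVA-B", "MLVA-C"]
    source_freq = {                      # hypothetical subtype frequencies per reservoir
        "pigs":    np.array([0.6, 0.3, 0.1]),
        "layers":  np.array([0.1, 0.2, 0.7]),
        "imports": np.array([0.3, 0.5, 0.2]),
    }
    human_cases = np.array([120, 80, 200])   # hypothetical case counts per subtype

    # Expected cases per source, proportional to subtype overlap with humans.
    shares = {s: f @ human_cases for s, f in source_freq.items()}
    total = sum(shares.values())
    for s, v in shares.items():
        print(f"{s}: {v / total:.1%} of attributable cases")
    ```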

  17. Low cost and open source multi-fluorescence imaging system for teaching and research in biology and bioengineering.

    PubMed

    Nuñez, Isaac; Matute, Tamara; Herrera, Roberto; Keymer, Juan; Marzullo, Timothy; Rudge, Timothy; Federici, Fernán

    2017-01-01

    The advent of easy-to-use open source microcontrollers, off-the-shelf electronics and customizable manufacturing technologies has facilitated the development of inexpensive scientific devices and laboratory equipment. In this study, we describe an imaging system that integrates low-cost and open-source hardware, software and genetic resources. The multi-fluorescence imaging system consists of readily available 470 nm LEDs, a Raspberry Pi camera and a set of filters made with low-cost acrylics. This device allows imaging at scales ranging from single colonies to entire plates. We developed a set of genetic components (e.g. promoters, coding sequences, terminators) and vectors following the standard framework of Golden Gate, which allowed the fabrication of genetic constructs in a combinatorial, low-cost and robust manner. In order to provide simultaneous imaging of multiple wavelength signals, we screened a series of long Stokes shift fluorescent proteins that could be combined with cyan/green fluorescent proteins. We found CyOFP1, mBeRFP and sfGFP to be the most compatible set for 3-channel fluorescent imaging. We developed open-source Python code to operate the hardware and run time-lapse experiments with automated control of illumination and camera, and a Python module to analyze data and extract meaningful biological information. To demonstrate the potential application of this integrated system, we tested its performance on a diverse range of imaging assays often used in disciplines such as microbial ecology, microbiology and synthetic biology. We also assessed its potential use in a high school environment to teach biology, hardware design, optics, and programming. Together, these results demonstrate the successful integration of open-source hardware, software, genetic resources and customizable manufacturing to obtain a powerful, low-cost and robust system for education, scientific research and bioengineering. All the resources developed here are available under open source licenses.
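
    Time-lapse control of this kind of rig typically reduces to a loop that switches the excitation LEDs and triggers the camera. The sketch below uses the standard picamera and RPi.GPIO libraries but is not the authors' published code; the GPIO pin, frame count, and cadence are hypothetical:

    ```python
    import time
    from picamera import PiCamera   # Raspberry Pi camera API
    import RPi.GPIO as GPIO         # GPIO control for the excitation LEDs

    LED_PIN = 18                    # hypothetical pin wired to the 470 nm LED driver

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LED_PIN, GPIO.OUT)
    camera = PiCamera()

    try:
        for i in range(24):                      # e.g. one frame every 10 minutes
            GPIO.output(LED_PIN, GPIO.HIGH)      # illuminate only during exposure
            time.sleep(1)                        # let the LED output stabilize
            camera.capture(f"frame_{i:03d}.jpg")
            GPIO.output(LED_PIN, GPIO.LOW)
            time.sleep(600)
    finally:
        GPIO.cleanup()
        camera.close()
    ```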

  18. Low cost and open source multi-fluorescence imaging system for teaching and research in biology and bioengineering

    PubMed Central

    Herrera, Roberto; Keymer, Juan; Marzullo, Timothy; Rudge, Timothy

    2017-01-01

    The advent of easy-to-use open source microcontrollers, off-the-shelf electronics and customizable manufacturing technologies has facilitated the development of inexpensive scientific devices and laboratory equipment. In this study, we describe an imaging system that integrates low-cost and open-source hardware, software and genetic resources. The multi-fluorescence imaging system consists of readily available 470 nm LEDs, a Raspberry Pi camera and a set of filters made with low-cost acrylics. This device allows imaging at scales ranging from single colonies to entire plates. We developed a set of genetic components (e.g. promoters, coding sequences, terminators) and vectors following the standard framework of Golden Gate, which allowed the fabrication of genetic constructs in a combinatorial, low-cost and robust manner. In order to provide simultaneous imaging of multiple wavelength signals, we screened a series of long Stokes shift fluorescent proteins that could be combined with cyan/green fluorescent proteins. We found CyOFP1, mBeRFP and sfGFP to be the most compatible set for 3-channel fluorescent imaging. We developed open-source Python code to operate the hardware and run time-lapse experiments with automated control of illumination and camera, and a Python module to analyze data and extract meaningful biological information. To demonstrate the potential application of this integrated system, we tested its performance on a diverse range of imaging assays often used in disciplines such as microbial ecology, microbiology and synthetic biology. We also assessed its potential use in a high school environment to teach biology, hardware design, optics, and programming. Together, these results demonstrate the successful integration of open-source hardware, software, genetic resources and customizable manufacturing to obtain a powerful, low-cost and robust system for education, scientific research and bioengineering. All the resources developed here are available under open source licenses. PMID:29140977

  19. An integral nuclear power and propulsion system concept

    NASA Astrophysics Data System (ADS)

    Choong, Phillip T.; Teofilo, Vincent L.; Begg, Lester L.; Dunn, Charles; Otting, William

    An integral space power concept provides both electrical power and propulsion from a common heat source and offers superior performance capabilities over conventional orbital insertion using chemical propulsion systems. This paper describes a hybrid (bimodal) system concept based on a proven, inherently safe solid fuel form for high-temperature reactor core operation and a rugged planar thermionic energy converter for long-life steady-state electric power production, combined with NERVA-based rocket technology for propulsion. The integral system is capable of long-life power operation and multiple propulsion operations. At an optimal thrust level, the integral system can meet the minimal delta-V requirement while minimizing the orbital transfer time. A trade study comparing the overall benefits of placing large payloads into GEO with the nuclear electric propulsion option shows the superiority of nuclear thermal propulsion. The resulting savings in orbital transfer time and the substantial reduction of the overall lift requirement enable the use of low-cost launchers for several near-term military satellite missions.
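
    The propulsion comparison rests on the Tsiolkovsky rocket equation: for a fixed delta-V, higher specific impulse sharply cuts the propellant mass fraction. The sketch below illustrates this with rough, assumed Isp and delta-V values, not figures from the paper:

    ```python
    import math

    G0 = 9.80665  # standard gravity, m/s^2

    def propellant_fraction(delta_v: float, isp: float) -> float:
        """Propellant mass fraction from the Tsiolkovsky rocket equation."""
        return 1.0 - math.exp(-delta_v / (isp * G0))

    DV_LEO_TO_GEO = 4200.0  # m/s, rough transfer figure (assumption)
    for name, isp in [("chemical (~450 s)", 450), ("nuclear thermal (~900 s)", 900)]:
        frac = propellant_fraction(DV_LEO_TO_GEO, isp)
        print(f"{name}: {frac:.0%} of initial mass is propellant")
    ```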

  20. Asymmetric temporal integration of layer 4 and layer 2/3 inputs in visual cortex.

    PubMed

    Hang, Giao B; Dan, Yang

    2011-01-01

    Neocortical neurons in vivo receive concurrent synaptic inputs from multiple sources, including feedforward, horizontal, and feedback pathways. Layer 2/3 of the visual cortex receives feedforward input from layer 4 and horizontal input from layer 2/3. Firing of the pyramidal neurons, which carries the output to higher cortical areas, depends critically on the interaction of these pathways. Here we examined synaptic integration of inputs from layer 4 and layer 2/3 in rat visual cortical slices. We found that the integration is sublinear and temporally asymmetric, with larger responses if layer 2/3 input preceded layer 4 input. The sublinearity depended on inhibition, and the asymmetry was largely attributable to the difference between the two inhibitory inputs. Interestingly, the asymmetric integration was specific to pyramidal neurons, and it strongly affected their spiking output. Thus via cortical inhibition, the temporal order of activation of layer 2/3 and layer 4 pathways can exert powerful control of cortical output during visual processing.

  1. Probabilistic tsunami hazard analysis: Multiple sources and global applications

    USGS Publications Warehouse

    Grezio, Anita; Babeyko, Andrey; Baptista, Maria Ana; Behrens, Jörn; Costa, Antonio; Davies, Gareth; Geist, Eric L.; Glimsdal, Sylfest; González, Frank I.; Griffin, Jonathan; Harbitz, Carl B.; LeVeque, Randall J.; Lorito, Stefano; Løvholt, Finn; Omira, Rachid; Mueller, Christof; Paris, Raphaël; Parsons, Thomas E.; Polet, Jascha; Power, William; Selva, Jacopo; Sørensen, Mathilde B.; Thio, Hong Kie

    2017-01-01

    Applying probabilistic methods to infrequent but devastating natural events is intrinsically challenging. For tsunami analyses, a suite of geophysical assessments should be in principle evaluated because of the different causes generating tsunamis (earthquakes, landslides, volcanic activity, meteorological events, and asteroid impacts) with varying mean recurrence rates. Probabilistic Tsunami Hazard Analyses (PTHAs) are conducted in different areas of the world at global, regional, and local scales with the aim of understanding tsunami hazard to inform tsunami risk reduction activities. PTHAs enhance knowledge of the potential tsunamigenic threat by estimating the probability of exceeding specific levels of tsunami intensity metrics (e.g., run-up or maximum inundation heights) within a certain period of time (exposure time) at given locations (target sites); these estimates can be summarized in hazard maps or hazard curves. This discussion presents a broad overview of PTHA, including (i) sources and mechanisms of tsunami generation, emphasizing the variety and complexity of the tsunami sources and their generation mechanisms, (ii) developments in modeling the propagation and impact of tsunami waves, and (iii) statistical procedures for tsunami hazard estimates that include the associated epistemic and aleatoric uncertainties. Key elements in understanding the potential tsunami hazard are discussed, in light of the rapid development of PTHA methods during the last decade and the globally distributed applications, including the importance of considering multiple sources, their relative intensities, probabilities of occurrence, and uncertainties in an integrated and consistent probabilistic framework.
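
    The core hazard-curve computation can be sketched compactly: if each source type is modeled as an independent Poisson process with an annual rate of exceeding a given intensity at the target site, the rates add, and the exposure-time exceedance probability follows. All rates below are hypothetical:

    ```python
    import math

    def exceedance_probability(annual_rates: list[float], exposure_years: float) -> float:
        """P(at least one tsunami exceeding the intensity threshold in T years),
        treating each source as an independent Poisson process."""
        total_rate = sum(annual_rates)
        return 1.0 - math.exp(-total_rate * exposure_years)

    # Hypothetical annual rates of exceeding a 3 m run-up at one target site,
    # for three source types: local earthquakes, distant earthquakes, landslides.
    rates = [1 / 500, 1 / 2000, 1 / 10000]
    print(f"50-year exceedance probability: {exceedance_probability(rates, 50):.2%}")
    ```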

  2. Probabilistic Tsunami Hazard Analysis: Multiple Sources and Global Applications

    NASA Astrophysics Data System (ADS)

    Grezio, Anita; Babeyko, Andrey; Baptista, Maria Ana; Behrens, Jörn; Costa, Antonio; Davies, Gareth; Geist, Eric L.; Glimsdal, Sylfest; González, Frank I.; Griffin, Jonathan; Harbitz, Carl B.; LeVeque, Randall J.; Lorito, Stefano; Løvholt, Finn; Omira, Rachid; Mueller, Christof; Paris, Raphaël.; Parsons, Tom; Polet, Jascha; Power, William; Selva, Jacopo; Sørensen, Mathilde B.; Thio, Hong Kie

    2017-12-01

    Applying probabilistic methods to infrequent but devastating natural events is intrinsically challenging. For tsunami analyses, a suite of geophysical assessments should be in principle evaluated because of the different causes generating tsunamis (earthquakes, landslides, volcanic activity, meteorological events, and asteroid impacts) with varying mean recurrence rates. Probabilistic Tsunami Hazard Analyses (PTHAs) are conducted in different areas of the world at global, regional, and local scales with the aim of understanding tsunami hazard to inform tsunami risk reduction activities. PTHAs enhance knowledge of the potential tsunamigenic threat by estimating the probability of exceeding specific levels of tsunami intensity metrics (e.g., run-up or maximum inundation heights) within a certain period of time (exposure time) at given locations (target sites); these estimates can be summarized in hazard maps or hazard curves. This discussion presents a broad overview of PTHA, including (i) sources and mechanisms of tsunami generation, emphasizing the variety and complexity of the tsunami sources and their generation mechanisms, (ii) developments in modeling the propagation and impact of tsunami waves, and (iii) statistical procedures for tsunami hazard estimates that include the associated epistemic and aleatoric uncertainties. Key elements in understanding the potential tsunami hazard are discussed, in light of the rapid development of PTHA methods during the last decade and the globally distributed applications, including the importance of considering multiple sources, their relative intensities, probabilities of occurrence, and uncertainties in an integrated and consistent probabilistic framework.

  3. Bringing Web 2.0 to bioinformatics.

    PubMed

    Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P

    2009-01-01

    Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.

  4. AN INTEGRATED APPROACH TO CHARACTERIZING BYPASSED OIL IN HETEROGENEOUS AND FRACTURED RESERVOIRS USING PARTITIONING TRACERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhil Datta-Gupta

    2003-08-01

    We explore the use of efficient streamline-based simulation approaches for modeling partitioning interwell tracer tests in hydrocarbon reservoirs. Specifically, we utilize the unique features of streamline models to develop an efficient approach for interpretation and history matching of field tracer response. A critical aspect here is the underdetermined and highly ill-posed nature of the associated inverse problems. We have adopted an integrated approach whereby we combine data from multiple sources to minimize the uncertainty and non-uniqueness in the interpreted results. For partitioning interwell tracer tests, these are primarily the distribution of reservoir permeability and the oil saturation distribution. A novel approach to multiscale data integration using Markov Random Fields (MRF) has been developed to integrate static data sources from the reservoir such as core, well log and 3-D seismic data. We have also explored the use of a finite difference reservoir simulator, UTCHEM, for field-scale design and optimization of partitioning interwell tracer tests. The finite-difference model allows us to include detailed physics associated with reactive tracer transport, particularly those related to transverse and cross-streamline mechanisms. We have investigated the potential use of downhole tracer samplers and also the use of natural tracers for the design of partitioning tracer tests. Finally, the behavior of partitioning tracer tests in fractured reservoirs is investigated using a dual-porosity finite-difference model.
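
    The basic interpretation step of a partitioning interwell tracer test can be sketched from the standard retardation relation R = 1 + K*So/(1 - So), where R is the ratio of partitioning- to conservative-tracer arrival times and K is the partition coefficient; solving for the oil saturation So gives the function below (field numbers are hypothetical):

    ```python
    def oil_saturation(retardation: float, partition_coeff: float) -> float:
        """Average oil saturation from a partitioning interwell tracer test.

        retardation: ratio of partitioning- to conservative-tracer arrival times (R);
        partition_coeff: tracer oil/water partition coefficient (K).
        Solves R = 1 + K*So/(1 - So) for So.
        """
        return (retardation - 1.0) / (retardation - 1.0 + partition_coeff)

    # Hypothetical field numbers: partitioning tracer arrives 1.3x later, K = 4.
    print(f"estimated swept-zone oil saturation: {oil_saturation(1.3, 4.0):.1%}")
    ```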

  5. Quantifying multiple telecouplings using an integrated suite of spatially-explicit tools

    NASA Astrophysics Data System (ADS)

    Tonini, F.; Liu, J.

    2016-12-01

    Telecoupling is an interdisciplinary research umbrella concept that enables natural and social scientists to understand and generate information for managing how humans and nature can sustainably coexist worldwide. To systematically study telecoupling, it is essential to build a comprehensive set of spatially-explicit tools for describing and quantifying multiple reciprocal socioeconomic and environmental interactions between a focal area and other areas. Here we introduce the Telecoupling Toolbox, a new free and open-source set of tools developed to map and identify the five major interrelated components of the telecoupling framework: systems, flows, agents, causes, and effects. The modular design of the toolbox allows the integration of existing tools and software (e.g. InVEST) to assess synergies and tradeoffs associated with policies and other local to global interventions. We show applications of the toolbox using a number of representative studies that address a variety of scientific and management issues related to telecouplings throughout the world. The results suggest that the toolbox can thoroughly map and quantify multiple telecouplings under various contexts while providing users with an easy-to-use interface. It provides a powerful platform to address globally important issues, such as land use and land cover change, species invasion, migration, flows of ecosystem services, and international trade of goods and products.

  6. On-chip integration of suspended InGaN/GaN multiple-quantum-well devices with versatile functionalities.

    PubMed

    Cai, Wei; Yang, Yongchao; Gao, Xumin; Yuan, Jialei; Yuan, Wei; Zhu, Hongbo; Wang, Yongjin

    2016-03-21

    We propose, fabricate and demonstrate on-chip photonic integration of suspended InGaN/GaN multiple quantum wells (MQWs) devices on the GaN-on-silicon platform. Both silicon removal and back wafer etching are conducted to obtain membrane-type devices, and suspended waveguides are used for the connection between p-n junction InGaN/GaN MQWs devices. As an in-plane data transmission system, the middle p-n junction InGaN/GaN MQWs device is used as a light emitting diode (LED) to deliver signals by modulating the intensity of the emitted light, and the other two devices act as photodetectors (PDs) to sense the light guided by the suspended waveguide and convert the photons into electrons, achieving 1 × 2 in-plane information transmission via visible light. Correspondingly, the three devices can function as independent PDs to realize multiple receivers for free space visible light communication. Further, the on-chip photonic platform can be used as an active electro-optical sensing system when the middle device acts as a PD and the other two devices serve as LEDs. The experimental results show that the auxiliary LED sources can enhance the amplitude of the induced photocurrent.

  7. High-efficiency integrated piezoelectric energy harvesting systems

    NASA Astrophysics Data System (ADS)

    Hande, Abhiman; Shah, Pradeep

    2010-04-01

    This paper describes the hierarchically architected development of an energy harvesting (EH) system that consists of micro- and/or macro-scale harvesters matched to multiple components of remote wireless sensor and communication nodes. The micro-scale harvesters consist of thin-film MEMS piezoelectric cantilever arrays and power generation modules in IC-like form to allow efficient EH from vibrations. The design uses new high-conversion-efficiency thin-film processes combined with novel cantilever structures tuned to multiple resonant frequencies as broadband arrays. The macro-scale harvesters are used to power the collector nodes that have higher power specifications. These bulk harvesters can be integrated with efficient adaptive power management circuits that match transducer impedance and maximize power harvested from multiple scavenging sources with very low intrinsic power consumption. Texas MicroPower, Inc. is developing a process based on a composition that has the highest reported energy density compared to other commercially available bulk PZT-based sensor/actuator ceramic materials, and is extending it to thin-film materials and miniature conversion transducer structures. The multi-form-factor harvesters can be deployed for several military and commercial applications such as underground unattended sensors, sensors in oil rigs, structural health monitoring, supply chain management, and battlefield applications such as sensors on soldier apparel, equipment, and wearable electronics.

  8. Architecture for WSN Nodes Integration in Context Aware Systems Using Semantic Messages

    NASA Astrophysics Data System (ADS)

    Larizgoitia, Iker; Muguira, Leire; Vazquez, Juan Ignacio

    Wireless sensor networks (WSN) are becoming extremely popular in the development of context aware systems. Traditionally, WSN have been focused on capturing data, which were later analyzed and interpreted on a server with more computational power. In this kind of scenario, the problem of representing the sensor information needs to be addressed. Every node in the network might have different sensors attached; therefore their corresponding packet structures will be different. The server has to be aware of the meaning of every single structure and datum in order to be able to interpret them. Multiple sensors, multiple nodes, and multiple packet structures that follow no standard format are neither scalable nor interoperable. Context aware systems have solved this problem with the use of semantic technologies. They provide a common framework to achieve a standard definition of any domain. Nevertheless, these representations are computationally expensive, so a WSN cannot afford them. The work presented in this paper tries to bridge the gap between the sensor information and its semantic representation by defining a simple architecture that enables the definition of this information natively in a semantic way, achieving the integration of the semantic information in the network packets. This will have several benefits, the most important being the possibility of promoting every WSN node to a real semantic information source.
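
    One minimal way to make packets self-describing is to replace per-node layouts with a fixed header carrying a semantic concept identifier that indexes a shared ontology table. The sketch below shows such an encoding; the concept URIs and field choices are illustrative, not the paper's format:

    ```python
    import struct

    # The 2-byte ID indexes a shared ontology table (URIs here are illustrative).
    ONTOLOGY = {1: "http://example.org/sosa/Temperature", 2: "http://example.org/sosa/Humidity"}

    def encode(concept_id: int, value: float) -> bytes:
        return struct.pack("!Hf", concept_id, value)   # network byte order: uint16 + float32

    def decode(packet: bytes) -> dict:
        concept_id, value = struct.unpack("!Hf", packet)
        return {"concept": ONTOLOGY[concept_id], "value": round(value, 3)}

    pkt = encode(1, 21.5)
    print(len(pkt), "bytes ->", decode(pkt))   # a 6-byte, self-describing reading
    ```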

  9. Generating a focused view of disease ontology cancer terms for pan-cancer data integration and analysis

    PubMed Central

    Wu, Tsung-Jung; Schriml, Lynn M.; Chen, Qing-Rong; Colbert, Maureen; Crichton, Daniel J.; Finney, Richard; Hu, Ying; Kibbe, Warren A.; Kincaid, Heather; Meerzaman, Daoud; Mitraka, Elvira; Pan, Yang; Smith, Krista M.; Srivastava, Sudhir; Ward, Sari; Yan, Cheng; Mazumder, Raja

    2015-01-01

    Bio-ontologies provide terminologies for the scientific community to describe biomedical entities in a standardized manner. There are multiple initiatives that are developing biomedical terminologies for the purpose of providing better annotation, data integration and mining capabilities. Terminology resources devised for multiple purposes inherently diverge in content and structure. A major issue of biomedical data integration is the development of overlapping terms, ambiguous classifications and inconsistencies represented across databases and publications. The disease ontology (DO) was developed over the past decade to address data integration, standardization and annotation issues for human disease data. We have established a DO cancer project to be a focused view of cancer terms within the DO. The DO cancer project mapped 386 cancer terms from the Catalogue of Somatic Mutations in Cancer (COSMIC), The Cancer Genome Atlas (TCGA), International Cancer Genome Consortium, Therapeutically Applicable Research to Generate Effective Treatments, Integrative Oncogenomics and the Early Detection Research Network into a cohesive set of 187 DO terms represented by 63 top-level DO cancer terms. For example, the COSMIC term ‘kidney, NS, carcinoma, clear_cell_renal_cell_carcinoma’ and TCGA term ‘Kidney renal clear cell carcinoma’ were both grouped to the term ‘Disease Ontology Identification (DOID):4467 / renal clear cell carcinoma’ which was mapped to the TopNodes_DOcancerslim term ‘DOID:263 / kidney cancer’. Mapping of diverse cancer terms to DO and the use of top level terms (DO slims) will enable pan-cancer analysis across datasets generated from any of the cancer term sources where pan-cancer means including or relating to all or multiple types of cancer. The terms can be browsed from the DO web site (http://www.disease-ontology.org) and downloaded from the DO’s Apache Subversion or GitHub repositories. Database URL: http://www.disease-ontology.org PMID:25841438
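
    Using the kidney-cancer example from the abstract, the roll-up from source-specific terms to a DO term and then to a top-level DO slim term is a two-stage mapping; a minimal sketch of that lookup follows:

    ```python
    # Term grouping using the example mapping given in the abstract: source-specific
    # cancer terms roll up to one DO term, then to a top-level "DO slim" term.
    SOURCE_TO_DOID = {
        "kidney, NS, carcinoma, clear_cell_renal_cell_carcinoma": "DOID:4467",  # COSMIC
        "Kidney renal clear cell carcinoma": "DOID:4467",                       # TCGA
    }
    DOID_TO_SLIM = {"DOID:4467": "DOID:263"}  # renal clear cell carcinoma -> kidney cancer

    def to_slim(source_term: str) -> str:
        return DOID_TO_SLIM[SOURCE_TO_DOID[source_term]]

    print(to_slim("Kidney renal clear cell carcinoma"))  # DOID:263
    ```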

  10. Steady-State Ion Beam Modeling with MICHELLE

    NASA Astrophysics Data System (ADS)

    Petillo, John

    2003-10-01

    There is a need to efficiently model ion beam physics for ion implantation, chemical vapor deposition, and ion thrusters. Common to all is the need for three-dimensional (3D) simulation of volumetric ion sources, ion acceleration, and optics, with the ability to model charge exchange of the ion beam with a background neutral gas. Two pieces of physics stand out as significant: the modeling of the volumetric source and charge exchange. In the MICHELLE code, the method for modeling the plasma sheath in ion sources assumes that the electron distribution function is a Maxwellian function of electrostatic potential over electron temperature. Charge exchange is the process by which a background neutral gas atom exchanges an electron with a "fast" charged particle streaming through it. An efficient method for capturing this is essential, and the model presented is based on semi-empirical collision cross-section functions. This appears to be the first steady-state 3D algorithm of its type to contain multiple generations of charge exchange, work with multiple species and multiple charge-state beam/source particles simultaneously, take into account self-consistent space-charge effects, and track the subsequent fast neutral particles. The solution used by MICHELLE is to combine finite element analysis with particle-in-cell (PIC) methods. The basic physics model is based on the equilibrium steady-state application of the electrostatic particle-in-cell (PIC) approximation employing a conformal computational mesh. The foundation stems from the same basic model introduced in codes such as EGUN. Here, Poisson's equation is used to self-consistently include the effects of space charge on the fields, and the relativistic Lorentz equation is used to integrate the particle trajectories through those fields. The presentation will consider the complexity of modeling ion thrusters.
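
    Per trajectory step, a charge-exchange model of this kind reduces to a survival probability set by the neutral density, the collision cross section, and the step length. The sketch below shows that standard exp(-n*sigma*s) bookkeeping with hypothetical numbers; it is not MICHELLE's implementation:

    ```python
    import math
    import random

    def charge_exchange_survival(n_gas: float, sigma: float, path_cm: float) -> float:
        """Probability an ion keeps its charge over a path through neutral gas."""
        return math.exp(-n_gas * sigma * path_cm)

    def step_ion(n_gas: float, sigma: float, ds: float) -> bool:
        """One trajectory step: True if the ion charge-exchanged (became a fast neutral)."""
        p_cx = 1.0 - math.exp(-n_gas * sigma * ds)
        return random.random() < p_cx

    # Hypothetical thruster-like numbers: n = 1e12 cm^-3, sigma = 5e-15 cm^2.
    print(f"survival over 10 cm: {charge_exchange_survival(1e12, 5e-15, 10.0):.2%}")
    ```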

  11. Metabolic engineering with plants for a sustainable biobased economy.

    PubMed

    Yoon, Jong Moon; Zhao, Le; Shanks, Jacqueline V

    2013-01-01

    Plants are bona fide sustainable organisms because they accumulate carbon and synthesize beneficial metabolites from photosynthesis. To meet the challenges to food security and health posed by increasing population growth and the depletion of nonrenewable natural resources, recent metabolic engineering efforts have shifted from single pathways to holistic approaches involving multiple genes, owing to the integration of omics technologies. Successful engineering of plants results in high yields of biomass components for primary food sources and biofuel feedstocks, pharmaceuticals, and platform chemicals through synthetic biology and systems biology strategies. Further discovery of undefined biosynthesis pathways in plants, integrative analysis of discrete omics data, and diversified process developments for the production of platform chemicals are essential to overcome the hurdles to sustainable production of value-added biomolecules from plants.

  12. Semantic Web Ontology and Data Integration: a Case Study in Aiding Psychiatric Drug Repurposing.

    PubMed

    Liang, Chen; Sun, Jingchun; Tao, Cui

    2015-01-01

    There remain significant difficulties in selecting probable candidate drugs from existing databases. We describe an ontology-oriented approach to represent the nexus between genes, drugs, phenotypes, symptoms, and diseases from multiple information sources. We also report a case study in which we explored candidate drugs effective for both bipolar disorder and epilepsy. We constructed an ontology incorporating knowledge about the two diseases and performed semantic reasoning tasks with the ontology. The results suggested 48 candidate drugs that hold promise for further investigation. The evaluation demonstrated the validity of our approach. Our approach prioritizes candidate drugs that have potential associations among genes, phenotypes and symptoms, and thus facilitates data integration and drug repurposing in psychiatric disorders.
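
    The kind of association chaining such an ontology supports can be sketched with toy data (the drug, gene and disease entries below are invented for illustration; the paper's actual knowledge base and semantic reasoner are not reproduced): drugs whose targets intersect the genes shared by both diseases surface as repurposing candidates.

      # Toy association data (invented for illustration only).
      drug_targets = {"drugA": {"SCN1A", "CACNA1C"}, "drugB": {"HTR2A"}}
      disease_genes = {
          "bipolar disorder": {"CACNA1C", "ANK3"},
          "epilepsy": {"SCN1A", "CACNA1C"},
      }

      def candidates(drugs, diseases):
          """Drugs whose target genes intersect the gene sets of every disease."""
          shared = set.intersection(*diseases.values())
          return [d for d, targets in drugs.items() if targets & shared]

      print(candidates(drug_targets, disease_genes))  # ['drugA']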

  13. Desensitized Optimal Filtering and Sensor Fusion Toolkit

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.

    2015-01-01

    Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions, as well as a Monte Carlo analysis capability, is included to enable statistical performance evaluations.
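
    The toolkit itself is not shown here, but the predict/update cycle that all of these filters build on is standard. A generic NumPy sketch of one Kalman step for a linear model x' = Fx + w, z = Hx + v (illustrative only, not AMA's code):

      import numpy as np

      def kalman_step(x, P, z, F, H, Q, R):
          """One generic Kalman filter predict/update cycle (illustrative only)."""
          # Predict: propagate the state estimate and covariance through the model.
          x_pred = F @ x
          P_pred = F @ P @ F.T + Q
          # Update: blend the prediction with measurement z via the Kalman gain.
          S = H @ P_pred @ H.T + R                  # innovation covariance
          K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
          x_new = x_pred + K @ (z - H @ x_pred)
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

    Desensitized variants additionally penalize the sensitivity of the estimate to uncertain model parameters when choosing the gain; the mechanics above are otherwise unchanged.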

  14. Computing Fourier integral operators with caustics

    NASA Astrophysics Data System (ADS)

    Caday, Peter

    2016-12-01

    Fourier integral operators (FIOs) have widespread applications in imaging, inverse problems, and PDEs. An implementation of a generic algorithm for computing FIOs associated with canonical graphs is presented, based on a recent paper of de Hoop et al. Given the canonical transformation and principal symbol of the operator, a preprocessing step reduces application of an FIO approximately to multiplications, pushforwards and forward and inverse discrete Fourier transforms, which can be computed in O(N^(n+(n-1)/2) log N) time for an n-dimensional FIO. The same preprocessed data also allows computation of the inverse and transpose of the FIO, with identical runtime. Examples demonstrate the algorithm’s output, and easily extendible MATLAB/C++ source code is available from the author.

  15. Mass Spec Studio for Integrative Structural Biology

    PubMed Central

    Rey, Martial; Sarpe, Vladimir; Burns, Kyle; Buse, Joshua; Baker, Charles A.H.; van Dijk, Marc; Wordeman, Linda; Bonvin, Alexandre M.J.J.; Schriemer, David C.

    2015-01-01

    The integration of biophysical data from multiple sources is critical for developing accurate structural models of large multiprotein systems and their regulators. Mass spectrometry (MS) can be used to measure the insertion location for a wide range of topographically sensitive chemical probes, and such insertion data provide a rich, but disparate, set of modeling restraints. We have developed a software platform that integrates the analysis of label-based MS data with protein modeling activities (Mass Spec Studio). Analysis packages can mine any labeling data from any mass spectrometer in a proteomics-grade manner, and link labeling methods with data-directed protein interaction modeling using HADDOCK. Support is provided for hydrogen/deuterium exchange (HX) and covalent labeling chemistries, including novel acquisition strategies such as targeted HX-tandem MS (MS2) and data-independent HX-MS2. The latter permits the modeling of highly complex systems, which we demonstrate by the analysis of microtubule interactions. PMID:25242457

  16. Seeking Synthesis: The Integrative Problem in Understanding Language and Its Evolution.

    PubMed

    Dale, Rick; Kello, Christopher T; Schoenemann, P Thomas

    2016-04-01

    We discuss two problems for a general scientific understanding of language, sequences and synergies: how language is an intricately sequenced behavior and how language is manifested as a multidimensionally structured behavior. Though both are central in our understanding, we observe that the former tends to be studied more than the latter. We consider very general conditions that hold in human brain evolution and its computational implications, and identify multimodal and multiscale organization as two key characteristics of emerging cognitive function in our species. This suggests that human brains, and cognitive function specifically, became more adept at integrating diverse information sources and operating at multiple levels for linguistic performance. We argue that framing language evolution, learning, and use in terms of synergies suggests new research questions, and it may be a fruitful direction for new developments in theory and modeling of language as an integrated system.

  17. Integrating musculoskeletal sonography into rehabilitation: Therapists’ experiences with training and implementation

    PubMed Central

    Gray, Julie McLaughlin; Frank, Gelya; Roll, Shawn C.

    2018-01-01

    Musculoskeletal sonography is rapidly extending beyond radiology; however, best practices for successful integration into new practice contexts are unknown. This study explored non-physician experiences with the processes of training and integration of musculoskeletal sonography into rehabilitation. Qualitative data were captured through multiple sources and iterative thematic analysis was used to describe two occupational therapists’ experiences. The dominant emerging theme was competency, in three domains: technical, procedural and analytical. Additionally, three practice considerations were illuminated: (1) understanding imaging within the dynamics of rehabilitation, (2) navigating nuances of interprofessional care, and (3) implications for post-professional training. Findings indicate that sonography training for rehabilitation providers requires multi-level competency development and consideration of practice complexities. These data lay a foundation on which to explore and develop best practices for incorporating sonographic imaging into the clinic as a means for engaging clients as active participants in the rehabilitation process to improve health and rehabilitation outcomes. PMID:28830315

  18. Disk-integrated reflection light curves of planets

    NASA Astrophysics Data System (ADS)

    Garcia Munoz, A.

    2014-03-01

    The light scattered by a planet's atmosphere contains valuable information on the planet's composition and aerosol content. Typically, the interpretation of that information requires elaborate radiative transport models accounting for the absorption and scattering processes undergone by the star's photons on their passage through the atmosphere. I have been working on a particular family of algorithms based on Backward Monte Carlo (BMC) integration for solving the multiple-scattering problem in atmospheric media. BMC algorithms simulate statistically the photon trajectories in the reverse order to that in which they actually occur, i.e. they trace the photons from the detector through the atmospheric medium and onwards to the illumination source, following probability laws dictated by the medium's optical properties. BMC algorithms are versatile, as they can handle diverse viewing and illumination geometries, and can readily accommodate various physical phenomena. As will be shown, BMC algorithms are very well suited for the prediction of magnitudes integrated over a planet's disk (whether uniform or not). Disk-integrated magnitudes are relevant in the current context of exploration of extrasolar planets because spatially resolving these objects will not be technologically feasible in the near future. I have been working on various predictions for the disk-integrated properties of planets that demonstrate the capacities of the BMC algorithm. These cases include the variability of the Earth's integrated signal caused by diurnal and seasonal changes in the surface reflectance and cloudiness, or by sporadic injection of large amounts of volcanic particles into the atmosphere. Since the implemented BMC algorithm includes a polarization mode, these examples also serve to illustrate the potential of polarimetry in the characterization of both Solar System and extrasolar planets. The work is complemented with the analysis of disk-integrated photometric observations of Earth and Venus drawn from various sources.
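
    The backward tracing idea can be illustrated with a deliberately tiny example: a plane-parallel, isotropically scattering slab, a nadir-viewing detector, and a local-estimation tally toward the sun at each scattering event. The NumPy sketch below is unnormalized and ignores polarization and surface reflection; it shows the reverse-tracing structure only, not the author's algorithm.

      import numpy as np

      rng = np.random.default_rng(1)

      def backward_mc(tau_star=1.0, omega=0.9, mu0=0.8, n_photons=100_000):
          """Toy backward Monte Carlo: trace photons from a nadir-viewing detector
          down into an isotropically scattering slab and, at each scattering event,
          tally the (unnormalized) chance the reverse path connects to the sun."""
          total = 0.0
          for _ in range(n_photons):
              tau, mu, w = 0.0, -1.0, 1.0            # start at top, heading down
              while True:
                  tau = tau - mu * -np.log(rng.random())   # next interaction depth
                  if tau < 0.0 or tau > tau_star:          # escaped top / hit bottom
                      break
                  w *= omega                               # survive the scattering
                  total += w * np.exp(-tau / mu0) / (4 * np.pi)  # local estimate to sun
                  mu = 2.0 * rng.random() - 1.0            # isotropic new direction
                  if w < 1e-6:                             # negligible weight: stop
                      break
          return total / n_photons

      print(backward_mc())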

  19. Effect of multiple-source entry on price competition after patent expiration in the pharmaceutical industry.

    PubMed Central

    Suh, D C; Manning, W G; Schondelmeyer, S; Hadsall, R S

    2000-01-01

    OBJECTIVE: To analyze the effect of multiple-source drug entry on price competition after patent expiration in the pharmaceutical industry. DATA SOURCES: Originators and their multiple-source drugs selected from the 35 chemical entities whose patents expired from 1984 through 1987. Data were obtained from various primary and secondary sources for the patents' expiration dates, sales volume and units sold, and characteristics of drugs in the sample markets. STUDY DESIGN: The study was designed to determine significant factors using the study model developed under the assumption that the off-patented market is an imperfectly segmented market. PRINCIPAL FINDINGS: After patent expiration, the originators' prices continued to increase, while the price of multiple-source drugs decreased significantly over time. By the fourth year after patent expiration, originators' sales had decreased 12 percent in dollars and 30 percent in quantity. Multiple-source drugs increased their sales twofold in dollars and threefold in quantity, and possessed about one-fourth (in dollars) and half (in quantity) of the total market three years after entry. CONCLUSION: After patent expiration, multiple-source drugs compete largely with other multiple-source drugs in the price-sensitive sector, but indirectly with the originator in the price-insensitive sector. Originators have first-mover advantages, and therefore have a market that is less price sensitive after multiple-source drugs enter. On the other hand, multiple-source drugs target the price-sensitive sector, using their lower-priced drugs. This trend may indicate that the off-patented market is imperfectly segmented between the price-sensitive and insensitive sector. Consumers as a whole can gain from the entry of multiple-source drugs because the average price of the market continually declines after patent expiration. PMID:10857475

  20. Processing Uav and LIDAR Point Clouds in Grass GIS

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.

    2016-06-01

    Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion (SfM) technique, and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques with regard to their performance and the final digital surface model (DSM). Finally, we will describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community as well as by the original authors themselves.
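
    As a flavor of the scripting interface, the GRASS Python API can drive lidar tools of the kind described here; a brief sketch (assumes a running GRASS session and a hypothetical input file points.las; option names vary between GRASS versions):

      # Run inside a GRASS GIS session (grass.script is GRASS's Python API).
      # 'points.las' is a hypothetical input file; options vary by GRASS version.
      import grass.script as gs

      # Bin lidar returns directly into a 1 m digital surface model (max of returns).
      gs.run_command("r.in.lidar", input="points.las", output="dsm",
                     method="max", resolution=1)

      # Import the cloud as vector points, then thin it, keeping every 10th point.
      gs.run_command("v.in.lidar", input="points.las", output="cloud")
      gs.run_command("v.decimate", input="cloud", output="cloud_thin", preserve=10)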

  1. ReNE: A Cytoscape Plugin for Regulatory Network Enhancement

    PubMed Central

    Politano, Gianfranco; Benso, Alfredo; Savino, Alessandro; Di Carlo, Stefano

    2014-01-01

    One of the biggest challenges in the study of biological regulatory mechanisms is the integration, modeling, and analysis of the complex interactions which take place in biological networks. Although post-transcriptional regulatory elements (i.e., miRNAs) are widely investigated in current research, their usage and visualization in biological networks is very limited. Regulatory networks are commonly limited to gene entities. To integrate networks with post-transcriptional regulatory data, researchers are therefore forced to manually resort to specific third-party databases. In this context, we introduce ReNE, a Cytoscape 3.x plugin designed to automatically enrich a standard gene-based regulatory network with more detailed transcriptional, post-transcriptional, and translational data, resulting in an enhanced network that more precisely models the actual biological regulatory mechanisms. ReNE can automatically import a network layout from the Reactome or KEGG repositories, or work with custom pathways described using a standard OWL/XML data format that the Cytoscape import procedure accepts. Moreover, ReNE allows researchers to merge multiple pathways coming from different sources. The merged network structure is normalized to guarantee a consistent and uniform description of the network nodes and edges and to enrich all integrated data with additional annotations retrieved from genome-wide databases like NCBI, thus producing a pathway fully manageable through the Cytoscape environment. The normalized network is then analyzed to include missing transcription factors, miRNAs, and proteins. The resulting enhanced network is still a fully functional Cytoscape network where each regulatory element (transcription factor, miRNA, gene, protein) and regulatory mechanism (up-regulation/down-regulation) is clearly visually identifiable, thus enabling a better visual understanding of its role and effect in the network behavior. The enhanced network produced by ReNE is exportable in multiple formats for further analysis via third-party applications. ReNE can be freely installed from the Cytoscape App Store (http://apps.cytoscape.org/apps/rene) and the full source code is freely available for download through an SVN repository accessible at http://www.sysbio.polito.it/tools_svn/BioInformatics/Rene/releases/. ReNE enhances a network by only integrating data from public repositories, without any inference or prediction. The reliability of the introduced interactions therefore depends only on the reliability of the source data, which is outside the control of the ReNE developers. PMID:25541727

  2. RNA-protein binding motifs mining with a new hybrid deep learning based cross-domain knowledge integration approach.

    PubMed

    Pan, Xiaoyong; Shen, Hong-Bin

    2017-02-28

    RNAs play key roles in cells through the interactions with proteins known as RNA-binding proteins (RBP), and their binding motifs enable crucial understanding of the post-transcriptional regulation of RNAs. How the RBPs correctly recognize the target RNAs and why they bind specific positions is still far from clear. Machine learning-based algorithms are widely acknowledged to be capable of speeding up this process. Although many automatic tools have been developed to predict the RNA-protein binding sites from the rapidly growing multi-resource data, e.g. sequence and structure, their domain-specific features and formats have posed significant computational challenges. One current difficulty is that the cross-source shared common knowledge sits at a higher abstraction level beyond the observed data, resulting in low efficiency when directly integrating observed data across domains. The other difficulty is how to interpret the prediction results. Existing approaches tend to terminate after outputting the potential discrete binding sites on the sequences, but how to assemble them into meaningful binding motifs is a topic worthy of further investigation. In view of these challenges, we propose a deep learning-based framework (iDeep) using a novel hybrid convolutional neural network and deep belief network to predict the RBP interaction sites and motifs on RNAs. This new protocol is featured by transforming the original observed data into a high-level abstraction feature space using multiple layers of learning blocks, where the shared representations across different domains are integrated. To validate our iDeep method, we performed experiments on 31 large-scale CLIP-seq datasets, and our results show that by integrating multiple sources of data, the average AUC can be improved by 8% compared to the best single-source-based predictor; and through cross-domain knowledge integration at an abstraction level, it outperforms the state-of-the-art predictors by 6%. Besides the overall enhanced prediction performance, the convolutional neural network module embedded in iDeep is also able to automatically capture interpretable binding motifs for RBPs. Large-scale experiments demonstrate that these mined binding motifs agree well with the experimentally verified results, suggesting iDeep is a promising approach in real-world applications. The iDeep framework not only achieves better performance than the state-of-the-art predictors, but also readily captures interpretable binding motifs. iDeep is available at http://www.csbio.sjtu.edu.cn/bioinf/iDeep.
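
    iDeep's exact hybrid CNN/deep-belief-network architecture is not reproduced here, but the convolutional half of the idea (1D filters scanning one-hot-encoded sequence, so that learned filters behave like motif detectors) can be sketched in Keras, with hypothetical layer sizes and toy data; assumes TensorFlow is installed:

      import numpy as np
      from tensorflow import keras

      # One-hot RNA sequences: (samples, length, 4 channels for A/C/G/U).
      seq_len = 101
      model = keras.Sequential([
          keras.Input(shape=(seq_len, 4)),
          # Each 1D filter scans the sequence like a position weight matrix,
          # so learned filters can be read back as candidate binding motifs.
          keras.layers.Conv1D(16, kernel_size=8, activation="relu"),
          keras.layers.GlobalMaxPooling1D(),
          keras.layers.Dense(32, activation="relu"),
          keras.layers.Dense(1, activation="sigmoid"),   # bound / not bound
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy")

      # Toy data just to show the shapes; real work uses CLIP-seq-derived labels.
      x = np.random.rand(64, seq_len, 4)
      y = np.random.randint(0, 2, size=(64, 1))
      model.fit(x, y, epochs=1, verbose=0)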

  3. Leak localization and quantification with a small unmanned aerial system

    NASA Astrophysics Data System (ADS)

    Golston, L.; Zondlo, M. A.; Frish, M. B.; Aubut, N. F.; Yang, S.; Talbot, R. W.

    2017-12-01

    Methane emissions from oil and gas facilities are a recognized source of greenhouse gas emissions, requiring cost-effective and reliable monitoring systems to support leak detection and repair programs. We describe a set of methods for locating and quantifying natural gas leaks using a small unmanned aerial system (sUAS) equipped with a path-integrated methane sensor along with ground-based wind measurements. The algorithms are developed as part of a system for continuous well-pad-scale (100 m² area) monitoring, supported by a series of over 200 methane release trials covering multiple release locations and flow rates. Test measurements include data obtained on a rotating boom platform as well as flight tests on a sUAS. Throughout the trials, the system reliably distinguished between cases with and without a methane release down to 6 scfh (0.032 g/s). Among several methods evaluated for horizontal localization, the location corresponding to the maximum integrated methane reading performed best, with a median error of ±1 m if two or more flights are averaged, or ±1.2 m for individual flights. Additionally, a method of rotating the data around the estimated leak location is developed, with the leak magnitude calculated as the average crosswind-integrated flux in the region near the source location. Validation of these methods will be presented, including blind test results. Sources of error, including GPS uncertainty, meteorological variables, and flight pattern coverage, will be discussed.
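
    The quantification step described above is a mass-balance calculation: multiply the crosswind-integrated, path-integrated concentration by the mean wind speed. A schematic NumPy version with invented numbers (units noted in comments; not the authors' code):

      import numpy as np

      # Hypothetical crosswind transect of path-integrated methane enhancements.
      y = np.linspace(-20.0, 20.0, 41)           # crosswind position (m)
      pic = np.exp(-y**2 / 50.0) * 1e-3          # path-integrated concentration (g/m^2)
      u = 3.0                                    # mean wind speed normal to transect (m/s)

      # Crosswind-integrated flux: emission rate ~ u * integral of PIC over y.
      q = u * np.trapz(pic, y)                   # (m/s) * (g/m^2 * m) = g/s
      print(f"estimated leak rate: {q:.3f} g/s")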

  4. Integrated Giant Magnetoresistance Technology for Approachable Weak Biomagnetic Signal Detections

    PubMed Central

    Shen, Hui-Min; Hu, Liang; Fu, Xin

    2018-01-01

    With the extensive applications of biomagnetic signals derived from active biological tissue in both clinical diagnoses and human-computer interaction, there is an increasing need for approachable weak biomagnetic sensing technology. The inherent merits of giant magnetoresistance (GMR) and its high integration with multiple technologies make it possible to detect weak biomagnetic signals with micron-sized, non-cooled and low-cost sensors, considering that the magnetic field intensity attenuates rapidly with distance. This paper focuses on the state-of-the-art in integrated GMR technology for approachable biomagnetic sensing from the perspective of discipline fusion between them. The progress in integrated GMR to overcome the challenges in weak biomagnetic signal detection towards high-resolution portable applications is addressed. The various strategies for 1/f noise reduction and sensitivity enhancement in integrated GMR technology for sub-pT biomagnetic signal recording are discussed. In this paper, we review the developments of integrated GMR technology for in vivo/in vitro biomagnetic source imaging and demonstrate how integrated GMR can be utilized for biomagnetic field detection. Since the field sensitivity of integrated GMR technology is being pushed to fT/Hz^0.5 with focused efforts, it is believed that the potential of integrated GMR technology will make it a preferred choice for weak biomagnetic signal detection in the future. PMID:29316670

  5. Integrated Giant Magnetoresistance Technology for Approachable Weak Biomagnetic Signal Detections.

    PubMed

    Shen, Hui-Min; Hu, Liang; Fu, Xin

    2018-01-07

    With the extensive applications of biomagnetic signals derived from active biological tissue in both clinical diagnoses and human-computer interaction, there is an increasing need for approachable weak biomagnetic sensing technology. The inherent merits of giant magnetoresistance (GMR) and its high integration with multiple technologies make it possible to detect weak biomagnetic signals with micron-sized, non-cooled and low-cost sensors, considering that the magnetic field intensity attenuates rapidly with distance. This paper focuses on the state-of-the-art in integrated GMR technology for approachable biomagnetic sensing from the perspective of discipline fusion between them. The progress in integrated GMR to overcome the challenges in weak biomagnetic signal detection towards high-resolution portable applications is addressed. The various strategies for 1/f noise reduction and sensitivity enhancement in integrated GMR technology for sub-pT biomagnetic signal recording are discussed. In this paper, we review the developments of integrated GMR technology for in vivo/in vitro biomagnetic source imaging and demonstrate how integrated GMR can be utilized for biomagnetic field detection. Since the field sensitivity of integrated GMR technology is being pushed to fT/Hz^0.5 with focused efforts, it is believed that the potential of integrated GMR technology will make it a preferred choice for weak biomagnetic signal detection in the future.

  6. Near Real Time Integration of Satellite and Radar Data for Probabilistic Nearcasting of Severe Weather

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Quinn, P.; Mitchell, A. E.; Baynes, K.; Shum, D.

    2014-12-01

    This talk introduces the audience to some of the very real challenges associated with visualizing data from disparate data sources as encountered during the development of real-world applications. In addition to the fundamental challenges of dealing with the data and imagery, this talk discusses usability problems encountered while trying to provide interactive and user-friendly visualization tools. At the end of this talk the audience will be aware of some of the pitfalls of data visualization along with tools and techniques to help mitigate them. There are many sources of variable-resolution visualizations of science data available to application developers, including NASA's Global Imagery Browse Services (GIBS); however, integrating and leveraging visualizations in modern applications faces a number of challenges, including:
    - Varying visualized Earth "tile sizes" resulting in challenges merging disparate sources
    - Multiple visualization frameworks and toolkits with varying strengths and weaknesses
    - Global composite imagery vs. imagery matching EOSDIS granule distribution
    - Challenges visualizing geographically overlapping data with different temporal bounds
    - User interaction with overlapping or collocated data
    - Complex data boundaries and shapes combined with multi-orbit data and polar projections
    - Discovering the availability of visualizations and the specific parameters, color palettes, and configurations used to produce them
    In addition to discussing the challenges and approaches involved in visualizing disparate data, we will discuss solutions and components we'll be making available as open source to encourage reuse and accelerate application development.

  7. Mining and integration of pathway diagrams from imaging data.

    PubMed

    Kozhenkov, Sergey; Baitaluk, Michael

    2012-03-01

    Pathway diagrams from PubMed and the World Wide Web (WWW) contain valuable, highly curated information that is difficult to reach without tools specifically designed and customized for the biological semantics and high content density of the images. There is currently no search engine or tool that can analyze pathway images, extract their pathway components (molecules, genes, proteins, organelles, cells, organs, etc.) and indicate their relationships. Here, we describe a resource of pathway diagrams retrieved from article and web-page images through optical character recognition, in conjunction with data mining and data integration methods. The recognized pathways are integrated into the BiologicalNetworks research environment, linking them to a wealth of data available in the BiologicalNetworks knowledgebase, which integrates data from >100 public data sources and the biomedical literature. Multiple search and analytical tools are available that allow the recognized cellular pathways, molecular networks and cell/tissue/organ diagrams to be studied in the context of integrated knowledge, experimental data and the literature. BiologicalNetworks software and the pathway repository are freely available at www.biologicalnetworks.org. Supplementary data are available at Bioinformatics online.

  8. Deterministic Integration of Quantum Dots into on-Chip Multimode Interference Beamsplitters Using in Situ Electron Beam Lithography

    NASA Astrophysics Data System (ADS)

    Schnauber, Peter; Schall, Johannes; Bounouar, Samir; Höhne, Theresa; Park, Suk-In; Ryu, Geun-Hwan; Heindel, Tobias; Burger, Sven; Song, Jin-Dong; Rodt, Sven; Reitzenstein, Stephan

    2018-04-01

    The development of multi-node quantum optical circuits has attracted great attention in recent years. In particular, interfacing quantum-light sources, gates and detectors on a single chip is highly desirable for the realization of large networks. In this context, fabrication techniques that enable the deterministic integration of pre-selected quantum-light emitters into nanophotonic elements play a key role when moving forward to circuits containing multiple emitters. Here, we present the deterministic integration of an InAs quantum dot into a 50/50 multi-mode interference beamsplitter via in-situ electron beam lithography. We demonstrate the combined emitter-gate interface functionality by measuring triggered single-photon emission on-chip with g^(2)(0) = 0.13 ± 0.02. Due to its high patterning resolution as well as spectral and spatial control, in-situ electron beam lithography allows for integration of pre-selected quantum emitters into complex photonic systems. Being a scalable single-step approach, it paves the way towards multi-node, fully integrated quantum photonic chips.

  9. Post-disaster supply chain interdependent critical infrastructure system restoration: A review of data necessary and available for modeling

    USGS Publications Warehouse

    Ramachandran, Varun; Long, Suzanna K.; Shoberg, Thomas G.; Corns, Steven; Carlo, Hector J.

    2016-01-01

    The majority of restoration strategies in the wake of large-scale disasters have focused on short-term emergency response solutions. Few consider medium- to long-term restoration strategies to reconnect urban areas to national supply chain interdependent critical infrastructure systems (SCICI). These SCICI promote the effective flow of goods, services, and information vital to the economic vitality of an urban environment. To re-establish the connectivity that has been broken during a disaster between the different SCICI, relationships between these systems must be identified, formulated, and added to a common framework to form a system-level restoration plan. To accomplish this goal, a considerable collection of SCICI data is necessary. The aim of this paper is to review what data are required for model construction, the accessibility of these data, and their integration with each other. While a review of publicly available data reveals a dearth of real-time data to assist modeling long-term recovery following an extreme event, a significant amount of static data does exist, and these data can be used to model the complex interdependencies needed. For the sake of illustration, a particular SCICI (transportation) is used to highlight the challenges of determining the interdependencies and creating models capable of describing the complexity of an urban environment with the publicly available data. Integration of data derived from public-domain sources is readily achieved in a geospatial environment; after all, geospatial infrastructure data are the most abundant data source. While significant quantities of data can be acquired through public sources, a significant effort is still required to gather, develop, and integrate these data from multiple sources to build a complete model. Therefore, while continued availability of high-quality public information is essential for modeling efforts in academic as well as government communities, a more streamlined approach to real-time acquisition and integration of these data is essential.

  10. Autophagy in alcohol-induced liver diseases

    PubMed Central

    Dolganiuc, Angela; Thomes, Paul G.; Ding, Wen-Xing; Lemasters, John J.; Donohue, Terrence M.

    2013-01-01

    Alcohol is the most abused substance worldwide and a significant source of liver injury; the mechanisms of alcohol-induced liver disease are not fully understood. Significant cellular toxicity and impairment of protein synthesis and degradation occur in alcohol-exposed liver cells, along with changes in energy balance and modified responses to pathogens. Autophagy is the process of cellular catabolism through the lysosomal-dependent machinery, which maintains a balance among protein synthesis, degradation, and recycling of self. Autophagy is part of normal homeostasis and it can be triggered by multiple factors that threaten cell integrity including starvation, toxins, or pathogens. Multiple factors regulate autophagy; survival and preservation of cellular integrity at the expense of inadequately-folded proteins and damaged high energy-generating intracellular organelles are prominent targets of autophagy in pathologic conditions. Coincidentally, inadequately-folded proteins accumulate and high energy-generating intracellular organelles, such as mitochondria, are damaged by alcohol abuse; these alcohol-induced pathological findings prompted investigation of the role of autophagy in the pathogenesis of alcohol-induced liver damage. Our review summarizes the current knowledge about the role and implications of autophagy in alcohol-induced liver disease. PMID:22551004

  11. Light-effect transistor (LET) with multiple independent gating controls for optical logic gates and optical amplification

    NASA Astrophysics Data System (ADS)

    Marmon, Jason; Rai, Satish; Wang, Kai; Zhou, Weilie; Zhang, Yong

    2016-03-01

    Modern electronics are developing electronic-optical integrated circuits, while their electronic backbone, e.g. field-effect transistors (FETs), remains the same. However, further FET downscaling is facing physical and technical challenges. A light-effect transistor (LET) offers electronic-optical hybridization at the component level, which can continue Moore's law into the quantum region without requiring a FET's fabrication complexity, e.g. physical gates and doping, by employing optical gating and photoconductivity. Multiple independent gates are therefore readily realized to achieve unique functionalities without increasing chip space. Here we report LET device characteristics and novel digital and analog applications, such as optical logic gates and optical amplification. Prototype CdSe-nanowire-based LETs show output and transfer characteristics resembling advanced FETs, e.g. on/off ratios up to ~1.0×10^6 with a source-drain voltage of ~1.43 V, gate power of ~260 nW, and subthreshold swing of ~0.3 nW/decade (excluding losses). Our work offers new electronic-optical integration strategies and electronic and optical computing approaches.

  12. Linked data and provenance in biological data webs.

    PubMed

    Zhao, Jun; Miles, Alistair; Klyne, Graham; Shotton, David

    2009-03-01

    The Web is now being used as a platform for publishing and linking life science data. The Web's linking architecture can be exploited to join heterogeneous data from multiple sources. However, as data are frequently being updated in a decentralized environment, provenance information becomes critical to providing reliable and trustworthy services to scientists. This article presents design patterns for representing and querying provenance information relating to mapping links between heterogeneous data from sources in the domain of functional genomics. We illustrate the use of named resource description framework (RDF) graphs at different levels of granularity to make provenance assertions about linked data, and demonstrate that these assertions are sufficient to support requirements including data currency, integrity, evidential support and historical queries.
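
    The named-graph pattern the article describes is easy to sketch with rdflib (the URIs below are invented placeholders, not the authors' schema): the mapping link lives inside a named graph, and provenance assertions about that graph live in the default graph, where they can support currency and historical queries.

      from rdflib import Dataset, Literal, Namespace, URIRef
      from rdflib.namespace import DCTERMS, XSD

      EX = Namespace("http://example.org/")          # invented namespace
      ds = Dataset()

      # Put the mapping link itself inside a named graph...
      g_uri = URIRef("http://example.org/graphs/mapping-2009-01")
      g = ds.graph(g_uri)
      g.add((EX.geneA, EX.mapsTo, EX.probeSet123))

      # ...and attach provenance about that graph in the default graph.
      ds.add((g_uri, DCTERMS.creator, EX.curatorJane))
      ds.add((g_uri, DCTERMS.created,
              Literal("2009-01-15", datatype=XSD.date)))

      # Queries can now filter mapping links by their graph's metadata,
      # e.g. keep only links created after a given date.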

  13. FRED 2: an immunoinformatics framework for Python

    PubMed Central

    Schubert, Benjamin; Walzer, Mathias; Brachvogel, Hans-Philipp; Szolek, András; Mohr, Christopher; Kohlbacher, Oliver

    2016-01-01

    Summary: Immunoinformatics approaches are widely used in a variety of applications from basic immunological to applied biomedical research. Complex data integration is inevitable in immunological research and usually requires comprehensive pipelines including multiple tools and data sources. Non-standard input and output formats of immunoinformatics tools make the development of such applications difficult. Here we present FRED 2, an open-source immunoinformatics framework offering easy and unified access to methods for epitope prediction and other immunoinformatics applications. FRED 2 is implemented in Python and designed to be extendable and flexible to allow rapid prototyping of complex applications. Availability and implementation: FRED 2 is available at http://fred-2.github.io. Contact: schubert@informatik.uni-tuebingen.de. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153717

  14. FRED 2: an immunoinformatics framework for Python.

    PubMed

    Schubert, Benjamin; Walzer, Mathias; Brachvogel, Hans-Philipp; Szolek, András; Mohr, Christopher; Kohlbacher, Oliver

    2016-07-01

    Immunoinformatics approaches are widely used in a variety of applications from basic immunological to applied biomedical research. Complex data integration is inevitable in immunological research and usually requires comprehensive pipelines including multiple tools and data sources. Non-standard input and output formats of immunoinformatics tools make the development of such applications difficult. Here we present FRED 2, an open-source immunoinformatics framework offering easy and unified access to methods for epitope prediction and other immunoinformatics applications. FRED 2 is implemented in Python and designed to be extendable and flexible to allow rapid prototyping of complex applications. FRED 2 is available at http://fred-2.github.io. Contact: schubert@informatik.uni-tuebingen.de. Supplementary data are available at Bioinformatics online.

  15. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sig Drellack, Lance Prothro

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.
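
    The convolution-integral device mentioned above is plain superposition: convolve the source release history with a unit-response breakthrough curve obtained from particle tracking. A schematic NumPy illustration with invented curves (not the UGTA models):

      import numpy as np

      dt = 1.0                                   # years per step
      t = np.arange(0, 200, dt)

      # Invented source release history (mass released per year).
      s = np.where(t < 10, 1.0, 0.0)

      # Invented unit-response breakthrough curve from particle tracking:
      # concentration at a receptor for a unit instantaneous release.
      h = (t / 40.0) * np.exp(-t / 40.0)

      # Superposition: c(t) = integral of s(tau) * h(t - tau) dtau.
      c = np.convolve(s, h)[: len(t)] * dt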

  16. People-Technology-Ecosystem Integration: A Framework to Ensure Regional Interoperability for Safety, Sustainability, and Resilience of Interdependent Energy, Water, and Seafood Sources in the (Persian) Gulf.

    PubMed

    Meshkati, Najmedin; Tabibzadeh, Maryam; Farshid, Ali; Rahimi, Mansour; Alhanaee, Ghena

    2016-02-01

    The aim of this study is to identify the interdependencies of human and organizational subsystems of multiple complex, safety-sensitive technological systems and their interoperability in the context of sustainability and resilience of an ecosystem. Recent technological disasters with severe environmental impact are attributed to human factors and safety culture causes. One of the most populous and environmentally sensitive regions in the world, the (Persian) Gulf, is at the confluence of two exponentially growing industries--nuclear power and seawater desalination plants--that are changing its land- and seascape. Building upon Rasmussen's model, a macrosystem integrative framework, based on the broader context of human factors, is developed, which can be considered in this context as a "meta-ergonomics" paradigm, for the analysis of interactions, design of interoperability, and integration of decisions of major actors whose actions can affect safety and sustainability of the focused industries during routine and nonroutine (emergency) operations. Based on the emerging realities in the Gulf region, it is concluded that without such a systematic approach toward addressing the interdependencies of water and energy sources, sustainability will be only a short-lived dream and prosperity will be a disappearing mirage for millions of people in the region. This multilayered framework for the integration of people, technology, and ecosystem--which has been applied to the (Persian) Gulf--offers a viable and vital approach to the design and operation of large-scale complex systems wherever the nexus of water, energy, and food sources is concerned, such as the Black Sea.

  17. Near-field multiple traps of paraxial acoustic vortices with strengthened gradient force generated by sector transducer array

    NASA Astrophysics Data System (ADS)

    Wang, Qingdong; Li, Yuzhi; Ma, Qingyu; Guo, Gepu; Tu, Juan; Zhang, Dong

    2018-01-01

    In order to improve the capability of particle trapping close to the source plane, theoretical and experimental studies on near-field multiple traps of paraxial acoustic vortices (AVs) with a strengthened acoustic gradient force (AGF) generated by a sector transducer array were conducted. By applying the integration of point source radiation, numerical simulations for the acoustic fields generated by the sector transducer array were conducted and compared with those produced by the circular transducer array. It was proved that strengthened AGFs of near-field multiple AVs with higher peak pressures and smaller vortex radii could be produced by the sector transducer array with a small topological charge. The axial distributions of the equivalent potential gradient indicated that the AGFs of paraxial AVs in the near field were much higher than those in the far field, and the distances at the near-field vortex antinodes were also proved to be the ideal trapping positions with relatively higher AGFs. With the established 8-channel AV generation system, theoretical studies were also verified by the experimental measurements of pressure and phase for AVs with various topological charges. The formation of near-field multiple paraxial AVs was verified by the cross-sectional circular pressure distributions with perfect phase spirals around central pressure nulls, and was also proved by the vortex nodes and antinodes along the center axis. The favorable results demonstrated the feasibility of generating near-field multiple traps of paraxial AVs with strengthened AGF using the sector transducer array, and suggested the potential applications of close-range particle trapping in biomedical engineering.

  18. Aspiring to Spectral Ignorance in Earth Observation

    NASA Astrophysics Data System (ADS)

    Oliver, S. A.

    2016-12-01

    Enabling robust, defensible and integrated decision making in the era of big Earth data requires the fusion of data from multiple and diverse sensor platforms and networks. While the application of standardised global grid systems provides a common spatial analytics framework that facilitates the computationally efficient and statistically valid integration and analysis of these various data sources across multiple scales, there remains the challenge of sensor equivalency, particularly when combining data from different earth observation satellite sensors (e.g., combining Landsat and Sentinel-2 observations). To realise the vision of a sensor-ignorant analytics platform for earth observation, we require automated spectral matching across the available sensors. Ultimately, the aim is to remove the requirement for the user to possess any sensor knowledge in order to undertake analysis. This paper introduces the concept of spectral equivalence and proposes a methodology through which equivalent bands may be sourced from a set of potential target sensors by applying equivalence metrics and thresholds. A number of parameters can be used to determine whether a pair of spectra are equivalent for the purposes of analysis. A baseline set of thresholds for these parameters is proposed, together with a systematic way of applying them so that spectral bands can be related across numerous different sensors. The base unit for comparison in this work is the relative spectral response. From this input, a user can determine what constitutes equivalence based on their own conceptualisation of equivalence.
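
    One way to make the equivalence test concrete is sketched below; the two summary metrics and their thresholds are invented placeholders, not the paper's proposed baseline set. Both relative spectral responses are resampled onto a common wavelength grid and compared by band-centroid shift and area overlap.

      import numpy as np

      def equivalent(wl_a, rsr_a, wl_b, rsr_b,
                     max_centroid_shift=5.0, min_overlap=0.8):
          """Crude spectral-equivalence test on two relative spectral responses.
          Thresholds (nm, fraction) are illustrative placeholders."""
          grid = np.linspace(min(wl_a.min(), wl_b.min()),
                             max(wl_a.max(), wl_b.max()), 500)
          a = np.interp(grid, wl_a, rsr_a, left=0.0, right=0.0)
          b = np.interp(grid, wl_b, rsr_b, left=0.0, right=0.0)
          a, b = a / np.trapz(a, grid), b / np.trapz(b, grid)   # unit area
          centroid_shift = abs(np.trapz(grid * a, grid) - np.trapz(grid * b, grid))
          overlap = np.trapz(np.minimum(a, b), grid)            # 1.0 = identical
          return centroid_shift <= max_centroid_shift and overlap >= min_overlap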

  19. The Role of Integrated Modelling and Assessment for Decision-Making: Lessons from Water Allocation Issues in Australia

    NASA Astrophysics Data System (ADS)

    Jakeman, A. J.; Guillaume, J. H. A.; El Sawah, S.; Hamilton, S.

    2014-12-01

    Integrated modelling and assessment (IMA) is best regarded as a process that can support environmental decision-making when issues are strongly contested and uncertainties pervasive. To be most useful, the process must be multi-dimensional and phased. Principally, it must be tailored to the problem context to encompass diverse issues of concern, management settings and stakeholders. This in turn requires the integration of multiple processes and components of natural and human systems and their corresponding spatial and temporal scales. Modellers therefore need to be able to integrate multiple disciplines, methods, models, tools and data, and many sources and types of uncertainty. These dimensions are incorporated into iteration between the various phases of the IMA process, including scoping, problem framing and formulation, assessing options and communicating findings. Two case studies in Australia are employed to share the lessons of how integration can be achieved in these IMA phases using a mix of stakeholder participation processes and modelling tools. One case study aims to improve the relevance of modelling by incorporating stakeholders' views of irrigated viticulture and water management decision making. It used a novel methodology with the acronym ICTAM, consisting of Interviews to elicit mental models, Cognitive maps to represent and analyse individual and group mental models, Time-sequence diagrams to chronologically structure the decision making process, an All-encompassing conceptual model, and computational Models of stakeholder decision making. The second case uses a hydro-economic river network model to examine basin-wide impacts of water allocation cuts and adoption of farm innovations. The knowledge exchange approach used in each case was designed to integrate data and knowledge bearing in mind the contextual dimensions of the problem at hand, and the specific contributions that environmental modelling was thought to be able to make.

  20. Multiple Household Water Sources and Their Use in Remote Communities With Evidence From Pacific Island Countries

    NASA Astrophysics Data System (ADS)

    Elliott, Mark; MacDonald, Morgan C.; Chan, Terence; Kearton, Annika; Shields, Katherine F.; Bartram, Jamie K.; Hadwen, Wade L.

    2017-11-01

    Global water research and monitoring typically focus on the household's "main source of drinking-water." Use of multiple water sources to meet daily household needs has been noted in many developing countries but rarely quantified or reported in detail. We gathered self-reported data using a cross-sectional survey of 405 households in eight communities of the Republic of the Marshall Islands (RMI) and five Solomon Islands (SI) communities. Over 90% of households used multiple sources, with differences in sources and uses between wet and dry seasons. Most RMI households had large rainwater tanks and rationed stored rainwater for drinking throughout the dry season, whereas most SI households collected rainwater in small pots, precluding storage across seasons. Use of a source for cooking was strongly positively correlated with use for drinking, whereas use for cooking was negatively correlated or uncorrelated with nonconsumptive uses (e.g., bathing). Dry season water uses implied greater risk of water-borne disease, with fewer (frequently zero) handwashing sources reported and more unimproved sources consumed. Use of multiple sources is fundamental to household water management and feasible to monitor using electronic survey tools. We contend that recognizing multiple water sources can greatly improve understanding of household-level and community-level climate change resilience, that use of multiple sources confounds health impact studies of water interventions, and that incorporating multiple sources into water supply interventions can yield heretofore-unrealized benefits. We propose that failure to consider multiple sources undermines the design and effectiveness of global water monitoring, data interpretation, implementation, policy, and research.

  1. Dealing with the Data Deluge: Handling the Multitude Of Chemical Biology Data Sources

    PubMed Central

    Guha, Rajarshi; Nguyen, Dac-Trung; Southall, Noel; Jadhav, Ajit

    2012-01-01

    Over the last 20 years, there has been an explosion in the amount and type of biological and chemical data that has been made publicly available in a variety of online databases. While this means that vast amounts of information can be found online, there is no guarantee that it can be found easily (or at all). A scientist searching for a specific piece of information is faced with a daunting task - many databases have overlapping content, use their own identifiers and, in some cases, have arcane and unintuitive user interfaces. In this overview, a variety of well-known data sources for chemical and biological information are highlighted, focusing on those most useful for chemical biology research. The issue of using multiple data sources together and the associated problems such as identifier disambiguation are highlighted. A brief discussion is then provided on Tripod, a recently developed platform that supports the integration of arbitrary data sources, providing users a simple interface to search across a federated collection of resources. PMID:26609498

  2. Adaptive sampling of information in perceptual decision-making.

    PubMed

    Cassey, Thomas C; Evens, David R; Bogacz, Rafal; Marshall, James A R; Ludwig, Casimir J H

    2013-01-01

    In many perceptual and cognitive decision-making problems, humans sample multiple noisy information sources serially, and integrate the sampled information to make an overall decision. We derive the optimal decision procedure for two-alternative choice tasks in which the different options are sampled one at a time, sources vary in the quality of the information they provide, and the available time is fixed. To maximize accuracy, the optimal observer allocates time to sampling different information sources in proportion to their noise levels. We tested human observers in a corresponding perceptual decision-making task. Observers compared the direction of two random dot motion patterns that were triggered only when fixated. Observers allocated more time to the noisier pattern, in a manner that correlated with their sensory uncertainty about the direction of the patterns. There were several differences between the optimal observer predictions and human behaviour. These differences point to a number of other factors, beyond the quality of the currently available sources of information, that influence the sampling strategy.
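
    The allocation rule can be checked numerically in a few lines: if sampling source i for time t_i yields an estimate with variance sigma_i^2/t_i and the total time T is fixed, a grid search recovers the optimum t_i proportional to sigma_i (a toy NumPy check of the stated rule, not the authors' derivation code):

      import numpy as np

      sigma1, sigma2, T = 1.0, 2.0, 10.0
      t1 = np.linspace(0.01, T - 0.01, 9999)
      # Variance of the estimated difference after sampling each source for t_i.
      total_var = sigma1**2 / t1 + sigma2**2 / (T - t1)
      best = t1[np.argmin(total_var)]
      # The optimum splits time in proportion to the noise standard deviations:
      print(best, T * sigma1 / (sigma1 + sigma2))   # both ~3.33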

  3. ImTK: an open source multi-center information management toolkit

    NASA Astrophysics Data System (ADS)

    Alaoui, Adil; Ingeholm, Mary Lou; Padh, Shilpa; Dorobantu, Mihai; Desai, Mihir; Cleary, Kevin; Mun, Seong K.

    2008-03-01

    The Information Management Toolkit (ImTK) Consortium is an open source initiative to develop robust, freely available tools related to the information management needs of basic, clinical, and translational research. An open source framework and agile programming methodology can enable distributed software development, while an open architecture will encourage interoperability across different environments. The ISIS Center has conceptualized a prototype data sharing network that simulates a multi-center environment based on a federated data access model. This model includes the development of software tools to enable efficient exchange, sharing, management, and analysis of multimedia medical information such as clinical information, images, and bioinformatics data from multiple data sources. The envisioned ImTK data environment will include an open architecture and data model implementation that complies with existing standards such as Digital Imaging and Communications in Medicine (DICOM), Health Level 7 (HL7), and the technical framework and workflow defined by the Integrating the Healthcare Enterprise (IHE) Information Technology Infrastructure initiative, mainly the Cross Enterprise Document Sharing (XDS) specifications.

  4. Development open source microcontroller based temperature data logger

    NASA Astrophysics Data System (ADS)

    Abdullah, M. H.; Che Ghani, S. A.; Zaulkafilai, Z.; Tajuddin, S. N.

    2017-10-01

    This article discusses the development stages in designing, prototyping, testing and deploying a portable open source microcontroller based temperature data logger for use in a rough industrial environment. The 5 V-powered prototype data logger is equipped with an open source Arduino microcontroller integrating multiple thermocouple sensors with their modules, secure digital (SD) card storage, a liquid crystal display (LCD), a real-time clock and an electronic enclosure made of acrylic. The logger firmware is programmed so that eight readings from the thermocouples are acquired within a 3 s interval and displayed on the LCD simultaneously. The recorded temperature readings at four different points on both hydrodistillation units show similar profile patterns, and the highest yield of extracted oil, 0.004%, was achieved on hydrodistillation unit 2. From the obtained results, this study achieved the objective of developing an inexpensive, portable and robust eight-channel temperature measuring module with capabilities to monitor and store real-time data.

  5. EEG and MEG data analysis in SPM8.

    PubMed

    Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl

    2011-01-01

    SPM is a free and open source software package written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools.

  6. KiT: a MATLAB package for kinetochore tracking.

    PubMed

    Armond, Jonathan W; Vladimirou, Elina; McAinsh, Andrew D; Burroughs, Nigel J

    2016-06-15

    During mitosis, chromosomes are attached to the mitotic spindle via large protein complexes called kinetochores. The motion of kinetochores throughout mitosis is intricate, and automated quantitative tracking of their motion has already revealed many surprising facets of their behaviour. Here, we present 'KiT' (Kinetochore Tracking) - an easy-to-use, open-source software package for tracking kinetochores from live-cell fluorescent movies. KiT supports 2D, 3D and multi-colour movies, quantification of fluorescence, integrated deconvolution, parallel execution and multiple algorithms for particle localization. KiT is free, open-source software implemented in MATLAB and runs on all MATLAB supported platforms. KiT can be downloaded as a package from http://www.mechanochemistry.org/mcainsh/software.php. The source repository is available at https://bitbucket.org/jarmond/kit and is under continuing development. Supplementary data are available at Bioinformatics online. Contact: jonathan.armond@warwick.ac.uk. © The Author 2016. Published by Oxford University Press.

  7. Open source tools for the information theoretic analysis of neural data.

    PubMed

    Ince, Robin A A; Mazzoni, Alberto; Petersen, Rasmus S; Panzeri, Stefano

    2010-01-01

    The recent and rapid development of open source software tools for the analysis of neurophysiological datasets consisting of simultaneous multiple recordings of spikes, field potentials and other neural signals holds the promise of a significant advance in the standardization, transparency, quality, reproducibility and variety of techniques used to analyze neurophysiological data, and in the integration of information obtained at different spatial and temporal scales. In this review we focus on recent advances in open source toolboxes for the information theoretic analysis of neural responses. We also present examples of their use to investigate the role of spike timing precision, correlations across neurons, and field potential fluctuations in the encoding of sensory information. These information toolboxes, available both in MATLAB and Python programming environments, hold the potential to enlarge the domain of application of information theory to neuroscience and to lead to new discoveries about how neurons encode and transmit information.
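
    As a flavor of what such toolboxes compute, the sketch below gives a generic plug-in estimate of the mutual information between a discrete stimulus and binned spike counts. It is not code from any of the reviewed packages, and plug-in estimates are biased for small samples - correcting that bias is one of the main services the toolboxes provide:

        # Plug-in mutual information I(S;R) in bits from paired discrete samples.
        import numpy as np

        def mutual_information(stim, resp):
            s_vals, r_vals = np.unique(stim), np.unique(resp)
            joint = np.zeros((s_vals.size, r_vals.size))
            for s, r in zip(stim, resp):
                joint[np.searchsorted(s_vals, s), np.searchsorted(r_vals, r)] += 1
            joint /= joint.sum()
            ps, pr = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
            nz = joint > 0
            return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

        rng = np.random.default_rng(0)
        stim = rng.integers(0, 2, 2000)      # two stimulus conditions
        resp = rng.poisson(2 + 3 * stim)     # firing rate depends on the stimulus
        print(f"I(S;R) ~ {mutual_information(stim, resp):.3f} bits")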

  8. Meteorological and air pollution modeling for an urban airport

    NASA Technical Reports Server (NTRS)

    Swan, P. R.; Lee, I. Y.

    1980-01-01

    Results are presented of numerical experiments modeling meteorology, multiple pollutant sources, and nonlinear photochemical reactions for the case of an airport in a large urban area with complex terrain. A planetary boundary-layer model which predicts the mixing depth and generates wind, moisture, and temperature fields was used; it utilizes only surface and synoptic boundary conditions as input data. A version of the Hecht-Seinfeld-Dodge chemical kinetics model is integrated with a new, rapid numerical technique; both the San Francisco Bay Area Air Quality Management District source inventory and the San Jose Airport aircraft inventory are utilized. The air quality model results are presented in contour plots; the combined results illustrate that the highly nonlinear interactions which are present require that the chemistry and meteorology be considered simultaneously to make a valid assessment of the effects of individual sources on regional air quality.

  9. EEG and MEG Data Analysis in SPM8

    PubMed Central

    Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl

    2011-01-01

    SPM is a free and open source software package written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools. PMID:21437221

  10. Integrating multiple data sources in species distribution modeling: A framework for data fusion

    USGS Publications Warehouse

    Pacifici, Krishna; Reich, Brian J.; Miller, David A.W.; Gardner, Beth; Stauffer, Glenn E.; Singh, Susheela; McKerrow, Alexa; Collazo, Jaime A.

    2017-01-01

    The last decade has seen a dramatic increase in the use of species distribution models (SDMs) to characterize patterns of species’ occurrence and abundance. Efforts to parameterize SDMs often create a tension between the quality and quantity of data available to fit models. Estimation methods that integrate both standardized and non-standardized data types offer a potential solution to the tradeoff between data quality and quantity. Recently several authors have developed approaches for jointly modeling two sources of data (one of high quality and one of lesser quality). We extend their work by allowing for explicit spatial autocorrelation in occurrence and detection error using a Multivariate Conditional Autoregressive (MVCAR) model, and we develop three models that share information in a less direct manner, resulting in more robust performance when the auxiliary data are of lesser quality. We describe these three new approaches (“Shared,” “Correlation,” “Covariates”) for combining data sources and show their use in a case study of the Brown-headed Nuthatch in the Southeastern U.S. and through simulations. All three approaches that used the second data source improved out-of-sample predictions relative to a single data source (“Single”). When information in the second data source is of high quality, the Shared model performs best, but the Correlation and Covariates models also perform well. When the second data source is of lesser quality, the Correlation and Covariates models performed better, suggesting they are robust alternatives when little is known about auxiliary data collected opportunistically or through citizen scientists. Methods that allow both data types to be used will maximize the useful information available for estimating species distributions.
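
    The joint-likelihood idea behind such fusion can be illustrated with a toy model (a simplified analogue of the shared-parameter approach, not the MVCAR model itself): a high-quality count survey and an opportunistic presence/absence dataset share one log-linear intensity, so both likelihood terms inform the same coefficients.

        # Toy data-fusion sketch: one intensity surface, two likelihood terms.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        x1, x2 = rng.normal(size=200), rng.normal(size=500)   # covariate at two site sets
        lam = lambda b, x: np.exp(b[0] + b[1] * x)            # shared log-linear intensity
        counts = rng.poisson(lam([0.2, 0.8], x1))             # high-quality survey counts
        pres = rng.binomial(1, 1 - np.exp(-0.5 * lam([0.2, 0.8], x2)))  # opportunistic

        def nll(b):
            l1, l2 = lam(b, x1), lam(b, x2)
            ll_counts = np.sum(counts * np.log(l1) - l1)      # Poisson term (no constants)
            p = 1 - np.exp(-0.5 * l2)                         # detection of presence
            ll_pres = np.sum(pres * np.log(p) + (1 - pres) * np.log(1 - p))
            return -(ll_counts + ll_pres)

        print(minimize(nll, x0=[0.0, 0.0]).x)                 # recovers roughly [0.2, 0.8]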

  11. Inverse modelling of fluvial sediment connectivity identifies characteristics and spatial distribution of sediment sources in a large river network.

    NASA Astrophysics Data System (ADS)

    Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.

    2016-12-01

    Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on the spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Yet, the challenges include a) identifying the magnitude and spatial distribution of transport capacity for each of multiple grain sizes being simultaneously transported, and b) estimating source grain sizes and supply rates, both at network scales. Herein, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km2) of the Mekong. To this end, we apply the CASCADE modeling framework (Schmitt et al., 2016). CASCADE calculates transport capacities and sediment fluxes for multiple grain sizes on the network scale based on remotely sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to the sparse available sedimentary records. Only 1% of initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach. Such an approach could be coupled to more detailed models of hillslope processes in the future to derive integrated models of hillslope production and fluvial transport, which would be particularly useful for identifying sediment provenance in poorly monitored river basins.
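
    The inverse Monte Carlo step amounts to rejection sampling: draw random source grain sizes, run the forward flux calculation, and keep only realizations consistent with the observed record. The sketch below uses a made-up forward model and made-up numbers, not CASCADE itself:

        # Schematic rejection sampling over random source grain sizes.
        import numpy as np

        rng = np.random.default_rng(2)
        observed_flux, tolerance = 120.0, 5.0     # hypothetical downstream record

        def forward_model(d50):
            # placeholder transport law: finer sources yield larger supply
            return np.sum(200.0 / d50)

        accepted = []
        for _ in range(7500):                     # 7500 initializations, as in the paper
            d50 = rng.uniform(1.0, 50.0, size=10) # random grain sizes for 10 sources [mm]
            if abs(forward_model(d50) - observed_flux) < tolerance:
                accepted.append(d50)

        print(f"accepted {len(accepted)} of 7500 ({100 * len(accepted) / 7500:.1f}%)")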

  12. Inquiry-based Instruction with Archived, Online Data: An Intervention Study with Preservice Teachers

    NASA Astrophysics Data System (ADS)

    Ucar, Sedat; Trundle, Kathy Cabe; Krissek, Lawrence

    2011-03-01

    This mixed methods study described preservice teachers' conceptions of tides and explored the efficacy of integrating online data into inquiry-based instruction. Data sources included a multiple-choice assessment and in-depth interviews. A total of 79 participants in secondary, middle, and early childhood teacher education programs completed the multiple-choice assessment of their baseline knowledge of tides-related concepts. A sub-group of 29 participants also was interviewed to explore their understanding of tides in more detail before instruction. Eighteen of those 29 teachers participated in the instruction, were interviewed again after the instruction, and completed the multiple-choice assessment as a posttest. The interview data sets were analyzed via a constant comparative method in order to produce profiles of each participant's pre- and post-instruction conceptual understandings of tides. Additional quantitative analysis consisted of a paired-sample t-test, which investigated the changes in scores before and after the instructional intervention. Before instruction, all participants held alternative conceptions or alternative fragments as their conceptual understandings of tides. After completing the inquiry-based instruction that integrated online tidal data, participants were more likely to hold a scientific conceptual understanding. After instruction, some preservice teachers continued to hold the conception that the rotation of the moon around the Earth during one 24-hour period causes the tides to move with the moon. The quantitative results, however, indicated that pre- to post-instruction gains were significant. The findings of this study provide evidence that integrating Web-based archived data into inquiry-based instruction can be used to effectively promote conceptual change among preservice teachers.

  13. Tracking Vessels to Illegal Pollutant Discharges Using Multisource Vessel Information

    NASA Astrophysics Data System (ADS)

    Busler, J.; Wehn, H.; Woodhouse, L.

    2015-04-01

    Illegal discharge of bilge waters is a significant source of oil and other environmental pollutants in Canadian and international waters. Imaging satellites are commonly used to monitor large areas to detect oily discharges from vessels, off-shore platforms and other sources. While remotely sensed imagery provides a snapshot useful for detecting a spill or the presence of vessels in the vicinity, it is difficult to directly associate a vessel with an observed spill unless the vessel is observed while the discharge is occurring. The situation becomes more challenging with increased vessel traffic, as multiple vessels may be associated with a spill event. By combining multiple sources of vessel location data, such as the Automatic Identification System (AIS), Long Range Identification and Tracking (LRIT) and SAR-based ship detection, with spill detections and drift models, we have created a system that associates detected spill events with vessels in the area using a probabilistic model that intersects vessel tracks and spill drift trajectories in both time and space. Working with the Canadian Space Agency and the Canadian Ice Service's Integrated Satellite Tracking of Pollution (ISTOP) program, we use spills observed in Canadian waters to demonstrate the investigative value of augmenting spill detections with temporally sequenced vessel and spill tracking information.
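
    The core of such an association model can be sketched as a space-time proximity score (a simplified stand-in for the probabilistic model described): back-advect the spill to candidate release times using a drift velocity and score each vessel by how closely its track approaches the drifted origin. All numbers below are fabricated for illustration:

        # Score vessels against a back-drifted spill origin in space and time.
        import numpy as np

        def drift_origin(spill_xy, t_obs, t, current):
            # back-advect the observed spill position to candidate release time t
            return spill_xy - current * (t_obs - t)

        def association_score(track, spill_xy, t_obs, current, sigma=2.0):
            # max over track points of a Gaussian proximity score in [0, 1]
            scores = []
            for t, xy in track:                   # track: list of (time, position)
                d = np.linalg.norm(xy - drift_origin(spill_xy, t_obs, t, current))
                scores.append(np.exp(-0.5 * (d / sigma) ** 2))
            return max(scores)

        current = np.array([0.5, 0.1])            # assumed drift velocity [km/h]
        spill, t_obs = np.array([30.0, 12.0]), 24.0
        track_a = [(t, np.array([2.0 * t, 0.9 * t])) for t in range(0, 24, 2)]
        track_b = [(t, np.array([50.0 - t, 5.0])) for t in range(0, 24, 2)]
        print(association_score(track_a, spill, t_obs, current),   # passes the origin
              association_score(track_b, spill, t_obs, current))   # stays far away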

  14. Emotional contagion and burnout among nurses and doctors: Do joy and anger from different sources of stakeholders matter?

    PubMed

    Petitta, Laura; Jiang, Lixin; Härtel, Charmine E J

    2017-10-01

    The present study adds novel knowledge to the literature on emotional contagion (EC), discrete emotions, job burnout, and the management of healthcare professionals by simultaneously considering EC as both a job demand and a job resource with multiple social pathways. Integrating EC into the job demands-resources model, we develop and test a conceptual model wherein multiple stakeholder sources of emotional exchanges (i.e., leaders, colleagues, patients) play a differential role in predicting caregivers' absorption of positive (i.e., joy) and negative (i.e., anger) emotions, and in turn, burnout. We tested this nomological network using structural equation modeling and invariance analyses on a sample of 252 nurses and 102 doctors from diverse healthcare wards in three Italian hospitals. Our findings show that not all emotional exchange sources contribute to the EC experience or likelihood of burnout. Specifically, we found that doctors absorbed joy and anger from their colleagues but not from their leaders or patients. In contrast, nurses absorbed joy and anger from leaders, colleagues, and patients. Surprisingly, we found that joy-absorbed and anger-absorbed were related to doctors' exhaustion and cynicism, but only to nurses' cynicism. We conclude with suggestions for advancing research and practice in the management of emotions for preventing burnout. Copyright © 2016 John Wiley & Sons, Ltd.

  15. A hydrologic-economic modeling approach for analysis of urban water supply dynamics in Chennai, India

    NASA Astrophysics Data System (ADS)

    Srinivasan, Veena; Gorelick, Steven M.; Goulder, Lawrence

    2010-07-01

    In this paper, we discuss a challenging water resources problem in a developing world city, Chennai, India. The goal is to reconstruct past system behavior and diagnose the causes of a major water crisis. In order to do this, we develop a hydrologic-engineering-economic model to address the complexity of urban water supply arising from consumers' dependence on multiple interconnected sources of water. We integrate different components of the urban water system: water flowing into the reservoir system; diversion and distribution by the public water utility; groundwater flow in the aquifer beneath the city; supply, demand, and prices in the informal tanker-truck-based water market; and consumer behavior. Both the economic and physical impacts of consumers' dependence on multiple sources of water are quantified. The model is calibrated over the period 2002-2006 using a range of hydrologic and socio-economic data. The model's results highlight the inadequacy of the reservoir system and the buffering role played by the urban aquifer and consumers' coping investments during multiyear droughts.

  16. Overview of the NASA Wallops Flight Facility Mobile Range Control System

    NASA Technical Reports Server (NTRS)

    Davis, Rodney A.; Semancik, Susan K.; Smith, Donna C.; Stancil, Robert K.

    1999-01-01

    The NASA GSFC's Wallops Flight Facility (WFF) Mobile Range Control System (MRCS) is based on the functionality of the WFF Range Control Center at Wallops Island, Virginia. The MRCS provides real time instantaneous impact predictions, real time flight performance data, and other critical information needed by mission and range safety personnel in support of range operations at remote launch sites. The MRCS integrates a PC telemetry processing system (TELPro), a PC radar processing system (PCDQS), multiple Silicon Graphics display workstations (IRIS), and communication links within a mobile van for worldwide support of orbital, suborbital, and aircraft missions. This paper describes the MRCS configuration; the TELPro's capability to provide single/dual telemetry tracking and vehicle state data processing; the PCDQS' capability to provide real time positional data and instantaneous impact prediction for up to 8 data sources; and the IRIS' user interface for setup/display options. With portability, PC-based data processing, high resolution graphics, and flexible multiple source support, the MRCS system is proving to be responsive to the ever-changing needs of a variety of increasingly complex missions.

  17. Regression Models for the Analysis of Longitudinal Gaussian Data from Multiple Sources

    PubMed Central

    O’Brien, Liam M.; Fitzmaurice, Garrett M.

    2006-01-01

    We present a regression model for the joint analysis of longitudinal multiple source Gaussian data. Longitudinal multiple source data arise when repeated measurements are taken from two or more sources, and each source provides a measure of the same underlying variable and on the same scale. This type of data generally produces a relatively large number of observations per subject; thus estimation of an unstructured covariance matrix may often be infeasible. We consider two methods by which parsimonious models for the covariance can be obtained for longitudinal multiple source data. The methods are illustrated with an example of multiple informant data arising from a longitudinal interventional trial in psychiatry. PMID:15726666
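
    One standard route to a parsimonious covariance in this setting - shown here as a generic illustration, not necessarily the authors' parameterization - is a Kronecker product of a small between-source matrix and a serial-correlation matrix over time, which replaces the 55 free parameters of an unstructured 10 x 10 matrix with a handful:

        # Kronecker-structured covariance for 2 sources x 5 occasions.
        import numpy as np

        n_times, rho_t, rho_s = 5, 0.6, 0.4
        t = np.arange(n_times)
        cov_time = rho_t ** np.abs(t[:, None] - t[None, :])   # AR(1) over occasions
        cov_source = np.array([[1.0, rho_s],                  # correlation between the
                               [rho_s, 1.0]])                 # two reporting sources
        cov_full = np.kron(cov_source, cov_time)              # 10 x 10 from 2 parameters
        print(cov_full.shape)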

  18. Interoperable Data Access Services for NOAA IOOS

    NASA Astrophysics Data System (ADS)

    de La Beaujardiere, J.

    2008-12-01

    The Integrated Ocean Observing System (IOOS) is intended to enhance our ability to collect, deliver, and use ocean information. The goal is to support research and decision-making by providing data on our open oceans, coastal waters, and Great Lakes in the formats, rates, and scales required by scientists, managers, businesses, governments, and the public. The US National Oceanic and Atmospheric Administration (NOAA) is the lead agency for IOOS. NOAA's IOOS office supports the development of regional coastal observing capability and promotes data management efforts to increase data accessibility. Geospatial web services have been established at NOAA data providers including the National Data Buoy Center (NDBC), the Center for Operational Oceanographic Products and Services (CO-OPS), and CoastWatch, and at regional data provider sites. Services established include Open-source Project for a Network Data Access Protocol (OpenDAP), Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), and OGC Web Coverage Service (WCS). These services provide integrated access to data holdings that have been aggregated at each center from multiple sources. We wish to collaborate with other groups to improve our service offerings to maximize interoperability and enhance cross-provider data integration, and to share common service components such as registries, catalogs, data conversion, and gateways. This paper will discuss the current status of NOAA's IOOS efforts and possible next steps.
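
    For a sense of how these services are consumed, an OPeNDAP endpoint can be opened remotely with a client such as xarray, with subsetting resolved server-side. The URL and variable name below are placeholders, not real NDBC or CO-OPS addresses:

        # Hypothetical OPeNDAP access sketch (placeholder URL and variable name).
        import xarray as xr

        url = "https://example.noaa.gov/thredds/dodsC/buoy/46042/met.nc"  # placeholder
        ds = xr.open_dataset(url)                  # lazy open; data pulled on demand
        sst = ds["sea_surface_temperature"].sel(time=slice("2008-06-01", "2008-06-30"))
        print(float(sst.mean()))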

  19. Integrating Mercury Science and Policy in the Marine Context: Challenges and Opportunities

    PubMed Central

    Lambert, Kathleen F.; Evers, David C.; Warner, Kimberly A.; King, Susannah L.; Selin, Noelle E.

    2014-01-01

    Mercury is a global pollutant and presents policy challenges at local, regional, and global scales. Mercury poses risks to the health of people, fish, and wildlife exposed to elevated levels of mercury, most commonly from the consumption of methylmercury in marine and estuarine fish. The patchwork of current mercury abatement efforts limits the effectiveness of national and multi-national policies. This paper provides an overview of the major policy challenges and opportunities related to mercury in coastal and marine environments, and highlights science and policy linkages of the past several decades. The U.S. policy examples explored here point to the need for a full life cycle approach to mercury policy with a focus on source reduction and increased attention to: (1) the transboundary movement of mercury in air, water, and biota; (2) the coordination of policy efforts across multiple environmental media; (3) the cross-cutting issues related to pollutant interactions, mitigation of legacy sources, and adaptation to elevated mercury via improved communication efforts; and (4) the integration of recent research on human and ecological health effects into benefits analyses for regulatory purposes. Stronger science and policy integration will benefit national and international efforts to prevent, control, and minimize exposure to methylmercury. PMID:22901766

  20. Multi-source remotely sensed data fusion for improving land cover classification

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Huang, Bo; Xu, Bing

    2017-02-01

    Although many advances have been made in past decades, land cover classification of fine-resolution remotely sensed (RS) data integrating multiple temporal, angular, and spectral features remains limited, and the contribution of different RS features to land cover classification accuracy remains uncertain. We proposed to improve land cover classification accuracy by integrating multi-source RS features through data fusion. We further investigated the effect of different RS features on classification performance. The results of fusing Landsat-8 Operational Land Imager (OLI) data with Moderate Resolution Imaging Spectroradiometer (MODIS), China Environment 1A series (HJ-1A), and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) digital elevation model (DEM) data showed that the fused data integrating temporal, spectral, angular, and topographic features achieved better land cover classification accuracy than the original RS data. Compared with the topographic feature, the temporal and angular features extracted from the fused data played more important roles in classification performance, especially those temporal features containing abundant vegetation growth information, which markedly increased the overall classification accuracy. In addition, the multispectral and hyperspectral fusion successfully discriminated detailed forest types. Our study provides a straightforward strategy for hierarchical land cover classification by making full use of available RS data. All of these methods and findings could be useful for land cover classification at both regional and global scales.
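
    The fusion-then-classify workflow can be illustrated generically (this is not the authors' pipeline; the features and labels below are synthetic): stack per-pixel features from several co-registered sources and train one supervised classifier on the combined vector.

        # Multi-source feature stacking with a random forest classifier.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        n = 5000
        spectral = rng.normal(size=(n, 6))        # e.g., OLI reflectance bands
        temporal = rng.normal(size=(n, 12))       # e.g., MODIS-derived time series
        topo = rng.normal(size=(n, 2))            # e.g., DEM elevation and slope
        X = np.hstack([spectral, temporal, topo]) # fused per-pixel feature vector
        y = (spectral.mean(1) + temporal.mean(1) > 0).astype(int)  # synthetic labels

        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
        print("accuracy:", clf.score(Xte, yte))
        print("importance by source:", clf.feature_importances_[:6].sum(),
              clf.feature_importances_[6:18].sum(), clf.feature_importances_[18:].sum())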

  1. World Spatiotemporal Analytics and Mapping Project (WSTAMP): Discovering, Exploring, and Mapping Spatiotemporal Patterns across the World's Largest Open Source Data Sets

    NASA Astrophysics Data System (ADS)

    Stewart, R.; Piburn, J.; Sorokine, A.; Myers, A.; Moehl, J.; White, D.

    2015-07-01

    The application of spatiotemporal (ST) analytics to integrated data from major sources such as the World Bank, United Nations, and dozens of others holds tremendous potential for shedding new light on the evolution of cultural, health, economic, and geopolitical landscapes on a global level. Realizing this potential first requires an ST data model that addresses challenges in properly merging data from multiple authors, with evolving ontological perspectives, semantic differences, and changing attributes, as well as content that is textual, numeric, categorical, and hierarchical. Equally challenging is the development of analytical and visualization approaches that provide a serious exploration of this integrated data while remaining accessible to practitioners with varied backgrounds. The WSTAMP project at Oak Ridge National Laboratory has yielded two major results in addressing these challenges: 1) development of the WSTAMP database, a significant advance in ST data modeling that integrates 10,000+ attributes covering over 200 nation states spanning over 50 years from over 30 major sources and 2) a novel online ST exploratory and analysis tool providing an array of modern statistical and visualization techniques for analyzing these data temporally, spatially, and spatiotemporally under a standard analytic workflow. We discuss the status of this work and report on major findings.

  2. Bayesian Travel Time Inversion adopting Gaussian Process Regression

    NASA Astrophysics Data System (ADS)

    Mauerberger, S.; Holschneider, M.

    2017-12-01

    A major application in seismology is the determination of seismic velocity models. Travel time measurements place an integral constraint on the velocity between source and receiver. We provide insight into travel time inversion from a correlation-based Bayesian point of view, adopting the concept of Gaussian process regression to estimate a velocity model. The non-linear travel time integral is approximated by a first-order Taylor expansion. A heuristic covariance describes correlations amongst observations and the a priori model. This approach enables us to assess a proxy of the Bayesian posterior distribution at ordinary computational cost; neither multi-dimensional numerical integration nor excessive sampling is necessary. Instead of stacking the data, we suggest progressively building the posterior distribution; incorporating only a single observation at a time compensates for the deficiencies of linearization. As a result, the most probable model is given by the posterior mean, whereas uncertainties are described by the posterior covariance. As a proof of concept, a synthetic, purely 1-D model is addressed: a single source accompanied by multiple receivers is considered on top of a model comprising a discontinuity. We consider travel times of both phases - direct and reflected wave - corrupted by noise. The regions left and right of the interface are assumed independent, with the squared exponential kernel serving as covariance.
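
    The sequential build-up of the posterior can be sketched in a much-simplified setting: instead of linearized travel time integrals, take noisy point observations of the unknown function on a grid and condition a Gaussian process with a squared exponential kernel on one observation at a time. The numbers are illustrative only:

        # Rank-one GP posterior updates, one observation at a time.
        import numpy as np

        def sq_exp(a, b, ell=0.3, var=1.0):
            return var * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

        x_grid = np.linspace(0.0, 1.0, 100)
        mean = np.zeros(x_grid.size)                # prior mean
        cov = sq_exp(x_grid, x_grid)                # prior covariance
        obs = [(0.2, 1.3), (0.5, 0.7), (0.8, 1.1)]  # (position, noisy value)
        noise = 0.05**2

        for x_obs, y_obs in obs:                    # incorporate one evidence at a time
            idx = np.argmin(np.abs(x_grid - x_obs)) # observation at nearest grid node
            gain = cov[:, idx] / (cov[idx, idx] + noise)
            mean = mean + gain * (y_obs - mean[idx])
            cov = cov - np.outer(gain, cov[idx, :])

        print(mean[::20])                           # posterior mean on a coarse grid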

  3. Integrating Multiple Intelligences in EFL/ESL Classrooms

    ERIC Educational Resources Information Center

    Bas, Gokhan

    2008-01-01

    This article deals with the integration of the theory of Multiple Intelligences in EFL/ESL classrooms. In this study, the theory of multiple intelligences is first presented briefly, and its integration into English classrooms is then discussed. Intelligence types in MI theory are reviewed along with some possible ways of applying these intelligence types…

  4. Issues and Methods Concerning the Evaluation of Hypersingular and Near-Hypersingular Integrals in BEM Formulations

    NASA Technical Reports Server (NTRS)

    Fink, P. W.; Khayat, M. A.; Wilton, D. R.

    2005-01-01

    It is known that higher order modeling of the sources and the geometry in Boundary Element Modeling (BEM) formulations is essential to highly efficient computational electromagnetics. However, in order to achieve the benefits of higher order basis and geometry modeling, the singular and near-singular terms arising in BEM formulations must be integrated accurately. In particular, the accurate integration of near-singular terms, which occur when observation points are near but not on source regions of the scattering object, has been considered one of the remaining limitations on the computational efficiency of integral equation methods. The method of singularity subtraction has been used extensively for the evaluation of singular and near-singular terms. Piecewise integration of the source terms in this manner, while manageable for bases of constant and linear orders, becomes unwieldy and prone to error for bases of higher order. Furthermore, we find that the singularity subtraction method is not conducive to object-oriented programming practices, particularly in the context of multiple operators. To extend the capabilities, accuracy, and maintainability of general-purpose codes, the subtraction method is being replaced in favor of purely numerical quadrature schemes. These schemes employ singularity cancellation methods in which a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. An example of the singularity cancellation approach is the Duffy method, which has two major drawbacks: 1) in the resulting integrand, it produces an angular variation about the singular point that becomes nearly singular for observation points close to an edge of the parent element, and 2) it appears not to work well when applied to nearly-singular integrals. Recently, the authors have introduced the transformation $u(x') = \sinh^{-1}\!\bigl(x'/\sqrt{y'^2 + z^2}\bigr)$ for integrating functions of the form $I = \int_D \Lambda(\mathbf{r}')\,\frac{e^{-jkR}}{4\pi R}\,dD$, where $\Lambda(\mathbf{r}')$ is a vector or scalar basis function and $R = \sqrt{x'^2 + y'^2 + z^2}$ is the distance between source and observation points. This scheme has all of the advantages of the Duffy method while avoiding the disadvantages listed above. In this presentation we will survey similar approaches for handling singular and near-singular terms for kernels with $1/R^2$ type behavior, addressing potential pitfalls and offering techniques to efficiently handle special cases.
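
    The cancellation effect is easy to verify on a model one-dimensional kernel: integrating $1/R$ with $R = \sqrt{x'^2 + z^2}$ over $[0, 1]$, the substitution $x' = z\sinh u$ turns the integrand into a constant, so the same quadrature rule that fails near the singularity becomes exact. The sketch below is a self-contained check of this behaviour, not the authors' code:

        # Near-singular 1/R integral: naive Gauss-Legendre vs sinh substitution.
        import numpy as np

        z = 1e-3                                    # observation point close to the panel
        exact = np.arcsinh(1.0 / z)                 # closed form of the integral

        xg, wg = np.polynomial.legendre.leggauss(20)
        xq = 0.5 * (xg + 1.0)                       # nodes mapped from [-1, 1] to [0, 1]
        naive = 0.5 * np.sum(wg / np.sqrt(xq**2 + z**2))

        # in u = asinh(x'/z): dx' = z*cosh(u) du cancels R = z*cosh(u) exactly
        u_max = np.arcsinh(1.0 / z)
        transformed = 0.5 * u_max * np.sum(wg)      # integrand is identically 1
        print(f"exact {exact:.6f}  naive {naive:.6f}  sinh-transformed {transformed:.6f}")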

  5. SDSS-IV MaNGA: the spectroscopic discovery of strongly lensed galaxies

    NASA Astrophysics Data System (ADS)

    Talbot, Michael S.; Brownstein, Joel R.; Bolton, Adam S.; Bundy, Kevin; Andrews, Brett H.; Cherinka, Brian; Collett, Thomas E.; More, Anupreeta; More, Surhud; Sonnenfeld, Alessandro; Vegetti, Simona; Wake, David A.; Weijmans, Anne-Marie; Westfall, Kyle B.

    2018-06-01

    We present a catalogue of 38 spectroscopically detected strong galaxy-galaxy gravitational lens candidates identified in the Sloan Digital Sky Survey IV (SDSS-IV). We were able to simulate narrow-band images for eight of them, demonstrating evidence of multiple images. Two of our systems are compound lens candidates, each with two background source-planes. One of these compound systems shows clear lensing features in the narrow-band image. Our sample is based on 2812 galaxies observed by the Mapping Nearby Galaxies at APO (MaNGA) integral field unit (IFU). This Spectroscopic Identification of Lensing Objects (SILO) survey extends the methodology of the Sloan Lens ACS Survey (SLACS) and BOSS Emission-Line Lens Survey (BELLS) to lower redshift and multiple IFU spectra. We searched ~1.5 million spectra, of which 3065 contained multiple high signal-to-noise ratio background emission-lines or a resolved [O II] doublet; these spectra are included in this catalogue. Upon manual inspection, we discovered regions with multiple spectra containing background emission-lines at the same redshift, providing evidence of a common source-plane geometry which was not possible in previous SLACS and BELLS discovery programs. We estimate more than half of our candidates have an Einstein radius ≳ 1.7 arcsec, which is significantly greater than seen in SLACS and BELLS. These larger Einstein radii produce more extended images of the background galaxy, increasing the probability that a background emission-line will enter one of the IFU spectroscopic fibres, making detection more likely.

  6. Your perspective and my benefit: multiple lesion models of self-other integration strategies during social bargaining.

    PubMed

    Melloni, Margherita; Billeke, Pablo; Baez, Sandra; Hesse, Eugenia; de la Fuente, Laura; Forno, Gonzalo; Birba, Agustina; García-Cordero, Indira; Serrano, Cecilia; Plastino, Angelo; Slachevsky, Andrea; Huepe, David; Sigman, Mariano; Manes, Facundo; García, Adolfo M; Sedeño, Lucas; Ibáñez, Agustín

    2016-11-01

    Recursive social decision-making requires the use of flexible, context-sensitive long-term strategies for negotiation. To succeed in social bargaining, participants' own perspectives must be dynamically integrated with those of interactors to maximize self-benefits and adapt to the other's preferences, respectively. This is a prerequisite to develop a successful long-term self-other integration strategy. While such form of strategic interaction is critical to social decision-making, little is known about its neurocognitive correlates. To bridge this gap, we analysed social bargaining behaviour in relation to its structural neural correlates, ongoing brain dynamics (oscillations and related source space), and functional connectivity signatures in healthy subjects and patients offering contrastive lesion models of neurodegeneration and focal stroke: behavioural variant frontotemporal dementia, Alzheimer's disease, and frontal lesions. All groups showed preserved basic bargaining indexes. However, impaired self-other integration strategy was found in patients with behavioural variant frontotemporal dementia and frontal lesions, suggesting that social bargaining critically depends on the integrity of prefrontal regions. Also, associations between behavioural performance and data from voxel-based morphometry and voxel-based lesion-symptom mapping revealed a critical role of prefrontal regions in value integration and strategic decisions for self-other integration strategy. Furthermore, as shown by measures of brain dynamics and related sources during the task, the self-other integration strategy was predicted by brain anticipatory activity (alpha/beta oscillations with sources in frontotemporal regions) associated with expectations about others' decisions. This pattern was reduced in all clinical groups, with greater impairments in behavioural variant frontotemporal dementia and frontal lesions than in Alzheimer's disease. Finally, connectivity analysis from functional magnetic resonance imaging evidenced a fronto-temporo-parietal network involved in successful self-other integration strategy, with selective compromise of long-distance connections in frontal disorders. In sum, this work provides unprecedented evidence of convergent behavioural and neurocognitive signatures of strategic social bargaining in different lesion models. Our findings offer new insights into the critical roles of prefrontal hubs and associated temporo-parietal networks for strategic social negotiation. © The Author (2016). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. The New York Brain Bank of Columbia University: practical highlights of 35 years of experience.

    PubMed

    Ramirez, Etty Paola Cortes; Keller, Christian Ernst; Vonsattel, Jean Paul

    2018-01-01

    The New York Brain Bank processes brains and organs of clinically well-characterized patients with age-related neurodegenerative diseases and, for comparison, from individuals without neurologic or psychiatric impairments. The donors, either patients or unaffected individuals, were evaluated at healthcare facilities of Columbia University in New York. Each source brain yields four categories of samples: fresh frozen blocks and crushed parenchyma, and formalin-fixed wet blocks and histology sections. A source brain is thoroughly evaluated to determine qualitatively and quantitatively any changes it might harbor using conventional neuropathologic techniques. The clinical and pathologic diagnoses are integrated to determine the distributive diagnosis assigned to the samples obtained from a source brain. The protocol was developed in 1981 in response to the evolving requirements of basic investigations on neurodegeneration and has been gradually standardized since. The methods assimilate long-standing experience from multiple centers. The resulting and current protocol includes a constant central core applied to all brains, with conditional flexibility around it. The New York Brain Bank is an integral part of the department of pathology, where the expertise, teaching duties, and hardware are shared. Since details of the protocols are available online, this chapter focuses on practical issues in professionalizing brain banking. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Real-Time Microscopic Monitoring of Flow, Voltage and Current in the Proton Exchange Membrane Water Electrolyzer.

    PubMed

    Lee, Chi-Yuan; Li, Shih-Chun; Chen, Chia-Hung; Huang, Yen-Ting; Wang, Yu-Syuan

    2018-03-15

    Looking for alternative energy sources has been an inevitable trend since the oil crisis, and close attention has been paid to hydrogen energy. The proton exchange membrane (PEM) water electrolyzer is characterized by high energy efficiency, high yield, a simple system and low operating temperature. The electrolyzer generates hydrogen from water free of any carbon sources (provided the electrons come from renewable sources such as solar and wind), so it is very clean and completely satisfies the environmental requirement. However, in long-term operation of the PEM water electrolyzer, the membrane material durability, catalyst corrosion and nonuniformity of local flow, voltage and current in the electrolyzer can influence the overall performance. It is difficult to measure the internal physical parameters of the PEM water electrolyzer, and the physical parameters are interrelated. Therefore, this study uses micro-electro-mechanical systems (MEMS) technology to develop a flexible integrated microsensor; internal multiple physical information is extracted to determine the optimal working parameters for the PEM water electrolyzer. The real operational data of local flow, voltage and current in the PEM water electrolyzer are measured simultaneously by the flexible integrated microsensor, so as to enhance the performance of the PEM water electrolyzer and to prolong its service life.

  9. Real-Time Microscopic Monitoring of Flow, Voltage and Current in the Proton Exchange Membrane Water Electrolyzer

    PubMed Central

    Lee, Chi-Yuan; Li, Shih-Chun; Chen, Chia-Hung; Huang, Yen-Ting; Wang, Yu-Syuan

    2018-01-01

    Looking for alternative energy sources has been an inevitable trend since the oil crisis, and close attention has been paid to hydrogen energy. The proton exchange membrane (PEM) water electrolyzer is characterized by high energy efficiency, high yield, a simple system and low operating temperature. The electrolyzer generates hydrogen from water free of any carbon sources (provided the electrons come from renewable sources such as solar and wind), so it is very clean and completely satisfies the environmental requirement. However, in long-term operation of the PEM water electrolyzer, the membrane material durability, catalyst corrosion and nonuniformity of local flow, voltage and current in the electrolyzer can influence the overall performance. It is difficult to measure the internal physical parameters of the PEM water electrolyzer, and the physical parameters are interrelated. Therefore, this study uses micro-electro-mechanical systems (MEMS) technology to develop a flexible integrated microsensor; internal multiple physical information is extracted to determine the optimal working parameters for the PEM water electrolyzer. The real operational data of local flow, voltage and current in the PEM water electrolyzer are measured simultaneously by the flexible integrated microsensor, so as to enhance the performance of the PEM water electrolyzer and to prolong its service life. PMID:29543734

  10. Coupling long and short term decisions in the design of urban water supply infrastructure for added reliability and flexibility

    NASA Astrophysics Data System (ADS)

    Marques, G.; Fraga, C. C. S.; Medellin-Azuara, J.

    2016-12-01

    The expansion and operation of urban water supply systems under growing demands, hydrologic uncertainty and water scarcity requires a strategic combination of supply sources for reliability, reduced costs and improved operational flexibility. The design and operation of such a portfolio of water supply sources involves integration of long and short term planning to determine what and when to expand, and how much to use of each supply source, accounting for interest rates, economies of scale and hydrologic variability. This research presents an integrated methodology coupling dynamic programming optimization with quadratic programming to optimize the expansion (long term) and operations (short term) of multiple water supply alternatives. Lagrange multipliers produced by the short-term model provide a signal about the marginal opportunity cost of expansion to the long-term model, in an iterative procedure. A simulation model hosts the water supply infrastructure and hydrologic conditions. Results allow (a) identification of trade-offs between cost and reliability of different expansion paths and water use decisions; (b) evaluation of water transfers between urban supply systems; and (c) evaluation of potential gains from reducing water system losses as a portfolio component. The latter is critical in several developing countries where water supply system losses are high and often neglected in favor of more system expansion.
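
    The coupling can be caricatured in a few lines (toy numbers, not the study's model): a short-term allocation problem yields a shadow price on constrained capacity, and the long-term model compares that price against the annualized cost of expansion.

        # Shadow price of capacity from a short-term allocation LP.
        import numpy as np
        from scipy.optimize import linprog

        demand, capacity = 120.0, 100.0
        costs = np.array([1.0, 3.0])          # cheap limited source vs expensive backup
        res = linprog(costs, A_eq=[[1.0, 1.0]], b_eq=[demand],
                      A_ub=[[1.0, 0.0]], b_ub=[capacity],
                      bounds=[(0, None), (0, None)], method="highs")

        shadow_price = -res.ineqlin.marginals[0]   # value of one more unit of capacity
        annualized_expansion_cost = 1.5            # hypothetical cost per unit per year
        print("shadow price:", shadow_price)
        print("expand?", shadow_price > annualized_expansion_cost)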

  11. Automated realtime data import for the i2b2 clinical data warehouse: introducing the HL7 ETL cell.

    PubMed

    Majeed, Raphael W; Röhrig, Rainer

    2012-01-01

    Clinical data warehouses are used to consolidate all available clinical data from one or multiple organizations. They represent an important source for clinical research, quality management and controlling. Since its introduction, the data warehouse i2b2 has gathered a large user base in the research community. Yet, little work has been done on the process of importing clinical data into data warehouses using existing standards. In this article, we present a novel approach of utilizing the clinical integration server, commonly available in most hospitals, as the data source. As information passes through the integration server, the standardized HL7 message is immediately parsed and inserted into the data warehouse. Evaluation of import speeds suggests the feasibility of the provided solution for real-time processing of HL7 messages. By using the presented approach of standardized data import, i2b2 can be used as a plug and play data warehouse, without the hurdle of customized imports for every clinical information system or electronic medical record. The provided solution is available for download at http://sourceforge.net/projects/histream/.
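
    The parsing step is straightforward because HL7 v2 messages are pipe-delimited. The sketch below is schematic (not the HIStream implementation): it pulls the patient identifier from the PID segment and maps OBX result segments to i2b2-style observation facts.

        # Map OBX segments of an HL7 v2 ORU message to observation facts.
        SAMPLE = "\r".join([
            "MSH|^~\\&|LIS|HOSP|I2B2|HOSP|20120101120000||ORU^R01|123|P|2.5",
            "PID|1||4711||Doe^John",
            "OBX|1|NM|718-7^Hemoglobin^LN||13.8|g/dL|||||F",
            "OBX|2|NM|6690-2^WBC^LN||7.1|10*3/uL|||||F",
        ])

        def observation_facts(message):
            facts, patient_id = [], None
            for segment in message.split("\r"):
                fields = segment.split("|")
                if fields[0] == "PID":
                    patient_id = fields[3]          # PID-3 patient identifier
                elif fields[0] == "OBX":
                    code = fields[3].split("^")[0]  # concept code, e.g. LOINC
                    facts.append({"patient": patient_id, "concept": code,
                                  "value": fields[5], "units": fields[6]})
            return facts

        for fact in observation_facts(SAMPLE):
            print(fact)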

  12. Parallel evolution of Nitric Oxide signaling: Diversity of synthesis & memory pathways

    PubMed Central

    Moroz, Leonid L.; Kohn, Andrea B.

    2014-01-01

    The origin of NO signaling can be traced back to the origin of life, with large-scale parallel evolution of NO synthases (NOSs). Inducible-like NOSs may be the most basal prototype of all NOSs, and neuronal-like NOSs might have evolved several times from this prototype. Other enzymatic and non-enzymatic pathways for NO synthesis have been discovered using the reduction of nitrites, an alternative source of NO. Diverse synthetic mechanisms can co-exist within the same cell, providing a complex NO-oxygen microenvironment tightly coupled with cellular energetics. The dissection of multiple sources of NO formation is crucial in the analysis of complex biological processes such as neuronal integration and learning mechanisms, where NO can act as a volume transmitter within memory-forming circuits. In particular, the molecular analysis of learning mechanisms (most notably in insects and gastropod molluscs) opens conceptually different perspectives to understand the logic of recruiting evolutionarily conserved pathways for novel functions. Giant, uniquely identified cells from Aplysia and related species present unique opportunities for integrative analysis of NO signaling at the single cell level. PMID:21622160

  13. Triboelectric nanogenerator built on suspended 3D spiral structure as vibration and positioning sensor and wave energy harvester.

    PubMed

    Hu, Youfan; Yang, Jin; Jing, Qingshen; Niu, Simiao; Wu, Wenzhuo; Wang, Zhong Lin

    2013-11-26

    An unstable mechanical structure that can self-balance when perturbed is a superior choice for vibration energy harvesting and vibration detection. In this work, a suspended 3D spiral structure is integrated with a triboelectric nanogenerator (TENG) for energy harvesting and sensor applications. The newly designed vertical contact-separation mode TENG has a wide working bandwidth of 30 Hz in the low-frequency range with a maximum output power density of 2.76 W/m² on a load of 6 MΩ. The position of an in-plane vibration source was identified by placing TENGs at multiple positions as multichannel, self-powered active sensors, and the location of the vibration source was determined with an error of less than 6%. The magnitude of the vibration is also measured by the output voltage and current signal of the TENG. By integrating the TENG inside a buoy ball, wave energy harvesting at the water surface has been demonstrated and used to power illumination lights, which shows great potential for applications in marine science and environmental/infrastructure monitoring.

  14. Development of a High Dynamic Range Pixel Array Detector for Synchrotrons and XFELs

    NASA Astrophysics Data System (ADS)

    Weiss, Joel Todd

    Advances in synchrotron radiation light source technology have opened new lines of inquiry in material science, biology, and everything in between. However, x-ray detector capabilities must advance in concert with light source technology to fully realize experimental possibilities. X-ray free electron lasers (XFELs) place particularly large demands on the capabilities of detectors, and developments towards diffraction-limited storage ring sources also necessitate detectors capable of measuring very high flux [1-3]. The detector described herein builds on the Mixed Mode Pixel Array Detector (MM-PAD) framework, developed previously by our group to perform high dynamic range imaging, and the Adaptive Gain Integrating Pixel Detector (AGIPD) developed for the European XFEL by a collaboration between Deutsches Elektronen-Synchrotron (DESY), the Paul Scherrer Institute (PSI), the University of Hamburg, and the University of Bonn, led by Heinz Graafsma [4, 5]. The feasibility of combining adaptive gain with charge removal techniques to increase dynamic range in XFEL experiments is assessed by simulating XFEL scatter with a pulsed infrared laser. The strategy is incorporated into pixel prototypes which are evaluated with direct current injection to simulate very high incident x-ray flux. A fully functional 16x16 pixel hybrid integrating x-ray detector featuring several different pixel architectures based on the prototypes was developed. This dissertation describes its operation and characterization. To extend dynamic range, charge is removed from the integration node of the front-end amplifier without interrupting integration. The number of times this process occurs is recorded by a digital counter in the pixel. The parameter limiting full well is thereby shifted from the size of an integration capacitor to the depth of a digital counter. The result is similar to that achieved by counting pixel array detectors, but the integrators presented here are designed to tolerate a sustained flux >10^11 x-rays/pixel/second. In addition, digitization of residual analog signals allows sensitivity to single x-rays or low flux signals. Pixel high flux linearity is evaluated by direct exposure to an unattenuated synchrotron source x-ray beam, and flux measurements of more than 10^10 9.52 keV x-rays/pixel/s are made. Detector sensitivity to small signals is evaluated and dominant sources of error are identified. These new pixels boast multiple orders of magnitude improvement in maximum sustained flux over the MM-PAD, which is capable of measuring a sustained flux in excess of 10^8 x-rays/pixel/second while maintaining sensitivity to smaller signals, down to single x-rays.
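
    The charge-removal counting scheme implies a simple reconstruction: the per-frame signal is the removal count times the charge quantum plus the digitized analog residual. The constants below are invented for illustration, not the detector's calibration:

        # Back-of-envelope signal reconstruction for a charge-removal counting pixel.
        CHARGE_QUANTUM_KEV = 680.0   # hypothetical energy equivalent per removal
        ADU_PER_KEV = 2.0            # hypothetical residual-ADC gain

        def reconstruct_kev(counter, residual_adu):
            # total deposited energy in one integration period
            return counter * CHARGE_QUANTUM_KEV + residual_adu / ADU_PER_KEV

        print(reconstruct_kev(counter=1000, residual_adu=57.0), "keV")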

  15. Design, Fabrication, and Characterization of Carbon Nanotube Field Emission Devices for Advanced Applications

    NASA Astrophysics Data System (ADS)

    Radauscher, Erich Justin

    Carbon nanotubes (CNTs) have recently emerged as promising candidates for electron field emission (FE) cathodes in integrated FE devices. These nanostructured carbon materials possess exceptional properties and their synthesis can be thoroughly controlled. Their integration into advanced electronic devices, including not only FE cathodes, but sensors, energy storage devices, and circuit components, has seen rapid growth in recent years. The results of the studies presented here demonstrate that the CNT field emitter is an excellent candidate for next generation vacuum microelectronics and related electron emission devices in several advanced applications. The work presented in this study addresses determining factors that currently confine the performance and application of CNT-FE devices. Characterization studies and improvements to the FE properties of CNTs, along with Micro-Electro-Mechanical Systems (MEMS) design and fabrication, were utilized in achieving these goals. Important performance limiting parameters, including emitter lifetime and failure from poor substrate adhesion, are examined. The compatibility and integration of CNT emitters with the governing MEMS substrate (i.e., polycrystalline silicon), and its impact on these performance limiting parameters, are reported. CNT growth mechanisms and kinetics were investigated and compared to silicon (100) to improve the design of CNT emitter integrated MEMS based electronic devices, specifically in vacuum microelectronic device (VMD) applications. Improved growth allowed for design and development of novel cold-cathode FE devices utilizing CNT field emitters. A chemical ionization (CI) source based on a CNT-FE electron source was developed and evaluated in a commercial desktop mass spectrometer for explosives trace detection. This work demonstrated the first reported use of a CNT-based ion source capable of collecting CI mass spectra. The CNT-FE source demonstrated low power requirements, pulsing capabilities, and average lifetimes of over 320 hours when operated in constant emission mode under elevated pressures, without sacrificing performance. Additionally, a novel packaged ion source for miniature mass spectrometer applications using CNT emitters, a MEMS based Nier-type geometry, and a Low Temperature Cofired Ceramic (LTCC) 3D scaffold with integrated ion optics were developed and characterized. While previous research has shown other devices capable of collecting ion currents on chip, this LTCC packaged MEMS micro-ion source demonstrated improvements in energy and angular dispersion as well as the ability to direct the ions out of the packaged source and towards a mass analyzer. Simulations and experimental design, fabrication, and characterization were used to make these improvements. Finally, novel CNT-FE devices were developed to investigate their potential to perform as active circuit elements in VMD circuits. Difficulty integrating devices at micron-scales has hindered the use of vacuum electronic devices in integrated circuits, despite the unique advantages they offer in select applications. Using a combination of particle trajectory simulation and experimental characterization, device performance in an integrated platform was investigated. Solutions to the difficulties in operating multiple devices in close proximity and enhancing electron transmission (i.e., reducing grid loss) are explored in detail. 
A systematic and iterative process was used to develop isolation structures that reduced crosstalk between neighboring devices from 15% on average to nearly zero. Innovative geometries and a new operational mode reduced grid loss by nearly threefold, thereby improving transmission of the emitted cathode current to the anode from 25% in initial designs to 70% on average. These performance enhancements are important enablers for larger scale integration and for the realization of complex vacuum microelectronic circuits.

  16. Integrated water system simulation by considering hydrological and biogeochemical processes: model development, with parameter sensitivity and autocalibration

    NASA Astrophysics Data System (ADS)

    Zhang, Y. Y.; Shao, Q. X.; Ye, A. Z.; Xing, H. T.; Xia, J.

    2016-02-01

    Integrated water system modeling is a feasible approach to understanding severe water crises in the world and promoting the implementation of integrated river basin management. In this study, a classic hydrological model (the time variant gain model: TVGM) was extended to an integrated water system model by coupling multiple water-related processes in hydrology, biogeochemistry, water quality, and ecology, and considering the interference of human activities. A parameter analysis tool, which included sensitivity analysis, autocalibration and model performance evaluation, was developed to improve modeling efficiency. To demonstrate the model's performance, the Shaying River catchment, which is the largest highly regulated and heavily polluted tributary of the Huai River basin in China, was selected as the case study area. Model performance was evaluated on the key water-related components, including runoff, water quality, diffuse pollution load (or nonpoint sources) and crop yield. Results showed that our proposed model simulated most components reasonably well. The simulated daily runoff at most regulated and less-regulated stations matched well with the observations. The average correlation coefficient and Nash-Sutcliffe efficiency were 0.85 and 0.70, respectively. Both the simulated low and high flows at most stations were improved when the dam regulation was considered. The daily ammonium-nitrogen (NH4-N) concentration was also well captured, with an average correlation coefficient of 0.67. Furthermore, the diffuse source load of NH4-N and the corn yield were reasonably simulated at the administrative region scale. This integrated water system model is expected to improve simulation performance as more model functionalities are added, and to provide a scientific basis for the implementation of integrated river basin management.
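
    The two goodness-of-fit scores quoted above are standard and easy to compute; the sketch below uses made-up series, not the study's data:

        # Correlation coefficient and Nash-Sutcliffe efficiency for daily runoff.
        import numpy as np

        def nash_sutcliffe(obs, sim):
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        rng = np.random.default_rng(4)
        obs = rng.gamma(2.0, 5.0, size=365)          # synthetic daily runoff
        sim = obs + rng.normal(0.0, 2.0, size=365)   # model output with error
        print("r   =", np.corrcoef(obs, sim)[0, 1])
        print("NSE =", nash_sutcliffe(obs, sim))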

  17. Time on your hands: Perceived duration of sensory events is biased toward concurrent actions.

    PubMed

    Yon, Daniel; Edey, Rosanna; Ivry, Richard B; Press, Clare

    2017-02-01

    Perceptual systems must rapidly generate accurate representations of the world from sensory inputs that are corrupted by internal and external noise. We can typically obtain more veridical representations by integrating information from multiple channels, but this integration can lead to biases when inputs are, in fact, not from the same source. Although a considerable amount is known about how different sources of information are combined to influence what we perceive, it is not known whether temporal features are combined. It is vital to address this question given the divergent predictions made by different models of cue combination and time perception concerning the plausibility of cross-modal temporal integration, and the implications that such integration would have for research programs in action control and social cognition. Here we present four experiments investigating the influence of movement duration on the perceived duration of an auditory tone. Participants either explicitly (Experiments 1-2) or implicitly (Experiments 3-4) produced hand movements of shorter or longer durations, while judging the duration of a concurrently presented tone (500-950 ms in duration). Across all experiments, judgments of tone duration were attracted toward the duration of executed movements (i.e., tones were perceived to be longer when executing a movement of longer duration). Our results demonstrate that temporal information associated with movement biases perceived auditory duration, placing important constraints on theories modeling cue integration for state estimation, as well as models of time perception, action control and social cognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. Oxytocin Modulates Semantic Integration in Speech Comprehension.

    PubMed

    Ye, Zheng; Stolk, Arjen; Toni, Ivan; Hagoort, Peter

    2017-02-01

    Listeners interpret utterances by integrating information from multiple sources including word level semantics and world knowledge. When the semantics of an expression is inconsistent with their knowledge about the world, the listener may have to search through the conceptual space for alternative possible world scenarios that can make the expression more acceptable. Such cognitive exploration requires considerable computational resources and might depend on motivational factors. This study explores whether and how oxytocin, a neuropeptide known to influence social motivation by reducing social anxiety and enhancing affiliative tendencies, can modulate the integration of world knowledge and sentence meanings. The study used a between-participant double-blind randomized placebo-controlled design. Semantic integration, indexed with magnetoencephalography through the N400m marker, was quantified while 45 healthy male participants listened to sentences that were either congruent or incongruent with facts of the world, after receiving intranasally delivered oxytocin or placebo. Compared with congruent sentences, world knowledge incongruent sentences elicited a stronger N400m signal from the left inferior frontal and anterior temporal regions and medial pFC (the N400m effect) in the placebo group. Oxytocin administration significantly attenuated the N400m effect at both sensor and cortical source levels throughout the experiment, in a state-like manner. Additional electrophysiological markers suggest that the absence of the N400m effect in the oxytocin group is unlikely due to the lack of early sensory or semantic processing or a general downregulation of attention. These findings suggest that oxytocin drives listeners to resolve challenges of semantic integration, possibly by promoting the cognitive exploration of alternative possible world scenarios.

  19. Validating Measurement of Knowledge Integration in Science Using Multiple-Choice and Explanation Items

    ERIC Educational Resources Information Center

    Lee, Hee-Sun; Liu, Ou Lydia; Linn, Marcia C.

    2011-01-01

    This study explores measurement of a construct called knowledge integration in science using multiple-choice and explanation items. We use construct and instructional validity evidence to examine the role multiple-choice and explanation items play in measuring students' knowledge integration ability. For construct validity, we analyze item…

  20. Information Integration in Multiple Cue Judgment: A Division of Labor Hypothesis

    ERIC Educational Resources Information Center

    Juslin, Peter; Karlsson, Linnea; Olsson, Henrik

    2008-01-01

    There is considerable evidence that judgment is constrained to additive integration of information. The authors propose an explanation of why serial and additive cognitive integration can produce accurate multiple cue judgment both in additive and non-additive environments in terms of an adaptive division of labor between multiple representations.…

  1. Information processing of earth resources data

    NASA Technical Reports Server (NTRS)

    Zobrist, A. L.; Bryant, N. A.

    1982-01-01

    Current trends in the use of remotely sensed data include integration of multiple data sources of various formats and use of complex models. These trends have placed a strain on information processing systems because an enormous number of capabilities are needed to perform a single application. A solution to this problem is to create a general set of capabilities which can perform a wide variety of applications. General capabilities for the Image-Based Information System (IBIS) are outlined in this report. They are then cross-referenced for a set of applications performed at JPL.

  2. Assessing the Multiple Dimensions of Engagement to Characterize Learning: A Neurophysiological Perspective

    PubMed Central

    Charland, Patrick; Léger, Pierre-Majorique; Sénécal, Sylvain; Courtemanche, François; Mercier, Julien; Skelling, Yannick; Labonté-Lemoyne, Elise

    2015-01-01

    In a recent theoretical synthesis on the concept of engagement, Fredricks, Blumenfeld, and Paris defined engagement by its multiple dimensions: behavioral, emotional, and cognitive. They observed that individual types of engagement had not been studied in conjunction, and little information was available about interactions or synergy between the dimensions; consequently, more studies would contribute to creating finely tuned teaching interventions. Benefiting from recent technological advances in neuroscience, this paper presents a recently developed methodology to gather and synchronize data on multidimensional engagement during learning tasks. The technique involves the collection of (a) electroencephalography, (b) electrodermal, (c) eye-tracking, and (d) facial emotion recognition data on four different computers, which raises synchronization issues for data collected from multiple sources. Post hoc synchronization in specialized integration software gives researchers a better understanding of the dynamics between the multiple dimensions of engagement. For curriculum developers, these data could provide informed guidelines for achieving better instruction/learning efficiency. This technique also opens up possibilities in the field of brain-computer interactions, where adaptive learning or assessment environments could be developed. PMID:26167712
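
    To illustrate the synchronization step, the sketch below (illustrative only; column names, sampling rates, and tolerances are invented, and the streams are assumed to share a common clock) uses pandas' merge_asof to align multi-rate recordings from separate machines to the nearest EEG timestamp.

      import pandas as pd

      # Hypothetical streams recorded on separate computers, already mapped
      # onto a shared clock (e.g. via measured NTP offsets).
      eeg = pd.DataFrame({"t": pd.to_timedelta([0, 4, 8, 12], unit="ms"),
                          "eeg": [0.1, 0.3, 0.2, 0.4]})
      eda = pd.DataFrame({"t": pd.to_timedelta([1, 9], unit="ms"),
                          "eda": [5.2, 5.4]})
      gaze = pd.DataFrame({"t": pd.to_timedelta([2, 7, 11], unit="ms"),
                           "gaze_x": [512, 530, 498]})

      # Align each slower stream to the nearest EEG sample within 5 ms;
      # unmatched EEG samples get NaN rather than a stale value.
      merged = eeg.sort_values("t")
      for other in (eda, gaze):
          merged = pd.merge_asof(merged, other.sort_values("t"), on="t",
                                 direction="nearest",
                                 tolerance=pd.Timedelta("5ms"))
      print(merged)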

  3. Hybrid 3D printing: a game-changer in personalized cardiac medicine?

    PubMed

    Kurup, Harikrishnan K N; Samuel, Bennett P; Vettukattil, Joseph J

    2015-12-01

    Three-dimensional (3D) printing in congenital heart disease has the potential to increase procedural efficiency and patient safety by improving interventional and surgical planning and reducing radiation exposure. Cardiac magnetic resonance imaging and computed tomography are usually the source datasets to derive 3D printing. More recently, 3D echocardiography has been demonstrated to derive 3D-printed models. The integration of multiple imaging modalities for hybrid 3D printing has also been shown to create accurate printed heart models, which may prove to be beneficial for interventional cardiologists, cardiothoracic surgeons, and as an educational tool. Further advancements in the integration of different imaging modalities into a single platform for hybrid 3D printing and virtual 3D models will drive the future of personalized cardiac medicine.

  4. Vacuum die attach for integrated circuits

    DOEpatents

    Schmitt, E.H.; Tuckerman, D.B.

    1991-09-10

    A thin film eutectic bond for attaching an integrated circuit die to a circuit substrate is formed by coating at least one bonding surface on the die and substrate with an alloying metal, assembling the die and substrate under compression loading, and heating the assembly to an alloying temperature in a vacuum. A very thin bond, 10 microns or less, which is substantially void free, is produced. These bonds have high reliability, good heat and electrical conduction, and high temperature tolerance. The bonds are formed in a vacuum chamber, using a positioning and loading fixture to compression load the die, and an IR lamp or other heat source. For bonding a silicon die to a silicon substrate, a gold silicon alloy bond is used. Multiple dies can be bonded simultaneously. No scrubbing is required. 1 figure.

  5. Vacuum die attach for integrated circuits

    DOEpatents

    Schmitt, Edward H.; Tuckerman, David B.

    1991-01-01

    A thin film eutectic bond for attaching an integrated circuit die to a circuit substrate is formed by coating at least one bonding surface on the die and substrate with an alloying metal, assembling the die and substrate under compression loading, and heating the assembly to an alloying temperature in a vacuum. A very thin bond, 10 microns or less, which is substantially void free, is produced. These bonds have high reliability, good heat and electrical conduction, and high temperature tolerance. The bonds are formed in a vacuum chamber, using a positioning and loading fixture to compression load the die, and an IR lamp or other heat source. For bonding a silicon die to a silicon substrate, a gold silicon alloy bond is used. Multiple dies can be bonded simultaneously. No scrubbing is required.

  6. KinView: A visual comparative sequence analysis tool for integrated kinome research

    PubMed Central

    McSkimming, Daniel Ian; Dastgheib, Shima; Baffi, Timothy R.; Byrne, Dominic P.; Ferries, Samantha; Scott, Steven Thomas; Newton, Alexandra C.; Eyers, Claire E.; Kochut, Krzysztof J.; Eyers, Patrick A.

    2017-01-01

    Multiple sequence alignments (MSAs) are a fundamental analysis tool used throughout biology to investigate relationships between protein sequence, structure, function, evolutionary history, and patterns of disease-associated variants. However, their widespread application in systems biology research is currently hindered by the lack of user-friendly tools to simultaneously visualize, manipulate and query the information conceptualized in large sequence alignments, and the challenges in integrating MSAs with multiple orthogonal data types such as cancer variants and post-translational modifications, which are often stored in heterogeneous data sources and formats. Here, we present the Multiple Sequence Alignment Ontology (MSAOnt), which represents a profile or consensus alignment in an ontological format. Subsets of the alignment are easily selected through the SPARQL Protocol and RDF Query Language for downstream statistical analysis or visualization. We have also created the Kinome Viewer (KinView), an interactive integrative visualization that places eukaryotic protein kinase cancer variants in the context of natural sequence variation and experimentally determined post-translational modifications, which play central roles in the regulation of cellular signaling pathways. Using KinView, we identified differential phosphorylation patterns between tyrosine and serine/threonine kinases in the activation segment, a major kinase regulatory region that is often mutated in proliferative diseases. We discuss cancer variants that disrupt phosphorylation sites in the activation segment, and show how KinView can be used as a comparative tool to identify differences and similarities in natural variation, cancer variants and post-translational modifications between kinase groups, families and subfamilies. Based on KinView comparisons, we identify and experimentally characterize a regulatory tyrosine (Y177) in the PLK4 C-terminal activation segment region termed the P+1 loop. To further demonstrate the application of KinView in hypothesis generation and testing, we formulate and validate a hypothesis explaining a novel predicted loss-of-function variant (D523N) in the regulatory spine of PKCβ, a recently identified tumor suppressor kinase. KinView provides a novel, extensible interface for performing comparative analyses between subsets of kinases and for integrating multiple types of residue-specific annotations in user-friendly formats. PMID:27731453
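
    As a sketch of how an ontological alignment format can be queried, the snippet below uses rdflib to run a SPARQL query over a hypothetical MSAOnt export; the file name, namespace, and property names (inColumn, hasResidue) are invented for illustration and are not the published MSAOnt vocabulary.

      from rdflib import Graph

      g = Graph()
      g.parse("msa_alignment.rdf", format="xml")  # hypothetical MSAOnt export

      # Select the residues in a window of alignment columns (e.g. around the
      # activation segment) for downstream statistics or visualization.
      query = """
      PREFIX msa: <http://example.org/msaont#>
      SELECT ?column ?residue WHERE {
          ?cell msa:inColumn ?column ;
                msa:hasResidue ?residue .
          FILTER(?column >= 140 && ?column <= 160)
      }
      """
      for row in g.query(query):
          print(row.column, row.residue)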

  7. Probabilistic data integration and computational complexity

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.; Mosegaard, K.

    2016-12-01

    Inverse problems in Earth Sciences typically refer to the problem of inferring information about properties of the Earth from observations of geophysical data (the result of nature's solution to the `forward' problem). This problem can be formulated more generally as a problem of `integration of information'. A probabilistic formulation of data integration is in principle simple: if all available information (from e.g. geology, geophysics, remote sensing, chemistry…) can be quantified probabilistically, then algorithms exist that solve the data integration problem either through an analytical description of the combined probability function, or by sampling that probability function. In practice, however, probabilistic data integration may not be easy to apply successfully. This may be related to the use of sampling methods, which are known to be computationally costly. Another source of computational complexity is how the individual types of information are quantified. In one case, a data integration problem is demonstrated where the goal is to determine the existence of buried channels in Denmark, based on multiple sources of geo-information. Because one type of information is too informative (and hence conflicting), this leads to a difficult sampling problem with unrealistic uncertainty. Resolving this conflict prior to data integration leads to an easy data integration problem with no biases. In another case, it is demonstrated how imperfections in the description of the geophysical forward model (related to solving the wave equation) can lead to a difficult data integration problem with severe bias in the results. If the modeling error is accounted for, the data integration problem becomes relatively easy, with no apparent biases. Both examples demonstrate that biased information can have a dramatic effect on the computational efficiency of solving a data integration problem and can lead to biased results and under-estimation of uncertainty. However, in both examples, one can also analyze the performance of the sampling methods used to solve the data integration problem to indicate the existence of biased information. This can be used actively to avoid biases in the available information and, subsequently, in the final uncertainty evaluation.
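
    A minimal sketch of the sampling view of data integration (my illustration, not the authors' code): treat each information source as a probability density over a model parameter, sample their product with a Metropolis walk, and observe that conflicting sources concentrate the combined density where neither source puts mass, with understated uncertainty.

      import numpy as np

      rng = np.random.default_rng(0)

      def log_density(m, mu1, mu2, s1=1.0, s2=1.0):
          # Combined information: product of two independent Gaussian sources.
          return -0.5 * ((m - mu1) / s1) ** 2 - 0.5 * ((m - mu2) / s2) ** 2

      def metropolis(mu1, mu2, n=20000, step=1.0):
          m, samples = 0.0, np.empty(n)
          for i in range(n):
              prop = m + step * rng.standard_normal()
              if np.log(rng.uniform()) < log_density(prop, mu1, mu2) - log_density(m, mu1, mu2):
                  m = prop
              samples[i] = m
          return samples

      agree = metropolis(0.0, 0.5)      # consistent sources
      clash = metropolis(0.0, 8.0)      # conflicting sources
      print(agree.mean(), agree.std())  # compromise near 0.25, plausible spread
      print(clash.mean(), clash.std())  # mass near 4.0 with std ~0.7: neither
                                        # source supports this region, yet the
                                        # combined result looks confidently narrow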

  8. 46 CFR 111.10-5 - Multiple energy sources.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Multiple energy sources. 111.10-5 Section 111.10-5...-GENERAL REQUIREMENTS Power Supply § 111.10-5 Multiple energy sources. Failure of any single generating set energy source such as a boiler, diesel, gas turbine, or steam turbine must not cause all generating sets...

  9. 46 CFR 111.10-5 - Multiple energy sources.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 4 2011-10-01 2011-10-01 false Multiple energy sources. 111.10-5 Section 111.10-5...-GENERAL REQUIREMENTS Power Supply § 111.10-5 Multiple energy sources. Failure of any single generating set energy source such as a boiler, diesel, gas turbine, or steam turbine must not cause all generating sets...

  10. 46 CFR 111.10-5 - Multiple energy sources.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 4 2013-10-01 2013-10-01 false Multiple energy sources. 111.10-5 Section 111.10-5...-GENERAL REQUIREMENTS Power Supply § 111.10-5 Multiple energy sources. Failure of any single generating set energy source such as a boiler, diesel, gas turbine, or steam turbine must not cause all generating sets...

  11. 46 CFR 111.10-5 - Multiple energy sources.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 4 2014-10-01 2014-10-01 false Multiple energy sources. 111.10-5 Section 111.10-5...-GENERAL REQUIREMENTS Power Supply § 111.10-5 Multiple energy sources. Failure of any single generating set energy source such as a boiler, diesel, gas turbine, or steam turbine must not cause all generating sets...

  12. 46 CFR 111.10-5 - Multiple energy sources.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 4 2012-10-01 2012-10-01 false Multiple energy sources. 111.10-5 Section 111.10-5...-GENERAL REQUIREMENTS Power Supply § 111.10-5 Multiple energy sources. Failure of any single generating set energy source such as a boiler, diesel, gas turbine, or steam turbine must not cause all generating sets...

  13. Integration of relational and textual biomedical sources. A pilot experiment using a semi-automated method for logical schema acquisition.

    PubMed

    García-Remesal, M; Maojo, V; Billhardt, H; Crespo, J

    2010-01-01

    Bringing together structured and text-based sources is an exciting challenge for biomedical informaticians, since most relevant biomedical sources belong to one of these categories. In this paper we evaluate the feasibility of integrating relational and text-based biomedical sources using: i) an original logical schema acquisition method for textual databases developed by the authors, and ii) OntoFusion, a system originally designed by the authors for the integration of relational sources. We conducted an integration experiment involving a test set of seven differently structured sources covering the domain of genetic diseases. We used our logical schema acquisition method to generate schemas for all textual sources. The sources were integrated using the methods and tools provided by OntoFusion. The integration was validated using a test set of 500 queries. A panel of experts answered a questionnaire to evaluate i) the quality of the extracted schemas, ii) the query processing performance of the integrated set of sources, and iii) the relevance of the retrieved results. The results of the survey show that our method extracts coherent and representative logical schemas. Experts' feedback on the performance of the integrated system and the relevance of the retrieved results was also positive. Regarding the validation of the integration, the system successfully provided correct results for all queries in the test set. The results of the experiment suggest that text-based sources including a logical schema can be regarded as equivalent to structured databases. Using our method, previous research and existing tools designed for the integration of structured databases can be reused - possibly subject to minor modifications - to integrate differently structured sources.

  14. Drekar v.2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seefeldt, Ben; Sondak, David; Hensinger, David M.

    Drekar is an application code that solves partial differential equations for fluids that can be optionally coupled to electromagnetics. Drekar solves low-Mach compressible and incompressible computational fluid dynamics (CFD), compressible and incompressible resistive magnetohydrodynamics (MHD), and multiple species plasmas interacting with electromagnetic fields. Drekar discretization technology includes continuous and discontinuous finite element formulations, stabilized finite element formulations, mixed integration finite element bases (nodal, edge, face, volume) and an initial arbitrary Lagrangian Eulerian (ALE) capability. Drekar contains the implementation of the discretized physics and leverages the open source Trilinos project for both parallel solver capabilities and general finite element discretization tools. The code will be released open source under a BSD license. The code is used for fundamental research for simulation of fluids and plasmas on high performance computing environments.

  15. Custom chipset and compact module design for a 75-110 GHz laboratory signal source

    NASA Astrophysics Data System (ADS)

    Morgan, Matthew A.; Boyd, Tod A.; Castro, Jason J.

    2016-12-01

    We report on the development and characterization of a compact, full-waveguide bandwidth (WR-10) signal source for general-purpose testing of mm-wave components. The monolithic microwave integrated circuit (MMIC) based multichip module is designed for compactness and ease-of-use, especially in size-constrained test sets such as a wafer probe station. It takes as input a cm-wave continuous-wave (CW) reference and provides a factor of three frequency multiplication as well as amplification, output power adjustment, and in situ output power monitoring. It utilizes a number of custom MMIC chips such as a Schottky-diode limiter and a broadband mm-wave detector, both designed explicitly for this module, as well as custom millimeter-wave multipliers and amplifiers reported in previous papers.

  16. A multi-channel tunable source for atomic sensors

    NASA Astrophysics Data System (ADS)

    Bigelow, Matthew S.; Roberts, Tony D.; McNeil, Shirley A.; Hawthorne, Todd; Battle, Phil

    2015-09-01

    We have designed and completed initial testing on a laser source suitable for atomic interferometry, built from compact, robust, integrated components. Our design capitalizes on robust, well-commercialized, low-noise telecom components with high reliability and declining costs, which will help drive the widespread deployment of this system. The key innovation is the combination of current telecom-based fiber laser and modulator technology with periodically poled waveguide technology to produce tunable laser light at rubidium D1 and D2 wavelengths (expandable to other alkalis) using second harmonic generation (SHG). Unlike direct-diode sources, this source is immune to feedback at the Rb line, eliminating the need for bulky high-power isolators in the system. In addition, the source has GHz-level frequency agility and in our experiments was found to be limited only by the agility of our RF generator. As a proof of principle, the source was scanned through the Doppler-broadened Rb D2 absorption line. With this technology, multiple channels can be independently tuned to produce the fields needed for addressing atomic states in atom interferometers and clocks. Thus, this technology could be useful in the development of cold-atom inertial sensors and gyroscopes.

  17. Use of a mobile device in mental health rehabilitation: A clinical and comprehensive analysis of 11 cases.

    PubMed

    Briand, Catherine; Sablier, Juliette; Therrien, Julie-Anne; Charbonneau, Karine; Pelletier, Jean-François; Weiss-Lambrou, Rhoda

    2018-07-01

    This study aimed to test the feasibility of using a mobile device (Apple technology: iPodTouch®, iPhone® or iPad®) among people with severe mental illness (SMI) in a rehabilitation and recovery process and to document the parameters to be taken into account and the issues involved in implementing this technology in living environments and mental health care settings. A qualitative multiple case study design and multiple data sources were used to understand each case in depth. A clinical and comprehensive analysis of 11 cases was conducted with exploratory and descriptive aims (and the beginnings of explanation building). The multiple-case analysis brought out four typical profiles to illustrate the extent of integration of a personal digital assistant (PDA) as a tool to support mental health rehabilitation and recovery. Each profile highlights four categories of variables identified as determining factors in this process: (1) state of health and related difficulties (cognitive or functional); (2) relationship between comfort level with technology, motivation and personal effort deployed; (3) relationship between support required and support received; and (4) the living environment and follow-up context. This study allowed us to consider the contexts and conditions to be put in place for the successful integration of mobile technology in a mental health rehabilitation and recovery process.

  18. Integrated generation of complex optical quantum states and their coherent control

    NASA Astrophysics Data System (ADS)

    Roztocki, Piotr; Kues, Michael; Reimer, Christian; Romero Cortés, Luis; Sciara, Stefania; Wetzel, Benjamin; Zhang, Yanbing; Cino, Alfonso; Chu, Sai T.; Little, Brent E.; Moss, David J.; Caspani, Lucia; Azaña, José; Morandotti, Roberto

    2018-01-01

    Complex optical quantum states based on entangled photons are essential for investigations of fundamental physics and are at the heart of applications in quantum information science. Recently, integrated photonics has become a leading platform for the compact, cost-efficient, and stable generation and processing of optical quantum states. However, on-chip sources are currently limited to basic two-dimensional (qubit) two-photon states, whereas scaling the state complexity requires access to states composed of several (>2) photons and/or exhibiting high photon dimensionality. Here we show that the use of integrated frequency combs (on-chip light sources with a broad spectrum of evenly spaced frequency modes) based on high-Q nonlinear microring resonators can provide solutions for such scalable complex quantum state sources. In particular, by using spontaneous four-wave mixing within the resonators, we demonstrate the generation of bi- and multi-photon entangled qubit states over a broad comb of channels spanning the S, C, and L telecommunications bands, and control these states coherently to perform quantum interference measurements and state tomography. Furthermore, we demonstrate the on-chip generation of entangled high-dimensional (quDit) states, where the photons are created in a coherent superposition of multiple pure frequency modes. Specifically, we confirm the realization of a quantum system with at least one hundred dimensions. Moreover, using off-the-shelf telecommunications components, we introduce a platform for the coherent manipulation and control of frequency-entangled quDit states. Our results suggest that microcavity-based entangled photon state generation and the coherent control of states using accessible telecommunications infrastructure introduce a powerful and scalable platform for quantum information science.
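
    For concreteness (standard notation, not taken from the paper): a maximally frequency-bin-entangled two-photon quDit state has the form

      |\Psi\rangle = \frac{1}{\sqrt{d}} \sum_{k=1}^{d} |k\rangle_{\mathrm{signal}} \, |k\rangle_{\mathrm{idler}}

    where |k\rangle denotes the k-th comb frequency mode. With d \geq 10 modes per photon, the two-photon state spans d \times d \geq 100 dimensions, which is how a comb source can reach the at-least-one-hundred-dimensional system reported above.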

  19. Semantic Data Integration and Ontology Use within the Global Earth Observation System of Systems (GEOSS) Global Water Cycle Data Integration System

    NASA Astrophysics Data System (ADS)

    Pozzi, W.; Fekete, B.; Piasecki, M.; McGuinness, D.; Fox, P.; Lawford, R.; Vorosmarty, C.; Houser, P.; Imam, B.

    2008-12-01

    The inadequacies of water cycle observations for monitoring long-term changes in the global water system, as well as their feedback into the climate system, pose a major constraint on sustainable development of water resources and improvement of water management practices. Hence, the Group on Earth Observations (GEO) has established Task WA-08-01, "Integration of in situ and satellite data for water cycle monitoring," an integrative initiative combining different types of satellite and in situ observations related to key variables of the water cycle with model outputs for improved accuracy and global coverage. This presentation proposes development of the Rapid, Integrated Monitoring System for the Water Cycle (Global-RIMS)--already employed by the GEO Global Terrestrial Network for Hydrology (GTN-H)--as either one of the main components of the GEOSS water cycle monitoring system or as a system linked with the Asian system to constitute its modeling component. We further propose expanded, augmented capability to run multiple grids to embrace some of the heterogeneous methods and formats of the Earth Science, Hydrology, and Hydraulic Engineering communities. Different methodologies are employed by the Earth Science (land surface modeling), Hydrological (GIS), and Hydraulic Engineering communities, with each community employing models that require different input data. Data will be routed as input variables to the models through web services, allowing satellite and in situ data to be integrated within the modeling framework. Semantic data integration will provide the automation needed for this system to operate in near-real-time. Multiple data collections for ground water, precipitation, soil moisture satellite data (such as SMAP), and lake data will require multiple low-level ontologies, and an upper-level ontology will permit user-friendly water management knowledge to be synthesized. These ontologies will have to have overlapping terms mapped and linked together so that they can cover an even wider net of data sources. The goal is to develop the means to link the upper-level and lower-level ontologies and to have these registered within the GEOSS Registry. Actual operational ontologies that would link to models, or to data collections containing input variables required by models, would have to be nested underneath this top-level ontology, analogous to the mapping that has been carried out among ontologies within GEON.

  20. Computer simulation of a multiple-aperture coherent laser radar

    NASA Astrophysics Data System (ADS)

    Gamble, Kevin J.; Weeks, Arthur R.

    1996-06-01

    This paper presents the construction of a 2D multiple-aperture coherent laser radar simulation that is capable of including the effects of the time evolution of speckle on the laser radar output. Every portion of a laser radar system is modeled in software, including quarter- and half-wave plates, beamsplitters (polarizing and non-polarizing), the detector, the laser source, and all necessary lenses. Free-space propagation is implemented using the Rayleigh-Sommerfeld integral for both orthogonal polarizations. Atmospheric turbulence is also included in the simulation and is modeled using time-correlated Kolmogorov phase screens. The simulation itself can be configured to simulate both monostatic and bistatic systems. The simulation allows the user to specify component-level parameters such as extinction ratios for polarizing beam splitters, detector sizes and shapes, orientation of the slow axis for quarter/half-wave plates, and other properties of components used in the system. This makes the simulation a useful tool in the design of multiple-aperture laser radar systems.
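
    As an illustration of one ingredient the abstract names, the sketch below generates a Kolmogorov phase screen with the common FFT method (my sketch, not the authors' code; the normalization follows the usual recipe and should be treated as illustrative). Time correlation can then be approximated by frozen flow, i.e. shifting the same screen across the aperture between pulses rather than drawing independent screens.

      import numpy as np

      def kolmogorov_phase_screen(n, dx, r0, rng):
          # n: grid size (pixels), dx: grid spacing (m), r0: Fried parameter (m).
          df = 1.0 / (n * dx)                   # frequency-grid spacing (1/m)
          fx = np.fft.fftfreq(n, d=dx)
          f = np.hypot(*np.meshgrid(fx, fx))    # radial spatial frequency
          f[0, 0] = np.inf                      # suppress the undefined DC term
          psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)  # Kolmogorov PSD
          cn = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
          cn *= np.sqrt(psd) * df
          return np.real(np.fft.ifft2(cn)) * n ** 2  # phase in radians

      screen = kolmogorov_phase_screen(256, 0.01, r0=0.1,
                                       rng=np.random.default_rng(1))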
