Sample records for common information model

  1. Promoting Model-based Definition to Establish a Complete Product Definition

    PubMed Central

    Ruemler, Shawn P.; Zimmerman, Kyle E.; Hartman, Nathan W.; Hedberg, Thomas; Feeny, Allison Barnard

    2016-01-01

    The manufacturing industry is evolving and starting to use 3D models as the central knowledge artifact for product data and product definition, or what is known as Model-based Definition (MBD). The Model-based Enterprise (MBE) uses MBD as a way to transition away from traditional paper-based drawings and documentation. As MBD grows in popularity, it is imperative to understand what information is needed in the transition from drawings to models so that models represent all the relevant information needed for processes to continue efficiently. Finding this information can help define what data is common amongst different models in different stages of the lifecycle, which could help establish a Common Information Model. The Common Information Model is a source that contains common information from domain-specific elements amongst different aspects of the lifecycle. To help establish this Common Information Model, information about how models are used in industry within different workflows needs to be understood. To retrieve this information, a survey was administered to industry professionals from various sectors. Based on the results of the survey, a Common Information Model could not be established. However, the results gave great insight that will help in further investigation of the Common Information Model. PMID:28070155

  2. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 2. CDMP Test Case Report.

    DTIC Science & Technology

    1985-11-01

    Integrated Information Support System (IISS), Volume V - Common Data Model Subsystem, Part 2 - CDMP Test Case Report. General Electric Company, Production Resources Consulting, One River Road, Schenectady, NY.

  3. Information Interaction Study for DER and DMS Interoperability

    NASA Astrophysics Data System (ADS)

    Liu, Haitao; Lu, Yiming; Lv, Guangxian; Liu, Peng; Chen, Yu; Zhang, Xinhui

    The Common Information Model (CIM) is an abstract data model that can be used to represent the major objects in Distribution Management System (DMS) applications. Because the CIM does not model Distributed Energy Resources (DERs), it cannot meet the requirements of DER operation and management for advanced DMS applications. DER modeling was therefore studied from a system point of view, and the article first proposes an extended CIM information model. By analyzing the basic structure of message interaction between DMS and DER, a bidirectional message-mapping method based on data exchange is then proposed.

  4. Standard Information Models for Representing Adverse Sensitivity Information in Clinical Documents.

    PubMed

    Topaz, M; Seger, D L; Goss, F; Lai, K; Slight, S P; Lau, J J; Nandigam, H; Zhou, L

    2016-01-01

    Adverse sensitivity (e.g., allergy and intolerance) information is a critical component of any electronic health record system. While several standards exist for structured entry of adverse sensitivity information, many clinicians record this data as free text. This study aimed to 1) identify and compare the existing common adverse sensitivity information models, and 2) evaluate the coverage of the adverse sensitivity information models for representing allergy information in a subset of inpatient and outpatient adverse sensitivity clinical notes. We compared four common adverse sensitivity information models: the Health Level 7 Allergy and Intolerance Domain Analysis Model (HL7-DAM); the Fast Healthcare Interoperability Resources (FHIR); the Consolidated Continuity of Care Document (C-CDA); and openEHR, and evaluated their coverage on a corpus of inpatient and outpatient notes (n = 120). We found that allergy specialists' notes had the highest frequency of adverse sensitivity attributes per note, whereas emergency department notes had the fewest attributes. Overall, the models had many similarities in the central attributes, which covered between 75% and 95% of the adverse sensitivity information contained within the notes. However, representations of some attributes (especially the value sets) were not well aligned between the models, which is likely to present an obstacle to achieving data interoperability. Also, adverse sensitivity exceptions were not well represented among the information models. Although we found that common adverse sensitivity models cover a significant portion of the relevant information in clinical notes, our results highlight areas that need to be reconciled between the standards for data interoperability.

  5. "My Understanding Has Grown, My Perspective Has Switched": Linking Informal Writing to Learning Goals

    ERIC Educational Resources Information Center

    Hudd, Suzanne S.; Smart, Robert A.; Delohery, Andrew W.

    2011-01-01

    The use of informal writing is common in sociology. This article presents one model for integrating informal written work with learning goals through a theoretical framework known as concentric thinking. More commonly referred to as "the PTA model" because of the series of cognitive tasks it promotes--prioritization, translation, and analogy…

  6. Can Moral Hazard Be Resolved by Common-Knowledge in S4n-Knowledge?

    NASA Astrophysics Data System (ADS)

    Matsuhisa, Takashi

    This article investigates the relationship between common-knowledge and agreement in a multi-agent system, and applies the agreement result obtained via common-knowledge to the principal-agent model under non-partition information. We treat two problems: (1) how to capture, from an epistemic point of view, the fact that the agents agree on an event or reach consensus on it, and (2) how the agreement theorem can make progress toward settling a moral hazard problem in the principal-agents model under non-partition information. We propose a solution program for the moral hazard in the principal-agents model under non-partition information by common-knowledge. We start from the assumption that the agents have the knowledge structure induced from a reflexive and transitive relation associated with the multi-modal logic S4n. Each agent obtains the membership value of an event under his/her private information, so he/she treats the event as a fuzzy set. Specifically, we consider the situation in which the agents commonly know all membership values of the other agents. In this circumstance we show the agreement theorem that consensus on the membership values among all agents can still be guaranteed. Furthermore, under certain assumptions we show that the moral hazard can be resolved in the principal-agent model when all the expected marginal costs are common-knowledge among the principal and agents.

  7. Common world model for unmanned systems: Phase 2

    NASA Astrophysics Data System (ADS)

    Dean, Robert M. S.; Oh, Jean; Vinokurov, Jerry

    2014-06-01

    The Robotics Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities. Key to this effort is the Common World Model, which moves beyond the state of the art by representing the world using semantic and symbolic as well as metric information. It joins these layers of information to define objects in the world. These objects may be reasoned upon jointly using traditional geometric algorithms, symbolic cognitive algorithms, and new computational nodes formed by the combination of these disciplines to address Symbol Grounding and Uncertainty. The Common World Model must understand how these objects relate to each other. It includes the concept of Self-Information about the robot. By encoding current capability, component status, task execution state, and their histories, we track information which enables the robot to reason and adapt its performance using Meta-Cognition and Machine Learning principles. The world model also includes models of how entities in the environment behave, which enable prediction of future world states. To manage complexity, we have adopted a phased implementation approach. Phase 1, published in these proceedings in 2013 [1], presented the approach for linking metric with symbolic information and interfaces for traditional planners and cognitive reasoning. Here we discuss the design of "Phase 2" of this world model, which extends the Phase 1 design, API, and data structures, and reviews the use of the Common World Model as part of a semantic navigation use case.

  8. Standardized reporting of functioning information on ICF-based common metrics.

    PubMed

    Prodinger, Birgit; Tennant, Alan; Stucki, Gerold

    2018-02-01

    In clinical practice, research, and national health information systems, a variety of clinical data collection tools are used to collect information on people's functioning. Reporting on ICF-based common metrics enables standardized documentation of functioning information in national health information systems. The objective of this methodological note on applying the ICF in rehabilitation is to demonstrate how to report functioning information collected with a data collection tool on ICF-based common metrics. We first specify the requirements for the standardized reporting of functioning information. Secondly, we introduce the methods needed for transforming functioning data to ICF-based common metrics. Finally, we provide an example. The requirements for standardized reporting are as follows: 1) a common conceptual framework to enable content comparability between any health information; and 2) a measurement framework so that scores from two or more clinical data collection tools can be directly compared. The methods needed to achieve these requirements are the ICF Linking Rules and the Rasch measurement model. Using data collected with the 36-item Short Form Health Survey (SF-36), the World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0), and the Stroke Impact Scale 3.0 (SIS 3.0), the application of standardized reporting based on common metrics is demonstrated. A subset of items from the three tools linked to common chapters of the ICF (d4 Mobility, d5 Self-care and d6 Domestic life) were entered as "super items" into the Rasch model. Good fit was achieved with no residual local dependency and a unidimensional metric. A transformation table allows for comparison between scales, and between a scale and the reporting common metric. Being able to report functioning information collected with commonly used clinical data collection tools on ICF-based common metrics enables clinicians and researchers to continue using their tools while still being able to compare and aggregate the information within and across tools.
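
    For readers unfamiliar with the measurement framework named above: a standard textbook form of the Rasch model for a dichotomous item (our notation, not taken from the article) is

        P(X_{ni} = 1) = \frac{\exp(\theta_n - \beta_i)}{1 + \exp(\theta_n - \beta_i)},

    where \theta_n is person n's level of functioning and \beta_i is the difficulty of item i. Once items from different tools are calibrated onto the same \theta scale, each tool's raw scores can be mapped through \theta to the common metric, which is what the transformation table mentioned above operationalizes.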

  9. Common Object Library Description

    DTIC Science & Technology

    2012-08-01

    For Building Information Modeling (BIM) technology to be successful, it must be consistently applied across many projects, by many teams. The National Building Information ... BIM standards and for future research projects. Subject terms: building information modeling (BIM), object ...

  10. An information model for a virtual private optical network (OVPN) using virtual routers (VRs)

    NASA Astrophysics Data System (ADS)

    Vo, Viet Minh Nhat

    2002-05-01

    This paper describes a virtual private optical network architecture (Optical VPN - OVPN) based on virtual routers (VRs). It improves on architectures suggested for virtual private networks by using virtual routers with optical networks. What is new in this architecture are the changes necessary to adapt to the devices and protocols used in optical networks. This paper also presents information models for the OVPN at the architecture level and at the service level. These are extensions of the DEN (directory-enabled network) and CIM (Common Information Model) models for OVPNs using VRs. The goal is to propose a common management model using policies.

  11. Ontology for Life-Cycle Modeling of Electrical Distribution Systems: Model View Definition

    DTIC Science & Technology

    2013-06-01

    ... building information models (BIM) at the coordinated design stage of building construction. ... a standard for exchanging Building Information Modeling (BIM) data, which defines hundreds of classes for common use in software, currently supported by ... specifications. Subject terms: Construction Operations Building information exchange (COBie), Building Information Modeling (BIM).

  12. A Concept of Constructing a Common Information Space for High Tech Programs Using Information Analytical Systems

    NASA Astrophysics Data System (ADS)

    Zakharova, Alexandra A.; Kolegova, Olga A.; Nekrasova, Maria E.

    2016-04-01

    The paper deals with issues in program management for engineering innovative products. The existing project management tools were analyzed. The aim is to develop a decision support system that takes into account the features of program management for high-tech products: research intensity, a high level of technical risk, unpredictable results due to the impact of various external factors, and the involvement of several implementing agencies. The need for involving experts and using intelligent techniques for information processing is demonstrated. A conceptual model of a common information space to support communication between members of the collaboration on high-tech programs has been developed. The structure and objectives of the information analysis system "Geokhod" were formulated with the purpose of implementing the conceptual model of the common information space in the program "Development and production of a new class of mining equipment - Geokhod".

  13. Common and Innovative Visuals: A sparsity modeling framework for video.

    PubMed

    Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder

    2014-05-02

    Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depict the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework as CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.
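
    In rough notation (a sketch of the idea, not the authors' exact formulation), the decomposition described here models frames x_1, ..., x_T from one scene as a shared component plus sparse innovations, estimated jointly in a compressed-sensing style program such as

        \min_{c,\, e_1, \ldots, e_T} \; \|\Psi c\|_1 + \lambda \sum_{t=1}^{T} \|e_t\|_1
        \quad \text{s.t.} \quad \|x_t - (c + e_t)\|_2 \le \epsilon, \qquad t = 1, \ldots, T,

    where c is the common frame, e_t is the innovative frame for frame t, and \Psi is a sparsifying transform. Under this reading, a segment boundary is signaled when an incoming frame can no longer be represented as c plus a sparse innovation.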

  14. Common world model for unmanned systems

    NASA Astrophysics Data System (ADS)

    Dean, Robert Michael S.

    2013-05-01

    The Robotics Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities. Key to this effort is the Common World Model, which moves beyond the state of the art by representing the world using metric, semantic, and symbolic information. It joins these layers of information to define objects in the world. These objects may be reasoned upon jointly using traditional geometric algorithms, symbolic cognitive algorithms, and new computational nodes formed by the combination of these disciplines. The Common World Model must understand how these objects relate to each other. Our world model includes the concept of Self-Information about the robot. By encoding current capability, component status, task execution state, and their histories, we track information which enables the robot to reason and adapt its performance using Meta-Cognition and Machine Learning principles. The world model includes models of how aspects of the environment behave, which enable prediction of future world states. To manage complexity, we adopted a phased implementation approach to the world model. We discuss the design of "Phase 1" of this world model and its interfaces by tracing perception data through the system from the source to the meta-cognitive layers provided by ACT-R and SS-RICS. We close with lessons learned from implementation and how the design relates to Open Architecture.

  15. Applying the Common Sense Model to Understand Representations of Arsenic Contaminated Well Water

    PubMed Central

    Severtson, Dolores J.; Baumann, Linda C.; Brown, Roger L.

    2015-01-01

    Theory-based research is needed to understand how people respond to environmental health risk information. The common sense model of self-regulation and the mental models approach propose that information shapes individuals' personal understandings, which in turn influence their decisions and actions. We compare these frameworks and explain how the common sense model (CSM) was applied to describe and measure mental representations of arsenic-contaminated well water. Educational information, key informant interviews, and the environmental risk literature were used to develop survey items measuring dimensions of cognitive representations (identity, cause, timeline, consequences, control) and emotional representations. Surveys mailed to 1067 private well users with moderate and elevated arsenic levels yielded an 84% response rate (n=897). Exploratory and confirmatory factor analyses of data from the elevated arsenic group identified a factor structure that retained the CSM representational structure and was consistent across moderate and elevated arsenic groups. The CSM has utility for describing and measuring representations of environmental health risks, thus supporting its application to environmental health risk communication research. PMID:18726811

  16. Covariance Structure Models for Gene Expression Microarray Data

    ERIC Educational Resources Information Center

    Xie, Jun; Bentler, Peter M.

    2003-01-01

    Covariance structure models are applied to gene expression data using a factor model, a path model, and their combination. The factor model is based on a few factors that capture most of the expression information. A common factor of a group of genes may represent a common protein factor for the transcript of the co-expressed genes, and hence, it…

  17. Multitask TSK fuzzy system modeling by mining intertask common hidden structure.

    PubMed

    Jiang, Yizhang; Chung, Fu-Lai; Ishibuchi, Hisao; Deng, Zhaohong; Wang, Shitong

    2015-03-01

    Classical fuzzy system modeling methods implicitly assume data generated from a single task, which is not in accordance with many practical scenarios where data can be acquired from the perspective of multiple tasks. Although one can build an individual fuzzy system model for each task, this individual modeling approach achieves poor generalization ability because it ignores the intertask hidden correlation. In order to circumvent this shortcoming, we consider a general framework for preserving the independent information among different tasks and mining the hidden correlation information among all tasks in multitask fuzzy modeling. In this framework, a low-dimensional subspace (structure) is assumed to be shared among all tasks and hence to constitute the hidden correlation information among all tasks. Under this framework, a multitask Takagi-Sugeno-Kang (TSK) fuzzy system model called MTCS-TSK-FS (TSK-FS for multiple tasks with common hidden structure), based on the classical L2-norm TSK fuzzy system, is proposed in this paper. The proposed model can not only take advantage of independent sample information from the original space for each task, but also effectively use the intertask common hidden structure among multiple tasks to enhance the generalization performance of the built fuzzy systems. Experiments on synthetic and real-world datasets demonstrate the applicability and distinctive performance of the proposed multitask fuzzy system model in multitask regression learning scenarios.

  18. Object-Oriented Technology-Based Software Library for Operations of Water Reclamation Centers

    NASA Astrophysics Data System (ADS)

    Otani, Tetsuo; Shimada, Takehiro; Yoshida, Norio; Abe, Wataru

    SCADA systems in water reclamation centers have been constructed from hardware and software that each manufacturer produced according to its own design. Even though this approach used to be effective for realizing real-time and reliable execution, it is an obstacle to reducing the cost of system construction and maintenance. A promising solution to this problem is to set specifications that can be used in common. In terms of software, the information model approach has been adopted in SCADA systems in other fields, such as telecommunications and power systems. An information model is a piece of software specification that describes a physical or logical object to be monitored. In this paper, we propose information models for operations of water reclamation centers, which have not existed before. In addition, we show the feasibility of the information models in terms of common use and processing performance.

  19. Modeling individual tree survival

    Treesearch

    Quang V. Cao

    2016-01-01

    Information provided by growth and yield models is the basis for forest managers to make decisions on how to manage their forests. Among different types of growth models, whole-stand models offer predictions at stand level, whereas individual-tree models give detailed information at tree level. The well-known logistic regression is commonly used to predict tree...

  20. Language-Independent and Language-Specific Aspects of Early Literacy: An Evaluation of the Common Underlying Proficiency Model

    ERIC Educational Resources Information Center

    Goodrich, J. Marc; Lonigan, Christopher J.

    2017-01-01

    According to the common underlying proficiency model (Cummins, 1981), as children acquire academic knowledge and skills in their first language, they also acquire language-independent information about those skills that can be applied when learning a second language. The purpose of this study was to evaluate the relevance of the common underlying…

  1. Models and the mosaic of scientific knowledge. The case of immunology.

    PubMed

    Baetu, Tudor M

    2014-03-01

    A survey of models in immunology is conducted and distinct kinds of models are characterized based on whether models are material or conceptual, the distinctiveness of their epistemic purpose, and the criteria for evaluating the goodness of a model relative to its intended purpose. I argue that the diversity of models in interdisciplinary fields such as immunology reflects the fact that information about the phenomena of interest is gathered from different sources using multiple methods of investigation. To each model is attached a description specifying how information about a phenomenon of interest has been acquired, highlighting points of commonality and difference between the methodological and epistemic histories of the information encapsulated in different models. These points of commonality and difference allow investigators to integrate findings from different models into more comprehensive explanatory accounts, as well as to troubleshoot anomalies and faulty accounts by going back to the original building blocks. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Information Commons to Go

    ERIC Educational Resources Information Center

    Bayer, Marc Dewey

    2008-01-01

    Since 2004, Buffalo State College's E. H. Butler Library has used the Information Commons (IC) model to assist its 8,500 students with library research and computer applications. Campus Technology Services (CTS) plays a very active role in its IC, with a centrally located Computer Help Desk and a newly created Application Support Desk right in the…

  3. A Transparent Translation from Legacy System Model into Common Information Model: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Fei; Simpson, Jeffrey; Zhang, Yingchen

    Advances in smart grid technology are forcing utilities toward better monitoring, control and analysis of distribution systems, and require extensive cyber-based intelligent systems and applications to realize various functionalities. The ability of systems, or components within systems, to interact and exchange services or information with each other is the key to the success of smart grid technologies, and it requires an efficient information exchange and data sharing infrastructure. The Common Information Model (CIM) is a standard that allows different applications to exchange information about an electrical system, and it has become a widely accepted solution for information exchange among different platforms and applications. However, most existing legacy systems were not developed using CIM, but using their own languages. Integrating such legacy systems is a challenge for utilities, and the appropriate utilization of the integrated legacy systems is even more intricate. Thus, this paper has developed an approach and an open-source tool to translate legacy system models into CIM format. The developed tool is tested on a commercial distribution management system, and simulation results have proved its effectiveness.
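
    As a concrete illustration of the translation task (a minimal sketch under assumed legacy field names, not the paper's open-source tool), a legacy line record can be mapped onto the standard CIM class cim:ACLineSegment and serialized as CIM RDF/XML:

        # Hypothetical legacy record -> CIM RDF/XML sketch (legacy field names assumed).
        CIM_NS = "http://iec.ch/TC57/2013/CIM-schema-cim16#"
        RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

        def line_to_cim(legacy: dict) -> str:
            """Render one legacy line record as a cim:ACLineSegment element."""
            return (
                f'<cim:ACLineSegment rdf:ID="{legacy["id"]}">\n'
                f'  <cim:IdentifiedObject.name>{legacy["name"]}</cim:IdentifiedObject.name>\n'
                f'  <cim:Conductor.length>{legacy["length_m"]}</cim:Conductor.length>\n'
                f'  <cim:ACLineSegment.r>{legacy["r_ohm"]}</cim:ACLineSegment.r>\n'
                f'  <cim:ACLineSegment.x>{legacy["x_ohm"]}</cim:ACLineSegment.x>\n'
                f'</cim:ACLineSegment>'
            )

        record = {"id": "Line_12", "name": "Feeder12-Seg3",
                  "length_m": 240.0, "r_ohm": 0.12, "x_ohm": 0.31}
        print(f'<rdf:RDF xmlns:cim="{CIM_NS}" xmlns:rdf="{RDF_NS}">\n'
              f'{line_to_cim(record)}\n</rdf:RDF>')

    The real work in such a translator is the mapping table from legacy classes and units to CIM classes and attributes; the serialization itself is mechanical.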

  4. System and method of designing models in a feedback loop

    DOEpatents

    Gosink, Luke C.; Pulsipher, Trenton C.; Sego, Landon H.

    2017-02-14

    A method and system for designing models is disclosed. The method includes selecting a plurality of models for modeling a common event of interest. The method further includes aggregating the results of the models and analyzing each model compared to the aggregate result to obtain comparative information. The method also includes providing the information back to the plurality of models to design more accurate models through a feedback loop.
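
    A toy sketch of the loop described (the model class, aggregation rule, and step size are our illustrative choices; the patent does not specify them): run several models of the same event, aggregate their results, and feed each model its deviation from the aggregate:

        import statistics

        class BiasedModel:
            """Stand-in model whose predictions carry an adjustable bias."""
            def __init__(self, bias):
                self.bias = bias
            def predict(self, x):
                return x + self.bias
            def adjust(self, error):
                self.bias += 0.5 * error   # damped correction from comparative info

        def feedback_round(models, x):
            preds = [m.predict(x) for m in models]
            aggregate = statistics.mean(preds)     # aggregate the models' results
            for m, p in zip(models, preds):
                m.adjust(aggregate - p)            # comparative information fed back
            return aggregate

        models = [BiasedModel(b) for b in (-2.0, 0.5, 3.0)]
        for _ in range(5):
            feedback_round(models, x=10.0)
        print([round(m.bias, 3) for m in models])  # biases contract toward consensus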

  5. Enforced Sparse Non-Negative Matrix Factorization

    DTIC Science & Technology

    2016-01-23

    A common analyst challenge is searching through large quantities of text documents to find interesting pieces of information. With limited resources, analysts often employ automated text-mining tools that highlight common ... represented as an undirected bipartite graph. It has become a common method for generating topic models of text data because it is known to produce good results ... model and the convergence rate of the underlying algorithm.
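
    For orientation (our sketch, using plain scikit-learn NMF rather than the report's enforced-sparse variant), the basic NMF topic-modeling workflow the fragments above refer to looks like this:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import NMF

        docs = ["troops moved to the border", "markets fell on rate fears",
                "border patrol increased", "central bank held rates steady"]
        X = TfidfVectorizer().fit_transform(docs)   # nonnegative term-weight matrix
        nmf = NMF(n_components=2, init="nndsvd", random_state=0)
        W = nmf.fit_transform(X)                    # document-topic weights
        H = nmf.components_                         # topic-term weights
        print(W.round(2))

    An enforced-sparse variant additionally constrains the factors to be sparse, so that each document loads on few topics and each topic on few terms.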

  6. Cross-cultural perspectives on physician and lay models of the common cold.

    PubMed

    Baer, Roberta D; Weller, Susan C; de Alba García, Javier García; Rocha, Ana L Salcedo

    2008-06-01

    We compare physicians and laypeople within and across cultures, focusing on similarities and differences across samples, to determine whether cultural differences or lay-professional differences have a greater effect on explanatory models of the common cold. Data on explanatory models for the common cold were collected from physicians and laypeople in South Texas and Guadalajara, Mexico. Structured interview materials were developed on the basis of open-ended interviews with samples of lay informants at each locale. A structured questionnaire was used to collect information from each sample on causes, symptoms, and treatments for the common cold. Consensus analysis was used to estimate the cultural beliefs for each sample. Instead of systematic differences between samples based on nationality or level of professional training, all four samples largely shared a single-explanatory model of the common cold, with some differences on subthemes, such as the role of hot and cold forces in the etiology of the common cold. An evaluation of our findings indicates that, although there has been conjecture about whether cultural or lay-professional differences are of greater importance in understanding variation in explanatory models of disease and illness, systematic data collected on community and professional beliefs indicate that such differences may be a function of the specific illness. Further generalizations about lay-professional differences need to be based on detailed data for a variety of illnesses, to discern patterns that may be present. Finally, a systematic approach indicates that agreement across individual explanatory models is sufficient to allow for a community-level explanatory model of the common cold.

  7. The CHIC Model: A Global Model for Coupled Binary Data

    ERIC Educational Resources Information Center

    Wilderjans, Tom; Ceulemans, Eva; Van Mechelen, Iven

    2008-01-01

    Often problems result in the collection of coupled data, which consist of different N-way N-mode data blocks that have one or more modes in common. To reveal the structure underlying such data, an integrated modeling strategy, with a single set of parameters for the common mode(s), that is estimated based on the information in all data blocks, may…

  8. The Common Patterns of Nature

    PubMed Central

    Frank, Steven A.

    2010-01-01

    We typically observe large-scale outcomes that arise from the interactions of many hidden, small-scale processes. Examples include age of disease onset, rates of amino acid substitutions, and composition of ecological communities. The macroscopic patterns in each problem often vary around a characteristic shape that can be generated by neutral processes. A neutral generative model assumes that each microscopic process follows unbiased or random stochastic fluctuations: random connections of network nodes; amino acid substitutions with no effect on fitness; species that arise or disappear from communities randomly. These neutral generative models often match common patterns of nature. In this paper, I present the theoretical background by which we can understand why these neutral generative models are so successful. I show where the classic patterns come from, such as the Poisson pattern, the normal or Gaussian pattern, and many others. Each classic pattern was often discovered by a simple neutral generative model. The neutral patterns share a special characteristic: they describe the patterns of nature that follow from simple constraints on information. For example, any aggregation of processes that preserves information only about the mean and variance attracts to the Gaussian pattern; any aggregation that preserves information only about the mean attracts to the exponential pattern; any aggregation that preserves information only about the geometric mean attracts to the power law pattern. I present a simple and consistent informational framework of the common patterns of nature based on the method of maximum entropy. This framework shows that each neutral generative model is a special case that helps to discover a particular set of informational constraints; those informational constraints define a much wider domain of non-neutral generative processes that attract to the same neutral pattern. PMID:19538344
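
    The informational claims in this abstract have a compact standard form (our notation, via the method of maximum entropy): choosing the density that maximizes entropy subject to normalization and a constraint on \mathbb{E}[f(x)] gives

        \max_p \; -\int p(x)\,\log p(x)\,dx
        \quad \text{s.t.} \quad \int p(x)\,dx = 1, \;\; \mathbb{E}[f(x)] = c
        \quad \Longrightarrow \quad p(x) \propto e^{-\lambda f(x)},

    so constraining only the mean (f(x) = x) attracts to the exponential pattern, constraining the mean and variance (f(x) = (x, x^2)) attracts to the Gaussian, and constraining the geometric mean (f(x) = \log x) attracts to the power law p(x) \propto x^{-\lambda}.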

  9. Modelling heterogeneity variances in multiple treatment comparison meta-analysis--are informative priors the better solution?

    PubMed

    Thorlund, Kristian; Thabane, Lehana; Mills, Edward J

    2013-01-11

    Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for the use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability, or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.
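
    In standard notation (ours, not reproduced from the paper), a random-effects MTC models the observed effect y_i of trial i comparing baseline treatment b_i with treatment t_i as

        y_i \sim N(\delta_i, s_i^2), \qquad \delta_i \sim N(d_{t_i} - d_{b_i}, \tau^2_{b_i t_i}),

    and the 'common variance' assumption criticized above sets \tau^2_{bt} = \tau^2 for every comparison. The four alternatives examined relax this by making the comparison-specific variances exchangeable, linking them through consistency equations, or placing moderately informative priors on them.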

  10. Detailed clinical models: representing knowledge, data and semantics in healthcare information technology.

    PubMed

    Goossen, William T F

    2014-07-01

    This paper presents an overview of the developmental effort in harmonizing clinical knowledge modeling using Detailed Clinical Models (DCMs), and explains how it can contribute to the preservation of Electronic Health Record (EHR) data. Clinical knowledge modeling is vital for the management and preservation of EHRs and their data. Such modeling provides common data elements and terminology binding with the intention of capturing and managing clinical information over time and across locations, independently of technology. Any EHR data exchange without agreed clinical knowledge modeling will potentially result in loss of information. Many past attempts exist to model clinical knowledge for the benefit of semantic interoperability using standardized data representation and common terminologies. The objective of each project is similar with respect to consistent representation of clinical data, use of standardized terminologies, and an overall logical approach. However, the conceptual, logical, and technical expressions are quite different from one clinical knowledge modeling approach to another. There are currently synergies under the Clinical Information Modeling Initiative (CIMI) to create a harmonized reference model for clinical knowledge models. The goal of CIMI is to create a reference model and formalisms based on, for instance, the DCM (ISO/TS 13972), among other work. A global repository of DCMs may potentially be established in the future.

  11. Modeling Common-Sense Decisions

    NASA Astrophysics Data System (ADS)

    Zak, Michail

    This paper presents a methodology for the efficient synthesis of a dynamical model simulating a common-sense decision making process. The approach is based upon an extension of physics' First Principles to include the behavior of living systems. The new architecture consists of motor dynamics simulating the actual behavior of the object, and mental dynamics representing the evolution of the corresponding knowledge base and incorporating it in the form of information flows into the motor dynamics. The autonomy of the decision making process is achieved by a feedback from mental to motor dynamics. This feedback replaces unavailable external information with an internal knowledge base stored in the mental model in the form of probability distributions.

  12. The Importance of Statistical Modeling in Data Analysis and Inference

    ERIC Educational Resources Information Center

    Rollins, Derrick, Sr.

    2017-01-01

    Statistical inference simply means to draw a conclusion based on information that comes from data. Error bars are the most commonly used tool for data analysis and inference in chemical engineering data studies. This work demonstrates, using common types of data collection studies, the importance of specifying the statistical model for sound…

  13. A model of collaborative agency and common ground.

    PubMed

    Kuziemsky, Craig E; Cornett, Janet Alexandra

    2013-01-01

    As more healthcare delivery is provided via collaborative means there is a need to understand how to design information and communication technologies (ICTs) to support collaboration. Existing research has largely focused on individual aspects of ICT usage and not how they can support the coordination of collaborative activities. In order to understand how we can design ICTs to support collaboration we need to understand how agents, technologies, information and processes integrate while providing collaborative care delivery. Co-agency and common ground have both provided insight about the integration of different entities as part of collaboration practices. However there is still a lack of understanding about how to coordinate the integration of agents, processes and technologies to support collaboration. This paper combines co-agency and common ground to develop a model of collaborative agency and specific categories of common ground to facilitate its coordination.

  14. Assessment of Life Cycle Information Exchanges (LCie): Understanding the Value-Added Benefit of a COBie Process

    DTIC Science & Technology

    2013-10-01

    ... The innovative aspect of Building Information Modeling (BIM) is that it creates a computable building description. The ability to use a ... interoperability. In order for the building information to be interoperable, it must also conform to a common data model, or schema, that defines the class ... Subject terms: Construction Operations Building information exchange (COBie), Building Information Modeling (BIM), value-added analysis, business processes, project management.

  15. The role of digital imaging and communications in medicine in an evolving healthcare computing environment: the model is the message.

    PubMed

    Bidgood, W D; alSafadi, Y; Tucker, M; Prior, F; Hagan, G; Mattison, J E

    1998-02-01

    The decision to use Digital Imaging and Communications in Medicine (DICOM), Health Level 7 (HL7), a common object broker such as the Common Object Request Broker Architecture (CORBA) or ActiveX (Microsoft Corp, Redmond, WA), or any other protocol for the transfer of DICOM data depends on the requirements of a particular implementation. The selection of protocol is independent of the information model. Our goal as message standards developers is to design a data interchange infrastructure that will faithfully convey the computer-based patient record and make it available to authorized health care providers when and where it is needed for patient care. DICOM accurately and expressively represents the clinically significant properties of images and the semantics of image-related information. The DICOM data model is small and well-defined. The model can be expressed in Standard Generalized Markup Language (SGML), Object Management Group Interface Definition Language, or other common syntax, and can be implemented using any reliable communications protocol. Therefore our opinion is that the DICOM semantic data model should serve as the basis for a logically equivalent set of specifications in HL7, CORBA, ActiveX, and SGML for the interchange of biomedical images and image-related information.

  16. SCA with rotation to distinguish common and distinctive information in linked data.

    PubMed

    Schouteden, Martijn; Van Deun, Katrijn; Pattyn, Sven; Van Mechelen, Iven

    2013-09-01

    Often data are collected that consist of different blocks that all contain information about the same entities (e.g., items, persons, or situations). In order to unveil both information that is common to all data blocks and information that is distinctive for one or a few of them, an integrated analysis of the whole of all data blocks may be most useful. Interesting classes of methods for such an approach are simultaneous-component and multigroup factor analysis methods. These methods yield dimensions underlying the data at hand. Unfortunately, however, in the results from such analyses, common and distinctive types of information are mixed up. This article proposes a novel method to disentangle the two kinds of information, by making use of the rotational freedom of component and factor models. We illustrate this method with data from a cross-cultural study of emotions.

  17. Modelling heterogeneity variances in multiple treatment comparison meta-analysis – Are informative priors the better solution?

    PubMed Central

    2013-01-01

    Background Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the ‘common variance’ assumption). This approach ‘borrows strength’ for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. Methods In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for the use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. Results In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. Conclusions MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability, or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice. PMID:23311298

  18. Peer Evaluation of Teaching in an Online Information Literacy Course

    ERIC Educational Resources Information Center

    Vega García, Susan A.; Stacy-Bates, Kristine K.; Alger, Jeff; Marupova, Rano

    2017-01-01

    This paper reports on the development and implementation of a process of peer evaluation of teaching to assess librarian instruction in a high-enrollment online information literacy course for undergraduates. This paper also traces a shift within libraries from peer coaching to peer evaluation models. One common model for peer evaluation, using…

  19. High Level Information Fusion (HLIF) with nested fusion loops

    NASA Astrophysics Data System (ADS)

    Woodley, Robert; Gosnell, Michael; Fischer, Amber

    2013-05-01

    Situation modeling and threat prediction require higher levels of data fusion in order to provide actionable information. Beyond the sensor data and sources the analyst has access to, the use of out-sourced and re-sourced data is becoming common. Through the years, some common frameworks have emerged for dealing with information fusion, perhaps the most ubiquitous being the JDL Data Fusion Group and their initial 4-level data fusion model. Since these initial developments, numerous models of information fusion have emerged, hoping to better capture the human-centric process of data analysis within a machine-centric framework. 21st Century Systems, Inc. has developed Fusion with Uncertainty Reasoning using Nested Assessment Characterizer Elements (FURNACE) to address the challenges of high level information fusion and handle bias, ambiguity, and uncertainty (BAU) for Situation Modeling, Threat Modeling, and Threat Prediction. It combines JDL fusion levels with nested fusion loops and state-of-the-art data reasoning. Initial research has shown that FURNACE is able to reduce BAU and improve the fusion process by allowing high level information fusion (HLIF) to affect lower levels without double counting of information or other biasing issues. The initial FURNACE project focused on the underlying algorithms to produce a fusion system able to handle BAU and repurposed data in a cohesive manner. FURNACE supports analysts' efforts to develop situation models, threat models, and threat predictions to increase situational awareness of the battlespace. FURNACE will not only revolutionize the military intelligence realm, but also benefit the larger homeland defense, law enforcement, and business intelligence markets.

  20. An Overview of Tools for Creating, Validating and Using PDS Metadata

    NASA Astrophysics Data System (ADS)

    King, T. A.; Hardman, S. H.; Padams, J.; Mafi, J. N.; Cecconi, B.

    2017-12-01

    NASA's Planetary Data System (PDS) has defined information models for creating metadata to describe bundles, collections and products for all the assets acquired by planetary science projects. Version 3 of the PDS Information Model (commonly known as "PDS3") is widely used and describes most of the existing planetary archive. Recently PDS has released version 4 of the Information Model (commonly known as "PDS4"), which is designed to improve the consistency, efficiency and discoverability of information. To aid in creating, validating and using PDS4 metadata, the PDS and a few associated groups have developed a variety of tools. In addition, some third-party tools, both free and commercial, can be used to create and work with PDS4 metadata. We present an overview of these tools, describe those currently under development, and provide guidance as to which tools may be most useful for missions, instrument teams and individual researchers.

  1. The Relationship between Clients' Conformity to Masculine Norms and Their Perceptions of Helpful Therapist Actions

    ERIC Educational Resources Information Center

    Owen, Jesse; Wong, Y. Joel; Rodolfa, Emil

    2010-01-01

    T. J. G. Tracey et al.'s (2003) common factors model derived from therapists and psychotherapy researchers has provided a parsimonious structure to inform research and practice. Accordingly, the current authors used the 14 common factor categories identified in Tracey et al.'s model as a guide to code clients' perceptions of helpful therapist…

  2. Can Cognitive Writing Models Inform the Design of the Common Core State Standards?

    ERIC Educational Resources Information Center

    Hayes, John R.; Olinghouse, Natalie G.

    2015-01-01

    In this article, we compare the Common Core State Standards in Writing to the Hayes cognitive model of writing, adapted to describe the performance of young and developing writers. Based on the comparison, we propose the inclusion of standards for motivation, goal setting, writing strategies, and attention by writers to the text they have just…

  3. Early experiences in evolving an enterprise-wide information model for laboratory and clinical observations.

    PubMed

    Chen, Elizabeth S; Zhou, Li; Kashyap, Vipul; Schaeffer, Molly; Dykes, Patricia C; Goldberg, Howard S

    2008-11-06

    As Electronic Healthcare Records become more prevalent, there is an increasing need to ensure unambiguous data capture, interpretation, and exchange within and across heterogeneous applications. To address this need, a common, uniform, and comprehensive approach for representing clinical information is essential. At Partners HealthCare System, we are investigating the development and implementation of enterprise-wide information models to specify the representation of clinical information to support semantic interoperability. This paper summarizes our early experiences in: (1) defining a process for information model development, (2) reviewing and comparing existing healthcare information models, (3) identifying requirements for representation of laboratory and clinical observations, and (4) exploring linkages to existing terminology and data standards. These initial findings provide insight to the various challenges ahead and guidance on next steps for adoption of information models at our organization.

  4. Skew-t partially linear mixed-effects models for AIDS clinical studies.

    PubMed

    Lu, Tao

    2016-01-01

    We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distribution for model errors is replaced by an asymmetric distribution to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset, and comparisons with alternative models are performed.
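
    A generic form of this model class (our notation; the paper's exact specification may differ in details) is

        y_{ij} = x_{ij}^{\top}\beta + w(t_{ij}) + z_{ij}^{\top} b_i + \varepsilon_{ij},
        \qquad b_i \sim N(0, \Sigma), \qquad \varepsilon_{ij} \sim \mathrm{ST}(0, \sigma^2, \delta, \nu),

    where w(\cdot) is a nonparametric smooth function capturing the irregular time effects, b_i are subject-level random effects, and the skew-t error \mathrm{ST}, with skewness parameter \delta and degrees of freedom \nu, replaces the usual symmetric normal error.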

  5. User Manual for SAHM package for VisTrails

    USGS Publications Warehouse

    Talbert, C.B.; Talbert, M.K.

    2012-01-01

    The Software for Assisted Habitat Modeling (SAHM) has been created to both expedite habitat modeling and help maintain a record of the various input data, pre- and post-processing steps, and modeling options incorporated in the construction of a species distribution model. The four main advantages of using the combined VisTrails:SAHM package for species distribution modeling are: (1) formalization and tractable recording of the entire modeling process; (2) easier collaboration through a common modeling framework; (3) a user-friendly graphical interface to manage file input, model runs, and output; and (4) extensibility to incorporate future and additional modeling routines and tools. This user manual provides detailed information on each module within the SAHM package, their inputs, outputs, common connections, optional arguments, and default settings. This information can also be accessed for individual modules by right-clicking on the documentation button for any module in VisTrails, or by right-clicking on any input or output for a module and selecting "view documentation". This user manual is intended to accompany the user guide, which provides detailed instructions on how to install the SAHM package within VisTrails and presents information on the use of the package.

  6. Model Selection Indices for Polytomous Items

    ERIC Educational Resources Information Center

    Kang, Taehoon; Cohen, Allan S.; Sung, Hyun-Jung

    2009-01-01

    This study examines the utility of four indices for use in model selection with nested and nonnested polytomous item response theory (IRT) models: a cross-validation index and three information-based indices. Four commonly used polytomous IRT models are considered: the graded response model, the generalized partial credit model, the partial credit…

  7. Genetically informed ecological niche models improve climate change predictions.

    PubMed

    Ikeda, Dana H; Max, Tamara L; Allan, Gerard J; Lau, Matthew K; Shuster, Stephen M; Whitham, Thomas G

    2017-01-01

    We examined the hypothesis that ecological niche models (ENMs) more accurately predict species distributions when they incorporate information on population genetic structure, and concomitantly, local adaptation. Local adaptation is common in species that span a range of environmental gradients (e.g., soils and climate). Moreover, common garden studies have demonstrated a covariance between neutral markers and functional traits associated with a species' ability to adapt to environmental change. We therefore predicted that genetically distinct populations would respond differently to climate change, resulting in predicted distributions with little overlap. To test whether genetic information improves our ability to predict a species' niche space, we created genetically informed ecological niche models (gENMs) using Populus fremontii (Salicaceae), a widespread tree species in which prior common garden experiments demonstrate strong evidence for local adaptation. Four major findings emerged: (i) gENMs predicted population occurrences with up to 12-fold greater accuracy than models without genetic information; (ii) tests of niche similarity revealed that three ecotypes, identified on the basis of neutral genetic markers and locally adapted populations, are associated with differences in climate; (iii) our forecasts indicate that ongoing climate change will likely shift these ecotypes further apart in geographic space, resulting in greater niche divergence; (iv) ecotypes that currently exhibit the largest geographic distribution and niche breadth appear to be buffered the most from climate change. As diverse agents of selection shape genetic variability and structure within species, we argue that gENMs will lead to more accurate predictions of species distributions under climate change. © 2016 John Wiley & Sons Ltd.

  8. Systematic Applications of Metabolomics in Metabolic Engineering

    PubMed Central

    Dromms, Robert A.; Styczynski, Mark P.

    2012-01-01

    The goals of metabolic engineering are well-served by the biological information provided by metabolomics: information on how the cell is currently using its biochemical resources is perhaps one of the best ways to inform strategies to engineer a cell to produce a target compound. Using the analysis of extracellular or intracellular levels of the target compound (or a few closely related molecules) to drive metabolic engineering is quite common. However, there is surprisingly little systematic use of metabolomics datasets, which simultaneously measure hundreds of metabolites rather than just a few, for that same purpose. Here, we review the most common systematic approaches to integrating metabolite data with metabolic engineering, with emphasis on existing efforts to use whole-metabolome datasets. We then review some of the most common approaches for computational modeling of cell-wide metabolism, including constraint-based models, and discuss current computational approaches that explicitly use metabolomics data. We conclude with discussion of the broader potential of computational approaches that systematically use metabolomics data to drive metabolic engineering. PMID:24957776
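
    The constraint-based models mentioned here are typified by flux balance analysis, which in standard notation (ours) solves the linear program

        \max_{v} \; c^{\top} v \quad \text{s.t.} \quad S v = 0, \qquad l \le v \le u,

    where S is the stoichiometric matrix, v is the vector of reaction fluxes, and c encodes the engineering objective (e.g., flux toward the target compound). Metabolomics data enter naturally by tightening the flux bounds l and u or by constraining exchange fluxes to measured uptake and secretion rates.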

  9. Software model of a machine vision system based on the common house fly.

    PubMed

    Madsen, Robert; Barrett, Steven; Wilcox, Michael

    2005-01-01

    The vision system of the common house fly has many properties, such as hyperacuity and parallel structure, which would be advantageous in a machine vision system. A software model has been developed which is ultimately intended to be a tool to guide the design of an analog real-time vision system. The model starts by laying out cartridges over an image. The cartridges are analogous to the ommatidia of the fly's eye and contain seven photoreceptors, each with a Gaussian profile. The spacing between photoreceptors is variable, providing more or less detail as needed. The cartridges provide information on what type of features they see, and neighboring cartridges share information to construct a feature map.
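
    To make the cartridge idea concrete, here is a minimal sketch (not the authors' code) of Gaussian-profile photoreceptor sampling over a grayscale image held as a NumPy array; the seven-receptor hexagonal layout and all parameter values (spacing, kernel size, sigma) are illustrative assumptions.

    ```python
    import numpy as np

    def gaussian_kernel(size, sigma):
        """2D Gaussian weighting profile for a single photoreceptor."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
        return k / k.sum()

    def cartridge_response(image, cx, cy, spacing=4.0, size=9, sigma=2.0):
        """Responses of seven Gaussian photoreceptors: one central and six
        arranged hexagonally around (cx, cy), loosely mimicking an ommatidium."""
        kernel = gaussian_kernel(size, sigma)
        offsets = [(0.0, 0.0)] + [
            (spacing * np.cos(a), spacing * np.sin(a))
            for a in np.deg2rad(np.arange(0, 360, 60))
        ]
        half = size // 2
        responses = []
        for dx, dy in offsets:
            x, y = int(round(cx + dx)), int(round(cy + dy))
            patch = image[y - half:y + half + 1, x - half:x + half + 1]
            responses.append(float((patch * kernel).sum()))
        return responses

    # Lay cartridges over the image on a grid; the grid step plays the role
    # of the variable spacing that yields more or less detail as needed.
    image = np.random.rand(64, 64)
    grid = [(x, y) for y in range(8, 56, 8) for x in range(8, 56, 8)]
    feature_map = {c: cartridge_response(image, *c) for c in grid}
    ```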

  10. Sensitivity Analysis of Multiple Informant Models When Data Are Not Missing at Random

    ERIC Educational Resources Information Center

    Blozis, Shelley A.; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N.; Shaw, Daniel S.; Neiderhiser, Jenae M.; Scaramella, Laura V.; Leve, Leslie D.; Reiss, David

    2013-01-01

    Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups can be retained for analysis even if only 1 member of a group contributes…

  11. A comment on priors for Bayesian occupancy models.

    PubMed

    Northrup, Joseph M; Gerber, Brian D

    2018-01-01

    Understanding patterns of species occurrence and the processes underlying these patterns is fundamental to the study of ecology. One of the more commonly used approaches to investigate species occurrence patterns is occupancy modeling, which can account for imperfect detection of a species during surveys. In recent years, there has been a proliferation of Bayesian modeling in ecology, which includes fitting Bayesian occupancy models. The Bayesian framework is appealing to ecologists for many reasons, including the ability to incorporate prior information through the specification of prior distributions on parameters. While ecologists almost exclusively intend to choose priors so that they are "uninformative" or "vague", such priors can easily be unintentionally highly informative. Here we report on how the specification of a "vague" normally distributed (i.e., Gaussian) prior on coefficients in Bayesian occupancy models can unintentionally influence parameter estimation. Using both simulated data and empirical examples, we illustrate how this issue likely compromises inference about species-habitat relationships. While the extent to which these informative priors influence inference depends on the data set, researchers fitting Bayesian occupancy models should conduct sensitivity analyses to ensure intended inference, or employ less commonly used priors that are less informative (e.g., logistic or t prior distributions). We provide suggestions for addressing this issue in occupancy studies, and an online tool for exploring this issue under different contexts.
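
    The effect described here is easy to reproduce: a "vague" Normal(0, 10) prior on a logit-scale parameter induces a U-shaped prior on the occupancy probability, while a standard logistic prior maps to an exactly uniform one. The sketch below is a minimal illustration under those assumptions, not the authors' code or data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def inv_logit(x):
        return 1.0 / (1.0 + np.exp(-x))

    # A "vague" Normal(0, sd=10) prior on a logit-scale intercept...
    beta = rng.normal(0.0, 10.0, size=100_000)
    psi = inv_logit(beta)  # induced prior on the occupancy probability

    # ...piles mass near 0 and 1 instead of being flat:
    hist, _ = np.histogram(psi, bins=10, range=(0, 1), density=True)
    print(np.round(hist, 2))  # U-shaped: large at the ends, near zero in the middle

    # A logistic(0, 1) prior on the logit scale is flat after transformation:
    u = rng.logistic(0.0, 1.0, size=100_000)
    hist2, _ = np.histogram(inv_logit(u), bins=10, range=(0, 1), density=True)
    print(np.round(hist2, 2))  # approximately uniform
    ```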

  12. DEVELOPMENT OF GUIDELINES FOR CALIBRATING, VALIDATING, AND EVALUATING HYDROLOGIC AND WATER QUALITY MODELS: ASABE ENGINEERING PRACTICE 621

    USDA-ARS?s Scientific Manuscript database

    Information to support application of hydrologic and water quality (H/WQ) models abounds, yet modelers commonly use arbitrary, ad hoc methods to conduct, document, and report model calibration, validation, and evaluation. Consistent methods are needed to improve model calibration, validation, and e...

  13. Decision Maker Perception of Information Quality: A Case Study of Military Command and Control

    ERIC Educational Resources Information Center

    Morgan, Grayson B.

    2013-01-01

    Decision maker perception of information quality cues from an "information system" (IS) and the process which creates such meta cueing, or data about cues, is a critical yet un-modeled component of "situation awareness" (SA). Examples of common information quality meta cueing for quality criteria include custom ring-tones for…

  14. Harmonizing routinely collected health information for strengthening quality management in health systems: requirements and practice.

    PubMed

    Prodinger, Birgit; Tennant, Alan; Stucki, Gerold; Cieza, Alarcos; Üstün, Tevfik Bedirhan

    2016-10-01

    Our aim was to specify the requirements of an architecture to serve as the foundation for standardized reporting of health information and to provide an exemplary application of this architecture. The World Health Organization's International Classification of Functioning, Disability and Health (ICF) served as the conceptual framework. The ICF Linking Rules were used to establish content comparability. The Rasch measurement model, as a special case of additive conjoint measurement that satisfies the required criteria for fundamental measurement, allowed for the development of a common metric foundation for measurement unit conversion. Secondary analysis of data from the North Yorkshire Survey was used to illustrate these methods. Patients completed three instruments and the items were linked to the ICF. The Rasch measurement model was applied, first to each scale, and then to items across scales which were linked to a common domain. Based on the linking of items to the ICF, the majority of items were grouped into two domains, Mobility and Self-care. Analysis of the individual scales and of items linked to a common domain across scales satisfied the requirements of the Rasch measurement model. The measurement unit conversion between items from the three instruments linked to the Mobility and Self-care domains, respectively, was demonstrated. The realization of an ICF-based architecture for information on patients' functioning enables harmonization of health information while allowing clinicians and researchers to continue using their existing instruments. This architecture will facilitate access to comprehensive and consistently reported health information to serve as the foundation for informed decision-making. © The Author(s) 2016.
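
    For reference, the dichotomous Rasch model at the core of this architecture (standard formulation, not quoted from the paper) makes the probability of endorsing item i depend only on the difference between person ability and item difficulty, which is what places all linked items on a single logit metric and makes unit conversion between instruments possible:

    ```latex
    % Person p, item i; \theta_p = person ability, b_i = item difficulty.
    P(X_{pi} = 1 \mid \theta_p, b_i)
      = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)}
    ```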

  15. Common elements of adolescent prevention programs: minimizing burden while maximizing reach.

    PubMed

    Boustani, Maya M; Frazier, Stacy L; Becker, Kimberly D; Bechor, Michele; Dinizulu, Sonya M; Hedemann, Erin R; Ogle, Robert R; Pasalich, Dave S

    2015-03-01

    A growing number of evidence-based youth prevention programs are available, but challenges related to dissemination and implementation limit their reach and impact. The current review identifies common elements across evidence-based prevention programs focused on the promotion of health-related outcomes in adolescents. We reviewed and coded descriptions of the programs for common practice and instructional elements. Problem-solving emerged as the most common practice element, followed by communication skills, and insight building. Psychoeducation, modeling, and role play emerged as the most common instructional elements. In light of significant comorbidity in poor outcomes for youth, and corresponding overlap in their underlying skills deficits, we propose that synthesizing the prevention literature using a common elements approach has the potential to yield novel information and inform prevention programming to minimize burden and maximize reach and impact for youth.

  16. TOWARDS AN AUTOMATED TOOL FOR CHANNEL-NETWORK CHARACTERIZATIONS, MODELING, AND ASSESSMENT

    EPA Science Inventory

    Detailed characterization of channel networks for hydrologic and geomorphic models has traditionally been a difficult and expensive proposition, and lack of information has thus been a common limitation of modeling efforts. With the advent of datasets derived from high-resolutio...

  17. Author’s response: A universal approach to modeling visual word recognition and reading: not only possible, but also inevitable.

    PubMed

    Frost, Ram

    2012-10-01

    I have argued that orthographic processing cannot be understood and modeled without considering the manner in which orthographic structure represents phonological, semantic, and morphological information in a given writing system. A reading theory, therefore, must be a theory of the interaction of the reader with his/her linguistic environment. This outlines a novel approach to studying and modeling visual word recognition, an approach that focuses on the common cognitive principles involved in processing printed words across different writing systems. These claims were challenged by several commentaries that contested the merits of my general theoretical agenda, the relevance of the evolution of writing systems, and the plausibility of finding commonalities in reading across orthographies. Other commentaries extended the scope of the debate by bringing into the discussion additional perspectives. My response addresses all these issues. By considering the constraints of neurobiology on modeling reading, developmental data, and a large scope of cross-linguistic evidence, I argue that front-end implementations of orthographic processing that do not stem from a comprehensive theory of the complex information conveyed by writing systems do not present a viable approach for understanding reading. The common principles by which writing systems have evolved to represent orthographic, phonological, and semantic information in a language reveal the critical distributional characteristics of orthographic structure that govern reading behavior. Models of reading should thus be learning models, primarily constrained by cross-linguistic developmental evidence that describes how the statistical properties of writing systems shape the characteristics of orthographic processing. When this approach is adopted, a universal model of reading is possible.

  18. A methodology proposal for collaborative business process elaboration using a model-driven approach

    NASA Astrophysics Data System (ADS)

    Mu, Wenxin; Bénaben, Frédérick; Pingaud, Hervé

    2015-05-01

    Business process management (BPM) principles are commonly used to improve processes within an organisation. But they can equally be applied to supporting the design of an Information System (IS). In a collaborative situation involving several partners, this type of BPM approach may be useful to support the design of a Mediation Information System (MIS), which would ensure interoperability between the partners' ISs (which are assumed to be service oriented). To achieve this objective, the first main task is to build a collaborative business process cartography. The aim of this article is to present a method for bringing together collaborative information and elaborating collaborative business processes from the information gathered, using a collaborative situation framework, an organisational model, an informational model, a functional model and a metamodel, together with model transformation rules.

  19. The ISACA Business Model for Information Security: An Integrative and Innovative Approach

    NASA Astrophysics Data System (ADS)

    von Roessing, Rolf

    In recent years, information security management has matured into a professional discipline that covers both technical and managerial aspects in an organisational environment. Information security is increasingly dependent on business-driven parameters and interfaces to a variety of organisational units and departments. In contrast, common security models and frameworks have remained largely technical. A review of extant models ranging from [LaBe73] to more recent models shows that technical aspects are covered in great detail, while the managerial aspects of security are often neglected. Likewise, the business view on organisational security is frequently at odds with the demands of information security personnel or information technology management. In practice, senior and executive-level management remain comparatively distant from technical requirements. As a result, information security is generally regarded as a cost factor rather than a benefit to the organisation.

  20. Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2010-01-01

    Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…
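
    The two estimators contrasted here are standard: the observed information is the negative Hessian of the log-likelihood at the ML estimate, while the expected information is its expectation under the model; standard errors come from the inverse of either matrix. A compact statement in conventional notation (not quoted from the article):

    ```latex
    % \ell(\theta) = log-likelihood, \hat\theta = ML estimate.
    \mathcal{I}_{\mathrm{obs}}(\hat\theta)
      = -\left.\frac{\partial^{2}\ell(\theta)}{\partial\theta\,\partial\theta^{\top}}\right|_{\theta=\hat\theta},
    \qquad
    \mathcal{I}_{\mathrm{exp}}(\theta) = \mathbb{E}\!\left[\mathcal{I}_{\mathrm{obs}}(\theta)\right],
    \qquad
    \operatorname{SE}(\hat\theta_{j}) = \sqrt{\left[\mathcal{I}(\hat\theta)^{-1}\right]_{jj}}
    ```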

  1. The Significance of the Understanding of Balance and Coordination in Self-Cognitive "Bio-Electro-Biblio/Info" Systems.

    ERIC Educational Resources Information Center

    Tsai, Bor-sheng

    1991-01-01

    Examines the information communication process and proposes a fuzzy commonality model for improving communication systems. Topics discussed include components of an electronic information programming and processing system and the flow of the formation and transfer of information, including DOS (disk operating system) commands, computer programming…

  2. Managing Information Technology in Student Affairs: A Report on Policies, Practices, Staffing, and Technology.

    ERIC Educational Resources Information Center

    Barratt, Will

    This pilot study looks into how information technology practices are being conducted in student affairs. It identifies common practices against which exemplary programs and best practices can be measured. After information was gathered from five universities, a model was created that encompassed policy, staffing, technology, and practice as the best…

  3. Model Development for EHR Interdisciplinary Information Exchange of ICU Common Goals

    PubMed Central

    Collins, Sarah A.; Bakken, Suzanne; Vawdrey, David K.; Coiera, Enrico; Currie, Leanne

    2010-01-01

    Purpose Effective interdisciplinary exchange of patient information is an essential component of safe, efficient, and patient-centered care in the intensive care unit (ICU). Frequent handoffs of patient care, high acuity of patient illness, and the increasing amount of available data complicate information exchange. Verbal communication can be affected by interruptions and time limitations. To supplement verbal communication, many ICUs rely on documentation in electronic health records (EHRs) to reduce errors of omission and information loss. The purpose of this study was to develop a model of EHR interdisciplinary information exchange of ICU common goals. Methods The theoretical frameworks of distributed cognition and the clinical communication space were integrated and a previously published categorization of verbal information exchange was used. 59.5 hours of interdisciplinary rounds in a Neurovascular ICU were observed and five interviews and one focus group with ICU nurses and physicians were conducted. Results Current documentation tools in the ICU were not sufficient to capture the nurses' and physicians' collaborative decision-making and verbal communication of goal-directed actions and interactions. Clinicians perceived the EHR to be inefficient for information retrieval, leading to a further reliance on verbal information exchange. Conclusion The model suggests that EHRs should support: 1) Information tools for the explicit documentation of goals, interventions, and assessments with synthesized and summarized information outputs of events and updates; and 2) Messaging tools that support collaborative decision-making and patient safety double checks that currently occur between nurses and physicians in the absence of EHR support. PMID:20974549

  4. Information risk and security modeling

    NASA Astrophysics Data System (ADS)

    Zivic, Predrag

    2005-03-01

    This research paper presentation will feature current frameworks for addressing risk and security modeling and metrics. The paper will analyze technical-level risk and security metrics from Common Criteria/ISO15408, the Centre for Internet Security guidelines, and NSA configuration guidelines, together with the metrics used at this level. The view of IT operational standards on security metrics, such as GMITS/ISO13335 and ITIL/ITMS, and of architectural guidelines such as ISO7498-2, will be explained. Business-process-level standards such as ISO17799, COSO and CobiT will be presented with their control approach to security metrics. At the top level, maturity standards such as SSE-CMM/ISO21827, the NSA Infosec Assessment and CobiT will be explored and reviewed. For each defined level of security metrics, the presentation will explore the appropriate usage of these standards and will discuss standards-based approaches to conducting risk and security metrics. The research findings demonstrate the need for a common baseline for both risk and security metrics. The paper will show the relation between the attribute-based common baseline and corporate assets and controls for risk and security metrics, and that such an approach spans all of the standards mentioned. The proposed 3D visual presentation and the development of the Information Security Model will be analyzed, and the presentation will clearly demonstrate the benefits of the proposed attribute-based approach and of a defined risk and security space for modeling and measuring.

  5. A comment on priors for Bayesian occupancy models

    PubMed Central

    Gerber, Brian D.

    2018-01-01

    Understanding patterns of species occurrence and the processes underlying these patterns is fundamental to the study of ecology. One of the more commonly used approaches to investigate species occurrence patterns is occupancy modeling, which can account for imperfect detection of a species during surveys. In recent years, there has been a proliferation of Bayesian modeling in ecology, which includes fitting Bayesian occupancy models. The Bayesian framework is appealing to ecologists for many reasons, including the ability to incorporate prior information through the specification of prior distributions on parameters. While ecologists almost exclusively intend to choose priors so that they are “uninformative” or “vague”, such priors can easily be unintentionally highly informative. Here we report on how the specification of a “vague” normally distributed (i.e., Gaussian) prior on coefficients in Bayesian occupancy models can unintentionally influence parameter estimation. Using both simulated data and empirical examples, we illustrate how this issue likely compromises inference about species-habitat relationships. While the extent to which these informative priors influence inference depends on the data set, researchers fitting Bayesian occupancy models should conduct sensitivity analyses to ensure intended inference, or employ less commonly used priors that are less informative (e.g., logistic or t prior distributions). We provide suggestions for addressing this issue in occupancy studies, and an online tool for exploring this issue under different contexts. PMID:29481554

  6. The relative roles of environment, history and local dispersal in controlling the distributions of common tree and shrub species in a tropical forest landscape, Panama

    USGS Publications Warehouse

    Svenning, J.-C.; Engelbrecht, B.M.J.; Kinner, D.A.; Kursar, T.A.; Stallard, R.F.; Wright, S.J.

    2006-01-01

    We used regression models and information-theoretic model selection to assess the relative importance of environment, local dispersal and historical contingency as controls of the distributions of 26 common plant species in tropical forest on Barro Colorado Island (BCI), Panama. We censused eighty-eight 0.09-ha plots scattered across the landscape. Environmental control, local dispersal and historical contingency were represented by environmental variables (soil moisture, slope, soil type, distance to shore, old-forest presence), a spatial autoregressive parameter (ρ), and four spatial trend variables, respectively. We built regression models, representing all combinations of the three hypotheses, for each species. The probability that the best model included the environmental variables, spatial trend variables and ρ averaged 33%, 64% and 50% across the study species, respectively. The environmental variables, spatial trend variables, ρ, and a simple intercept model received the strongest support for 4, 15, 5 and 2 species, respectively. Comparing the model results to information on species traits showed that species with strong spatial trends produced few and heavy diaspores, while species with strong soil moisture relationships were particularly drought-sensitive. In conclusion, history and local dispersal appeared to be the dominant controls of the distributions of common plant species on BCI. Copyright © 2006 Cambridge University Press.
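
    The model-selection probabilities reported here are characteristic of information-theoretic multimodel inference; one common way to obtain such probabilities is via Akaike weights. The sketch below uses hypothetical AIC values (the model names and numbers are illustrative, not taken from the study).

    ```python
    import numpy as np

    def akaike_weights(aic):
        """Akaike weights: the relative probability that each candidate
        model is the best (in the Kullback-Leibler sense) in the set."""
        aic = np.asarray(aic, dtype=float)
        delta = aic - aic.min()
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    # Hypothetical AIC values for models combining environment (E),
    # dispersal (D), and history (H) hypotheses for one species:
    models = ["E", "D", "H", "E+D", "E+H", "D+H", "E+D+H", "intercept"]
    aic = [210.4, 205.1, 208.9, 204.0, 207.3, 203.2, 204.8, 215.6]
    w = akaike_weights(aic)
    for m, wi in sorted(zip(models, w), key=lambda t: -t[1]):
        print(f"{m:9s} w = {wi:.3f}")
    # The probability that the best model includes, say, dispersal is the
    # summed weight of all models containing D.
    ```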

  7. Framework for a clinical information system.

    PubMed

    Van De Velde, R; Lansiers, R; Antonissen, G

    2002-01-01

    The design and implementation of a Clinical Information System architecture are presented. This architecture has been developed and implemented based on components, following a strong underlying conceptual and technological model. Common Object Request Broker and n-tier technology are used, featuring centralised and departmental clinical information systems as the back-end store for all clinical data. Servers located in the "middle" tier apply the clinical (business) model and application rules. The main characteristics are the focus on modelling and the reuse of both data and business logic. Scalability, as well as adaptability to constantly changing requirements via component-driven computing, are the main reasons for this approach.

  8. An Ontology-Based Archive Information Model for the Planetary Science Community

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel J.; Mattmann, Chris

    2008-01-01

    The Planetary Data System (PDS) information model is a mature but complex model that has been used to capture over 30 years of planetary science data for the PDS archive. As the de facto information model for the planetary science data archive, it is being adopted by the International Planetary Data Alliance (IPDA) as their archive data standard. However, after seventeen years of evolutionary change the model needs refinement. First, a formal specification is needed to explicitly capture the model in a commonly accepted data engineering notation. Second, the core and essential elements of the model need to be identified to help simplify the overall archive process. A team of PDS technical staff members has captured the PDS information model in an ontology modeling tool. Using the resulting knowledge base, work continues to identify the core elements, identify problems and issues, and then test proposed modifications to the model. The final deliverables of this work will include specifications for the next-generation PDS information model and the initial set of IPDA archive data standards. Having the information model captured in an ontology modeling tool also makes the model suitable for use by Semantic Web applications.

  9. A Reference Architecture for Space Information Management

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Crichton, Daniel J.; Hughes, J. Steven; Ramirez, Paul M.; Berrios, Daniel C.

    2006-01-01

    We describe a reference architecture for space information management systems that elegantly overcomes the rigid design of common information systems in many domains. The reference architecture consists of a set of flexible, reusable, independent models and software components that function in unison, but remain separately managed entities. The main guiding principle of the reference architecture is to separate the various models of information (e.g., data, metadata, etc.) from implemented system code, allowing each to evolve independently. System modularity, systems interoperability, and dynamic evolution of information system components are the primary benefits of the design of the architecture. The architecture requires the use of information models that are substantially more advanced than those used by the vast majority of information systems. These models are more expressive and can be more easily modularized, distributed and maintained than simpler models, e.g., configuration files and data dictionaries. Our current work focuses on formalizing the architecture within a CCSDS Green Book and evaluating the architecture within the context of the C3I initiative.

  10. Postmodernism, Therapeutic Common Factors, and Innovation: Contemporary Influences Shaping a More Efficient and Effective Model of School Counseling

    ERIC Educational Resources Information Center

    Klein, James F.; Gee, Corrie R.

    2006-01-01

    School counseling lacks clarity. This confusion is the result of competing models and confusing standards, domains, and competencies. This article offers a simplified model of school counseling entitled the "Six 'C' Model" (i.e., Care, Collaboration, Champion, Challenge, Courage, and Commitment). The interactive model is informed by the…

  11. The Influence of Information Acquisition on the Complex Dynamics of Market Competition

    NASA Astrophysics Data System (ADS)

    Guo, Zhanbing; Ma, Junhai

    In this paper, we build a dynamical game model with three bounded rational players (firms) to study the influence of information on the complex dynamics of market competition, where the useful information concerns a rival's actual decision. In this dynamical game model, one information-sharing team is composed of two firms that acquire and share information about their common competitor but make their own decisions separately; the amount of information acquired by this team determines the accuracy of their estimate of the rival's actual decision. Based on this dynamical game model and a set of 3D diagrams, the influence of the amount of information on the complex dynamics of market competition, including local dynamics, global dynamics and profits, is studied. These results have significant theoretical and practical value for understanding the influence of information.
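
    As a hedged illustration of this class of model (the paper's exact equations are not reproduced here), the sketch below uses a standard bounded-rationality gradient-adjustment map with linear demand, in which the two team firms act on a shared, noisy estimate of the common rival's output whose accuracy improves with the amount of acquired information k; all functional forms and parameter values are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    a, b, c = 10.0, 1.0, 1.0   # linear inverse demand p = a - b*Q, unit cost c
    alpha = 0.3                # speed of adjustment (bounded rationality)

    def step(q, k):
        """One gradient-adjustment step. Firms 0 and 1 form the
        information-sharing team: they act on a noisy estimate of the common
        rival's output q[2], with noise shrinking as information k grows."""
        q_new = q.copy()
        sigma = 1.0 / np.sqrt(1.0 + k)          # estimation accuracy from information
        q3_hat = q[2] + rng.normal(0.0, sigma)  # shared estimate of the rival
        for i in (0, 1):
            marginal_profit = a - c - 2 * b * q[i] - b * (q[1 - i] + q3_hat)
            q_new[i] = max(0.0, q[i] + alpha * q[i] * marginal_profit)
        # the common competitor observes true outputs
        marginal_profit = a - c - 2 * b * q[2] - b * (q[0] + q[1])
        q_new[2] = max(0.0, q[2] + alpha * q[2] * marginal_profit)
        return q_new

    for k in (0, 10, 100):
        q = np.array([1.0, 1.2, 0.8])
        for _ in range(200):
            q = step(q, k)
        print(k, np.round(q, 3))  # more information -> tighter around Cournot output
    ```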

  12. The Mechanisms of Water Exchange: The Regulatory Roles of Multiple Interactions in Social Wasps.

    PubMed

    Agrawal, Devanshu; Karsai, Istvan

    2016-01-01

    Evolutionary benefits of task fidelity and improving information acquisition via multiple transfers of materials between individuals in a task partitioned system have been shown before, but in this paper we provide a mechanistic explanation of these phenomena. Using a simple mathematical model describing the individual interactions of the wasps, we explain the functioning of the common stomach, an information center, which governs construction behavior and task change. Our central hypothesis is a symmetry between foragers who deposit water and foragers who withdraw water into and out of the common stomach. We combine this with a trade-off between acceptance and resistance to water transfer. We ultimately derive a mathematical function that relates the number of interactions that foragers complete with common stomach wasps during a foraging cycle. We use field data and additional model assumptions to calculate values of our model parameters, and we use these to explain why the fullness of the common stomach stabilizes just below 50 percent, why the average number of successful interactions between foragers and the wasps forming the common stomach is between 5 and 7, and why there is a variation in this number of interactions over time. Our explanation is that our proposed water exchange mechanism places natural bounds on the number of successful interactions possible, water exchange is set to optimize mediation of water through the common stomach, and the chance that foragers abort their task prematurely is very low.

  13. The Mechanisms of Water Exchange: The Regulatory Roles of Multiple Interactions in Social Wasps

    PubMed Central

    Agrawal, Devanshu; Karsai, Istvan

    2016-01-01

    Evolutionary benefits of task fidelity and improving information acquisition via multiple transfers of materials between individuals in a task partitioned system have been shown before, but in this paper we provide a mechanistic explanation of these phenomena. Using a simple mathematical model describing the individual interactions of the wasps, we explain the functioning of the common stomach, an information center, which governs construction behavior and task change. Our central hypothesis is a symmetry between foragers who deposit water and foragers who withdraw water into and out of the common stomach. We combine this with a trade-off between acceptance and resistance to water transfer. We ultimately derive a mathematical function that relates the number of interactions that foragers complete with common stomach wasps during a foraging cycle. We use field data and additional model assumptions to calculate values of our model parameters, and we use these to explain why the fullness of the common stomach stabilizes just below 50 percent, why the average number of successful interactions between foragers and the wasps forming the common stomach is between 5 and 7, and why there is a variation in this number of interactions over time. Our explanation is that our proposed water exchange mechanism places natural bounds on the number of successful interactions possible, water exchange is set to optimize mediation of water through the common stomach, and the chance that foragers abort their task prematurely is very low. PMID:26751076

  14. Information Fusion - Methods and Aggregation Operators

    NASA Astrophysics Data System (ADS)

    Torra, Vicenç

    Information fusion techniques are commonly applied in Data Mining and Knowledge Discovery. In this chapter, we give an overview of such applications, considering their three main uses: fusion methods for data preprocessing, model building and information extraction. Some aggregation operators (i.e., particular fusion methods) and their properties are briefly described as well.
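
    One family of aggregation operators commonly discussed in this literature is Yager's Ordered Weighted Averaging (OWA) operator, which applies weights to the sorted values rather than to particular inputs; a minimal sketch:

    ```python
    import numpy as np

    def owa(values, weights):
        """Ordered Weighted Averaging (OWA): weights are applied to the
        values sorted in decreasing order, not to particular inputs."""
        v = np.sort(np.asarray(values, dtype=float))[::-1]
        w = np.asarray(weights, dtype=float)
        assert v.shape == w.shape and np.isclose(w.sum(), 1.0)
        return float(v @ w)

    x = [0.2, 0.9, 0.6, 0.4]
    print(owa(x, [1, 0, 0, 0]))          # max
    print(owa(x, [0, 0, 0, 1]))          # min
    print(owa(x, [0.25] * 4))            # arithmetic mean
    print(owa(x, [0.4, 0.3, 0.2, 0.1]))  # an "or-like" compromise
    ```

    Choosing the weight vector moves the operator continuously between min, mean, and max, which is what makes OWA useful as a tunable fusion method.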

  15. Student Modeling Based on Problem Solving Times

    ERIC Educational Resources Information Center

    Pelánek, Radek; Jarušek, Petr

    2015-01-01

    Student modeling in intelligent tutoring systems is mostly concerned with modeling correctness of students' answers. As interactive problem solving activities become increasingly common in educational systems, it is useful to focus also on timing information associated with problem solving. We argue that the focus on timing is natural for certain…

  16. Commonalities and differences in the implementation of models of care for arthritis: key informant interviews from Canada.

    PubMed

    Cott, Cheryl A; Davis, Aileen M; Badley, Elizabeth M; Wong, Rosalind; Canizares, Mayilee; Li, Linda C; Jones, Allyson; Brooks, Sydney; Ahluwalia, Vandana; Hawker, Gillian; Jaglal, Susan; Landry, Michel; MacKay, Crystal; Mosher, Dianne

    2016-08-19

    Timely access to effective treatments for arthritis is a priority at national, provincial and regional levels in Canada due to population aging coupled with limited health human resources. Models of care for arthritis are being implemented across the country, but mainly in local contexts rather than from an evidence-informed policy or framework. The purpose of this study is to examine existing models of care for arthritis in Canada at the local level in order to identify commonalities and differences in their implementation that could point to important considerations for health policy and service delivery. Semi-structured key informant interviews were conducted with 70 program managers and/or care providers in three Canadian provinces identified through purposive and snowball sampling, followed by more detailed examination of 6 models of care (two per province). Interviews were transcribed verbatim and analyzed thematically using a qualitative descriptive approach. Two broad models of care were identified, for Total Joint Replacement and Inflammatory Arthritis. Commonalities included lack of complete and appropriate referrals from primary care physicians and lack of health human resources to meet local demands. Strategies included standardized referrals and centralized intake and triage using non-specialist health care professionals. Differences included the nature of the care and follow-up, the role of the specialist, and the location of service delivery. Current models of care are focused mainly on Total Joint Replacement and Inflammatory Arthritis. Given the increasing prevalence of arthritis, and given that published data report that only a small proportion of current service delivery is specialist care, the provision of timely, appropriate care requires the development, implementation and evaluation of models of care across the continuum of care.

  17. Identifying appropriate reference data models for comparative effectiveness research (CER) studies based on data from clinical information systems.

    PubMed

    Ogunyemi, Omolola I; Meeker, Daniella; Kim, Hyeon-Eui; Ashish, Naveen; Farzaneh, Seena; Boxwala, Aziz

    2013-08-01

    The need for a common format for electronic exchange of clinical data prompted federal endorsement of applicable standards. However, despite obvious similarities, a consensus standard has not yet been selected in the comparative effectiveness research (CER) community. Using qualitative metrics for data retrieval and information loss across a variety of CER topic areas, we compare several existing models from a representative sample of organizations associated with clinical research: the Observational Medical Outcomes Partnership (OMOP), Biomedical Research Integrated Domain Group, the Clinical Data Interchange Standards Consortium, and the US Food and Drug Administration. While the models examined captured a majority of the data elements that are useful for CER studies, data elements related to insurance benefit design and plans were most detailed in OMOP's CDM version 4.0. Standardized vocabularies that facilitate semantic interoperability were included in the OMOP and US Food and Drug Administration Mini-Sentinel data models, but are left to the discretion of the end-user in Biomedical Research Integrated Domain Group and Analysis Data Model, limiting reuse opportunities. Among the challenges we encountered was the need to model data specific to a local setting. This was handled by extending the standard data models. We found that the Common Data Model from the OMOP met the broadest complement of CER objectives. Minimal information loss occurred in mapping data from institution-specific data warehouses onto the data models from the standards we assessed. However, to support certain scenarios, we found a need to enhance existing data dictionaries with local, institution-specific information.

  18. Model diagnostics in reduced-rank estimation

    PubMed Central

    Chen, Kun

    2016-01-01

    Reduced-rank methods are very popular in high-dimensional multivariate analysis for conducting simultaneous dimension reduction and model estimation. However, the commonly-used reduced-rank methods are not robust, as the underlying reduced-rank structure can be easily distorted by only a few data outliers. Anomalies are bound to exist in big data problems, and in some applications they may themselves be of primary interest. While naive residual analysis is often inadequate for outlier detection due to potential masking and swamping, robust reduced-rank estimation approaches could be computationally demanding. Under Stein's unbiased risk estimation framework, we propose a set of tools, including leverage scores and generalized information scores, to perform model diagnostics and outlier detection in large-scale reduced-rank estimation. The leverage scores give an exact decomposition of the so-called model degrees of freedom to the observation level, which leads to exact decompositions of many commonly-used information criteria; the resulting quantities are thus named information scores of the observations. The proposed information score approach provides a principled way of combining the residuals and leverage scores for anomaly detection. Simulation studies confirm that the proposed diagnostic tools work well. A pattern recognition example with handwritten digit images and a time series analysis example with monthly U.S. macroeconomic data further demonstrate the efficacy of the proposed approaches. PMID:28003860
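
    The observation-level degrees-of-freedom decomposition has a familiar special case in the ordinary linear model, where the diagonal of the hat matrix gives the leverage scores and their sum equals the model degrees of freedom exactly. The sketch below illustrates that decomposition and a simple residual-plus-leverage diagnostic in the same spirit; this is the classical case, not the paper's reduced-rank machinery, and the data are simulated.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 50, 3
    X = rng.normal(size=(n, p))
    beta = np.array([1.0, -2.0, 0.5])
    y = X @ beta + rng.normal(scale=0.5, size=n)
    y[7] += 6.0  # plant one outlier

    # Hat matrix H = X (X'X)^{-1} X'; its diagonal gives leverage scores,
    # and trace(H) recovers the model degrees of freedom exactly.
    H = X @ np.linalg.solve(X.T @ X, X.T)
    leverage = np.diag(H)
    print(np.isclose(leverage.sum(), p))  # True: df decomposed to observations

    # A simple diagnostic combining residuals and leverage
    # (internally studentized residuals):
    resid = y - H @ y
    s2 = (resid**2).sum() / (n - p)
    t = resid / np.sqrt(s2 * (1.0 - leverage))
    print(np.argmax(np.abs(t)))  # flags observation 7
    ```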

  19. Model diagnostics in reduced-rank estimation.

    PubMed

    Chen, Kun

    2016-01-01

    Reduced-rank methods are very popular in high-dimensional multivariate analysis for conducting simultaneous dimension reduction and model estimation. However, the commonly-used reduced-rank methods are not robust, as the underlying reduced-rank structure can be easily distorted by only a few data outliers. Anomalies are bound to exist in big data problems, and in some applications they may themselves be of primary interest. While naive residual analysis is often inadequate for outlier detection due to potential masking and swamping, robust reduced-rank estimation approaches could be computationally demanding. Under Stein's unbiased risk estimation framework, we propose a set of tools, including leverage scores and generalized information scores, to perform model diagnostics and outlier detection in large-scale reduced-rank estimation. The leverage scores give an exact decomposition of the so-called model degrees of freedom to the observation level, which leads to exact decompositions of many commonly-used information criteria; the resulting quantities are thus named information scores of the observations. The proposed information score approach provides a principled way of combining the residuals and leverage scores for anomaly detection. Simulation studies confirm that the proposed diagnostic tools work well. A pattern recognition example with handwritten digit images and a time series analysis example with monthly U.S. macroeconomic data further demonstrate the efficacy of the proposed approaches.

  20. Extracting business vocabularies from business process models: SBVR and BPMN standards-based approach

    NASA Astrophysics Data System (ADS)

    Skersys, Tomas; Butleris, Rimantas; Kapocius, Kestutis

    2013-10-01

    Approaches for the analysis and specification of business vocabularies and rules are highly relevant topics in both the Business Process Management and Information Systems Development disciplines. However, in the common practice of Information Systems Development, business modeling activities are still mostly empirical in nature. In this paper, basic aspects of an approach for the semi-automated extraction of business vocabularies from business process models are presented. The approach is based on the novel business modeling-level OMG standards "Business Process Model and Notation" (BPMN) and "Semantics of Business Vocabulary and Business Rules" (SBVR), thus contributing to OMG's vision of Model-Driven Architecture (MDA) and to model-driven development in general.

  1. Ubiquitous information for ubiquitous computing: expressing clinical data sets with openEHR archetypes.

    PubMed

    Garde, Sebastian; Hovenga, Evelyn; Buck, Jasmin; Knaup, Petra

    2006-01-01

    Ubiquitous computing requires ubiquitous access to information and knowledge. With the release of openEHR Version 1.0, a common model is available to solve some of the problems related to accessing information and knowledge by improving semantic interoperability between clinical systems. Considerable work has been undertaken by various bodies to standardise Clinical Data Sets. Notwithstanding their value, several problems remain unsolved with Clinical Data Sets when no common model underpins them. This paper outlines these problems, such as incompatible basic data types and overlapping and incompatible definitions of clinical content. A solution based on openEHR archetypes is motivated, and an approach to transform existing Clinical Data Sets into archetypes is presented. To avoid significant overlaps and unnecessary effort, archetype development needs to be coordinated in a formalized process, nationally and beyond, and across the various health professions.

  2. Hash Functions and Information Theoretic Security

    NASA Astrophysics Data System (ADS)

    Bagheri, Nasour; Knudsen, Lars R.; Naderi, Majid; Thomsen, Søren S.

    Information theoretic security is an important security notion in cryptography as it provides a true lower bound for attack complexities. However, in practice attacks often have a higher cost than the information theoretic bound. In this paper we study the relationship between information theoretic attack costs and real costs. We show that in the information theoretic model, many well-known and commonly used hash functions such as MD5 and SHA-256 fail to be preimage resistant.
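
    The information-theoretic bound for preimage resistance is easy to see experimentally at toy scale: against an ideal n-bit hash, a brute-force search needs on the order of 2^n trials on average. The sketch below truncates SHA-256 to n bits purely to make the experiment fast; it illustrates the generic bound, not the paper's specific analysis of MD5 or SHA-256.

    ```python
    import hashlib
    import secrets

    def preimage_search(target_digest, n_bits, max_tries=1_000_000):
        """Brute-force preimage search against a hash truncated to n bits.
        The information-theoretic bound says this needs about 2**n_bits
        trials on average; real attacks can only be costlier."""
        mask = (1 << n_bits) - 1
        for tries in range(1, max_tries + 1):
            candidate = secrets.token_bytes(8)
            d = int.from_bytes(hashlib.sha256(candidate).digest(), "big") & mask
            if d == target_digest:
                return candidate, tries
        return None, max_tries

    # A 20-bit toy target: expect on the order of 2**20 ~ 1e6 trials.
    target = int.from_bytes(hashlib.sha256(b"secret").digest(), "big") & ((1 << 20) - 1)
    found, tries = preimage_search(target, 20, max_tries=5_000_000)
    print(found is not None, tries)
    ```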

  3. Multi-level multi-task learning for modeling cross-scale interactions in nested geospatial data

    USGS Publications Warehouse

    Yuan, Shuai; Zhou, Jiayu; Tan, Pang-Ning; Fergus, Emi; Wagner, Tyler; Soranno, Patricia

    2017-01-01

    Predictive modeling of nested geospatial data is a challenging problem as the models must take into account potential interactions among variables defined at different spatial scales. These cross-scale interactions, as they are commonly known, are particularly important to understand relationships among ecological properties at macroscales. In this paper, we present a novel, multi-level multi-task learning framework for modeling nested geospatial data in the lake ecology domain. Specifically, we consider region-specific models to predict lake water quality from multi-scaled factors. Our framework enables distinct models to be developed for each region using both its local and regional information. The framework also allows information to be shared among the region-specific models through their common set of latent factors. Such information sharing helps to create more robust models especially for regions with limited or no training data. In addition, the framework can automatically determine cross-scale interactions between the regional variables and the local variables that are nested within them. Our experimental results show that the proposed framework outperforms all the baseline methods in at least 64% of the regions for 3 out of 4 lake water quality datasets evaluated in this study. Furthermore, the latent factors can be clustered to obtain a new set of regions that is more aligned with the response variables than the original regions that were defined a priori from the ecology domain.
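
    The framework in the paper learns shared latent factors and cross-scale interactions; as a much-reduced sketch of the core idea of sharing information across region-specific models, the code below fits per-region ridge regressions whose coefficients are shrunk toward a common vector, so regions with little data borrow strength. The data and all parameters are synthetic assumptions, not the authors' method.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy nested data: R regions, each with local predictors X_r and response y_r.
    R, n, p = 6, 30, 4
    true_shared = rng.normal(size=p)
    data = []
    for r in range(R):
        w_r = true_shared + 0.3 * rng.normal(size=p)  # region = shared + deviation
        X = rng.normal(size=(n, p))
        y = X @ w_r + 0.1 * rng.normal(size=n)
        data.append((X, y))

    def fit_multitask(data, lam=5.0, iters=20):
        """Minimal multi-task ridge: each region's coefficients are penalized
        toward a shared vector, so data-poor regions borrow strength."""
        p = data[0][0].shape[1]
        shared = np.zeros(p)
        W = [np.zeros(p) for _ in data]
        for _ in range(iters):
            for r, (X, y) in enumerate(data):
                A = X.T @ X + lam * np.eye(p)
                W[r] = np.linalg.solve(A, X.T @ y + lam * shared)
            shared = np.mean(W, axis=0)
        return shared, W

    shared, W = fit_multitask(data)
    print(np.round(shared - true_shared, 2))  # shared component approximately recovered
    ```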

  4. Enabling interoperability in planetary sciences and heliophysics: The case for an information model

    NASA Astrophysics Data System (ADS)

    Hughes, J. Steven; Crichton, Daniel J.; Raugh, Anne C.; Cecconi, Baptiste; Guinness, Edward A.; Isbell, Christopher E.; Mafi, Joseph N.; Gordon, Mitchell K.; Hardman, Sean H.; Joyner, Ronald S.

    2018-01-01

    The Planetary Data System has developed the PDS4 Information Model to enable interoperability across diverse science disciplines. The Information Model is based on an integration of International Organization for Standardization (ISO) level standards for trusted digital archives, information model development, and metadata registries. Whereas controlled vocabularies provide a basic level of interoperability by supplying a common set of terms for communication between both machines and humans, the Information Model improves interoperability by means of an ontology that provides semantic information, or additional related context, for the terms. The Information Model was defined by a team of computer scientists and science experts from each of the diverse disciplines in the Planetary Science community, including Atmospheres, Geosciences, Cartography and Imaging Sciences, Navigational and Ancillary Information, Planetary Plasma Interactions, Ring-Moon Systems, and Small Bodies. The model was designed to be extensible beyond the Planetary Science community; for example, there are overlaps between certain PDS disciplines and the Heliophysics and Astrophysics disciplines. "Interoperability" can apply to many aspects of both the developer and the end-user experience, for example agency-to-agency, semantic-level, and application-level interoperability. We define these types of interoperability and focus on semantic-level interoperability, the type of interoperability most directly enabled by an information model.

  5. Common data model for natural language processing based on two existing standard information models: CDA+GrAF.

    PubMed

    Meystre, Stéphane M; Lee, Sanghoon; Jung, Chai Young; Chevrier, Raphaël D

    2012-08-01

    An increasing need for collaboration and resources sharing in the Natural Language Processing (NLP) research and development community motivates efforts to create and share a common data model and a common terminology for all information annotated and extracted from clinical text. We have combined two existing standards: the HL7 Clinical Document Architecture (CDA), and the ISO Graph Annotation Format (GrAF; in development), to develop such a data model entitled "CDA+GrAF". We experimented with several methods to combine these existing standards, and eventually selected a method wrapping separate CDA and GrAF parts in a common standoff annotation (i.e., separate from the annotated text) XML document. Two use cases, clinical document sections, and the 2010 i2b2/VA NLP Challenge (i.e., problems, tests, and treatments, with their assertions and relations), were used to create examples of such standoff annotation documents, and were successfully validated with the XML schemata provided with both standards. We developed a tool to automatically translate annotation documents from the 2010 i2b2/VA NLP Challenge format to GrAF, and automatically generated 50 annotation documents using this tool, all successfully validated. Finally, we adapted the XSL stylesheet provided with HL7 CDA to allow viewing annotation XML documents in a web browser, and plan to adapt existing tools for translating annotation documents between CDA+GrAF and the UIMA and GATE frameworks. This common data model may ease directly comparing NLP tools and applications, combining their output, transforming and "translating" annotations between different NLP applications, and eventually "plug-and-play" of different modules in NLP applications. Copyright © 2011 Elsevier Inc. All rights reserved.
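
    The essential idea of standoff annotation is that annotations reference character offsets in a source text stored alongside, rather than marking up the text itself. The toy document below illustrates that mechanism only; the element names are hypothetical, and the real CDA and GrAF parts follow their own published schemata.

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical element names for illustration only; the actual CDA+GrAF
    # wrapper defines its own structures.
    text = "Patient denies chest pain. Started aspirin 81 mg daily."

    root = ET.Element("AnnotatedDocument")
    ET.SubElement(root, "SourceText").text = text

    graf = ET.SubElement(root, "GrAFPart")
    for start, end, label in [(15, 25, "problem"), (35, 42, "treatment")]:
        node = ET.SubElement(graf, "Annotation",
                             start=str(start), end=str(end), type=label)
        # Standoff: the annotation points at character offsets and never
        # modifies the source text itself.
        node.text = text[start:end]

    print(ET.tostring(root, encoding="unicode"))
    ```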

  6. Computer simulation modeling of recreation use: Current status, case studies, and future directions

    Treesearch

    David N. Cole

    2005-01-01

    This report compiles information about recent progress in the application of computer simulation modeling to planning and management of recreation use, particularly in parks and wilderness. Early modeling efforts are described in a chapter that provides an historical perspective. Another chapter provides an overview of modeling options, common data input requirements,...

  7. Exploring the Full-Information Bifactor Model in Vertical Scaling with Construct Shift

    ERIC Educational Resources Information Center

    Li, Ying; Lissitz, Robert W.

    2012-01-01

    To address the lack of attention to construct shift in item response theory (IRT) vertical scaling, a multigroup, bifactor model was proposed to model the common dimension for all grades and the grade-specific dimensions. Bifactor model estimation accuracy was evaluated through a simulation study with manipulated factors of percentage of common…

  8. What are the Starting Points? Evaluating Base-Year Assumptions in the Asian Modeling Exercise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaturvedi, Vaibhav; Waldhoff, Stephanie; Clarke, Leon E.

    2012-12-01

    A common feature of model inter-comparison efforts is that the base-year numbers for important parameters such as population and GDP can differ substantially across models. This paper explores the sources and implications of this variation in Asian countries across the models participating in the Asian Modeling Exercise (AME). Because the models do not all have a common base year, each team was required to provide data for 2005 for comparison purposes. This paper compares the year-2005 information for the different models, noting the degree of variation in important parameters, including population, GDP, primary energy, electricity, and CO2 emissions. It then explores the difference in these key parameters across different sources of base-year information. The analysis confirms that the sources provide different values for many key parameters. This variation across data sources, and additional reasons why models might provide different base-year numbers, including differences in regional definitions, differences in model base year, and differences in GDP transformation methodologies, are then discussed in the context of the AME scenarios. Finally, the paper explores the implications of base-year variation on long-term model results.

  9. Response moderation models for conditional dependence between response time and response accuracy.

    PubMed

    Bolsinova, Maria; Tijmstra, Jesper; Molenaar, Dylan

    2017-05-01

    It is becoming more feasible and common to register response times in the application of psychometric tests. Researchers thus have the opportunity to jointly model response accuracy and response time, which provides users with more relevant information. The most common choice is to use the hierarchical model (van der Linden, 2007, Psychometrika, 72, 287), which assumes conditional independence between response time and accuracy, given a person's speed and ability. However, this assumption may be violated in practice if, for example, persons vary their speed or differ in their response strategies, leading to conditional dependence between response time and accuracy and confounding measurement. We propose six nested hierarchical models for response time and accuracy that allow for conditional dependence, and discuss their relationship to existing models. Unlike existing approaches, the proposed hierarchical models allow for various forms of conditional dependence in the model and allow the effect of continuous residual response time on response accuracy to be item-specific, person-specific, or both. Estimation procedures for the models are proposed, as well as two information criteria that can be used for model selection. Parameter recovery and usefulness of the information criteria are investigated using simulation, indicating that the procedure works well and is likely to select the appropriate model. Two empirical applications are discussed to illustrate the different types of conditional dependence that may occur in practice and how these can be captured using the proposed hierarchical models. © 2016 The British Psychological Society.
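
    For orientation, a common form of the hierarchical framework (in the spirit of van der Linden, 2007; the 2PL accuracy part shown here is one standard choice) pairs an IRT model for accuracy with a lognormal model for response time, linked at the person level by ability and speed. The conditional-dependence models described in the abstract relax the assumption that accuracy and time are independent given those two person parameters:

    ```latex
    % \theta_p = ability, \tau_p = speed; a_i, b_i = item discrimination and
    % difficulty; \beta_i = item time intensity; \sigma_i = residual SD.
    P(X_{pi}=1 \mid \theta_p)
      = \frac{\exp\{a_i(\theta_p - b_i)\}}{1+\exp\{a_i(\theta_p - b_i)\}},
    \qquad
    \log T_{pi} \sim \mathcal{N}\!\big(\beta_i - \tau_p,\ \sigma_i^{2}\big)
    % Conditional independence: X_{pi} \perp T_{pi} given (\theta_p, \tau_p);
    % the proposed models let the residual of \log T_{pi} enter the accuracy
    % model with an item- or person-specific effect.
    ```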

  10. Summary of human social, cultural, behavioral (HSCB) modeling for information fusion panel discussion

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Salerno, John; Kadar, Ivan; Yang, Shanchieh J.; Fenstermacher, Laurie; Endsley, Mica; Grewe, Lynne

    2013-05-01

    During the SPIE 2012 conference, panelists convened to discuss "Real world issues and challenges in Human Social/Cultural/Behavioral modeling with Applications to Information Fusion." Each panelist presented current trends and issues. The panel agreed on advanced situation modeling, working with users for situation awareness and sense-making, and HSCB context modeling as foci for research activities. Each panelist added a different perspective based on the domain of interest, such as physical, cyber, and social attacks, from which estimates and projections can be forecast. Additional techniques were also addressed, such as interest graphs, network modeling, and variable-length Markov models. This paper summarizes the panelists' discussions to highlight the common themes and the related contrasting approaches to the domains in which HSCB applies to information fusion applications.

  11. Separate-channel analysis of two-channel microarrays: recovering inter-spot information.

    PubMed

    Smyth, Gordon K; Altman, Naomi S

    2013-05-26

    Two-channel (or two-color) microarrays are cost-effective platforms for comparative analysis of gene expression. They are traditionally analysed in terms of the log-ratios (M-values) of the two channel intensities at each spot, but this analysis does not use all the information available in the separate channel observations. Mixed models have been proposed to analyse intensities from the two channels as separate observations, but such models can be complex to use and the gain in efficiency over the log-ratio analysis is difficult to quantify. Mixed models yield test statistics whose null distributions can be specified only approximately, and some approaches do not borrow strength between genes. This article reformulates the mixed model to clarify the relationship with the traditional log-ratio analysis, to facilitate information borrowing between genes, and to obtain an exact distributional theory for the resulting test statistics. The mixed model is transformed to operate on the M-values and A-values (average log-expression for each spot) instead of on the log-expression values. The log-ratio analysis is shown to ignore information contained in the A-values. The relative efficiency of the log-ratio analysis is shown to depend on the size of the intra-spot correlation. A new separate-channel analysis method is proposed that assumes a constant intra-spot correlation coefficient across all genes. This approach permits the mixed model to be transformed into an ordinary linear model, allowing the data analysis to use a well-understood empirical Bayes analysis pipeline for linear modeling of microarray data. This yields statistically powerful test statistics that have an exact distributional theory. The log-ratio, mixed model and common correlation methods are compared using three case studies. The results show that separate-channel analyses that borrow strength between genes are more powerful than log-ratio analyses. The common correlation analysis is the most powerful of all. The common correlation method proposed in this article for separate-channel analysis of two-channel microarray data is no more difficult to apply in practice than the traditional log-ratio analysis. It provides an intuitive and powerful means to conduct analyses and make comparisons that might otherwise not be possible.
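
    The M- and A-values at the heart of this reformulation are simple per-spot transformations of the two channel intensities; a minimal sketch with illustrative numbers only:

    ```python
    import numpy as np

    def ma_transform(red, green):
        """Per-spot M (log-ratio) and A (average log-intensity) values
        from two-channel intensities."""
        r, g = np.log2(red), np.log2(green)
        return r - g, 0.5 * (r + g)  # M, A

    red = np.array([1200.0, 850.0, 40.0])
    green = np.array([600.0, 900.0, 35.0])
    M, A = ma_transform(red, green)
    print(np.round(M, 3))  # differential expression per spot
    print(np.round(A, 3))  # the information a log-ratio-only analysis discards
    ```

    The traditional analysis keeps only M; the article's point is that A carries additional usable information, with the gain governed by the size of the intra-spot correlation.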

  12. The caCORE Software Development Kit: streamlining construction of interoperable biomedical information services.

    PubMed

    Phillips, Joshua; Chilukuri, Ram; Fragoso, Gilberto; Warzel, Denise; Covitz, Peter A

    2006-01-06

    Robust, programmatically accessible biomedical information services that syntactically and semantically interoperate with other resources are challenging to construct. Such systems require the adoption of common information models, data representations and terminology standards as well as documented application programming interfaces (APIs). The National Cancer Institute (NCI) developed the cancer common ontologic representation environment (caCORE) to provide the infrastructure necessary to achieve interoperability across the systems it develops or sponsors. The caCORE Software Development Kit (SDK) was designed to provide developers both within and outside the NCI with the tools needed to construct such interoperable software systems. The caCORE SDK requires a Unified Modeling Language (UML) tool to begin the development workflow with the construction of a domain information model in the form of a UML Class Diagram. Models are annotated with concepts and definitions from a description logic terminology source using the Semantic Connector component. The annotated model is registered in the Cancer Data Standards Repository (caDSR) using the UML Loader component. System software is automatically generated using the Codegen component, which produces middleware that runs on an application server. The caCORE SDK was initially tested and validated using a seven-class UML model, and has been used to generate the caCORE production system, which includes models with dozens of classes. The deployed system supports access through object-oriented APIs with consistent syntax for retrieval of any type of data object across all classes in the original UML model. The caCORE SDK is currently being used by several development teams, including by participants in the cancer biomedical informatics grid (caBIG) program, to create compatible data services. caBIG compatibility standards are based upon caCORE resources, and thus the caCORE SDK has emerged as a key enabling technology for caBIG. The caCORE SDK substantially lowers the barrier to implementing systems that are syntactically and semantically interoperable by providing workflow and automation tools that standardize and expedite modeling, development, and deployment. It has gained acceptance among developers in the caBIG program, and is expected to provide a common mechanism for creating data service nodes on the data grid that is under development.

  13. Efficient terrestrial laser scan segmentation exploiting data structure

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa

    2016-09-01

    New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increase processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be created quickly using an inherent neighborhood structure that is established during the scanning process, which scans at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied to several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. These segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models and consequently avoids the need to set parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs the colorimetric and intensity data as an additional source of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) level of segmentation, demonstrating the feasibility of the proposed algorithm. The proposed method is also more efficient than Random Sample Consensus (RANSAC), a common approach for point cloud segmentation.
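    The angular structure being exploited can be illustrated directly: each point's azimuth and elevation index a pixel in a spherical panorama, giving per-pixel layers such as range. A sketch under assumed angular resolutions (function and parameter names are hypothetical; real scans take the increments from the instrument):

```python
import numpy as np

def points_to_panorama(xyz, h_res_deg=0.1, v_res_deg=0.1):
    """Map 3D scan points to (row, col) indices of a spherical panorama,
    exploiting the scanner's fixed angular increments.
    xyz: (n, 3) array of points in the scanner's coordinate frame."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    rng = np.sqrt(x**2 + y**2 + z**2)            # range layer
    azimuth = np.degrees(np.arctan2(y, x))       # horizontal angle
    elevation = np.degrees(np.arcsin(z / rng))   # vertical angle
    col = ((azimuth + 180.0) / h_res_deg).astype(int)
    row = ((90.0 - elevation) / v_res_deg).astype(int)
    return row, col, rng
```

    Segment labels computed on the panorama image can then be carried back to the 3D points through the same (row, col) mapping.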

  14. Landscape-scale distribution and density of raptor populations wintering in anthropogenic-dominated desert landscapes

    USGS Publications Warehouse

    Duerr, Adam E.; Miller, Tricia A.; Cornell Duerr, Kerri L; Lanzone, Michael J.; Fesnock, Amy; Katzner, Todd E.

    2015-01-01

    Anthropogenic development has great potential to affect fragile desert environments. Large-scale development of renewable energy infrastructure is planned for many desert ecosystems. Development plans should account for anthropogenic effects on the distributions and abundance of rare or sensitive wildlife; however, baseline data on abundance and distribution of such wildlife are often lacking. We surveyed for predatory birds in the Sonoran and Mojave Deserts of southern California, USA, in an area designated for protection under the “Desert Renewable Energy Conservation Plan”, to determine how these birds are distributed across the landscape and how this distribution is affected by existing development. First, we developed species-specific models of resight probability to adjust estimates of abundance and density for each common species. Second, we developed combined-species models of resight probability for common and rare species so that we could make use of sparse data on the latter. We determined that many common species, such as red-tailed hawks, loggerhead shrikes, and especially common ravens, are associated with human development and likely subsidized by human activity. Species-specific and combined-species models of resight probability performed similarly, although the former model type provided higher-quality information. Comparing abundance estimates with past surveys in the Mojave Desert suggests that numbers of predatory birds associated with human development have increased while other sensitive species not associated with development have decreased. This approach gave us information beyond what we would have collected by focusing on either common or rare species alone, and thus provides a low-cost framework for others conducting surveys in similar desert environments outside of California.

  15. The CMIP5 Model Documentation Questionnaire: Development of a Metadata Retrieval System for the METAFOR Common Information Model

    NASA Astrophysics Data System (ADS)

    Pascoe, Charlotte; Lawrence, Bryan; Moine, Marie-Pierre; Ford, Rupert; Devine, Gerry

    2010-05-01

    The EU METAFOR Project (http://metaforclimate.eu) has created a web-based model documentation questionnaire to collect metadata from the modelling groups that are running simulations in support of the Coupled Model Intercomparison Project - 5 (CMIP5). The CMIP5 model documentation questionnaire will retrieve information about the details of the models used, how the simulations were carried out, how the simulations conformed to the CMIP5 experiment requirements, and details of the hardware used to perform the simulations. The metadata collected by the CMIP5 questionnaire will allow CMIP5 data to be compared in a scientifically meaningful way. This paper describes the life-cycle of the CMIP5 questionnaire development, which starts with relatively unstructured input from domain specialists and ends with formal XML documents that comply with the METAFOR Common Information Model (CIM). Each development step is associated with a specific tool. (1) Mind maps are used to capture information requirements from domain experts and build a controlled vocabulary, (2) a Python parser processes the XML files generated by the mind maps, (3) Django (Python) is used to generate the dynamic structure and content of the web-based questionnaire from the processed XML and the METAFOR CIM, (4) Python parsers ensure that information entered into the CMIP5 questionnaire is output as CIM-compliant XML, (5) CIM-compliant output allows automatic information capture tools to harvest questionnaire content into databases such as the Earth System Grid (ESG) metadata catalogue. This paper will focus on how Django (Python) and XML input files are used to generate the structure and content of the CMIP5 questionnaire. It will also address how the choice of development tools listed above provided a framework that enabled working scientists (who would not ordinarily interact with UML and XML) to be part of the iterative development process and ensure that the CMIP5 model documentation questionnaire reflects what scientists want to know about the models. Keywords: metadata, CMIP5, automatic information capture, tool development
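    As an illustration of step (2), a small parser can flatten a mind-map XML export into a controlled vocabulary of parent-to-children terms. This is a sketch only, not the project's actual parser; the 'node' element and 'TEXT' attribute follow the FreeMind convention and are assumptions about the export format:

```python
import xml.etree.ElementTree as ET

def vocabulary_from_mindmap(path):
    """Walk a mind-map XML export and collect topic names into a
    controlled vocabulary mapping parent term -> child terms."""
    tree = ET.parse(path)
    vocab = {}

    def walk(node, parent):
        term = node.get("TEXT")          # topic label on this node
        if term:
            vocab.setdefault(parent, []).append(term)
        for child in node.findall("node"):
            walk(child, term or parent)  # children hang off this term

    walk(tree.getroot(), parent=None)
    return vocab
```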

  16. Aggression and Moral Development: Integrating Social Information Processing and Moral Domain Models

    ERIC Educational Resources Information Center

    Arsenio, William F.; Lemerise, Elizabeth A.

    2004-01-01

    Social information processing and moral domain theories have developed in relative isolation from each other despite their common focus on intentional harm and victimization, and mutual emphasis on social cognitive processes in explaining aggressive, morally relevant behaviors. This article presents a selective summary of these literatures with…

  17. Enabling PBPK model development through the application of freely available techniques for the creation of a chemically-annotatedcollection of literature

    EPA Science Inventory

    The creation of Physiologically Based Pharmacokinetic (PBPK) models for a new chemical requires the selection of an appropriate model structure and the collection of a large amount of data for parameterization. Commonly, a large proportion of the needed information is collected ...

  18. Striking a Balance: Students' Tendencies to Oversimplify or Overcomplicate in Mathematical Modeling

    ERIC Educational Resources Information Center

    Gould, Heather; Wasserman, Nicholas H.

    2014-01-01

    With the adoption of the "Common Core State Standards for Mathematics" (CCSSM), the process of mathematical modeling has been given increased attention in mathematics education. This article reports on a study intended to inform the implementation of modeling in classroom contexts by examining students' interactions with the process of…

  19. A geographic data model for representing ground water systems.

    PubMed

    Strassberg, Gil; Maidment, David R; Jones, Norm L

    2007-01-01

    The Arc Hydro ground water data model is a geographic data model for representing spatial and temporal ground water information within a geographic information system (GIS). The data model is a standardized representation of ground water systems within a spatial database that provides a public domain template for GIS users to store, document, and analyze commonly used spatial and temporal ground water data sets. This paper describes the data model framework, a simplified version of the complete ground water data model that includes two-dimensional and three-dimensional (3D) object classes for representing aquifers, wells, and borehole data, and the 3D geospatial context in which these data exist. The framework data model also includes tabular objects for representing temporal information such as water levels and water quality samples that are related with spatial features.
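    The framework's split between spatial object classes and related tabular temporal objects can be pictured with a toy schema. The classes and field names below are illustrative stand-ins, not the Arc Hydro schema itself:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Tuple

@dataclass
class Well:
    """Simplified spatial feature: a well with a 2D location and related
    time-series records, echoing the split between spatial object classes
    and tabular temporal objects."""
    well_id: str
    location: Tuple[float, float]   # (x, y) in a projected CRS
    aquifer_id: str                 # link to the aquifer feature
    water_levels: List["WaterLevel"] = field(default_factory=list)

@dataclass
class WaterLevel:
    """Tabular temporal object related to a spatial feature by key."""
    well_id: str
    measured_on: date
    level_m: float                  # depth to water, in meters
```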

  20. How do patient characteristics influence informal payments for inpatient and outpatient health care in Albania: Results of logit and OLS models using Albanian LSMS 2005

    PubMed Central

    2011-01-01

    Background: Informal payments for health care are common in most former communist countries. This paper explores the demand side of these payments in Albania. By using data from the Living Standard Measurement Survey 2005 we control for individual determinants of informal payments in inpatient and outpatient health care. We use these results to explain the main factors contributing to the occurrence and extent of informal payments in Albania. Methods: Using multivariate methods (logit and OLS) we test three models to explain informal payments: the cultural, economic and governance model. The results of logit models are presented here as odds ratios (OR) and results from OLS models as regression coefficients (RC). Results: Our findings suggest differences in determinants of informal payments in inpatient and outpatient care. Generally our results show that informal payments are dependent on certain characteristics of patients, including age, area of residence, education, health status and health insurance. However, they are less dependent on income, suggesting homogeneity of payments across income categories. Conclusions: We have found more evidence for the validity of the governance and economic models than for the cultural model. PMID:21605459

  1. Model-based learning and the contribution of the orbitofrontal cortex to the model-free world

    PubMed Central

    McDannald, Michael A.; Takahashi, Yuji K.; Lopatina, Nina; Pietras, Brad W.; Jones, Josh L.; Schoenbaum, Geoffrey

    2012-01-01

    Learning is proposed to occur when there is a discrepancy between reward prediction and reward receipt. At least two separate systems are thought to exist: one in which predictions are proposed to be based on model-free or cached values; and another in which predictions are model-based. A basic neural circuit for model-free reinforcement learning has already been described. In the model-free circuit the ventral striatum (VS) is thought to supply a common-currency reward prediction to midbrain dopamine neurons that compute prediction errors and drive learning. In a model-based system, predictions can include more information about an expected reward, such as its sensory attributes or current, unique value. This detailed prediction allows for both behavioral flexibility and learning driven by changes in sensory features of rewards alone. Recent evidence from animal learning and human imaging suggests that, in addition to model-free information, the VS also signals model-based information. Further, there is evidence that the orbitofrontal cortex (OFC) signals model-based information. Here we review these data and suggest that the OFC provides model-based information to this traditional model-free circuitry and offer possibilities as to how this interaction might occur. PMID:22487030

  2. PROPOSED MODELS FOR ESTIMATING RELEVANT DOSE RESULTING FROM EXPOSURES BY THE GASTROINTESTINAL ROUTE

    EPA Science Inventory

    Simple first-order intestinal absorption commonly used in physiologically-based pharmacokinetic(PBPK) models can be made to fit many clinical administrations but may not provide relevant information to extrapolate to real-world exposure scenarios for risk assessment. Small hydr...

  3. An Analysis of Machine- and Human-Analytics in Classification.

    PubMed

    Tam, Gary K L; Kothari, Vivek; Chen, Min

    2017-01-01

    In this work, we present a study that traces the technical and cognitive processes in two visual analytics applications back to a common theoretical model of the soft knowledge that may be added to a visual analytics process when constructing a decision-tree model. Both case studies involved the development of classification models based on the "bag of features" approach. Both compared a visual analytics approach using parallel coordinates with a machine-learning approach using information theory. Both found that the visual analytics approach had some advantages over the machine-learning approach, especially when sparse datasets were used as the ground truth. We examine various possible factors that may have contributed to such advantages and collect empirical evidence to support the observation and reasoning of these factors. We propose an information-theoretic model as a common theoretical basis to explain the phenomena exhibited in these two case studies. Together, these provide interconnected empirical and theoretical evidence to support the usefulness of visual analytics.

  4. Functional linear models for association analysis of quantitative traits.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or a combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants, adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than the sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to their optimal utilization of both the genetic linkage and LD information of multiple genetic variants in a genome and the similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher-order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. © 2013 WILEY PERIODICALS, INC.
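    The core construction, smoothing a region of variants onto a basis of genomic position and jointly testing the resulting functional scores, can be sketched compactly. A generic fixed-effect version using a Fourier basis (the paper's models use richer basis systems, covariate adjustment, and mixed effects as well):

```python
import numpy as np
from scipy import stats

def functional_association_test(geno, pos, trait, n_basis=5):
    """F-test for association between a quantitative trait and a region of
    variants treated as a function of genomic position (generic sketch of a
    fixed-effect functional linear model; n_basis should be odd here).
    geno: (n, m) genotype matrix, pos: (m,) positions, trait: (n,)."""
    n, m = geno.shape
    t = (pos - pos.min()) / (pos.max() - pos.min())   # rescale to [0, 1]
    cols = [np.ones(m)]
    for k in range(1, (n_basis - 1) // 2 + 1):        # Fourier basis
        cols.append(np.sin(2 * np.pi * k * t))
        cols.append(np.cos(2 * np.pi * k * t))
    basis = np.column_stack(cols)
    scores = geno @ basis / m                  # per-person functional scores
    X = np.column_stack([np.ones(n), scores])
    beta = np.linalg.lstsq(X, trait, rcond=None)[0]
    rss1 = np.sum((trait - X @ beta) ** 2)     # full model
    rss0 = np.sum((trait - trait.mean()) ** 2) # null model: intercept only
    q = scores.shape[1]
    f_stat = ((rss0 - rss1) / q) / (rss1 / (n - q - 1))
    return f_stat, stats.f.sf(f_stat, q, n - q - 1)
```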

  5. Measuring transferring similarity via local information

    NASA Astrophysics Data System (ADS)

    Yin, Likang; Deng, Yong

    2018-05-01

    Recommender systems have developed along with web science, and how to measure the similarity between users is crucial for collaborative filtering recommendation. Many efficient models have been proposed (e.g., the Pearson coefficient) to measure direct correlation. However, direct correlation measures are greatly affected by the sparsity of the dataset. In other words, direct correlation measures can present an inauthentic similarity if two users have very few commonly selected objects. Transferring similarity overcomes this drawback by considering their common neighbors (i.e., the intermediates). Yet transferring similarity also has its own drawback, since it can only provide an interval of similarity. To overcome these limitations, we propose the Belief Transferring Similarity (BTS) model. The contributions of the BTS model are: (1) it addresses the sparsity of the dataset by considering high-order similarity; (2) it transforms an uncertain interval to a certain state based on fuzzy systems theory; (3) it combines the transferring similarity of different intermediates using an information fusion method. Finally, we compare the BTS model with nine different link prediction methods in nine different networks, and we also illustrate the convergence property and efficiency of the BTS model.
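    To see why plain transferring similarity yields only an interval, consider propagating direct correlations through intermediates: each path u-v-w contributes the product of its direct similarities, and different intermediates give different answers. A generic illustration (not the BTS fusion rule itself, which resolves the interval using fuzzy membership and information fusion):

```python
import numpy as np

def transfer_interval(direct_sim, u, w):
    """Bound the similarity of users u and w through intermediates v.
    Plain transfer can only report the interval spanned by the paths.
    direct_sim: symmetric matrix of direct similarities in [0, 1]."""
    n = direct_sim.shape[0]
    paths = [direct_sim[u, v] * direct_sim[v, w]
             for v in range(n)
             if v != u and v != w
             and direct_sim[u, v] > 0 and direct_sim[v, w] > 0]
    if not paths:
        return (0.0, 0.0)
    return (min(paths), max(paths))

# Toy similarity matrix for four users (hypothetical values)
S = np.array([[1.0, 0.8, 0.0, 0.3],
              [0.8, 1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0, 0.9],
              [0.3, 0.0, 0.9, 1.0]])
lo, hi = transfer_interval(S, 0, 2)   # users 0 and 2 share neighbors 1 and 3
```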

  6. The Value of Information for Populations in Varying Environments

    NASA Astrophysics Data System (ADS)

    Rivoire, Olivier; Leibler, Stanislas

    2011-04-01

    The notion of information pervades informal descriptions of biological systems, but formal treatments face the problem of defining a quantitative measure of information rooted in a concept of fitness, which is itself an elusive notion. Here, we present a model of population dynamics where this problem is amenable to a mathematical analysis. In the limit where any information about future environmental variations is common to the members of the population, our model is equivalent to known models of financial investment. In this case, the population can be interpreted as a portfolio of financial assets and previous analyses have shown that a key quantity of Shannon's communication theory, the mutual information, sets a fundamental limit on the value of information. We show that this bound can be violated when accounting for features that are irrelevant in finance but inherent to biological systems, such as the stochasticity present at the individual level. This leads us to generalize the measures of uncertainty and information usually encountered in information theory.
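    The financial limit alluded to here is the classical side-information bound from Kelly-style investment theory: the increase in the long-run growth rate obtainable from a cue Y about the environmental state X is at most their mutual information. Stated in its standard textbook form (the paper's point is that individual-level stochasticity in biology can violate this bound):

```latex
\Delta\Lambda \;=\; \Lambda_{\text{with cue } Y} \;-\; \Lambda_{\text{no cue}}
\;\le\; I(X;Y) \;=\; \sum_{x,y} p(x,y)\,\log_2 \frac{p(x,y)}{p(x)\,p(y)}
```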

  7. Dynamic Integration of Value Information into a Common Probability Currency as a Theory for Flexible Decision Making

    PubMed Central

    Christopoulos, Vassilios; Schrater, Paul R.

    2015-01-01

    Decisions involve two fundamental problems, selecting goals and generating actions to pursue those goals. While simple decisions involve choosing a goal and pursuing it, humans evolved to survive in hostile dynamic environments where goal availability and value can change with time and previous actions, entangling goal decisions with action selection. Recent studies suggest the brain generates concurrent action-plans for competing goals, using online information to bias the competition until a single goal is pursued. This creates a challenging problem of integrating information across diverse types, including both the dynamic value of the goal and the costs of action. We model the computations underlying dynamic decision-making with disparate value types, using the probability of getting the highest pay-off with the least effort as a common currency that supports goal competition. This framework predicts many aspects of decision behavior that have eluded a common explanation. PMID:26394299

  8. Metadata-Driven SOA-Based Application for Facilitation of Real-Time Data Warehousing

    NASA Astrophysics Data System (ADS)

    Pintar, Damir; Vranić, Mihaela; Skočir, Zoran

    Service-oriented architecture (SOA) has already been widely recognized as an effective paradigm for achieving integration of diverse information systems. SOA-based applications can cross boundaries of platforms, operating systems, and proprietary data standards, commonly through the usage of Web Services technology. On the other hand, metadata is also commonly referred to as a potential integration tool, given that standardized metadata objects can provide useful information about the specifics of unknown information systems with which one wishes to communicate, using an approach commonly called "model-based integration". This paper presents the results of research regarding possible synergy between those two integration facilitators. This is accomplished with a vertical example of a metadata-driven SOA-based business process that provides ETL (Extraction, Transformation and Loading) and metadata services to a data warehousing system in need of real-time ETL support.

  9. Information Warfare: Evaluation of Operator Information Processing Models

    DTIC Science & Technology

    1997-10-01

    that people can describe or report, including both episodic and semantic information. Declarative memory contains a network of knowledge represented... second dimension corresponds roughly to the distinction between episodic and semantic memory that is commonly made in cognitive psychology. Episodic... Partition 3 is long-term memory for the discourse, a subset of episodic memory. Partition 4 is long-term semantic memory, or the knowledge-base. According to

  10. Transition to the Wired World: A Model for the Study of Potential Side-Effects of Information Inequity.

    ERIC Educational Resources Information Center

    Salvaggio, Jerry L.; Trettevik, Susan K.

    That industrial nations will become "global villages" or comprise a "wired world" with a common information system appears possible in light of current technology, but there are five major reasons why such an information society will not occur for some decades, particularly in the United States. The reasons are as follows: (1) there is no…

  11. Integrating the Functions of Institutional Research, Institutional Effectiveness, and Information Management. Professional File. Number 126, Summer 2012

    ERIC Educational Resources Information Center

    Posey, James T.; Pitter, Gita Wijesinghe

    2012-01-01

    The objective of this paper is to identify common essential information and data needs of colleges and universities and to suggest a model to integrate these data needs into one office or department. The paper suggests there are five major data and information foundations that are essential to the effective functioning of an institution: (a)…

  12. Compassion Fatigue: An Application of the Concept to Informal Caregivers of Family Members with Dementia

    PubMed Central

    Day, Jennifer R.; Anderson, Ruth A.

    2011-01-01

    Introduction. Compassion fatigue is a concept used with increasing frequency in the nursing literature. The objective of this paper is to identify common themes across the literature and to apply these themes, and an existing model of compassion fatigue, to informal caregivers for family members with dementia. Findings. Caregivers for family members with dementia may be at risk for developing compassion fatigue. The model of compassion fatigue provides an informative framework for understanding compassion fatigue in the informal caregiver population. Limitations of the model when applied to this population were identified as traumatic memories and the emotional relationship between parent and child, suggesting areas for future research. Conclusions. Research is needed to better understand the impact of compassion fatigue on informal caregivers through qualitative interviews, to identify informal caregivers at risk for compassion fatigue, and to provide an empirical basis for developing nursing interventions for caregivers experiencing compassion fatigue. PMID:22229086

  13. Automated workflows for data curation and standardization of chemical structures for QSAR modeling

    EPA Science Inventory

    Large collections of chemical structures and associated experimental data are publicly available, and can be used to build robust QSAR models for applications in different fields. One common concern is the quality of both the chemical structure information and associated experime...

  14. Information and complexity measures for hydrologic model evaluation

    USDA-ARS?s Scientific Manuscript database

    Hydrological models are commonly evaluated through residual-based performance measures such as the root-mean-square error or efficiency criteria. Such measures, however, do not evaluate the degree of similarity of patterns in simulated and measured time series. The objective of this study was to...

  15. Understanding and Using Informants Reporting Discrepancies of Youth Victimization: A Conceptual Model and Recommendations for Research

    ERIC Educational Resources Information Center

    Goodman, Kimberly L.; De Los Reyes, Andres; Bradshaw, Catherine P.

    2010-01-01

    Discrepancies often occur among informants' reports of various domains of child and family functioning and are particularly common between parent and child reports of youth violence exposure. However, recent work suggests that discrepancies between parent and child reports predict subsequent poorer child outcomes. We propose a preliminary…

  16. A Structural Contingency Theory Model of Library and Technology Partnerships within an Academic Library Information Commons

    ERIC Educational Resources Information Center

    Tuai, Cameron K.

    2011-01-01

    The integration of librarians and technologists to deliver information services represents a new and potentially costly organizational challenge for many library administrators. To understand better how to control the costs of integration, the research presented here will use structural contingency theory to study the coordination of librarians…

  17. Schizophrenia and Depression Co-Morbidity: What We have Learned from Animal Models

    PubMed Central

    Samsom, James N.; Wong, Albert H. C.

    2015-01-01

    Patients with schizophrenia are at an increased risk for the development of depression. Overlap in the symptoms and genetic risk factors between the two disorders suggests a common etiological mechanism may underlie the presentation of comorbid depression in schizophrenia. Understanding these shared mechanisms will be important in informing the development of new treatments. Rodent models are powerful tools for understanding gene function as it relates to behavior. Examining rodent models relevant to both schizophrenia and depression reveals a number of common mechanisms. Current models which demonstrate endophenotypes of both schizophrenia and depression are reviewed here, including models of CUB and SUSHI multiple domains 1, PDZ and LIM domain 5, glutamate Delta 1 receptor, diabetic db/db mice, neuropeptide Y, disrupted in schizophrenia 1, and its interacting partners, reelin, maternal immune activation, and social isolation. Neurotransmission, brain connectivity, the immune system, the environment, and metabolism emerge as potential common mechanisms linking these models and potentially explaining comorbid depression in schizophrenia. PMID:25762938

  18. The fossilized birth–death process for coherent calibration of divergence-time estimates

    PubMed Central

    Heath, Tracy A.; Huelsenbeck, John P.; Stadler, Tanja

    2014-01-01

    Time-calibrated species phylogenies are critical for addressing a wide range of questions in evolutionary biology, such as those that elucidate historical biogeography or uncover patterns of coevolution and diversification. Because molecular sequence data are not informative on absolute time, external data—most commonly, fossil age estimates—are required to calibrate estimates of species divergence dates. For Bayesian divergence time methods, the common practice for calibration using fossil information involves placing arbitrarily chosen parametric distributions on internal nodes, often disregarding most of the information in the fossil record. We introduce the “fossilized birth–death” (FBD) process—a model for calibrating divergence time estimates in a Bayesian framework, explicitly acknowledging that extant species and fossils are part of the same macroevolutionary process. Under this model, absolute node age estimates are calibrated by a single diversification model and arbitrary calibration densities are not necessary. Moreover, the FBD model allows for inclusion of all available fossils. We performed analyses of simulated data and show that node age estimation under the FBD model results in robust and accurate estimates of species divergence times with realistic measures of statistical uncertainty, overcoming major limitations of standard divergence time estimation methods. We used this model to estimate the speciation times for a dataset composed of all living bears, indicating that the genus Ursus diversified in the Late Miocene to Middle Pliocene. PMID:25009181

  19. Dissecting effects of complex mixtures: who's afraid of informative priors?

    PubMed

    Thomas, Duncan C; Witte, John S; Greenland, Sander

    2007-03-01

    Epidemiologic studies commonly investigate multiple correlated exposures, which are difficult to analyze appropriately. Hierarchical modeling provides a promising approach for analyzing such data by adding a higher-level structure or prior model for the exposure effects. This prior model can incorporate additional information on similarities among the correlated exposures and can be parametric, semiparametric, or nonparametric. We discuss the implications of applying these models and argue for their expanded use in epidemiology. While a prior model adds assumptions to the conventional (first-stage) model, all statistical methods (including conventional methods) make strong intrinsic assumptions about the processes that generated the data. One should thus balance prior modeling assumptions against assumptions of validity, and use sensitivity analyses to understand their implications. In doing so - and by directly incorporating into our analyses information from other studies or allied fields - we can improve our ability to distinguish true causes of disease from noise and bias.

  20. Modeling Common-Sense Decisions in Artificial Intelligence

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2010-01-01

    A methodology has been conceived for efficient synthesis of dynamical models that simulate common-sense decision-making processes. This methodology is intended to contribute to the design of artificial-intelligence systems that could imitate human common-sense decision making or assist humans in making correct decisions in unanticipated circumstances. This methodology is a product of continuing research on mathematical models of the behaviors of single- and multi-agent systems known in biology, economics, and sociology, ranging from a single-cell organism at one extreme to the whole of human society at the other extreme. Earlier results of this research were reported in several prior NASA Tech Briefs articles, the three most recent and relevant being Characteristics of Dynamics of Intelligent Systems (NPO-21037), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 48; Self-Supervised Dynamical Systems (NPO-30634), NASA Tech Briefs, Vol. 27, No. 3 (March 2003), page 72; and Complexity for Survival of Living Systems (NPO-43302), NASA Tech Briefs, Vol. 33, No. 7 (July 2009), page 62. The methodology involves the concepts reported previously, albeit viewed from a different perspective. One of the main underlying ideas is to extend the application of physical first principles to the behaviors of living systems. Models of motor dynamics are used to simulate the observable behaviors of systems or objects of interest, and models of mental dynamics are used to represent the evolution of the corresponding knowledge bases. For a given system, the knowledge base is modeled in the form of probability distributions and the mental dynamics is represented by models of the evolution of the probability densities or, equivalently, models of flows of information. Autonomy is imparted to the decision-making process by feedback from mental to motor dynamics. This feedback replaces unavailable external information with information stored in the internal knowledge base. Representation of the dynamical models in a parameterized form reduces the task of common-sense-based decision making to a solution of the following hetero-associative-memory problem: store a set of m predetermined stochastic processes given by their probability distributions in such a way that, when presented with an unexpected change in the form of an input out of the set of M inputs, the coupled motor-mental dynamics converges to the corresponding one of the m pre-assigned stochastic processes, and a sample of this process represents the decision.

  1. Using Instrumental Variable (IV) Tests to Evaluate Model Specification in Latent Variable Structural Equation Models*

    PubMed Central

    Kirby, James B.; Bollen, Kenneth A.

    2009-01-01

    Structural Equation Modeling with latent variables (SEM) is a powerful tool for social and behavioral scientists, combining many of the strengths of psychometrics and econometrics into a single framework. The most common estimator for SEM is the full-information maximum likelihood estimator (ML), but there is continuing interest in limited information estimators because of their distributional robustness and their greater resistance to structural specification errors. However, the literature discussing model fit for limited information estimators for latent variable models is sparse compared to that for full information estimators. We address this shortcoming by providing several specification tests based on the 2SLS estimator for latent variable structural equation models developed by Bollen (1996). We explain how these tests can be used not only to identify a misspecified model but also to help diagnose the source of misspecification within a model. We present and discuss results from a Monte Carlo experiment designed to evaluate the finite sample properties of these tests. Our findings suggest that the 2SLS tests successfully identify most misspecified models, even those with modest misspecification, and that they provide researchers with information that can help diagnose the source of misspecification. PMID:20419054
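    For intuition, a textbook 2SLS estimator with a Sargan overidentification test is sketched below. This generic form is an illustration only; Bollen's latent-variable 2SLS tests are related in spirit but use model-implied instruments for the latent structure:

```python
import numpy as np
from scipy import stats

def two_sls_with_sargan(y, X, Z):
    """Two-stage least squares with a Sargan overidentification test.
    y: (n,) outcome, X: (n, k) endogenous regressors,
    Z: (n, m) instruments with m > k (overidentified case)."""
    n, m = Z.shape
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)       # projection onto instruments
    beta = np.linalg.solve(X.T @ Pz @ X, X.T @ Pz @ y)
    resid = y - X @ beta
    # Sargan statistic: n times R^2 of residuals regressed on instruments
    sargan = n * (resid @ Pz @ resid) / (resid @ resid)
    df = m - X.shape[1]                          # overidentifying restrictions
    p_value = stats.chi2.sf(sargan, df)          # small p flags misspecification
    return beta, sargan, p_value
```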

  2. Model-Based Safety Analysis

    NASA Technical Reports Server (NTRS)

    Joshi, Anjali; Heimdahl, Mats P. E.; Miller, Steven P.; Whalen, Mike W.

    2006-01-01

    System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error free. In fact, the lack of precise models of the system architecture and its failure modes often forces the safety analysts to devote much of their effort to gathering architectural details about the system behavior from several sources and embedding this information in the safety artifacts such as the fault trees. This report describes Model-Based Safety Analysis, an approach in which the system and safety engineers share a common system model created using a model-based development process. By extending the system model with a fault model as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the safety analysis. We believe that by using a common model for both system and safety engineering and automating parts of the safety analysis, we can both reduce the cost and improve the quality of the safety analysis. Here we present our vision of model-based safety analysis and discuss the advantages and challenges in making this approach practical.

  3. Cognitive bias in back pain patients attending osteopathy: testing the enmeshment model in reference to future thinking.

    PubMed

    Read, Jessica; Pincus, Tamar

    2004-12-01

    Depressive symptoms are common in chronic pain. Previous research has found differences in information-processing biases in depressed pain patients and depressed people without pain. The schema enmeshment model of pain (SEMP) has been proposed to explain chronic pain patients' information-processing biases. Negative future thinking is common in depression but has not been explored in relation to chronic pain and information-processing models. The study aimed to test the SEMP with reference to future thinking. An information-processing paradigm compared endorsement and recall bias between depressed and non-depressed chronic low back pain patients and control participants. Twenty-five depressed and 35 non-depressed chronic low back pain patients and 25 control participants (student osteopaths) were recruited from an osteopathy practice. Participants were asked to endorse positive and negative ill-health, depression-related, and neutral (control) adjectives, encoded in reference to either current or future time-frame. Incidental recall of the adjectives was then tested. While the expected hypothesis of a recall bias by depressed pain patients towards ill-health stimuli in the current condition was confirmed, the recall bias was not present in the future condition. Additionally, patterns of endorsement and recall bias differed. Results extend understanding of future thinking in chronic pain within the context of the SEMP.

  4. 20180318 - Automated workflows for data curation and standardization of chemical structures for QSAR modeling (ACS Spring)

    EPA Science Inventory

    Large collections of chemical structures and associated experimental data are publicly available, and can be used to build robust QSAR models for applications in different fields. One common concern is the quality of both the chemical structure information and associated experime...

  5. Development of a Quantitative Model Incorporating Key Events in a Hepatoxic Mode of Action to Predict Tumor Incidence

    EPA Science Inventory

    Biologically-Based Dose Response (BBDR) modeling of environmental pollutants can be utilized to inform the mode of action (MOA) by which compounds elicit adverse health effects. Chemicals that produce tumors are typically described as either genotoxic or non-genotoxic. One common...

  6. Using Robust Variance Estimation to Combine Multiple Regression Estimates with Meta-Analysis

    ERIC Educational Resources Information Center

    Williams, Ryan

    2013-01-01

    The purpose of this study was to explore the use of robust variance estimation for combining commonly specified multiple regression models and for combining sample-dependent focal slope estimates from diversely specified models. The proposed estimator obviates traditionally required information about the covariance structure of the dependent…

  7. What's a Parent to Do? Coping with Crisis.

    ERIC Educational Resources Information Center

    Coleman, Trudy; And Others

    This instructional packet is one of a series of five modules that emphasize a systematic decision-making model for common problematic situations. The steps of the model are identifying the problem, gathering information, developing and assessing alternatives, implementing a solution, and evaluating and modifying the solution. Aimed at adult basic…

  8. An automated curation procedure for addressing chemical errors and inconsistencies in public datasets used in QSAR modeling

    EPA Science Inventory

    Increasing availability of large collections of chemical structures and associated experimental data provides an opportunity to build robust QSAR models for applications in different fields. One common concern is the quality of both the chemical structure information and associat...

  9. Design, Development, and Initial Evaluation of a Terminology for Clinical Decision Support and Electronic Clinical Quality Measurement.

    PubMed

    Lin, Yanhua; Staes, Catherine J; Shields, David E; Kandula, Vijay; Welch, Brandon M; Kawamoto, Kensaku

    2015-01-01

    When coupled with a common information model, a common terminology for clinical decision support (CDS) and electronic clinical quality measurement (eCQM) could greatly facilitate the distributed development and sharing of CDS and eCQM knowledge resources. To enable such scalable knowledge authoring and sharing, we systematically developed an extensible and standards-based terminology for CDS and eCQM in the context of the HL7 Virtual Medical Record (vMR) information model. The development of this terminology entailed three steps: (1) systematic, physician-curated concept identification from sources such as the Health Information Technology Standards Panel (HITSP) and the SNOMED-CT CORE problem list; (2) concept de-duplication leveraging the Unified Medical Language System (UMLS) MetaMap and Metathesaurus; and (3) systematic concept naming using standard terminologies and heuristic algorithms. This process generated 3,046 concepts spanning 68 domains. Evaluation against representative CDS and eCQM resources revealed approximately 50-70% concept coverage, indicating the need for continued expansion of the terminology.

  10. An approach for utilizing clinical statements in HL7 RIM to evaluate eligibility criteria.

    PubMed

    Bache, Richard; Daniel, Christel; James, Julie; Hussain, Sajjad; McGilchrist, Mark; Delaney, Brendan; Taweel, Adel

    2014-01-01

    The HL7 RIM (Reference Information Model) is a commonly used standard for the exchange of clinical data and can be employed for integrating the patient care and clinical research domains. Yet it is not sufficiently well specified to ensure a canonical representation of structured clinical data when used for the automated evaluation of eligibility criteria from a clinical trial protocol. We present an approach to further constrain the RIM to create a common information model to hold clinical data. To demonstrate our approach, we identified 132 distinct data elements from 10 rich clinical trials. We then defined a taxonomy to (i) identify the types of data elements that would need to be stored and (ii) define the types of predicate that would be used to evaluate them. This informed the definition of a pattern used to represent the data, which was shown to be sufficient for storing and evaluating the clinical statements required by the trials.
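    The flavor of that taxonomy, statement types plus predicate types evaluated over them, can be shown with a toy example. The classes and predicates below are illustrative, not the constrained RIM pattern itself:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ClinicalStatement:
    """A deliberately small stand-in for a constrained clinical statement:
    a coded observation with an optional value and effective time."""
    code: str                    # e.g., a SNOMED-CT or LOINC code
    value: Optional[float] = None
    effective: Optional[date] = None

def exists(statements, code):
    """Presence predicate: the record contains the concept at all."""
    return any(s.code == code for s in statements)

def last_value_below(statements, code, limit):
    """Value-comparison predicate on the most recent matching statement."""
    matches = sorted((s for s in statements
                      if s.code == code and s.value is not None
                      and s.effective is not None),
                     key=lambda s: s.effective)
    return bool(matches) and matches[-1].value < limit
```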

  11. Integrating competing dimensional models of personality: linking the SNAP, TCI, and NEO using Item Response Theory.

    PubMed

    Stepp, Stephanie D; Yu, Lan; Miller, Joshua D; Hallquist, Michael N; Trull, Timothy J; Pilkonis, Paul A

    2012-04-01

    Mounting evidence suggests that several inventories assessing both normal personality and personality disorders measure common dimensional personality traits (i.e., Antagonism, Constraint, Emotional Instability, Extraversion, and Unconventionality), albeit providing unique information along the underlying trait continuum. We used Widiger and Simonsen's (2005) pantheoretical integrative model of dimensional personality assessment as a guide to create item pools. We then used Item Response Theory (IRT) to compare the assessment of these five personality traits across three established dimensional measures of personality: the Schedule for Nonadaptive and Adaptive Personality (SNAP), the Temperament and Character Inventory (TCI), and the Revised NEO Personality Inventory (NEO PI-R). We found that items from each inventory map onto these five common personality traits in predictable ways. The IRT analyses, however, documented considerable variability in the item and test information derived from each inventory. Our findings support the notion that the integration of multiple perspectives will provide greater information about personality while minimizing the weaknesses of any single instrument.

  12. Mathematical models utilized in the retrieval of displacement information encoded in fringe patterns

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Lamberti, Luciano

    2016-02-01

    All techniques that measure displacements, whether in the range of visible optics or any other form of field method, require the presence of a carrier signal. A carrier signal is a wave form modulated (modified) by an input: the deformation of the medium. A carrier is tagged to the medium under analysis and deforms with the medium. The wave form must be known both in the unmodulated and the modulated conditions. There are two basic mathematical models that can be utilized to decode the information contained in the carrier: phase modulation or frequency modulation; the two are closely connected. Basic problems connected to the detection and recovery of displacement information that are common to all optical techniques will be analyzed in this paper, focusing on the general theory common to all the methods independently of the type of signal utilized. The aspects discussed are those that have practical impact on the process of data gathering and data processing.
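    The phase-modulation model has a standard form in fringe analysis: the carrier intensity is a cosine whose phase is perturbed by the deformation, and the displacement is recovered from that phase. An illustrative statement for a linear carrier of spatial frequency f_0 and pitch p (the symbols are generic, not the paper's notation):

```latex
I(x) \;=\; a(x) \;+\; b(x)\,\cos\!\bigl(2\pi f_0\,x + \phi(x)\bigr),
\qquad
u(x) \;=\; \frac{p}{2\pi}\,\phi(x)
```

    Frequency modulation is the closely related view: the local spatial frequency is the derivative of the total phase, so the same displacement information can be read from frequency shifts instead.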

  13. Integrating Competing Dimensional Models of Personality: Linking the SNAP, TCI, and NEO Using Item Response Theory

    PubMed Central

    Stepp, Stephanie D.; Yu, Lan; Miller, Joshua D.; Hallquist, Michael N.; Trull, Timothy J.; Pilkonis, Paul A.

    2013-01-01

    Mounting evidence suggests that several inventories assessing both normal personality and personality disorders measure common dimensional personality traits (i.e., Antagonism, Constraint, Emotional Instability, Extraversion, and Unconventionality), albeit providing unique information along the underlying trait continuum. We used Widiger and Simonsen’s (2005) pantheoretical integrative model of dimensional personality assessment as a guide to create item pools. We then used Item Response Theory (IRT) to compare the assessment of these five personality traits across three established dimensional measures of personality: the Schedule for Nonadaptive and Adaptive Personality (SNAP), the Temperament and Character Inventory (TCI), and the Revised NEO Personality Inventory (NEO PI-R). We found that items from each inventory map onto these five common personality traits in predictable ways. The IRT analyses, however, documented considerable variability in the item and test information derived from each inventory. Our findings support the notion that the integration of multiple perspectives will provide greater information about personality while minimizing the weaknesses of any single instrument. PMID:22452759

  14. Design, Development, and Initial Evaluation of a Terminology for Clinical Decision Support and Electronic Clinical Quality Measurement

    PubMed Central

    Lin, Yanhua; Staes, Catherine J; Shields, David E; Kandula, Vijay; Welch, Brandon M; Kawamoto, Kensaku

    2015-01-01

    When coupled with a common information model, a common terminology for clinical decision support (CDS) and electronic clinical quality measurement (eCQM) could greatly facilitate the distributed development and sharing of CDS and eCQM knowledge resources. To enable such scalable knowledge authoring and sharing, we systematically developed an extensible and standards-based terminology for CDS and eCQM in the context of the HL7 Virtual Medical Record (vMR) information model. The development of this terminology entailed three steps: (1) systematic, physician-curated concept identification from sources such as the Health Information Technology Standards Panel (HITSP) and the SNOMED-CT CORE problem list; (2) concept de-duplication leveraging the Unified Medical Language System (UMLS) MetaMap and Metathesaurus; and (3) systematic concept naming using standard terminologies and heuristic algorithms. This process generated 3,046 concepts spanning 68 domains. Evaluation against representative CDS and eCQM resources revealed approximately 50–70% concept coverage, indicating the need for continued expansion of the terminology. PMID:26958220

  15. VAVUQ, Python and Matlab freeware for Verification and Validation, Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Courtney, J. E.; Zamani, K.; Bombardelli, F. A.; Fleenor, W. E.

    2015-12-01

    A package of scripts is presented for automated Verification and Validation (V&V) and Uncertainty Quantification (UQ) for engineering codes that approximate Partial Differential Equations (PDEs). The code post-processes model results to produce V&V and UQ information. This information can be used to assess model performance. Automated information on code performance can allow for a systematic methodology to assess the quality of model approximations. The software implements common and accepted code verification schemes. The software uses the Method of Manufactured Solutions (MMS), the Method of Exact Solution (MES), Cross-Code Verification, and Richardson Extrapolation (RE) for solution (calculation) verification. It also includes common statistical measures that can be used for model skill assessment. Complete RE can be conducted for complex geometries by implementing high-order non-oscillating numerical interpolation schemes within the software. Model approximation uncertainty is quantified by calculating lower and upper bounds of numerical error from the RE results. The software is also able to calculate the Grid Convergence Index (GCI), and to handle adaptive meshes and models that implement mixed order schemes. Four examples are provided to demonstrate the use of the software for code and solution verification, model validation and uncertainty quantification. The software is used for code verification of a mixed-order compact difference heat transport solver; the solution verification of a 2D shallow-water-wave solver for tidal flow modeling in estuaries; the model validation of a two-phase flow computation in a hydraulic jump compared to experimental data; and numerical uncertainty quantification for 3D CFD modeling of the flow patterns in a Gust erosion chamber.
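    The Richardson extrapolation and GCI calculations such a package automates follow standard verification formulas. A minimal sketch, assuming three systematically refined grids and the customary three-grid safety factor of 1.25 (the numerical values in the example are hypothetical):

```python
import numpy as np

def observed_order_and_gci(f1, f2, f3, r, fs=1.25):
    """Richardson-extrapolation estimate of the observed order of accuracy p
    and the fine-grid Grid Convergence Index (GCI), for solutions f1 (fine),
    f2 (medium), f3 (coarse) on grids with constant refinement ratio r."""
    p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)  # observed order
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)        # extrapolated value
    rel_err = abs((f1 - f2) / f1)                  # fine-medium relative error
    gci_fine = fs * rel_err / (r**p - 1.0)         # numerical-error band
    return p, f_exact, gci_fine

# Example with hypothetical grid-converging values and refinement ratio 2
p, f_ex, gci = observed_order_and_gci(0.971, 0.962, 0.927, r=2.0)
```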

  16. An Integrated Approach for Physical and Cyber Security Risk Assessment: The U.S. Army Corps of Engineers Common Risk Model for Dams

    DTIC Science & Technology

    2016-07-01

    Common Risk Model for Dams (CRM-D) Methodology," for the Director, Cost Assessment and Program Evaluation, Office of Secretary of Defense and the... for Dams (CRM-D), developed by the U.S. Army Corps of Engineers (USACE) in collaboration with the Institute for Defense Analyses (IDA) and the U.S... and cyber security risks across a portfolio of dams, and informing decisions on how to mitigate those risks. The CRM-D can effectively quantify the

  17. Design considerations, architecture, and use of the Mini-Sentinel distributed data system.

    PubMed

    Curtis, Lesley H; Weiner, Mark G; Boudreau, Denise M; Cooper, William O; Daniel, Gregory W; Nair, Vinit P; Raebel, Marsha A; Beaulieu, Nicolas U; Rosofsky, Robert; Woodworth, Tiffany S; Brown, Jeffrey S

    2012-01-01

    We describe the design, implementation, and use of a large, multiorganizational distributed database developed to support the Mini-Sentinel Pilot Program of the US Food and Drug Administration (FDA). As envisioned by the US FDA, this implementation will inform and facilitate the development of an active surveillance system for monitoring the safety of medical products (drugs, biologics, and devices) in the USA. A common data model was designed to address the priorities of the Mini-Sentinel Pilot and to leverage the experience and data of participating organizations and data partners. A review of existing common data models informed the process. Each participating organization designed a process to extract, transform, and load its source data, applying the common data model to create the Mini-Sentinel Distributed Database. Transformed data were characterized and evaluated using a series of programs developed centrally and executed locally by participating organizations. A secure communications portal was designed to facilitate queries of the Mini-Sentinel Distributed Database and transfer of confidential data, analytic tools were developed to facilitate rapid response to common questions, and distributed querying software was implemented to facilitate rapid querying of summary data. As of July 2011, information on 99,260,976 health plan members was included in the Mini-Sentinel Distributed Database. The database includes 316,009,067 person-years of observation time, with members contributing, on average, 27.0 months of observation time. All data partners have successfully executed distributed code and returned findings to the Mini-Sentinel Operations Center. This work demonstrates the feasibility of building a large, multiorganizational distributed data system in which organizations retain possession of their data that are used in an active surveillance system. Copyright © 2012 John Wiley & Sons, Ltd.

  18. OntoTrader: An Ontological Web Trading Agent Approach for Environmental Information Retrieval

    PubMed Central

    Iribarne, Luis; Padilla, Nicolás; Ayala, Rosa; Asensio, José A.; Criado, Javier

    2014-01-01

    Modern Web-based Information Systems (WIS) are becoming increasingly necessary to provide support for users who are in different places with different types of information, by facilitating their access to the information, decision making, workgroups, and so forth. The design of these systems requires the use of standardized methods and techniques that enable a common vocabulary to be defined to represent the underlying knowledge. Thus, mediation elements such as traders enrich the interoperability of web components in open distributed systems. These traders must operate with other third-party traders and/or agents in the system, which must also use a common vocabulary for communication between them. This paper presents the OntoTrader architecture, an Ontological Web Trading agent based on the OMG ODP trading standard. It also presents the ontology needed by some system agents to communicate with the trading agent and the behavioral framework for the SOLERES OntoTrader agent, an Environmental Management Information System (EMIS). This framework implements a “Query-Searching/Recovering-Response” information retrieval model using a trading service, SPARQL notation, and the JADE platform. The paper also presents reflection, delegation, and federation mediation models and describes the formalization, an experimental testing environment in three scenarios, and a tool which allows our proposal to be evaluated and validated. PMID:24977211

  19. OntoTrader: an ontological Web trading agent approach for environmental information retrieval.

    PubMed

    Iribarne, Luis; Padilla, Nicolás; Ayala, Rosa; Asensio, José A; Criado, Javier

    2014-01-01

    Modern Web-based Information Systems (WIS) are becoming increasingly necessary to provide support for users who are in different places with different types of information, by facilitating their access to the information, decision making, workgroups, and so forth. The design of these systems requires the use of standardized methods and techniques that enable a common vocabulary to be defined to represent the underlying knowledge. Thus, mediation elements such as traders enrich the interoperability of web components in open distributed systems. These traders must operate with other third-party traders and/or agents in the system, which must also use a common vocabulary for communication between them. This paper presents the OntoTrader architecture, an Ontological Web Trading agent based on the OMG ODP trading standard. It also presents the ontology needed by some system agents to communicate with the trading agent and the behavioral framework for the SOLERES OntoTrader agent, an Environmental Management Information System (EMIS). This framework implements a "Query-Searching/Recovering-Response" information retrieval model using a trading service, SPARQL notation, and the JADE platform. The paper also presents reflection, delegation, and federation mediation models and describes the formalization, an experimental testing environment in three scenarios, and a tool which allows our proposal to be evaluated and validated.

  20. Strong regularities in world wide web surfing

    PubMed

    Huberman; Pirolli; Pitkow; Lukose

    1998-04-03

    One of the most common modes of accessing information in the World Wide Web is surfing from one document to another along hyperlinks. Several large empirical studies have revealed common patterns of surfing behavior. A model that assumes that users make a sequence of decisions to proceed to another page, continuing as long as the value of the current page exceeds some threshold, yields the probability distribution for the number of pages that a user visits within a given Web site. This model was verified by comparing its predictions with detailed measurements of surfing patterns. The model also explains the observed Zipf-like distributions in page hits observed at Web sites.
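
    A minimal simulation makes the threshold-stopping model concrete: page value evolves as a random walk, and the session ends when the value falls below a threshold. The drift, noise, and threshold values here are illustrative assumptions, not the paper's fitted parameters.

```python
# Sketch of the surfing model: the user continues clicking while the value of
# the current page, which follows a random walk, stays above a threshold.
import random

def session_depth(start_value=1.0, drift=-0.05, noise=0.3, threshold=0.0, max_pages=10_000):
    value, pages = start_value, 1
    while value > threshold and pages < max_pages:
        value += drift + random.gauss(0, noise)  # value of the next page
        pages += 1
    return pages

depths = [session_depth() for _ in range(50_000)]
# The distribution of depths is heavily right-skewed, consistent with the
# Zipf-like page-hit distributions the paper reports.
print(sum(depths) / len(depths), max(depths))
```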

  1. Framework for a clinical information system.

    PubMed

    Van de Velde, R

    2000-01-01

    The current status of our work towards the design and implementation of a reference architecture for a Clinical Information System is presented. This architecture has been developed and implemented based on components following a strong underlying conceptual and technological model. Common Object Request Broker Architecture (CORBA) and n-tier technology are used, featuring centralised and departmental clinical information systems as the back-end store for all clinical data. Servers located in the 'middle' tier apply the clinical (business) model and application rules to communicate with so-called 'thin client' workstations. The main characteristics are the focus on modelling and reuse of both data and business logic, as there is a shift away from data and functional modelling towards object modelling. Scalability as well as adaptability to constantly changing requirements via component-driven computing are the main reasons for that approach.

  2. MMM: A toolbox for integrative structure modeling.

    PubMed

    Jeschke, Gunnar

    2018-01-01

    Structural characterization of proteins and their complexes may require integration of restraints from various experimental techniques. MMM (Multiscale Modeling of Macromolecules) is a Matlab-based open-source modeling toolbox for this purpose, with a particular emphasis on distance distribution restraints obtained from electron paramagnetic resonance experiments on spin-labelled proteins and nucleic acids and their combination with atomistic structures of domains or whole protomers, small-angle scattering data, secondary structure information, homology information, and elastic network models. MMM integrates not only various types of restraints but also various existing modeling tools, by providing a common graphical user interface to them. The types of restraints that can support such modeling and the available model types are illustrated by recent application examples. © 2017 The Protein Society.

  3. An Optical Model for Estimating the Underwater Light Field from Remote Sensing

    NASA Technical Reports Server (NTRS)

    Liu, Cheng-Chien; Miller, Richard L.

    2002-01-01

    A model of the wavelength-integrated scalar irradiance for a vertically homogeneous water column is developed. It runs twenty thousand times faster than simulations obtained using the full Hydrolight code and limits the percentage error to less than 3.7%. Both the distribution of incident sky radiance and a wind-roughened surface are integrated in the model. Our model removes common limitations of earlier models and can be applied to waters with any composition of the inherent optical properties. Implementation of this new model, as well as the ancillary information required for processing global-scale satellite data, is discussed. This new model is fast, accurate, and flexible, and therefore provides important information about the underwater light field from remote sensing.

  4. The HTA Core Model®-10 Years of Developing an International Framework to Share Multidimensional Value Assessment.

    PubMed

    Kristensen, Finn Børlum; Lampe, Kristian; Wild, Claudia; Cerbo, Marina; Goettsch, Wim; Becla, Lidia

    2017-02-01

    The HTA Core Model® as a science-based framework for assessing dimensions of value was developed as a part of the European network for Health Technology Assessment project in the period 2006 to 2008 to facilitate production and sharing of health technology assessment (HTA) information, such as evidence on efficacy and effectiveness and patient aspects, to inform decisions. It covers clinical value as well as organizational, economic, and patient aspects of technologies and has been field-tested in two consecutive joint actions in the period 2010 to 2016. A large number of HTA institutions were involved in the work. The model has undergone revisions and improvement after iterations of piloting and can be used in a local, national, or international context to produce structured HTA information that can be taken forward by users into their own frameworks to fit their specific needs when informing decisions on technology. The model has a broad scope and offers a common ground to various stakeholders by offering a standard structure and a transparent set of proposed HTA questions. It consists of three main components: 1) the HTA ontology, 2) methodological guidance, and 3) a common reporting structure. It covers domains such as effectiveness, safety, and economics, and also includes domains covering organizational, patient, social, and legal aspects. There is a full model and a focused rapid relative effectiveness assessment model, and a third joint action will continue until 2020. The HTA Core Model is now available for everyone around the world as a framework for assessing value. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  5. An Empirical Human Controller Model for Preview Tracking Tasks.

    PubMed

    van der El, Kasper; Pool, Daan M; Damveld, Herman J; van Paassen, Marinus Rene M; Mulder, Max

    2016-11-01

    Real-life tracking tasks often show preview information to the human controller about the future track to follow. The effect of preview on manual control behavior is still relatively unknown. This paper proposes a generic operator model for preview tracking, empirically derived from experimental measurements. Conditions included pursuit tracking, i.e., without preview information, and tracking with 1 s of preview. Controlled element dynamics varied between gain, single integrator, and double integrator. The model is derived in the frequency domain, after application of a black-box system identification method based on Fourier coefficients. Parameter estimates are obtained to assess the validity of the model in both the time domain and frequency domain. Measured behavior in all evaluated conditions can be captured with the commonly used quasi-linear operator model for compensatory tracking, extended with two viewpoints of the previewed target. The derived model provides new insights into how human operators use preview information in tracking tasks.

  6. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
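
    Although the abstract is truncated, the basic Monte Carlo recipe it refers to is standard and can be sketched: simulate data under an assumed regression model, fit the model, and record how often the effect of interest is detected at each candidate sample size. The effect size, alpha level, and simple no-intercept least-squares fit below are illustrative assumptions (the cited work uses R; Python is used here for consistency with the other sketches).

```python
# Monte Carlo sample-size determination for a single regression coefficient.
import numpy as np

rng = np.random.default_rng(1)

def power_at(n, beta=0.3, sims=2000):
    hits = 0
    for _ in range(sims):
        x = rng.normal(size=n)
        y = beta * x + rng.normal(size=n)
        # Least-squares slope and its standard error (mean-zero simulated data,
        # so the intercept is omitted for brevity).
        b = np.sum(x * y) / np.sum(x * x)
        resid = y - b * x
        se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum(x * x))
        if abs(b / se) > 1.96:  # approximate two-sided test at alpha = .05
            hits += 1
    return hits / sims

for n in (50, 100, 150):
    # Pick the smallest n whose estimated power exceeds the target, e.g. 0.80.
    print(n, power_at(n))
```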

  7. Information sharing and sorting in a community

    NASA Astrophysics Data System (ADS)

    Bhattacherjee, Biplab; Manna, S. S.; Mukherjee, Animesh

    2013-06-01

    We present the results of a detailed numerical study of a model for the sharing and sorting of information in a community consisting of a large number of agents. The information gathering takes place in a sequence of mutual bipartite interactions where randomly selected pairs of agents communicate with each other to enhance their knowledge and sort out the common information. Although our model is less restricted compared to the well-established naming game, the numerical results strongly indicate that the whole set of exponents characterizing this model are different from those of the naming game and they assume nontrivial values. Finally, it appears that in analogy to the emergence of clusters in the phenomenon of percolation, one can define clusters of agents here having the same information. We have studied in detail the growth of the largest cluster in this article and performed its finite-size scaling analysis.
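
    A toy implementation of the interaction rule clarifies the model: at each step a randomly chosen pair of agents merges their information sets, and a cluster is a set of agents holding identical information. The population size and step count below are arbitrary illustrative choices, not the paper's simulation parameters.

```python
# Pairwise information sharing and the growth of the largest cluster of
# agents with identical information.
import random
from collections import Counter

N, STEPS = 200, 2000
info = [{i} for i in range(N)]          # each agent starts with unique information

for _ in range(STEPS):
    a, b = random.sample(range(N), 2)   # bipartite interaction of a random pair
    shared = info[a] | info[b]          # both agents sort out the common information
    info[a] = info[b] = shared

# Cluster = agents whose information content is exactly the same.
clusters = Counter(frozenset(s) for s in info)
print("largest cluster size:", max(clusters.values()))
```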

  8. PIMMS tools for capturing metadata about simulations

    NASA Astrophysics Data System (ADS)

    Pascoe, Charlotte; Devine, Gerard; Tourte, Gregory; Pascoe, Stephen; Lawrence, Bryan; Barjat, Hannah

    2013-04-01

    PIMMS (Portable Infrastructure for the Metafor Metadata System) provides a method for consistent and comprehensive documentation of modelling activities that enables the sharing of simulation data and model configuration information. The aim of PIMMS is to package the metadata infrastructure developed by Metafor for CMIP5 so that it can be used by climate modelling groups in UK Universities. PIMMS tools capture information about simulations from the design of experiments to the implementation of experiments via simulations that run models. PIMMS uses the Metafor methodology, which consists of a Common Information Model (CIM), Controlled Vocabularies (CV) and software tools. PIMMS software tools provide for the creation and consumption of CIM content via a web services infrastructure and portal developed by the ES-DOC community. PIMMS metadata integrates with the ESGF data infrastructure via the mapping of vocabularies onto ESGF facets. There are three paradigms of PIMMS metadata collection: Model Intercomparison Projects (MIPs), where a standard set of questions is asked of all models which perform standard sets of experiments; disciplinary-level metadata collection, where a standard set of questions is asked of all models but experiments are specified by users; and bespoke metadata creation, where users define questions about both models and experiments. Examples will be shown of how PIMMS has been configured to suit each of these three paradigms. In each case, PIMMS allows users to provide additional metadata beyond that which is asked for in an initial deployment. The primary target for PIMMS is the UK climate modelling community, where it is common practice to reuse model configurations from other researchers. This culture of collaboration exists in part because climate models are very complex, with many variables that can be modified. Therefore, it has become common practice to begin a series of experiments by using another climate model configuration as a starting point. Usually this other configuration is provided by a researcher in the same research group or by a previous collaborator with whom there is an existing scientific relationship. Some efforts have been made at the university department level to create documentation, but there is wide diversity in the scope and purpose of this information. The consistent and comprehensive documentation enabled by PIMMS will enable the wider sharing of climate model data and configuration information. The PIMMS methodology assumes an initial effort to document standard model configurations. Once these descriptions have been created, users need only describe the specific way in which their model configuration is different from the standard. Thus the documentation burden on the user is specific to the experiment they are performing and fits easily into the workflow of doing their science. PIMMS metadata is independent of data and as such is ideally suited for documenting model development. PIMMS provides a framework for sharing information about failed model configurations for which data are not kept: the negative results that do not appear in the scientific literature. PIMMS is a UK project funded by JISC, The University of Reading, The University of Bristol and STFC.

  9. Essays on Information Technology (IT), Governance and Clouds

    ERIC Educational Resources Information Center

    Vithayathil, Joseph

    2013-01-01

    Why do some firms organize their IT departments as profit centers whereas other firms organize IT as a cost center? Due to information asymmetry regarding the cost and demand for IT, the firm is unable to achieve the first best outcome in terms of optimizing the value from IT services. Two commonly used organizational models for the IT department…

  10. The Risk-Informed Materials Management (RIMM) Tool System for Determining Safe-Levels of Contaminated Materials Managed on the Land

    EPA Science Inventory

    EPA’s Risk-Informed Materials Management (RIMM) tool system is a modeling approach that helps risk assessors evaluate the safety of managing raw, reused, or waste material streams via a variety of common scenarios (e.g., application to farms, use as a component in road cons...

  11. Beyond Keyword Search: Representations and Models for Personalization

    ERIC Educational Resources Information Center

    El-Arini, Khalid

    2013-01-01

    We live in an era of information overload. From online news to online shopping to scholarly research, we are inundated with a torrent of information on a daily basis. With our limited time, money and attention, we often struggle to extract actionable knowledge from this deluge of data. A common approach for addressing this challenge is…

  12. The caCORE Software Development Kit: Streamlining construction of interoperable biomedical information services

    PubMed Central

    Phillips, Joshua; Chilukuri, Ram; Fragoso, Gilberto; Warzel, Denise; Covitz, Peter A

    2006-01-01

    Background: Robust, programmatically accessible biomedical information services that syntactically and semantically interoperate with other resources are challenging to construct. Such systems require the adoption of common information models, data representations and terminology standards as well as documented application programming interfaces (APIs). The National Cancer Institute (NCI) developed the cancer common ontologic representation environment (caCORE) to provide the infrastructure necessary to achieve interoperability across the systems it develops or sponsors. The caCORE Software Development Kit (SDK) was designed to provide developers both within and outside the NCI with the tools needed to construct such interoperable software systems. Results: The caCORE SDK requires a Unified Modeling Language (UML) tool to begin the development workflow with the construction of a domain information model in the form of a UML Class Diagram. Models are annotated with concepts and definitions from a description logic terminology source using the Semantic Connector component. The annotated model is registered in the Cancer Data Standards Repository (caDSR) using the UML Loader component. System software is automatically generated using the Codegen component, which produces middleware that runs on an application server. The caCORE SDK was initially tested and validated using a seven-class UML model, and has been used to generate the caCORE production system, which includes models with dozens of classes. The deployed system supports access through object-oriented APIs with consistent syntax for retrieval of any type of data object across all classes in the original UML model. The caCORE SDK is currently being used by several development teams, including by participants in the cancer biomedical informatics grid (caBIG) program, to create compatible data services. caBIG compatibility standards are based upon caCORE resources, and thus the caCORE SDK has emerged as a key enabling technology for caBIG. Conclusion: The caCORE SDK substantially lowers the barrier to implementing systems that are syntactically and semantically interoperable by providing workflow and automation tools that standardize and expedite modeling, development, and deployment. It has gained acceptance among developers in the caBIG program, and is expected to provide a common mechanism for creating data service nodes on the data grid that is under development. PMID:16398930
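
    The model-driven workflow this record describes, generating software with a uniform retrieval API from a declarative domain model, can be suggested in miniature. The two-class model and the `get` API below are invented for illustration; they do not reproduce the caCORE SDK's actual UML-to-middleware pipeline.

```python
# Loose sketch of model-driven code generation: data classes with a consistent
# retrieval syntax are produced from a declarative domain model.
MODEL = {
    "Gene":    ["symbol", "taxon"],
    "Protein": ["uniprot_id", "gene"],
}

def make_class(name, attrs):
    def __init__(self, **kw):
        for a in attrs:
            setattr(self, a, kw.get(a))
    def get(self, attr):
        # The same retrieval syntax works across every generated class.
        return getattr(self, attr)
    return type(name, (), {"__init__": __init__, "get": get, "attributes": attrs})

classes = {name: make_class(name, attrs) for name, attrs in MODEL.items()}

g = classes["Gene"](symbol="TP53", taxon="human")
print(g.get("symbol"), classes["Protein"].attributes)  # TP53 ['uniprot_id', 'gene']
```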

  13. A Comparison of Normal Forgetting, Psychopathology, and Information-Processing Models of Reported Amnesia for Recent Sexual Trauma

    PubMed Central

    Mechanic, Mindy B.; Resick, Patricia A.; Griffin, Michael G.

    2010-01-01

    This study assessed memories for sexual trauma in a nontreatment-seeking sample of recent rape victims and considered competing explanations for failed recall. Participants were 92 female rape victims assessed within 2 weeks of the rape; 62 were also assessed 3 months postassault. Memory deficits for parts of the rape were common 2 weeks postassault (37%) but improved over the 3-month window studied (16% still partially amnesic). Hypotheses evaluated competing models of explanation that may account for reported recall deficits. Results are most consistent with information-processing models of traumatic memory. PMID:9874908

  14. Examining the functionality of the DeLone and McLean information system success model as a framework for synthesis in nursing information and communication technology research.

    PubMed

    Booth, Richard G

    2012-06-01

    This review examined studies of information and communication technology used by nurses in clinical practice. Overall, a total of 39 studies were assessed, spanning the period from 1995 to 2008. The impacts of the various health information and communication technologies evaluated by individual studies were synthesized using DeLone and McLean's six-dimensional framework for evaluating information systems success (i.e., System Quality, Information Quality, Service Quality, Use, User Satisfaction, and Net Benefits). Overall, the majority of researchers reported results related to the overall Net Benefits (positive, negative, and indifferent) of the health information and communication technology used by nurses. Attitudes and user satisfaction with technology were also commonly measured attributes. The current iteration of the DeLone and McLean model is effective at synthesizing basic elements of health information and communication technology use by nurses. Nevertheless, the current model lacks the sociotechnical sensitivity to capture deeper nurse-technology relationalities. Limitations and recommendations are provided for researchers considering using the DeLone and McLean model for evaluating health information and communication technology used by nurses.

  15. A Framework to Manage Information Models

    NASA Astrophysics Data System (ADS)

    Hughes, J. S.; King, T.; Crichton, D.; Walker, R.; Roberts, A.; Thieman, J.

    2008-05-01

    The Information Model is the foundation on which an Information System is built. It defines the entities to be processed, their attributes, and the relationships that add meaning. The development and subsequent management of the Information Model is the single most significant factor for the development of a successful information system. A framework of tools has been developed that supports the management of an information model with the rigor typically afforded to software development. This framework provides for evolutionary and collaborative development independent of system implementation choices. Once captured, the modeling information can be exported to common languages for the generation of documentation, application databases, and software code that supports both traditional and semantic web applications. This framework is being successfully used for several science information modeling projects including those for the Planetary Data System (PDS), the International Planetary Data Alliance (IPDA), the National Cancer Institute's Early Detection Research Network (EDRN), and several Consultative Committee for Space Data Systems (CCSDS) projects. The objective of the Space Physics Archive Search and Exchange (SPASE) program is to promote collaboration and coordination of archiving activity for the Space Plasma Physics community and ensure the compatibility of the architectures used for a global distributed system and the individual data centers. Over the past several years, the SPASE data model working group has made great progress in developing the SPASE Data Model and supporting artifacts including a data dictionary, XML Schema, and two ontologies. The authors have captured the SPASE Information Model in this framework. This allows the generation of documentation that presents the SPASE Information Model in object-oriented notation including UML class diagrams and class hierarchies. The modeling information can also be exported to semantic web languages such as OWL and RDF and written to XML Metadata Interchange (XMI) files for import into UML tools.

  16. Building a Values-Informed Mental Model for New Orleans Climate Risk Management.

    PubMed

    Bessette, Douglas L; Mayer, Lauren A; Cwik, Bryan; Vezér, Martin; Keller, Klaus; Lempert, Robert J; Tuana, Nancy

    2017-10-01

    Individuals use values to frame their beliefs and simplify their understanding when confronted with complex and uncertain situations. The high complexity and deep uncertainty involved in climate risk management (CRM) lead to individuals' values likely being coupled to and contributing to their understanding of specific climate risk factors and management strategies. Most mental model approaches, however, which are commonly used to inform our understanding of people's beliefs, ignore values. In response, we developed a "Values-informed Mental Model" research approach, or ViMM, to elicit individuals' values alongside their beliefs and determine which values people use to understand and assess specific climate risk factors and CRM strategies. Our results show that participants consistently used one of three values to frame their understanding of risk factors and CRM strategies in New Orleans: (1) fostering a healthy economy, wealth, and job creation, (2) protecting and promoting healthy ecosystems and biodiversity, and (3) preserving New Orleans' unique culture, traditions, and historically significant neighborhoods. While the first value frame is common in analyses of CRM strategies, the latter two are often ignored, despite their mirroring commonly accepted pillars of sustainability. Other values like distributive justice and fairness were prioritized differently depending on the risk factor or strategy being discussed. These results suggest that the ViMM method could be a critical first step in CRM decision-support processes and may encourage adoption of CRM strategies more in line with stakeholders' values. © 2017 Society for Risk Analysis.

  17. PDS4 - Some Principles for Agile Data Curation

    NASA Astrophysics Data System (ADS)

    Hughes, J. S.; Crichton, D. J.; Hardman, S. H.; Joyner, R.; Algermissen, S.; Padams, J.

    2015-12-01

    PDS4, a research data management and curation system for NASA's Planetary Science Archive, was developed using principles that promote the characteristics of agile development. The result is an efficient system that produces better research data products while using fewer resources (time, effort, and money) and maximizes their usefulness for current and future scientists. The key principle is architectural. The PDS4 information architecture is developed and maintained independent of the infrastructure's process, application and technology architectures. The information architecture is based on an ontology-based information model developed to leverage best practices from standard reference models for digital archives, digital object registries, and metadata registries and to capture domain knowledge from a panel of planetary science domain experts. The information model provides a sharable, stable, and formal set of information requirements for the system and is the primary source for information to configure most system components, including the product registry, search engine, validation and display tools, and production pipelines. Multi-level governance is also provided for the effective management of the informational elements at the common, discipline, and project level. This presentation will describe the development principles, components, and uses of the information model and how an information model-driven architecture exhibits characteristics of agile curation including early delivery, evolutionary development, adaptive planning, continuous improvement, and rapid and flexible response to change.

  18. Online health information seeking among Jewish and Arab adolescents in Israel: results from a national school survey.

    PubMed

    Neumark, Yehuda; Lopez-Quintero, Catalina; Feldman, Becca S; Hirsch Allen, A J; Shtarkshall, Ronny

    2013-01-01

    This study examined patterns and determinants of seeking online health information among a nationally representative sample of 7,028 Jewish and Arab 7th- through 12th-grade students in 158 schools in Israel. Nearly all respondents (98.7%) reported Internet access, and 52.1% reported having sought online health information in the past year. Arab students (63%) were more likely than Jewish students (48%) to seek online health information. Population-group and sex differences in health topics sought online were identified, although fitness/exercise was most common across groups. Multivariate regression models revealed that having sought health information from other sources was the strongest independent correlate of online health information-seeking among Jews (adjusted odds ratio = 8.93, 95% CI [7.70, 10.36]) and Arabs (adjusted odds ratio = 9.77, 95% CI [7.27, 13.13]). Other factors associated with seeking online health information common to both groups were level of trust in online health information, Internet skill level, having discussed health/medical issues with a health care provider in the past year, and school performance. The most common reasons for not seeking online health information were a preference to receive information from a health professional and lack of interest in health/medical issues. The closing of the digital divide between Jews and Arabs represents a move toward equality. Identifying and addressing factors underpinning online health information-seeking behaviors is essential to improve the health status of Israeli youth and reduce health disparities.

  19. Development and Validation of Methodology to Model Flow in Ventilation Systems Commonly Found in Nuclear Facilities. Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strons, Philip; Bailey, James L.; Davis, John

    2016-03-01

    In this work, we apply computational fluid dynamics (CFD) to model airflow and particulate transport. The modeling is then compared with field validation studies to both inform and validate the modeling assumptions. Based on the results of field tests, modeling assumptions and boundary conditions are refined, and the process is repeated until the results are found to be reliable with a high level of confidence.

  20. Accounting for uncertainty in health economic decision models by using model averaging.

    PubMed

    Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D

    2009-04-01

    Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
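
    The weighting scheme referred to above is easy to state concretely: each model receives an Akaike weight proportional to exp(-ΔAIC/2), and predictions (rather than coefficients) are averaged across models. The AIC values and predictions below are invented for illustration; they are not the paper's aneurysm-repair data.

```python
# Sketch of AIC-weight model averaging for predictions.
import numpy as np

aic = np.array([210.3, 212.1, 215.8])   # AIC of each candidate model (illustrative)
preds = np.array([0.42, 0.47, 0.39])    # each model's prediction of the quantity of interest

delta = aic - aic.min()
w = np.exp(-delta / 2)
w /= w.sum()                            # Akaike weights

print("weights:", np.round(w, 3))
print("model-averaged prediction:", float(w @ preds))
```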

  1. The use of network theory to model disparate ship design information

    NASA Astrophysics Data System (ADS)

    Rigterink, Douglas; Piks, Rebecca; Singer, David J.

    2014-06-01

    This paper introduces the use of network theory to model and analyze disparate ship design information. This work will focus on a ship's distributed systems and their intra- and intersystem structures and interactions. The three systems to be analyzed are: a passageway system, an electrical system, and a fire fighting system. These systems will be analyzed individually using common network metrics to glean information regarding their structures and attributes. The systems will also be subjected to community detection algorithms, both separately and as a multiplex network, to compare their similarities, differences, and interactions. Network theory will be shown to be useful in the early design stage due to its simplicity and ability to model any shipboard system.
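
    A small example shows the kind of analysis described: build a graph of a shipboard system, compute common network metrics, and run community detection. The passageway topology below is invented, and the sketch uses the networkx library rather than any tool named in the record.

```python
# Network analysis of a hypothetical shipboard passageway system.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Nodes are compartments, edges are physical connections (invented topology).
G = nx.Graph([("bridge", "p1"), ("p1", "p2"), ("p2", "engine"),
              ("p2", "p3"), ("p3", "galley"), ("p1", "p4"), ("p4", "galley")])

print("degree centrality:", nx.degree_centrality(G))
print("avg shortest path:", nx.average_shortest_path_length(G))
print("communities:", [sorted(c) for c in greedy_modularity_communities(G)])
```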

  2. Health level 7 development framework for medication administration.

    PubMed

    Kim, Hwa Sun; Cho, Hune

    2009-01-01

    We propose the creation of a standard data model for medication administration activities through the development of a clinical document architecture using the Health Level 7 Development Framework process based on an object-oriented analysis and the development method of Health Level 7 Version 3. Medication administration is the most common activity performed by clinical professionals in healthcare settings. A standardized information model and structured hospital information system are necessary to achieve evidence-based clinical activities. A virtual scenario is used to demonstrate the proposed method of administering medication. We used the Health Level 7 Development Framework and other tools to create the clinical document architecture, which allowed us to illustrate each step of the Health Level 7 Development Framework in the administration of medication. We generated an information model of the medication administration process as one clinical activity. It should become a fundamental conceptual model for understanding international-standard methodology by healthcare professionals and nursing practitioners with the objective of modeling healthcare information systems.

  3. Dynamic Measurement Modeling: Using Nonlinear Growth Models to Estimate Student Learning Capacity

    ERIC Educational Resources Information Center

    Dumas, Denis G.; McNeish, Daniel M.

    2017-01-01

    Single-timepoint educational measurement practices are capable of assessing student ability at the time of testing but are not designed to be informative of student capacity for developing in any particular academic domain, despite commonly being used in such a manner. For this reason, such measurement practice systematically underestimates the…

  4. A Test of Two Alternative Cognitive Processing Models: Learning Styles and Dual Coding

    ERIC Educational Resources Information Center

    Cuevas, Joshua; Dawson, Bryan L.

    2018-01-01

    This study tested two cognitive models, learning styles and dual coding, which make contradictory predictions about how learners process and retain visual and auditory information. Learning styles-based instructional practices are common in educational environments despite a questionable research base, while the use of dual coding is less…

  5. Natural brain-information interfaces: Recommending information by relevance inferred from human brain signals

    PubMed Central

    Eugster, Manuel J. A.; Ruotsalo, Tuukka; Spapé, Michiel M.; Barral, Oswald; Ravaja, Niklas; Jacucci, Giulio; Kaski, Samuel

    2016-01-01

    Finding relevant information from large document collections such as the World Wide Web is a common task in our daily lives. Estimation of a user’s interest or search intention is necessary to recommend and retrieve relevant information from these collections. We introduce a brain-information interface used for recommending information by relevance inferred directly from brain signals. In experiments, participants were asked to read Wikipedia documents about a selection of topics while their EEG was recorded. Based on the prediction of word relevance, the individual’s search intent was modeled and successfully used for retrieving new relevant documents from the whole English Wikipedia corpus. The results show that the users’ interests toward digital content can be modeled from the brain signals evoked by reading. The introduced brain-relevance paradigm enables the recommendation of information without any explicit user interaction and may be applied across diverse information-intensive applications. PMID:27929077

  6. Natural brain-information interfaces: Recommending information by relevance inferred from human brain signals

    NASA Astrophysics Data System (ADS)

    Eugster, Manuel J. A.; Ruotsalo, Tuukka; Spapé, Michiel M.; Barral, Oswald; Ravaja, Niklas; Jacucci, Giulio; Kaski, Samuel

    2016-12-01

    Finding relevant information from large document collections such as the World Wide Web is a common task in our daily lives. Estimation of a user’s interest or search intention is necessary to recommend and retrieve relevant information from these collections. We introduce a brain-information interface used for recommending information by relevance inferred directly from brain signals. In experiments, participants were asked to read Wikipedia documents about a selection of topics while their EEG was recorded. Based on the prediction of word relevance, the individual’s search intent was modeled and successfully used for retrieving new relevant documents from the whole English Wikipedia corpus. The results show that the users’ interests toward digital content can be modeled from the brain signals evoked by reading. The introduced brain-relevance paradigm enables the recommendation of information without any explicit user interaction and may be applied across diverse information-intensive applications.

  7. Improvements of Travel-time Tomography Models from Joint Inversion of Multi-channel and Wide-angle Seismic Data

    NASA Astrophysics Data System (ADS)

    Begović, Slaven; Ranero, César; Sallarès, Valentí; Meléndez, Adrià; Grevemeyer, Ingo

    2016-04-01

    Commonly, multichannel seismic reflection (MCS) and wide-angle seismic (WAS) data are modeled and interpreted with different approaches. Conventional travel-time tomography models using solely WAS data lack the resolution to define the model properties and, particularly, the geometry of geologic boundaries (reflectors) with the required accuracy, especially in the shallow, complex upper geological layers. We mitigate this issue by combining these two different data sets, specifically taking advantage of the high redundancy of MCS data, integrated with WAS data into a common inversion scheme to obtain higher-resolution velocity models (Vp), decrease Vp uncertainty, and improve the geometry of reflectors. To do so, we have adapted the tomo2d and tomo3d joint refraction and reflection travel-time tomography codes (Korenaga et al., 2000; Meléndez et al., 2015) to deal with streamer data and MCS acquisition geometries. The scheme results in a joint travel-time tomographic inversion based on integrated travel-time information from refracted and reflected phases from WAS data and reflected phases identified in the MCS common depth point (CDP) or shot gathers. To illustrate the advantages of a common inversion approach, we have compared the modeling results for synthetic data sets using two different travel-time inversion strategies. First, we produced seismic velocity models and reflector geometries following a typical refraction and reflection travel-time tomographic strategy, modeling just WAS data with a typical acquisition geometry (one OBS every 10 km). Second, we performed a joint inversion of the two types of seismic data sets, integrating two coincident data sets, consisting of MCS data collected with an 8 km-long streamer and the WAS data, into a common inversion scheme. Our synthetic results of the joint inversion indicate a 5-10 times smaller ray travel-time misfit in the deeper parts of the model, compared to models obtained using just wide-angle seismic data. As expected, there is an important improvement in the definition of the reflector geometry, which, in turn, improves the accuracy of the velocity retrieval just above and below the reflector. To test the joint inversion approach with real data, we combined WAS data and coincident MCS data acquired in the northern Chile subduction zone into a common inversion scheme to obtain higher-resolution information on the upper plate and the inter-plate boundary.

  8. Use of suprathreshold stochastic resonance in cochlear implant coding

    NASA Astrophysics Data System (ADS)

    Allingham, David; Stocks, Nigel G.; Morse, Robert P.

    2003-05-01

    In this article we discuss the possible use of a novel form of stochastic resonance, termed suprathreshold stochastic resonance (SSR), to improve signal encoding/transmission in cochlear implants. A model, based on the leaky-integrate-and-fire (LIF) neuron, has been developed from physiological data and used to model information flow in a population of cochlear nerve fibers. It is demonstrated that information flow can, in principle, be enhanced by the SSR effect. Furthermore, SSR was found to enhance information transmission for signal parameters that are commonly encountered in cochlear implants. This, therefore, gives hope that SSR may be implemented in cochlear implants to improve speech comprehension.
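
    The SSR principle can be illustrated with a deliberately simplified population of threshold units (not the paper's physiologically fitted LIF model): each unit receives the same suprathreshold signal plus its own independent noise, and the population firing fraction tracks the signal better at a moderate nonzero noise level than at zero noise, where all units respond identically.

```python
# Toy illustration of suprathreshold stochastic resonance in a population code.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
signal = np.sin(2 * np.pi * 5 * t)          # suprathreshold input

def population_code(n_units, noise_sd):
    # Each unit "fires" when signal plus its private noise crosses a
    # common threshold; the population output is the firing fraction.
    noise = rng.normal(0, noise_sd, size=(n_units, t.size))
    spikes = (signal + noise) > 0.0
    return spikes.mean(axis=0)

for sd in (0.0, 0.5, 2.0):
    rate = population_code(64, sd)
    print(f"noise sd {sd}: corr with signal = {np.corrcoef(rate, signal)[0, 1]:.3f}")
```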

  9. Model averaging and muddled multimodel inferences.

    PubMed

    Cade, Brian S

    2015-09-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.

  10. Model averaging and muddled multimodel inferences

    USGS Publications Warehouse

    Cade, Brian S.

    2015-01-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
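
    The standardization step that both copies of this record describe can be sketched numerically. One common formulation of the partial standard deviation (attributed to Bring) scales each predictor's standard deviation by the inverse square root of its variance inflation factor, with a degrees-of-freedom correction; the formula, data, and coefficients below are illustrative assumptions and should be checked against the cited paper before use.

```python
# Sketch: standardize regression coefficients by partial standard deviations
# to make their scaling commensurate under multicollinearity.
import numpy as np

def partial_sd(X):
    """Per column: sd_j * sqrt(1/VIF_j) * sqrt((n-1)/(n-p)) (one formulation)."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ coef
        r2 = 1 - resid.var() / X[:, j].var()   # R^2 of x_j on the other predictors
        vif = 1 / (1 - r2)
        out[j] = X[:, j].std(ddof=1) * np.sqrt(1 / vif) * np.sqrt((n - 1) / (n - p))
    return out

rng = np.random.default_rng(2)
x1 = rng.normal(size=200)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=200)     # deliberately collinear with x1
X = np.column_stack([x1, x2])
beta = np.array([0.5, 0.3])                    # illustrative fitted coefficients
print("standardized estimates:", beta * partial_sd(X))
```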

  11. Standardizing the information architecture for spacecraft operations

    NASA Technical Reports Server (NTRS)

    Easton, C. R.

    1994-01-01

    This paper presents an information architecture developed for the Space Station Freedom as a model from which to derive an information architecture standard for advanced spacecraft. The information architecture provides a way of making information available across a program, and among programs, assuming that the information will be in a variety of local formats, structures and representations. It provides a format that can be expanded to define all of the physical and logical elements that make up a program, add definitions as required, and import definitions from prior programs to a new program. It allows a spacecraft and its control center to work in different representations and formats, with the potential for supporting existing spacecraft from new control centers. It supports a common view of data and control of all spacecraft, regardless of their own internal view of their data and control characteristics, and of their communications standards, protocols and formats. This information architecture is central to standardizing spacecraft operations, in that it provides a basis for information transfer and translation, such that diverse spacecraft can be monitored and controlled in a common way.

  12. Terminology representation guidelines for biomedical ontologies in the semantic web notations.

    PubMed

    Tao, Cui; Pathak, Jyotishman; Solbrig, Harold R; Wei, Wei-Qi; Chute, Christopher G

    2013-02-01

    Terminologies and ontologies are increasingly prevalent in healthcare and biomedicine. However they suffer from inconsistent renderings, distribution formats, and syntax that make applications through common terminologies services challenging. To address the problem, one could posit a shared representation syntax, associated schema, and tags. We identified a set of commonly-used elements in biomedical ontologies and terminologies based on our experience with the Common Terminology Services 2 (CTS2) Specification as well as the Lexical Grid (LexGrid) project. We propose guidelines for precisely such a shared terminology model, and recommend tags assembled from SKOS, OWL, Dublin Core, RDF Schema, and DCMI meta-terms. We divide these guidelines into lexical information (e.g., synonyms and definitions) and semantic information (e.g., hierarchies). The latter we distinguish for use by informal terminologies vs. formal ontologies. We then evaluate the guidelines with a spectrum of widely used terminologies and ontologies to examine how the lexical guidelines are implemented, and whether our proposed guidelines would enhance interoperability. Copyright © 2012 Elsevier Inc. All rights reserved.
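
    A short example shows how the proposed lexical/semantic split maps onto SKOS tags, here using the rdflib library; the concept URIs and labels are invented for illustration and are not taken from any of the evaluated terminologies.

```python
# Lexical vs. semantic terminology information expressed with SKOS tags.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import SKOS, RDF

EX = Namespace("http://example.org/term/")
g = Graph()
g.bind("skos", SKOS)

# Lexical information: preferred label, synonym, definition.
g.add((EX.MyocardialInfarction, RDF.type, SKOS.Concept))
g.add((EX.MyocardialInfarction, SKOS.prefLabel, Literal("myocardial infarction", lang="en")))
g.add((EX.MyocardialInfarction, SKOS.altLabel, Literal("heart attack", lang="en")))
g.add((EX.MyocardialInfarction, SKOS.definition, Literal("Necrosis of the myocardium ...")))

# Semantic information: an informal broader/narrower hierarchy.
g.add((EX.MyocardialInfarction, SKOS.broader, EX.HeartDisease))

print(g.serialize(format="turtle"))
```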

  13. Common inputs in subthreshold membrane potential: The role of quiescent states in neuronal activity

    NASA Astrophysics Data System (ADS)

    Montangie, Lisandro; Montani, Fernando

    2018-06-01

    Experiments in certain regions of the cerebral cortex suggest that the spiking activity of neuronal populations is regulated by common non-Gaussian inputs across neurons. We model these deviations from random-walk processes by feeding q-Gaussian distributions into simple threshold neurons, and investigate the scaling properties in large neural populations. We show that deviations from the Gaussian statistics provide a natural framework to regulate population statistics such as sparsity, entropy, and specific heat. This type of description allows us to provide an adequate strategy to explain the information encoding in the case of low neuronal activity and its possible implications on information transmission.
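
    The common-input mechanism can be caricatured in a few lines: threshold neurons share one input stream plus private noise, and a heavier-tailed common input (here a Student-t draw, used as a convenient stand-in since q-Gaussians with q > 1 belong to the Student-t family) produces more population-wide quiescent bins than a Gaussian one. All parameters are illustrative, not the paper's model.

```python
# Threshold neurons with a shared common input plus private noise; compare
# population quiescence under Gaussian vs. heavier-tailed common inputs.
import numpy as np

rng = np.random.default_rng(3)
N, T, theta = 100, 20_000, 1.0

def population_counts(common):
    private = rng.normal(size=(T, N))
    spikes = (common[:, None] + private) > theta
    return spikes.sum(axis=1)               # population spike count per time bin

gauss_common = rng.normal(size=T)
heavy_common = rng.standard_t(df=3, size=T)  # heavy-tailed stand-in for q-Gaussian input

for name, c in (("gaussian", gauss_common), ("heavy-tailed", heavy_common)):
    k = population_counts(c)
    print(name, "fraction of silent bins:", np.mean(k == 0).round(3))
```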

  14. Determining Smoking Cessation Related Information, Motivation, and Behavioral Skills among Opiate Dependent Smokers in Methadone Treatment

    PubMed Central

    Cooperman, Nina A.; Richter, Kimber P.; Bernstein, Steven L.; Steinberg, Marc L.; Williams, Jill M.

    2015-01-01

    Background: Over 80% of people in methadone treatment smoke cigarettes, and existing smoking cessation interventions have been minimally effective. Objective: To develop an Information-Motivation-Behavioral Skills (IMB) Model of behavior change based smoking cessation intervention for methadone maintained smokers, we examined smoking cessation related information, motivation, and behavioral skills in this population. Methods: Current or former smokers in methadone treatment (n=35) participated in focus groups. Ten methadone clinic counselors participated in an individual interview. A content analysis was conducted using deductive and inductive approaches. Results: Commonly known information, motivation, and behavioral skills factors related to smoking cessation were described. These factors included: the health effects of smoking and treatment options for quitting (information); pregnancy and cost of cigarettes (motivators); and coping with emotions, finding social support, and pharmacotherapy adherence (behavioral skills). Information, motivation, and behavioral skills factors specific to methadone maintained smokers were also described. These factors included: the relationship between quitting smoking and drug relapse (information), the belief that smoking is the same as using drugs (motivator); and coping with methadone clinic culture and applying skills used to quit drugs to quitting smoking (behavioral skills). Information, motivation, and behavioral skills strengths and deficits varied by individual. Conclusions: Methadone maintained smokers could benefit from research on an IMB Model based smoking cessation intervention that is individualized, addresses IMB factors common among all smokers, and also addresses IMB factors unique to this population. PMID:25559697

  15. An information driven strategy to support multidisciplinary design

    NASA Technical Reports Server (NTRS)

    Rangan, Ravi M.; Fulton, Robert E.

    1990-01-01

    The design of complex engineering systems such as aircraft, automobiles, and computers is primarily a cooperative multidisciplinary design process involving interactions between several design agents. The common thread underlying this multidisciplinary design activity is the information exchange between the various groups and disciplines. The integrating component in such environments is the common data and the dependencies that exist between such data. This may be contrasted with classical multidisciplinary analysis problems, where there is coupling between distinct design parameters. For example, they may be expressed as mathematically coupled relationships between aerodynamic and structural interactions in aircraft structures, between thermal and structural interactions in nuclear plants, and between control considerations and structural interactions in flexible robots. These relationships provide analytical based frameworks leading to optimization problem formulations. However, in multidisciplinary design problems, information based interactions become more critical. Many times, the relationships between different design parameters are not amenable to analytical characterization. Under such circumstances, information based interactions will provide the best integration paradigm, i.e., there is a need to model the data entities and their dependencies between design parameters originating from different design agents. The modeling of such data interactions and dependencies forms the basis for integrating the various design agents.

  16. A methodology for the design and evaluation of user interfaces for interactive information systems. Ph.D. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Farooq, Mohammad U.

    1986-01-01

    The definition of proposed research addressing the development and validation of a methodology for the design and evaluation of user interfaces for interactive information systems is given. The major objectives of this research are: the development of a comprehensive, objective, and generalizable methodology for the design and evaluation of user interfaces for information systems; the development of equations and/or analytical models to characterize user behavior and the performance of a designed interface; the design of a prototype system for the development and administration of user interfaces; and the design and use of controlled experiments to support the research and test/validate the proposed methodology. The proposed design methodology views the user interface as a virtual machine composed of three layers: an interactive layer, a dialogue manager layer, and an application interface layer. A command language model of user system interactions is presented because of its inherent simplicity and structured approach based on interaction events. All interaction events have a common structure based on common generic elements necessary for a successful dialogue. It is shown that, using this model, various types of interfaces could be designed and implemented to accommodate various categories of users. The implementation methodology is discussed in terms of how to store and organize the information.

  17. STEP-TRAMM - A modeling interface for simulating localized rainfall induced shallow landslides and debris flow runout pathways

    NASA Astrophysics Data System (ADS)

    Or, D.; von Ruette, J.; Lehmann, P.

    2017-12-01

    Landslides and subsequent debris-flows initiated by rainfall represent a common natural hazard in mountainous regions. We integrated a landslide hydro-mechanical triggering model with a simple model for debris flow runout pathways and developed a graphical user interface (GUI) to represent these natural hazards at catchment scale at any location. The STEP-TRAMM GUI provides process-based estimates of the initiation locations and sizes of landslide patterns based on digital elevation models (SRTM) linked with high-resolution global soil maps (SoilGrids 250 m resolution) and satellite-based information on rainfall statistics for the selected region. In the preprocessing phase the STEP-TRAMM model estimates soil depth distribution to supplement other soil information for delineating key hydrological and mechanical properties relevant to representing local soil failure. We illustrate this publicly available GUI and modeling platform by simulating the effects of deforestation on landslide hazards in several regions and comparing model outcomes with satellite-based information.

  18. Hospital information system: reusability, designing, modelling, recommendations for implementing.

    PubMed

    Huet, B

    1998-01-01

    The aims of this paper are to specify some essential conditions for building reuse models for hospital information systems (HIS) and to present an application for hospital clinical laboratories. Reusability is a general trend in software; however, reuse can involve a greater or lesser share of the design, classes, and programs, so a project involving reusability must be precisely defined. The introduction surveys trends in software, the stakes of reuse models for HIS, and the special use case that a HIS constitutes. The three main parts of this paper are: 1) designing a reuse model (which objects are common to several information systems?); 2) a reuse model for hospital clinical laboratories (a genspec object model is presented for all laboratories: biochemistry, bacteriology, parasitology, pharmacology, ...); and 3) recommendations for generating plug-compatible software components (a reuse model can be implemented as a framework, and concrete factors that increase reusability are presented). In conclusion, reusability is a subtle exercise whose project scope must be defined carefully in advance.

  19. Model-based learning and the contribution of the orbitofrontal cortex to the model-free world.

    PubMed

    McDannald, Michael A; Takahashi, Yuji K; Lopatina, Nina; Pietras, Brad W; Jones, Josh L; Schoenbaum, Geoffrey

    2012-04-01

    Learning is proposed to occur when there is a discrepancy between reward prediction and reward receipt. At least two separate systems are thought to exist: one in which predictions are proposed to be based on model-free or cached values; and another in which predictions are model-based. A basic neural circuit for model-free reinforcement learning has already been described. In the model-free circuit the ventral striatum (VS) is thought to supply a common-currency reward prediction to midbrain dopamine neurons that compute prediction errors and drive learning. In a model-based system, predictions can include more information about an expected reward, such as its sensory attributes or current, unique value. This detailed prediction allows for both behavioral flexibility and learning driven by changes in sensory features of rewards alone. Recent evidence from animal learning and human imaging suggests that, in addition to model-free information, the VS also signals model-based information. Further, there is evidence that the orbitofrontal cortex (OFC) signals model-based information. Here we review these data and suggest that the OFC provides model-based information to this traditional model-free circuitry and offer possibilities as to how this interaction might occur. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
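
    The distinction between cached and model-based predictions can be made concrete with a toy devaluation example: a model-free value is a single cached scalar per action, while a model-based value chains the action's predicted outcome identity with that outcome's current value. The task, names, and values below are invented for illustration, not the experiments reviewed in the record.

```python
# Contrast between model-free (cached) and model-based (lookahead) values.
reward_identity = {"lever_A": "food_pellet", "lever_B": "sugar_water"}
reward_value = {"food_pellet": 1.0, "sugar_water": 1.0}

cached_value = {"lever_A": 1.0, "lever_B": 1.0}   # model-free: one scalar per action

def model_based_value(action):
    # Model-based: look up what the action leads to, then that outcome's
    # *current* value, so changes in value propagate immediately.
    return reward_value[reward_identity[action]]

# Devalue sugar water (e.g., sensory-specific satiety).
reward_value["sugar_water"] = 0.0

print("model-free value of lever_B:", cached_value["lever_B"])        # still 1.0 until re-learned
print("model-based value of lever_B:", model_based_value("lever_B"))  # 0.0 immediately
```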

  20. Common ground: the HealthWeb project as a model for Internet collaboration.

    PubMed Central

    Redman, P M; Kelly, J A; Albright, E D; Anderson, P F; Mulder, C; Schnell, E H

    1997-01-01

    The establishment of the HealthWeb project by twelve health sciences libraries provides a collaborative means of organizing and enhancing access to Internet resources for the international health sciences community. The project is based on the idea that the Internet is common ground for all libraries and that through collaboration a more comprehensive, robust, and long-lasting information product can be maintained. The participants include more than seventy librarians from the health sciences libraries of the Committee on Institutional Cooperation (CIC), an academic consortium of twelve major research universities. The Greater Midwest Region of the National Network of Libraries of Medicine serves as a cosponsor. HealthWeb is an information resource that provides access to evaluated, annotated Internet resources via the World Wide Web. The project vision as well as the progress reported on its implementation may serve as a model for other collaborative Internet projects. PMID:9431420

  1. Original data preprocessor for Femap/Nastran

    NASA Astrophysics Data System (ADS)

    Oanta, Emil M.; Panait, Cornel; Raicu, Alexandra

    2016-12-01

    Automatic data processing and visualization in the finite element analysis of structural problems is a long-running concern in mechanical engineering. The paper presents the `common database' concept, according to which the same information may be accessed from an analytical model as well as from a numerical one. In this way, input data expressed as comma-separated-value (CSV) files are loaded into the Femap/Nastran environment using original API codes, automatically generating the geometry of the model, the loads and the constraints. The original API computer codes are general, making it possible to generate the input data of any model. In the next stages, the user may create the discretization of the model, set the boundary conditions and perform a given analysis. If additional accuracy is needed, the analyst may delete the previous discretizations and, using the same automatically loaded information, perform other discretizations and analyses. Moreover, if new, more accurate information regarding the loads or constraints is acquired, it may be modelled and then implemented in the data-generating program that creates the `common database'. This means that new, more accurate models may be easily generated. Another facility is the ability to control the CSV input files, so that several loading scenarios can be generated in Femap/Nastran. In this way, using original intelligent API instruments, the analyst can focus on accurately modelling the phenomena and on creative aspects, the repetitive and time-consuming activities being performed by the original computer-based instruments. This data-processing technique applies Asimov's principle of `minimum change required / maximum desired response'.
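    A generic sketch of reading such a `common database' CSV into geometry, load, and constraint structures. The record layout below is hypothetical; the paper's actual API codes drive Femap/Nastran directly and are not reproduced here.

        import csv

        def load_common_database(path):
            """Parse a hypothetical CSV layout: record kind, id, then values."""
            nodes, loads, constraints = {}, [], []
            with open(path, newline="") as f:
                for kind, ident, *values in csv.reader(f):
                    if kind == "NODE":
                        nodes[ident] = tuple(float(v) for v in values)   # x, y, z
                    elif kind == "LOAD":
                        loads.append((ident, *map(float, values)))       # node id, components
                    elif kind == "SPC":
                        constraints.append((ident, values))              # node id, fixed DOFs
            return nodes, loads, constraints

    Because analytical and numerical models read the same file, a change in the loading scenario propagates to both without re-entering data by hand.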

  2. Reflecting on the challenges of building a rich interconnected metadata database to describe the experiments of phase six of the coupled climate model intercomparison project (CMIP6) for the Earth System Documentation Project (ES-DOC) and anticipating the opportunities that tooling and services based on rich metadata can provide.

    NASA Astrophysics Data System (ADS)

    Pascoe, C. L.

    2017-12-01

    The Coupled Model Intercomparison Project (CMIP) has coordinated climate model experiments involving multiple international modelling teams since 1995. This has led to a better understanding of past, present, and future climate. The 2017 sixth phase of the CMIP process (CMIP6) consists of a suite of common experiments, and 21 separate CMIP-Endorsed Model Intercomparison Projects (MIPs) making a total of 244 separate experiments. Precise descriptions of the suite of CMIP6 experiments have been captured in a Common Information Model (CIM) database by the Earth System Documentation Project (ES-DOC). The database contains descriptions of forcings, model configuration requirements, ensemble information and citation links, as well as text descriptions and information about the rationale for each experiment. The database was built from statements about the experiments found in the academic literature, the MIP submissions to the World Climate Research Programme (WCRP), WCRP summary tables and correspondence with the principal investigators for each MIP. The database was collated using spreadsheets which are archived in the ES-DOC GitHub repository and then rendered on the ES-DOC website. A diagrammatic view of the workflow of building the database of experiment metadata for CMIP6 is shown in the attached figure. The CIM provides the formalism to collect detailed information from diverse sources in a standard way across all the CMIP6 MIPs. The ES-DOC documentation acts as a unified reference for CMIP6 information to be used both by data producers and consumers. This is especially important given the federated nature of the CMIP6 project. Because the CIM allows forcing constraints and other experiment attributes to be referred to by more than one experiment, we can streamline the process of collecting information from modelling groups about how they set up their models for each experiment. End users of the climate model archive will be able to ask questions enabled by the interconnectedness of the metadata such as "Which MIPs make use of experiment A?" and "Which experiments use forcing constraint B?".
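    A toy illustration of the kind of cross-cutting query that interconnected experiment metadata enables; the dictionary structure and names below are hypothetical, not the ES-DOC CIM schema.

        # Hypothetical experiment metadata keyed by experiment name.
        experiments = {
            "historical": {"mips": ["CMIP"], "forcings": ["GHG", "aerosol"]},
            "ssp585":     {"mips": ["ScenarioMIP"], "forcings": ["GHG"]},
        }

        def mips_using(exp):
            """Which MIPs make use of experiment A?"""
            return experiments[exp]["mips"]

        def experiments_with_forcing(forcing):
            """Which experiments use forcing constraint B?"""
            return [e for e, meta in experiments.items()
                    if forcing in meta["forcings"]]

        print(mips_using("historical"))
        print(experiments_with_forcing("GHG"))

    Because a forcing constraint is a shared object rather than free text repeated per experiment, such queries follow directly from the links in the metadata.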

  3. Leaders Are the Network: Applying the Kotter Model in Shaping Future Information Systems

    DTIC Science & Technology

    2010-01-01

    Discusses how a Combat Identification (CID) server combines Link 16 and FBCB2 data feeds into a common operational picture (COP) (Hinson, 2009); the CID server polls the different feeds, and Figure 3, "FBCB2-Link 16 Information Exchange" (created by the author based on information derived from Hinson), demonstrates the exchange. References include Hinson, Jason, and Summit, Bob, "Combat Identification Server: Blue…"

  4. Accounting for uncertainty in health economic decision models by using model averaging

    PubMed Central

    Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D

    2009-01-01

    Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment. PMID:19381329
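    As a concrete illustration of the weighting schemes the abstract mentions, a minimal sketch of computing Akaike weights from AIC values; BIC-based weights are computed the same way from BIC values. The AICs below are made up.

        import numpy as np

        def akaike_weights(aic):
            """Model-averaging weights from AIC values:
            w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
            where delta_i = AIC_i - min(AIC)."""
            aic = np.asarray(aic, dtype=float)
            delta = aic - aic.min()
            w = np.exp(-0.5 * delta)
            return w / w.sum()

        # Hypothetical AICs for three candidate model structures.
        print(akaike_weights([102.3, 100.1, 105.8]))

    The averaged quantity of interest is then the weight-sum of the model-specific estimates, so structural uncertainty is carried into the decision model rather than ignored.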

  5. Transactions in domain-specific information systems

    NASA Astrophysics Data System (ADS)

    Zacek, Jaroslav

    2017-07-01

    A substantial number of current information system (IS) implementations are based on a transaction approach. In addition, most of the implementations are domain-specific (e.g. accounting IS, resource planning IS). Therefore, we need a generic transaction model to build and verify domain-specific IS. The paper proposes a new transaction model for domain-specific ontologies. This model is based on a value-oriented business process modelling technique. The transaction model is formalized by Petri net theory. The first part of the paper presents common business processes and analyses related to business process modeling. The second part defines the transaction model delimited by the REA enterprise ontology paradigm and introduces the states of the generic transaction model. The generic model proposal is defined and visualized by a Petri net modelling tool. The third part shows an application of the generic transaction model. The last part of the paper concludes and discusses the practical usability of the generic transaction model.
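    A minimal place/transition Petri net with a marking and a firing rule, to make the formalism concrete; the two-step transaction below is invented, not the paper's REA-based model.

        # Marking: number of tokens in each place.
        marking = {"offer": 1, "accepted": 0, "settled": 0}

        # Each transition consumes tokens from pre-places and produces
        # tokens in post-places: name -> (consumes, produces).
        transitions = {
            "accept": ({"offer": 1}, {"accepted": 1}),
            "settle": ({"accepted": 1}, {"settled": 1}),
        }

        def enabled(name):
            pre, _ = transitions[name]
            return all(marking[p] >= n for p, n in pre.items())

        def fire(name):
            assert enabled(name), f"{name} is not enabled"
            pre, post = transitions[name]
            for p, n in pre.items():
                marking[p] -= n
            for p, n in post.items():
                marking[p] += n

        fire("accept")
        fire("settle")
        print(marking)   # {'offer': 0, 'accepted': 0, 'settled': 1}

    Reachable markings correspond to transaction states, which is what makes the formalism useful for verifying that a domain-specific transaction can only progress through legal states.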

  6. Competing for Attention in Social Media under Information Overload Conditions.

    PubMed

    Feng, Ling; Hu, Yanqing; Li, Baowen; Stanley, H Eugene; Havlin, Shlomo; Braunstein, Lidia A

    2015-01-01

    Modern social media are becoming overloaded with information because of the rapidly expanding number of information feeds. We analyze the user-generated content in Sina Weibo, and find evidence that the spread of popular messages often follows a mechanism that differs from the spread of disease, in contrast to common belief. In this mechanism, an individual with more friends needs more repeated exposures to spread the information further. Moreover, our data suggest that for certain messages the chance that an individual shares the message is proportional to the fraction of its neighbours who shared it with him/her, which is a result of competition for attention. We model this process using a fractional susceptible infected recovered (FSIR) model, where the infection probability of a node is proportional to its fraction of infected neighbors. Our findings have dramatic implications for information contagion. For example, using the FSIR model we find that real-world social networks have a finite epidemic threshold, in contrast to the zero threshold in disease epidemic models. This means that when individuals are overloaded with excess information feeds, the information either spreads through the population, if it is above the critical epidemic threshold, or is never widely received.
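    A toy simulation of the FSIR mechanism described above: a susceptible node becomes infected with probability proportional to the fraction of its infected neighbours. The random graph, seed count, and parameters are illustrative stand-ins, not the paper's Sina Weibo data.

        import random
        import networkx as nx

        def fsir_step(G, state, beta, gamma):
            """One synchronous FSIR step on graph G with states S/I/R."""
            new_state = state.copy()
            for v in G:
                if state[v] == "S":
                    nbrs = list(G[v])
                    if nbrs:
                        frac = sum(state[u] == "I" for u in nbrs) / len(nbrs)
                        if random.random() < beta * frac:     # fractional infection
                            new_state[v] = "I"
                elif state[v] == "I" and random.random() < gamma:
                    new_state[v] = "R"
            return new_state

        G = nx.erdos_renyi_graph(1000, 0.01, seed=1)
        state = {v: "S" for v in G}
        for v in random.sample(list(G), 5):                   # seed spreaders
            state[v] = "I"
        for _ in range(30):
            state = fsir_step(G, state, beta=0.9, gamma=0.2)
        print(sum(s != "S" for s in state.values()), "nodes ever reached")

    Sweeping beta in such a simulation exhibits the finite threshold the paper reports: below it the message dies out, above it the message reaches a large fraction of the network.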

  7. Competing for Attention in Social Media under Information Overload Conditions

    PubMed Central

    Feng, Ling; Hu, Yanqing; Li, Baowen; Stanley, H. Eugene; Havlin, Shlomo; Braunstein, Lidia A.

    2015-01-01

    Modern social media are becoming overloaded with information because of the rapidly expanding number of information feeds. We analyze the user-generated content in Sina Weibo, and find evidence that the spread of popular messages often follows a mechanism that differs from the spread of disease, in contrast to common belief. In this mechanism, an individual with more friends needs more repeated exposures to spread the information further. Moreover, our data suggest that for certain messages the chance that an individual shares the message is proportional to the fraction of its neighbours who shared it with him/her, which is a result of competition for attention. We model this process using a fractional susceptible infected recovered (FSIR) model, where the infection probability of a node is proportional to its fraction of infected neighbors. Our findings have dramatic implications for information contagion. For example, using the FSIR model we find that real-world social networks have a finite epidemic threshold, in contrast to the zero threshold in disease epidemic models. This means that when individuals are overloaded with excess information feeds, the information either spreads through the population, if it is above the critical epidemic threshold, or is never widely received. PMID:26161956

  8. Dynamic Binding of Identity and Location Information: A Serial Model of Multiple Identity Tracking

    ERIC Educational Resources Information Center

    Oksama, Lauri; Hyona, Jukka

    2008-01-01

    Tracking of multiple moving objects is commonly assumed to be carried out by a fixed-capacity parallel mechanism. The present study proposes a serial model (MOMIT) to explain performance accuracy in the maintenance of multiple moving objects with distinct identities. A serial refresh mechanism is postulated, which makes recourse to continuous…

  9. Teaching Case: IS Security Requirements Identification from Conceptual Models in Systems Analysis and Design: The Fun & Fitness, Inc. Case

    ERIC Educational Resources Information Center

    Spears, Janine L.; Parrish, James L., Jr.

    2013-01-01

    This teaching case introduces students to a relatively simple approach to identifying and documenting security requirements within conceptual models that are commonly taught in systems analysis and design courses. An introduction to information security is provided, followed by a classroom example of a fictitious company, "Fun &…

  10. Aligning Perceptions of Laboratory Demonstrators' Responsibilities to Inform the Design of a Laboratory Teacher Development Program

    ERIC Educational Resources Information Center

    Flaherty, Aishling; O'Dwyer, Anne; Mannix-McNamara, Patricia; Leahy, J. J.

    2017-01-01

    Throughout countries such as Ireland, the U.K., and Australia, graduate students who fulfill teaching roles in the undergraduate laboratory are often referred to as "laboratory demonstrators". The laboratory demonstrator (LD) model of graduate teaching is similar to the more commonly known graduate teaching assistant (GTA) model that is…

  11. Delineating generalized species boundaries from species distribution data and a species distribution model

    Treesearch

    Matthew P. Peters; Stephen N. Matthews; Louis R. Iverson; Anantha M. Prasad

    2013-01-01

    Species distribution models (SDM) are commonly used to provide information about species ranges or extents, and often are intended to represent the entire area of potential occupancy or suitable habitat in which individuals occur. While SDMs can provide results over various geographic extents, they normally operate within a grid and cannot delimit distinct, smooth...

  12. DNA → RNA: What Do Students Think the Arrow Means?

    ERIC Educational Resources Information Center

    Wright, L. Kate; Fisk, J. Nick; Newman, Dina L.

    2014-01-01

    The central dogma of molecular biology, a model that has remained intact for decades, describes the transfer of genetic information from DNA to protein through an RNA intermediate. While recent work has illustrated many exceptions to the central dogma, it is still a common model used to describe and study the relationship between genes and protein…

  13. Portal of medical data models: information infrastructure for medical research and healthcare.

    PubMed

    Dugas, Martin; Neuhaus, Philipp; Meidt, Alexandra; Doods, Justin; Storck, Michael; Bruland, Philipp; Varghese, Julian

    2016-01-01

    Information systems are a key success factor for medical research and healthcare. Currently, most of these systems apply heterogeneous and proprietary data models, which impede data exchange and integrated data analysis for scientific purposes. Due to the complexity of medical terminology, the overall number of medical data models is very high. At present, the vast majority of these models are not available to the scientific community. The objective of the Portal of Medical Data Models (MDM, https://medical-data-models.org) is to foster sharing of medical data models. MDM is a registered European information infrastructure. It provides a multilingual platform for exchange and discussion of data models in medicine, both for medical research and healthcare. The system is developed in collaboration with the University Library of Münster to ensure sustainability. A web front-end enables users to search, view, download and discuss data models. Eleven different export formats are available (ODM, PDF, CDA, CSV, MACRO-XML, REDCap, SQL, SPSS, ADL, R, XLSX). MDM contents were analysed with descriptive statistics. MDM contains 4387 current versions of data models (in total 10,963 versions). 2475 of these models belong to oncology trials. The most common keyword (n = 3826) is 'Clinical Trial'; most frequent diseases are breast cancer, leukemia, lung and colorectal neoplasms. Most common languages of data elements are English (n = 328,557) and German (n = 68,738). Semantic annotations (UMLS codes) are available for 108,412 data items, 2453 item groups and 35,361 code list items. Overall 335,087 UMLS codes are assigned with 21,847 unique codes. Few UMLS codes are used several thousand times, but there is a long tail of rarely used codes in the frequency distribution. Expected benefits of the MDM portal are improved and accelerated design of medical data models by sharing best practice, more standardised data models with semantic annotation and better information exchange between information systems, in particular Electronic Data Capture (EDC) and Electronic Health Records (EHR) systems. Contents of the MDM portal need to be further expanded to reach broad coverage of all relevant medical domains. Database URL: https://medical-data-models.org. © The Author(s) 2016. Published by Oxford University Press.

  14. Uncovering multiple pathways to substance use: a comparison of methods for identifying population subgroups.

    PubMed

    Dierker, Lisa; Rose, Jennifer; Tan, Xianming; Li, Runze

    2010-12-01

    This paper describes and compares a selection of available modeling techniques for identifying homogeneous population subgroups in the interest of informing targeted substance use intervention. We present a nontechnical review of the common and unique features of three methods: (a) trajectory analysis, (b) functional hierarchical linear modeling (FHLM), and (c) decision tree methods. Differences among the techniques are described, including required data features, strengths and limitations in terms of the flexibility with which outcomes and predictors can be modeled, and the potential of each technique for helping to inform the selection of targets and timing of substance intervention programs.

  15. Reviewing innovative Earth observation solutions for filling science-policy gaps in hydrology

    NASA Astrophysics Data System (ADS)

    Lehmann, Anthony; Giuliani, Gregory; Ray, Nicolas; Rahman, Kazi; Abbaspour, Karim C.; Nativi, Stefano; Craglia, Massimo; Cripe, Douglas; Quevauviller, Philippe; Beniston, Martin

    2014-10-01

    Improved data sharing is needed for hydrological modeling and water management that require better integration of data, information and models. Technological advances in Earth observation and Web technologies have allowed the development of Spatial Data Infrastructures (SDIs) for improved data sharing at various scales. International initiatives catalyze data sharing by promoting interoperability standards to maximize the use of data and by supporting easy access to and utilization of geospatial data. A series of recent European projects are contributing to the promotion of innovative Earth observation solutions and the uptake of scientific outcomes in policy. Several success stories involving different hydrologists' communities can be reported around the world. Gaps still exist in hydrological, agricultural, meteorological and climatological data access because of various issues. While many sources of data exist at all scales, it remains difficult and time-consuming to assemble hydrological information for most projects. Furthermore, data and sharing formats remain very heterogeneous. Improvements require implementing/endorsing some commonly agreed standards and documenting data with adequate metadata. The brokering approach allows binding heterogeneous resources published by different data providers and adapting them to tools and interfaces commonly used by consumers of these resources. The challenge is to provide decision-makers with reliable information, based on integrated data and tools derived from both Earth observations and scientific models. Successful SDIs therefore rely on several factors: a shared vision between all participants, the necessity to solve a common problem, adequate data policies, incentives, and sufficient resources. New data streams from remote sensing or crowd sourcing are also producing valuable information to improve our understanding of the water cycle, while field sensors are developing rapidly and becoming less costly. More recent data standards are enhancing interoperability between hydrology and other scientific disciplines, while solutions exist to communicate uncertainty of data and models, which is an essential pre-requisite for decision-making. Distributed computing infrastructures can handle complex and large hydrological data and models, while Web Processing Services bring the flexibility to develop and execute simple to complex workflows over the Internet. The need for capacity building at human, infrastructure and institutional levels is also a major driver for reinforcing the commitment to SDI concepts.

  16. Biomedical data integration - capturing similarities while preserving disparities.

    PubMed

    Bianchi, Stefano; Burla, Anna; Conti, Costanza; Farkash, Ariel; Kent, Carmel; Maman, Yonatan; Shabo, Amnon

    2009-01-01

    One of the challenges of healthcare data processing, analysis and warehousing is the integration of data gathered from disparate and diverse data sources. Promoting the adoption of worldwide-accepted information standards, along with common terminologies and the use of technologies derived from semantic web representation, is a suitable path to achieve that. To that end, the HL7 V3 Reference Information Model (RIM) [1] has been used as the underlying information model, coupled with the Web Ontology Language (OWL) [2] as the semantic data integration technology. In this paper we depict a biomedical data integration process and demonstrate how it was used for integrating various data sources, containing clinical, environmental and genomic data, within Hypergenes, a European Commission funded project exploring the Essential Hypertension [3] disease model.
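    A minimal sketch of the general pattern described here: expressing instance data against a RIM-like vocabulary as RDF triples, so that ontology tooling can integrate it with other sources. The namespaces below are placeholders, not the project's actual HL7 RIM OWL ontology.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF

        # Hypothetical namespaces standing in for a RIM-derived vocabulary.
        RIM = Namespace("http://example.org/rim#")
        EX = Namespace("http://example.org/patient/")

        g = Graph()
        g.add((EX.obs1, RDF.type, RIM.Observation))            # a RIM act class
        g.add((EX.obs1, RIM.code, Literal("blood-pressure")))
        g.add((EX.obs1, RIM.value, Literal("140/90")))

        print(g.serialize(format="turtle"))

    Once clinical, environmental and genomic facts share one triple representation, integration reduces to merging graphs and aligning their vocabularies.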

  17. Incorporating advanced language models into the P300 speller using particle filtering

    NASA Astrophysics Data System (ADS)

    Speier, W.; Arnold, C. W.; Deshpande, A.; Knall, J.; Pouratian, N.

    2015-08-01

    Objective. The P300 speller is a common brain-computer interface (BCI) application designed to communicate language by detecting event related potentials in a subject’s electroencephalogram signal. Information about the structure of natural language can be valuable for BCI communication, but attempts to use this information have thus far been limited to rudimentary n-gram models. While more sophisticated language models are prevalent in natural language processing literature, current BCI analysis methods based on dynamic programming cannot handle their complexity. Approach. Sampling methods can overcome this complexity by estimating the posterior distribution without searching the entire state space of the model. In this study, we implement sequential importance resampling, a commonly used particle filtering (PF) algorithm, to integrate a probabilistic automaton language model. Main result. This method was first evaluated offline on a dataset of 15 healthy subjects, which showed significant increases in speed and accuracy when compared to standard classification methods as well as a recently published approach using a hidden Markov model (HMM). An online pilot study verified these results as the average speed and accuracy achieved using the PF method was significantly higher than that using the HMM method. Significance. These findings strongly support the integration of domain-specific knowledge into BCI classification to improve system performance.
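    A generic sequential-importance-resampling sketch on a toy Gaussian random-walk model, to illustrate the algorithm named here. The paper's actual implementation integrates a probabilistic automaton language model with EEG classifier scores, which is not reproduced; all values below are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 500
        particles = rng.normal(0.0, 1.0, N)       # hypothesised hidden states
        weights = np.full(N, 1.0 / N)

        def sir_step(particles, weights, obs, q=0.5, r=1.0):
            """Propagate particles, reweight by the observation likelihood,
            and resample when the weights degenerate."""
            particles = particles + rng.normal(0.0, q, particles.size)   # transition
            weights = weights * np.exp(-0.5 * (obs - particles) ** 2 / r**2)
            weights /= weights.sum()
            ess = 1.0 / np.sum(weights**2)                               # effective sample size
            if ess < particles.size / 2:
                idx = rng.choice(particles.size, particles.size, p=weights)
                particles = particles[idx]
                weights = np.full(particles.size, 1.0 / particles.size)
            return particles, weights

        for obs in [0.2, 0.4, 0.9, 1.3]:           # toy observation sequence
            particles, weights = sir_step(particles, weights, obs)
        print(np.sum(particles * weights))          # posterior mean estimate

    The appeal for BCI classification is exactly what the abstract states: the posterior is estimated from samples, so the transition model can be as complex as a probabilistic automaton without an exhaustive search of its state space.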

  18. Applying Multivariate Discrete Distributions to Genetically Informative Count Data.

    PubMed

    Kirkpatrick, Robert M; Neale, Michael C

    2016-03-01

    We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of two discrete models. The new methods are implemented using R and OpenMx and are freely available.

  19. Towards a Framework for Modeling Space Systems Architectures

    NASA Technical Reports Server (NTRS)

    Shames, Peter; Skipper, Joseph

    2006-01-01

    Topics covered include: 1) Statement of the problem: a) Space system architecture is complex; b) Existing terrestrial approaches must be adapted for space; c) Need a common architecture methodology and information model; d) Need appropriate set of viewpoints. 2) Requirements on a space systems model. 3) Model Based Engineering and Design (MBED) project: a) Evaluated different methods; b) Adapted and utilized RASDS & RM-ODP; c) Identified useful set of viewpoints; d) Did actual model exchanges among selected subset of tools. 4) Lessons learned & future vision.

  20. Designing scalable product families by the radial basis function-high-dimensional model representation metamodelling technique

    NASA Astrophysics Data System (ADS)

    Pirmoradi, Zhila; Haji Hajikolaei, Kambiz; Wang, G. Gary

    2015-10-01

    Product family design is cost-efficient for achieving the best trade-off between commonalization and diversification. However, for computationally intensive design functions which are viewed as black boxes, the family design would be challenging. A two-stage platform configuration method with generalized commonality is proposed for a scale-based family with unknown platform configuration. Unconventional sensitivity analysis and information on variation in the individual variants' optimal design are used for platform configuration design. Metamodelling is employed to provide the sensitivity and variable correlation information, leading to significant savings in function calls. A family of universal electric motors is designed for product performance and the efficiency of this method is studied. The impact of the employed parameters is also analysed. Then, the proposed method is modified for obtaining higher commonality. The proposed method is shown to yield design solutions with better objective function values, allowable performance loss and higher commonality than the previously developed methods in the literature.
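    A plain Gaussian RBF interpolation sketch, the building block of the RBF-HDMR metamodel named above (the HDMR decomposition itself is omitted); the test function below stands in for an expensive black-box design function.

        import numpy as np

        def rbf_fit(X, y, eps=1.0):
            """Interpolation weights for a Gaussian RBF metamodel."""
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
            return np.linalg.solve(np.exp(-eps * d2), y)

        def rbf_predict(X_train, w, X_new, eps=1.0):
            d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
            return np.exp(-eps * d2) @ w

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, (30, 2))               # sampled design points
        y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2        # cheap stand-in for the black box
        w = rbf_fit(X, y)
        print(rbf_predict(X, w, np.array([[0.2, 0.3]])))

    Once fitted from a modest number of expensive evaluations, the metamodel supplies the sensitivity and variable-correlation information cheaply, which is where the reported savings in function calls come from.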

  1. Integrating Empirical-Modeling Approaches to Improve Understanding of Terrestrial Ecology Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCarthy, Heather; Luo, Yiqi; Wullschleger, Stan D

    Recent decades have seen tremendous increases in the quantity of empirical ecological data collected by individual investigators, as well as through research networks such as FLUXNET (Baldocchi et al., 2001). At the same time, advances in computer technology have facilitated the development and implementation of large and complex land surface and ecological process models. Separately, each of these information streams provides useful, but imperfect information about ecosystems. To develop the best scientific understanding of ecological processes, and most accurately predict how ecosystems may cope with global change, integration of empirical and modeling approaches is necessary. However, true integration - in which models inform empirical research, which in turn informs models (Fig. 1) - is not yet common in ecological research (Luo et al., 2011). The goal of this workshop, sponsored by the Department of Energy, Office of Science, Biological and Environmental Research (BER) program, was to bring together members of the empirical and modeling communities to exchange ideas and discuss scientific practices for increasing empirical-model integration, and to explore infrastructure and/or virtual network needs for institutionalizing empirical-model integration (Yiqi Luo, University of Oklahoma, Norman, OK, USA). The workshop included presentations and small group discussions that covered topics ranging from model-assisted experimental design to data driven modeling (e.g. benchmarking and data assimilation) to infrastructure needs for empirical-model integration. Ultimately, three central questions emerged. How can models be used to inform experiments and observations? How can experimental and observational results be used to inform models? What are effective strategies to promote empirical-model integration?

  2. Web information retrieval based on ontology

    NASA Astrophysics Data System (ADS)

    Zhang, Jian

    2013-03-01

    The purpose of Information Retrieval (IR) is to find a set of documents that are relevant to a specific information need of a user. The traditional information retrieval model commonly used in commercial search engines is based on keyword indexing and Boolean logic queries. One big drawback of traditional information retrieval is that it typically retrieves information without an explicitly defined domain of interest from the users, so a lot of irrelevant information is returned, burdening the user with picking useful answers out of the irrelevant results. In order to tackle this issue, many semantic web information retrieval models have been proposed recently. The main advantage of the Semantic Web is to enhance search mechanisms with the use of ontology mechanisms. In this paper, we present our approach to personalizing a web search engine based on ontology. In addition, key techniques are also discussed. Compared to previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.
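    A toy taxonomy-based similarity measure (inverse path length through the lowest common ancestor), to make the notion of semantic similarity concrete; the mini-ontology below is invented, and real ontology-based IR uses far richer measures.

        # Child -> parent edges of a tiny invented taxonomy.
        parent = {"poodle": "dog", "dog": "mammal", "cat": "mammal", "mammal": "animal"}

        def ancestors(c):
            out = [c]
            while c in parent:
                c = parent[c]
                out.append(c)
            return out

        def similarity(a, b):
            pa, pb = ancestors(a), ancestors(b)
            common = next(x for x in pa if x in pb)   # lowest common ancestor
            return 1.0 / (1 + pa.index(common) + pb.index(common))

        print(similarity("dog", "cat"), similarity("poodle", "cat"))

    Closer concepts score higher, so a query about "dog" can rank a document about "poodle" above one about "animal" even though neither contains the literal keyword.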

  3. Crisis Management Systems: A Case Study for Aspect-Oriented Modeling

    NASA Astrophysics Data System (ADS)

    Kienzle, Jörg; Guelfi, Nicolas; Mustafiz, Sadaf

    The intent of this document is to define a common case study for the aspect-oriented modeling research community. The domain of the case study is crisis management systems, i.e., systems that help in identifying, assessing, and handling a crisis situation by orchestrating the communication between all parties involved in handling the crisis, by allocating and managing resources, and by providing access to relevant crisis-related information to authorized users. This document contains informal requirements of crisis management systems (CMSs) in general, a feature model for a CMS product line, use case models for a car crash CMS (CCCMS), a domain model for the CCCMS, an informal physical architecture description of the CCCMS, as well as some design models of a possible object-oriented implementation of parts of the CCCMS backend. AOM researchers who want to demonstrate the power of their AOM approach or technique can hence apply the approach at the most appropriate level of abstraction.

  4. Conceptual data modeling of wildlife response indicators to ecosystem change in the Arctic

    USGS Publications Warehouse

    Walworth, Dennis; Pearce, John M.

    2015-08-06

    Large research studies are often challenged to effectively expose and document the types of information being collected and the reasons for data collection across what are often a diverse cadre of investigators of differing disciplines. We applied concepts from the field of information or data modeling to the U.S. Geological Survey (USGS) Changing Arctic Ecosystems (CAE) initiative to prototype an application of information modeling. The USGS CAE initiative is collecting information from marine and terrestrial environments in Alaska to identify and understand the links between rapid physical changes in the Arctic and response of wildlife populations to these ecosystem changes. An associated need is to understand how data collection strategies are informing the overall science initiative and facilitating communication of those strategies to a wide audience. We explored the use of conceptual data modeling to provide a method by which to document, describe, and visually communicate both enterprise and study level data; provide a simple means to analyze commonalities and differences in data acquisition strategies between studies; and provide a tool for discussing those strategies among researchers and managers.

  5. The "SIMCLAS" Model: Simultaneous Analysis of Coupled Binary Data Matrices with Noise Heterogeneity between and within Data Blocks

    ERIC Educational Resources Information Center

    Wilderjans, Tom F.; Ceulemans, E.; Van Mechelen, I.

    2012-01-01

    In many research domains different pieces of information are collected regarding the same set of objects. Each piece of information constitutes a data block, and all these (coupled) blocks have the object mode in common. When analyzing such data, an important aim is to obtain an overall picture of the structure underlying the whole set of coupled…

  6. Particle Filtering Methods for Incorporating Intelligence Updates

    DTIC Science & Technology

    2017-03-01

    methodology for incorporating intelligence updates into a stochastic model for target tracking. Due to the non-parametric assumptions of the PF...samples are taken with replacement from the remaining non-zero weighted particles at each iteration. With this methodology, a zero-weighted particle is...incorporation of information updates. A common method for incorporating information updates is Kalman filtering. However, given the probable nonlinear and non

  7. Standardized Representation of Clinical Study Data Dictionaries with CIMI Archetypes

    PubMed Central

    Sharma, Deepak K.; Solbrig, Harold R.; Prud’hommeaux, Eric; Pathak, Jyotishman; Jiang, Guoqian

    2016-01-01

    Researchers commonly use a tabular format to describe and represent clinical study data. The lack of standardization of data dictionaries' metadata elements presents challenges for their harmonization for similar studies and impedes interoperability outside the local context. We propose that representing data dictionaries in the form of standardized archetypes can help to overcome this problem. The Archetype Modeling Language (AML), as developed by the Clinical Information Modeling Initiative (CIMI), can serve as a common format for the representation of data dictionary models. We mapped three different data dictionaries (identified from dbGaP, PheKB and TCGA) onto AML archetypes by aligning dictionary variable definitions with the AML archetype elements. The near-complete alignment of the data dictionaries helped map them into valid AML models that captured all data dictionary model metadata. The outcome of the work would help subject matter experts harmonize data models for quality, semantic interoperability and better downstream data integration. PMID:28269909
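    A minimal sketch of aligning one tabular data-dictionary row with archetype-style element metadata; the field names and the type mapping below are hypothetical illustrations, not actual AML/CIMI syntax.

        # One row of a hypothetical tabular data dictionary.
        dictionary_row = {
            "VARNAME": "AGE_AT_DX",
            "TYPE": "integer",
            "UNITS": "years",
            "DESCRIPTION": "Age at diagnosis",
        }

        def to_archetype_element(row):
            """Map dictionary fields onto archetype-like element metadata."""
            return {
                "id": row["VARNAME"].lower().replace("_", "-"),
                "rm_type": {"integer": "DV_COUNT"}.get(row["TYPE"], "DV_TEXT"),
                "units": row.get("UNITS"),
                "text": row["DESCRIPTION"],
            }

        print(to_archetype_element(dictionary_row))

    The point of the standardized target is that two studies describing "age at diagnosis" differently in their tables end up with comparable, machine-alignable elements.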

  8. Standardized Representation of Clinical Study Data Dictionaries with CIMI Archetypes.

    PubMed

    Sharma, Deepak K; Solbrig, Harold R; Prud'hommeaux, Eric; Pathak, Jyotishman; Jiang, Guoqian

    2016-01-01

    Researchers commonly use a tabular format to describe and represent clinical study data. The lack of standardization of data dictionaries' metadata elements presents challenges for their harmonization for similar studies and impedes interoperability outside the local context. We propose that representing data dictionaries in the form of standardized archetypes can help to overcome this problem. The Archetype Modeling Language (AML), as developed by the Clinical Information Modeling Initiative (CIMI), can serve as a common format for the representation of data dictionary models. We mapped three different data dictionaries (identified from dbGaP, PheKB and TCGA) onto AML archetypes by aligning dictionary variable definitions with the AML archetype elements. The near-complete alignment of the data dictionaries helped map them into valid AML models that captured all data dictionary model metadata. The outcome of the work would help subject matter experts harmonize data models for quality, semantic interoperability and better downstream data integration.

  9. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    PubMed

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with an asymmetric distribution for the model errors. To deal with missingness, we employ an informative missing data model. Joint models are developed that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process, and the missing data process. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.

  10. Evaluation of confidence intervals for a steady-state leaky aquifer model

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    1999-01-01

    The fact that dependent variables of groundwater models are generally nonlinear functions of model parameters is shown to be a potentially significant factor in calculating accurate confidence intervals for both model parameters and functions of the parameters, such as the values of dependent variables calculated by the model. The Lagrangian method of Vecchia and Cooley [Vecchia, A.V. and Cooley, R.L., Water Resources Research, 1987, 23(7), 1237-1250] was used to calculate nonlinear Scheffe-type confidence intervals for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km2 of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct. Results show that nonlinear effects can cause the nonlinear intervals to be asymmetric and either larger or smaller than the linear approximations. Prior information on transmissivities helps reduce the size of the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.

  11. Combining the Generic Entity-Attribute-Value Model and Terminological Models into a Common Ontology to Enable Data Integration and Decision Support.

    PubMed

    Bouaud, Jacques; Guézennec, Gilles; Séroussi, Brigitte

    2018-01-01

    The integration of clinical information models and termino-ontological models into a unique ontological framework is highly desirable, for it facilitates data integration and management using the same formal mechanisms for both data concepts and information model components. This is particularly true for knowledge-based decision support tools that aim to take advantage of all facets of semantic web technologies in merging ontological reasoning, concept classification, and rule-based inferences. We present an ontology template that combines generic data model components with (parts of) existing termino-ontological resources. The approach is developed for the guideline-based decision support module on breast cancer management within the DESIREE European project. The approach is based on the entity-attribute-value model and could be extended to other domains.
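    A minimal sketch of the generic entity-attribute-value pattern referenced here, in plain Python rather than OWL; the entities, attributes and values below are invented for illustration.

        from collections import defaultdict

        class EAVStore:
            """Generic EAV store: every fact is (entity, attribute, value)."""
            def __init__(self):
                self._facts = defaultdict(dict)   # entity -> {attribute: value}

            def assert_fact(self, entity, attribute, value):
                self._facts[entity][attribute] = value

            def value(self, entity, attribute):
                return self._facts[entity].get(attribute)

            def entities_with(self, attribute, value):
                return [e for e, avs in self._facts.items()
                        if avs.get(attribute) == value]

        store = EAVStore()
        store.assert_fact("patient-17", "diagnosis", "essential-hypertension")
        store.assert_fact("patient-17", "age", 54)
        print(store.entities_with("diagnosis", "essential-hypertension"))

    Because attributes are data rather than schema, new clinical concepts need no schema change; binding those attribute names to ontology concepts is what the combined framework adds.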

  12. Multiscale information modelling for heart morphogenesis

    NASA Astrophysics Data System (ADS)

    Abdulla, T.; Imms, R.; Schleich, J. M.; Summers, R.

    2010-07-01

    Science is made feasible by the adoption of common systems of units. As research has become more data intensive, especially in the biomedical domain, it requires the adoption of a common system of information models, to make explicit the relationship between one set of data and another, regardless of format. This is being realised through the OBO Foundry to develop a suite of reference ontologies, and NCBO Bioportal to provide services to integrate biomedical resources and functionality to visualise and create mappings between ontology terms. Biomedical experts tend to be focused at one level of spatial scale, be it biochemistry, cell biology, or anatomy. Likewise, the ontologies they use tend to be focused at a particular level of scale. There is increasing interest in a multiscale systems approach, which attempts to integrate between different levels of scale to gain understanding of emergent effects. This is a return to physiological medicine with a computational emphasis, exemplified by the worldwide Physiome initiative, and the European Union funded Network of Excellence in the Virtual Physiological Human. However, little work has been done on how information modelling itself may be tailored to a multiscale systems approach. We demonstrate how this can be done for the complex process of heart morphogenesis, which requires multiscale understanding in both time and spatial domains. Such an effort enables the integration of multiscale metrology.

  13. NADM Conceptual Model 1.0 -- A Conceptual Model for Geologic Map Information

    USGS Publications Warehouse

    ,

    2004-01-01

    Executive Summary -- The NADM Data Model Design Team was established in 1999 by the North American Geologic Map Data Model Steering Committee (NADMSC) with the purpose of drafting a geologic map data model for consideration as a standard for developing interoperable geologic map-centered databases by state, provincial, and federal geological surveys. The model is designed to be a technology-neutral conceptual model that can form the basis for a web-based interchange format using evolving information technology (e.g., XML, RDF, OWL), and guide implementation of geoscience databases in a common conceptual framework. The intended purpose is to allow geologic information sharing between geologic map data providers and users, independent of local information system implementation. The model emphasizes geoscience concepts and relationships related to information presented on geologic maps. Design has been guided by an informal requirements analysis, documentation of existing databases, technology developments, and other standardization efforts in the geoscience and computer-science communities. A key aspect of the model is the notion that representation of the conceptual framework (ontology) that underlies geologic map data must be part of the model, because this framework changes with time and understanding, and varies between information providers. The top level of the model distinguishes geologic concepts, geologic representation concepts, and metadata. The geologic representation part of the model provides a framework for representing the ontology that underlies geologic map data through a controlled vocabulary, and for establishing the relationships between this vocabulary and a geologic map visualization or portrayal. Top-level geologic classes in the model are Earth material (substance), geologic unit (parts of the Earth), geologic age, geologic structure, fossil, geologic process, geologic relation, and geologic event.

  14. Language-Independent and Language-Specific Aspects of Early Literacy: An Evaluation of the Common Underlying Proficiency Model.

    PubMed

    Goodrich, J Marc; Lonigan, Christopher J

    2017-08-01

    According to the common underlying proficiency model (Cummins, 1981), as children acquire academic knowledge and skills in their first language, they also acquire language-independent information about those skills that can be applied when learning a second language. The purpose of this study was to evaluate the relevance of the common underlying proficiency model for the early literacy skills of Spanish-speaking language-minority children using confirmatory factor analysis. Eight hundred fifty-eight Spanish-speaking language-minority preschoolers (mean age = 60.83 months, 50.2% female) participated in this study. Results indicated that bifactor models that consisted of language-independent as well as language-specific early literacy factors provided the best fits to the data for children's phonological awareness and print knowledge skills. Correlated factors models that only included skills specific to Spanish and English provided the best fits to the data for children's oral language skills. Children's language-independent early literacy skills were significantly related across constructs and to language-specific aspects of early literacy. Language-specific aspects of early literacy skills were significantly related within but not across languages. These findings suggest that language-minority preschoolers have a common underlying proficiency for code-related skills but not language-related skills that may allow them to transfer knowledge across languages.

  15. Tree Biomass Estimation of Chinese fir (Cunninghamia lanceolata) Based on Bayesian Method

    PubMed Central

    Zhang, Jianguo

    2013-01-01

    Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) is the most important conifer species for timber production with a huge distribution area in southern China. Accurate estimation of biomass is required for accounting and monitoring Chinese forest carbon stocking. In the study, the allometric equation W = a(D^2H)^b was used to analyze tree biomass of Chinese fir. The common methods for estimating the allometric model have taken the classical approach based on the frequency interpretation of probability. However, many different biotic and abiotic factors introduce variability in the Chinese fir biomass model, suggesting that the parameters of the biomass model are better represented by probability distributions rather than the fixed values of the classical method. To deal with the problem, a Bayesian method was used for estimating the Chinese fir biomass model. In the Bayesian framework, two priors were introduced: non-informative priors and informative priors. For the informative priors, 32 biomass equations of Chinese fir were collected from the published literature. The parameter distributions from the published literature were regarded as prior distributions in the Bayesian model for estimating Chinese fir biomass. The Bayesian method with informative priors performed better than the one with non-informative priors and the classical method, providing a reasonable approach for estimating Chinese fir biomass. PMID:24278198

  16. Tree biomass estimation of Chinese fir (Cunninghamia lanceolata) based on Bayesian method.

    PubMed

    Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo

    2013-01-01

    Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) is the most important conifer species for timber production with a huge distribution area in southern China. Accurate estimation of biomass is required for accounting and monitoring Chinese forest carbon stocking. In the study, the allometric equation W = a(D^2H)^b was used to analyze tree biomass of Chinese fir. The common methods for estimating the allometric model have taken the classical approach based on the frequency interpretation of probability. However, many different biotic and abiotic factors introduce variability in the Chinese fir biomass model, suggesting that the parameters of the biomass model are better represented by probability distributions rather than the fixed values of the classical method. To deal with the problem, a Bayesian method was used for estimating the Chinese fir biomass model. In the Bayesian framework, two priors were introduced: non-informative priors and informative priors. For the informative priors, 32 biomass equations of Chinese fir were collected from the published literature. The parameter distributions from the published literature were regarded as prior distributions in the Bayesian model for estimating Chinese fir biomass. The Bayesian method with informative priors performed better than the one with non-informative priors and the classical method, providing a reasonable approach for estimating Chinese fir biomass.
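    A toy random-walk Metropolis sketch of fitting the log-transformed allometry ln W = ln a + b ln(D^2H) with an informative prior on (ln a, b). All data, prior values, and the fixed error variance are synthetic illustrations, not the paper's 32 published equations.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic data: ln(W) = ln(a) + b * ln(D^2 H) + noise.
        x = np.log(rng.uniform(50, 5000, 40))            # ln(D^2 H)
        y = -3.0 + 0.95 * x + rng.normal(0, 0.2, 40)     # "observed" ln biomass

        def log_post(lna, b, sigma=0.2):
            loglik = -0.5 * np.sum((y - lna - b * x) ** 2) / sigma**2
            # Informative priors, e.g. pooled from published equations (made-up values).
            logprior = (-0.5 * (lna + 3.0) ** 2 / 0.5**2
                        - 0.5 * (b - 0.9) ** 2 / 0.1**2)
            return loglik + logprior

        theta = np.array([-2.0, 0.8])                    # starting point
        samples = []
        for _ in range(20000):                           # random-walk Metropolis
            prop = theta + rng.normal(0, 0.02, 2)
            if np.log(rng.uniform()) < log_post(*prop) - log_post(*theta):
                theta = prop
            samples.append(theta)
        print(np.mean(samples[5000:], axis=0))           # posterior means of ln(a), b

    The informative prior plays exactly the role described in the abstract: with limited trees, the posterior is pulled toward values consistent with previously published equations instead of relying on the sample alone.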

  17. Hierarchical models and Bayesian analysis of bird survey information

    USGS Publications Warehouse

    Sauer, J.R.; Link, W.A.; Royle, J. Andrew; Ralph, C. John; Rich, Terrell D.

    2005-01-01

    Summary of bird survey information is a critical component of conservation activities, but often our summaries rely on statistical methods that do not accommodate the limitations of the information. Prioritization of species requires ranking and analysis of species by magnitude of population trend, but often magnitude of trend is a misleading measure of actual decline when trend is poorly estimated. Aggregation of population information among regions is also complicated by varying quality of estimates among regions. Hierarchical models provide a reasonable means of accommodating concerns about aggregation and ranking of quantities of varying precision. In these models the need to consider multiple scales is accommodated by placing distributional assumptions on collections of parameters. For collections of species trends, this allows probability statements to be made about the collections of species-specific parameters, rather than about the estimates. We define and illustrate hierarchical models for two commonly encountered situations in bird conservation: (1) Estimating attributes of collections of species estimates, including ranking of trends, estimating number of species with increasing populations, and assessing population stability with regard to predefined trend magnitudes; and (2) estimation of regional population change, aggregating information from bird surveys over strata. User-friendly computer software makes hierarchical models readily accessible to scientists.
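    A toy empirical-Bayes version of the hierarchical idea: species trend estimates of varying precision are shrunk toward a common mean, so that poorly estimated trends do not dominate rankings. The numbers and the fixed between-species variance below are invented.

        import numpy as np

        est = np.array([-2.1, 0.4, 3.8, -0.9])    # estimated % change per year
        se = np.array([0.3, 1.5, 2.8, 0.4])       # standard errors vary in quality
        tau2 = 1.0                                 # assumed between-species variance

        # Precision-weighted group mean, then the standard normal-normal
        # posterior mean for each species.
        mu = np.average(est, weights=1.0 / (se**2 + tau2))
        shrunk = (tau2 * est + se**2 * mu) / (tau2 + se**2)
        print(shrunk)   # imprecise trends (large se) are pulled strongly toward mu

    In a full hierarchical analysis mu and tau2 would themselves be estimated, and probability statements about rankings would come from the posterior rather than from the raw estimates.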

  18. Investigating the capabilities of semantic enrichment of 3D CityEngine data

    NASA Astrophysics Data System (ADS)

    Solou, Dimitra; Dimopoulou, Efi

    2016-08-01

    In recent years the development of technology and the lifting of several technical limitations have brought the third dimension to the fore. The complexity of urban environments and the strong need for land administration intensify the need for a three-dimensional cadastral system. Despite the progress in the field of geographic information systems and 3D modeling techniques, there is no fully digital 3D cadastre. The existing geographic information systems and the different methods of three-dimensional modeling allow for better management, visualization and dissemination of information. Nevertheless, these opportunities cannot be fully exploited because of deficiencies in standardization and interoperability in these systems. Within this context, CityGML was developed as an international standard of the Open Geospatial Consortium (OGC) for the representation and exchange of 3D city models. CityGML defines geometry and topology for city modeling, also focusing on semantic aspects of 3D city information. The aim of CityGML is to establish a common terminology and to address the imperative need for interoperability and data integration, taking into account the number of available geographic information systems and modeling techniques. The aim of this paper is to develop an application for managing the semantic information of a model generated by procedural modeling. The model was initially implemented in ESRI's CityEngine software and then imported into the ArcGIS environment. The final goal was to semantically enrich the original model and then convert it to CityGML format. Semantic information management and interoperability proved feasible using the ESRI 3DCities Project tools, since their database structure supports adding semantic information to the CityEngine model and automatically converting it to CityGML for advanced analysis and visualization in different application areas.

  19. In the Face of Cybersecurity: How the Common Information Model Can Be Used

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skare, Paul; Falk, Herbert; Rice, Mark

    2016-01-01

    Efforts are underway to combine smart grid information, devices, networking, and emergency response information to create messages that are not dependent on specific standards development organizations (SDOs). This supports a future-proof approach of allowing changes in the canonical data models (CDMs) going forward without having to perform forklift replacements of solutions that use the messages. This also allows end users (electric utilities) to upgrade individual components of a larger system while keeping the message payload definitions intact. The goal is to enable public and private information sharing securely in a standards-based approach that can be integrated into existing operations. We provide an example architecture that could benefit from this multi-SDO, secure message approach. This article also describes how to improve message security.

  20. Clinical disease registries in acute myocardial infarction.

    PubMed

    Ashrafi, Reza; Hussain, Hussain; Brisk, Robert; Boardman, Leanne; Weston, Clive

    2014-06-26

    Disease registries, containing systematic records of cases, have for nearly 100 years been valuable in exploring and understanding various aspects of cardiology. This is particularly true for myocardial infarction, where such registries have provided both epidemiological and clinical information that was not readily available from randomised controlled trials in highly-selected populations. Registries, whether mandated or voluntary, prospective or retrospective in their analysis, have at their core a common study population and common data definitions. In this review we highlight how registries have diversified to offer information on epidemiology, risk modelling, quality assurance/improvement and original research-through data mining, transnational comparisons and the facilitation of enrolment in, and follow-up during registry-based randomised clinical trials.

  1. Forms of the Materials Shared between a Teacher and a Pupil

    ERIC Educational Resources Information Center

    Klubal, Libor; Kostolányová, Katerina

    2016-01-01

    Methods of using ICT are changing. We are moving from the original model of working on a single computer to a model built on cloud services and mobile touch-screen devices. The way information is searched for and delivered between a pupil and a teacher is closely tied to this shift. This work identifies common and preferred procedures of…

  2. Degree of patient satisfaction with health care performance assessed by marketing surveys.

    PubMed

    Druguş, Daniela; Azoicăi, Doina

    2015-01-01

    Marketing surveys of the health system collect useful information for developing effective management strategies. The aim of the research was to measure patient satisfaction with health care quality. The qualitative research was based on an online SurveyMonkey open-ended questionnaire. The analysis of patient satisfaction/dissatisfaction with healthcare professionals was performed in 1838 patients. Correlation analysis allowed the identification of some determinants associated with patient satisfaction. The variable most commonly associated with satisfaction was "I got adequate information about procedures/treatment", reported by 32.2% of respondents. The patients who were dissatisfied most commonly complained that they were "not adequately informed about maneuvers and treatment", reported by 40.0% of respondents. This study provides a basis for building an original model for determining the variables of an efficient healthcare system that ensures a high degree of patient satisfaction.

  3. MyOcean Internal Information System (Dial-P)

    NASA Astrophysics Data System (ADS)

    Blanc, Frederique; Jolibois, Tony; Loubrieu, Thomas; Manzella, Giuseppe; Mazzetti, Paolo; Nativi, Stefano

    2010-05-01

    MyOcean is a three-year project (2008-2011) whose goal is the development and pre-operational validation of the GMES Marine Core Service for ocean monitoring and forecasting. It is a transition project that will lead the European "operational oceanography" community towards the operational phase of a GMES European service, which demands more European integration, more operational capability, and more service. Observations, model-based data, and added-value products will be generated, and enhanced thanks to dedicated expertise, by the following production units: • Five Thematic Assembly Centers, each of them dealing with a specific set of observation data: Sea Level, Ocean Colour, Sea Surface Temperature, Sea Ice & Wind, and In Situ data; • Seven Monitoring and Forecasting Centers serving the Global Ocean, the Arctic area, the Baltic Sea, the Atlantic North-West shelves area, the Atlantic Iberian-Biscay-Ireland area, the Mediterranean Sea and the Black Sea. Intermediate and final users will discover, view and obtain the products by means of a central web desk, a central manned reactive service desk, and thematic experts distributed across Europe. The MyOcean Information System (MIS) addresses the various aspects of an interoperable, federated information system. Data models support data and computer systems by providing the definition and format of data. Whether information can be included in the data file depends on the data model adopted. In general, there is little effort in the current project to develop a 'generic' data model. A strong push to develop a common model is provided by the EU INSPIRE Directive. At present, there is no single de-facto data format for storing observational data. Data formats are still evolving, with their underlying data models moving towards the concept of Feature Types based on ISO/TC 211 standards. For example, Unidata is developing the Common Data Model, which can represent scientific data types such as point, trajectory, station and grid, and which will be implemented in the netCDF format. SeaDataNet recommends the ODV and NetCDF formats. Another problem related to data curation and interoperability is the availability of common vocabularies. Common vocabularies are developed in many international initiatives, such as GEMET (promoted by INSPIRE as a multilingual thesaurus), UNIDATA, SeaDataNet and the Marine Metadata Initiative (MMI). The MIS considers the SeaDataNet vocabulary as a base for interoperability. Four layers of interoperability, at different abstraction levels, can be defined: - Technical/basic: implemented at each TAC or MFC through internet connections and basic services for data transfer and browsing (e.g. FTP, HTTP); - Syntactic: allowing the interchange of metadata and protocol elements; this layer corresponds to the definition of a Core Metadata Set, the exchange/delivery format for the data and associated metadata, and possible software, and is implemented by the DIAL-P logical interface (e.g. adoption of an INSPIRE-compliant metadata set and common data formats); - Functional/pragmatic: based on a common set of functional primitives or a common set of service definitions; this layer refers to the definition of services based on Web service standards and is implemented by the DIAL-P logical interface (e.g. adoption of INSPIRE-compliant network services); - Semantic: allowing access to similar classes of objects and services across multiple sites, with multilinguality of content as one specific aspect; this layer corresponds to the MIS interface, terminology and thesaurus. Given the above requirements, the proposed solution is a federation of systems, where the individual participants are self-contained autonomous systems, but together form a consistent wider picture. A mid-tier integration layer mediates between existing systems, adapting their data and service model schemas to the MIS. The developed MIS is a read-only system, i.e. it does not allow updating (or inserting) data in the participant resource systems. The main advantages of the proposed approach are: • to enable information sources to join the MIS and publish their data and metadata in a secure way, without any modification to their existing resources and procedures and without any restriction on their autonomy; • to enable users to browse and query the MIS, receiving an aggregated result incorporating relevant data and metadata from across different sources; • to accommodate the growth of the MIS, in terms of both its clients and its information resources, as well as the evolution of the underlying data model.

  4. The Future Role of Information Technology in Erosion Modelling

    USDA-ARS?s Scientific Manuscript database

    Natural resources management and decision-making is a complex process requiring cooperation and communication among federal, state, and local stakeholders balancing biophysical and socio-economic concerns. Predicting soil erosion is common practice in natural resource management for assessing the e...

  5. Quantum-like Probabilistic Models Outside Physics

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrei

    We present a quantum-like (QL) model in which contexts (complexes of, e.g., mental, social, biological, economic or even political conditions) are represented by complex probability amplitudes. This approach makes it possible to apply the mathematical quantum formalism to probabilities induced in any domain of science. In our model quantum randomness appears not as irreducible randomness (as is commonly accepted in conventional quantum mechanics, e.g. by von Neumann and Dirac), but as a consequence of obtaining incomplete information about a system. We pay particular attention to the QL description of the processing of incomplete information. Our QL model can be useful in the cognitive, social and political sciences, as well as in economics and artificial intelligence. In this paper we consider in more detail one special application: QL modeling of the brain's functioning. The brain is modeled as a QL computer.

  6. Meta-analysis for the comparison of two diagnostic tests to a common gold standard: A generalized linear mixed model approach.

    PubMed

    Hoyer, Annika; Kuss, Oliver

    2018-05-01

    Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. Especially, there is an increasing interest in methods to compare different diagnostic tests to a common gold standard. Restricting to the case of two diagnostic tests, in these meta-analyses the parameters of interest are the differences of sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model by an example where two screening methods for the diagnosis of type 2 diabetes are compared.
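
    A schematic version of the quadrivariate model described above may help fix ideas; the notation here is assumed rather than taken from the paper. For study $i$ and test $t \in \{1,2\}$, with $n^{D}_{i}$ diseased and $n^{\bar D}_{i}$ non-diseased subjects:

\[
\begin{aligned}
y^{se}_{it} \mid se_{it} &\sim \mathrm{Bin}\bigl(n^{D}_{i},\, se_{it}\bigr), &
y^{sp}_{it} \mid sp_{it} &\sim \mathrm{Bin}\bigl(n^{\bar D}_{i},\, sp_{it}\bigr),\\
\operatorname{logit}(se_{it}) &= \mu^{se}_{t} + u^{se}_{it}, &
\operatorname{logit}(sp_{it}) &= \mu^{sp}_{t} + u^{sp}_{it},
\end{aligned}
\]

    with $(u^{se}_{i1}, u^{sp}_{i1}, u^{se}_{i2}, u^{sp}_{i2})^{\top} \sim \mathcal N(\mathbf 0, \Sigma)$ capturing the within-study associations. The parameters of interest are the differences $\mu^{se}_{1} - \mu^{se}_{2}$ and $\mu^{sp}_{1} - \mu^{sp}_{2}$, back-transformed to differences of sensitivities and specificities.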

  7. eTOXlab, an open source modeling framework for implementing predictive models in production environments.

    PubMed

    Carrió, Pau; López, Oriol; Sanz, Ferran; Pastor, Manuel

    2015-01-01

    Computational models based on Quantitative Structure-Activity Relationship (QSAR) methodologies are widely used tools for predicting the biological properties of new compounds. In many instances, such models are used routinely in industry (e.g. the food, cosmetic or pharmaceutical industries) for the early assessment of the biological properties of new compounds. However, most of the tools currently available for developing QSAR models are not well suited to supporting the whole QSAR model life cycle in production environments. We have developed eTOXlab, an open source modeling framework designed to be used at the core of a self-contained virtual machine that can be easily deployed in production environments, providing predictions as web services. eTOXlab consists of a collection of object-oriented Python modules with methods mapping common tasks of standard modeling workflows. This framework allows building and validating QSAR models as well as predicting the properties of new compounds using either a command line interface or a graphical user interface (GUI). Simple models can be easily generated by setting a few parameters, while more complex models can be implemented by overriding pieces of the original source code. eTOXlab benefits from the object-oriented capabilities of Python to provide high flexibility: any model implemented using eTOXlab inherits the features implemented in the parent model, like common tools and services or the automatic exposure of the models as prediction web services. The particular eTOXlab architecture as a self-contained, portable prediction engine allows building models with confidential information within corporate facilities, which can be safely exported and used for prediction without disclosing the structures of the training series. The software presented here provides full support for the specific needs of users who want to develop, use and maintain predictive models in corporate environments. The technologies used by eTOXlab (web services, VMs, object-oriented programming) provide an elegant solution to common practical issues; the system can be installed easily in heterogeneous environments and integrates well with other software. Moreover, the system provides a simple and safe solution for building models with confidential structures that can be shared without disclosing sensitive information.
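
    The "override one step, inherit the rest" pattern described above can be illustrated with a minimal sketch; the class and method names below are hypothetical, not eTOXlab's actual API.

```python
# Hypothetical sketch of the inheritance pattern described in the abstract:
# a child model overrides a single step of a standard QSAR workflow while
# inheriting the rest. Names (BaseModel, compute_descriptors) are
# illustrative only, not eTOXlab's real interface.

class BaseModel:
    """Generic QSAR workflow: descriptors -> fit -> predict."""

    def compute_descriptors(self, molecules):
        # Placeholder descriptor: string length stands in for real
        # chemoinformatics descriptors.
        return [[len(m)] for m in molecules]

    def fit(self, molecules, y):
        X = self.compute_descriptors(molecules)
        # Trivial mean model as a stand-in for a real learner.
        self.mean_ = sum(y) / len(y)
        return self

    def predict(self, molecules):
        return [self.mean_ for _ in molecules]


class MyEndpointModel(BaseModel):
    # Override only the descriptor step; fit/predict are inherited, as are
    # (in the real framework) services such as web-service exposure.
    def compute_descriptors(self, molecules):
        return [[len(m), m.count("C")] for m in molecules]
```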

  8. USGS perspectives on an integrated approach to watershed and coastal management

    USGS Publications Warehouse

    Larsen, Matthew C.; Hamilton, Pixie A.; Haines, John W.; Mason, Jr., Robert R.

    2010-01-01

    The writers discuss three critically important steps necessary for achieving the goal of improved integrated approaches to watershed and coastal protection and management. These steps involve modernization of monitoring networks, creation of common data and web services infrastructures, and development of modeling, assessment, and research tools. Long-term monitoring is needed for tracking the effectiveness of approaches for controlling land-based sources of nutrients, contaminants, and invasive species. The integration of mapping and monitoring with conceptual and mathematical models, and multidisciplinary assessments, is important in making well-informed decisions. Moreover, a better integrated data network is essential for mapping, statistical, and modeling applications, and timely dissemination of data and information products to a broad community of users.

  9. From hospital information system components to the medical record and clinical guidelines & protocols.

    PubMed

    Veloso, M; Estevão, N; Ferreira, P; Rodrigues, R; Costa, C T; Barahona, P

    1997-01-01

    This paper introduces an ongoing project towards the development of a new-generation HIS, aiming at the integration of clinical and administrative information within a common framework. Its design incorporates explicit knowledge about domain objects and professional activities to be processed by the system, together with related knowledge management services and act management services. The paper presents the conceptual model of the proposed HIS architecture, which supports a rich and fully integrated patient data model, enabling the implementation of a dynamic electronic patient record tightly coupled with computerised guideline knowledge bases.

  10. The I3I Model: Identifying Cultural Determinants of Information Sharing via C2 Information Technologies

    DTIC Science & Technology

    2009-06-01

    Individualist cultures represent loose ties between individuals, where the interests of individuals prevail over the interests of the group and the independence of individuals is emphasized. Individual accomplishments are valued, whereas in collectivist cultures the group's well-being and common goals and objectives are valued more. Collectivist cultures are characterized by tight social networks in which individuals strongly distinguish…

  11. Determining Smoking Cessation Related Information, Motivation, and Behavioral Skills among Opiate Dependent Smokers in Methadone Treatment.

    PubMed

    Cooperman, Nina A; Richter, Kimber P; Bernstein, Steven L; Steinberg, Marc L; Williams, Jill M

    2015-04-01

    Over 80% of people in methadone treatment smoke cigarettes, and existing smoking cessation interventions have been minimally effective. To develop a smoking cessation intervention for methadone-maintained smokers based on the Information-Motivation-Behavioral Skills (IMB) Model of behavior change, we examined smoking cessation related IMB factors in this population. Current or former smokers in methadone treatment (n = 35) participated in focus groups. Ten methadone clinic counselors participated in an individual interview. A content analysis was conducted using deductive and inductive approaches. Commonly known IMB factors related to smoking cessation were described. These factors included: the health effects of smoking and treatment options for quitting (information); pregnancy and the cost of cigarettes (motivators); and coping with emotions, finding social support, and pharmacotherapy adherence (behavioral skills). IMB factors specific to methadone-maintained smokers were also described. These factors included: the relationship between quitting smoking and drug relapse (information); the belief that smoking is the same as using drugs (motivator); and coping with methadone clinic culture and applying skills used to quit drugs to quitting smoking (behavioral skills). IMB strengths and deficits varied by individual. Methadone-maintained smokers could benefit from research on an IMB Model-based smoking cessation intervention that is individualized, addresses IMB factors common among all smokers, and also addresses IMB factors unique to this population.

  12. Minimally Informative Prior Distributions for PSA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana L. Kelly; Robert W. Youngblood; Kurt G. Vedros

    2010-06-01

    A salient feature of Bayesian inference is its ability to incorporate information from a variety of sources into the inference model, via the prior distribution (hereafter simply "the prior"). However, over-reliance on old information can lead to priors that dominate new data. Some analysts seek to avoid this by trying to work with a minimally informative prior distribution. Another reason for choosing a minimally informative prior is to avoid the often-voiced criticism of subjectivity in the choice of prior. Minimally informative priors fall into two broad classes: 1) so-called noninformative priors, which attempt to be completely objective, in that the posterior distribution is determined as completely as possible by the observed data, the most well known example in this class being the Jeffreys prior, and 2) priors that are diffuse over the region where the likelihood function is nonnegligible, but that incorporate some information about the parameters being estimated, such as a mean value. In this paper, we compare four approaches in the second class, with respect to their practical implications for Bayesian inference in Probabilistic Safety Assessment (PSA). The most commonly used such prior, the so-called constrained noninformative prior, is a special case of the maximum entropy prior. This is formulated as a conjugate distribution for the most commonly encountered aleatory models in PSA, and is correspondingly mathematically convenient; however, it has a relatively light tail and this can cause the posterior mean to be overly influenced by the prior in updates with sparse data. A more informative prior that is capable, in principle, of dealing more effectively with sparse data is a mixture of conjugate priors. A particular diffuse nonconjugate prior, the logistic-normal, is shown to behave similarly for some purposes. Finally, we review the so-called robust prior. Rather than relying on the mathematical abstraction of entropy, as does the constrained noninformative prior, the robust prior places a heavy-tailed Cauchy prior on the canonical parameter of the aleatory model.
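
    As a toy illustration of the prior-dominance issue discussed above (not the paper's exact formulation), the snippet below compares posterior means for a binomial failure probability under the Jeffreys prior and under a conjugate prior constrained to a prescribed mean:

```python
# Sketch: Bayesian update of a binomial failure probability p with sparse
# data, under two priors. The constrained-prior construction (alpha = 0.5,
# beta chosen to match a prior mean of 1e-3) is an assumption made for
# illustration.
from scipy.stats import beta

x, n = 0, 10  # 0 failures observed in 10 demands (sparse data)

# Jeffreys prior Beta(0.5, 0.5): posterior is Beta(0.5 + x, 0.5 + n - x).
jeffreys_mean = (0.5 + x) / (1.0 + n)

# Conjugate prior with mean 1e-3: alpha / (alpha + beta) = 1e-3.
a0 = 0.5
b0 = a0 * (1 - 1e-3) / 1e-3
informed_mean = beta(a0 + x, b0 + n - x).mean()

# With sparse data, the informative prior dominates the posterior.
print(jeffreys_mean, informed_mean)  # ~0.045 vs ~0.001
```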

  13. Study on the standard architecture for geoinformation common services

    NASA Astrophysics Data System (ADS)

    Zha, Z.; Zhang, L.; Wang, C.; Jiang, J.; Huang, W.

    2014-04-01

    The construction of platforms for geoinformation common services has been completed or is ongoing in most provinces and cities of China in recent years, and these platforms play an important role in economic and social activities. Geoinformation and geoinformation-based services are the key issues in the platform. Standards on geoinformation common services serve as bridges among the users, systems and designers of the platform. The standard architecture for geoinformation common services is the guideline for designing and using the standard system, in which the standards are integrated with each other to promote the development, sharing and serving of geoinformation resources. Establishing the standard architecture for geoinformation common services is one of the tasks of the project "Study on important standards for geoinformation common services and management of public facilities in city". The scope of the standard architecture is defined, covering data and information models, interoperability interfaces and services, and information management. Research was carried out on the status of international geoinformation common services standards in organizations such as ISO/TC 211 and OGC, and in countries and unions such as the USA, the EU and Japan. Principles such as availability, suitability and extensibility were set up to evaluate the standards. The development requirements and the practical situation were then analyzed, and a framework of the standard architecture for geoinformation common services is proposed. Finally, a summary is given and prospects for geoinformation standards are discussed.

  14. Boosting probabilistic graphical model inference by incorporating prior knowledge from multiple sources.

    PubMed

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available.
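
    The Noisy-OR combination mentioned above has a simple closed form; the sketch below (with an optional leak term, which is an assumption here, not necessarily part of the paper's model) shows how the strongest support among sources dominates the consensus prior:

```python
# Noisy-OR consensus: each source k reports a confidence q_k that an edge
# exists; the combined probability picks up the strongest support without
# requiring agreement among sources.
def noisy_or(confidences, leak=0.0):
    p_absent = 1.0 - leak
    for q in confidences:
        p_absent *= (1.0 - q)
    return 1.0 - p_absent

# e.g. a pathway database weakly supports an edge (0.2) while protein
# domain data strongly supports it (0.7):
print(noisy_or([0.2, 0.7]))  # 0.76
```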

  15. Congruence analysis of geodetic networks - hypothesis tests versus model selection by information criteria

    NASA Astrophysics Data System (ADS)

    Lehmann, Rüdiger; Lösler, Michael

    2017-12-01

    Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the usage of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution for the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that they work but behave differently, sometimes even producing different results. Hypothesis tests are well established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria such as AIC.
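
    The information-criterion route can be made concrete with a small sketch (the numbers below are hypothetical, not from the Delft data set): the deformation pattern with the smallest AIC = 2k - 2 ln L is selected, with no decision error rates to choose.

```python
# Illustrative AIC-based model selection among deformation patterns.
# Log-likelihoods and parameter counts are made-up numbers for the sketch.
models = {
    "null (no deformation)": {"k": 4, "logL": -102.3},
    "single-point movement": {"k": 6, "logL": -97.8},
    "block movement":        {"k": 7, "logL": -97.5},
}
aic = {name: 2 * m["k"] - 2 * m["logL"] for name, m in models.items()}
print(aic)
print("selected:", min(aic, key=aic.get))
```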

  16. Strategic action or self-control? Adolescent information management and delinquency.

    PubMed

    Grigoryeva, Maria S

    2018-05-01

    Recent scholarship has begun to challenge the prevailing view that children are passive recipients of parental socialization, including the common belief that parental disciplinary practices are central to explaining adolescent problem behaviors. This research shows that children exert a significant influence over parents via information management, or the degree to which children disclose information about their behavior to parents. Despite the incorporation of child information management into contemporary models of parenting, significant theoretical and empirical concerns cast doubt on its utility over classic parent-centered approaches. The current paper addresses these concerns and adjudicates between disparate definitions of adolescent information management in two ways. First, it provides a theoretically grounded definition of information management as agentic behavior. Second, it specifies a model that tests definitions of secret keeping as agentic against a non-agentic definition of secret keeping supplied by criminological theories of self-control. The model is estimated with three four-wave cross-lagged panel models, which disentangle the interrelationships between parenting, child concealment of information, and child problem behavior in a sample of high risk youth. The results offer support for a definition of concealment as strategic and self-regarding, and have implications for research on delinquency, parent-child interactions, and child agency. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Care episode retrieval: distributional semantic models for information retrieval in the clinical domain.

    PubMed

    Moen, Hans; Ginter, Filip; Marsi, Erwin; Peltonen, Laura-Maria; Salakoski, Tapio; Salanterä, Sanna

    2015-01-01

    Patients' health related information is stored in electronic health records (EHRs) by health service providers. These records include sequential documentation of care episodes in the form of clinical notes. EHRs are used throughout the health care sector by professionals, administrators and patients, primarily for clinical purposes, but also for secondary purposes such as decision support and research. The vast amounts of information in EHR systems complicate information management and increase the risk of information overload. Therefore, clinicians and researchers need new tools to manage the information stored in the EHRs. A common use case is, given a (possibly unfinished) care episode, to retrieve the most similar care episodes among the records. This paper presents several methods for information retrieval, focusing on care episode retrieval, based on textual similarity, where similarity is measured through domain-specific modelling of the distributional semantics of words. Models include variants of random indexing and the semantic neural network model word2vec. Two novel methods are introduced that utilize the ICD-10 codes attached to care episodes to better induce domain-specificity in the semantic model. We report on experimental evaluation of care episode retrieval that circumvents the lack of human judgements regarding episode relevance. Results suggest that several of the methods proposed outperform a state-of-the-art search engine (Lucene) on the retrieval task.
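
    The distributional-semantics retrieval idea generalizes readily; the sketch below is a simplification (assuming gensim 4.x, and not reproducing the paper's pipeline or its ICD-10-informed variants) that represents an episode by the mean word2vec vector of its notes and ranks episodes by cosine similarity:

```python
# Toy care-episode retrieval via distributional semantics: embed episodes
# as mean word vectors and rank by cosine similarity. The corpus is
# invented; assumes gensim >= 4 (vector_size keyword).
import numpy as np
from gensim.models import Word2Vec

episodes = [
    ["chest", "pain", "ecg", "normal"],
    ["fever", "cough", "antibiotics", "started"],
    ["chest", "pain", "troponin", "elevated"],
]
w2v = Word2Vec(episodes, vector_size=50, window=5, min_count=1, seed=1)

def embed(tokens):
    return np.mean([w2v.wv[t] for t in tokens], axis=0)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = embed(["chest", "pain"])  # a possibly unfinished episode
sims = [cos(embed(e), query) for e in episodes]
print(int(np.argmax(sims)))  # index of the most similar episode
```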

  18. Care episode retrieval: distributional semantic models for information retrieval in the clinical domain

    PubMed Central

    2015-01-01

    Patients' health related information is stored in electronic health records (EHRs) by health service providers. These records include sequential documentation of care episodes in the form of clinical notes. EHRs are used throughout the health care sector by professionals, administrators and patients, primarily for clinical purposes, but also for secondary purposes such as decision support and research. The vast amounts of information in EHR systems complicate information management and increase the risk of information overload. Therefore, clinicians and researchers need new tools to manage the information stored in the EHRs. A common use case is, given a (possibly unfinished) care episode, to retrieve the most similar care episodes among the records. This paper presents several methods for information retrieval, focusing on care episode retrieval, based on textual similarity, where similarity is measured through domain-specific modelling of the distributional semantics of words. Models include variants of random indexing and the semantic neural network model word2vec. Two novel methods are introduced that utilize the ICD-10 codes attached to care episodes to better induce domain-specificity in the semantic model. We report on experimental evaluation of care episode retrieval that circumvents the lack of human judgements regarding episode relevance. Results suggest that several of the methods proposed outperform a state-of-the-art search engine (Lucene) on the retrieval task. PMID:26099735

  19. Computer retina that models the primate retina

    NASA Astrophysics Data System (ADS)

    Shah, Samir; Levine, Martin D.

    1994-06-01

    At the retinal level, the strategies utilized by biological visual systems allow them to outperform machine vision systems, serving to motivate the design of electronic or 'smart' sensors based on similar principles. Design of such sensors in silicon first requires a model of retinal information processing which captures the essential features exhibited by biological retinas. In this paper, a simple retinal model is presented, which qualitatively accounts for the achromatic information processing in the primate cone system. The model exhibits many of the properties found in biological retinas, such as data reduction through nonuniform sampling, adaptation to a large dynamic range of illumination levels, variation of visual acuity with illumination level, and enhancement of spatio-temporal contrast information. The model is validated by replicating experiments commonly performed by electrophysiologists on biological retinas and comparing the response of the computer retina to data from experiments in monkeys. In addition, the response of the model to synthetic images is shown. The experiments demonstrate that the model behaves in a manner qualitatively similar to biological retinas and thus may serve as a basis for the development of an 'artificial retina'.

  20. What information is necessary for speech categorization? Harnessing variability in the speech signal by integrating cues computed relative to expectations

    PubMed Central

    McMurray, Bob; Jongman, Allard

    2012-01-01

    Most theories of categorization emphasize how continuous perceptual information is mapped to categories. However, equally important are the informational assumptions of a model: the type of information subserving this mapping. This is crucial in speech perception, where the signal is variable and context-dependent. This study assessed the informational assumptions of several models of speech categorization, in particular the number of cues that are the basis of categorization and whether these cues represent the input veridically or have undergone compensation. We collected a corpus of 2880 fricative productions (Jongman, Wayland & Wong, 2000) spanning many talker- and vowel-contexts and measured 24 cues for each. A subset was also presented to listeners in an 8AFC phoneme categorization task. We then trained a common classification model based on logistic regression to categorize the fricative from the cue values, and manipulated the information in the training set to contrast 1) models based on a small number of invariant cues; 2) models using all cues without compensation; and 3) models in which cues underwent compensation for contextual factors. Compensation was modeled by Computing Cues Relative to Expectations (C-CuRE), a new approach to compensation that preserves fine-grained detail in the signal. Only the compensation model achieved an accuracy similar to listeners', and showed the same effects of context. Thus, even simple categorization metrics can overcome the variability in speech when sufficient information is available and compensation schemes like C-CuRE are employed. PMID:21417542
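
    The compensation step can be sketched generically (the details here are assumptions, not the authors' exact C-CuRE implementation): each cue is re-expressed as the residual from a regression on contextual factors, and categorization operates on those residuals.

```python
# Sketch of compensation in the C-CuRE spirit: compute each cue relative
# to its expected value for the talker context (linear regression), then
# categorize from the residuals. Data are simulated; the real study used
# 24 measured cues from 2880 fricatives.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 400
talker = rng.integers(0, 4, n)              # contextual factor
category = rng.integers(0, 2, n)            # fricative category
cue = 1.0 * category + 0.8 * talker + rng.normal(0, 0.5, n)

context = np.eye(4)[talker]                 # one-hot talker coding
expected = LinearRegression().fit(context, cue).predict(context)
residual = (cue - expected).reshape(-1, 1)  # cue relative to expectation

clf = LogisticRegression().fit(residual, category)
print(clf.score(residual, category))  # compare with fitting the raw cue
```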

  1. Modeling abundance using multinomial N-mixture models

    USGS Publications Warehouse

    Royle, Andy

    2016-01-01

    Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols such as multiple observer sampling, removal sampling, and capture-recapture produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as model Mb, Mh and other classes of models that are only possible to describe within the multinomial N-mixture framework.
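
    As one concrete instance (with assumed notation), a removal-sampling protocol with J passes and per-pass detection probability p leads to:

\[
N_i \sim \mathrm{Poisson}(\lambda_i), \qquad
\mathbf{y}_i \mid N_i \sim \mathrm{Multinomial}\bigl(N_i,\ \pi_1, \dots, \pi_J\bigr), \qquad
\pi_j = (1-p)^{\,j-1}\, p,
\]

    so the multinomial cell probabilities carry direct information about the observation process, which is what yields the improved precision over binomial mixture models.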

  2. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    USGS Publications Warehouse

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
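
    A minimal SISR loop (with toy dynamics assumed here; this is not the authors' model or their kernel-smoothing refinement) shows the propagate-weight-resample structure:

```python
# Minimal sequential importance sampling/resampling for a state-space
# abundance model. Dynamics, growth rate and counts are invented for the
# sketch; parameter estimation and kernel smoothing are omitted.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
counts = [50, 55, 60, 58, 64]  # observed survey counts
P = 1000                       # number of particles
N = rng.poisson(50, P)         # initial latent abundance particles
growth = 1.05                  # assumed known growth rate

for y in counts:
    N = rng.poisson(growth * N)           # propagate process model
    w = poisson.pmf(y, N.clip(min=1e-9))  # weight by count likelihood
    w = w / w.sum()
    N = N[rng.choice(P, P, p=w)]          # resample particles
print(N.mean())                           # filtered abundance estimate
```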

  3. Using latent semantic analysis and the predication algorithm to improve extraction of meanings from a diagnostic corpus.

    PubMed

    Jorge-Botana, Guillermo; Olmos, Ricardo; León, José Antonio

    2009-11-01

    There is currently a widespread interest in indexing and extracting taxonomic information from large text collections. An example is the automatic categorization of informally written medical or psychological diagnoses, followed by the extraction of epidemiological information or even terms and structures needed to formulate guiding questions as a heuristic tool for helping doctors. Vector space models have been successfully used to this end (Lee, Cimino, Zhu, Sable, Shanker, Ely & Yu, 2006; Pakhomov, Buntrock & Chute, 2006). In this study we use a computational model known as Latent Semantic Analysis (LSA) on a diagnostic corpus with the aim of retrieving definitions (in the form of lists of semantic neighbors) of common structures it contains (e.g. "storm phobia", "dog phobia") or less common structures that might be formed by logical combinations of categories and diagnostic symptoms (e.g. "gun personality" or "germ personality"). In the quest to bring definitions into line with the meaning of structures and make them in some way representative, various problems commonly arise while recovering content using vector space models. We propose some approaches which bypass these problems, such as Kintsch's (2001) predication algorithm and some corrections to the way lists of neighbors are obtained, which have already been tested on semantic spaces in a non-specific domain (Jorge-Botana, León, Olmos & Hassan-Montero, under review). The results support the idea that the predication algorithm may also be useful for extracting more precise meanings of certain structures from scientific corpora, and that the introduction of some corrections based on vector length may increase its efficiency on non-representative terms.
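
    The vector-space core of such a system is compact; the sketch below builds an LSA-style space with truncated SVD and pulls semantic neighbors by cosine similarity (a toy corpus; Kintsch's predication algorithm and the proposed corrections are not reproduced here):

```python
# LSA-style neighbor retrieval: TF-IDF matrix -> truncated SVD -> cosine
# similarity in the latent space. Documents are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "patient fears storms and loud thunder",
    "dog phobia with avoidance of parks",
    "fear of storms since childhood",
]
X = TfidfVectorizer().fit_transform(docs)
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
print(cosine_similarity(Z[0:1], Z).ravel())  # neighbors of the first doc
```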

  4. Dimensions of service quality in healthcare: a systematic review of literature.

    PubMed

    Fatima, Iram; Humayun, Ayesha; Iqbal, Usman; Shafiq, Muhammad

    2018-06-13

    Various dimensions of healthcare service quality have been used and discussed in the literature across the globe. This study presents an updated, meaningful review of the extensive research that has been conducted on measuring dimensions of healthcare service quality. The systematic review method in the current study is based on PRISMA guidelines. We searched for literature using databases such as Google, Google Scholar, PubMed and the Social Science Citation Index. In this study, we screened 1921 identified papers using search terms/phrases. Snowball strategies were adopted to extract published articles from January 1997 to December 2016. Two hundred and fourteen papers were identified as relevant for data extraction; extraction was completed by two researchers and double-checked by two others to resolve discrepancies. In total, 74 studies fulfilled our pre-defined inclusion and exclusion criteria for data analysis. Service quality is mainly measured as technical and functional, incorporating many sub-dimensions. We synthesized the information about dimensions of healthcare service quality with reference to developed and developing countries. Tangibility is found to be the most common contributing factor, whereas SERVQUAL is the most commonly used model to measure healthcare service quality. There are core dimensions of healthcare service quality that are commonly found in all models used in the reviewed studies. We found little difference in these core dimensions between developed and developing countries, as SERVQUAL is mostly used as the basic model, either to generate a new one or to add further contextual dimensions. The current study ranked the contributing factors based on their frequency in the literature. If these factors are addressed according to their priority, irrespective of context, they may contribute to improving healthcare quality and provide important information for evidence-informed decision-making.

  5. Community, Collective or Movement? Evaluating Theoretical Perspectives on Network Building

    NASA Astrophysics Data System (ADS)

    Spitzer, W.

    2015-12-01

    Since 2007, the New England Aquarium has led a national effort to increase the capacity of informal science venues to effectively communicate about climate change. We are now leading the NSF-funded National Network for Ocean and Climate Change Interpretation (NNOCCI), partnering with the Association of Zoos and Aquariums, FrameWorks Institute, Woods Hole Oceanographic Institution, Monterey Bay Aquarium, and National Aquarium, with evaluation conducted by the New Knowledge Organization, Pennsylvania State University, and Ohio State University. NNOCCI enables teams of informal science interpreters across the country to serve as "communication strategists": beyond merely conveying information, they can influence public perceptions, given their high level of commitment, knowledge, public trust, social networks, and visitor contact. We provide in-depth training as well as an alumni network for ongoing learning, implementation support, leadership development, and coalition building. Our goals are to achieve a systemic national impact, embed our work within multiple ongoing regional and national climate change education networks, and leave an enduring legacy. What is the most useful theoretical model for conceptualizing the work of the NNOCCI community? This presentation will examine the pros and cons of three perspectives: community of practice, collective impact, and social movements. The community of practice approach emphasizes use of common tools, support for practice, social learning, and organic development of leadership. A collective impact model focuses on defining common outcomes, aligning activities toward a common goal, and structured collaboration. A social movement emphasizes building group identity and creating a sense of group efficacy. This presentation will address how these models compare in terms of their utility in program planning and evaluation, their fit with the unique characteristics of the NNOCCI community, and their relevance to our program goals.

  6. The dual impact of ecology and management on social incentives in marine common-pool resource systems.

    PubMed

    Klein, E S; Barbier, M R; Watson, J R

    2017-08-01

    Understanding how and when cooperative human behaviour forms in common-pool resource systems is critical to illuminating social-ecological systems and designing governance institutions that promote sustainable resource use. Before assessing the full complexity of social dynamics, it is essential to understand, concretely and mechanistically, how resource dynamics and human actions interact to create incentives and pay-offs for social behaviours. Here, we investigated how such incentives for information sharing are affected by spatial dynamics and management in a common-pool resource system. Using interviews with fishermen to inform an agent-based model, we reveal generic mechanisms through which, for a given ecological setting characterized by the spatial dynamics of the resource, the two 'human factors' of information sharing and management may heterogeneously impact various members of a group for whom theory would otherwise predict the same strategy. When users can deplete the resource, these interactions are further affected by the management approach. Finally, we discuss the implications of alternative motivations, such as equity among fishermen and consistency of the fleet's output. Our results indicate that resource spatial dynamics, form of management and level of depletion can interact to alter the sociality of people in common-pool resource systems, providing necessary insight for future study of strategic decision processes.

  7. Information extraction from multi-institutional radiology reports.

    PubMed

    Hassanpour, Saeed; Langlotz, Curtis P

    2016-01-01

    The radiology report is the most important source of clinical imaging information. It documents critical information about the patient's health and the radiologist's interpretation of medical findings. It also communicates information to the referring physicians and records that information for future clinical and research use. Although efforts to structure some radiology report information through predefined templates are beginning to bear fruit, a large portion of radiology report information is entered in free text. The free text format is a major obstacle for rapid extraction and subsequent use of information by clinicians, researchers, and healthcare information systems. This difficulty is due to the ambiguity and subtlety of natural language, complexity of described images, and variations among different radiologists and healthcare organizations. As a result, radiology reports are used only once by the clinician who ordered the study and rarely are used again for research and data mining. In this work, machine learning techniques and a large multi-institutional radiology report repository are used to extract the semantics of the radiology report and overcome the barriers to the re-use of radiology report information in clinical research and other healthcare applications. We describe a machine learning system to annotate radiology reports and extract report contents according to an information model. This information model covers the majority of clinically significant contents in radiology reports and is applicable to a wide variety of radiology study types. Our automated approach uses discriminative sequence classifiers for named-entity recognition to extract and organize clinically significant terms and phrases consistent with the information model. We evaluated our information extraction system on 150 radiology reports from three major healthcare organizations and compared its results to a commonly used non-machine learning information extraction method. We also evaluated the generalizability of our approach across different organizations by training and testing our system on data from different organizations. Our results show the efficacy of our machine learning approach in extracting the information model's elements (10-fold cross-validation average performance: precision: 87%, recall: 84%, F1 score: 85%) and its superiority and generalizability compared to the common non-machine learning approach (p-value<0.05). Our machine learning information extraction approach provides an effective automatic method to annotate and extract clinically significant information from a large collection of free text radiology reports. This information extraction system can help clinicians better understand the radiology reports and prioritize their review process. In addition, the extracted information can be used by researchers to link radiology reports to information from other data sources such as electronic health records and the patient's genome. Extracted information also can facilitate disease surveillance, real-time clinical decision support for the radiologist, and content-based image retrieval. Copyright © 2015 Elsevier B.V. All rights reserved.
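
    The sequence-labeling step can be sketched with a linear-chain CRF; the features, labels and library choice below are assumptions for illustration, not the paper's system:

```python
# Minimal named-entity tagging sketch with sklearn-crfsuite. The feature
# template and BIO labels are invented; a real system would use far richer
# features and the paper's information model as the label set.
import sklearn_crfsuite

def features(tokens, i):
    return {
        "word": tokens[i].lower(),
        "is_digit": tokens[i].isdigit(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
    }

sents = [["No", "acute", "intracranial", "hemorrhage"]]
labels = [["O", "B-OBS", "I-OBS", "I-OBS"]]
X = [[features(s, i) for i in range(len(s))] for s in sents]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))
```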

  8. Importance of Personalized Health-Care Models: A Case Study in Activity Recognition.

    PubMed

    Zdravevski, Eftim; Lameski, Petre; Trajkovik, Vladimir; Pombo, Nuno; Garcia, Nuno

    2018-01-01

    Novel information and communication technologies create possibilities to change the future of health care. Ambient Assisted Living (AAL) is seen as a promising supplement to the current care models. The main goal of AAL solutions is to apply ambient intelligence technologies to enable elderly people to continue to live in their preferred environments. Applying trained models from health data is challenging because the personalized environments could differ significantly from the ones which provided the training data. This paper investigates the effects on activity recognition accuracy, using a single accelerometer, of personalized models compared to models built on the general population. In addition, we propose a collaborative filtering based approach which provides a balance between fully personalized models and generic models. The results show that the accuracy could be improved to 95% with fully personalized models, and up to 91.6% with collaborative filtering based models, which is significantly better than common models that exhibit an accuracy of 85.1%. The collaborative filtering approach seems to provide highly personalized models with substantial accuracy, while overcoming the cold start problem that is common for fully personalized models.

  9. Neuroinflammation in epileptogenesis: Insights and translational perspectives from new models of epilepsy.

    PubMed

    Barker-Haliski, Melissa L; Löscher, Wolfgang; White, H Steve; Galanopoulou, Aristea S

    2017-07-01

    Animal models have provided a wealth of information on mechanisms of epileptogenesis and comorbidogenesis, and have significantly advanced our ability to investigate the potential of new therapies. Processes implicating brain inflammation have been increasingly observed in epilepsy research. Herein we discuss the progress on animal models of epilepsy and comorbidities that inform us on the potential role of inflammation in epileptogenesis and comorbidity pathogenesis in rodent models of West syndrome and the Theiler's murine encephalomyelitis virus (TMEV) mouse model of viral encephalitis-induced epilepsy. Rat models of infantile spasms were generated in rat pups after right intracerebral injections of proinflammatory compounds (lipopolysaccharides with or without doxorubicin, or cytokines) and were longitudinally monitored for epileptic spasms and neurodevelopmental and cognitive deficits. Anti-inflammatory treatments were tested after the onset of spasms. The TMEV mouse model was induced with intracerebral administration of TMEV and prospective monitoring for handling-induced seizures or seizure susceptibility, as well as long-term evaluations of behavioral comorbidities of epilepsy. Inflammatory processes are evident in both models and are implicated in the pathogenesis of the observed seizures and comorbidities. A common feature of these models, based on the data so far available, is their pharmacoresistant profile. The presented data support the role of inflammatory pathways in epileptogenesis and comorbidities in two distinct epilepsy models. Pharmacoresistance is a common feature of both inflammation-based models. Utilization of these models may facilitate the identification of age-specific, syndrome- or etiology-specific therapies for the epilepsies and attendant comorbidities, including the drug-resistant forms. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.

  10. A Combined IRT and SEM Approach for Individual-Level Assessment in Test-Retest Studies

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2015-01-01

    The standard two-wave multiple-indicator model (2WMIM) commonly used to analyze test-retest data provides information at both the group and item level. Furthermore, when applied to binary and graded item responses, it is related to well-known item response theory (IRT) models. In this article the IRT-2WMIM relations are used to obtain additional…

  11. Using game theory approach to interpret stable policies for Iran's oil and gas common resources conflicts with Iraq and Qatar

    NASA Astrophysics Data System (ADS)

    Esmaeili, Maryam; Bahrini, Aram; Shayanrad, Sepideh

    2015-12-01

    Oil and gas, as non-renewable resources, are considered very valuable for countries with petroleum-based economies. These resources are not distributed equally around the world; moreover, in some places they are shared between neighboring countries, which often come into conflict over them. Consequently, it is vital for those countries to manage their resource utilization. Game theory has lately been applied to conflict resolution over common resources, such as water, which is proof of its efficacy and capability. This paper models the conflicts between Iran and its neighbors, namely Qatar and Iraq, over their common oil and gas resources using a game theory approach. In other words, the future of these countries is introduced and analyzed through some well-known 2 × 2 games to achieve a better perspective on their conflicts. Because of the players' incomplete information, various solution concepts are used in addition to Nash stability, based on foresight, disimprovements, and knowledge of preferences. The results of the mathematical models show how the countries could adopt a reasonable strategy to exploit their common resources.
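
    For intuition, the pure-strategy Nash equilibria of a 2 × 2 game can be enumerated directly; the payoff numbers below are hypothetical ordinal values, not the preferences estimated in the paper:

```python
# Enumerate pure-strategy Nash equilibria of a 2x2 game. With these
# Prisoner's-Dilemma-style payoffs, mutual defection (1, 1) is the only
# equilibrium, illustrating how common-resource conflicts can be stable
# yet inefficient.
A = [[3, 1], [4, 2]]  # row player's payoffs (strategy 0 = cooperate)
B = [[3, 4], [1, 2]]  # column player's payoffs

nash = [(i, j) for i in range(2) for j in range(2)
        if A[i][j] >= A[1 - i][j] and B[i][j] >= B[i][1 - j]]
print(nash)  # [(1, 1)]
```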

  12. Accounting for false-positive acoustic detections of bats using occupancy models

    USGS Publications Warehouse

    Clement, Matthew J.; Rodhouse, Thomas J.; Ormsbee, Patricia C.; Szewczak, Joseph M.; Nichols, James D.

    2014-01-01

    4. Synthesis and applications. Our results suggest that false positives sufficient to affect inferences may be common in acoustic surveys for bats. We demonstrate an approach that can estimate occupancy, regardless of the false-positive rate, when acoustic surveys are paired with capture surveys. Applications of this approach include monitoring the spread of White-Nose Syndrome, estimating the impact of climate change and informing conservation listing decisions. We calculate a site-specific probability of occupancy, conditional on survey results, which could inform local permitting decisions, such as for wind energy projects. More generally, the magnitude of false positives suggests that false-positive occupancy models can improve accuracy in research and monitoring of bats and provide wildlife managers with more reliable information.
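
    One common formulation of such a false-positive occupancy model (with notation assumed here) treats an acoustic detection at a site as:

\[
\Pr(\text{acoustic detection}) \;=\; \psi\, p_{11} \;+\; (1 - \psi)\, p_{10},
\]

    where $\psi$ is the occupancy probability, $p_{11}$ the probability of a true-positive detection at an occupied site, and $p_{10}$ the false-positive probability at an unoccupied site; pairing acoustic surveys with capture surveys, which are assumed not to produce false positives, is what makes $\psi$ and $p_{10}$ separately estimable.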

  13. A Robust Design Capture-Recapture Analysis of Abundance, Survival and Temporary Emigration of Three Odontocete Species in the Gulf of Corinth, Greece

    PubMed Central

    Bonizzoni, Silvia; Bearzi, Giovanni; Eddy, Lavinia; Gimenez, Olivier

    2016-01-01

    While the Mediterranean Sea has been designated as a Global Biodiversity Hotspot, assessments of cetacean population abundance are lacking for large portions of the region, particularly in the southern and eastern basins. The challenges and costs of obtaining the necessary data often result in absent or poor abundance information. We applied capture-recapture models to estimate abundance, survival and temporary emigration of odontocete populations within a 2,400 km2 semi-enclosed Mediterranean bay, the Gulf of Corinth. Boat surveys were conducted in 2011–2015 to collect photo-identification data on striped dolphins Stenella coeruleoalba, short-beaked common dolphins Delphinus delphis (always found together with striped dolphins in mixed groups) and common bottlenose dolphins Tursiops truncatus, totaling 1,873 h of tracking. After grading images for quality and marking distinctiveness, 23,995 high-quality photos were included in a striped and common dolphin catalog, and 2,472 in a bottlenose dolphin catalog. The proportions of striped and common dolphins were calculated from the photographic sample and used to scale capture-recapture estimates. Best-fitting robust design capture-recapture models denoted no temporary emigration between years for striped and common dolphins, and random temporary emigration for bottlenose dolphins, suggesting different residency patterns in agreement with previous studies. Average estimated abundance over the five years was 1,331 (95% CI 1,122–1,578) striped dolphins, 22 (16–32) common dolphins, 55 (36–84) “intermediate” animals (potential striped x common dolphin hybrids) and 38 (32–46) bottlenose dolphins. Apparent survival was constant for striped, common and intermediate dolphins (0.94, 95% CI 0.92–0.96) and year-dependent for bottlenose dolphins (an average of 0.85, 95% CI 0.76–0.95). Our work underlines the importance of long-term monitoring to contribute reliable baseline information that can help assess the conservation status of wildlife populations. PMID:27926926

  14. Intelligent diagnosis of jaundice with dynamic uncertain causality graph model.

    PubMed

    Hao, Shao-Rui; Geng, Shi-Chao; Fan, Lin-Xiao; Chen, Jia-Jia; Zhang, Qin; Li, Lan-Juan

    2017-05-01

    Jaundice is a common and complex clinical symptom potentially occurring in hepatology, general surgery, pediatrics, infectious diseases, gynecology, and obstetrics, and it is fairly difficult to distinguish the cause of jaundice in clinical practice, especially for general practitioners in less developed regions. With collaboration between physicians and artificial intelligence engineers, a comprehensive knowledge base relevant to jaundice was created based on demographic information, symptoms, physical signs, laboratory tests, imaging diagnosis, medical histories, and risk factors. Then a diagnostic modeling and reasoning system using the dynamic uncertain causality graph was proposed. A modularized modeling scheme was presented to reduce the complexity of model construction, providing multiple perspectives and arbitrary granularity for disease causality representations. A "chaining" inference algorithm and weighted logic operation mechanism were employed to guarantee the exactness and efficiency of diagnostic reasoning under situations of incomplete and uncertain information. Moreover, the causal interactions among diseases and symptoms intuitively demonstrated the reasoning process in a graphical manner. Verification was performed using 203 randomly pooled clinical cases, and the accuracy was 99.01% and 84.73%, respectively, with or without laboratory tests in the model. The solutions were more explicable and convincing than common methods such as Bayesian Networks, further increasing the objectivity of clinical decision-making. The promising results indicated that our model could be potentially used in intelligent diagnosis and help decrease public health expenditure.

  15. Intelligent diagnosis of jaundice with dynamic uncertain causality graph model*

    PubMed Central

    Hao, Shao-rui; Geng, Shi-chao; Fan, Lin-xiao; Chen, Jia-jia; Zhang, Qin; Li, Lan-juan

    2017-01-01

    Jaundice is a common and complex clinical symptom potentially occurring in hepatology, general surgery, pediatrics, infectious diseases, gynecology, and obstetrics, and it is fairly difficult to distinguish the cause of jaundice in clinical practice, especially for general practitioners in less developed regions. With collaboration between physicians and artificial intelligence engineers, a comprehensive knowledge base relevant to jaundice was created based on demographic information, symptoms, physical signs, laboratory tests, imaging diagnosis, medical histories, and risk factors. Then a diagnostic modeling and reasoning system using the dynamic uncertain causality graph was proposed. A modularized modeling scheme was presented to reduce the complexity of model construction, providing multiple perspectives and arbitrary granularity for disease causality representations. A “chaining” inference algorithm and weighted logic operation mechanism were employed to guarantee the exactness and efficiency of diagnostic reasoning under situations of incomplete and uncertain information. Moreover, the causal interactions among diseases and symptoms intuitively demonstrated the reasoning process in a graphical manner. Verification was performed using 203 randomly pooled clinical cases, and the accuracy was 99.01% and 84.73%, respectively, with or without laboratory tests in the model. The solutions were more explicable and convincing than common methods such as Bayesian Networks, further increasing the objectivity of clinical decision-making. The promising results indicated that our model could be potentially used in intelligent diagnosis and help decrease public health expenditure. PMID:28471111

  16. Two-trait-locus linkage analysis: A powerful strategy for mapping complex genetic traits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schork, N.J.; Boehnke, M.; Terwilliger, J.D.

    1993-11-01

    Nearly all diseases mapped to date follow clear Mendelian, single-locus segregation patterns. In contrast, many common diseases such as diabetes, psoriasis, several forms of cancer, and schizophrenia are familial and appear to have a genetic component but do not exhibit simple Mendelian transmission. More complex models are required to explain the genetics of these important diseases. In this paper, the authors explore two-trait-locus, two-marker-locus linkage analysis in which two trait loci are mapped simultaneously to separate genetic markers. The authors compare the utility of this approach to standard one-trait-locus, one-marker-locus linkage analysis with and without allowance for heterogeneity. The authors also compare the utility of the two-trait-locus, two-marker-locus analysis to two-trait-locus, one-marker-locus linkage analysis. For common diseases, pedigrees are often bilineal, with disease genes entering via two or more unrelated pedigree members. Since such pedigrees often are avoided in linkage studies, the authors also investigate the relative information content of unilineal and bilineal pedigrees. For the dominant-or-recessive and threshold models considered, the authors find that two-trait-locus, two-marker-locus linkage analysis can provide substantially more linkage information, as measured by expected maximum lod score, than standard one-trait-locus, one-marker-locus methods, even allowing for heterogeneity, while, for a dominant-or-dominant generating model, one-locus models that allow for heterogeneity extract essentially as much information as the two-trait-locus methods. For these three models, the authors also find that bilineal pedigrees provide sufficient linkage information to warrant their inclusion in such studies. The authors discuss strategies for assessing the significance of the two linkages assumed in two-trait-locus, two-marker-locus models. 37 refs., 1 fig., 4 tabs.
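    For reference, the quantity being compared across designs is the expected maximum lod score; the lod score for a single marker-trait pair at recombination fraction theta is the familiar likelihood ratio on a log10 scale (the two-trait-locus analyses maximize an analogous ratio over the parameters of both trait loci):

    ```latex
    \[
      \mathrm{LOD}(\theta) \;=\; \log_{10}
      \frac{L(\text{pedigree data} \mid \theta)}
           {L(\text{pedigree data} \mid \theta = \tfrac{1}{2})}
    \]
    ```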

  17. Effects of Individual Health Topic Familiarity on Activity Patterns During Health Information Searches

    PubMed Central

    Moriyama, Koichi; Fukui, Ken–ichi; Numao, Masayuki

    2015-01-01

    Background Non-medical professionals (consumers) are increasingly using the Internet to support their health information needs. However, the cognitive effort required to perform health information searches is affected by the consumer’s familiarity with health topics. Consumers may have different levels of familiarity with individual health topics. This variation in familiarity may cause misunderstandings because the information presented by search engines may not be understood correctly by the consumers. Objective As a first step toward the improvement of the health information search process, we aimed to examine the effects of health topic familiarity on health information search behaviors by identifying the common search activity patterns exhibited by groups of consumers with different levels of familiarity. Methods Each participant completed a health terminology familiarity questionnaire and health information search tasks. The responses to the familiarity questionnaire were used to grade the familiarity of participants with predefined health topics. The search task data were transcribed into a sequence of search activities using a coding scheme. A computational model was constructed from the sequence data using a Markov chain model to identify the common search patterns in each familiarity group. Results Forty participants were classified into L1 (not familiar), L2 (somewhat familiar), and L3 (familiar) groups based on their questionnaire responses. They had different levels of familiarity with four health topics. The video data obtained from all of the participants were transcribed into 4595 search activities (mean 28.7, SD 23.27 per session). The most frequent search activities and transitions in all the familiarity groups were related to evaluations of the relevancy of selected web pages in the retrieval results. However, the next most frequent transitions differed in each group and a chi-squared test confirmed this finding (P<.001). Next, according to the results of a perplexity evaluation, the health information search patterns were best represented as a 5-gram sequence pattern. The most common patterns in group L1 were frequent query modifications, with relatively low search efficiency, and accessing and evaluating selected results from a health website. Group L2 performed frequent query modifications, but with better search efficiency, and accessed and evaluated selected results from a health website. Finally, the members of group L3 successfully discovered relevant results from the first query submission, performed verification by accessing several health websites after they discovered relevant results, and directly accessed consumer health information websites. Conclusions Familiarity with health topics affects health information search behaviors. Our analysis of state transitions in search activities detected unique behaviors and common search activity patterns in each familiarity group during health information searches. PMID:25783222
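    A minimal sketch of the first modeling step described above: estimating first-order Markov transition probabilities from coded search-activity sequences (the study itself went further, to 5-gram patterns selected by perplexity). The activity labels and sessions here are hypothetical.

    ```python
    from collections import Counter, defaultdict

    sessions = [  # hypothetical coded search sessions
        ["query", "eval_results", "open_page", "eval_page", "query"],
        ["query", "eval_results", "open_page", "eval_page", "open_page"],
    ]

    pair_counts = defaultdict(Counter)
    for seq in sessions:
        for a, b in zip(seq, seq[1:]):
            pair_counts[a][b] += 1  # count observed transitions a -> b

    for state, nexts in pair_counts.items():
        total = sum(nexts.values())
        for nxt, c in nexts.items():
            print(f"P({nxt} | {state}) = {c / total:.2f}")
    ```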

  18. Russian Ural and Siberian Media Education Centers

    ERIC Educational Resources Information Center

    Fedorov, Alexander

    2014-01-01

    The comparative analysis of the models and functions of the media education centres showed that despite having some definite differences and peculiarities, they have the following common features: differentiated financing resources (public financing, grants, business organizations, etc.) and regional media information support; presence of famous…

  19. Mathematical modeling and growth kinetics of Clostridium sporogenes in cooked beef

    USDA-ARS?s Scientific Manuscript database

    Clostridium sporogenes PA 3679 is a common surrogate for proteolytic Clostridium botulinum for thermal process development and validation. However, little information is available concerning the growth kinetics of C. sporogenes in food. Therefore, the objective of this study was to investigate the...

  20. Autism Spectrum Disorder Updates - Relevant Information for Early Interventionists to Consider.

    PubMed

    Allen-Meares, Paula; MacDonald, Megan; McGee, Kristin

    2016-01-01

    Autism spectrum disorder (ASD) is a pervasive developmental disorder characterized by deficits in social communication skills as well as repetitive, restricted or stereotyped behaviors (1). Early interventionists are often found at the forefront of assessment, evaluation, and early intervention services for children with ASD. The role of an early intervention specialist may include assessing developmental history, providing group and individual counseling, working in partnership with families on home, school, and community environments, mobilizing school and community resources, and assisting in the development of positive early intervention strategies (2, 3). The commonality among these roles resides in the importance of providing up-to-date, relevant information to families and children. The purpose of this review is to provide pertinent up-to-date knowledge for early interventionists to help inform practice in working with individuals with ASD, including common behavioral models of intervention.

  1. Sentiment Analysis Using Common-Sense and Context Information

    PubMed Central

    Mittal, Namita; Bansal, Pooja; Garg, Sonal

    2015-01-01

    Sentiment analysis research has been increasing tremendously in recent times due to the wide range of business and social applications. Sentiment analysis from unstructured natural language text has recently received considerable attention from the research community. In this paper, we propose a novel sentiment analysis model based on common-sense knowledge extracted from ConceptNet based ontology and context information. ConceptNet based ontology is used to determine the domain specific concepts which in turn produced the domain specific important features. Further, the polarities of the extracted concepts are determined using the contextual polarity lexicon which we developed by considering the context information of a word. Finally, semantic orientations of domain specific features of the review document are aggregated based on the importance of a feature with respect to the domain. The importance of the feature is determined by the depth of the feature in the ontology. Experimental results show the effectiveness of the proposed methods. PMID:25866505
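    A minimal sketch of the final aggregation step as described: each extracted concept's polarity is weighted by its depth in the domain ontology (deeper concepts are more specific, hence more important). Concepts, depths, and polarities below are hypothetical placeholders, not values from the paper.

    ```python
    concept_depth = {"phone": 1, "battery_life": 4, "camera_zoom": 5}        # hypothetical
    concept_polarity = {"phone": 0.5, "battery_life": -1.0, "camera_zoom": 1.0}

    def review_orientation(concepts):
        """Depth-weighted average of concept polarities for one review."""
        total_weight = sum(concept_depth[c] for c in concepts)
        score = sum(concept_polarity[c] * concept_depth[c] for c in concepts)
        return score / total_weight

    print(review_orientation(["phone", "battery_life", "camera_zoom"]))
    ```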

  2. Development of an Ontology to Model Medical Errors, Information Needs, and the Clinical Communication Space

    PubMed Central

    Stetson, Peter D.; McKnight, Lawrence K.; Bakken, Suzanne; Curran, Christine; Kubose, Tate T.; Cimino, James J.

    2002-01-01

    Medical errors are common, costly and often preventable. Work in understanding the proximal causes of medical errors demonstrates that systems failures predispose to adverse clinical events. Most of these systems failures are due to lack of appropriate information at the appropriate time during the course of clinical care. Problems with clinical communication are common proximal causes of medical errors. We have begun a project designed to measure the impact of wireless computing on medical errors. We report here on our efforts to develop an ontology representing the intersection of medical errors, information needs and the communication space. We will use this ontology to support the collection, storage and interpretation of project data. The ontology’s formal representation of the concepts in this novel domain will help guide the rational deployment of our informatics interventions. A real-life scenario is evaluated using the ontology in order to demonstrate its utility.

  3. Sentiment analysis using common-sense and context information.

    PubMed

    Agarwal, Basant; Mittal, Namita; Bansal, Pooja; Garg, Sonal

    2015-01-01

    Sentiment analysis research has been increasing tremendously in recent times due to the wide range of business and social applications. Sentiment analysis from unstructured natural language text has recently received considerable attention from the research community. In this paper, we propose a novel sentiment analysis model based on common-sense knowledge extracted from ConceptNet based ontology and context information. ConceptNet based ontology is used to determine the domain specific concepts which in turn produced the domain specific important features. Further, the polarities of the extracted concepts are determined using the contextual polarity lexicon which we developed by considering the context information of a word. Finally, semantic orientations of domain specific features of the review document are aggregated based on the importance of a feature with respect to the domain. The importance of the feature is determined by the depth of the feature in the ontology. Experimental results show the effectiveness of the proposed methods.

  4. Autism Spectrum Disorder Updates – Relevant Information for Early Interventionists to Consider

    PubMed Central

    Allen-Meares, Paula; MacDonald, Megan; McGee, Kristin

    2016-01-01

    Autism spectrum disorder (ASD) is a pervasive developmental disorder characterized by deficits in social communication skills as well as repetitive, restricted or stereotyped behaviors (1). Early interventionists are often found at the forefront of assessment, evaluation, and early intervention services for children with ASD. The role of an early intervention specialist may include assessing developmental history, providing group and individual counseling, working in partnership with families on home, school, and community environments, mobilizing school and community resources, and assisting in the development of positive early intervention strategies (2, 3). The commonality among these roles resides in the importance of providing up-to-date, relevant information to families and children. The purpose of this review is to provide pertinent up-to-date knowledge for early interventionists to help inform practice in working with individuals with ASD, including common behavioral models of intervention. PMID:27840812

  5. Simulation-based Bayesian inference for latent traits of item response models: Introduction to the ltbayes package for R.

    PubMed

    Johnson, Timothy R; Kuhn, Kristine M

    2015-12-01

    This paper introduces the ltbayes package for R. This package includes a suite of functions for investigating the posterior distribution of latent traits of item response models. These include functions for simulating realizations from the posterior distribution, profiling the posterior density or likelihood function, calculation of posterior modes or means, Fisher information functions and observed information, and profile likelihood confidence intervals. Inferences can be based on individual response patterns or sets of response patterns such as sum scores. Functions are included for several common binary and polytomous item response models, but the package can also be used with user-specified models. This paper introduces some background and motivation for the package, and includes several detailed examples of its use.
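    To make the idea concrete outside R, here is a minimal sketch (in Python; this is not the ltbayes API) of simulation-based inference for a latent trait: a random-walk Metropolis sampler targeting the posterior of theta under a two-parameter logistic (2PL) model with a standard-normal prior. Item parameters and responses are hypothetical.

    ```python
    import math
    import random

    a = [1.2, 0.8, 1.5]   # discriminations (hypothetical)
    b = [-0.5, 0.3, 1.0]  # difficulties (hypothetical)
    y = [1, 1, 0]         # observed binary responses (hypothetical)

    def log_post(theta):
        lp = -0.5 * theta ** 2  # N(0, 1) prior, up to a constant
        for ai, bi, yi in zip(a, b, y):
            p = 1.0 / (1.0 + math.exp(-ai * (theta - bi)))
            lp += math.log(p if yi else 1.0 - p)
        return lp

    theta, draws = 0.0, []
    for _ in range(5000):
        prop = theta + random.gauss(0.0, 0.5)  # symmetric proposal
        if math.log(random.random() + 1e-300) < log_post(prop) - log_post(theta):
            theta = prop                        # accept the move
        draws.append(theta)

    burned = draws[1000:]  # discard burn-in
    print("posterior mean of theta ~", sum(burned) / len(burned))
    ```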

  6. Life sciences domain analysis model

    PubMed Central

    Freimuth, Robert R; Freund, Elaine T; Schick, Lisa; Sharma, Mukesh K; Stafford, Grace A; Suzek, Baris E; Hernandez, Joyce; Hipp, Jason; Kelley, Jenny M; Rokicki, Konrad; Pan, Sue; Buckler, Andrew; Stokes, Todd H; Fernandez, Anna; Fore, Ian; Buetow, Kenneth H

    2012-01-01

    Objective Meaningful exchange of information is a fundamental challenge in collaborative biomedical research. To help address this, the authors developed the Life Sciences Domain Analysis Model (LS DAM), an information model that provides a framework for communication among domain experts and technical teams developing information systems to support biomedical research. The LS DAM is harmonized with the Biomedical Research Integrated Domain Group (BRIDG) model of protocol-driven clinical research. Together, these models can facilitate data exchange for translational research. Materials and methods The content of the LS DAM was driven by analysis of life sciences and translational research scenarios and the concepts in the model are derived from existing information models, reference models and data exchange formats. The model is represented in the Unified Modeling Language and uses ISO 21090 data types. Results The LS DAM v2.2.1 is comprised of 130 classes and covers several core areas including Experiment, Molecular Biology, Molecular Databases and Specimen. Nearly half of these classes originate from the BRIDG model, emphasizing the semantic harmonization between these models. Validation of the LS DAM against independently derived information models, research scenarios and reference databases supports its general applicability to represent life sciences research. Discussion The LS DAM provides unambiguous definitions for concepts required to describe life sciences research. The processes established to achieve consensus among domain experts will be applied in future iterations and may be broadly applicable to other standardization efforts. Conclusions The LS DAM provides common semantics for life sciences research. Through harmonization with BRIDG, it promotes interoperability in translational science. PMID:22744959

  7. Platelet-derived growth factor receptors differentially inform intertumoral and intratumoral heterogeneity

    PubMed Central

    Kim, Youngmi; Kim, Eunhee; Wu, Qiulian; Guryanova, Olga; Hitomi, Masahiro; Lathia, Justin D.; Serwanski, David; Sloan, Andrew E.; Weil, Robert J.; Lee, Jeongwu; Nishiyama, Akiko; Bao, Shideng; Hjelmeland, Anita B.; Rich, Jeremy N.

    2012-01-01

    Growth factor-mediated proliferation and self-renewal maintain tissue-specific stem cells and are frequently dysregulated in cancers. Platelet-derived growth factor (PDGF) ligands and receptors (PDGFRs) are commonly overexpressed in gliomas and initiate tumors, as proven in genetically engineered models. While PDGFRα alterations inform intertumoral heterogeneity toward a proneural glioblastoma (GBM) subtype, we interrogated the role of PDGFRs in intratumoral GBM heterogeneity. We found that PDGFRα is expressed only in a subset of GBMs, while PDGFRβ is more commonly expressed in tumors but is preferentially expressed by self-renewing tumorigenic GBM stem cells (GSCs). Genetic or pharmacological targeting of PDGFRβ (but not PDGFRα) attenuated GSC self-renewal, survival, tumor growth, and invasion. PDGFRβ inhibition decreased activation of the cancer stem cell signaling node STAT3, while constitutively active STAT3 rescued the loss of GSC self-renewal caused by PDGFRβ targeting. In silico survival analysis demonstrated that PDGFRB informed poor prognosis, while PDGFRA was a positive prognostic factor. Our results may explain mixed clinical responses of anti-PDGFR-based approaches and suggest the need for integration of models of cancer as an organ system into development of cancer therapies. PMID:22661233

  8. Bayes factors based on robust TDT-type tests for family trio design.

    PubMed

    Yuan, Min; Pan, Xiaoqing; Yang, Yaning

    2015-06-01

    The adaptive transmission disequilibrium test (aTDT) and the MAX3 test are two robust, efficient association tests for case-parent family trio data. Both tests incorporate information from the common genetic models, including the recessive, additive and dominant models, and are efficient in power and robust to genetic model specification. The aTDT uses information on departure from Hardy-Weinberg disequilibrium to identify the potential genetic model underlying the data and then applies the corresponding TDT-type test, and the MAX3 test is defined as the maximum of the absolute values of three TDT-type tests under the three common genetic models. In this article, we propose three robust Bayes procedures, the aTDT-based Bayes factor, the MAX3-based Bayes factor and Bayes model averaging (BMA), for association analysis with the case-parent trio design. The asymptotic distributions of the aTDT under the null and alternative hypotheses are derived in order to calculate its Bayes factor. Extensive simulations show that the Bayes factors and the p-values of the corresponding tests are generally consistent, and that these Bayes factors are robust to genetic model specification, especially so when the priors on the genetic models are equal. When equal priors are used for the underlying genetic models, the Bayes factor method based on the aTDT is more powerful than those based on MAX3 and Bayes model averaging. When the prior places a small (large) probability on the true model, the Bayes factor based on the aTDT (BMA) is more powerful. Analysis of simulated rheumatoid arthritis (RA) data from GAW15 is presented to illustrate applications of the proposed methods.
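    For orientation, the building block of these procedures is the classical TDT-type statistic computed from transmission counts in trios; MAX3 then takes the maximum absolute value of three such statistics computed under recessive, additive, and dominant codings. A minimal sketch with hypothetical counts:

    ```python
    # b = risk-allele transmissions, c = non-transmissions from
    # heterozygous parents (hypothetical counts).
    b, c = 48, 30
    tdt_chi2 = (b - c) ** 2 / (b + c)  # ~ chi-square, 1 df, under H0
    print(f"TDT chi-square = {tdt_chi2:.2f}")
    ```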

  9. Rain/No-Rain Identification from Bispectral Satellite Information using Deep Neural Networks

    NASA Astrophysics Data System (ADS)

    Tao, Y.

    2016-12-01

    Satellite-based precipitation estimation products have the advantage of high resolution and global coverage. However, they still suffer from insufficient accuracy. To estimate precipitation accurately from satellite data, two aspects are most important: sufficient precipitation information in the satellite observations and proper methodologies to extract that information effectively. This study applies state-of-the-art machine learning methodologies to bispectral satellite information for rain/no-rain detection. Specifically, we use deep neural networks to extract features from the infrared and water vapor channels and connect them to precipitation identification. To evaluate the effectiveness of the methodology, we first apply it to infrared data only (Model DL-IR only), the most commonly used input for satellite-based precipitation estimation. We then incorporate water vapor data (Model DL-IR + WV) to further improve prediction performance. The radar Stage IV dataset is used as the ground measurement for parameter calibration. The operational product Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks Cloud Classification System (PERSIANN-CCS) is used as a reference to compare the performance of both models in winter and summer seasons. The experiments show significant improvement for both models in precipitation identification. The overall performance gains in the Critical Success Index (CSI) over the verification periods are 21.60% and 43.66% for Model DL-IR only and Model DL-IR + WV, respectively, compared to PERSIANN-CCS. Moreover, specific case studies show that the water vapor channel information and the deep neural networks effectively help recover a large number of missing precipitation pixels under warm clouds while reducing false alarms under cold clouds.
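    The evaluation metric quoted above, the Critical Success Index, is computed from a rain/no-rain contingency table; a minimal sketch with hypothetical counts:

    ```python
    hits, misses, false_alarms = 820, 240, 190  # hypothetical contingency counts
    csi = hits / (hits + misses + false_alarms)
    print(f"CSI = {csi:.3f}")
    ```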

  10. Architectural approaches for HL7-based health information systems implementation.

    PubMed

    López, D M; Blobel, B

    2010-01-01

    Information systems integration is hard, especially when semantic and business-process interoperability requirements need to be met. To succeed, a unified methodology addressing different aspects of systems architecture, such as the business, information, computational, engineering and technology viewpoints, has to be considered. The paper contributes an analysis and demonstration of how the HL7 standard set can support health information systems integration. Based on the Health Information Systems Development Framework (HIS-DF), common architectural models for HIS integration are analyzed. The framework is a standard-based, consistent, comprehensive, customizable, scalable methodology that supports the design of semantically interoperable health information systems and components. Three main architectural models for system integration are analyzed: the point-to-point interface, the message server and the mediator models. The point-to-point interface and message server models are completely supported by traditional HL7 version 2 and version 3 messaging. The HL7 v3 standard specification, combined with the service-oriented, model-driven approaches provided by HIS-DF, makes the mediator model possible. The different integration scenarios are illustrated by describing a proof-of-concept implementation of an integrated public health surveillance system based on Enterprise Java Beans technology. Selecting the appropriate integration architecture is a fundamental issue in any software development project. HIS-DF provides a unique methodological approach guiding the development of healthcare integration projects. The mediator model, offered by HIS-DF and supported in HL7 v3 artifacts, is the most promising one, promoting the development of open, reusable, flexible, semantically interoperable, platform-independent, service-oriented and standard-based health information systems.
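    A minimal sketch of the mediator pattern contrasted above (in Python, with a real HL7 v2 event code but an otherwise hypothetical interface): systems register interest in message types with a mediator that routes payloads, instead of each pair of systems maintaining a direct interface.

    ```python
    class Mediator:
        """Routes messages by type to registered subscriber callbacks."""
        def __init__(self):
            self.handlers = {}

        def register(self, msg_type, handler):
            self.handlers.setdefault(msg_type, []).append(handler)

        def dispatch(self, msg_type, payload):
            for handler in self.handlers.get(msg_type, []):
                handler(payload)

    mediator = Mediator()
    # ADT_A01 is the HL7 v2 "patient admit" event; the payload is hypothetical.
    mediator.register("ADT_A01", lambda p: print("surveillance received:", p))
    mediator.dispatch("ADT_A01", {"patient_id": "12345", "event": "admit"})
    ```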

  11. Hierarchical species distribution models

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2016-01-01

    Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.
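    The unifying distribution referred to above is the inhomogeneous Poisson point process; for an intensity function lambda(s) over study region S and observed locations s_1, ..., s_n, its log-likelihood is:

    ```latex
    \[
      \ell(\lambda) \;=\; \sum_{i=1}^{n} \log \lambda(s_i) \;-\; \int_{S} \lambda(s)\, ds
    \]
    ```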

  12. Characterizing Dark Energy Through Supernovae

    NASA Astrophysics Data System (ADS)

    Davis, Tamara M.; Parkinson, David

    Type Ia supernovae are a powerful cosmological probe that gave the first strong evidence that the expansion of the universe is accelerating. Here we provide an overview of how supernovae can go further to reveal information about what is causing the acceleration, be it dark energy or some modification to our laws of gravity. We first review the methods of statistical inference that are commonly used, making a point of separating parameter estimation from model selection. We then summarize the many different approaches used to explain or test the acceleration, including parametric models (like the standard model, ΛCDM), nonparametric models, dark fluid models such as quintessence, and extensions to standard gravity. Finally, we also show how supernova data can be used beyond the Hubble diagram, to give information on gravitational lensing and peculiar velocities that can be used to distinguish between models that predict the same expansion history.

  13. Aphasia and the Diagram Makers Revisited: an Update of Information Processing Models

    PubMed Central

    2006-01-01

    Aphasic syndromes from diseases such as stroke and degenerative disorders are still common and disabling neurobehavioral disorders. Diagnosis, management and treatment of these communication disorders are often dependent upon understanding the neuropsychological mechanisms that underlie these disorders. Since the work of Broca it has been recognized that the human brain is organized in a modular fashion. Wernicke realized that the types of signs and symptoms displayed by aphasic patients reflect the degradation or disconnection of the modules that comprise this speech-language network. Thus, he was the first to propose a diagrammatic or information processing model of this modular language-speech network. Since he first published this model many new aphasic syndromes have been discovered and this has led to modifications of this model. This paper reviews some of the early (nineteenth century) models and then attempts to develop a more up-to-date and complete model. PMID:20396501

  14. Mouse Tumor Biology (MTB): a database of mouse models for human cancer.

    PubMed

    Bult, Carol J; Krupke, Debra M; Begley, Dale A; Richardson, Joel E; Neuhauser, Steven B; Sundberg, John P; Eppig, Janan T

    2015-01-01

    The Mouse Tumor Biology (MTB; http://tumor.informatics.jax.org) database is a unique online compendium of mouse models for human cancer. MTB provides online access to expertly curated information on diverse mouse models for human cancer and interfaces for searching and visualizing data associated with these models. The information in MTB is designed to facilitate the selection of strains for cancer research and is a platform for mining data on tumor development and patterns of metastases. MTB curators acquire data through manual curation of peer-reviewed scientific literature and from direct submissions by researchers. Data in MTB are also obtained from other bioinformatics resources including PathBase, the Gene Expression Omnibus and ArrayExpress. Recent enhancements to MTB improve the association between mouse models and human genes commonly mutated in a variety of cancers as identified in large-scale cancer genomics studies, provide new interfaces for exploring regions of the mouse genome associated with cancer phenotypes and incorporate data and information related to Patient-Derived Xenograft models of human cancers. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Virtual terrain: a security-based representation of a computer network

    NASA Astrophysics Data System (ADS)

    Holsopple, Jared; Yang, Shanchieh; Argauer, Brian

    2008-03-01

    Much research has been devoted to the detection, correlation, and prediction of cyber attacks in recent years. As this research progresses, there is an increasing need for contextual information about a computer network to provide an accurate situational assessment. Typical approaches adopt contextual information as needed; yet such ad hoc effort may lead to unnecessary or even conflicting features. The concept of virtual terrain is, therefore, developed and investigated in this work. Virtual terrain is a common representation of crucial information about network vulnerabilities, accessibilities, and criticalities. A virtual terrain model encompasses operating systems, firewall rules, running services, missions, user accounts, and network connectivity. It is defined as connected graphs with arc attributes defining dynamic relationships among vertices modeling network entities, such as services, users, and machines. The virtual terrain representation is designed to allow feasible development and maintenance of the model, as well as efficient use of the model. This paper describes the considerations in developing the virtual terrain schema, exemplary virtual terrain models, and algorithms utilizing the virtual terrain model for situation and threat assessment.
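    A minimal sketch of the attributed-graph idea (illustrative names only, not the paper's schema): vertices model network entities and attributed arcs capture relationships such as "runs" or "reachable from".

    ```python
    # Each vertex carries attributes; arcs are (relationship, target) pairs.
    virtual_terrain = {
        "webserver": {"attrs": {"os": "linux", "criticality": "high"},
                      "arcs": [("runs", "httpd"), ("reachable_from", "internet")]},
        "httpd": {"attrs": {"type": "service", "port": 80},
                  "arcs": [("vulnerable_to", "CVE-XXXX-YYYY")]},  # placeholder ID
    }

    for vertex, data in virtual_terrain.items():
        print(vertex, "->", data["arcs"])
    ```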

  16. Problem-oriented patient record model as a conceptual foundation for a multi-professional electronic patient record.

    PubMed

    De Clercq, Etienne

    2008-09-01

    It is widely accepted that the development of electronic patient records, or even of a common electronic patient record, is one possible way to improve cooperation and data communication between nurses and physicians. Yet, little has been done so far to develop a common conceptual model for both medical and nursing patient records, which is a first challenge that should be met to set up a common electronic patient record. In this paper, we describe a problem-oriented conceptual model and we show how it may suit both nursing and medical perspectives in a hospital setting. We started from existing nursing theory and from an initial model previously set up for primary care. In a hospital pilot site, a multi-disciplinary team refined this model using one large and complex clinical case (retrospective study) and nine ongoing cases (prospective study). An internal validation was performed through hospital-wide multi-professional interviews and through discussions around a graphical user interface prototype. To assess the consistency of the model, a computer engineer specified it. Finally, a Belgian expert working group performed an external assessment of the model. As a basis for a common patient record we propose a simple problem-oriented conceptual model with two levels of meta-information. The model is mapped with current nursing theories and it includes the following concepts: "health care element", "health approach", "health agent", "contact", "subcontact" and "service". These concepts, their interrelationships and some practical rules for using the model are illustrated in this paper. Our results are compatible with ongoing standardization work at the Belgian and European levels. Our conceptual model is potentially a foundation for a multi-professional electronic patient record that is problem-oriented and therefore patient-centred.

  17. An Approach for harmonizing European Water Portals

    NASA Astrophysics Data System (ADS)

    Pesquer, Lluís; Stasch, Christoph; Masó, Joan; Jirka, Simon; Domingo, Xavier; Guitart, Francesc; Turner, Thomas; Hinderk Jürrens, Eike

    2017-04-01

    A number of European-funded research projects are developing novel solutions for water monitoring, modeling and management. To generate innovations in the water sector, third parties from industry and the public sector need to take up these solutions and bring them into the market. A variety of portals exists to support this move into the market. Examples on the European level are the EIP Water Online Marketplace(1), the WaterInnEU Marketplace(2), the WISE RTD Water knowledge portal(3), the WIDEST ICT for Water Observatory(4) and the SWITCH-ON Virtual Product Market and Virtual Water-Science Laboratory(5). Further innovation portals and initiatives exist on the national or regional level, for example, the Denmark knows water platform(6) or the Dutch water alliance(7). However, the different portals often cover the same projects, the same products and the same services. Since they are technically separate and have their own data models and databases, people need to duplicate information and maintain it at several endpoints. This requires additional effort and hinders interoperable exchange between these portals and the tools using the underlying data. In this work, we provide an overview of the existing portals and present an approach for harmonizing and integrating common information that is provided across different portals. The approach aims to integrate the common information in a shared database, utilizing existing vocabularies where possible. An Application Programming Interface allows the information to be accessed in a machine-readable way and used in other applications beyond description and discovery purposes. (1) http://www.eip-water.eu/my-market-place (2) https://marketplace.waterinneu.org (3) http://www.wise-rtd.info/ (4) http://iwo.widest.eu (5) http://www.switch-on-vwsl.eu/ (6) http://www.rethinkwater.dk/ (7) http://wateralliance.nl/

  18. State-Mandated (Mis)Information and Women's Endorsement of Common Abortion Myths.

    PubMed

    Berglas, Nancy F; Gould, Heather; Turok, David K; Sanders, Jessica N; Perrucci, Alissa C; Roberts, Sarah C M

    The extent to which state-mandated informed consent scripts affect women's knowledge about abortion is unknown. We examine women's endorsement of common abortion myths before and after receiving state-mandated information that included accurate and inaccurate statements about abortion. In Utah, women presenting for an abortion information visit completed baseline surveys (n = 494) and follow-up interviews 3 weeks later (n = 309). Women answered five items about abortion risks, indicating which of two statements was closer to the truth (as established by prior research) or responding "don't know." We developed a continuous myth endorsement scale (range, 0-1) and, using multivariable regression models, examined predictors of myth endorsement at baseline and of change in myth endorsement from baseline to follow-up. At baseline, many women reported not knowing about abortion risks (range, 36%-70% across myths). Women who were younger, non-White, and had previously given birth but not had a prior abortion reported higher myth endorsement at baseline. Overall, myth endorsement decreased after the information visit (0.37 to 0.31; p < .001). However, endorsement of the myth that was included in the state script (describing inaccurate risks of depression and anxiety) increased at follow-up (0.47 to 0.52; p < .05). Lack of knowledge about the effects of abortion is common. Knowledge of information that was accurately presented or not referenced in state-mandated scripts increased. In contrast, inaccurate information was associated with decreases in women's knowledge about abortion, violating accepted principles of informed consent. State policies that require or result in the provision of inaccurate information should be reconsidered. Copyright © 2016 Jacobs Institute of Women's Health. Published by Elsevier Inc. All rights reserved.

  19. Legislative coalitions with incomplete information

    PubMed Central

    Dragu, Tiberiu; Laver, Michael

    2017-01-01

    In most parliamentary democracies, proportional representation electoral rules mean that no single party controls a majority of seats in the legislature. This in turn means that the formation of majority legislative coalitions in such settings is of critical political importance. Conventional approaches to modeling the formation of such legislative coalitions typically make the “common knowledge” assumption that the preferences of all politicians are public information. In this paper, we develop a theoretical framework to investigate which legislative coalitions form when politicians’ policy preferences are private information, not known with certainty by the other politicians with whom they are negotiating over what policies to implement. The model we develop has distinctive implications. It suggests that legislative coalitions should typically be either of the center left or the center right. In other words our model, distinctively, predicts only center-left or center-right policy coalitions, not coalitions comprising the median party plus parties both to its left and to its right. PMID:28242675

  20. Legislative coalitions with incomplete information.

    PubMed

    Dragu, Tiberiu; Laver, Michael

    2017-03-14

    In most parliamentary democracies, proportional representation electoral rules mean that no single party controls a majority of seats in the legislature. This in turn means that the formation of majority legislative coalitions in such settings is of critical political importance. Conventional approaches to modeling the formation of such legislative coalitions typically make the "common knowledge" assumption that the preferences of all politicians are public information. In this paper, we develop a theoretical framework to investigate which legislative coalitions form when politicians' policy preferences are private information, not known with certainty by the other politicians with whom they are negotiating over what policies to implement. The model we develop has distinctive implications. It suggests that legislative coalitions should typically be either of the center left or the center right. In other words our model, distinctively, predicts only center-left or center-right policy coalitions, not coalitions comprising the median party plus parties both to its left and to its right.

  1. How model and input uncertainty impact maize yield simulations in West Africa

    NASA Astrophysics Data System (ADS)

    Waha, Katharina; Huth, Neil; Carberry, Peter; Wang, Enli

    2015-02-01

    Crop models are common tools for simulating crop yields and crop production in studies on food security and global change. Various uncertainties exist, however, not only in the model design and model parameters, but also, and perhaps more importantly, in the soil, climate and management input data. We analyze the performance of the point-scale crop model APSIM and the global-scale crop model LPJmL under different climate and soil conditions and different agricultural management in the low-input maize-growing areas of Burkina Faso, West Africa. We test the models' response to different levels of input information, from little to detailed information on soil, climate (1961-2000) and agricultural management, and compare the models' ability to represent the observed spatial (between locations) and temporal (between years) variability in crop yields. We found that the resolution of the soil, climate and management information influences the simulated crop yields in both models. However, differences between the two models are larger than differences between input datasets, and simulated yields are more sensitive to climate and management information than to soil information. The observed spatial variability can be represented well by both models even with little information on soils and management, but APSIM simulates higher variation between single locations than LPJmL. The agreement between simulated and observed temporal variability is lower, owing to non-climatic factors, e.g. investment in agricultural research and development in Burkina Faso between 1987 and 1991, which resulted in a doubling of maize yields. The findings of our study highlight the importance of scale and model choice and show that the most detailed input data do not necessarily improve model performance.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthias C. M. Troffaes; Gero Walter; Dana Kelly

    In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors but was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting the epistemic uncertainty of the analyst at all levels of the common-cause failure model.
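    A minimal numerical sketch of the imprecise-Dirichlet update described above: with n_i observed failure events involving i components, N total events, and learning parameter s, the posterior expectation of alpha_i ranges over (n_i + s*t)/(N + s) for t in (0, 1). The counts below are hypothetical.

    ```python
    counts = [20, 4, 1]  # hypothetical n_1, n_2, n_3 (events involving 1, 2, 3 components)
    N = sum(counts)
    s = 2.0              # learning parameter; the paper finds values of 1 to 10 reasonable

    for i, n_i in enumerate(counts, start=1):
        lower = n_i / (N + s)        # t -> 0
        upper = (n_i + s) / (N + s)  # t -> 1
        print(f"alpha_{i}: posterior expectation in [{lower:.3f}, {upper:.3f}]")
    ```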

  3. Impact of user influence on information multi-step communication in a micro-blog

    NASA Astrophysics Data System (ADS)

    Wu, Yue; Hu, Yong; He, Xiao-Hai; Deng, Ken

    2014-06-01

    User influence is generally considered as one of the most critical factors that affect information cascading spreading. Based on this common assumption, this paper proposes a theoretical model to examine user influence on the information multi-step communication in a micro-blog. The multi-steps of information communication are divided into first-step and non-first-step, and user influence is classified into five dimensions. Actual data from the Sina micro-blog is collected to construct the model by means of an approach based on structural equations that uses the Partial Least Squares (PLS) technique. Our experimental results indicate that the dimensions of the number of fans and their authority significantly impact the information of first-step communication. Leader rank has a positive impact on both first-step and non-first-step communication. Moreover, global centrality and weight of friends are positively related to the information non-first-step communication, but authority is found to have much less relation to it.

  4. Modelling financial markets with agents competing on different time scales and with different amount of information

    NASA Astrophysics Data System (ADS)

    Wohlmuth, Johannes; Andersen, Jørgen Vitting

    2006-05-01

    We use agent-based models to study the competition among investors who use trading strategies with different amount of information and with different time scales. We find that mixing agents that trade on the same time scale but with different amount of information has a stabilizing impact on the large and extreme fluctuations of the market. Traders with the most information are found to be more likely to arbitrage traders who use less information in the decision making. On the other hand, introducing investors who act on two different time scales has a destabilizing effect on the large and extreme price movements, increasing the volatility of the market. Closeness in time scale used in the decision making is found to facilitate the creation of local trends. The larger the overlap in commonly shared information the more the traders in a mixed system with different time scales are found to profit from the presence of traders acting at another time scale than themselves.

  5. Convoys of care: Theorizing intersections of formal and informal care

    PubMed Central

    Kemp, Candace L.; Ball, Mary M.; Perkins, Molly M.

    2013-01-01

    Although most care to frail elders is provided informally, much of this care is paired with formal care services. Yet, common approaches to conceptualizing the formal–informal intersection often are static, do not consider self-care, and typically do not account for multi-level influences. In response, we introduce the “convoy of care” model as an alternative way to conceptualize the intersection and to theorize connections between care convoy properties and caregiver and recipient outcomes. The model draws on Kahn and Antonucci's (1980) convoy model of social relations, expanding it to include both formal and informal care providers and also incorporates theoretical and conceptual threads from life course, feminist gerontology, social ecology, and symbolic interactionist perspectives. This article synthesizes theoretical and empirical knowledge and demonstrates the convoy of care model in an increasingly popular long-term care setting, assisted living. We conceptualize care convoys as dynamic, evolving, person- and family-specific, and influenced by a host of multi-level factors. Care convoys have implications for older adults’ quality of care and ability to age in place, for job satisfaction and retention among formal caregivers, and for informal caregiver burden. The model moves beyond existing conceptual work to provide a comprehensive, multi-level, multi-factor framework that can be used to inform future research, including research in other care settings, and to spark further theoretical development. PMID:23273553

  6. Peer-to-peer communication, cancer prevention, and the internet

    PubMed Central

    Ancker, Jessica S.; Carpenter, Kristen M.; Greene, Paul; Hoffmann, Randi; Kukafka, Rita; Marlow, Laura A.V.; Prigerson, Holly G.; Quillin, John M.

    2013-01-01

    Online communication among patients and consumers through support groups, discussion boards, and knowledge resources is becoming more common. In this paper, we discuss key methods through which such web-based peer-to-peer communication may affect health promotion and disease prevention behavior (exchanges of information, emotional and instrumental support, and establishment of group norms and models). We also discuss several theoretical models for studying online peer communication, including social theory, health communication models, and health behavior models. Although online peer communication about health and disease is very common, research evaluating effects on health behaviors, mediators, and outcomes is still relatively sparse. We suggest that future research in this field should include formative evaluation and studies of effects on mediators of behavior change, behaviors, and outcomes. It will also be important to examine spontaneously emerging peer communication efforts to see how they can be integrated with theory-based efforts initiated by researchers. PMID:19449267

  7. Timetabling: A Shared Services Model

    ERIC Educational Resources Information Center

    O'Regan, Carmel

    2012-01-01

    This paper identifies common timetabling issues and options as experienced in Australian universities, and develops a rationale to inform management decisions on a suitable system and the associated policies, procedures, management structure and resources at the University of Newcastle, to enable more effective timetabling in line with the needs…

  8. Microbial Source Module (MSM): Documenting the Science and Software for Discovery, Evaluation, and Integration

    EPA Science Inventory

    The Microbial Source Module (MSM) estimates microbial loading rates to land surfaces from non-point sources, and to streams from point sources for each subwatershed within a watershed. A subwatershed, the smallest modeling unit, represents the common basis for information consume...

  9. Transaction-neutral implanted data collection interface as EMR driver: a model for emerging distributed medical technologies.

    PubMed

    Lorence, Daniel; Sivaramakrishnan, Anusha; Richards, Michael

    2010-08-01

    Electronic Medical Record (EMR) and Electronic Health Record (EHR) adoption continues to lag across the US. Cost, inconsistent formats, and concerns about control of patient information are among the most common reasons for non-adoption in physician practice settings. The emergence of wearable and implanted mobile technologies, employed in distributed environments, promises a fundamentally different information infrastructure, which could serve to minimize existing adoption resistance. Proposed here is one technology model for overcoming adoption inconsistency and high organization-specific implementation costs, using seamless, patient-controlled data collection. While the conceptual applications employed in this technology set are provided by way of illustration, they may also serve as a transformative model for emerging EMR/EHR requirements.

  10. Common Data Models and Efficient Reproducible Workflows for Distributed Ocean Model Skill Assessment

    NASA Astrophysics Data System (ADS)

    Signell, R. P.; Snowden, D. P.; Howlett, E.; Fernandes, F. A.

    2014-12-01

    Model skill assessment requires discovery, access, analysis, and visualization of information from both sensors and models, and traditionally has been possible only for a few experts. The US Integrated Ocean Observing System (US-IOOS) consists of 17 Federal Agencies and 11 Regional Associations that produce data from various sensors and numerical models, exactly the information required for model skill assessment. US-IOOS is seeking to develop documented skill assessment workflows that are standardized, efficient, and reproducible so that a much wider community can participate in the use and assessment of model results. Standardization requires common data models for observational and model data. US-IOOS relies on the CF Conventions for observations and structured grid data, and on the UGRID Conventions for unstructured (e.g. triangular) grid data. This allows applications to obtain only the data they require in a uniform and parsimonious way using web services: OPeNDAP for model output and OGC Sensor Observation Service (SOS) for observed data. Reproducibility is enabled with IPython Notebooks shared on GitHub (http://github.com/ioos). These capture the entire skill assessment workflow, including user input, search, access, analysis, and visualization, ensuring that workflows are self-documenting and reproducible by anyone, using free software. Python packages for the common data models (pyugrid and the British Met Office Iris package) and the other packages required to run the workflows (e.g., pyoos) are available on GitHub and on Binstar.org, so that users can run scenarios using the free Anaconda Python distribution. Hosted services such as Wakari enable anyone to reproduce these workflows for free, without installing any software locally, using just a web browser. We are also experimenting with Wakari Enterprise, which allows multi-user access from a web browser to an IPython server running where large quantities of model output reside, increasing efficiency. The open development and distribution of these workflows, and of the software on which they depend, is an educational resource for those new to the field and a center of focus where practitioners can contribute new software and ideas.
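    The access pattern these workflows rely on can be shown in a few lines; this is a hedged sketch with a hypothetical endpoint and variable names (real servers and names vary), using the netCDF4 package:

    ```python
    from netCDF4 import Dataset

    # Hypothetical OPeNDAP endpoint; only the requested slices are transferred.
    url = "http://example.org/thredds/dodsC/model_output.nc"
    ds = Dataset(url)
    temp_surface = ds.variables["temp"][0, 0, :, :]  # hypothetical variable/dims
    print(temp_surface.shape)
    ds.close()
    ```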

  11. Two Analyte Calibration From The Transient Response Of Potentiometric Sensors Employed With The SIA Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cartas, Raul; Mimendia, Aitor; Valle, Manel del

    2009-05-23

    Calibration models for multi-analyte electronic tongues have commonly been built using a set of sensors, at least one per analyte under study. The complex signals recorded with these systems are formed by the sensors' responses to the analytes of interest plus interferents, from which a multivariate response model is then developed. This work describes a data treatment method for the simultaneous quantification of two species in solution employing the signal from a single sensor. The approach takes advantage of the complex information recorded in one electrode's transient after insertion of the sample to build calibration models for both analytes. The raw signal from the electrode was first processed with a discrete wavelet transform to extract useful information and reduce its length, and then with artificial neural networks to fit a model. Two different potentiometric sensors were used as case studies to corroborate the effectiveness of the approach.
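    A minimal sketch of the two-stage treatment described above, under stated assumptions (synthetic signals, with PyWavelets and scikit-learn standing in for the authors' tooling): compress each transient with a discrete wavelet transform, then fit a small neural network mapping the compressed signal to two analyte concentrations.

    ```python
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    signals = rng.normal(size=(40, 256))  # synthetic stand-ins for transients
    targets = rng.uniform(size=(40, 2))   # synthetic two-analyte concentrations

    def compress(signal, level=4):
        # Keep only the level-4 approximation coefficients (much shorter signal).
        return pywt.wavedec(signal, "db4", level=level)[0]

    X = np.array([compress(s) for s in signals])
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X, targets)
    print(net.predict(X[:1]))  # predicted concentrations for the first transient
    ```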

  12. Comparing the use of an online expert health network against common information sources to answer health questions.

    PubMed

    Rhebergen, Martijn D F; Lenderink, Annet F; van Dijk, Frank J H; Hulshof, Carel T J

    2012-02-02

    Many workers have questions about occupational safety and health (OSH). It is unknown whether workers are able to find correct, evidence-based answers to OSH questions when they use common information sources, such as websites, or whether they would benefit from using an easily accessible, free-of-charge online network of OSH experts providing advice. To assess the rate of correct, evidence-based answers to OSH questions in a group of workers who used an online network of OSH experts (intervention group) compared with a group of workers who used common information sources (control group). In a quasi-experimental study, workers in the intervention and control groups were randomly offered 2 questions from a pool of 16 standardized OSH questions. Both questions were sent by mail to all participants, who had 3 weeks to answer them. The intervention group was instructed to use only the online network ArboAntwoord, a network of about 80 OSH experts, to solve the questions. The control group was instructed that they could use all information sources available to them. To assess answer correctness as the main study outcome, 16 standardized correct model answers were constructed with the help of reviewers who performed literature searches. Subsequently, the answers provided by all participants in the intervention (n = 94 answers) and control groups (n = 124 answers) were blinded and compared with the correct model answers on the degree of correctness. Of the 94 answers given by participants in the intervention group, 58 were correct (62%), compared with 24 of the 124 answers (19%) in the control group, who mainly used informational websites found via Google. The difference between the 2 groups was significant (rate difference = 43%, 95% confidence interval [CI] 30%-54%). Additional analysis showed that the rate of correct main conclusions of the answers was 85 of 94 answers (90%) in the intervention group and 75 of 124 answers (61%) in the control group (rate difference = 29%, 95% CI 19%-40%). Remarkably, we could not identify differences between workers who provided correct answers and workers who did not on how they experienced the credibility, completeness, and applicability of the information found (P > .05). Workers are often unable to find correct answers to OSH questions when using common information sources, generally informational websites. Because workers frequently misjudge the quality of the information they find, other strategies are required to assist workers in finding correct answers. Expert advice provided through an online expert network can be effective for this purpose. As many people experience difficulties in finding correct answers to their health questions, expert networks may be an attractive new source of information for health fields in general.
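    The headline result can be checked directly from the reported counts; a Wald interval for the difference of two proportions (one common choice; the paper does not state its method) reproduces the reported 43% (95% CI 30%-54%):

    ```python
    import math

    p1, n1 = 58 / 94, 94    # intervention group: correct answers / total
    p2, n2 = 24 / 124, 124  # control group

    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    print(f"rate difference = {rd:.2f}, "
          f"95% CI ({rd - 1.96 * se:.2f}, {rd + 1.96 * se:.2f})")
    ```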

  13. Comparing the Use of an Online Expert Health Network against Common Information Sources to Answer Health Questions

    PubMed Central

    Lenderink, Annet F; van Dijk, Frank JH; Hulshof, Carel TJ

    2012-01-01

    Background Many workers have questions about occupational safety and health (OSH). It is unknown whether workers are able to find correct, evidence-based answers to OSH questions when they use common information sources, such as websites, or whether they would benefit from using an easily accessible, free-of-charge online network of OSH experts providing advice. Objective To assess the rate of correct, evidence-based answers to OSH questions in a group of workers who used an online network of OSH experts (intervention group) compared with a group of workers who used common information sources (control group). Methods In a quasi-experimental study, workers in the intervention and control groups were randomly offered 2 questions from a pool of 16 standardized OSH questions. Both questions were sent by mail to all participants, who had 3 weeks to answer them. The intervention group was instructed to use only the online network ArboAntwoord, a network of about 80 OSH experts, to solve the questions. The control group was instructed that they could use all information sources available to them. To assess answer correctness as the main study outcome, 16 standardized correct model answers were constructed with the help of reviewers who performed literature searches. Subsequently, the answers provided by all participants in the intervention (n = 94 answers) and control groups (n = 124 answers) were blinded and compared with the correct model answers on the degree of correctness. Results Of the 94 answers given by participants in the intervention group, 58 were correct (62%), compared with 24 of the 124 answers (19%) in the control group, who mainly used informational websites found via Google. The difference between the 2 groups was significant (rate difference = 43%, 95% confidence interval [CI] 30%–54%). Additional analysis showed that the rate of correct main conclusions of the answers was 85 of 94 answers (90%) in the intervention group and 75 of 124 answers (61%) in the control group (rate difference = 29%, 95% CI 19%–40%). Remarkably, we could not identify differences between workers who provided correct answers and workers who did not on how they experienced the credibility, completeness, and applicability of the information found (P > .05). Conclusions Workers are often unable to find correct answers to OSH questions when using common information sources, generally informational websites. Because workers frequently misjudge the quality of the information they find, other strategies are required to assist workers in finding correct answers. Expert advice provided through an online expert network can be effective for this purpose. As many people experience difficulties in finding correct answers to their health questions, expert networks may be an attractive new source of information for health fields in general. PMID:22356848
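
    As a worked check of the headline numbers, the snippet below re-derives the rate difference from the reported counts using a plain Wald interval; the paper's 30%-54% interval may come from a different interval method, so the bounds need not match exactly.

```python
from math import sqrt

p1, n1 = 58 / 94, 94     # intervention group: correct answers
p2, n2 = 24 / 124, 124   # control group: correct answers

diff = p1 - p2           # ~0.42, i.e. the reported 62% - 19%
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
print(f"difference = {diff:.1%}, 95% CI ({diff - 1.96*se:.1%}, {diff + 1.96*se:.1%})")
```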

  14. Research on Crowdsourcing Emergency Information Extraction Based on Events' Frame

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Wang, Jizhou; Ma, Weijun; Mao, Xi

    2018-01-01

    At present, common information extraction methods cannot accurately extract structured emergency event information, and general information retrieval tools cannot completely identify emergency geographic information; neither approach offers an accurate assessment of the extraction results. This paper therefore proposes an emergency information extraction technique based on an event frame, designed to solve the problem of emergency information collection. It mainly comprises an emergency information extraction model (EIEM), a complete address recognition method (CARM) and an accuracy evaluation model of emergency information (AEMEI). EIEM extracts emergency information in a structured form and compensates for the lack of network data acquisition in emergency mapping. CARM uses a hierarchical model and a shortest-path algorithm to join toponym pieces into a full address. AEMEI analyzes the extraction results for an emergency event and summarizes the advantages and disadvantages of the event frame. Experiments show that the event frame technique can solve the problem of emergency information extraction and provides reference cases for other applications. When an emergency disaster is about to occur, the relevant departments can query data on past emergencies and make arrangements in advance for disaster defense and mitigation. The technique can reduce casualties and property damage, which is of great significance to the state and society.

  15. Multiple neural states of representation in short-term memory? It's a matter of attention.

    PubMed

    Larocque, Joshua J; Lewis-Peacock, Jarrod A; Postle, Bradley R

    2014-01-01

    Short-term memory (STM) refers to the capacity-limited retention of information over a brief period of time, and working memory (WM) refers to the manipulation and use of that information to guide behavior. In recent years it has become apparent that STM and WM interact and overlap with other cognitive processes, including attention (the selection of a subset of information for further processing) and long-term memory (LTM-the encoding and retention of an effectively unlimited amount of information for a much longer period of time). Broadly speaking, there have been two classes of memory models: systems models, which posit distinct stores for STM and LTM (Atkinson and Shiffrin, 1968; Baddeley and Hitch, 1974); and state-based models, which posit a common store with different activation states corresponding to STM and LTM (Cowan, 1995; McElree, 1996; Oberauer, 2002). In this paper, we will focus on state-based accounts of STM. First, we will consider several theoretical models that postulate, based on considerable behavioral evidence, that information in STM can exist in multiple representational states. We will then consider how neural data from recent studies of STM can inform and constrain these theoretical models. In the process we will highlight the inferential advantage of multivariate, information-based analyses of neuroimaging data (fMRI and electroencephalography (EEG)) over conventional activation-based analysis approaches (Postle, in press). We will conclude by addressing lingering questions regarding the fractionation of STM, highlighting differences between the attention to information vs. the retention of information during brief memory delays.

  16. Mixture of autoregressive modeling orders and its implication on single trial EEG classification

    PubMed Central

    Atyabi, Adham; Shic, Frederick; Naples, Adam

    2016-01-01

    Autoregressive (AR) models are among the most commonly utilized feature types in electroencephalogram (EEG) studies, as they offer better resolution, smoother spectra and applicability to short segments of data. Identifying the correct AR modeling order is an open challenge: lower model orders poorly represent the signal, while higher orders increase noise. Conventional methods for estimating modeling order include the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by making such systems respond to the operator's thoughts more quickly and correctly. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resultant AR mixtures is assessed against several conventional baselines: (1) a well-known set of commonly used orders suggested by the literature, (2) conventional order estimation approaches (e.g., AIC, BIC and FPE), and (3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3 and 4 motor imagery tasks are considered for the assessment. The results indicate superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods within all datasets. PMID:28740331
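
    The sketch below, assuming statsmodels, contrasts conventional single-order selection with the simplest possible "mixture of orders": an equal-weight average of the AR power spectra obtained at several candidate orders. The paper's evolutionary and ensemble mixing schemes are more elaborate than this, and the signal here is a stand-in.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg, ar_select_order

rng = np.random.default_rng(0)
eeg = rng.normal(size=1000) + 0.5 * np.sin(np.arange(1000) * 0.3)  # stand-in signal

# Conventional approach: pick one order by information criterion
print("AIC-selected lags:", ar_select_order(eeg, maxlag=20, ic="aic").ar_lags)

# Naive mixture: equal-weight average of the spectra implied by several orders
freqs = np.linspace(0, np.pi, 256)
spectra = []
for p in (4, 8, 12, 16):
    res = AutoReg(eeg, lags=p).fit()
    a = res.params[1:]                      # AR coefficients (intercept excluded)
    k = np.arange(1, p + 1)
    denom = np.abs(1 - (a * np.exp(-1j * freqs[:, None] * k)).sum(axis=1)) ** 2
    spectra.append(res.sigma2 / denom)      # AR(p) power spectrum
mixture_spectrum = np.mean(spectra, axis=0)
```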

  17. Event models and the fan effect.

    PubMed

    Radvansky, G A; O'Rear, Andrea E; Fisher, Jerry S

    2017-08-01

    The current study explored the persistence of event model organizations and how this influences the experience of interference during retrieval. People in this study memorized lists of sentences about objects in locations, such as "The potted palm is in the hotel." Previous work has shown that such information can either be stored in separate event models, thereby producing retrieval interference, or integrated into common event models, thereby eliminating retrieval interference. Unlike prior studies, the current work explored the impact of forgetting up to 2 weeks later on this pattern of performance. We explored three possible outcomes across the various retention intervals. First, consistent with research showing that longer delays reduce proactive and retroactive interference, any retrieval interference effects of competing event models could be reduced over time. Second, the binding of information into events models may weaken over time, causing interference effects to emerge when they had previously been absent. Third, and finally, the organization of information into event models could remain stable over long periods of time. The results reported here are most consistent with the last outcome. While there were some minor variations across the various retention intervals, the basic pattern of event model organization remained preserved over the two-week retention period.

  18. A Common Open Space or a Digital Divide? A Social Model Perspective on the Online Disability Community in China

    ERIC Educational Resources Information Center

    Guo, Baorong; Bricout, John C.; Huang, Jin

    2005-01-01

    This paper explores the use and impact of the Internet by disabled people in China, informed by the social model of disability. Based on survey data from 122 disabled individuals across 25 provinces in China, study findings suggest that there is an emerging digital divide in the use of Internet amongst the disability community in China. Internet…

  19. Acoustic surface perception from naturally occurring step sounds of a dexterous hexapod robot

    NASA Astrophysics Data System (ADS)

    Cuneyitoglu Ozkul, Mine; Saranli, Afsar; Yazicioglu, Yigit

    2013-10-01

    Legged robots that exhibit dynamic dexterity naturally interact with the surface to generate complex acoustic signals carrying rich information on the surface as well as on the robot platform itself. However, the nature of a legged robot, a complex hybrid dynamic system, renders the more common approach of model-based system identification impractical. The present paper focuses on acoustic surface identification and proposes a non-model-based analysis and classification approach adopted from the speech processing literature. A novel feature set is proposed, composed of spectral band energies augmented by their vector time derivatives and the time-domain averaged zero-crossing rate. Using a multi-dimensional vector classifier, these features carry enough information to accurately classify a range of commonly occurring indoor and outdoor surfaces without using any mechanical system model. A comparative experimental study is carried out, and classification performance and computational complexity are characterized. Different feature combinations, classifiers and changes in critical design parameters are investigated. A realistic and representative acoustic data set is collected with the robot moving at different speeds on a number of surfaces. The study demonstrates promising performance of this non-model-based approach, even in an acoustically uncontrolled environment. The approach also has a good chance of performing in real time.
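
    A compact sketch of the proposed feature set, assuming nothing beyond NumPy: per-frame spectral band energies, their frame-to-frame derivatives, and an averaged zero-crossing rate. Frame length, hop size and band count are illustrative, not the paper's parameterization.

```python
import numpy as np

def step_sound_features(x, frame=512, hop=256, n_bands=8):
    """Summarize one step sound as band energies + deltas + mean ZCR."""
    frames = np.lib.stride_tricks.sliding_window_view(x, frame)[::hop]
    # Time-domain averaged zero-crossing rate
    signs = np.signbit(frames).astype(np.int8)
    zcr = np.abs(np.diff(signs, axis=1)).mean()
    # Spectral band energies from windowed magnitude spectra
    spec = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))
    bands = np.array_split(spec, n_bands, axis=1)
    energies = np.stack([b.sum(axis=1) for b in bands], axis=1)  # (frames, bands)
    deltas = np.diff(energies, axis=0)                           # vector time derivatives
    return np.concatenate([energies.mean(axis=0), deltas.mean(axis=0), [zcr]])

features = step_sound_features(np.random.default_rng(0).normal(size=8000))
print(features.shape)  # 8 band energies + 8 deltas + 1 ZCR = 17 features
```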

  20. Integrative Etiopathogenetic Models of Psychotic Disorders: Methods, Evidence and Concepts

    PubMed Central

    Gaebel, Wolfgang; Zielasek, Jürgen

    2011-01-01

    Integrative models of the etiopathogenesis of psychotic disorders are needed, since a wealth of information from such diverse fields as neurobiology, psychology, and the social sciences is currently changing the concepts of mental disorders. Several approaches to integrating these streams of information into coherent concepts of psychosis are feasible and will need to be assessed in future experimental studies. Common to these concepts are the notion of psychotic disorders as brain disorders and a polythetic approach, in that it is increasingly realized that a multitude of interindividually partially different pathogenetic factors interact in individual persons in a complex fashion, resulting in the clinical symptoms of psychosis. PMID:21860047

  1. Agent-Based Modeling of Cancer Stem Cell Driven Solid Tumor Growth.

    PubMed

    Poleszczuk, Jan; Macklin, Paul; Enderling, Heiko

    2016-01-01

    Computational modeling of tumor growth has become an invaluable tool to simulate complex cell-cell interactions and emerging population-level dynamics. Agent-based models are commonly used to describe the behavior and interaction of individual cells in different environments. Behavioral rules can be informed and calibrated by in vitro assays, and emerging population-level dynamics may be validated with both in vitro and in vivo experiments. Here, we describe the design and implementation of a lattice-based agent-based model of cancer stem cell driven tumor growth.
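
    A toy lattice agent-based sketch of the model class described here: stem cells divide indefinitely and occasionally self-renew symmetrically, while progenitor cells carry a limited division potential and die when it is exhausted. All rates and the grid size are illustrative placeholders, not the chapter's calibrated values.

```python
import random

N, STEPS = 101, 200
P_SYMMETRIC = 0.1   # chance a stem division yields a second stem cell
MAX_DIV = 10        # progenitor division potential
grid = {(N // 2, N // 2): "S"}   # (x, y) -> "S" or remaining divisions

def free_neighbor(x, y):
    nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    open_sites = [(i, j) for i, j in nbrs
                  if (i, j) not in grid and 0 <= i < N and 0 <= j < N]
    return random.choice(open_sites) if open_sites else None

random.seed(1)
for _ in range(STEPS):
    for (x, y), state in list(grid.items()):
        if state == 0:                 # exhausted progenitor dies
            del grid[(x, y)]
            continue
        site = free_neighbor(x, y)
        if site is None:
            continue                   # contact inhibition: no room to divide
        if state == "S":
            grid[site] = "S" if random.random() < P_SYMMETRIC else MAX_DIV
        else:
            grid[(x, y)] = state - 1   # both daughters lose one division
            grid[site] = state - 1
print(len(grid), "cells after", STEPS, "steps")
```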

  2. Reach of mass media among tobacco users in India: a preliminary report.

    PubMed

    Rooban, T; Madan Kumar, P D; Ranganathan, K

    2010-07-01

    Tobacco use is a health hazard, and its use is attributed to a lack of knowledge regarding the ill effects of tobacco. To identify the exposure to different mass media among a representative cohort population in the Indian subcontinent, and to compare the reach of the different mass media among tobacco users and nonusers, using the "reach of HIV information" as a model. Secondary data analysis of the Indian National Family Health Survey-3. Any tobacco use, gender, source of HIV information. Use of mass media. Of the study group, 27% of males and 54.4% of females never read a newspaper or magazine; 29.3% of males and 52.6% of females never listened to the radio; 12.4% of males and 25% of females never watched television; and 79.3% of males and 93.46% of females did not see a movie at least once a month. The most common source of HIV information was television among both males (71.8%) and females (81%), whereas the least common source was leaders, among both males (0.8%) and females (0.2%). Television is the single largest medium used by both genders and was a major source of HIV information dissemination. A well-designed tobacco control program similar to the HIV awareness program would help to curb tobacco use. The reach of different media among Indian tobacco users is presented, and the HIV model of information dissemination may prove effective in tobacco control.

  3. Boosting Probabilistic Graphical Model Inference by Incorporating Prior Knowledge from Multiple Sources

    PubMed Central

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available. PMID:23826291
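
    The Noisy-OR combination described here reduces to one line: an interaction receives high prior support if at least one source independently supports it. The per-source support and reliability values below are made-up inputs for illustration.

```python
import numpy as np

support = np.array([0.6, 0.1, 0.8])      # evidence for one edge from three sources
reliability = np.array([0.9, 0.7, 0.5])  # trust placed in each source

p_edge = 1.0 - np.prod(1.0 - reliability * support)  # Noisy-OR: any source suffices
print(f"consensus structure prior for the edge: {p_edge:.3f}")
```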

  4. Utility-preserving anonymization for health data publishing.

    PubMed

    Lee, Hyukki; Kim, Soohyung; Kim, Jong Wook; Chung, Yon Dohn

    2017-07-11

    Publishing raw electronic health records (EHRs) may be considered a breach of individual privacy because they usually contain sensitive information. A common practice for privacy-preserving data publishing is to anonymize the data before publishing so as to satisfy privacy models such as k-anonymity. Among the various anonymization techniques, generalization is the most commonly used in medical/health data processing. Generalization inevitably causes information loss, and various methods have therefore been proposed to reduce it. However, existing generalization-based data anonymization methods cannot avoid excessive information loss and thus fail to preserve data utility. We propose a utility-preserving anonymization for privacy-preserving data publishing (PPDP). To preserve data utility, the proposed method comprises three parts: (1) a utility-preserving model, (2) counterfeit record insertion, and (3) a catalog of the counterfeit records. We also propose an anonymization algorithm using the proposed method, which applies a full-domain generalization algorithm. We evaluate our method against an existing method on two aspects: information loss, measured through various quality metrics, and the error rate of analysis results. Across all quality metrics, our proposed method shows lower information loss than the existing method. In a real-world EHR analysis, the results show only a small error between data anonymized by the proposed method and the original data. In summary, we propose a new utility-preserving anonymization method and an anonymization algorithm using it. Through experiments on various datasets, we show that the utility of EHRs anonymized by the proposed method is significantly better than that of EHRs anonymized by previous approaches.
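
    For readers unfamiliar with generalization, the toy sketch below (pandas assumed) coarsens two quasi-identifiers by full-domain generalization and checks which records reach k-anonymity; the proposed counterfeit-record insertion and catalog are not reproduced here.

```python
import pandas as pd

k = 2
df = pd.DataFrame({
    "age":     [23, 27, 34, 36, 36, 41],
    "zipcode": ["13053", "13068", "13053", "13053", "13222", "13228"],
    "disease": ["flu", "flu", "cancer", "cancer", "flu", "cancer"],
})

# Full-domain generalization: ages to decades, ZIP codes to 3-digit prefixes
df["age"] = (df["age"] // 10 * 10).astype(str) + "s"
df["zipcode"] = df["zipcode"].str[:3] + "**"

sizes = df.groupby(["age", "zipcode"])["disease"].transform("size")
print(df.assign(k_anonymous=sizes >= k))  # rows in groups smaller than k still leak
```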

  5. Hook, Line and Infection: A Guide to Culturing Parasites, Establishing Infections and Assessing Immune Responses in the Three-Spined Stickleback.

    PubMed

    Stewart, Alexander; Jackson, Joseph; Barber, Iain; Eizaguirre, Christophe; Paterson, Rachel; van West, Pieter; Williams, Chris; Cable, Joanne

    2017-01-01

    The three-spined stickleback (Gasterosteus aculeatus) is a model organism with an extremely well-characterized ecology, evolutionary history, behavioural repertoire and parasitology that is coupled with published genomic data. These small temperate zone fish therefore provide an ideal experimental system to study common diseases of coldwater fish, including those of aquacultural importance. However, detailed information on the culture of stickleback parasites, the establishment and maintenance of infections and the quantification of host responses is scattered between primary and grey literature resources, some of which is not readily accessible. Our aim is to lay out a framework of techniques based on our experience to inform new and established laboratories about culture techniques and recent advances in the field. Here, essential knowledge on the biology, capture and laboratory maintenance of sticklebacks, and their commonly studied parasites is drawn together, highlighting recent advances in our understanding of the associated immune responses. In compiling this guide on the maintenance of sticklebacks and a range of common, taxonomically diverse parasites in the laboratory, we aim to engage a broader interdisciplinary community to consider this highly tractable model when addressing pressing questions in evolution, infection and aquaculture. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Archetype-based semantic integration and standardization of clinical data.

    PubMed

    Moner, David; Maldonado, Jose A; Bosca, Diego; Fernandez, Jesualdo T; Angulo, Carlos; Crespo, Pere; Vivancos, Pedro J; Robles, Montserrat

    2006-01-01

    One of the basic needs of any healthcare professional is to be able to access patients' clinical information in an understandable and normalized way. The lifelong clinical information of any person supported by electronic means configures his/her Electronic Health Record (EHR). This information is usually distributed among several independent and heterogeneous systems that may be syntactically or semantically incompatible. The Dual Model architecture has appeared as a new proposal for maintaining a homogeneous representation of the EHR with a clear separation between information and knowledge. Information is represented by a Reference Model, which describes common data structures with minimal semantics. Knowledge is specified by archetypes, which are formal representations of clinical concepts built upon a particular Reference Model. This kind of architecture was originally conceived for the implementation of new clinical information systems, but archetypes can also be used to integrate data from existing, non-normalized systems, adding at the same time a semantic meaning to the integrated data. In this paper we explain the possible use of a Dual Model approach for the semantic integration and standardization of heterogeneous clinical data sources and present LinkEHR-Ed, a tool for developing archetypes as elements for integration purposes. LinkEHR-Ed has been designed to be easily used by the two main participants in the process of creating archetypes for clinical data integration: the health domain expert and the information technologies domain expert.

  7. Trans-species learning of cellular signaling systems with bimodal deep belief networks

    PubMed Central

    Chen, Lujia; Cai, Chunhui; Chen, Vicky; Lu, Xinghua

    2015-01-01

    Motivation: Model organisms play critical roles in biomedical research of human diseases and drug development. An imperative task is to translate information/knowledge acquired from model organisms to humans. In this study, we address a trans-species learning problem: predicting human cell responses to diverse stimuli, based on the responses of rat cells treated with the same stimuli. Results: We hypothesized that rat and human cells share a common signal-encoding mechanism but employ different proteins to transmit signals, and we developed a bimodal deep belief network and a semi-restricted bimodal deep belief network to represent the common encoding mechanism and perform trans-species learning. These ‘deep learning’ models include hierarchically organized latent variables capable of capturing the statistical structures in the observed proteomic data in a distributed fashion. The results show that the models significantly outperform two current state-of-the-art classification algorithms. Our study demonstrated the potential of using deep hierarchical models to simulate cellular signaling systems. Availability and implementation: The software is available at the following URL: http://pubreview.dbmi.pitt.edu/TransSpeciesDeepLearning/. The data are available through SBV IMPROVER website, https://www.sbvimprover.com/challenge-2/overview, upon publication of the report by the organizers. Contact: xinghua@pitt.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25995230

  8. Exponential Modelling for Mutual-Cohering of Subband Radar Data

    NASA Astrophysics Data System (ADS)

    Siart, U.; Tejero, S.; Detlefsen, J.

    2005-05-01

    Increasing resolution and accuracy is an important issue in almost any type of radar sensor application. However, both resolution and accuracy are strongly related to the available signal bandwidth and energy. Nowadays, several sensors operating in different frequency bands often become available on a sensor platform. It is an attractive goal to exploit the potential of advanced signal modelling and optimization procedures by making proper use of information stemming from different frequency bands at the RF signal level. An important prerequisite for optimal use of signal energy is coherence between all contributing sensors. Coherent multi-sensor platforms are very expensive and are thus not generally available. This paper presents an approach for accurately estimating object radar responses using subband measurements at different RF frequencies. An exponential model approach makes it possible to compensate for the lack of mutual coherence between independently operating sensors. Mutual coherence is recovered from the a priori information that both sensors have common scattering centers in view. Minimizing the total squared deviation between the measured data and a full-range exponential signal model leads to more accurate pole angles and pole magnitudes than single-band optimization. The model parameters (range and magnitude of point scatterers) after this full-range optimization are also more accurate than the parameters obtained from a commonly used super-resolution procedure (root-MUSIC) applied to the non-coherent subband data.

  9. Assessment of diffuse radiation models in Azores

    NASA Astrophysics Data System (ADS)

    Magarreiro, Clarisse; Brito, Miguel; Soares, Pedro; Azevedo, Eduardo

    2014-05-01

    Measured irradiance databases usually consist of global solar radiation data with limited spatial coverage. Hence, solar radiation models have been developed to estimate the diffuse fraction from the measured global irradiation. This information is critical for assessing the potential of solar energy technologies, for example when deciding whether to use photovoltaic systems with tracking. The different solar radiation models for this purpose differ in the parameters used as input. The simplest, and most common, are models that use global radiation information only. More sophisticated models require meteorological parameters such as information on clouds, atmospheric turbidity, temperature or precipitable water content. Most of these models comprise correlations with the clearness index kt (the portion of horizontal extra-terrestrial radiation reaching the Earth's surface) to obtain the diffuse fraction kd (the portion of global radiation that is diffuse). The applicability of these models depends on local atmospheric conditions and climatic characteristics. The models are not of general validity and are only applicable to locations where the albedo of the surrounding terrain and the atmospheric contamination by dust are not significantly different from those where the corresponding methods were developed. Thus, diffuse fraction models exhibit a relevant degree of location dependence: e.g., models developed with data acquired in Europe are mainly linked to Northern, Central or, more recently, Mediterranean areas. The Azores Archipelago, with its particular climate and cloud cover characteristics, different from mainland Europe, has not yet been considered for the development or testing of such models. The Azorean climate features large amounts of cloud cover throughout its annual cycle, with spatial and temporal variability more complex than the common summer/winter pattern. This study explores the applicability of different existing correlation models between the diffuse fraction and the clearness index or other plain parameters to the Azorean region. Reliable data provided by the Atmospheric Radiation Measurement (ARM) Climate Research Facility from the Graciosa Island deployment of the ARM Mobile Facility (http://www.arm.gov/sites/amf/grw) were used to perform the analysis. Model results showed a tendency to underestimate higher values of diffuse radiation. The performance of the correlation models reviewed makes clear that there is room for improvement.
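
    As a concrete example of the model family being assessed, the sketch below implements one widely used kt-to-kd correlation, the piecewise polynomial of Erbs et al.; the coefficients are quoted from the literature from memory and should be verified before any reuse.

```python
import numpy as np

def diffuse_fraction_erbs(kt):
    """Diffuse fraction kd as a piecewise function of clearness index kt."""
    kt = np.asarray(kt, dtype=float)
    kd = np.where(
        kt <= 0.22,
        1.0 - 0.09 * kt,
        0.9511 - 0.1604 * kt + 4.388 * kt**2 - 16.638 * kt**3 + 12.336 * kt**4,
    )
    return np.where(kt > 0.80, 0.165, kd)

print(diffuse_fraction_erbs([0.1, 0.5, 0.9]))  # overcast, mixed, clear conditions
```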

  10. A METHODOLOGY FOR INTEGRATING IMAGES AND TEXT FOR OBJECT IDENTIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, Patrick R.; Hohimer, Ryan E.; Doucette, Peter J.

    2006-02-13

    Often text and imagery contain information that must be combined to solve a problem. One approach begins with transforming the raw text and imagery into a common structure that contains the critical information in a usable form. This paper presents an application in which imagery of vehicles and text from police reports were combined to demonstrate the power of data fusion to correctly identify a target vehicle--e.g., a red 2002 Ford truck identified in a police report--from a collection of diverse vehicle images. The imagery was abstracted into a common signature by first capturing the conceptual models of the imagery experts in software. Our system then (1) extracted fundamental features (e.g., wheel base, color), (2) made inferences about the information (e.g., it's a red Ford) and (3) translated the raw information into an abstract knowledge signature designed both to capture the important features and to account for uncertainty. Likewise, the conceptual models of text analysis experts were instantiated in software that was used to generate an abstract knowledge signature that could be readily compared with the imagery knowledge signature. While this experiment's primary focus was to demonstrate the power of text and imagery fusion for a specific example, it also suggested several ways that text and geo-registered imagery could be combined to help solve other types of problems.

  11. Integrated Data & Analysis in Support of Informed and Transparent Decision Making

    NASA Astrophysics Data System (ADS)

    Guivetchi, K.

    2012-12-01

    The California Water Plan includes a framework for improving water reliability, environmental stewardship, and economic stability through two initiatives - integrated regional water management to make better use of local water sources by integrating multiple aspects of managing water and related resources; and maintaining and improving statewide water management systems. The Water Plan promotes ways to develop a common approach for data standards and for understanding, evaluating, and improving regional and statewide water management systems, and for common ways to evaluate and select from alternative management strategies and projects. The California Water Plan acknowledges that planning for the future is uncertain and that change will continue to occur. It is not possible to know for certain how population growth, land use decisions, water demand patterns, environmental conditions, the climate, and many other factors that affect water use and supply may change by 2050. To anticipate change, our approach to water management and planning for the future needs to consider and quantify uncertainty, risk, and sustainability. There is a critical need for information sharing and information management to support over-arching and long-term water policy decisions that cross-cut multiple programs across many organizations and provide a common and transparent understanding of water problems and solutions. Achieving integrated water management with multiple benefits requires a transparent description of dynamic linkages between water supply, flood management, water quality, land use, environmental water, and many other factors. Water Plan Update 2013 will include an analytical roadmap for improving data, analytical tools, and decision-support to advance integrated water management at statewide and regional scales. It will include recommendations for linking collaborative processes with technical enhancements, providing effective analytical tools, and improving and sharing data and information. Specifically, this includes achieving better integration and consistency with other planning activities; obtaining consensus on quantitative deliverables; building a common conceptual understanding of the water management system; developing common schematics of the water management system; establishing modeling protocols and standards; and improving transparency and exchange of Water Plan information.

  12. Sting, Carry and Stock: How Corpse Availability Can Regulate De-Centralized Task Allocation in a Ponerine Ant Colony

    PubMed Central

    Schmickl, Thomas; Karsai, Istvan

    2014-01-01

    We develop a model to produce plausible patterns of task partitioning in the ponerine ant Ectatomma ruidum based on the availability of living prey and prey corpses. The model is based on the organizational capabilities of a "common stomach" through which the colony utilizes the availability of a natural (food) substance as a major communication channel to regulate the income and expenditure of the very same substance. This communication channel also has a central role in regulating task partitioning of collective hunting behavior in a supply-and-demand-driven manner. Our model shows that task partitioning of the collective hunting behavior in E. ruidum can be explained by regulation due to a common stomach system. The saturation of the common stomach provides accessible information to individual ants so that they can adjust their hunting behavior accordingly, by engaging in or abandoning stinging or transporting tasks. The common stomach is able to establish and stabilize an effective mix of workforce to exploit the prey population and to transport food into the nest. This system is also able to react to external perturbations in a de-centralized, homeostatic way, such as changes in prey density or the accumulation of food in the nest. Under stable conditions the system develops towards an equilibrium concerning colony size and prey density. Our model shows that organization of work through a common stomach system can allow Ectatomma ruidum to collectively forage for food in a robust, reactive and reliable way. The model is compared to previously published models that followed a different modeling approach. Based on our model analysis we also suggest a series of experiments for which our model gives plausible predictions. These predictions are used to formulate a set of testable hypotheses that should be investigated empirically in future experimentation. PMID:25493558

  13. Generic Educational Knowledge Representation for Adaptive and Cognitive Systems

    ERIC Educational Resources Information Center

    Caravantes, Arturo; Galan, Ramon

    2011-01-01

    The interoperability of educational systems, encouraged by the development of specifications, standards and tools related to the Semantic Web is limited to the exchange of information in domain and student models. High system interoperability requires that a common framework be defined that represents the functional essence of educational systems.…

  14. Setting analyst: A practical harvest planning technique

    Treesearch

    Olivier R.M. Halleux; W. Dale Greene

    2001-01-01

    Setting Analyst is an ArcView extension that facilitates practical harvest planning for ground-based systems. By modeling the travel patterns of ground-based machines, it compares different harvesting settings based on projected average skidding distance, logging costs, and site disturbance levels. Setting Analyst uses information commonly available to consulting...

  15. Differential Language Influence on Math Achievement

    ERIC Educational Resources Information Center

    Chen, Fang

    2010-01-01

    New models are commonly designed to solve certain limitations of other ones. Quantile regression is introduced in this paper because it can provide information that a regular mean regression misses. This research aims to demonstrate its utility in the educational research and measurement field for questions that may not be detected otherwise.…
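
    The statsmodels sketch below illustrates the point with synthetic heteroscedastic data: quantile regression recovers different slopes in the tails, information a single mean regression cannot provide.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.uniform(0, 10, 500)})
df["y"] = 1.0 + 0.5 * df["x"] + rng.normal(scale=0.3 + 0.2 * df["x"])  # spread grows with x

ols_slope = smf.ols("y ~ x", df).fit().params["x"]
for q in (0.1, 0.5, 0.9):
    slope = smf.quantreg("y ~ x", df).fit(q=q).params["x"]
    print(f"quantile {q}: slope {slope:.2f}  (mean regression slope {ols_slope:.2f})")
```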

  16. Practical Guide to Conducting an Item Response Theory Analysis

    ERIC Educational Resources Information Center

    Toland, Michael D.

    2014-01-01

    Item response theory (IRT) is a psychometric technique used in the development, evaluation, improvement, and scoring of multi-item scales. This pedagogical article provides the necessary information needed to understand how to conduct, interpret, and report results from two commonly used ordered polytomous IRT models (Samejima's graded…
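
    As a minimal worked example of the graded response model, the snippet below computes category probabilities for a four-category item from the cumulative (boundary) curves; the discrimination and threshold values are illustrative.

```python
import numpy as np

def grm_probs(theta, a=1.5, b=(-1.0, 0.0, 1.2)):
    """P(response = k | theta) for one 4-category item under the GRM."""
    b = np.asarray(b)
    # P*(k): probability of responding in category k or above (boundary curves)
    p_star = np.r_[1.0, 1.0 / (1.0 + np.exp(-a * (theta - b))), 0.0]
    return -np.diff(p_star)   # adjacent differences give the category probabilities

for theta in (-2.0, 0.0, 2.0):
    print(theta, grm_probs(theta).round(3))  # probabilities sum to 1 at each theta
```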

  17. Sex-specific movements in postfledging juvenile Ovenbirds (Seiurus aurocapilla)

    Treesearch

    Julianna M. A. Jenkins; Mikenzie Hart; Lori S. Eggert; John Faaborg

    2017-01-01

    Understanding sex-specific differences in behavior and survival is essential for informative population modeling and evolutionary biology in animal populations. Uneven sex ratios are common in many migrant passerine species; however, the underlying mechanisms remain unclear. We used molecular sex determination, nest monitoring, and radio telemetry of fledgling...

  18. THE WHITE MORPHOTYPE OF MYCOBACTERIUM AVIUM-INTRACELLULARE IS COMMON IN INFECTED HUMANS AND VIRULENT IN ANIMAL MODELS. (R826828)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  19. A community effort towards a knowledge-base and mathematical model of the human pathogen Salmonella Typhimurium LT2

    USDA-ARS?s Scientific Manuscript database

    Metabolic reconstructions (MRs) are common denominators in systems biology and represent biochemical, genetic, and genomic (BiGG) knowledge-bases for target organisms by capturing currently available information in a consistent, structured manner. Salmonella enterica subspecies I serovar Typhimurium...

  20. The World-Wide Web and Mosaic: An Overview for Librarians.

    ERIC Educational Resources Information Center

    Morgan, Eric Lease

    1994-01-01

    Provides an overview of the Internet's World-Wide Web (Web), a hypertext system. Highlights include the client/server model; Uniform Resource Locator; examples of software; Web servers versus Gopher servers; HyperText Markup Language (HTML); converting files; Common Gateway Interface; organizing Web information; and the role of librarians in…

  1. Intake Procedures in College Counseling Centers.

    ERIC Educational Resources Information Center

    Pappas, James P.; And Others

    Intake procedures are the common subject of the four papers presented in this booklet. James P. Pappas discusses trends, a decision theory model, information and issues in his article "Intake Procedures in Counseling Centers--Trends and Theory." In the second article "The Utilization of Standardized Tests in Intake Procedures or 'Where's the Post…

  2. Bayesian structural equation modeling in sport and exercise psychology.

    PubMed

    Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus

    2015-08-01

    Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.
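
    A stripped-down sketch of the contrast the article draws, assuming PyMC (version 5 API): cross-loadings get weakly informative zero-centered priors instead of being fixed to zero as in conventional maximum likelihood confirmatory factor analysis. The two-factor toy data and all prior scales are illustrative, not the Sport Motivation Scale II analysis.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n = 300
lam = np.array([[.8, 0], [.7, .1], [.9, 0], [0, .8], [.1, .7], [0, .9]])
y = rng.normal(size=(n, 2)) @ lam.T + rng.normal(scale=0.5, size=(n, 6))

main_idx = np.array([0, 0, 0, 1, 1, 1])   # the factor each indicator targets
with pm.Model():
    f = pm.Normal("f", 0, 1, shape=(n, 2))        # latent factor scores
    main = pm.HalfNormal("main", 1.0, shape=6)    # primary loadings
    cross = pm.Normal("cross", 0, 0.1, shape=6)   # weakly informative cross-loadings
    sd = pm.HalfNormal("sd", 1.0, shape=6)
    mu = f[:, main_idx] * main + f[:, 1 - main_idx] * cross
    pm.Normal("y", mu, sd, observed=y)
    idata = pm.sample(500, tune=500, chains=2, random_seed=0)
```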

  3. Integrating a geographic information system, a scientific visualization system and an orographic precipitation model

    USGS Publications Warehouse

    Hay, L.; Knapp, L.

    1996-01-01

    Investigating natural, potential, and man-induced impacts on hydrological systems commonly requires complex modelling with overlapping data requirements, and massive amounts of one- to four-dimensional data at multiple scales and formats. Given the complexity of most hydrological studies, the requisite software infrastructure must incorporate many components including simulation modelling, spatial analysis and flexible, intuitive displays. There is a general requirement for a set of capabilities to support scientific analysis which, at this time, can only come from an integration of several software components. Integration of geographic information systems (GISs) and scientific visualization systems (SVSs) is a powerful technique for developing and analysing complex models. This paper describes the integration of an orographic precipitation model, a GIS and a SVS. The combination of these individual components provides a robust infrastructure which allows the scientist to work with the full dimensionality of the data and to examine the data in a more intuitive manner.

  4. Metadata Design in the New PDS4 Standards - Something for Everybody

    NASA Astrophysics Data System (ADS)

    Raugh, Anne C.; Hughes, John S.

    2015-11-01

    The Planetary Data System (PDS) archives, supports, and distributes data of diverse targets, from diverse sources, to diverse users. One of the core problems addressed by the PDS4 data standard redesign was that of metadata - how to accommodate the increasingly sophisticated demands of search interfaces, analytical software, and observational documentation into label standards without imposing limits and constraints that would impinge on the quality or quantity of metadata that any particular observer or team could supply. And yet, as an archive, PDS must have detailed documentation for the metadata in the labels it supports, or the institutional knowledge encoded into those attributes will be lost - putting the data at risk. The PDS4 metadata solution is based on a three-step approach. First, it is built on two key ISO standards: ISO 11179 "Information Technology - Metadata Registries", which provides a common framework and vocabulary for defining metadata attributes; and ISO 14721 "Space Data and Information Transfer Systems - Open Archival Information System (OAIS) Reference Model", which provides the framework for the information architecture that enforces the object-oriented paradigm for metadata modeling. Second, PDS has defined a hierarchical system that allows it to divide its metadata universe into namespaces ("data dictionaries", conceptually), and more importantly to delegate stewardship for a single namespace to a local authority. This means that a mission can develop its own data model with a high degree of autonomy and effectively extend the PDS model to accommodate its own metadata needs within the common ISO 11179 framework. Finally, within a single namespace - even the core PDS namespace - existing metadata structures can be extended and new structures added to the model as new needs are identified. This poster illustrates the PDS4 approach to metadata management and highlights the expected return on the development investment for PDS, users and data preparers.

  5. Informed consent in direct-to-consumer personal genome testing: the outline of a model between specific and generic consent.

    PubMed

    Bunnik, Eline M; Janssens, A Cecile J W; Schermer, Maartje H N

    2014-09-01

    Broad genome-wide testing is increasingly finding its way to the public through the online direct-to-consumer marketing of so-called personal genome tests. Personal genome tests estimate genetic susceptibilities to multiple diseases and other phenotypic traits simultaneously. Providers commonly make use of Terms of Service agreements rather than informed consent procedures. However, to protect consumers from the potential physical, psychological and social harms associated with personal genome testing and to promote autonomous decision-making with regard to the testing offer, we argue that current practices of information provision are insufficient and that there is a place--and a need--for informed consent in personal genome testing, also when it is offered commercially. The increasing quantity, complexity and diversity of most testing offers, however, pose challenges for information provision and informed consent. Both specific and generic models for informed consent fail to meet its moral aims when applied to personal genome testing. Consumers should be enabled to know the limitations, risks and implications of personal genome testing and should be given control over the genetic information they do or do not wish to obtain. We present the outline of a new model for informed consent which can meet both the norm of providing sufficient information and the norm of providing understandable information. The model can be used for personal genome testing, but will also be applicable to other, future forms of broad genetic testing or screening in commercial and clinical settings. © 2012 John Wiley & Sons Ltd.

  6. Eukaryotic major facilitator superfamily transporter modeling based on the prokaryotic GlpT crystal structure.

    PubMed

    Lemieux, M Joanne

    2007-01-01

    The major facilitator superfamily (MFS) of transporters represents the largest family of secondary active transporters and has a diverse range of substrates. With structural information for four MFS transporters, we can see a strong structural commonality suggesting, as predicted, a common architecture for MFS transporters. The rate of crystal structure determination for MFS transporters is slow, making modeling of both prokaryotic and eukaryotic transporters more enticing. In this review, models of the eukaryotic transporters Glut1, G6PT, OCT1, OCT2 and Pho84, based on the crystal structures of the prokaryotic transporters GlpT and LacY, are discussed. The techniques used to generate the different models are compared. In addition, the validity of these models and the strategy of using prokaryotic crystal structures to model eukaryotic proteins are discussed. For comparison, E. coli GlpT was modeled based on the E. coli LacY structure and compared to the crystal structure of GlpT, demonstrating that experimental evidence is essential for accurate modeling of membrane proteins.

  7. BPMN as a Communication Language for the Process- and Event-Oriented Perspectives in Fact-Oriented Conceptual Models

    NASA Astrophysics Data System (ADS)

    Bollen, Peter

    In this paper we will show how the OMG specification of BPMN (Business Process Modeling Notation) can be used to model the process- and event-oriented perspectives of an application subject area. We will illustrate how fact-oriented conceptual models for the information, process and event perspectives can be used in a 'bottom-up' approach to creating a BPMN model, in combination with other approaches, e.g. the use of a textual description. We will use the common doctor's office example as a running example throughout this article.

  8. An attempt to obtain a detailed declination chart from the United States magnetic anomaly map

    USGS Publications Warehouse

    Alldredge, L.R.

    1989-01-01

    Modern declination charts of the United States show almost no details. It was hoped that declination details could be derived from the information contained in the existing magnetic anomaly map of the United States. This could be realized only if all of the survey data were corrected to a common epoch, at which time a main-field vector model was known, before the anomaly values were computed. Because this was not done, accurate declination values cannot be determined. In spite of this conclusion, declination values were computed using a common main-field model for the entire United States to see how well they compared with observed values. The computed detailed declination values were found to compare less favourably with observed values of declination than declination values computed from the IGRF 1985 model itself. -from Author

  9. Fundamental procedures of geographic information analysis

    NASA Technical Reports Server (NTRS)

    Berry, J. K.; Tomlin, C. D.

    1981-01-01

    Analytical procedures common to most computer-oriented geographic information systems are composed of fundamental map processing operations. A conceptual framework for such procedures is developed and basic operations common to a broad range of applications are described. Among the major classes of primitive operations identified are those associated with: reclassifying map categories as a function of the initial classification, the shape, the position, or the size of the spatial configuration associated with each category; overlaying maps on a point-by-point, a category-wide, or a map-wide basis; measuring distance; establishing visual or optimal path connectivity; and characterizing cartographic neighborhoods based on the thematic or spatial attributes of the data values within each neighborhood. By organizing such operations in a coherent manner, the basis for a generalized cartographic modeling structure can be developed which accommodates a variety of needs in a common, flexible and intuitive manner. The use of each is limited only by the general thematic and spatial nature of the data to which it is applied.
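
    Several of these primitive classes translate directly into raster "map algebra" operations; the NumPy/SciPy sketch below shows reclassification, a point-by-point overlay, and a distance measurement on a toy grid with arbitrary values.

```python
import numpy as np
from scipy import ndimage

elevation = np.array([[3, 5, 9], [2, 6, 8], [1, 4, 7]])
landuse   = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 1]])   # 1 = forest

steep = np.where(elevation > 5, 1, 0)   # reclassify as a function of value
steep_forest = steep & landuse          # point-by-point overlay of two maps
dist_to_forest = ndimage.distance_transform_edt(landuse == 0)  # distance measure

print(steep_forest)
print(dist_to_forest.round(2))          # 0 inside forest; cell units elsewhere
```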

  10. Mechanisms for integration of information models across related domains

    NASA Astrophysics Data System (ADS)

    Atkinson, Rob

    2010-05-01

    It is well recognised that there are opportunities and challenges in cross-disciplinary data integration. A significant barrier, however, is creating a conceptual model of the combined domains and the area of integration. For example, a groundwater domain application may require information from several related domains: geology, hydrology, water policy, etc. Each domain may have its own data holdings and conceptual models, but these will share various common concepts (e.g., the concept of an aquifer). These areas of semantic overlap present significant challenges: first, to choose a single representation (model) of a concept that appears in multiple disparate models, and then to harmonise the other models with that single representation. In addition, models may exist at different levels of abstraction depending on how closely aligned they are with a particular implementation. This makes it hard for modellers in one domain to introduce elements from another domain without either introducing a specific style of implementation or, conversely, dealing with a set of abstract patterns that are hard to integrate with existing implementations. Models are easier to integrate if they are broken down into small units, with common concepts implemented using common models from well-known, predictably managed shared libraries. This vision, however, requires a set of mechanisms (tools and procedures) for implementing and exploiting libraries of model components. These mechanisms need to handle publication, discovery, subscription, versioning and implementation of models in different forms. In this presentation a coherent suite of such mechanisms is proposed, using a scenario based on re-use of geosciences models. This approach forms the basis of a comprehensive strategy to empower domain modellers to create more interoperable systems. The strategy addresses a range of concerns and practices, and includes methodologies, an accessible toolkit, improvements to available modelling software, a community of practice, and the design of model registries. These mechanisms have been used to decouple the generation of simplified data products from a data and metadata maintenance environment, where the simplified products conform to implementation styles, and the data maintenance environment is a modular, extensible implementation of a more complete set of related domain models. Another case study is the provisioning of authoritative place names (a gazetteer) from more complex multi-lingual and historical archives of related place name usage.

  11. Using focus groups to design systems science models that promote oral health equity.

    PubMed

    Kum, Susan S; Northridge, Mary E; Metcalf, Sara S

    2018-06-04

    While the US population overall has experienced improvements in oral health over the past 60 years, oral diseases remain among the most common chronic conditions across the life course. Further, lack of access to oral health care contributes to profound and enduring oral health inequities worldwide. Vulnerable and underserved populations who commonly lack access to oral health care include racial/ethnic minority older adults living in urban environments. The aim of this study was to use a systematic approach to explicate cause and effect relationships in creating a causal map, a type of concept map in which the links between nodes represent causality or influence. To improve our mental models of the real world and devise strategies to promote oral health equity, methods including system dynamics, agent-based modeling, geographic information science, and social network simulation have been leveraged by the research team. The practice of systems science modeling is situated amidst an ongoing modeling process of observing the real world, formulating mental models of how it works, setting decision rules to guide behavior, and from these heuristics, making decisions that in turn affect the state of the real world. Qualitative data were obtained from focus groups conducted with community-dwelling older adults who self-identify as African American, Dominican, or Puerto Rican to elicit their lived experiences in accessing oral health care in their northern Manhattan neighborhoods. The findings of this study support the multi-dimensional and multi-level perspective of access to oral health care and affirm a theorized discrepancy in fit between available dental providers and patients. The lack of information about oral health at the community level may be compromising the use and quality of oral health care among racial/ethnic minority older adults. Well-informed community members may fill critical roles in oral health promotion, as they are viewed as highly credible sources of information and recommendations for dental providers. The next phase of this research will involve incorporating the knowledge gained from this study into simulation models that will be used to explore alternative paths toward improving oral health and health care for racial/ethnic minority older adults.

  12. Modeling web-based information seeking by users who are blind.

    PubMed

    Brunsman-Johnson, Carissa; Narayanan, Sundaram; Shebilske, Wayne; Alakke, Ganesh; Narakesari, Shruti

    2011-01-01

    This article describes website information seeking strategies used by users who are blind and compares those with sighted users. It outlines how assistive technologies and website design can aid users who are blind while information seeking. People who are blind and sighted are tested using an assessment tool and performing several tasks on websites. The times and keystrokes are recorded for all tasks as well as commands used and spatial questioning. Participants who are blind used keyword-based search strategies as their primary tool to seek information. Sighted users also used keyword search techniques if they were unable to find the information using a visual scan of the home page of a website. A proposed model based on the present study for information seeking is described. Keywords are important in the strategies used by both groups of participants and providing these common and consistent keywords in locations that are accessible to the users may be useful for efficient information searching. The observations suggest that there may be a difference in how users search a website that is familiar compared to one that is unfamiliar. © 2011 Informa UK, Ltd.

  13. NASA/DoD Aerospace Knowledge Diffusion Research Project. Paper 31: The information-seeking behavior of engineers

    NASA Technical Reports Server (NTRS)

    Pinelli, Thomas E.; Bishop, Ann P.; Barclay, Rebecca O.; Kennedy, John M.

    1993-01-01

    Engineers are an extraordinarily diverse group of professionals, but an attribute common to all engineers is their use of information. Engineering can be conceptualized as an information processing system that must deal with work-related uncertainty through patterns of technical communications. Throughout the process, data, information, and tacit knowledge are being acquired, produced, transferred, and utilized. While acknowledging that other models exist, we have chosen to view the information-seeking behavior of engineers within a conceptual framework of the engineer as an information processor. This article uses the chosen framework to discuss information-seeking behavior of engineers, reviewing selected literature and empirical studies from library and information science, management, communications, and sociology. The article concludes by proposing a research agenda designed to extend our current, limited knowledge of the way engineers process information.

  14. Developing a workflow to identify inconsistencies in volunteered geographic information: a phenological case study

    USGS Publications Warehouse

    Mehdipoor, Hamed; Zurita-Milla, Raul; Rosemartin, Alyssa; Gerst, Katharine L.; Weltzin, Jake F.

    2015-01-01

Recent improvements in online information communication and mobile location-aware technologies have led to the production of large volumes of volunteered geographic information. Widespread, large-scale efforts by volunteers to collect data can inform and drive scientific advances in diverse fields, including ecology and climatology. Traditional workflows to check the quality of such volunteered information can be costly and time consuming as they rely heavily on human interventions. However, identifying factors that can influence data quality, such as inconsistency, is crucial when these data are used in modeling and decision-making frameworks. Recently developed workflows use simple statistical approaches that assume that the majority of the information is consistent. However, this assumption is not generalizable, and ignores underlying geographic and environmental contextual variability that may explain apparent inconsistencies. Here we describe an automated workflow to check for inconsistency based on the availability of contextual environmental information for sampling locations. The workflow consists of three steps: (1) dimensionality reduction to facilitate further analysis and interpretation of results, (2) model-based clustering to group observations according to their contextual conditions, and (3) identification of inconsistent observations within each cluster. The workflow was applied to volunteered observations of flowering in common and cloned lilac plants (Syringa vulgaris and Syringa x chinensis) in the United States for the period 1980 to 2013. About 97% of the observations for both common and cloned lilacs were flagged as consistent, indicating that volunteers provided reliable information for this case study. Relative to the original dataset, the exclusion of inconsistent observations changed the apparent rate of change in lilac bloom dates by two days per decade, indicating the importance of inconsistency checking as a key step in data quality assessment for volunteered geographic information. Initiatives that leverage volunteered geographic information can adapt this workflow to improve the quality of their datasets and the robustness of their scientific analyses.
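
    As an illustration, a minimal Python sketch of the three-step workflow, assuming scikit-learn and entirely hypothetical inputs (a feature matrix X of contextual environmental covariates and a response y of observed bloom dates); the 3-standard-deviation cutoff is an arbitrary illustrative threshold, not the paper's rule:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 12))   # hypothetical contextual covariates
      y = rng.normal(size=500)         # hypothetical bloom dates (day of year)

      # Step 1: dimensionality reduction.
      Z = PCA(n_components=3).fit_transform(X)

      # Step 2: model-based clustering by contextual conditions.
      labels = GaussianMixture(n_components=4, random_state=0).fit_predict(Z)

      # Step 3: flag observations whose response is extreme within its cluster.
      consistent = np.ones(len(y), dtype=bool)
      for k in range(4):
          members = labels == k
          mu, sd = y[members].mean(), y[members].std()
          consistent[members] = np.abs(y[members] - mu) <= 3 * sd

      print(f"{consistent.mean():.1%} of observations flagged as consistent")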

  15. The Application of Typology Method in Historical Building Information Modelling (hbim) Taking the Information Surveying and Mapping of Jiayuguan Fortress Town as AN Example

    NASA Astrophysics Data System (ADS)

    Li, D. Y.; Li, K.; Wu, C.

    2017-08-01

With the increasingly fine degree of heritage-building surveying and mapping, building information modelling (BIM) technology has begun to be used in the surveying and mapping, renovation, recording, and research of heritage buildings, an application called historic building information modelling (HBIM). The hierarchical framework of BIM's parametric component library, in which components of the same type share the same parameters, has the same internal logic as archaeological typology, which is increasingly popular in the age identification of ancient buildings. Compared with the common materials, 2D drawings, and photos, typology with HBIM has two advantages: (1) comprehensive building information in both collection and representation, and (2) uniform and reasonable classification criteria. This paper takes the information surveying and mapping of Jiayuguan Fortress Town as an example to introduce the fieldwork method of information surveying and mapping based on HBIM technology and the construction of a Revit family library. Then, to demonstrate the feasibility and advantages of HBIM technology in the typology method, the paper identifies the ages of the Guanghua gate tower, Rouyuan gate tower, Wenchang pavilion, and the theater building of Jiayuguan Fortress Town using HBIM technology and the typology method.

  16. On the Black-Scholes European Option Pricing Model Robustness and Generality

    NASA Astrophysics Data System (ADS)

    Takada, Hellinton Hatsuo; de Oliveira Siqueira, José

    2008-11-01

The common presentation of the widely known and accepted Black-Scholes European option pricing model explicitly imposes some restrictions, such as the geometric Brownian motion assumption for the underlying stock price. In this paper, these usual restrictions are relaxed by using the maximum entropy principle of information theory, Pearson's distribution system, and market-frictionlessness and risk-neutrality assumptions in the calculation of a unique risk-neutral probability measure calibrated with market parameters.
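
    For reference, the textbook Black-Scholes price of a European call under the geometric Brownian motion assumption that the paper relaxes (a standard statement, not taken from the paper itself; S_0 is the spot price, K the strike, r the risk-free rate, \sigma the volatility, T the maturity, and N the standard normal CDF):

      \[ C = S_0\,N(d_1) - K e^{-rT} N(d_2), \qquad d_{1,2} = \frac{\ln(S_0/K) + (r \pm \sigma^2/2)\,T}{\sigma\sqrt{T}} \]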

  17. Linking young men who have sex with men (YMSM) to STI physicians: a nationwide cross-sectional survey in China.

    PubMed

    Cao, Bolin; Zhao, Peipei; Bien, Cedric; Pan, Stephen; Tang, Weiming; Watson, Julia; Mi, Guodong; Ding, Yi; Luo, Zhenzhou; Tucker, Joseph D

    2018-05-18

Many young men who have sex with men (YMSM) are reluctant to seek health services and to trust local physicians. Online information seeking may encourage YMSM to identify and see trustworthy physicians, obtain sexual health services, and get tested for sexually transmitted infections (STIs). This study examined online STI information-seeking behaviors among Chinese YMSM and their association with offline physician visits. We conducted a nationwide online survey among YMSM through WeChat, the largest social media platform in China. We collected information on individual demographics, sexual behaviors, online STI information seeking, offline STI testing, and STI physician visits. We examined the most commonly used platforms (search engines, governmental websites, counseling websites, generic social media, gay mobile apps, and mobile medical apps) and their trustworthiness. We also assessed interest in and willingness to use an MSM-friendly physician-finder function embedded within a gay mobile app. Logistic regression models were used to examine the correlation between online STI information searching and offline physician visits. A total of 503 men completed the survey. Most men (425/503, 84.5%) searched for STI information online. The most commonly used platforms to obtain STI information were search engines (402/425, 94.5%), followed by gay mobile apps (201/425, 47.3%). Men reported high trust in information received from gay mobile apps. Men also reported high interest (465/503, 92.4%) in and willingness (463/503, 92.0%) to use an MSM-friendly physician-finder function within such apps. Use of generic social media (aOR = 1.14, 95% CI: 1.04-1.26) and of mobile medical apps (aOR = 1.16, 95% CI: 1.01-1.34) for online information seeking were both associated with visiting a physician. Online STI information seeking is common among YMSM and correlated with visiting a physician. Cultivating partnerships with the emerging mobile medical apps may be useful for disseminating STI information and providing better physician services to YMSM.

  18. Seeking Medical Information Using Mobile Apps and the Internet: Are Family Caregivers Different from the General Public?

    PubMed

    Kim, Hyunmin; Paige Powell, M; Bhuyan, Soumitra S; Bhuyan, Soumitra Sudip

    2017-03-01

Family caregivers play an important role in caring for cancer patients, as they exchange medical information with health care providers. However, relatively little is known about how family caregivers seek medical information using mobile apps and the Internet. We examined factors associated with medical information seeking via mobile apps and the Internet among family caregivers and the general public, using data from the 2014 Health Information National Trends Survey 4 Cycle 1. The study sample consisted of 2425 family caregivers and 1252 non-caregivers (the general public). Guided by the Comprehensive Model of Information Seeking (CMIS), we examined the impact of related factors on two outcome variables for medical information seeking, mobile app use and Internet use, with multivariate logistic regression analyses. We found that online medical information seeking differs between family caregivers and the general public. Overall, use of the Internet for medical information seeking was more common among family caregivers, while use of mobile apps was less common among family caregivers than among the general public. Married family caregivers were less likely to use mobile apps, while family caregivers who trusted cancer information were more likely to use the Internet for medical information seeking compared with the general public. Medical information-seeking behavior among family caregivers can be an important predictor of both their own health and the health of their cancer patients. Future research should explore the low usage of mobile health applications among the family caregiver population.

  19. The Dilution Effect and Information Integration in Perceptual Decision Making

    PubMed Central

    Hotaling, Jared M.; Cohen, Andrew L.; Shiffrin, Richard M.; Busemeyer, Jerome R.

    2015-01-01

In cognitive science there is a seeming paradox: On the one hand, studies of human judgment and decision making have repeatedly shown that people systematically violate optimal behavior when integrating information from multiple sources. On the other hand, optimal models, often Bayesian, have been successful at accounting for information integration in fields such as categorization, memory, and perception. This apparent conflict could be due, in part, to different materials and designs that lead to differences in the nature of processing. Stimuli that require controlled integration of information, such as the quantitative or linguistic information commonly found in judgment studies, may lead to suboptimal performance. In contrast, perceptual stimuli may lend themselves to automatic processing, resulting in integration that is closer to optimal. We tested this hypothesis with an experiment in which participants categorized faces based on resemblance to a family patriarch. The amount of evidence contained in the top and bottom halves of each test face was independently manipulated. These data allow us to investigate a canonical example of suboptimal information integration from the judgment and decision making literature, the dilution effect. Splitting the top and bottom halves of a face, a manipulation meant to encourage controlled integration of information, produced behavior that was farther from optimal and larger dilution effects. The Multi-component Information Accumulation model, a hybrid optimal/averaging model of information integration, successfully accounts for key accuracy, response time, and dilution effects. PMID:26406323

  20. The Dilution Effect and Information Integration in Perceptual Decision Making.

    PubMed

    Hotaling, Jared M; Cohen, Andrew L; Shiffrin, Richard M; Busemeyer, Jerome R

    2015-01-01

In cognitive science there is a seeming paradox: On the one hand, studies of human judgment and decision making have repeatedly shown that people systematically violate optimal behavior when integrating information from multiple sources. On the other hand, optimal models, often Bayesian, have been successful at accounting for information integration in fields such as categorization, memory, and perception. This apparent conflict could be due, in part, to different materials and designs that lead to differences in the nature of processing. Stimuli that require controlled integration of information, such as the quantitative or linguistic information commonly found in judgment studies, may lead to suboptimal performance. In contrast, perceptual stimuli may lend themselves to automatic processing, resulting in integration that is closer to optimal. We tested this hypothesis with an experiment in which participants categorized faces based on resemblance to a family patriarch. The amount of evidence contained in the top and bottom halves of each test face was independently manipulated. These data allow us to investigate a canonical example of suboptimal information integration from the judgment and decision making literature, the dilution effect. Splitting the top and bottom halves of a face, a manipulation meant to encourage controlled integration of information, produced behavior that was farther from optimal and larger dilution effects. The Multi-component Information Accumulation model, a hybrid optimal/averaging model of information integration, successfully accounts for key accuracy, response time, and dilution effects.

  1. FuGEFlow: data model and markup language for flow cytometry.

    PubMed

    Qian, Yu; Tchuvatkina, Olga; Spidlen, Josef; Wilkinson, Peter; Gasparetto, Maura; Jones, Andrew R; Manion, Frank J; Scheuermann, Richard H; Sekaly, Rafick-Pierre; Brinkman, Ryan R

    2009-06-16

Flow cytometry technology is widely used in both health care and research. The rapid expansion of flow cytometry applications has outpaced the development of data storage and analysis tools. Collaborative efforts being taken to eliminate this gap include building common vocabularies and ontologies, designing generic data models, and defining data exchange formats. The Minimum Information about a Flow Cytometry Experiment (MIFlowCyt) standard was recently adopted by the International Society for Advancement of Cytometry. This standard guides researchers on the information that should be included in peer-reviewed publications, but it is insufficient for data exchange and integration between computational systems. The Functional Genomics Experiment (FuGE) formalizes common aspects of comprehensive and high-throughput experiments across different biological technologies. We have extended the FuGE object model to accommodate flow cytometry data and metadata. We used the MagicDraw modelling tool to design a UML model (Flow-OM) according to the FuGE extension guidelines and the AndroMDA toolkit to transform the model into a markup language (Flow-ML). We mapped each MIFlowCyt term to either an existing FuGE class or a new FuGEFlow class. The development environment was validated by comparing the official FuGE XSD to the schema we generated from the FuGE object model using our configuration. After the Flow-OM model was completed, the final version of Flow-ML was generated and validated against an example MIFlowCyt-compliant experiment description. The extension of FuGE for flow cytometry has resulted in a generic FuGE-compliant data model (FuGEFlow), which accommodates and links together all information required by MIFlowCyt. The FuGEFlow model can be used to build software and databases using FuGE software toolkits to facilitate automated exchange and manipulation of potentially large flow cytometry experimental data sets. Additional project documentation, including reusable design patterns and a guide for setting up a development environment, was contributed back to the FuGE project. We have shown that an extension of FuGE can be used to transform minimum information requirements in natural language to markup language in XML. Extending FuGE required significant effort, but in our experience the benefits outweighed the costs. FuGEFlow is expected to play a central role in describing flow cytometry experiments and ultimately facilitating data exchange, including with public flow cytometry repositories currently under development.

  2. Zebrafish and Streptococcal Infections.

    PubMed

    Saralahti, A; Rämet, M

    2015-09-01

    Streptococcal bacteria are a versatile group of gram-positive bacteria capable of infecting several host organisms, including humans and fish. Streptococcal species are common colonizers of the human respiratory and gastrointestinal tract, but they also cause some of the most common life-threatening, invasive infections in humans and aquaculture. With its unique characteristics and efficient tools for genetic and imaging applications, the zebrafish (Danio rerio) has emerged as a powerful vertebrate model for infectious diseases. Several zebrafish models introduced so far have shown that zebrafish are suitable models for both zoonotic and human-specific infections. Recently, several zebrafish models mimicking human streptococcal infections have also been developed. These models show great potential in providing novel information about the pathogenic mechanisms and host responses associated with human streptococcal infections. Here, we review the zebrafish infection models for the most relevant streptococcal species: the human-specific Streptococcus pneumoniae and Streptococcus pyogenes, and the zoonotic Streptococcus iniae and Streptococcus agalactiae. The recent success and the future potential of these models for the study of host-pathogen interactions in streptococcal infections are also discussed. © 2015 The Foundation for the Scandinavian Journal of Immunology.

  3. Investigating Relationships Between Health-Related Problems and Online Health Information Seeking.

    PubMed

    Oh, Young Sam; Song, Na Kyoung

    2017-01-01

Online health information seeking (OHIS) functions as a coping strategy to relieve health-related stress and problems. When people rate their health as poor or feel concern about their health, they frequently turn to the Internet to seek health-related information in order to understand their symptoms and treatments. Given this role of OHIS, it is important to understand the relationships between health-related problems and OHIS. This study applies the Common-Sense Model as a theoretical lens to examine the relationship between health-related problems (i.e., diagnosis of cancer, poor self-rated health, and psychological distress) and the OHIS of adults in the US. Using the Health Information National Trends Survey 4 Cycle 1 (2012), a total of 2351 adult Internet users were included in this research. Hierarchical logistic regression analyses were conducted to examine the research model, and the model adding psychological distress resulted in a statistically significant improvement in model fit. In this study, lower levels of self-rated health and higher levels of psychological distress were significantly associated with higher odds of OHIS. The findings support the idea that low self-rated health and high perceived psychological distress lead people to search for health-related information via the Internet in order to cope with health-related concern and distress.

  4. A review of statistical updating methods for clinical prediction models.

    PubMed

    Su, Ting-Li; Jaki, Thomas; Hickey, Graeme L; Buchan, Iain; Sperrin, Matthew

    2018-01-01

A clinical prediction model is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new clinical prediction model for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate existing clinical prediction models already developed for use in similar contexts or populations. In addition, clinical prediction models commonly become miscalibrated over time and need replacing or updating. In this article, we review a range of approaches for re-using and updating clinical prediction models; these fall into three main categories: simple coefficient updating, combining multiple previous clinical prediction models in a meta-model, and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the United Kingdom. We found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing clinical prediction models for a new population or context, and these should be implemented, using a breadth of complementary statistical methods, rather than developing a new clinical prediction model from scratch.
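
    As an illustration of the first and simplest category, a minimal Python sketch of logistic recalibration: the old model's linear predictor is kept and only an intercept and calibration slope are re-estimated on data from the target population. All names and numbers are hypothetical; scikit-learn is assumed:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      X_new = rng.normal(size=(400, 5))                # new-population data
      beta_old = np.array([0.8, -0.5, 0.3, 0.0, 0.2])  # old model's coefficients
      intercept_old = -1.0
      # Simulate outcomes whose true intercept has drifted (miscalibration).
      y_new = rng.binomial(1, 1 / (1 + np.exp(-(X_new @ beta_old - 0.4))))

      lp = X_new @ beta_old + intercept_old            # old linear predictor

      # Refit only an intercept and a slope on the old linear predictor.
      recal = LogisticRegression().fit(lp.reshape(-1, 1), y_new)
      print("updated intercept:", recal.intercept_[0])  # absorbs the drift
      print("calibration slope:", recal.coef_[0, 0])    # approx. 1 here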

  5. Modeling market mechanism with the minority game

    NASA Astrophysics Data System (ADS)

    Challet, Damien; Marsili, Matteo; Zhang, Yi-Cheng

    2000-01-01

Using the minority game model we study a broad spectrum of problems of market mechanism. We study the role of different types of agents: producers, speculators, and noise traders. The central issue here is the information flow: producers feed information into the market, whereas speculators take it away. How well each agent fares in the common game depends on market conditions as well as on the agent's sophistication. Sometimes there is much to gain with little effort; sometimes great effort brings virtually no incremental gain. Market impact is also shown to play an important role: a strategy's quality should be judged when it is actually used in play. Though the minority game is an extremely simplified market model, it allows one to ask, analyze, and answer many questions that arise in real markets.
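
    A minimal minority game simulation in Python, as an illustrative sketch of the model family (not the authors' specific producer/speculator/noise-trader variant): N agents with memory m each hold S random strategies, play the currently best-scoring one, and the minority side wins each round:

      import numpy as np

      rng = np.random.default_rng(2)
      N, m, S, T = 301, 3, 2, 2000    # agents, memory, strategies, rounds
      P = 2 ** m                      # number of distinguishable histories
      strategies = rng.choice([-1, 1], size=(N, S, P))
      scores = np.zeros((N, S))
      history = 0                     # recent outcomes encoded as an integer
      attendance = []

      for t in range(T):
          best = scores.argmax(axis=1)               # each agent's best strategy
          actions = strategies[np.arange(N), best, history]
          A = actions.sum()                          # net attendance
          attendance.append(A)
          # Minority side wins: strategies that chose -sign(A) gain a point.
          scores -= strategies[:, :, history] * np.sign(A)
          winning_bit = int(np.sign(A) < 0)          # 1 if the minority chose +1
          history = ((history << 1) | winning_bit) % P

      print("volatility sigma^2/N:", np.var(attendance) / N)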

  6. Modeling and predicting abstract concept or idea introduction and propagation through geopolitical groups

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger M.; Handley, James W.; Hicklen, Michael L.

    2007-04-01

This paper describes a novel capability for modeling known idea-propagation transformations and predicting responses to new ideas from geopolitical groups. Ideas are captured using semantic words that are text based and bear cognitive definitions. We demonstrate a unique algorithm for converting these into analytical predictive equations. Using the illustrative idea of "proposing a gasoline price increase of $1 per gallon from $2" and its changing perceived impact across 5 demographic groups, we identify 13 cost-of-living Diplomatic, Information, Military, and Economic (DIME) features common to all 5 demographic groups. This enables the modeling and monitoring of the Political, Military, Economic, Social, Information, and Infrastructure (PMESII) effects of this idea on each group and of how their "perception" of the proposal changes. Our algorithm and results are summarized in this paper.

  7. Human eye haptics-based multimedia.

    PubMed

    Velandia, David; Uribe-Quevedo, Alvaro; Perez-Gutierrez, Byron

    2014-01-01

Immersive and interactive multimedia applications offer complementary study tools in anatomy, as users can explore 3D models while obtaining information about the organ, tissue, or part being explored. Haptics increases the sense of interaction with virtual objects, improving the user experience in a more realistic manner. Common tools for studying the eye include books, illustrations, and assembly models; more recently, these are being complemented by mobile apps whose 3D capabilities, computing power, and user bases are growing. The goal of this project is to develop a complementary eye anatomy and pathology study tool using deformable models within a multimedia application, offering students the opportunity to explore the eye up close and from within, with relevant information. Validation of the tool provided feedback on the potential of the development, along with suggestions on improving haptic feedback and navigation.

  8. Does the choice of nucleotide substitution models matter topologically?

    PubMed

    Hoff, Michael; Orf, Stefan; Riehm, Benedikt; Darriba, Diego; Stamatakis, Alexandros

    2016-03-24

In the context of a master's-level programming practical at the computer science department of the Karlsruhe Institute of Technology, we developed and make available open-source code for testing all 203 possible nucleotide substitution models in the Maximum Likelihood (ML) setting under the common Akaike, corrected Akaike, and Bayesian information criteria. We address the question of whether model selection matters topologically, that is, whether conducting ML inferences under the optimal model, instead of a standard General Time Reversible model, yields different tree topologies. We also assess to what degree models selected and trees inferred under the three standard criteria (AIC, AICc, BIC) differ. Finally, we assess whether the definition of the sample size (#sites versus #sites × #taxa) yields different models and, as a consequence, different tree topologies. We find that all three factors (by order of impact: nucleotide model selection, information criterion used, sample size definition) can yield topologically substantially different final tree topologies (topological difference exceeding 10%) for approximately 5% of the tree inferences conducted on the 39 empirical datasets used in our study. We find that using the best-fit nucleotide substitution model may change the final ML tree topology compared to an inference under a default GTR model. The effect is less pronounced when comparing distinct information criteria. Nonetheless, in some cases we did obtain substantial topological differences.
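
    For orientation, the three criteria have the standard forms below, where \hat{L} is the maximized likelihood, k the number of free parameters, and n the sample size (the quantity whose definition, #sites versus #sites × #taxa, the study varies). Only AICc and BIC depend on n, which is why the sample-size definition can change the selected model:

      \[ \mathrm{AIC} = -2\ln\hat{L} + 2k, \qquad \mathrm{AICc} = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1}, \qquad \mathrm{BIC} = -2\ln\hat{L} + k\ln n \]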

  9. Forecasting an invasive species’ distribution with global distribution data, local data, and physiological information

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Young, Nicholas E.; Talbert, Marian; Talbert, Colin

    2018-01-01

Understanding invasive species distributions and potential invasions often requires broad-scale information on the environmental tolerances of the species. Further, resource managers are often faced with knowing these broad-scale relationships as well as nuanced environmental factors related to their landscape that influence where an invasive species occurs and potentially could occur. Using invasive buffelgrass (Cenchrus ciliaris), we developed global models and local models for Saguaro National Park, Arizona, USA, based on location records and literature on physiological tolerances to environmental factors to investigate whether environmental relationships of a species at a global scale are also important at local scales. In addition to correlative models with five commonly used algorithms, we also developed a model using a priori user-defined relationships between occurrence and environmental characteristics based on a literature review. All correlative models at both scales performed well based on statistical evaluations. The user-defined curves closely matched those produced by the correlative models, indicating that the correlative models may be capturing mechanisms driving the distribution of buffelgrass. Given climate projections for the region, both global and local models indicate that conditions at Saguaro National Park may become more suitable for buffelgrass. Combining global and local data with correlative models and physiological information provided a holistic approach to forecasting invasive species distributions.

  10. Information Environments

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.; Naiman, Cynthia

    2003-01-01

The objective of GRC CNIS/IE work is to build a plug-and-play infrastructure that provides the Grand Challenge Applications with a suite of tools for coupling codes together, for numerical zooming between code fidelities, and for deploying these simulations onto the Information Power Grid. The GRC CNIS/IE work will streamline and improve this process by providing tighter integration of various tools through the object-oriented design of component models and data objects and through the use of CORBA (Common Object Request Broker Architecture).

  11. Legume information system (LegumeInfo.org): a key component of a set of federated data resources for the legume family.

    PubMed

    Dash, Sudhansu; Campbell, Jacqueline D; Cannon, Ethalinda K S; Cleary, Alan M; Huang, Wei; Kalberer, Scott R; Karingula, Vijay; Rice, Alex G; Singh, Jugpreet; Umale, Pooja E; Weeks, Nathan T; Wilkey, Andrew P; Farmer, Andrew D; Cannon, Steven B

    2016-01-04

    Legume Information System (LIS), at http://legumeinfo.org, is a genomic data portal (GDP) for the legume family. LIS provides access to genetic and genomic information for major crop and model legumes. With more than two-dozen domesticated legume species, there are numerous specialists working on particular species, and also numerous GDPs for these species. LIS has been redesigned in the last three years both to better integrate data sets across the crop and model legumes, and to better accommodate specialized GDPs that serve particular legume species. To integrate data sets, LIS provides genome and map viewers, holds synteny mappings among all sequenced legume species and provides a set of gene families to allow traversal among orthologous and paralogous sequences across the legumes. To better accommodate other specialized GDPs, LIS uses open-source GMOD components where possible, and advocates use of common data templates, formats, schemas and interfaces so that data collected by one legume research community are accessible across all legume GDPs, through similar interfaces and using common APIs. This federated model for the legumes is managed as part of the 'Legume Federation' project (accessible via http://legumefederation.org), which can be thought of as an umbrella project encompassing LIS and other legume GDPs. Published by Oxford University Press on behalf of Nucleic Acids Research 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  12. Hydrologic Process-oriented Optimization of Electrical Resistivity Tomography

    NASA Astrophysics Data System (ADS)

    Hinnell, A.; Bechtold, M.; Ferre, T. A.; van der Kruk, J.

    2010-12-01

    Electrical resistivity tomography (ERT) is commonly used in hydrologic investigations. Advances in joint and coupled hydrogeophysical inversion have enhanced the quantitative use of ERT to construct and condition hydrologic models (i.e. identify hydrologic structure and estimate hydrologic parameters). However the selection of which electrical resistivity data to collect and use is often determined by a combination of data requirements for geophysical analysis, intuition on the part of the hydrogeophysicist and logistical constraints of the laboratory or field site. One of the advantages of coupled hydrogeophysical inversion is the direct link between the hydrologic model and the individual geophysical data used to condition the model. That is, there is no requirement to collect geophysical data suitable for independent geophysical inversion. The geophysical measurements collected can be optimized for estimation of hydrologic model parameters rather than to develop a geophysical model. Using a synthetic model of drip irrigation we evaluate the value of individual resistivity measurements to describe the soil hydraulic properties and then use this information to build a data set optimized for characterizing hydrologic processes. We then compare the information content in the optimized data set with the information content in a data set optimized using a Jacobian sensitivity analysis.

  13. Investigating the Use of 3d Geovisualizations for Urban Design in Informal Settlement Upgrading in South Africa

    NASA Astrophysics Data System (ADS)

    Rautenbach, V.; Coetzee, S.; Çöltekin, A.

    2016-06-01

    Informal settlements are a common occurrence in South Africa, and to improve in-situ circumstances of communities living in informal settlements, upgrades and urban design processes are necessary. Spatial data and maps are essential throughout these processes to understand the current environment, plan new developments, and communicate the planned developments. All stakeholders need to understand maps to actively participate in the process. However, previous research demonstrated that map literacy was relatively low for many planning professionals in South Africa, which might hinder effective planning. Because 3D visualizations resemble the real environment more than traditional maps, many researchers posited that they would be easier to interpret. Thus, our goal is to investigate the effectiveness of 3D geovisualizations for urban design in informal settlement upgrading in South Africa. We consider all involved processes: 3D modelling, visualization design, and cognitive processes during map reading. We found that procedural modelling is a feasible alternative to time-consuming manual modelling, and can produce high quality models. When investigating the visualization design, the visual characteristics of 3D models and relevance of a subset of visual variables for urban design activities of informal settlement upgrades were qualitatively assessed. The results of three qualitative user experiments contributed to understanding the impact of various levels of complexity in 3D city models and map literacy of future geoinformatics and planning professionals when using 2D maps and 3D models. The research results can assist planners in designing suitable 3D models that can be used throughout all phases of the process.

  14. The Semantic Environment: Heuristics for a Cross-Context Human-Information Interaction Model

    NASA Astrophysics Data System (ADS)

    Resmini, Andrea; Rosati, Luca

    This chapter introduces a multidisciplinary holistic approach for the general design of successful bridge experiences as a cross-context human-information interaction model. Nowadays it is common to interact through a number of different domains in order to communicate successfully, complete a task, or elicit a desired response: Users visit a reseller’s web site to find a specific item, book it, then drive to the closest store to complete their purchase. As such, one of the crucial challenges user experience design will face in the near future is how to structure and provide bridge experiences seamlessly spanning multiple communication channels or media formats for a specific purpose.

  15. Between-Person and Within-Person Subscore Reliability: Comparison of Unidimensional and Multidimensional IRT Models

    ERIC Educational Resources Information Center

    Bulut, Okan

    2013-01-01

    The importance of subscores in educational and psychological assessments is undeniable. Subscores yield diagnostic information that can be used for determining how each examinee's abilities/skills vary over different content domains. One of the most common criticisms about reporting and using subscores is insufficient reliability of subscores.…

  16. Drinking and Driving PSAs: A Content Analysis of Behavioral Influence Strategies.

    ERIC Educational Resources Information Center

    Slater, Michael D.

    1999-01-01

    Study randomly samples 66 drinking and driving television public service announcements that were then coded using a categorical and dimensional scheme. Data set reveals that informational/testimonial messages made up almost half of the total; positive appeals were the next most common, followed by empathy, fear, and modeling appeals. (Contains 34…

  17. Elevation, aspect, and cove size effects on southern Appalachian salamanders

    Treesearch

    W. Mark Ford; Michael A. Menzel; Richard H. Odom

    2002-01-01

    Using museum collection records and variables computed by digital terrain modeling in a geographic information system, we examined the relationship of elevation, aspect, and "cove" patch size to the presence or absence of 7 common woodland salamanders in mature cove hardwood and northern hardwood forests in the southern Appalachians of Georgia, North Carolina...

  18. Implications of Informal Education Experiences for Mathematics Teachers' Ability to Make Connections beyond Formal Classroom

    ERIC Educational Resources Information Center

    Popovic, Gorjana; Lederman, Judith S.

    2015-01-01

    The Common Core Standard for Mathematical Practice 4: Model with Mathematics specifies that mathematically proficient students are able to make connections between school mathematics and its applications to solving real-world problems. Hence, mathematics teachers are expected to incorporate connections between mathematical concepts they teach and…

  19. Supporting Evidence Use in Networked Professional Learning: The Role of the Middle Leader

    ERIC Educational Resources Information Center

    LaPointe-McEwan, Danielle; DeLuca, Christopher; Klinger, Don A.

    2017-01-01

    Background: In Canada, contemporary collaborative professional learning models for educators utilise multiple forms of evidence to inform practice. Commonly, two forms of evidence are prioritised: (a) research-based evidence and (b) classroom-based evidence of student learning. In Ontario, the integration of these two forms of evidence within…

  20. Modeling of n-hexadecane and water sorption in wood

    Treesearch

    Ganna Baglayeva; Gautham Krishnamoorthy; Charles R. Frihart; Wayne S. Seamus; Jane O’Dell; Evguenii Kozliak

    2016-01-01

    Contamination of wooden framing structures with semivolatile organic chemicals is a common occurrence from the spillage of chemicals, such as impregnation with fuel oil hydrocarbons during floods. Little information is available to understand the penetration of fuel oil hydrocarbons into wood under ambient conditions. To imitate flood and storage scenarios, the...

  1. The Use of Structure Coefficients to Address Multicollinearity in Sport and Exercise Science

    ERIC Educational Resources Information Center

    Yeatts, Paul E.; Barton, Mitch; Henson, Robin K.; Martin, Scott B.

    2017-01-01

A common practice in general linear model (GLM) analyses is to interpret regression coefficients (e.g., standardized β weights) as indicators of variable importance. However, focusing solely on standardized beta weights may provide limited or erroneous information. For example, β weights become increasingly unreliable when predictor variables are…
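
    A minimal Python sketch of the contrast the article draws, on hypothetical collinear data: beta weights split credit between correlated predictors, while structure coefficients (the correlation of each predictor with the model's predicted scores) show that both predictors relate strongly to the model's output:

      import numpy as np

      rng = np.random.default_rng(5)
      n = 500
      x1 = rng.normal(size=n)
      x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)   # highly collinear with x1
      y = x1 + x2 + rng.normal(size=n)
      X = np.column_stack([x1, x2])

      # Standardized beta weights from ordinary least squares.
      Xz = (X - X.mean(0)) / X.std(0)
      yz = (y - y.mean()) / y.std()
      beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)

      # Structure coefficients: correlation of each predictor with y-hat.
      y_hat = Xz @ beta
      r_s = [np.corrcoef(Xz[:, j], y_hat)[0, 1] for j in range(Xz.shape[1])]

      print("beta weights:          ", np.round(beta, 2))  # split the credit
      print("structure coefficients:", np.round(r_s, 2))   # both near 1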

  2. Simultaneous Estimation of Overall and Domain Abilities: A Higher-Order IRT Model Approach

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Song, Hao

    2009-01-01

    Assessments consisting of different domains (e.g., content areas, objectives) are typically multidimensional in nature but are commonly assumed to be unidimensional for estimation purposes. The different domains of these assessments are further treated as multi-unidimensional tests for the purpose of obtaining diagnostic information. However, when…

  3. A Proposal for Studying the Values/Reasoning Distinction in Moral Development and Training.

    ERIC Educational Resources Information Center

    Kaplan, Martin F.

    Application of a common framework in studies of the development of social cognition can reduce conceptual and methodological ambiguities and enable clearer study of core issues. This paper describes the core issues and their attendant problems, outlines a model of information integration that addresses the issues, and describes some illustrative…

  4. On the Delusiveness of Adopting a Common Space for Modeling IR Objects: Are Queries Documents?

    ERIC Educational Resources Information Center

    Bollmann-Sdorra, Peter; Raghavan, Vjay V.

    1993-01-01

    Proposes that document space and query space have different structures in information retrieval and discusses similarity measures, term independence, and linear structure. Examples are given using the retrieval functions of dot-product, the cosine measure, the coefficient of Jaccard, and the overlap function. (Contains 28 references.) (LRW)

  5. Cognitive Models of Scientific Work and Their Implications for the Design of Knowledge Delivery Systems.

    ERIC Educational Resources Information Center

    Mavor, A. S.; And Others

    Part of a sustained program that has involved the design of personally tailored information systems responsive to the needs of scientists performing common research and teaching tasks, this project focuses on the procedural and content requirements for accomplishing need diagnosis and presents these requirements as specifications for an…

  6. Combining Information to Answer Questions about Names and Categories

    ERIC Educational Resources Information Center

    Kelso, Ginger L.

    2009-01-01

    Children's language and world knowledge grows explosively in the preschool years. One critical contributor to this growth is their developing ability to infer relations beyond those that have been directly taught or modeled. Categorization is one type of skill commonly taught in preschool in which inference is an important aspect. This study…

  7. Simultaneous Semi-Distributed Model Calibration Guided by ...

    EPA Pesticide Factsheets

Modelling approaches to transfer hydrologically-relevant information from locations with streamflow measurements to locations without such measurements continue to be an active field of research for hydrologists. The Pacific Northwest Hydrologic Landscapes (PNW HL) provide a solid conceptual classification framework based on our understanding of dominant processes. A Hydrologic Landscape code (a 5-letter descriptor based on physical and climatic properties) describes each assessment-unit area, and these units average 60 km2 in area. The core function of these HL codes is to relate and transfer hydrologically meaningful information between watersheds without the need for streamflow time series. We present a novel approach based on the HL framework to answer the question "How can we calibrate models across separate watersheds simultaneously, guided by our understanding of dominant processes?". We should be able to apply the same parameterizations to assessment units with common HL codes if (1) the Hydrologic Landscapes contain hydrologic information transferable between watersheds at a sub-watershed scale and (2) we use a conceptual hydrologic model and parameters that reflect the hydrologic behavior of a watershed. This work specifically tests the ability to use HL codes to inform and share model parameters across watersheds in the Pacific Northwest. EPA's Western Ecology Division has published and is refining a framework for defining la

  8. Influence of three common calibration metrics on the diagnosis of climate change impacts on water resources

    NASA Astrophysics Data System (ADS)

    Seiller, G.; Roy, R.; Anctil, F.

    2017-04-01

Uncertainties associated with evaluating the impacts of climate change on water resources are broad, stem from multiple sources, and lead to diagnoses that are sometimes difficult to interpret. Quantification of these uncertainties is a key element in building confidence in the analyses and in providing water managers with valuable information. This work specifically evaluates the influence of hydrological-model calibration metrics on future water resources projections for thirty-seven watersheds in the Province of Québec, Canada. Twelve lumped hydrologic models, representing a wide range of operational options, are calibrated with three common objective functions derived from the Nash-Sutcliffe efficiency. The hydrologic models are forced with climate simulations corresponding to two RCPs, twenty-nine GCMs from CMIP5 (Coupled Model Intercomparison Project phase 5), and two post-treatment techniques, leading to future projections for the 2041-2070 period. Results show that the diagnosis of the impacts of climate change on water resources is strongly affected by hydrologic model selection and calibration metrics. Indeed, for the four selected hydrological indicators, dedicated to water management, parameters from the three objective functions can provide different interpretations in terms of absolute and relative changes, as well as in the direction of projected changes and the degree of climatic-ensemble consensus. The GR4J model and a multimodel approach offer the best modeling options, based on calibration performance and robustness. Overall, these results illustrate the need to provide water managers with detailed information on relative-change analyses, but also absolute change values, especially for hydrological indicators that act as security policy thresholds.
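
    For reference, the Nash-Sutcliffe efficiency from which the three objective functions are derived (standard form; Q_t^{\mathrm{obs}} and Q_t^{\mathrm{sim}} are observed and simulated flows and \bar{Q}^{\mathrm{obs}} the observed mean). Common variants apply the same formula to transformed flows, e.g. square-root or logarithmic; the abstract does not state which transforms the study used:

      \[ \mathrm{NSE} = 1 - \frac{\sum_t \left( Q_t^{\mathrm{obs}} - Q_t^{\mathrm{sim}} \right)^2}{\sum_t \left( Q_t^{\mathrm{obs}} - \bar{Q}^{\mathrm{obs}} \right)^2} \]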

  9. BRIDG: a domain information model for translational and clinical protocol-driven research.

    PubMed

    Becnel, Lauren B; Hastak, Smita; Ver Hoef, Wendy; Milius, Robert P; Slack, MaryAnn; Wold, Diane; Glickman, Michael L; Brodsky, Boris; Jaffe, Charles; Kush, Rebecca; Helton, Edward

    2017-09-01

It is critical to integrate and analyze data from biological, translational, and clinical studies with data from health systems; however, electronic artifacts are stored in thousands of disparate systems that are often unable to readily exchange data. To facilitate meaningful data exchange, a model that presents a common understanding of biomedical research concepts and their relationships with health care semantics is required. The Biomedical Research Integrated Domain Group (BRIDG) domain information model fulfills this need. Software systems created from BRIDG have shared meaning "baked in," enabling interoperability among disparate systems. For nearly 10 years, the Clinical Data Interchange Standards Consortium, the National Cancer Institute, the US Food and Drug Administration, and Health Level 7 International have been key stakeholders in developing BRIDG. BRIDG is an open-source Unified Modeling Language (UML) class model developed through use cases and harmonization with other models. With its 4+ releases, BRIDG includes clinical and now translational research concepts in its Common, Protocol Representation, Study Conduct, Adverse Events, Regulatory, Statistical Analysis, Experiment, Biospecimen, and Molecular Biology subdomains. The model is a Clinical Data Interchange Standards Consortium, Health Level 7 International, and International Organization for Standardization standard that has been utilized in national and international standards-based software development projects. It will continue to mature and evolve in the areas of clinical imaging, pathology, ontology, and vocabulary support. BRIDG 4.1.1 and prior releases are freely available at https://bridgmodel.nci.nih.gov . © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  10. Experimentally Derived δ13C and δ15N Discrimination Factors for Gray Wolves and the Impact of Prior Information in Bayesian Mixing Models

    PubMed Central

    Bucci, Melanie E.; Callahan, Peggy; Koprowski, John L.; Polfus, Jean L.; Krausman, Paul R.

    2015-01-01

    Stable isotope analysis of diet has become a common tool in conservation research. However, the multiple sources of uncertainty inherent in this analysis framework involve consequences that have not been thoroughly addressed. Uncertainty arises from the choice of trophic discrimination factors, and for Bayesian stable isotope mixing models (SIMMs), the specification of prior information; the combined effect of these aspects has not been explicitly tested. We used a captive feeding study of gray wolves (Canis lupus) to determine the first experimentally-derived trophic discrimination factors of C and N for this large carnivore of broad conservation interest. Using the estimated diet in our controlled system and data from a published study on wild wolves and their prey in Montana, USA, we then investigated the simultaneous effect of discrimination factors and prior information on diet reconstruction with Bayesian SIMMs. Discrimination factors for gray wolves and their prey were 1.97‰ for δ13C and 3.04‰ for δ15N. Specifying wolf discrimination factors, as opposed to the commonly used red fox (Vulpes vulpes) factors, made little practical difference to estimates of wolf diet, but prior information had a strong effect on bias, precision, and accuracy of posterior estimates. Without specifying prior information in our Bayesian SIMM, it was not possible to produce SIMM posteriors statistically similar to the estimated diet in our controlled study or the diet of wild wolves. Our study demonstrates the critical effect of prior information on estimates of animal diets using Bayesian SIMMs, and suggests species-specific trophic discrimination factors are of secondary importance. When using stable isotope analysis to inform conservation decisions researchers should understand the limits of their data. It may be difficult to obtain useful information from SIMMs if informative priors are omitted and species-specific discrimination factors are unavailable. PMID:25803664
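
    For reference, the generic mixing equation underlying such Bayesian SIMMs (a standard statement rather than the paper's exact specification; p_i is the diet proportion of source i, \delta X_i its isotopic signature for isotope X, and \Delta_i the trophic discrimination factor whose choice the study examines):

      \[ \delta X_{\mathrm{mix}} = \sum_i p_i \left( \delta X_i + \Delta_i \right), \qquad \sum_i p_i = 1, \quad p_i \ge 0 \]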

  11. Experimentally derived δ¹³C and δ¹⁵N discrimination factors for gray wolves and the impact of prior information in Bayesian mixing models.

    PubMed

    Derbridge, Jonathan J; Merkle, Jerod A; Bucci, Melanie E; Callahan, Peggy; Koprowski, John L; Polfus, Jean L; Krausman, Paul R

    2015-01-01

    Stable isotope analysis of diet has become a common tool in conservation research. However, the multiple sources of uncertainty inherent in this analysis framework involve consequences that have not been thoroughly addressed. Uncertainty arises from the choice of trophic discrimination factors, and for Bayesian stable isotope mixing models (SIMMs), the specification of prior information; the combined effect of these aspects has not been explicitly tested. We used a captive feeding study of gray wolves (Canis lupus) to determine the first experimentally-derived trophic discrimination factors of C and N for this large carnivore of broad conservation interest. Using the estimated diet in our controlled system and data from a published study on wild wolves and their prey in Montana, USA, we then investigated the simultaneous effect of discrimination factors and prior information on diet reconstruction with Bayesian SIMMs. Discrimination factors for gray wolves and their prey were 1.97‰ for δ13C and 3.04‰ for δ15N. Specifying wolf discrimination factors, as opposed to the commonly used red fox (Vulpes vulpes) factors, made little practical difference to estimates of wolf diet, but prior information had a strong effect on bias, precision, and accuracy of posterior estimates. Without specifying prior information in our Bayesian SIMM, it was not possible to produce SIMM posteriors statistically similar to the estimated diet in our controlled study or the diet of wild wolves. Our study demonstrates the critical effect of prior information on estimates of animal diets using Bayesian SIMMs, and suggests species-specific trophic discrimination factors are of secondary importance. When using stable isotope analysis to inform conservation decisions researchers should understand the limits of their data. It may be difficult to obtain useful information from SIMMs if informative priors are omitted and species-specific discrimination factors are unavailable.

  12. Interoperability and information discovery

    USGS Publications Warehouse

    Christian, E.

    2001-01-01

    In the context of information systems, there is interoperability when the distinctions between separate information systems are not a barrier to accomplishing a task that spans those systems. Interoperability so defined implies that there are commonalities among the systems involved and that one can exploit such commonalities to achieve interoperability. The challenge of a particular interoperability task is to identify relevant commonalities among the systems involved and to devise mechanisms that exploit those commonalities. The present paper focuses on the particular interoperability task of information discovery. The Global Information Locator Service (GILS) is described as a policy, standards, and technology framework for addressing interoperable information discovery on a global and long-term basis. While there are many mechanisms for people to discover and use all manner of data and information resources, GILS initiatives exploit certain key commonalities that seem to be sufficient to realize useful information discovery interoperability at a global, long-term scale. This paper describes ten of the specific commonalities that are key to GILS initiatives. It presents some of the practical implications for organizations in various roles: content provider, system engineer, intermediary, and searcher. The paper also provides examples of interoperable information discovery as deployed using GILS in four types of information communities: bibliographic, geographic, environmental, and government.

  13. Sheep (Ovis aries) as a Model for Cardiovascular Surgery and Management before, during, and after Cardiopulmonary Bypass

    PubMed Central

    DiVincenti, Louis; Westcott, Robin; Lee, Candice

    2014-01-01

    Because of its similarity to humans in important respects, sheep (Ovis aries) are a common animal model for translational research in cardiovascular surgery. However, some unique aspects of sheep anatomy and physiology present challenges to its use in these complicated experiments. In this review, we discuss relevant anatomy and physiology of sheep and discuss management before, during, and after procedures requiring cardiopulmonary bypass to provide a concise source of information for veterinarians, technicians, and researchers developing and implementing protocols with this model. PMID:25255065

  14. AR(p) -based detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Alvarez-Ramirez, J.; Rodriguez, E.

    2018-07-01

Autoregressive models are commonly used for modeling time series from nature, economics, and finance. This work explored simple autoregressive AR(p) models for removing long-term trends in detrended fluctuation analysis (DFA). Crude oil prices and the bitcoin exchange rate were considered, the former corresponding to a mature market and the latter to an emergent one. Results showed that AR(p)-based DFA performs similarly to traditional DFA. However, the AR(p)-based variant also provides information on the stability of long-term trends, which is valuable for understanding and quantifying the dynamics of complex time series from financial systems.
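
    For orientation, a minimal Python sketch of traditional DFA, the baseline to which the AR(p) variant is reported to perform similarly (the series, scales, and detrending order are hypothetical choices; the AR(p) variant would replace the per-window polynomial fit with an autoregressive one):

      import numpy as np

      def dfa(x, order=1, scales=(16, 32, 64, 128, 256)):
          profile = np.cumsum(x - x.mean())      # integrate the series
          F = []
          for s in scales:
              rms = []
              for w in range(len(profile) // s):
                  seg = profile[w * s:(w + 1) * s]
                  t = np.arange(s)
                  trend = np.polyval(np.polyfit(t, seg, order), t)
                  rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
              F.append(np.mean(rms))
          # Scaling exponent alpha: slope of log F(s) against log s.
          return np.polyfit(np.log(scales), np.log(F), 1)[0]

      rng = np.random.default_rng(3)
      print(round(dfa(rng.normal(size=8192)), 2))  # ~0.5 for white noise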

  15. Privacy-Preserving Predictive Modeling: Harmonization of Contextual Embeddings From Different Sources.

    PubMed

    Huang, Yingxiang; Lee, Junghye; Wang, Shuang; Sun, Jimeng; Liu, Hongfang; Jiang, Xiaoqian

    2018-05-16

Data sharing has been a big challenge in biomedical informatics because of privacy concerns. Contextual embedding models have demonstrated a very strong capability to represent medical concepts (and their context), and they have shown promise as an alternative way to support deep-learning applications without the need to disclose original data. However, contextual embedding models acquired from individual hospitals cannot be directly combined because their embedding spaces differ, and naive pooling renders the combined embeddings useless. The aim of this study was to present a novel approach to address these issues and to promote sharing of representations without sharing data. Without sacrificing privacy, we also aimed to build a global model from representations learned from local private data and to synchronize information from multiple sources. We propose a methodology that harmonizes different local contextual embeddings into a global model. We used Word2Vec to generate contextual embeddings from each source and Procrustes analysis to fuse the different vector models into one common space, using a list of corresponding pairs as anchor points. We performed prediction analysis with the harmonized embeddings. We used sequential medical events extracted from the Medical Information Mart for Intensive Care III database to evaluate the proposed methodology in predicting the next likely diagnosis of a new patient using either structured or unstructured data. Under different experimental scenarios, we confirmed that the global model built from harmonized local models achieves more accurate predictions than local models and than global models built from naive pooling. Such aggregation of local models using our harmonization can serve as a proxy for a global model, combining information from a wide range of institutions and information sources. It allows information unique to a certain hospital to become available to other sites, increasing the fluidity of information flow in health care. ©Yingxiang Huang, Junghye Lee, Shuang Wang, Jimeng Sun, Hongfang Liu, Xiaoqian Jiang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 16.05.2018.
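
    As an illustration of the harmonization step, a minimal Python sketch of orthogonal Procrustes alignment of one site's embedding space onto another's, using shared concepts as anchor points. The matrices are synthetic stand-ins; in practice the rows would be each hospital's Word2Vec vectors for the anchor terms:

      import numpy as np
      from scipy.linalg import orthogonal_procrustes

      rng = np.random.default_rng(4)
      anchors_a = rng.normal(size=(50, 100))     # anchor vectors, hospital A
      R_true = np.linalg.qr(rng.normal(size=(100, 100)))[0]
      anchors_b = anchors_a @ R_true             # hospital B: rotated space

      # Rotation mapping B's space onto A's (the shared) space.
      R, _ = orthogonal_procrustes(anchors_b, anchors_a)
      aligned_b = anchors_b @ R

      print("alignment error:", np.linalg.norm(aligned_b - anchors_a))  # ~0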

  16. Product component genealogy modeling and field-failure prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Caleb; Hong, Yili; Meeker, William Q.

    Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.
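    As a toy illustration of why generation matters, the Monte Carlo sketch below assigns each production unit a component generation by build order and draws Weibull lifetimes with generation-specific parameters; the shapes, scales, and change point are all hypothetical. A pooled model that ignored the design change would average the two vintages and mispredict field failures for both.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Weibull (shape, scale) per component generation; a design
# change at unit 5000 replaces generation "A" with the improved "B".
params = {"A": (1.5, 2.0e4), "B": (1.5, 3.5e4)}

def fraction_failing(n_units, change_point, horizon):
    """Per-generation fraction of units predicted to fail by `horizon`."""
    gens = np.where(np.arange(n_units) < change_point, "A", "B")
    shape = np.array([params[g][0] for g in gens])
    scale = np.array([params[g][1] for g in gens])
    lifetimes = scale * rng.weibull(shape)      # one lifetime draw per unit
    return {g: float(np.mean(lifetimes[gens == g] < horizon)) for g in params}

print(fraction_failing(10_000, 5_000, horizon=1.0e4))
# generation "A" fails noticeably more often by the horizon than "B"
```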

  17. Product component genealogy modeling and field-failure prediction

    DOE PAGES

    King, Caleb; Hong, Yili; Meeker, William Q.

    2016-04-13

    Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.

  18. Ensembles vs. information theory: supporting science under uncertainty

    NASA Astrophysics Data System (ADS)

    Nearing, Grey S.; Gupta, Hoshin V.

    2018-05-01

    Multi-model ensembles are one of the most common ways to deal with epistemic uncertainty in hydrology. This is a problem because there is no known way to sample models such that the resulting ensemble admits a measure that has any systematic (i.e., asymptotic, bounded, or consistent) relationship with uncertainty. Multi-model ensembles are effectively sensitivity analyses and cannot - even partially - quantify uncertainty. One consequence of this is that multi-model approaches cannot support a consistent scientific method - in particular, multi-model approaches yield unbounded errors in inference. In contrast, information theory supports a coherent hypothesis test that is robust to (i.e., bounded under) arbitrary epistemic uncertainty. This paper may be understood as advocating a procedure for hypothesis testing that does not require quantifying uncertainty, but is coherent and reliable (i.e., bounded) in the presence of arbitrary (unknown and unknowable) uncertainty. We conclude by offering some suggestions about how this proposed philosophy of science suggests new ways to conceptualize and construct simulation models of complex, dynamical systems.

  19. A Framework for Modeling Emerging Diseases to Inform Management

    PubMed Central

    Katz, Rachel A.; Richgels, Katherine L.D.; Walsh, Daniel P.; Grant, Evan H.C.

    2017-01-01

    The rapid emergence and reemergence of zoonotic diseases requires the ability to rapidly evaluate and implement optimal management decisions. Actions to control or mitigate the effects of emerging pathogens are commonly delayed because of uncertainty in the estimates and the predicted outcomes of the control tactics. The development of models that describe the best-known information regarding the disease system at the early stages of disease emergence is an essential step for optimal decision-making. Models can predict the potential effects of the pathogen, provide guidance for assessing the likelihood of success of different proposed management actions, quantify the uncertainty surrounding the choice of the optimal decision, and highlight critical areas for immediate research. We demonstrate how to develop models that can be used as a part of a decision-making framework to determine the likelihood of success of different management actions given current knowledge. PMID:27983501

  20. A Framework for Modeling Emerging Diseases to Inform Management.

    PubMed

    Russell, Robin E; Katz, Rachel A; Richgels, Katherine L D; Walsh, Daniel P; Grant, Evan H C

    2017-01-01

    The rapid emergence and reemergence of zoonotic diseases requires the ability to rapidly evaluate and implement optimal management decisions. Actions to control or mitigate the effects of emerging pathogens are commonly delayed because of uncertainty in the estimates and the predicted outcomes of the control tactics. The development of models that describe the best-known information regarding the disease system at the early stages of disease emergence is an essential step for optimal decision-making. Models can predict the potential effects of the pathogen, provide guidance for assessing the likelihood of success of different proposed management actions, quantify the uncertainty surrounding the choice of the optimal decision, and highlight critical areas for immediate research. We demonstrate how to develop models that can be used as a part of a decision-making framework to determine the likelihood of success of different management actions given current knowledge.

  1. A 360° Vision for Virtual Organizations Characterization and Modelling: Two Intentional Level Aspects

    NASA Astrophysics Data System (ADS)

    Priego-Roche, Luz-María; Rieu, Dominique; Front, Agnès

    Nowadays, organizations aiming to be successful in an increasingly competitive market tend to group together into virtual organizations. Designing the information system (IS) of such virtual organizations on the basis of the IS of those participating is a real challenge. The IS of a virtual organization plays an important role in the collaboration and cooperation of the participating organizations and in reaching the common goal. This article proposes criteria allowing virtual organizations to be identified and classified at an intentional level, as well as the information necessary for designing the organizations' IS. Instantiation of the criteria for a specific virtual organization and its participants will allow simple graphical models to be generated in a modelling tool. The models will be used as bases for the IS design at organizational and operational levels. The approach is illustrated by the example of the virtual organization UGRT (a regional stockbreeders union in Tabasco, Mexico).

  2. Motion and force control for multiple cooperative manipulators

    NASA Technical Reports Server (NTRS)

    Wen, John T.; Kreutz, Kenneth

    1989-01-01

    The motion and force control of multiple robot arms manipulating a commonly held object is addressed. A general control paradigm that decouples the motion and force control problems is introduced. For motion control, there are three natural choices: (1) joint torques, (2) arm-tip force vectors, and (3) the acceleration of a generalized coordinate. Choice (1) allows a class of relatively model-independent control laws by exploiting the Hamiltonian structure of the open-loop system; choices (2) and (3) require full model information but produce simpler problems. To resolve the nonuniqueness of the joint torques, two methods are introduced. If the arm and object models are available, the allocation of the desired end-effector control force to the joint actuators can be optimized; otherwise, the internal force can be controlled about some set point. It is shown that effective force regulation can be achieved even if little model information is available.
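    The allocation step can be sketched with basic linear algebra: the net object wrench is linear in the stacked tip forces, so the pseudoinverse gives a minimum-norm allocation and any null-space component is a pure internal (squeeze) force. The two-arm planar geometry below is a hypothetical example, not taken from the paper.

```python
import numpy as np

# Two planar arms holding an object: the grasp map W stacks each tip
# force (fx, fy) into the net object wrench (Fx, Fy, torque).
# Assumed geometry: tips at x = -0.5 and x = +0.5 in the object frame.
W = np.array([
    [1.0, 0.0, 1.0, 0.0],    # net Fx
    [0.0, 1.0, 0.0, 1.0],    # net Fy
    [0.0, -0.5, 0.0, 0.5],   # net torque about the object's center
])

F_des = np.array([0.0, 9.81, 0.0])   # e.g., support the object's weight

# Minimum-norm allocation: the pseudoinverse resolves the nonuniqueness.
f_min = np.linalg.pinv(W) @ F_des

# Add an internal (squeeze) force from the null space of W: it changes
# the grasp forces without changing the net wrench on the object.
null_vec = np.linalg.svd(W)[2][-1]   # vector with W @ null_vec ~ 0
f = f_min + 2.0 * null_vec

print("tip forces:", f.round(3), "net wrench:", (W @ f).round(3))
```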

  3. A framework for modeling emerging diseases to inform management

    USGS Publications Warehouse

    Russell, Robin E.; Katz, Rachel A.; Richgels, Katherine L. D.; Walsh, Daniel P.; Grant, Evan H. Campbell

    2017-01-01

    The rapid emergence and reemergence of zoonotic diseases requires the ability to rapidly evaluate and implement optimal management decisions. Actions to control or mitigate the effects of emerging pathogens are commonly delayed because of uncertainty in the estimates and the predicted outcomes of the control tactics. The development of models that describe the best-known information regarding the disease system at the early stages of disease emergence is an essential step for optimal decision-making. Models can predict the potential effects of the pathogen, provide guidance for assessing the likelihood of success of different proposed management actions, quantify the uncertainty surrounding the choice of the optimal decision, and highlight critical areas for immediate research. We demonstrate how to develop models that can be used as a part of a decision-making framework to determine the likelihood of success of different management actions given current knowledge.

  4. Polya's bees: A model of decentralized decision-making.

    PubMed

    Golman, Russell; Hagmann, David; Miller, John H

    2015-09-01

    How do social systems make decisions with no single individual in control? We observe that a variety of natural systems, including colonies of ants and bees and perhaps even neurons in the human brain, make decentralized decisions using common processes involving information search with positive feedback and consensus choice through quorum sensing. We model this process with an urn scheme that runs until hitting a threshold, and we characterize an inherent tradeoff between the speed and the accuracy of a decision. The proposed common mechanism provides a robust and effective means by which a decentralized system can navigate the speed-accuracy tradeoff and make reasonably good, quick decisions in a variety of environments. Additionally, consensus choice exhibits systemic risk aversion even while individuals are idiosyncratically risk-neutral. This too is adaptive. The model illustrates how natural systems make decentralized decisions, illuminating a mechanism that engineers of social and artificial systems could imitate.
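    A minimal simulation of the urn scheme is sketched below: support for an option grows by positive feedback, and a decision is declared when one option reaches a quorum threshold. The quality weighting and specific parameter values are illustrative assumptions used to expose the speed-accuracy tradeoff, not the paper's exact formulation.

```python
import numpy as np

def urn_decision(threshold, quality=(1.0, 1.2), seed=None):
    """Polya-like urn with positive feedback: sample an option with
    probability proportional to (support x quality), reinforce it, and
    stop when one option's support reaches the quorum threshold."""
    rng = np.random.default_rng(seed)
    counts = np.array([1.0, 1.0])     # one ball per option to start
    t = 0
    while counts.max() < threshold:
        w = counts * np.asarray(quality)
        i = rng.choice(2, p=w / w.sum())
        counts[i] += 1.0              # positive feedback
        t += 1
    return counts.argmax(), t

# Speed-accuracy tradeoff: larger quorums take longer but pick the
# better option (index 1) more often.
for q in (5, 20, 80):
    runs = [urn_decision(q, seed=s) for s in range(300)]
    acc = np.mean([winner == 1 for winner, _ in runs])
    print(f"quorum={q:3d}  accuracy={acc:.2f}  "
          f"mean time={np.mean([t for _, t in runs]):.0f}")
```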

  5. Best (but oft-forgotten) practices: mediation analysis.

    PubMed

    Fairchild, Amanda J; McDaniel, Heather L

    2017-06-01

    This contribution in the "Best (but Oft-Forgotten) Practices" series considers mediation analysis. A mediator (sometimes referred to as an intermediate variable, surrogate endpoint, or intermediate endpoint) is a third variable that explains how or why ≥2 other variables relate in a putative causal pathway. The current article discusses mediation analysis with the ultimate intention of helping nutrition researchers to clarify the rationale for examining mediation, avoid common pitfalls when using the model, and conduct well-informed analyses that can contribute to improving causal inference in evaluations of underlying mechanisms of effects on nutrition-related behavioral and health outcomes. We give specific attention to underevaluated limitations inherent in common approaches to mediation. In addition, we discuss how to conduct a power analysis for mediation models and offer an applied example to demonstrate mediation analysis. Finally, we provide an example write-up of mediation analysis results as a model for applied researchers. © 2017 American Society for Nutrition.
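    A bare-bones version of the standard product-of-coefficients analysis with a bootstrap confidence interval (generally preferred over the normal-theory Sobel test) is sketched below on simulated data; the variable roles and effect sizes are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.standard_normal(n)                       # e.g., an intervention
m = 0.5 * x + rng.standard_normal(n)             # mediator
y = 0.4 * m + 0.2 * x + rng.standard_normal(n)   # outcome

def ols(X, y):
    """Return slope coefficients (intercept dropped)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

def indirect(x, m, y):
    a = ols(x[:, None], m)[0]                # path a: x -> m
    b = ols(np.column_stack([m, x]), y)[0]   # path b: m -> y, controlling x
    return a * b                             # indirect (mediated) effect

# Percentile bootstrap CI for a*b
boot = [indirect(*(arr[idx] for arr in (x, m, y)))
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
print("indirect effect:", round(indirect(x, m, y), 3),
      "95% CI:", np.percentile(boot, [2.5, 97.5]).round(3))
```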

  6. Best (but oft-forgotten) practices: mediation analysis12

    PubMed Central

    McDaniel, Heather L

    2017-01-01

    This contribution in the “Best (but Oft-Forgotten) Practices” series considers mediation analysis. A mediator (sometimes referred to as an intermediate variable, surrogate endpoint, or intermediate endpoint) is a third variable that explains how or why ≥2 other variables relate in a putative causal pathway. The current article discusses mediation analysis with the ultimate intention of helping nutrition researchers to clarify the rationale for examining mediation, avoid common pitfalls when using the model, and conduct well-informed analyses that can contribute to improving causal inference in evaluations of underlying mechanisms of effects on nutrition-related behavioral and health outcomes. We give specific attention to underevaluated limitations inherent in common approaches to mediation. In addition, we discuss how to conduct a power analysis for mediation models and offer an applied example to demonstrate mediation analysis. Finally, we provide an example write-up of mediation analysis results as a model for applied researchers. PMID:28446497

  7. Polya’s bees: A model of decentralized decision-making

    PubMed Central

    Golman, Russell; Hagmann, David; Miller, John H.

    2015-01-01

    How do social systems make decisions with no single individual in control? We observe that a variety of natural systems, including colonies of ants and bees and perhaps even neurons in the human brain, make decentralized decisions using common processes involving information search with positive feedback and consensus choice through quorum sensing. We model this process with an urn scheme that runs until hitting a threshold, and we characterize an inherent tradeoff between the speed and the accuracy of a decision. The proposed common mechanism provides a robust and effective means by which a decentralized system can navigate the speed-accuracy tradeoff and make reasonably good, quick decisions in a variety of environments. Additionally, consensus choice exhibits systemic risk aversion even while individuals are idiosyncratically risk-neutral. This too is adaptive. The model illustrates how natural systems make decentralized decisions, illuminating a mechanism that engineers of social and artificial systems could imitate. PMID:26601255

  8. Key Factors for Determining Risk of Groundwater Impacts Due to Leakage from Geologic Carbon Sequestration Reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carroll, Susan; Keating, Elizabeth; Mansoor, Kayyum

    2014-01-06

    The National Risk Assessment Partnership (NRAP) is developing a science-based toolset for the analysis of potential impacts to groundwater chemistry from CO2 injection (www.netl.doe.gov/nrap). The toolset adopts a stochastic approach in which predictions address uncertainties in shallow aquifers and in leakage scenarios. It is derived from detailed physics and chemistry simulation results that are used to train more computationally efficient models, referred to here as reduced-order models (ROMs), for each component system. In particular, these tools can be used to help regulators and operators understand the expected sizes and longevity of plumes of changed pH, TDS, and dissolved metals that could result from a leakage of brine and/or CO2 from a storage reservoir into aquifers. This information can inform, for example, decisions on monitoring strategies that are both effective and efficient. We have used this approach to develop predictive reduced-order models for two common types of reservoirs, but the approach could be used to develop a model for a specific aquifer or other common types of aquifers. In this paper we describe potential impacts to groundwater quality due to CO2 and brine leakage, discuss an approach to calculate thresholds under which "no impact" to groundwater occurs, describe the time scale for impact on groundwater, and discuss the probability of detecting a groundwater plume should leakage occur.

  9. Sensitivity-Based Guided Model Calibration

    NASA Astrophysics Data System (ADS)

    Semnani, M.; Asadzadeh, M.

    2017-12-01

    A common practice in automatic calibration of hydrologic models is applying sensitivity analysis prior to global optimization to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good-quality solutions in fewer solution evaluations. This improvement can be achieved by focusing the optimization on sampling the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on the most sensitive decision variables for perturbation. The performance of DDS with sensitivity information is compared to the original version of DDS for different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as original DDS, however in significantly fewer solution evaluations.
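    One way to realize the sensitivity-weighted selection is sketched below: the standard DDS inclusion probability, 1 - ln(i)/ln(max_iter), is scaled per parameter by normalized sensitivity scores, so sensitive decision variables are perturbed more often. The weighting rule and all test values are assumptions for illustration; the paper's exact scheme may differ.

```python
import numpy as np

def dds_sensitivity(f, lo, hi, sens, max_iter=1000, r=0.2, seed=0):
    """Minimize f over box [lo, hi] with a DDS variant whose per-parameter
    selection probability is weighted by sensitivity scores `sens`."""
    rng = np.random.default_rng(seed)
    x_best = rng.uniform(lo, hi)
    f_best = f(x_best)
    w = np.asarray(sens) / np.sum(sens)
    for i in range(1, max_iter + 1):
        p_incl = 1.0 - np.log(i) / np.log(max_iter)   # standard DDS schedule
        # Emphasize sensitive parameters: scale inclusion prob by weight.
        mask = rng.random(len(lo)) < np.clip(p_incl * w * len(lo), 0, 1)
        if not mask.any():
            mask[rng.choice(len(lo), p=w)] = True     # always perturb one DV
        x_new = x_best.copy()
        x_new[mask] += r * (hi - lo)[mask] * rng.standard_normal(mask.sum())
        x_new = np.clip(x_new, lo, hi)
        if (f_new := f(x_new)) < f_best:              # greedy acceptance
            x_best, f_best = x_new, f_new
    return x_best, f_best

lo, hi = np.zeros(5), np.ones(5) * 10
sens = np.array([5.0, 3.0, 1.0, 0.5, 0.5])            # hypothetical SA output
x, fx = dds_sensitivity(lambda v: np.sum((v - 3) ** 2), lo, hi, sens)
print(x.round(2), round(fx, 4))
```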

  10. Information loss and reconstruction in diffuse fluorescence tomography

    PubMed Central

    Bonfert-Taylor, Petra; Leblond, Frederic; Holt, Robert W.; Tichauer, Kenneth; Pogue, Brian W.; Taylor, Edward C.

    2012-01-01

    This paper is a theoretical exploration of spatial resolution in diffuse fluorescence tomography. It is demonstrated that, given a fixed imaging geometry, one cannot—relative to standard techniques such as Tikhonov regularization and truncated singular value decomposition—improve the spatial resolution of the optical reconstructions via increasing the node density of the mesh considered for modeling light transport. Using techniques from linear algebra, it is shown that, as one increases the number of nodes beyond the number of measurements, information is lost by the forward model. It is demonstrated that this information cannot be recovered using various common reconstruction techniques. Evidence is provided showing that this phenomenon is related to the smoothing properties of the elliptic forward model that is used in the diffusion approximation to light transport in tissue. This argues for reconstruction techniques that are sensitive to boundaries, such as L1-reconstruction and the use of priors, as well as the natural approach of building a measurement geometry that reflects the desired image resolution. PMID:22472763
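    The linear-algebra argument is easy to reproduce. In the sketch below a random wide matrix stands in for the smoothing forward model: once the number of nodes exceeds the number of measurements, part of the true image lies in the null space of the operator, no truncated-SVD (or otherwise regularized) inversion can recover it, and refining the mesh only enlarges that null space.

```python
import numpy as np

rng = np.random.default_rng(4)
n_meas, n_nodes = 32, 256                     # far more nodes than measurements
A = rng.standard_normal((n_meas, n_nodes))    # stand-in for the forward model

x_true = np.zeros(n_nodes)
x_true[100:110] = 1.0                         # a compact fluorophore inclusion
b = A @ x_true                                # noiseless measurements

# Truncated SVD reconstruction: at most rank(A) <= n_meas components of
# x are recoverable; the rest lie in the null space of A and are lost.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = n_meas
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Component of the truth invisible to the measurements (null-space part).
null_part = x_true - Vt.T @ (Vt @ x_true)
print("norm of x_true in null space:", round(float(np.linalg.norm(null_part)), 3))
print("TSVD reconstruction error:   ", round(float(np.linalg.norm(x_tsvd - x_true)), 3))
```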

  11. Towards a distributed information architecture for avionics data

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris; Freeborn, Dana; Crichton, Dan

    2003-01-01

    Avionics data at the National Aeronautics and Space Administration's (NASA) Jet Propulsion Laboratory (JPL) consists of distributed, unmanaged, and heterogeneous information that is hard for flight system design engineers to find and use on new NASA/JPL missions. The development of a systematic approach for capturing, accessing and sharing avionics data critical to the support of NASA/JPL missions and projects is required. We propose a general information architecture for managing the existing distributed avionics data sources and a method for querying and retrieving avionics data using the Object Oriented Data Technology (OODT) framework. OODT uses an XML messaging infrastructure that profiles data products and their locations using the ISO-11179 data model for describing data products. Queries against a common data dictionary (which implements the ISO model) are translated to domain-dependent source data models, and distributed data products are returned asynchronously through the OODT middleware. Further work will include the ability to 'plug and play' new manufacturer data sources, which are distributed at avionics component manufacturer locations throughout the United States.

  12. A flexible count data regression model for risk analysis.

    PubMed

    Guikema, Seth D; Coffelt, Jeremy P

    2008-02-01

    In many cases, risk and reliability analyses involve estimating the probabilities of discrete events such as hardware failures and occurrences of disease or death. There is often additional information in the form of explanatory variables that can be used to help estimate the likelihood of different numbers of events in the future through the use of an appropriate regression model, such as a generalized linear model. However, existing generalized linear models (GLM) are limited in their ability to handle the types of variance structures often encountered in using count data in risk and reliability analysis. In particular, standard models cannot handle both underdispersed data (variance less than the mean) and overdispersed data (variance greater than the mean) in a single coherent modeling framework. This article presents a new GLM based on a reformulation of the Conway-Maxwell Poisson (COM) distribution that is useful for both underdispersed and overdispersed count data and demonstrates this model by applying it to the assessment of electric power system reliability. The results show that the proposed COM GLM provides fits as good as those of the commonly used existing models for overdispersed data sets while outperforming those models for underdispersed data sets.
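    The distribution at the heart of the model is simple to write down. The sketch below evaluates the COM pmf, P(Y = y) ∝ lam**y / (y!)**nu, by direct normalization and shows how nu moves the variance above or below the mean; linking lam (or a centering reparameterization of it, as in the paper's GLM) to covariates through a log link is omitted here.

```python
import numpy as np
from math import exp, lgamma, log

def com_poisson_pmf(y, lam, nu, j_max=200):
    """COM-Poisson pmf via brute-force normalization. nu = 1 recovers the
    Poisson; nu > 1 gives underdispersion, nu < 1 overdispersion."""
    logw = lambda j: j * log(lam) - nu * lgamma(j + 1)   # log of lam**j/(j!)**nu
    log_z = np.logaddexp.reduce([logw(j) for j in range(j_max)])
    return exp(logw(y) - log_z)

for nu in (0.5, 1.0, 2.0):
    ys = np.arange(60)
    pmf = np.array([com_poisson_pmf(y, lam=4.0, nu=nu) for y in ys])
    mean = (ys * pmf).sum()
    var = ((ys - mean) ** 2 * pmf).sum()
    print(f"nu={nu}: mean={mean:.2f}  variance={var:.2f}")
# variance > mean for nu=0.5, = mean for nu=1, < mean for nu=2
```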

  13. Applying a health behavior theory to explore the influence of information and experience on arsenic risk representations, policy beliefs, and protective behavior.

    PubMed

    Severtson, Dolores J; Baumann, Linda C; Brown, Roger L

    2006-04-01

    The common sense model (CSM) shows how people process information to construct representations, or mental models, that guide responses to health threats. We applied the CSM to understand how people responded to information about arsenic-contaminated well water. Constructs included external information (arsenic level and information use), experience (perceived water quality and arsenic-related health effects), representations, safety judgments, opinions about policies to mitigate environmental arsenic, and protective behavior. Of 649 surveys mailed to private well users with arsenic levels exceeding the maximum contaminant level, 545 (84%) were analyzed. Structural equation modeling quantified CSM relationships. Both external information and experience had substantial effects on behavior. Participants who identified a water problem were more likely to reduce exposure to arsenic. However, about 60% perceived good water quality and 60% safe water. Participants with higher arsenic levels selected higher personal safety thresholds and 20% reported a lower arsenic level than indicated by their well test. These beliefs would support judgments of safe water. A variety of psychological and contextual factors may explain judgments of safe water when information suggested otherwise. Information use had an indirect effect on policy beliefs through understanding environmental causes of arsenic. People need concrete information about environmental risk at both personal and environmental-systems levels to promote a comprehensive understanding and response. The CSM explained responses to arsenic information and may have application to other environmental risks.

  14. Sediment-Hosted Zinc-Lead Deposits of the World - Database and Grade and Tonnage Models

    USGS Publications Warehouse

    Singer, Donald A.; Berger, Vladimir I.; Moring, Barry C.

    2009-01-01

    This report provides information on sediment-hosted zinc-lead mineral deposits based on the geologic settings that are observed on regional geologic maps. The foundation of mineral-deposit models is information about known deposits. The purpose of this publication is to make this kind of information available in digital form for sediment-hosted zinc-lead deposits. Mineral-deposit models are important in exploration planning and quantitative resource assessments: Grades and tonnages among deposit types are significantly different, and many types occur in different geologic settings that can be identified from geologic maps. Mineral-deposit models are the keystone in combining the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Too few thoroughly explored mineral deposits are available in most local areas for reliable identification of the important geoscience variables, or for robust estimation of undiscovered deposits - thus, we need mineral-deposit models. Globally based deposit models allow recognition of important features because the global models demonstrate how common different features are. Well-designed and -constructed deposit models allow geologists to know from observed geologic environments the possible mineral-deposit types that might exist, and allow economists to determine the possible economic viability of these resources in the region. Thus, mineral-deposit models play the central role in transforming geoscience information to a form useful to policy makers. This publication contains a computer file of information on sediment-hosted zinc-lead deposits from around the world. It also presents new grade and tonnage models for nine types of these deposits and a file allowing locations of all deposits to be plotted in Google Earth. The data are presented in FileMaker Pro, Excel and text files to make the information available to as many users as possible. The value of this information and any derived analyses depends critically on the consistent manner of data gathering. For this reason, we first discuss the rules applied in this compilation. Next, the fields of the data file are considered. Finally, we provide new grade and tonnage models that are, for the most part, based on a classification of deposits using observable geologic units from regional-scaled maps.

  15. Influence of prior information on pain involves biased perceptual decision-making.

    PubMed

    Wiech, Katja; Vandekerckhove, Joachim; Zaman, Jonas; Tuerlinckx, Francis; Vlaeyen, Johan W S; Tracey, Irene

    2014-08-04

    Prior information about features of a stimulus is a strong modulator of perception. For instance, the prospect of more intense pain leads to an increased perception of pain, whereas the expectation of analgesia reduces pain, as shown in placebo analgesia and expectancy modulations during drug administration. This influence is commonly assumed to be rooted in altered sensory processing, and expectancy-related modulations in the spinal cord are often taken as evidence for this notion. Contemporary models of perception, however, suggest that prior information can also modulate perception by biasing perceptual decision-making - the inferential process underlying perception in which prior information is used to interpret sensory information. In this type of bias, the information is already present in the system before the stimulus is observed. Computational models can distinguish between changes in sensory processing and altered decision-making as they result in different response times for incorrect choices in a perceptual decision-making task (Figure S1A,B). Using a drift-diffusion model, we investigated the influence of both processes in two independent experiments. The results of both experiments strongly suggest that these changes in pain perception are predominantly based on altered perceptual decision-making. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
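    The diagnostic logic can be reproduced with a few lines of simulation. In the sketch below, the same prior toward reporting pain is implemented either as a shifted starting point (a decision bias) or as an increased drift rate (altered sensory evidence); the two mechanisms yield different error rates and error response times, which is what lets a fitted diffusion model tell them apart. All parameter values are illustrative.

```python
import numpy as np

def ddm_trial(drift, start, bound=1.0, dt=0.005, rng=None):
    """One diffusion-to-bound trial; upper bound = 'pain', lower = 'no pain'."""
    x, t = start, 0.0
    while abs(x) < bound:
        x += drift * dt + np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x > 0, t

def error_stats(drift, start, n=500, seed=0):
    rng = np.random.default_rng(seed)
    trials = [ddm_trial(drift, start, rng=rng) for _ in range(n)]
    err_rts = [t for up, t in trials if not up]
    mean_err_rt = np.mean(err_rts) if err_rts else float("nan")
    return round(mean_err_rt, 3), round(len(err_rts) / n, 3)

# The same boost toward 'pain' responses implemented two ways: a shifted
# starting point (decision bias) versus a higher drift rate (sensory gain).
print("start-point bias  (err RT, err rate):", error_stats(drift=1.0, start=0.3))
print("drift-rate change (err RT, err rate):", error_stats(drift=1.4, start=0.0))
```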

  16. Clinical information modeling processes for semantic interoperability of electronic health records: systematic review and inductive analysis.

    PubMed

    Moreno-Conde, Alberto; Moner, David; Cruz, Wellington Dimas da; Santos, Marcelo R; Maldonado, José Alberto; Robles, Montserrat; Kalra, Dipak

    2015-07-01

    This systematic review aims to identify and compare the existing processes and methodologies that have been published in the literature for defining clinical information models (CIMs) that support the semantic interoperability of electronic health record (EHR) systems. Following the preferred reporting items for systematic reviews and meta-analyses systematic review methodology, the authors reviewed papers published between 2000 and 2013 that covered the semantic interoperability of EHRs, found by searching the PubMed, IEEE Xplore, and ScienceDirect databases. Additionally, after selection of a final group of articles, an inductive content analysis was done to summarize the steps and methodologies followed in order to build the CIMs described in those articles. Three hundred and seventy-eight articles were screened and thirty-six were selected for full review. The articles selected for full review were analyzed to extract relevant information and were characterized according to the steps the authors had followed for clinical information modeling. Most of the reviewed papers lack a detailed description of the modeling methodologies used to create CIMs. A representative example is the lack of description related to the definition of terminology bindings and the publication of the generated models. However, this systematic review confirms that most clinical information modeling activities follow very similar steps for the definition of CIMs. Having a robust and shared methodology could improve their correctness, reliability, and quality. Independently of implementation technologies and standards, it is possible to find common patterns in methods for developing CIMs, suggesting the viability of defining a unified good-practice methodology to be used by any clinical information modeler. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  17. Impacts of suppressing guide on information spreading

    NASA Astrophysics Data System (ADS)

    Xu, Jinghong; Zhang, Lin; Ma, Baojun; Wu, Ye

    2016-02-01

    It is quite common for guides to be introduced to suppress information spreading in modern society, for various purposes. In this paper, an agent-based model is established to quantitatively analyze the impacts of suppressing guides on information spreading. We find that, with no suppressing guides at all, the spreading threshold depends on the attractiveness of the information and the topology of the social network. Usually, one would expect that the existence of suppressing guides in the spreading procedure leads to less diffusion of information within the overall network. However, we find that sometimes the opposite is true: the manipulating nodes of suppressing guides may lead to more extensive information spreading when there are audiences with a contrarian ("reversal") mindset. These results can provide valuable theoretical reference points for the guidance of public opinion on various kinds of information, e.g., rumor or news spreading.

  18. From epidemics to information propagation: Striking differences in structurally similar adaptive network models

    NASA Astrophysics Data System (ADS)

    Trajanovski, Stojan; Guo, Dongchao; Van Mieghem, Piet

    2015-09-01

    The continuous-time adaptive susceptible-infected-susceptible (ASIS) epidemic model and the adaptive information diffusion (AID) model are two adaptive spreading processes on networks, in which a link in the network changes depending on the infectious state of its end nodes, but in opposite ways: (i) in the ASIS model a link is removed between two nodes if exactly one of the nodes is infected, to suppress the epidemic, while a link is created in the AID model to speed up the information diffusion; (ii) a link is created between two susceptible nodes in the ASIS model to strengthen the healthy part of the network, while a link is broken in the AID model due to the lack of interest in informationless nodes. The ASIS and AID models may be considered as first-order models for cascades in real-world networks. While the ASIS model has been exploited in the literature, we show that the AID model is realistic by obtaining a good fit with Facebook data. Contrary to the common belief and intuition for such similar models, we show that the ASIS and AID models exhibit different but not opposite properties. Most remarkably, a unique metastable state always exists in the ASIS model, while there is an hourglass-shaped region of instability in the AID model. Moreover, the epidemic threshold is a linear function of the effective link-breaking rate in the ASIS model, while it is almost constant but noisy in the AID model.
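    A crude discrete-time caricature of the ASIS rules is sketched below (the paper's model is continuous-time Markovian; all rates here are per-sweep probabilities chosen for illustration). Swapping the two link rules, breaking susceptible-susceptible links and creating susceptible-infected ones, would give the AID counterpart.

```python
import itertools
import numpy as np

def asis_sweep(adj, infected, beta, delta, zeta, xi, rng):
    """One sweep: infection along S-I links, adaptive S-I link breaking,
    S-S link creation, then recovery."""
    new_inf = infected.copy()
    n = len(infected)
    for i, j in itertools.combinations(range(n), 2):
        if adj[i, j]:
            if infected[i] != infected[j]:            # an S-I link
                s = j if infected[i] else i
                if rng.random() < beta:
                    new_inf[s] = True                 # transmission
                elif rng.random() < zeta:
                    adj[i, j] = adj[j, i] = 0         # adaptive link breaking
        elif not (infected[i] or infected[j]) and rng.random() < xi:
            adj[i, j] = adj[j, i] = 1                 # reinforce healthy part
    return adj, new_inf & (rng.random(n) >= delta)    # recovery

rng = np.random.default_rng(5)
n = 60
adj = np.triu((rng.random((n, n)) < 0.1).astype(int), 1)
adj += adj.T                                          # symmetric random graph
infected = rng.random(n) < 0.2
for _ in range(200):
    adj, infected = asis_sweep(adj, infected, beta=0.08, delta=0.05,
                               zeta=0.05, xi=0.01, rng=rng)
print(f"prevalence={infected.mean():.2f}  mean degree={adj.sum() / n:.1f}")
```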

  19. Studying Gene and Gene-Environment Effects of Uncommon and Common Variants on Continuous Traits: A Marker-Set Approach Using Gene-Trait Similarity Regression

    PubMed Central

    Tzeng, Jung-Ying; Zhang, Daowen; Pongpanich, Monnat; Smith, Chris; McCarthy, Mark I.; Sale, Michèle M.; Worrall, Bradford B.; Hsu, Fang-Chi; Thomas, Duncan C.; Sullivan, Patrick F.

    2011-01-01

    Genomic association analyses of complex traits demand statistical tools that are capable of detecting small effects of common and rare variants and modeling complex interaction effects and yet are computationally feasible. In this work, we introduce a similarity-based regression method for assessing the main genetic and interaction effects of a group of markers on quantitative traits. The method uses genetic similarity to aggregate information from multiple polymorphic sites and integrates adaptive weights that depend on allele frequencies to accommodate common and uncommon variants. Collapsing information at the similarity level instead of the genotype level avoids canceling signals that have the opposite etiological effects and is applicable to any class of genetic variants without the need for dichotomizing the allele types. To assess gene-trait associations, we regress trait similarities for pairs of unrelated individuals on their genetic similarities and assess association by using a score test whose limiting distribution is derived in this work. The proposed regression framework allows for covariates, has the capacity to model both main and interaction effects, can be applied to a mixture of different polymorphism types, and is computationally efficient. These features make it an ideal tool for evaluating associations between phenotype and marker sets defined by linkage disequilibrium (LD) blocks, genes, or pathways in whole-genome analysis. PMID:21835306

  20. The caBIG® Life Science Business Architecture Model

    PubMed Central

    Boyd, Lauren Becnel; Hunicke-Smith, Scott P.; Stafford, Grace A.; Freund, Elaine T.; Ehlman, Michele; Chandran, Uma; Dennis, Robert; Fernandez, Anna T.; Goldstein, Stephen; Steffen, David; Tycko, Benjamin; Klemm, Juli D.

    2011-01-01

    Motivation: Business Architecture Models (BAMs) describe what a business does, who performs the activities, where and when activities are performed, how activities are accomplished and which data are present. The purpose of a BAM is to provide a common resource for understanding business functions and requirements and to guide software development. The cancer Biomedical Informatics Grid (caBIG®) Life Science BAM (LS BAM) provides a shared understanding of the vocabulary, goals and processes that are common in the business of LS research. Results: LS BAM 1.1 includes 90 goals and 61 people and groups within Use Case and Activity Unified Modeling Language (UML) Diagrams. Here we report on the model's current release, LS BAM 1.1, its utility and usage, and plans for future use and continuing development for future releases. Availability and Implementation: The LS BAM is freely available as UML, PDF and HTML (https://wiki.nci.nih.gov/x/OFNyAQ). Contact: lbboyd@bcm.edu; laurenbboyd@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21450709

  1. Evaluating cell lines as tumour models by comparison of genomic profiles

    PubMed Central

    Domcke, Silvia; Sinha, Rileen; Levine, Douglas A.; Sander, Chris; Schultz, Nikolaus

    2013-01-01

    Cancer cell lines are frequently used as in vitro tumour models. Recent molecular profiles of hundreds of cell lines from The Cancer Cell Line Encyclopedia and thousands of tumour samples from the Cancer Genome Atlas now allow a systematic genomic comparison of cell lines and tumours. Here we analyse a panel of 47 ovarian cancer cell lines and identify those that have the highest genetic similarity to ovarian tumours. Our comparison of copy-number changes, mutations and mRNA expression profiles reveals pronounced differences in molecular profiles between commonly used ovarian cancer cell lines and high-grade serous ovarian cancer tumour samples. We identify several rarely used cell lines that more closely resemble cognate tumour profiles than commonly used cell lines, and we propose these lines as the most suitable models of ovarian cancer. Our results indicate that the gap between cell lines and tumours can be bridged by genomically informed choices of cell line models for all tumour types. PMID:23839242

  2. Case studies, cross-site comparisons, and the challenge of generalization: comparing agent-based models of land-use change in frontier regions

    PubMed Central

    Parker, Dawn C.; Entwisle, Barbara; Rindfuss, Ronald R.; Vanwey, Leah K.; Manson, Steven M.; Moran, Emilio; An, Li; Deadman, Peter; Evans, Tom P.; Linderman, Marc; Rizi, S. Mohammad Mussavi; Malanson, George

    2009-01-01

    Cross-site comparisons of case studies have been identified as an important priority by the land-use science community. From an empirical perspective, such comparisons potentially allow generalizations that may contribute to production of global-scale land-use and land-cover change projections. From a theoretical perspective, such comparisons can inform development of a theory of land-use science by identifying potential hypotheses and supporting or refuting evidence. This paper undertakes a structured comparison of four case studies of land-use change in frontier regions that follow an agent-based modeling approach. Our hypothesis is that each case study represents a particular manifestation of a common process. Given differences in initial conditions among sites and the time at which the process is observed, actual mechanisms and outcomes are anticipated to differ substantially between sites. Our goal is to reveal both commonalities and differences among research sites, model implementations, and ultimately, conclusions derived from the modeling process. PMID:19960107

  3. Case studies, cross-site comparisons, and the challenge of generalization: comparing agent-based models of land-use change in frontier regions.

    PubMed

    Parker, Dawn C; Entwisle, Barbara; Rindfuss, Ronald R; Vanwey, Leah K; Manson, Steven M; Moran, Emilio; An, Li; Deadman, Peter; Evans, Tom P; Linderman, Marc; Rizi, S Mohammad Mussavi; Malanson, George

    2008-01-01

    Cross-site comparisons of case studies have been identified as an important priority by the land-use science community. From an empirical perspective, such comparisons potentially allow generalizations that may contribute to production of global-scale land-use and land-cover change projections. From a theoretical perspective, such comparisons can inform development of a theory of land-use science by identifying potential hypotheses and supporting or refuting evidence. This paper undertakes a structured comparison of four case studies of land-use change in frontier regions that follow an agent-based modeling approach. Our hypothesis is that each case study represents a particular manifestation of a common process. Given differences in initial conditions among sites and the time at which the process is observed, actual mechanisms and outcomes are anticipated to differ substantially between sites. Our goal is to reveal both commonalities and differences among research sites, model implementations, and ultimately, conclusions derived from the modeling process.

  4. Supermodeling With A Global Atmospheric Model

    NASA Astrophysics Data System (ADS)

    Wiegerinck, Wim; Burgers, Willem; Selten, Frank

    2013-04-01

    In weather and climate prediction studies, the multi-model ensemble mean often has the best prediction skill scores. One possible explanation is that the major part of the model error is random and is averaged out in the ensemble mean. In the standard multi-model ensemble approach, the models are integrated in time independently and the predicted states are combined a posteriori. Recently an alternative ensemble prediction approach has been proposed in which the models exchange information during the simulation and synchronize on a common solution that is closer to the truth than any of the individual model solutions in the standard multi-model ensemble approach, or a weighted average of these. This approach is called the supermodeling approach (SUMO). The potential of the SUMO approach has been demonstrated in the context of simple, low-order, chaotic dynamical systems. The information exchange takes the form of linear nudging terms in the dynamical equations that nudge the solution of each model toward the solutions of all other models in the ensemble. With a suitable choice of the connection strengths, the models synchronize on a common solution that is indeed closer to the true system than any of the individual model solutions without nudging. This approach is called connected SUMO. An alternative approach is to integrate a weighted-average model, weighted SUMO: at each time step all models in the ensemble calculate their tendencies, these tendencies are averaged with weights, and the state is integrated one time step into the future with this weighted-average tendency. It was shown that when the connected SUMO synchronizes perfectly, it follows the weighted-average trajectory, and both approaches yield the same solution. In this study we pioneer both approaches in the context of a global, quasi-geostrophic, three-level atmosphere model that is capable of simulating quite realistically the extra-tropical circulation in the Northern Hemisphere winter.
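    The connected-SUMO idea is easy to demonstrate on a low-order chaotic system. The sketch below nudges three imperfect Lorenz-63 models (perturbed parameters) toward their ensemble mean during integration; the parameter perturbations, nudging strength, and Euler stepping are illustrative choices, not the paper's configuration. Weighted SUMO would instead average the three tendencies with trained weights before taking each step.

```python
import numpy as np

def lorenz_rhs(s, sigma, rho, beta):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# Three imperfect models (perturbed parameters), nudged toward each other.
params = [(9.0, 26.0, 8 / 3), (11.0, 30.0, 2.3), (10.5, 27.0, 3.0)]
K = 5.0                         # connection (nudging) strength, assumed
dt, n_steps = 0.005, 4000
states = [np.array([1.0, 1.0, 20.0]) + 0.1 * i for i in range(3)]

for _ in range(n_steps):
    mean = np.mean(states, axis=0)
    # Euler step: own dynamics plus a linear nudge toward the ensemble mean
    # (nudging toward the mean approximates nudging toward all other models).
    states = [s + dt * (lorenz_rhs(s, *p) + K * (mean - s))
              for s, p in zip(states, params)]

print("ensemble spread after nudging:", np.std(states, axis=0).round(4))
# near-zero spread indicates the models have synchronized on one solution
```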

  5. An Interoperability Platform Enabling Reuse of Electronic Health Records for Signal Verification Studies

    PubMed Central

    Yuksel, Mustafa; Gonul, Suat; Laleci Erturkmen, Gokce Banu; Sinaci, Ali Anil; Invernizzi, Paolo; Facchinetti, Sara; Migliavacca, Andrea; Bergvall, Tomas; Depraetere, Kristof; De Roo, Jos

    2016-01-01

    Depending mostly on voluntarily sent spontaneous reports, pharmacovigilance studies are hampered by the low quantity and quality of patient data. Our objective is to improve postmarket safety studies by enabling safety analysts to seamlessly access a wide range of EHR sources for collecting deidentified medical data sets of selected patient populations and tracing the reported incidents back to original EHRs. We have developed an ontological framework where EHR sources and target clinical research systems can continue using their own local data models, interfaces, and terminology systems, while structural and semantic interoperability are handled through rule-based reasoning on formal representations of the different models and terminology systems maintained in the SALUS Semantic Resource Set. The SALUS Common Information Model at the core of this set acts as the common mediator. We demonstrate the capabilities of our framework through one of the SALUS safety analysis tools, namely the Case Series Characterization Tool, which has been deployed on top of the regional EHR Data Warehouse of the Lombardy Region, containing about 1 billion records from 16 million patients, and validated by several pharmacovigilance researchers with real-life cases. The results confirm significant improvements in signal detection and evaluation compared to traditional methods, which lack this background information. PMID:27123451

  6. Tracking interface and common curve dynamics for two-fluid flow in porous media

    DOE PAGES

    Mcclure, James E.; Miller, Cass T.; Gray, W. G.; ...

    2016-04-29

    Pore-scale studies of multiphase flow in porous medium systems can be used to understand transport mechanisms and quantitatively determine closure relations that better incorporate microscale physics into macroscale models. Multiphase flow simulators constructed using the lattice Boltzmann method provide a means to conduct such studies, including both the equilibrium and dynamic aspects. Moving, storing, and analyzing the large state space presents a computational challenge when highly-resolved models are applied. We present an approach to simulate multiphase flow processes in which in-situ analysis is applied to track multiphase flow dynamics at high temporal resolution. We compute a comprehensive set of measures of the phase distributions and the system dynamics, which can be used to aid fundamental understanding and inform closure relations for macroscale models. The measures computed include microscale point representations and macroscale averages of fluid saturations, the pressure and velocity of the fluid phases, interfacial areas, interfacial curvatures, interface and common curve velocities, interfacial orientation tensors, phase velocities and the contact angle between the fluid-fluid interface and the solid surface. Test cases are studied to validate the approach and illustrate how measures of system state can be obtained and used to inform macroscopic theory.

  7. Comparing perceptual and preferential decision making.

    PubMed

    Dutilh, Gilles; Rieskamp, Jörg

    2016-06-01

    Perceptual and preferential decision making have been studied largely in isolation. Perceptual decisions are considered to be at a non-deliberative cognitive level and have an outside criterion that defines the quality of decisions. Preferential decisions are considered to be at a higher cognitive level, and the quality of decisions depends on the decision maker's subjective goals. Besides these crucial differences, both types of decisions also have in common that uncertain information about the choice situation has to be processed before a decision can be made. The present work aims to acknowledge the commonalities of both types of decision making to lay bare the crucial differences. For this aim we examine perceptual and preferential decisions with a novel choice paradigm that uses identical stimulus material for both types of decisions. This paradigm allows us to model the decisions and response times of both types of decisions with the same sequential sampling model, the drift diffusion model. The results illustrate that the different incentive structures in the two types of tasks change people's behavior so that they process information more efficiently and respond more cautiously in the perceptual as compared to the preferential task. These findings set out a perspective for further integration of perceptual and preferential decision making in a single framework.

  8. Forecasting Chikungunya spread in the Americas via data-driven empirical approaches.

    PubMed

    Escobar, Luis E; Qiao, Huijie; Peterson, A Townsend

    2016-02-29

    Chikungunya virus (CHIKV) is endemic to Africa and Asia, but the Asian genotype invaded the Americas in 2013. The fast increase of human infections in the American epidemic emphasized the urgency of developing detailed predictions of case numbers and the potential geographic spread of this disease. We developed a simple model incorporating cases generated locally and cases imported from other countries, and forecasted transmission hotspots at the level of countries and at finer scales, in terms of ecological features. By late January 2015, more than 1.2 million CHIKV cases had been reported from the Americas, with country-level prevalences between nil and more than 20%. In the early stages of the epidemic, exponential growth in case numbers was common; later, however, poor and uneven reporting became more common, in a phenomenon we term "surveillance fatigue." Economic activity of countries was not associated with prevalence, but diverse social factors may be linked to surveillance effort and reporting. Our model predictions were initially quite inaccurate, but improved markedly as more data accumulated within the Americas. The data-driven methodology explored in this study provides an opportunity to generate descriptive and predictive information on the spread of emerging diseases in the short term, under simple models based on open-access tools and data, that can inform early-warning systems and public health intelligence.
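    The local-plus-imported structure can be captured in a couple of lines. In the sketch below, next week's cases are modeled as a reproduction-like multiple of this week's local cases plus a contribution from importation, with both coefficients fit by least squares; the case counts are fabricated, and the paper's actual model and data are richer.

```python
import numpy as np

# Hypothetical weekly local and imported case counts for one country.
local = np.array([2, 5, 9, 20, 38, 70, 140, 260], dtype=float)
imported = np.array([3, 3, 2, 2, 1, 1, 1, 1], dtype=float)

# Model: C[t+1] = R * C[t] + m * imports[t]; fit (R, m) by least squares.
X = np.column_stack([local[:-1], imported[:-1]])
R, m = np.linalg.lstsq(X, local[1:], rcond=None)[0]

# Forecast the next four weeks, holding importation constant.
c, forecast = local[-1], []
for _ in range(4):
    c = R * c + m * imported[-1]
    forecast.append(c)
print(f"fitted R={R:.2f}, m={m:.2f}, forecast={np.round(forecast, 1)}")
```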

  9. Common carotid artery intima-media thickness is as good as carotid intima-media thickness of all carotid artery segments in improving prediction of coronary heart disease risk in the Atherosclerosis Risk in Communities (ARIC) study.

    PubMed

    Nambi, Vijay; Chambless, Lloyd; He, Max; Folsom, Aaron R; Mosley, Tom; Boerwinkle, Eric; Ballantyne, Christie M

    2012-01-01

    Carotid intima-media thickness (CIMT) and plaque information can improve coronary heart disease (CHD) risk prediction when added to traditional risk factors (TRF). However, obtaining adequate images of all carotid artery segments (A-CIMT) may be difficult. Of A-CIMT, the common carotid artery intima-media thickness (CCA-IMT) is relatively more reliable and easier to measure. We evaluated whether CCA-IMT is comparable to A-CIMT when added to TRF and plaque information in improving CHD risk prediction in the Atherosclerosis Risk in Communities (ARIC) study. Ten-year CHD risk prediction models using TRF alone, TRF + A-CIMT + plaque, and TRF + CCA-IMT + plaque were developed for the overall cohort, men, and women. The area under the receiver operating characteristic curve (AUC), percent of individuals reclassified, net reclassification index (NRI), and model calibration by the Grønnesby-Borgan test were estimated. There were 1722 incident CHD events in 12 576 individuals over a mean follow-up of 15.2 years. The AUC for TRF only, TRF + A-CIMT + plaque, and TRF + CCA-IMT + plaque models were 0.741, 0.754, and 0.753, respectively. Although there was some discordance when the CCA-IMT + plaque- and A-CIMT + plaque-based risk estimation was compared, the NRI and clinical NRI (NRI in the intermediate-risk group) when comparing the CIMT models with the TRF-only model, per cent reclassified, and test for model calibration were not significantly different. Coronary heart disease risk prediction can be improved by adding A-CIMT + plaque or CCA-IMT + plaque information to TRF. Therefore, evaluating the carotid artery for plaque presence and measuring CCA-IMT, which is easier and more reliable than measuring A-CIMT, provide a good alternative to measuring A-CIMT for CHD risk prediction.
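    For readers unfamiliar with the reclassification metrics, the sketch below computes a categorical NRI between two risk models on simulated data; the risk cutoffs, effect sizes, and event rate are hypothetical, and the clinical NRI would apply the same calculation restricted to the intermediate-risk group.

```python
import numpy as np

def nri(risk_old, risk_new, event, cuts=(0.05, 0.20)):
    """Categorical net reclassification improvement between two risk
    models, using hypothetical low/intermediate/high cutoffs."""
    cat_old = np.digitize(risk_old, cuts)
    cat_new = np.digitize(risk_new, cuts)
    up, down = cat_new > cat_old, cat_new < cat_old
    ev, ne = event, ~event
    nri_events = up[ev].mean() - down[ev].mean()    # events should move up
    nri_nonev = down[ne].mean() - up[ne].mean()     # nonevents should move down
    return nri_events + nri_nonev

rng = np.random.default_rng(6)
n = 1000
event = rng.random(n) < 0.12
# Baseline (TRF-like) risk and a slightly more informative model.
risk_trf = np.clip(0.10 + 0.05 * event + 0.04 * rng.standard_normal(n), 0, 1)
risk_new = np.clip(risk_trf + 0.03 * event - 0.01
                   + 0.01 * rng.standard_normal(n), 0, 1)
print(f"NRI = {nri(risk_trf, risk_new, event):.3f}")
```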

  10. Opinion evolution influenced by informed agents

    NASA Astrophysics Data System (ADS)

    Fan, Kangqi; Pedrycz, Witold

    2016-11-01

    Guiding public opinion toward a pre-set target via informed agents is a strategy adopted in some practical applications. The informed agents are common agents who are employed or chosen to spread the pre-set opinion. In this work, we propose a social judgment based opinion (SJBO) dynamics model to explore opinion evolution under the influence of informed agents. The SJBO model distinguishes between inner opinions and observable choices, and incorporates both the compromise between similar opinions and the repulsion between dissimilar opinions. Three choices (support, opposition, and remaining undecided) are considered in the SJBO model. Using the SJBO model, both the inner opinions and the observable choices can be tracked during the opinion evolution process. The simulation results indicate that if exchanges of inner opinions among agents are not available, the effect of informed agents depends mainly on the characteristics of the regular agents, including the assimilation threshold, decay threshold, and initial opinions. Increasing the assimilation threshold and decay threshold can improve the guiding effectiveness of informed agents. Moreover, if the initial opinions of regular agents are close to null, full and unanimous consensus at the pre-set opinion can be realized, indicating that, to maximize the influence of informed agents, guidance should be started when regular agents have little knowledge about the subject under consideration. If the regular agents already hold clear opinions, full and unanimous consensus at the pre-set opinion cannot be achieved. However, the introduction of informed agents can still make the majority of agents choose the pre-set opinion.

  11. Modeling open-set spoken word recognition in postlingually deafened adults after cochlear implantation: some preliminary results with the neighborhood activation model.

    PubMed

    Meyer, Ted A; Frisch, Stefan A; Pisoni, David B; Miyamoto, Richard T; Svirsky, Mario A

    2003-07-01

    Do cochlear implants provide enough information to allow adult cochlear implant users to understand words in ways that are similar to listeners with acoustic hearing? Can we use a computational model to gain insight into the underlying mechanisms used by cochlear implant users to recognize spoken words? The Neighborhood Activation Model has been shown to be a reasonable model of word recognition for listeners with normal hearing. The Neighborhood Activation Model assumes that words are recognized in relation to other similar-sounding words in a listener's lexicon. The probability of correctly identifying a word is based on the phoneme perception probabilities from a listener's closed-set consonant and vowel confusion matrices modified by the relative frequency of occurrence of the target word compared with similar-sounding words (neighbors). Common words with few similar-sounding neighbors are more likely to be selected as responses than less common words with many similar-sounding neighbors. Recent studies have shown that several of the assumptions of the Neighborhood Activation Model also hold true for cochlear implant users. Closed-set consonant and vowel confusion matrices were obtained from 26 postlingually deafened adults who use cochlear implants. Confusion matrices were used to represent input errors to the Neighborhood Activation Model. Responses to the different stimuli were then generated by the Neighborhood Activation Model after incorporating the frequency of occurrence counts of the stimuli and their neighbors. Model outputs were compared with obtained performance measures on the Consonant-Vowel Nucleus-Consonant word test. Information transmission analysis was used to assess whether the Neighborhood Activation Model was able to successfully generate and predict word and individual phoneme recognition by cochlear implant users. The Neighborhood Activation Model predicted Consonant-Vowel Nucleus-Consonant test words at levels similar to those correctly identified by the cochlear implant users. The Neighborhood Activation Model also predicted phoneme feature information well. The results obtained suggest that the Neighborhood Activation Model provides a reasonable explanation of word recognition by postlingually deafened adults after cochlear implantation. It appears that multichannel cochlear implants give cochlear implant users access to their mental lexicons in a manner that is similar to listeners with acoustic hearing. The lexical properties of the test stimuli used to assess performance are important to spoken-word recognition and should be included in further models of the word recognition process.
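    The NAM computation itself is compact, as the sketch below shows with a toy lexicon: the perceptual support for each candidate word is the product of phoneme confusion probabilities, weighted by word frequency and normalized over the stimulus word and its neighbors. The confusion probabilities, words, and frequency counts here are fabricated; in the study they come from each implant user's closed-set confusion matrices and standard frequency norms.

```python
import numpy as np

# Hypothetical phoneme confusion probabilities p(perceived | presented)
# from a listener's closed-set consonant/vowel matrices.
confusion = {
    ("k", "k"): 0.80, ("k", "t"): 0.20,
    ("ae", "ae"): 0.90, ("ae", "eh"): 0.10,
    ("t", "t"): 0.85, ("t", "k"): 0.15,
}

def phoneme_prob(target, candidate):
    """P(perceiving `candidate` when `target` is presented), phoneme by phoneme."""
    return np.prod([confusion.get((t, c), 0.0) for t, c in zip(target, candidate)])

# Toy lexicon: word -> (phonemes, frequency count); "tat" is a neighbor of "cat".
lexicon = {"cat": (("k", "ae", "t"), 40), "tat": (("t", "ae", "t"), 2)}

def nam_choice_prob(stimulus, word):
    """NAM rule: frequency-weighted perceptual support for `word`,
    normalized over the stimulus word and its neighbors."""
    target = lexicon[stimulus][0]
    num = phoneme_prob(target, lexicon[word][0]) * lexicon[word][1]
    denom = sum(phoneme_prob(target, ph) * f for ph, f in lexicon.values())
    return num / denom

print("P(correctly report 'cat'):", round(nam_choice_prob("cat", "cat"), 3))
```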

  12. The Effects of Climate Model Similarity on Local, Risk-Based Adaptation Planning

    NASA Astrophysics Data System (ADS)

    Steinschneider, S.; Brown, C. M.

    2014-12-01

    The climate science community has recently proposed techniques to develop probabilistic projections of climate change from ensemble climate model output. These methods provide a means to incorporate the formal concept of risk, i.e., the product of impact and probability, into long-term planning assessments for local systems under climate change. However, approaches for pdf development often assume that different climate models provide independent information for the estimation of probabilities, despite model similarities that stem from a common genealogy. Here we utilize an ensemble of projections from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to develop probabilistic climate information, with and without an accounting of inter-model correlations, and use it to estimate climate-related risks to a local water utility in Colorado, U.S. We show that the tail risk of extreme climate changes in both mean precipitation and temperature is underestimated if model correlations are ignored. When coupled with impact models of the hydrology and infrastructure of the water utility, the underestimation of extreme climate changes substantially alters the quantification of risk for water supply shortages by mid-century. We argue that progress in climate change adaptation for local systems requires the recognition that there is less information in multi-model climate ensembles than previously thought. Importantly, adaptation decisions cannot be limited to the spread in one generation of climate models.
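    The effect of ignoring inter-model correlation can be illustrated with the usual variance-of-the-mean argument: for an equally weighted ensemble of n models whose errors share a mean correlation rho, the variance of the ensemble mean is inflated by roughly 1 + (n - 1)rho relative to the independent case. The sketch below turns this into an effective number of independent models; the correlation value is hypothetical.

```python
import numpy as np

def effective_members(corr):
    """Effective number of independent models in an equally weighted
    ensemble with error-correlation matrix `corr` (variance-ratio
    definition: n_eff = n**2 / sum of all correlation entries)."""
    n = corr.shape[0]
    return n ** 2 / float(np.ones(n) @ corr @ np.ones(n))

n = 20
rho = 0.6                          # hypothetical mean inter-model correlation
corr = np.full((n, n), rho)
np.fill_diagonal(corr, 1.0)
print(f"{n} correlated models behave like ~{effective_members(corr):.1f} "
      "independent ones")
# Treating them as independent narrows the predictive pdf and clips its
# tails, understating the probability of extreme climate changes.
```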

  13. A scoping review of malaria forecasting: past work and future directions

    PubMed Central

    Zinszer, Kate; Verma, Aman D; Charland, Katia; Brewer, Timothy F; Brownstein, John S; Sun, Zhuoyu; Buckeridge, David L

    2012-01-01

    Objectives There is a growing body of literature on malaria forecasting methods, and the objective of our review is to identify and assess methods, including predictors, used to forecast malaria. Design Scoping review. Two independent reviewers searched information sources, assessed studies for inclusion and extracted data from each study. Information sources Search strategies were developed and the following databases were searched: CAB Abstracts, EMBASE, Global Health, MEDLINE, ProQuest Dissertations & Theses and Web of Science. Key journals and websites were also manually searched. Eligibility criteria for included studies We included studies that forecasted incidence, prevalence or epidemics of malaria over time. A description of the forecasting model and an assessment of the forecast accuracy of the model were requirements for inclusion. Studies were restricted to human populations and to autochthonous transmission settings. Results We identified 29 different studies that met our inclusion criteria for this review. The forecasting approaches included statistical modelling, mathematical modelling and machine learning methods. Climate-related predictors were used consistently in forecasting models, with the most common predictors being rainfall, relative humidity, temperature and the normalised difference vegetation index. Model evaluation was typically based on a reserved portion of data, and accuracy was measured in a variety of ways, including mean-squared error and correlation coefficients. We could not compare the forecast accuracy of models from the different studies as the evaluation measures differed across the studies. Conclusions Applying different forecasting methods to the same data, exploring the predictive ability of non-environmental variables (including transmission-reducing interventions) and using common forecast accuracy measures would allow malaria researchers to compare and improve models and methods, which should improve the quality of malaria forecasting. PMID:23180505
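
    The evaluation pattern the review describes, fitting on one portion of a time series and scoring a reserved portion with mean-squared error and a correlation coefficient, reduces to a few lines; the synthetic rainfall-lag data below stand in for any of the reviewed models:

        import numpy as np

        rng = np.random.default_rng(0)
        months = np.arange(120)
        rain = 50 + 30 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, months.size)
        lag = 2                                    # cases respond to rainfall 2 months back
        cases = 10 + 0.4 * rain[:-lag] + rng.normal(0, 3, months.size - lag)

        X = np.column_stack([np.ones(cases.size), rain[:-lag]])
        train, test = slice(0, 90), slice(90, None)
        beta, *_ = np.linalg.lstsq(X[train], cases[train], rcond=None)
        pred = X[test] @ beta

        mse = float(np.mean((cases[test] - pred) ** 2))     # mean-squared error
        corr = float(np.corrcoef(cases[test], pred)[0, 1])  # correlation coefficient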

  14. A common type system for clinical natural language processing

    PubMed Central

    2013-01-01

    Background One challenge in reusing clinical data stored in electronic medical records is that these data are heterogenous. Clinical Natural Language Processing (NLP) plays an important role in transforming information in clinical text to a standard representation that is comparable and interoperable. Information may be processed and shared when a type system specifies the allowable data structures. Therefore, we aim to define a common type system for clinical NLP that enables interoperability between structured and unstructured data generated in different clinical settings. Results We describe a common type system for clinical NLP that has an end target of deep semantics based on Clinical Element Models (CEMs), thus interoperating with structured data and accommodating diverse NLP approaches. The type system has been implemented in UIMA (Unstructured Information Management Architecture) and is fully functional in a popular open-source clinical NLP system, cTAKES (clinical Text Analysis and Knowledge Extraction System) versions 2.0 and later. Conclusions We have created a type system that targets deep semantics, thereby allowing for NLP systems to encapsulate knowledge from text and share it alongside heterogenous clinical data sources. Rather than surface semantics that are typically the end product of NLP algorithms, CEM-based semantics explicitly build in deep clinical semantics as the point of interoperability with more structured data types. PMID:23286462
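
    The interoperability argument can be illustrated with a toy type hierarchy: if every component emits annotations drawn from one agreed set of types, any consumer that knows the hierarchy can use the output. This sketch is purely illustrative and does not reproduce the actual UIMA/cTAKES type definitions:

        from dataclasses import dataclass

        @dataclass
        class Annotation:
            """Base type: a text span, as in UIMA-style systems."""
            begin: int
            end: int

        @dataclass
        class ClinicalMention(Annotation):
            """Deep-semantic type keyed to a Clinical Element Model template."""
            cem_template: str       # e.g. "Problem" (hypothetical value)
            code: str               # normalized concept code
            negated: bool = False

        # Any pipeline emitting ClinicalMention objects interoperates with any
        # consumer that understands this shared hierarchy.
        m = ClinicalMention(begin=10, end=17, cem_template="Problem",
                            code="C0011849", negated=False)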

  15. A common type system for clinical natural language processing.

    PubMed

    Wu, Stephen T; Kaggal, Vinod C; Dligach, Dmitriy; Masanz, James J; Chen, Pei; Becker, Lee; Chapman, Wendy W; Savova, Guergana K; Liu, Hongfang; Chute, Christopher G

    2013-01-03

    One challenge in reusing clinical data stored in electronic medical records is that these data are heterogenous. Clinical Natural Language Processing (NLP) plays an important role in transforming information in clinical text to a standard representation that is comparable and interoperable. Information may be processed and shared when a type system specifies the allowable data structures. Therefore, we aim to define a common type system for clinical NLP that enables interoperability between structured and unstructured data generated in different clinical settings. We describe a common type system for clinical NLP that has an end target of deep semantics based on Clinical Element Models (CEMs), thus interoperating with structured data and accommodating diverse NLP approaches. The type system has been implemented in UIMA (Unstructured Information Management Architecture) and is fully functional in a popular open-source clinical NLP system, cTAKES (clinical Text Analysis and Knowledge Extraction System) versions 2.0 and later. We have created a type system that targets deep semantics, thereby allowing for NLP systems to encapsulate knowledge from text and share it alongside heterogenous clinical data sources. Rather than surface semantics that are typically the end product of NLP algorithms, CEM-based semantics explicitly build in deep clinical semantics as the point of interoperability with more structured data types.

  16. Information density converges in dialogue: Towards an information-theoretic model.

    PubMed

    Xu, Yang; Reitter, David

    2018-01-01

    The principle of entropy rate constancy (ERC) states that language users distribute information such that words tend to be equally predictable given previous contexts. We examine the applicability of this principle to spoken dialogue, as previous findings primarily rest on written text. The study takes into account the joint-activity nature of dialogue and the topic-shift mechanisms that differ from monologue. It examines how the information contributions from the two dialogue partners interactively evolve as the discourse develops. The increase of local sentence-level information density (predicted by ERC) is shown to apply to dialogue overall. However, when the different roles of interlocutors in introducing new topics are identified, their contributions in information content display a new converging pattern. We draw explanations for this pattern from multiple perspectives: First, casting dialogue as an information exchange system would mean that the pattern is the result of two interlocutors maintaining their own context rather than sharing one. Second, we present some empirical evidence that a model of Interactive Alignment may include information density to explain the effect. Third, we argue that building common ground is a process analogous to information convergence. Thus, we put forward an information-theoretic view of dialogue, under which some existing theories of human dialogue may eventually be unified. Copyright © 2017 Elsevier B.V. All rights reserved.
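
    Sentence-level information density in this line of work is typically the average surprisal of a sentence's words under a language model; a minimal bigram version with add-alpha smoothing looks like this (real studies use far larger models and corpora):

        import math
        from collections import Counter

        def train_bigram(sentences):
            unigrams, bigrams = Counter(), Counter()
            for sent in sentences:
                toks = ["<s>"] + sent.split()
                unigrams.update(toks[:-1])           # context counts
                bigrams.update(zip(toks, toks[1:]))
            return unigrams, bigrams

        def information_density(sentence, unigrams, bigrams, vocab, alpha=1.0):
            """Mean per-word surprisal in bits, -log2 p(w | prev)."""
            toks = ["<s>"] + sentence.split()
            total = 0.0
            for prev, w in zip(toks, toks[1:]):
                p = (bigrams[(prev, w)] + alpha) / (unigrams[prev] + alpha * vocab)
                total -= math.log2(p)
            return total / (len(toks) - 1)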

  17. Modeling Emissions and Vertical Plume Transport of Crop Residue Burning Experiments in the Pacific Northwest

    NASA Astrophysics Data System (ADS)

    Zhou, L.; Baker, K. R.; Napelenok, S. L.; Pouliot, G.; Elleman, R. A.; ONeill, S. M.; Urbanski, S. P.; Wong, D. C.

    2017-12-01

    Crop residue burning has long been a common practice in agriculture with the smoke emissions from the burning linked to negative health impacts. A field study in eastern Washington and northern Idaho in August 2013 consisted of multiple burns of well characterized fuels with nearby surface and aerial measurements including trace species concentrations, plume rise height and boundary layer structure. The chemical transport model CMAQ (Community Multiscale Air Quality Model) was used to assess the fire emissions and subsequent vertical plume transport. The study first compared assumptions made by the 2014 National Emission Inventory approach for crop residue burning with the fuel and emissions information obtained from the field study and then investigated the sensitivity of modeled carbon monoxide (CO) and PM2.5 concentrations to these different emission estimates and plume rise treatment with CMAQ. The study suggests that improvements to the current parameterizations are needed in order for CMAQ to reliably reproduce smoke plumes from burning. In addition, there is enough variability in the smoke emissions, stemming from variable field-specific information such as field size, that attempts to model crop residue burning should use field-specific information whenever possible.

  18. Network-constrained group lasso for high-dimensional multinomial classification with application to cancer subtype prediction.

    PubMed

    Tian, Xinyu; Wang, Xuefeng; Chen, Jun

    2014-01-01

    The classic multinomial logit model, commonly used in multiclass regression problems, is restricted to few predictors and does not take into account the relationship among variables. It has limited use for genomic data, where the number of genomic features far exceeds the sample size. Genomic features such as gene expressions are usually related by an underlying biological network. Efficient use of the network information is important for improving classification performance as well as biological interpretability. We proposed a multinomial logit model that is capable of addressing both the high dimensionality of predictors and the underlying network information. Group lasso was used to induce model sparsity, and a network constraint was imposed to induce smoothness of the coefficients with respect to the underlying network structure. To deal with the non-smoothness of the objective function in optimization, we developed a proximal gradient algorithm for efficient computation. The proposed model was compared to models with no prior structure information in both simulations and a problem of cancer subtype prediction with real TCGA (The Cancer Genome Atlas) gene expression data. The network-constrained model outperformed the traditional ones in both cases.
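
    The optimization the abstract outlines can be sketched generically: the network penalty beta' L beta is smooth, so it joins the gradient step, while the group lasso term is handled by its proximal operator (a group-wise soft threshold). This is a schematic of the technique, not the authors' code:

        import numpy as np

        def group_soft_threshold(beta, groups, step, lam1):
            """Proximal operator of the group lasso penalty."""
            out = beta.copy()
            for g in groups:                        # g: index array for one group
                norm = np.linalg.norm(beta[g])
                shrink = max(0.0, 1.0 - step * lam1 / norm) if norm > 0 else 0.0
                out[g] = shrink * beta[g]
            return out

        def proximal_gradient(grad_loss, L, beta0, groups, lam1, lam2,
                              step=0.01, iters=500):
            """Minimize loss(beta) + lam1 * sum_g ||beta_g||_2 + lam2 * beta' L beta,
            where grad_loss is the gradient of the (multinomial) likelihood term
            and L is the network Laplacian."""
            beta = beta0.copy()
            for _ in range(iters):
                g = grad_loss(beta) + 2.0 * lam2 * (L @ beta)
                beta = group_soft_threshold(beta - step * g, groups, step, lam1)
            return beta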

  19. Measurement and modeling of unsaturated hydraulic conductivity

    USGS Publications Warehouse

    Perkins, Kim S.; Elango, Lakshmanan

    2011-01-01

    The unsaturated zone plays an extremely important hydrologic role that influences water quality and quantity, ecosystem function and health, the connection between atmospheric and terrestrial processes, nutrient cycling, soil development, and natural hazards such as flooding and landslides. Unsaturated hydraulic conductivity is one of the main properties considered to govern flow; however, it is very difficult to measure accurately. Knowledge of the highly nonlinear relationship between unsaturated hydraulic conductivity (K) and volumetric water content is required for widely used models of water flow and solute transport processes in the unsaturated zone. Measurement of the unsaturated hydraulic conductivity of sediments is costly and time consuming; therefore, models that estimate this property from more easily measured bulk-physical properties are commonly used. In hydrologic studies, calculations based on property-transfer models informed by hydraulic property databases are often used in lieu of measured data from the site of interest. Reliance on database-informed predicted values with the use of neural networks has become increasingly common. Hydraulic properties predicted using databases may be adequate in some applications, but not others. This chapter will discuss, by way of examples, various techniques used to measure and model unsaturated hydraulic conductivity, K, as a function of water content. The parameters that describe the K curve obtained by different methods are used directly in Richards’ equation-based numerical models, which have some degree of sensitivity to those parameters. This chapter will explore the complications of using laboratory-measured or estimated properties for field-scale investigations to shed light on how adequately the processes are represented. Additionally, some more recent concepts for representing unsaturated-zone flow processes will be discussed.
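
    As a concrete instance of the K models the chapter covers, the widely used van Genuchten-Mualem form predicts conductivity from water content through the effective saturation; the parameter values below are placeholders for a generic loam, not data from the chapter:

        import numpy as np

        def k_unsat(theta, theta_r, theta_s, Ks, n):
            """van Genuchten-Mualem unsaturated hydraulic conductivity K(theta).
            theta_r/theta_s: residual/saturated water content; Ks: saturated K."""
            m = 1.0 - 1.0 / n
            Se = np.clip((theta - theta_r) / (theta_s - theta_r), 1e-9, 1.0)
            return Ks * np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

        K = k_unsat(theta=0.25, theta_r=0.05, theta_s=0.43, Ks=25.0, n=1.56)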

  20. Enabling joined-up decision making with geotemporal information

    NASA Astrophysics Data System (ADS)

    Smith, M. J.; Ahmed, S. E.; Purves, D. W.; Emmott, S.; Joppa, L. N.; Caldararu, S.; Visconti, P.; Newbold, T.; Formica, A. F.

    2015-12-01

    While the use of geospatial data to assist in decision making is becoming increasingly common, the use of geotemporal information (information that can be indexed by geographical space and time) is much rarer. I will describe our scientific research and software development efforts intended to advance the availability and use of geotemporal information in general. I will show two recent examples of "stacking" geotemporal information to support land-use decision making in the Brazilian Amazon and Kenya, involving data-constrained predictive models and empirically derived datasets of road development, deforestation, carbon, agricultural yields, water purification and poverty alleviation services, and will show how we use trade-off analyses and constraint reasoning algorithms to explore the costs and benefits of different decisions. For the Brazilian Amazon we explore trade-offs involved in different deforestation scenarios, while for Kenya we explore the impacts of conserving forest to support international carbon conservation initiatives (REDD+). I will also illustrate the cloud-based software tools we have developed to enable anyone to access geotemporal information, gridded (e.g. climate) or non-gridded (e.g. protected areas), for the past, present or future, and incorporate such information into their analyses (e.g. www.fetchclimate.org), including how we train new predictive models on such data using Bayesian techniques: on this latter point I will show how we combine satellite and ground-measured data with predictive models to forecast how crops might respond to climate change.

  1. A feedback model of visual attention.

    PubMed

    Spratling, M W; Johnson, M H

    2004-03-01

    Feedback connections are a prominent feature of cortical anatomy and are likely to have a significant functional role in neural information processing. We present a neural network model of cortical feedback that successfully simulates neurophysiological data associated with attention. In this domain, our model can be considered a more detailed, and biologically plausible, implementation of the biased competition model of attention. However, our model is more general as it can also explain a variety of other top-down processes in vision, such as figure/ground segmentation and contextual cueing. This model thus suggests that a common mechanism, involving cortical feedback pathways, is responsible for a range of phenomena and provides a unified account of currently disparate areas of research.

  2. Variation in and risk factors for paediatric inpatient all-cause mortality in a low income setting: data from an emerging clinical information network.

    PubMed

    Gathara, David; Malla, Lucas; Ayieko, Philip; Karuri, Stella; Nyamai, Rachel; Irimu, Grace; van Hensbroek, Michael Boele; Allen, Elizabeth; English, Mike

    2017-04-05

    Hospital mortality data can inform planning for health interventions and may help optimize resource allocation if they are reliable and appropriately interpreted. However, such data are often not available in low-income countries, including Kenya. Data from the Clinical Information Network covering 12 county hospitals' paediatric admissions aged 2-59 months for the period September 2013 to March 2015 were used to describe mortality across differing contexts and to explore whether simple clinical characteristics used to classify severity of illness in common treatment guidelines are consistently associated with inpatient mortality. Regression models accounting for hospital identity and malaria prevalence (low or high) were used. Multiple imputation for missing data was based on a missing-at-random assumption, with sensitivity analyses based on pattern-mixture missing-not-at-random assumptions. The overall cluster-adjusted crude mortality rate across hospitals was 6.2% (95% CI 4.9 to 7.8), with an almost 5-fold variation across sites (range 2.1% to 11.0%). Hospital identity was significantly associated with mortality. Clinical features included in guidelines for common diseases to assess severity of illness were consistently associated with mortality in multivariable analyses (AROC = 0.86). All-cause mortality is highly variable across hospitals and associated with clinical risk factors identified in disease-specific guidelines. A panel of these clinical features may provide a basic common data framework as part of improved health information systems to support evaluations of quality and outcomes of care at scale and inform health system strengthening efforts.
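
    The core regression the study describes, inpatient death modelled on guideline severity signs with a hospital term, can be sketched as below; the column names are hypothetical and the multiple-imputation step is omitted:

        import pandas as pd
        import statsmodels.formula.api as smf
        from sklearn.metrics import roc_auc_score

        df = pd.read_csv("admissions.csv")   # hypothetical file: one row per admission

        model = smf.logit(
            "died ~ C(hospital) + unable_to_drink + reduced_consciousness"
            " + severe_pallor + chest_indrawing", data=df).fit()

        auroc = roc_auc_score(df["died"], model.predict(df))   # cf. AROC = 0.86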

  3. Information giving and receiving in hematological malignancy consultations.

    PubMed

    Alexander, Stewart C; Sullivan, Amy M; Back, Anthony L; Tulsky, James A; Goldman, Roberta E; Block, Susan D; Stewart, Susan K; Wilson-Genderson, Maureen; Lee, Stephanie J

    2012-03-01

    Little is known about communication with patients suffering from hematologic malignancies, many of whom are seen by subspecialists in consultation at tertiary-care centers. These subspecialized consultations might provide the best examples of optimal physician-patient communication behaviors, given that these consultations tend to be lengthy, to occur between individuals who have not met before and may have no intention of an ongoing relationship, and which have a goal of providing treatment recommendations. The aim of this paper is to describe and quantify the content of the subspecialty consultation in regards to exchanging information and identify patient and provider characteristics associated with discussion elements. Audio-recorded consultations between 236 patients and 40 hematologists were coded for recommended communication practices. Multilevel models for dichotomous outcomes were created to test associations between patient, physician and consultation characteristics and key discussion elements. Discussions about the purpose of the visit and patient's knowledge about their disease were common. Other elements such as patient's preference for his/her role in decision-making, preferences for information, or understanding of presented information were less common. Treatment recommendations were provided in 97% of the consultations and unambiguous presentations of prognosis occurred in 81% of the consultations. Unambiguous presentations of prognosis were associated with non-White patient race, lower educational status, greater number of questions asked, and specific physician provider. Although some communication behaviors occur in most consultations, others are much less common and could help tailor the amount and type of information discussed. Approximately half of the patients are told unambiguous prognostic estimates for mortality or cure. Copyright © 2011 John Wiley & Sons, Ltd.

  4. Information giving and receiving in hematological malignancy consultations†

    PubMed Central

    Alexander, Stewart C.; Sullivan, Amy M.; Back, Anthony L.; Tulsky, James A.; Goldman, Roberta E.; Block, Susan D.; Stewart, Susan K.; Wilson-Genderson, Maureen; Lee, Stephanie J.

    2012-01-01

    Purpose Little is known about communication with patients suffering from hematologic malignancies, many of whom are seen by subspecialists in consultation at tertiary-care centers. These subspecialized consultations might provide the best examples of optimal physician–patient communication behaviors, given that these consultations tend to be lengthy, to occur between individuals who have not met before and may have no intention of an ongoing relationship, and which have a goal of providing treatment recommendations. The aim of this paper is to describe and quantify the content of the subspecialty consultation in regards to exchanging information and identify patient and provider characteristics associated with discussion elements. Methods Audio-recorded consultations between 236 patients and 40 hematologists were coded for recommended communication practices. Multilevel models for dichotomous outcomes were created to test associations between patient, physician and consultation characteristics and key discussion elements. Results Discussions about the purpose of the visit and patient’s knowledge about their disease were common. Other elements such as patient’s preference for his/her role in decision-making, preferences for information, or understanding of presented information were less common. Treatment recommendations were provided in 97% of the consultations and unambiguous presentations of prognosis occurred in 81% of the consultations. Unambiguous presentations of prognosis were associated with non-White patient race, lower educational status, greater number of questions asked, and specific physician provider. Conclusion Although some communication behaviors occur in most consultations, others are much less common and could help tailor the amount and type of information discussed. Approximately half of the patients are told unambiguous prognostic estimates for mortality or cure. PMID:21294221

  5. Prediction of breast cancer risk by genetic risk factors, overall and by hormone receptor status.

    PubMed

    Hüsing, Anika; Canzian, Federico; Beckmann, Lars; Garcia-Closas, Montserrat; Diver, W Ryan; Thun, Michael J; Berg, Christine D; Hoover, Robert N; Ziegler, Regina G; Figueroa, Jonine D; Isaacs, Claudine; Olsen, Anja; Viallon, Vivian; Boeing, Heiner; Masala, Giovanna; Trichopoulos, Dimitrios; Peeters, Petra H M; Lund, Eiliv; Ardanaz, Eva; Khaw, Kay-Tee; Lenner, Per; Kolonel, Laurence N; Stram, Daniel O; Le Marchand, Loïc; McCarty, Catherine A; Buring, Julie E; Lee, I-Min; Zhang, Shumin; Lindström, Sara; Hankinson, Susan E; Riboli, Elio; Hunter, David J; Henderson, Brian E; Chanock, Stephen J; Haiman, Christopher A; Kraft, Peter; Kaaks, Rudolf

    2012-09-01

    There is increasing interest in adding common genetic variants identified through genome-wide association studies (GWAS) to breast cancer risk prediction models. First results from such models showed modest benefits in terms of risk discrimination. Heterogeneity of breast cancer as defined by hormone-receptor status has not been considered in this context. In this study we investigated the predictive capacity of 32 GWAS-detected common variants for breast cancer risk, alone and in combination with classical risk factors, and for tumours with different hormone-receptor status. Within the Breast and Prostate Cancer Cohort Consortium, we analysed 6009 invasive breast cancer cases and 7827 matched controls of European ancestry, with data on classical breast cancer risk factors and 32 common gene variants identified through GWAS. Discriminatory ability with respect to breast cancer of specific hormone-receptor status was assessed with the age-adjusted and cohort-adjusted concordance statistic (AUROC(a)). Absolute risk scores were calculated with external reference data. Integrated discrimination improvement was used to measure improvements in risk prediction. We found a small but steady increase in discriminatory ability with increasing numbers of genetic variants included in the model (difference in AUROC(a) going from 2.7% to 4%). Discriminatory ability for all models varied strongly by hormone-receptor status. Adding information on common polymorphisms provides small but statistically significant improvements in the quality of breast cancer risk prediction models. We consistently observed better performance for receptor-positive cases, but the gain in discriminatory quality is not sufficient for clinical application.
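
    The incremental-value question can be posed in a few lines: fit risk models with and without an allele-count score and compare discrimination. The data here are synthetic, and the metric is a plain AUC rather than the study's cohort-adjusted concordance statistic:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        n = 4000
        classical = rng.normal(size=(n, 3))                   # classical risk factors
        alleles = rng.binomial(2, 0.3, size=(n, 32)).sum(axis=1, keepdims=True)
        logit = 0.5 * classical[:, 0] + 0.06 * (alleles[:, 0] - 19) - 1.0
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        base = LogisticRegression().fit(classical, y)
        X_full = np.hstack([classical, alleles])
        full = LogisticRegression().fit(X_full, y)

        auc_base = roc_auc_score(y, base.predict_proba(classical)[:, 1])
        auc_full = roc_auc_score(y, full.predict_proba(X_full)[:, 1])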

  6. Development of a Common Research Model for Applied CFD Validation Studies

    NASA Technical Reports Server (NTRS)

    Vassberg, John C.; Dehaan, Mark A.; Rivers, S. Melissa; Wahls, Richard A.

    2008-01-01

    The development of a wing/body/nacelle/pylon/horizontal-tail configuration for a common research model is presented, with focus on the aerodynamic design of the wing. Here, a contemporary transonic supercritical wing design is developed with aerodynamic characteristics that are well behaved and of high performance for configurations with and without the nacelle/pylon group. The horizontal tail is robustly designed for dive Mach number conditions and is suitably sized for typical stability and control requirements. The fuselage is representative of a wide-body commercial transport aircraft; it includes a wing-body fairing, as well as a scrubbing seal for the horizontal tail. The nacelle is a single-cowl, high-bypass-ratio, flow-through design with an exit area sized to achieve a natural unforced mass-flow ratio typical of commercial aircraft engines at cruise. The simplicity of this un-bifurcated nacelle geometry will facilitate grid generation efforts of subsequent CFD validation exercises. Detailed aerodynamic performance data have been generated for this model; however, this information is presented in such a manner as to not bias CFD predictions planned for the fourth AIAA CFD Drag Prediction Workshop, which incorporates this common research model into its blind test cases. The CFD results presented include wing pressure distributions with and without the nacelle/pylon, ML/D trend lines, and drag-divergence curves; the design point for the wing/body configuration is within 1% of its max-ML/D. Plans to test the common research model in the National Transonic Facility and the Ames 11-ft wind tunnels are also discussed.

  7. Connecting Common Genetic Polymorphisms to Protein Function: A Modular Project Sequence for Lecture or Lab

    ERIC Educational Resources Information Center

    Berndsen, Christopher E.; Young, Byron H.; McCormick, Quinlin J.; Enke, Raymond A.

    2016-01-01

    Single nucleotide polymorphisms (SNPs) in DNA can result in phenotypes where the biochemical basis may not be clear due to the lack of protein structures. With the growing number of modeling and simulation software available on the internet, students can now participate in determining how small changes in genetic information impact cellular…

  8. Simulating stand-level harvest prescriptions across landscapes: LANDIS PRO harvest module design

    Treesearch

    Jacob S. Fraser; Hong S. He; Stephen R. Shifley; Wen J. Wang; Frank R. Thompson

    2013-01-01

    Forest landscape models (FLMs) are an important tool for assessing the long-term cumulative effects of harvest over large spatial extents. However, they have not been commonly used to guide forest management planning and on-the-ground operations. This is largely because FLMs track relatively simplistic vegetation information such as age cohort presence/absence, forest...

  9. Effects of weighting schemes on the identification of wildlife corridors generated with least-cost methods

    Treesearch

    Sean A. Parks; Kevin S. McKelvey; Michael K. Schwartz

    2012-01-01

    The importance of movement corridors for maintaining connectivity within metapopulations of wild animals is a cornerstone of conservation. One common approach for determining corridor locations is least-cost corridor (LCC) modeling, which uses algorithms within a geographic information system to search for routes with the lowest cumulative resistance between target...

  10. Temporal Clustering and Sequencing in Short-Term Memory and Episodic Memory

    ERIC Educational Resources Information Center

    Farrell, Simon

    2012-01-01

    A model of short-term memory and episodic memory is presented, with the core assumptions that (a) people parse their continuous experience into episodic clusters and (b) items are clustered together in memory as episodes by binding information within an episode to a common temporal context. Along with the additional assumption that information…

  11. Medial Prefrontal Lesions in Mice Impair Sustained Attention but Spare Maintenance of Information in Working Memory

    ERIC Educational Resources Information Center

    Kahn, Julia B.; Ward, Ryan D.; Kahn, Lora W.; Rudy, Nicole M.; Kandel, Eric R.; Balsam, Peter D.; Simpson, Eleanor H.

    2012-01-01

    Working memory and attention are complex cognitive functions that are disrupted in several neuropsychiatric disorders. Mouse models of such human diseases are commonly subjected to maze-based tests that can neither distinguish between these cognitive functions nor isolate specific aspects of either function. Here, we have adapted a simple visual…

  12. Glossary of AWS Acrinabs. Acronyms, Initialisms, and Abbreviations Commonly Used in Air Weather Service

    DTIC Science & Technology

    1991-01-01

    FYDP ......... Five Year Defense Plan
    FSI .......... Fog Stability Index
    G ............ gravity, giga-
    GISM ......... Gridded ...
    GOES-TAP ..... GOES imagery processing & dissemination system
    GCS .......... grid course
    GOFS ......... Global Ocean Flux Study
    GRID ......... Global Resource Information Data-Base
    GEMAG ........ geomagnetic
    GRIST ........ grazing-incidence solar ...

  13. Collaborative filtering on a family of biological targets.

    PubMed

    Erhan, Dumitru; L'heureux, Pierre-Jean; Yue, Shi Yi; Bengio, Yoshua

    2006-01-01

    Building a QSAR model of a new biological target for which few screening data are available is a statistical challenge. However, the new target may be part of a bigger family, for which we have more screening data. Collaborative filtering or, more generally, multi-task learning, is a machine learning approach that improves the generalization performance of an algorithm by using information from related tasks as an inductive bias. We use collaborative filtering techniques for building predictive models that link multiple targets to multiple examples. The more commonalities between the targets, the better the multi-target model that can be built. We show an example of a multi-target neural network that can use family information to produce a predictive model of an undersampled target. We evaluate JRank, a kernel-based method designed for collaborative filtering. We show their performance on compound prioritization for an HTS campaign and the underlying shared representation between targets. JRank outperformed the neural network both in the single- and multi-target models.
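
    The shared-representation idea can be sketched as one trunk learned from all targets in the family with a small head per target, so that an undersampled target borrows strength from its relatives. The architecture, sizes and masking scheme are illustrative, not the authors':

        import torch
        import torch.nn as nn

        class MultiTargetNet(nn.Module):
            def __init__(self, n_features, n_targets, hidden=64):
                super().__init__()
                self.trunk = nn.Sequential(                  # shared across targets
                    nn.Linear(n_features, hidden), nn.ReLU())
                self.heads = nn.ModuleList(                  # one output per target
                    [nn.Linear(hidden, 1) for _ in range(n_targets)])

            def forward(self, x):
                h = self.trunk(x)
                return torch.cat([head(h) for head in self.heads], dim=1)

        def masked_bce(logits, labels, mask):
            """Compounds never screened against a target contribute nothing
            to that target's head."""
            loss = nn.functional.binary_cross_entropy_with_logits(
                logits, labels, reduction="none")
            return (loss * mask).sum() / mask.sum()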

  14. Metrological traceability in education: A practical online system for measuring and managing middle school mathematics instruction

    NASA Astrophysics Data System (ADS)

    Torres Irribarra, D.; Freund, R.; Fisher, W.; Wilson, M.

    2015-02-01

    Computer-based, online assessments modelled, designed, and evaluated for adaptively administered invariant measurement are uniquely suited to defining and maintaining traceability to standardized units in education. An assessment of this kind is embedded in the Assessing Data Modeling and Statistical Reasoning (ADM) middle school mathematics curriculum. Diagnostic information about middle school students' learning of statistics and modeling is provided via computer-based formative assessments for seven constructs that comprise a learning progression for statistics and modeling from late elementary through the middle school grades. The seven constructs are: Data Display, Meta-Representational Competence, Conceptions of Statistics, Chance, Modeling Variability, Theory of Measurement, and Informal Inference. The end product is a web-delivered system built with Ruby on Rails for use by curriculum development teams working with classroom teachers in designing, developing, and delivering formative assessments. The online accessible system allows teachers to accurately diagnose students' unique comprehension and learning needs in a common language of real-time assessment, logging, analysis, feedback, and reporting.
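
    Adaptive, invariant measurement of this kind is commonly built on a Rasch model, where the next item administered is the one with maximum Fisher information at the current ability estimate. The sketch below shows that generic logic, not the ADM system's actual implementation:

        import numpy as np

        def rasch_p(theta, b):
            """Probability of a correct response at ability theta, difficulty b."""
            return 1.0 / (1.0 + np.exp(-(theta - b)))

        def next_item(theta, difficulties, administered):
            """Maximum-information item selection: I(theta) = p * (1 - p)."""
            p = rasch_p(theta, np.asarray(difficulties, dtype=float))
            info = p * (1.0 - p)
            info[list(administered)] = -np.inf      # exclude items already given
            return int(np.argmax(info))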

  15. Building Interoperable FHIR-Based Vocabulary Mapping Services: A Case Study of OHDSI Vocabularies and Mappings.

    PubMed

    Jiang, Guoqian; Kiefer, Richard; Prud'hommeaux, Eric; Solbrig, Harold R

    2017-01-01

    The OHDSI Common Data Model (CDM) is a deep information model, in which its vocabulary component plays a critical role in enabling consistent coding and query of clinical data. The objective of the study is to create methods and tools to expose the OHDSI vocabularies and mappings as the vocabulary mapping services using two HL7 FHIR core terminology resources ConceptMap and ValueSet. We discuss the benefits and challenges in building the FHIR-based terminology services.
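
    A minimal FHIR R4 ConceptMap of the kind such a service would expose, written as a Python dict; the systems and codes are placeholders, not actual OHDSI mappings:

        concept_map = {
            "resourceType": "ConceptMap",
            "status": "active",
            "group": [{
                "source": "http://hl7.org/fhir/sid/icd-9-cm",   # source code system
                "target": "http://snomed.info/sct",             # target code system
                "element": [{
                    "code": "250.00",                           # placeholder source code
                    "target": [{
                        "code": "44054006",                     # placeholder target code
                        "equivalence": "equivalent"
                    }]
                }]
            }]
        }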

  16. Evolution of System Architectures: Where Do We Need to Fail Next?

    NASA Astrophysics Data System (ADS)

    Bermudez, Luis; Alameh, Nadine; Percivall, George

    2013-04-01

    Innovation requires testing and failing. Thomas Edison was right when he said "I have not failed. I've just found 10,000 ways that won't work". For innovation and improvement of standards to happen, service architectures have to be tested again and again. Within the Open Geospatial Consortium (OGC), testing of service architectures has occurred for the last 15 years. This talk will present the evolution of these service architectures and a possible future path. OGC is a global forum for the collaboration of developers and users of spatial data products and services, and for the advancement and development of international standards for geospatial interoperability. The OGC Interoperability Program is a series of hands-on, fast-paced engineering initiatives to accelerate the development and acceptance of OGC standards. Each initiative is organized in threads that provide focus under a particular theme. The first testbed, OGC Web Services phase 1, completed in 2003, had four threads: Common Architecture, Web Mapping, Sensor Web and Web Imagery Enablement. Common Architecture was a cross-thread theme, ensuring that the Web Mapping and Sensor Web experiments built on a common base architecture. The architecture was based on the three main SOA components: Broker, Requestor and Provider. It proposed a general service model defining service interactions and dependencies; categorization of service types; registries to allow discovery and access of services; data models and encodings; and common services (WMS, WFS, WCS). For the latter, there was a clear distinction between the different services: Data Services (e.g. WMS), Application Services (e.g. coordinate transformation) and server-side client applications (e.g. image exploitation). The latest testbed, OGC Web Services phase 9, completed in 2012, had five threads: Aviation, Cross-Community Interoperability (CCI), Security and Services Interoperability (SSI), OWS Innovations, and Compliance & Interoperability Testing & Evaluation (CITE). Compared to the first testbed, OWS-9 did not have a separate common architecture thread. Instead the emphasis was on brokering information models, securing them, and making data available efficiently on mobile devices. The outcome is an architecture based on usability and non-intrusiveness while leveraging mediation of information models from different communities. This talk will use lessons learned from the evolution from OGC Testbed phase 1 to phase 9 to better understand how global and complex infrastructures evolve to support many communities, including the Earth System Science community.

  17. A Research Agenda for the Common Core State Standards: What Information Do Policymakers Need?

    ERIC Educational Resources Information Center

    Rentner, Diane Stark; Ferguson, Maria

    2014-01-01

    This report looks specifically at the information and data needs of policymakers related to the Common Core State Standards (CCSS) and the types of research that could provide this information. The ideas in this report were informed by a series of meetings and discussions about a possible research agenda for the Common Core, sponsored by the…

  18. Information Model Translation to Support a Wider Science Community

    NASA Astrophysics Data System (ADS)

    Hughes, John S.; Crichton, Daniel; Ritschel, Bernd; Hardman, Sean; Joyner, Ronald

    2014-05-01

    The Planetary Data System (PDS), NASA's long-term archive for solar system exploration data, has just released PDS4, a modernization of the PDS architecture, data standards, and technical infrastructure. This next generation system positions the PDS to meet the demands of the coming decade, including big data, international cooperation, distributed nodes, and multiple ways of analysing and interpreting data. It also addresses three fundamental project goals: providing more efficient data delivery by data providers to the PDS, enabling a stable, long-term usable planetary science data archive, and enabling services for the data consumer to find, access, and use the data they require in contemporary data formats. The PDS4 information architecture is used to describe all PDS data using a common model. Captured in an ontology modeling tool it supports a hierarchy of data dictionaries built to the ISO/IEC 11179 standard and is designed to increase flexibility, enable complex searches at the product level, and to promote interoperability that facilitates data sharing both nationally and internationally. A PDS4 information architecture design requirement stipulates that the content of the information model must be translatable to external data definition languages such as XML Schema, XMI/XML, and RDF/XML. To support the semantic Web standards we are now in the process of mapping the contents into RDF/XML to support SPARQL capable databases. We are also building a terminological ontology to support virtually unified data retrieval and access. This paper will provide an overview of the PDS4 information architecture focusing on its domain information model and how the translation and mapping are being accomplished.
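
    The RDF/XML translation described can be sketched with rdflib, emitting triples for one product attribute; the namespace and property names are illustrative, not the PDS4 vocabulary itself:

        from rdflib import Graph, Literal, Namespace, RDF

        PDS = Namespace("http://example.org/pds4/")     # illustrative namespace
        g = Graph()

        product = PDS["product/urn-nasa-pds-sample"]    # hypothetical identifier
        g.add((product, RDF.type, PDS.Product_Observational))
        g.add((product, PDS.title, Literal("Sample Observation")))

        print(g.serialize(format="xml"))                # RDF/XML output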

  19. A Parameter Subset Selection Algorithm for Mixed-Effects Models

    DOE PAGES

    Schmidt, Kathleen L.; Smith, Ralph C.

    2016-01-01

    Mixed-effects models are commonly used to statistically model phenomena that include attributes associated with a population or general underlying mechanism as well as effects specific to individuals or components of the general mechanism. This can include individual effects associated with data from multiple experiments. However, the parameterizations used to incorporate the population and individual effects are often unidentifiable in the sense that parameters are not uniquely specified by the data. As a result, the current literature focuses on model selection, by which insensitive parameters are fixed or removed from the model. Model selection methods that employ information criteria are applicable to both linear and nonlinear mixed-effects models, but such techniques are limited in that they are computationally prohibitive for large problems due to the number of possible models that must be tested. To limit the scope of possible models for model selection via information criteria, we introduce a parameter subset selection (PSS) algorithm for mixed-effects models, which orders the parameters by their significance. In conclusion, we provide examples to verify the effectiveness of the PSS algorithm and to test the performance of mixed-effects model selection that makes use of parameter subset selection.
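
    One standard way to order parameters by significance, in the spirit of a PSS algorithm, is QR factorization with column pivoting on the parameter sensitivity matrix; this generic sketch is not the authors' exact algorithm:

        import numpy as np
        from scipy.linalg import qr

        def rank_parameters(S):
            """S: (n_observations x n_parameters) sensitivity matrix with
            S[i, j] = d y_i / d p_j. Pivoting orders parameters from most to
            least identifiable; small |diag(R)| flags candidates to fix or
            remove before information-criterion model selection."""
            _, R, piv = qr(S, pivoting=True)
            scores = np.abs(np.diag(R))
            return piv, scores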

  20. To Control False Positives in Gene-Gene Interaction Analysis: Two Novel Conditional Entropy-Based Approaches

    PubMed Central

    Lin, Meihua; Li, Haoli; Zhao, Xiaolei; Qin, Jiheng

    2013-01-01

    Genome-wide analysis of gene-gene interactions has been recognized as a powerful avenue to identify the missing genetic components that can not be detected by using current single-point association analysis. Recently, several model-free methods (e.g. the commonly used information based metrics and several logistic regression-based metrics) were developed for detecting non-linear dependence between genetic loci, but they are potentially at the risk of inflated false positive error, in particular when the main effects at one or both loci are salient. In this study, we proposed two conditional entropy-based metrics to challenge this limitation. Extensive simulations demonstrated that the two proposed metrics, provided the disease is rare, could maintain consistently correct false positive rate. In the scenarios for a common disease, our proposed metrics achieved better or comparable control of false positive error, compared to four previously proposed model-free metrics. In terms of power, our methods outperformed several competing metrics in a range of common disease models. Furthermore, in real data analyses, both metrics succeeded in detecting interactions and were competitive with the originally reported results or the logistic regression approaches. In conclusion, the proposed conditional entropy-based metrics are promising as alternatives to current model-based approaches for detecting genuine epistatic effects. PMID:24339984
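
    A plug-in estimate of a conditional entropy-based dependence measure, I(G1; G2 | Y) = H(G1|Y) + H(G2|Y) - H(G1, G2|Y), computed from a genotype-by-phenotype contingency table; this illustrates the general family of metrics, not the paper's two specific proposals:

        import numpy as np

        def H(p):
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        def conditional_mi(counts):
            """counts: (3, 3, 2) table over locus A genotype, locus B genotype,
            and case/control status Y."""
            p = np.asarray(counts, dtype=float)
            p /= p.sum()
            py = p.sum(axis=(0, 1))                      # P(Y)
            cmi = 0.0
            for y in range(p.shape[2]):
                pab = p[:, :, y] / py[y]                 # P(A, B | Y=y)
                cmi += py[y] * (H(pab.sum(axis=1)) + H(pab.sum(axis=0))
                                - H(pab.ravel()))
            return cmi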

  1. Bio-Optics of the Chesapeake Bay from Measurements and Radiative Transfer Calculations

    NASA Technical Reports Server (NTRS)

    Tzortziou, Maria; Herman, Jay R.; Gallegos, Charles L.; Neale, Patrick J.; Subramaniam, Ajit; Harding, Lawrence W., Jr.; Ahmad, Ziauddin

    2005-01-01

    We combined detailed bio-optical measurements and radiative transfer (RT) modeling to perform an optical closure experiment for optically complex and biologically productive Chesapeake Bay waters. We used this experiment to evaluate certain assumptions commonly used when modeling bio-optical processes, and to investigate the relative importance of several optical characteristics needed to accurately model and interpret remote sensing ocean-color observations in these Case 2 waters. Direct measurements were made of the magnitude, variability, and spectral characteristics of backscattering and absorption that are critical for accurate parameterizations in satellite bio-optical algorithms and underwater RT simulations. We found that the ratio of backscattering to total scattering in the mid-mesohaline Chesapeake Bay varied considerably depending on particulate loading, distance from land, and mixing processes, and had an average value of 0.0128 at 530 nm. Incorporating information on the magnitude, variability, and spectral characteristics of particulate backscattering into the RT model, rather than using a volume scattering function commonly assumed for turbid waters, was critical to obtaining agreement between RT calculations and measured radiometric quantities. In situ measurements of absorption coefficients need to be corrected for systematic overestimation due to scattering errors, and this correction commonly employs the assumption that absorption by particulate matter at near infrared wavelengths is zero.

  2. Using the Violence Risk Scale-Sexual Offense version in sexual violence risk assessments: Updated risk categories and recidivism estimates from a multisite sample of treated sexual offenders.

    PubMed

    Olver, Mark E; Mundt, James C; Thornton, David; Beggs Christofferson, Sarah M; Kingston, Drew A; Sowden, Justina N; Nicholaichuk, Terry P; Gordon, Audrey; Wong, Stephen C P

    2018-04-30

    The present study sought to develop updated risk categories and recidivism estimates for the Violence Risk Scale-Sexual Offense version (VRS-SO; Wong, Olver, Nicholaichuk, & Gordon, 2003-2017), a sexual offender risk assessment and treatment planning tool. The overarching purpose was to increase the clarity and accuracy of communicating risk assessment information that includes a systematic incorporation of new information (i.e., change) to modify risk estimates. Four treated samples of sexual offenders with VRS-SO pretreatment, posttreatment, and Static-99R ratings were combined with a minimum follow-up period of 10-years postrelease (N = 913). Logistic regression was used to model 5- and 10-year sexual and violent (including sexual) recidivism estimates across 6 different regression models employing specific risk and change score information from the VRS-SO and/or Static-99R. A rationale is presented for clinical applications of select models and the necessity of controlling for baseline risk when utilizing change information across repeated assessments. Information concerning relative risk (percentiles) and absolute risk (recidivism estimates) is integrated with common risk assessment language guidelines to generate new risk categories for the VRS-SO. Guidelines for model selection and forensic clinical application of the risk estimates are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
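
    The point the authors stress, that change scores are interpretable only after controlling for baseline risk, reduces to a regression of the following shape; the data and coefficients are synthetic, not the published estimates:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 500
        pre = rng.normal(40, 10, n)           # pretreatment VRS-SO score (baseline risk)
        change = rng.normal(-4, 3, n)         # pre-to-post change score
        logit = -4.0 + 0.07 * pre + 0.10 * change
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        X = sm.add_constant(np.column_stack([pre, change]))
        fit = sm.Logit(y, X).fit(disp=False)

        p5 = fit.predict([[1.0, 45.0, -6.0]])  # recidivism estimate for one new case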

  3. Extended Graph-Based Models for Enhanced Similarity Search in Cavbase.

    PubMed

    Krotzky, Timo; Fober, Thomas; Hüllermeier, Eyke; Klebe, Gerhard

    2014-01-01

    To calculate similarities between molecular structures, measures based on the maximum common subgraph are frequently applied. For the comparison of protein binding sites, these measures are not fully appropriate, since graphs representing binding sites at a detailed atomic level tend to get very large. In combination with an NP-hard problem, a large graph leads to a computationally demanding task. Therefore, for the comparison of binding sites, a less detailed, coarse graph model is used, building upon so-called pseudocenters. As a consequence, structural information is lost, since many atoms are discarded and no information about the shape of the binding site is considered. This is usually resolved by performing subsequent calculations based on additional information. These steps are usually quite expensive, making the whole approach very slow. The main drawback of a graph-based model solely based on pseudocenters, however, is the loss of information about the shape of the protein surface. In this study, we propose a novel and efficient modeling formalism that does not increase the size of the graph model compared to the original approach, but leads to graphs containing considerably more information in the nodes. More specifically, additional descriptors considering surface characteristics are extracted from the local surface and attributed to the pseudocenters stored in Cavbase. These properties are evaluated as additional node labels, which leads to a gain of information and allows for much faster but still very accurate comparisons between different structures.
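
    The effect of richer node labels can be sketched with a generic labeled common-subgraph search, in which two pseudocenters match only if both the physicochemical type and the surface descriptor agree; networkx's ISMAGS routine stands in here for Cavbase's own comparison algorithm:

        import networkx as nx
        from networkx.algorithms import isomorphism as iso

        def make_site(centers, edges):
            """centers: (id, type, surface_bin) pseudocenters; edges connect
            centers within a distance cutoff (all values hypothetical)."""
            g = nx.Graph()
            for cid, ptype, surf in centers:
                g.add_node(cid, ptype=ptype, surf=surf)
            g.add_edges_from(edges)
            return g

        site_a = make_site([(1, "donor", "convex"), (2, "acceptor", "flat")], [(1, 2)])
        site_b = make_site([(10, "donor", "convex"), (11, "acceptor", "flat")], [(10, 11)])

        # Nodes are compatible only if type AND surface descriptor agree.
        node_match = iso.categorical_node_match(["ptype", "surf"], [None, None])
        ismags = iso.ISMAGS(site_a, site_b, node_match=node_match)
        common = next(ismags.largest_common_subgraph(), {})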

  4. Cumulative impact of common genetic variants and other risk factors on colorectal cancer risk in 42,103 individuals

    PubMed Central

    Dunlop, Malcolm G.; Tenesa, Albert; Farrington, Susan M.; Ballereau, Stephane; Brewster, David H.; Pharoah, Paul DP.; Schafmayer, Clemens; Hampe, Jochen; Völzke, Henry; Chang-Claude, Jenny; Hoffmeister, Michael; Brenner, Hermann; von Holst, Susanna; Picelli, Simone; Lindblom, Annika; Jenkins, Mark A.; Hopper, John L.; Casey, Graham; Duggan, David; Newcomb, Polly; Abulí, Anna; Bessa, Xavier; Ruiz-Ponte, Clara; Castellví-Bel, Sergi; Niittymäki, Iina; Tuupanen, Sari; Karhu, Auli; Aaltonen, Lauri; Zanke, Brent W.; Hudson, Thomas J.; Gallinger, Steven; Barclay, Ella; Martin, Lynn; Gorman, Maggie; Carvajal-Carmona, Luis; Walther, Axel; Kerr, David; Lubbe, Steven; Broderick, Peter; Chandler, Ian; Pittman, Alan; Penegar, Steven; Campbell, Harry; Tomlinson, Ian; Houlston, Richard S.

    2016-01-01

    Objective Colorectal cancer (CRC) has a substantial heritable component. Common genetic variation has been shown to contribute to CRC risk. In a large, multi-population study, we set out to assess the feasibility of CRC risk prediction using common genetic variant data, combined with other risk factors. We built a risk prediction model and applied it to the Scottish population using available data. Design Nine populations of European descent were studied to develop and validate colorectal cancer risk prediction models. Binary logistic regression was used to assess the combined effect of age, gender, family history (FH) and genotypes at 10 susceptibility loci that individually only modestly influence colorectal cancer risk. Risk models were generated from case-control data incorporating genotypes alone (n=39,266), and in combination with gender, age and family history (n=11,324). Model discriminatory performance was assessed using 10-fold internal cross-validation and externally using 4,187 independent samples. 10-year absolute risk was estimated by modelling genotype and FH with age- and gender-specific population risks. Results The median number of risk alleles was greater in cases than controls (10 vs 9, p < 2.2 × 10^-16), confirmed in external validation sets (Sweden p = 1.2 × 10^-6, Finland p = 2 × 10^-5). The mean per-allele increase in risk was 9% (OR 1.09; 95% CI 1.05–1.13). Discriminative performance was poor across the risk spectrum (area under the curve (AUC) for genotypes alone = 0.57; AUC for genotype/age/gender/FH = 0.59). However, modelling genotype data, FH, age and gender with Scottish population data shows the practicalities of identifying a subgroup with >5% predicted 10-year absolute risk. Conclusion We show that genotype data provide additional information that complements age, gender and FH as risk factors. However, individualized genetic risk prediction is not currently feasible. Nonetheless, the modelling exercise suggests public health potential, since it is possible to stratify the population into CRC risk categories, thereby informing targeted prevention and surveillance. PMID:22490517

  5. Automatic classification of animal vocalizations

    NASA Astrophysics Data System (ADS)

    Clemins, Patrick J.

    2005-11-01

    Bioacoustics, the study of animal vocalizations, has begun to use increasingly sophisticated analysis techniques in recent years. Some common tasks in bioacoustics are repertoire determination, call detection, individual identification, stress detection, and behavior correlation. Each research study, however, uses a wide variety of different measured variables, called features, and classification systems to accomplish these tasks. The well-established field of human speech processing has developed a number of different techniques to perform many of the aforementioned bioacoustics tasks. Mel-frequency cepstral coefficients (MFCCs) and perceptual linear prediction (PLP) coefficients are two popular feature sets. The hidden Markov model (HMM), a statistical model similar to a finite automaton, is the most commonly used supervised classification model and is capable of modeling both temporal and spectral variations. This research designs a framework that applies models from human speech processing to bioacoustic analysis tasks. The development of the generalized perceptual linear prediction (gPLP) feature extraction model is one of the more important novel contributions of the framework. Perceptual information from the species under study can be incorporated into the gPLP feature extraction model to represent the vocalizations as the animals might perceive them. By including this perceptual information and modifying parameters of the HMM classification system, this framework can be applied to a wide range of species. The effectiveness of the framework is shown by analyzing African elephant and beluga whale vocalizations. The features extracted from the African elephant data are used as input to a supervised classification system and compared to results from traditional statistical tests. The gPLP features extracted from the beluga whale data are used in an unsupervised classification system and the results are compared to labels assigned by experts. The development of a framework from which to build animal vocalization classifiers will provide bioacoustics researchers with a consistent platform to analyze and classify vocalizations. A common framework will also allow studies to compare results across species and institutions. In addition, the use of automated classification techniques can speed analysis and uncover behavioral correlations not readily apparent using traditional techniques.
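
    The human-speech analogue of this pipeline can be sketched with open tools: per-frame spectral features, one HMM per call type, and classification by maximum likelihood. MFCCs stand in for the study's gPLP features, and the training dictionary is hypothetical:

        import numpy as np
        import librosa
        from hmmlearn.hmm import GaussianHMM

        def features(path):
            y, sr = librosa.load(path, sr=None)
            return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # frames x coeffs

        models = {}
        for label, paths in training_set.items():    # hypothetical {label: [wav paths]}
            feats = [features(p) for p in paths]
            X = np.vstack(feats)
            lengths = [f.shape[0] for f in feats]
            models[label] = GaussianHMM(n_components=5).fit(X, lengths)

        def classify(path):
            obs = features(path)
            return max(models, key=lambda lbl: models[lbl].score(obs))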

  6. What Are Common Traumatic Brain Injury (TBI) Symptoms?

    MedlinePlus


  7. The Materials Commons: A Collaboration Platform and Information Repository for the Global Materials Community

    NASA Astrophysics Data System (ADS)

    Puchala, Brian; Tarcea, Glenn; Marquis, Emmanuelle. A.; Hedstrom, Margaret; Jagadish, H. V.; Allison, John E.

    2016-08-01

    Accelerating the pace of materials discovery and development requires new approaches and means of collaborating and sharing information. To address this need, we are developing the Materials Commons, a collaboration platform and information repository for use by the structural materials community. The Materials Commons has been designed to be a continuous, seamless part of the scientific workflow process. Researchers upload the results of experiments and computations as they are performed, automatically where possible, along with the provenance information describing the experimental and computational processes. The Materials Commons website provides an easy-to-use interface for uploading and downloading data and data provenance, as well as for searching and sharing data. This paper provides an overview of the Materials Commons. Concepts are also outlined for integrating the Materials Commons with the broader Materials Information Infrastructure that is evolving to support the Materials Genome Initiative.

  8. High Resolution Surface Geometry and Albedo by Combining Laser Altimetry and Visible Images

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; vonToussaint, Udo; Cheeseman, Peter C.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    The need for accurate geometric and radiometric information over large areas has become increasingly important. Laser altimetry is one of the key technologies for obtaining this geometric information. However, there are important application areas where the observing platform has its orbit constrained by the other instruments it is carrying, and so the spatial resolution that can be recorded by the laser altimeter is limited. In this paper we show how information recorded by one of the other instruments commonly carried, a high-resolution imaging camera, can be combined with the laser altimeter measurements to give a high resolution estimate both of the surface geometry and its reflectance properties. This estimate has an accuracy unavailable from other interpolation methods. We present the results from combining synthetic laser altimeter measurements on a coarse grid with images generated from a surface model to re-create the surface model.

  9. Informational privacy and the public's health: the Model State Public Health Privacy Act.

    PubMed

    Gostin, L O; Hodge, J G; Valdiserri, R O

    2001-09-01

    Protecting public health requires the acquisition, use, and storage of extensive health-related information about individuals. The electronic accumulation and exchange of personal data promises significant public health benefits but also threatens individual privacy; breaches of privacy can lead to individual discrimination in employment, insurance, and government programs. Individuals concerned about privacy invasions may avoid clinical or public health tests, treatments, or research. Although individual privacy protections are critical, comprehensive federal privacy protections do not adequately protect public health data, and existing state privacy laws are inconsistent and fragmented. The Model State Public Health Privacy Act provides strong privacy safeguards for public health data while preserving the ability of state and local public health departments to act for the common good.

  10. An approach to 3D model fusion in GIS systems and its application in a future ECDIS

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Zhao, Depeng; Pan, Mingyang

    2016-04-01

    Three-dimensional (3D) computer graphics technology is widely used in various areas and causes profound changes. As an information carrier, 3D models are becoming increasingly important. The use of 3D models greatly helps to improve the cartographic expression and design. 3D models are more visually efficient, quicker and easier to understand and they can express more detailed geographical information. However, it is hard to efficiently and precisely fuse 3D models in local systems. The purpose of this study is to propose an automatic and precise approach to fuse 3D models in geographic information systems (GIS). It is the basic premise for subsequent uses of 3D models in local systems, such as attribute searching, spatial analysis, and so on. The basic steps of our research are: (1) pose adjustment by principal component analysis (PCA); (2) silhouette extraction by simple mesh silhouette extraction and silhouette merger; (3) size adjustment; (4) position matching. Finally, we implement the above methods in our system Automotive Intelligent Chart (AIC) 3D Electronic Chart Display and Information Systems (ECDIS). The fusion approach we propose is a common method and each calculation step is carefully designed. This approach solves the problem of cross-platform model fusion. 3D models can be from any source. They may be stored in the local cache or retrieved from Internet, or may be manually created by different tools or automatically generated by different programs. The system can be any kind of 3D GIS system.
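
    Step (1), pose adjustment by PCA, amounts to re-expressing a model's vertices in the frame of their principal axes; a numpy sketch of that single step (the silhouette and matching stages are not shown):

        import numpy as np

        def pca_pose_adjust(vertices):
            """vertices: (n, 3) array. Center the model and rotate it so its
            principal axes align with x, y, z (largest variance first)."""
            centered = vertices - vertices.mean(axis=0)
            eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
            axes = eigvecs[:, np.argsort(eigvals)[::-1]]    # descending variance
            if np.linalg.det(axes) < 0:                     # keep a right-handed frame
                axes[:, -1] *= -1.0
            return centered @ axes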

  11. Probing dynamical symmetry breaking using quantum-entangled photons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Hao; Piryatinski, Andrei; Jerke, Jonathan

    Here, we present an input/output analysis of photon-correlation experiments whereby a quantum mechanically entangled bi-photon state interacts with a material sample placed in one arm of a Hong–Ou–Mandel apparatus. We show that the output signal contains detailed information about subsequent entanglement with the microscopic quantum states in the sample. In particular, we apply the method to an ensemble of emitters interacting with a common photon mode within the open-system Dicke model. Our results indicate considerable dynamical information concerning spontaneous symmetry breaking can be revealed with such an experimental system.

  12. Probing dynamical symmetry breaking using quantum-entangled photons

    DOE PAGES

    Li, Hao; Piryatinski, Andrei; Jerke, Jonathan; ...

    2017-11-15

    Here, we present an input/output analysis of photon-correlation experiments whereby a quantum mechanically entangled bi-photon state interacts with a material sample placed in one arm of a Hong–Ou–Mandel apparatus. We show that the output signal contains detailed information about subsequent entanglement with the microscopic quantum states in the sample. In particular, we apply the method to an ensemble of emitters interacting with a common photon mode within the open-system Dicke model. Our results indicate considerable dynamical information concerning spontaneous symmetry breaking can be revealed with such an experimental system.

  13. Hybrid ontology for semantic information retrieval model using keyword matching indexing system.

    PubMed

    Uthayan, K R; Mala, G S Anandha

    2015-01-01

    An ontology makes explicit the concepts of an information domain that are common to a group of users. Incorporating ontologies into information retrieval is an established way to improve the relevance of the results that users retrieve. Matching query keywords against historical or domain information is important in current systems for finding the best match for a given input query. This research presents an improved querying mechanism for information retrieval that integrates ontology queries with keyword search. The ontology-based query is translated into a first-order predicate logic query, which is used to route the query to the appropriate servers. Matching algorithms are an active area of research in computer science and artificial intelligence. In text matching, it is more reliable to model the semantics of both the query and the text and to test for semantic matches. This research develops semantic matching between input queries and the information in the ontology domain. The contributed algorithm is a hybrid method based on matching instances extracted from the queries against the information domain. Semantic matching between the queries and the information domain is used to discover the best match and to speed up query execution. In conclusion, the hybrid ontology approach on the semantic web retrieves documents more effectively than a standard ontology.
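
    A minimal sketch of the hybrid idea: expand the query with ontology concepts, then rank documents by plain keyword overlap. The tiny ontology, documents, and scoring below are invented for illustration and omit the paper's predicate-logic routing step.

        # Hypothetical micro-ontology mapping a concept to related terms.
        ONTOLOGY = {
            "car": {"automobile", "vehicle"},
            "price": {"cost", "fare"},
        }

        def expand_query(terms):
            """Ontology step: add related concepts to the raw keywords."""
            expanded = set(terms)
            for t in terms:
                expanded |= ONTOLOGY.get(t, set())
            return expanded

        def score(doc_tokens, query_terms):
            """Keyword step: overlap between document and expanded query."""
            return len(set(doc_tokens) & expand_query(query_terms))

        docs = {"d1": "used car cost report".split(),
                "d2": "train fare table".split()}
        print(sorted(docs, key=lambda d: score(docs[d], ["car", "price"]),
                     reverse=True))   # d1 matches both concepts, d2 only one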

  14. Hybrid Ontology for Semantic Information Retrieval Model Using Keyword Matching Indexing System

    PubMed Central

    Uthayan, K. R.; Anandha Mala, G. S.

    2015-01-01

    An ontology makes explicit the concepts of an information domain that are common to a group of users. Incorporating ontologies into information retrieval is an established way to improve the relevance of the results that users retrieve. Matching query keywords against historical or domain information is important in current systems for finding the best match for a given input query. This research presents an improved querying mechanism for information retrieval that integrates ontology queries with keyword search. The ontology-based query is translated into a first-order predicate logic query, which is used to route the query to the appropriate servers. Matching algorithms are an active area of research in computer science and artificial intelligence. In text matching, it is more reliable to model the semantics of both the query and the text and to test for semantic matches. This research develops semantic matching between input queries and the information in the ontology domain. The contributed algorithm is a hybrid method based on matching instances extracted from the queries against the information domain. Semantic matching between the queries and the information domain is used to discover the best match and to speed up query execution. In conclusion, the hybrid ontology approach on the semantic web retrieves documents more effectively than a standard ontology. PMID:25922851

  15. Porphyry copper deposits of the world: database, map, and grade and tonnage models

    USGS Publications Warehouse

    Singer, Donald A.; Berger, Vladimir Iosifovich; Moring, Barry C.

    2005-01-01

    Mineral deposit models are important in exploration planning and quantitative resource assessments for two reasons: (1) grades and tonnages among deposit types are significantly different, and (2) many types occur in different geologic settings that can be identified from geologic maps. Mineral deposit models are the keystone in combining the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Too few thoroughly explored mineral deposits are available in most local areas for reliable identification of the important geoscience variables or for robust estimation of undiscovered deposits; thus we need mineral deposit models. Globally based deposit models allow recognition of important features because the global models demonstrate how common different features are. Well-designed and -constructed deposit models allow geologists to know from observed geologic environments the possible mineral deposit types that might exist, and allow economists to determine the possible economic viability of these resources in the region. Thus, mineral deposit models play the central role in transforming geoscience information to a form useful to policy makers. The foundation of mineral deposit models is information about known deposits; the purpose of this publication is to make this kind of information available in digital form for porphyry copper deposits. This report is an update of an earlier publication about porphyry copper deposits. In this report we have added 84 new porphyry copper deposits and removed 12 deposits. In addition, some errors have been corrected and a number of deposits have had some information, such as grades, tonnages, locations, or ages revised. This publication contains a computer file of information on porphyry copper deposits from around the world. It also presents new grade and tonnage models for porphyry copper deposits and for three subtypes of porphyry copper deposits and a map showing the location of all deposits. The value of this information and any derived analyses depends critically on the consistent manner of data gathering. For this reason, we first discuss the rules used in this compilation. Next, the fields of the data file are considered. Finally, we provide new grade and tonnage models.
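
    Grade and tonnage models of this kind are conventionally summarized as quantiles of the roughly lognormal tonnage (and grade) distributions across well-explored deposits. A minimal sketch with invented tonnages:

        import numpy as np

        # Hypothetical tonnages (Mt) for a handful of deposits; the real
        # models are built from hundreds of deposits in the database.
        tonnage_mt = np.array([12., 45., 90., 140., 300., 620., 1500., 4100.])

        # Report the tonnage exceeded by 90%, 50% and 10% of deposits.
        for p in (90, 50, 10):
            print(f"P{p}: {np.percentile(tonnage_mt, 100 - p):.0f} Mt")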

  16. Acute Diarrheal Syndromic Surveillance

    PubMed Central

    Kam, H.J.; Choi, S.; Cho, J.P.; Min, Y.G.; Park, R.W.

    2010-01-01

    Objective In an effort to identify and characterize the environmental factors that affect the number of patients with acute diarrheal (AD) syndrome, we developed and tested two regional surveillance models incorporating holiday and weather information in addition to visitor records at emergency medical facilities in the Seoul metropolitan area of Korea. Methods Using 1,328,686 emergency department visitor records from the National Emergency Department Information System (NEDIS) together with the holiday and weather information, two seasonal ARIMA models were constructed: (1) a simple model (total patient number only) and (2) an environmental factor-added model. The stationary R-squared served as the in-sample goodness-of-fit statistic for the constructed models, and the cumulative mean of the Mean Absolute Percentage Error (MAPE) measured post-sample forecast accuracy over the following month. Results The (1,0,1)(0,1,1)7 ARIMA model gave an adequate fit to the daily number of AD patient visits over 12 months in both cases. Among the candidate features, the total number of patient visits was selected as a commonly influential independent variable. Additionally, for the environmental factor-added model, holidays and daily precipitation were selected as features that affected model fitting with statistical significance. Stationary R-squared values ranged from 0.651 to 0.828 (simple) and from 0.805 to 0.844 (environmental factor-added), with p<0.05. For prediction, the corresponding MAPE values ranged from 0.090 to 0.120 and from 0.089 to 0.114. Conclusion The environmental factor-added model yielded better MAPE values. Holiday and weather information appear to be crucial for constructing an accurate syndromic surveillance model for AD, in addition to the visitor and assessment records. PMID:23616829
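
    A seasonal ARIMA of this form is straightforward to reproduce with statsmodels. The sketch below uses synthetic visit counts and invented holiday/precipitation covariates in place of the NEDIS data, and scores forecasts with MAPE as in the paper.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(0)
        y = pd.Series(50 + 10 * np.sin(np.arange(365) * 2 * np.pi / 7)
                      + rng.normal(0, 3, 365))          # weekly pattern
        exog = pd.DataFrame({"holiday": rng.integers(0, 2, 365),
                             "precip_mm": rng.gamma(1.0, 2.0, 365)})

        # (1,0,1)(0,1,1)7: weekly seasonal differencing, as in the study.
        fit = SARIMAX(y, exog=exog, order=(1, 0, 1),
                      seasonal_order=(0, 1, 1, 7)).fit(disp=False)
        future_exog = exog.iloc[:30]          # placeholder future covariates
        forecast = fit.forecast(steps=30, exog=future_exog)

        def mape(actual, predicted):
            return np.mean(np.abs((actual - predicted) / actual))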

  17. The Impact of Parametric Uncertainties on Biogeochemistry in the E3SM Land Model

    NASA Astrophysics Data System (ADS)

    Ricciuto, Daniel; Sargsyan, Khachik; Thornton, Peter

    2018-02-01

    We conduct a global sensitivity analysis (GSA) of the Energy Exascale Earth System Model (E3SM) land model (ELM) to calculate the sensitivity of five key carbon cycle outputs to 68 model parameters. The GSA is conducted by first constructing a Polynomial Chaos (PC) surrogate via a new Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth, leading to a sparse, high-dimensional PC surrogate built from 3,000 model evaluations. The PC surrogate allows efficient extraction of GSA information, leading to further dimensionality reduction. The GSA is performed at 96 FLUXNET sites covering multiple plant functional types (PFTs) and climate conditions. About 20 of the model parameters are identified as sensitive, with the rest being relatively insensitive across all outputs and PFTs. These sensitivities depend on PFT and are relatively consistent among sites within the same PFT. The five model outputs share a majority of their highly sensitive parameters. A common subset of sensitive parameters is also shared among PFTs, but some parameters are specific to certain types (e.g., deciduous phenology). The relative importance of these parameters shifts significantly among PFTs and with climatic variables such as mean annual temperature.
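
    The surrogate's role is to make variance-based GSA affordable. As a simplified illustration, the sketch below uses a toy function in place of ELM and a brute-force Saltelli-style estimator of first-order Sobol indices in place of the PC-based extraction.

        import numpy as np

        def model(x):                       # toy stand-in for ELM
            return 4 * x[..., 0] + x[..., 1] ** 2 + 0.01 * x[..., 2]

        def first_order_sobol(f, dim, n=100_000, seed=1):
            rng = np.random.default_rng(seed)
            a, b = rng.random((n, dim)), rng.random((n, dim))
            fa, fb = f(a), f(b)
            s = []
            for i in range(dim):
                ab = b.copy()
                ab[:, i] = a[:, i]          # freeze parameter i
                s.append(np.mean(fa * (f(ab) - fb)) / fa.var())
            return s

        print(first_order_sobol(model, 3))  # third parameter ~ insensitive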

  18. From chart tracking to workflow management.

    PubMed Central

    Srinivasan, P.; Vignes, G.; Venable, C.; Hazelwood, A.; Cade, T.

    1994-01-01

    The current interest in system-wide integration appears to be based on the assumption that an organization, by digitizing information and accepting a common standard for the exchange of such information, will improve the accessibility of this information and automatically experience benefits from its more productive use. We do not dispute this reasoning, but assert that an organization's capacity for effective change is proportional to its personnel's understanding of the current structure. Our workflow manager is based on a Parameterized Petri Net (PPN) model which can be configured to represent an arbitrarily detailed picture of an organization. The PPN model can be animated to observe the model organization in action, and the results of the animation analyzed. This simulation is a dynamic, ongoing process that changes with the system and allows members of the organization to pose "what if" questions as a means of exploring opportunities for change. We present the "workflow management system" as the natural successor to the tracking program, incorporating modeling, scheduling, reactive planning, performance evaluation, and simulation. This workflow management system is more than adequate for meeting the needs of a paper chart tracking system and, as the patient record is computerized, will serve as a planning and evaluation tool in converting the paper-based health information system into a computer-based system. PMID:7950051
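
    A minimal Petri-net "token game" shows the mechanics that a PPN parameterizes; the place and transition names below are invented for a chart-tracking flavor of the model.

        marking = {"requested": 1, "on_shelf": 1, "in_use": 0}
        transitions = {
            "retrieve": (["requested", "on_shelf"], ["in_use"]),
            "return":   (["in_use"], ["on_shelf"]),
        }

        def fire(name):
            """Fire a transition if every input place holds a token."""
            inputs, outputs = transitions[name]
            if all(marking[p] > 0 for p in inputs):
                for p in inputs:
                    marking[p] -= 1
                for p in outputs:
                    marking[p] += 1
                return True
            return False

        fire("retrieve")
        print(marking)   # {'requested': 0, 'on_shelf': 0, 'in_use': 1}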

  19. Measures of Microbial Biomass for Soil Carbon Decomposition Models

    NASA Astrophysics Data System (ADS)

    Mayes, M. A.; Dabbs, J.; Steinweg, J. M.; Schadt, C. W.; Kluber, L. A.; Wang, G.; Jagadamma, S.

    2014-12-01

    Explicit parameterization of the decomposition of plant inputs and soil organic matter by microbes is becoming more widely accepted in models of varying complexity, ranging from detailed process models to global-scale earth system models. While there are multiple ways to measure microbial biomass, chloroform fumigation-extraction (CFE) is the measure commonly used to parameterize models. However, CFE is labor- and time-intensive, requires toxic chemicals, and provides no specific information about the composition or function of the microbial community. We investigated correlations among several measures: CFE; DNA extraction yield; qPCR base-gene copy numbers for Bacteria, Fungi, and Archaea; phospholipid fatty acid analysis; and direct cell counts, to determine their potential as proxies for microbial biomass. As our ultimate goal is to develop reliable, more informative, and faster methods to predict microbial biomass for use in models, we also examined basic soil physicochemical characteristics, including texture, organic matter content, and pH, to identify multi-factor predictive correlations with one or more measures of the microbial community. Our work will have application to both microbial ecology studies and the next generation of process and earth system models.

  20. Provenance Representation in the Global Change Information System (GCIS)

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2012-01-01

    Global climate change is a topic that has become very controversial despite strong support within the scientific community. It is common for agencies releasing information about climate change to be served with Freedom of Information Act (FOIA) requests for everything that led to that conclusion. Capturing and presenting the provenance, linking to the research papers, data sets, models, analyses, observation instruments and satellites, etc. supporting key findings has the potential to mitigate skepticism in this domain. The U.S. Global Change Research Program (USGCRP) is now coordinating the production of a National Climate Assessment (NCA) that presents our best understanding of global change. We are now developing a Global Change Information System (GCIS) that will present the content of that report and its provenance, including the scientific support for the findings of the assessment. We are using an approach that will present this information both through a human accessible web site as well as a machine readable interface for automated mining of the provenance graph. We plan to use the developing W3C PROV Data Model and Ontology for this system.
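
    A sketch of how a finding's provenance might be expressed in the W3C PROV data model using the Python prov package; the GCIS identifiers below are hypothetical.

        from prov.model import ProvDocument

        doc = ProvDocument()
        doc.add_namespace("gcis", "http://data.globalchange.gov/")  # illustrative URI

        finding = doc.entity("gcis:finding/sea-level-rise")   # a key NCA finding
        paper = doc.entity("gcis:article/example-2012")       # supporting paper
        analysis = doc.activity("gcis:analysis/tide-gauge-trend")

        doc.wasGeneratedBy(finding, analysis)   # finding produced by analysis
        doc.used(analysis, paper)               # analysis drew on the paper
        doc.wasDerivedFrom(finding, paper)      # direct derivation link

        print(doc.serialize(indent=2))          # PROV-JSON by default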

  1. Competition between Homophily and Information Entropy Maximization in Social Networks

    PubMed Central

    Zhao, Jichang; Liang, Xiao; Xu, Ke

    2015-01-01

    In social networks, it is conventionally thought that two individuals with more overlapping friends tend to establish a new friendship, which could be stated as homophily breeding new connections. Meanwhile, the hypothesis of maximum information entropy has recently been presented as a possible origin of effective navigation in small-world networks. Through both theoretical and experimental analysis, we find that a competition exists between information entropy maximization and homophily in local structure. This competition suggests that a newly built relationship between two individuals with more common friends brings them a smaller information entropy gain. We demonstrate that both assumptions coexist in the evolution of the social network: the rule of maximum information entropy produces weak ties in the network, while the law of homophily makes the network highly clustered locally and gives individuals strong, trusted ties. A toy model is also presented to demonstrate the competition and to evaluate the roles of the different rules in the evolution of real networks. Our findings could shed light on social network modeling from a new perspective. PMID:26334994
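
    A rough illustration of the two competing quantities for candidate ties; the paper defines entropy per individual, so the degree-distribution entropy below is only a crude proxy, and the node pairs are arbitrary.

        import networkx as nx
        import numpy as np

        def degree_entropy(g):
            degs = np.array([d for _, d in g.degree()], dtype=float)
            p = degs / degs.sum()
            return float(-(p[p > 0] * np.log2(p[p > 0])).sum())

        g = nx.watts_strogatz_graph(100, 4, 0.1, seed=0)
        base = degree_entropy(g)
        for u, v in [(0, 3), (0, 50)]:      # nearby pair vs distant pair
            cn = len(list(nx.common_neighbors(g, u, v)))   # homophily side
            h = g.copy()
            h.add_edge(u, v)
            print(u, v, "common neighbors:", cn,
                  "entropy gain:", degree_entropy(h) - base)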

  2. A Novel Multilayer Correlation Maximization Model for Improving CCA-Based Frequency Recognition in SSVEP Brain-Computer Interface.

    PubMed

    Jiao, Yong; Zhang, Yu; Wang, Yu; Wang, Bei; Jin, Jing; Wang, Xingyu

    2018-05-01

    Multiset canonical correlation analysis (MsetCCA) has been successfully applied to optimize reference signals by extracting common features from multiple sets of electroencephalogram (EEG) data for steady-state visual evoked potential (SSVEP) recognition in brain-computer interface applications. To avoid extracting noise components as common features, this study proposes a sophisticated extension of MsetCCA, called the multilayer correlation maximization (MCM) model, for further improving SSVEP recognition accuracy. MCM combines the advantages of both CCA and MsetCCA by carrying out three layers of correlation maximization. The first layer extracts stimulus frequency-related information using CCA between EEG samples and sine-cosine reference signals. The second layer learns reference signals by extracting common features with MsetCCA. The third layer re-optimizes the reference signal sets using CCA with the sine-cosine reference signals again. An experimental study validates the effectiveness of the proposed MCM model in comparison with the standard CCA and MsetCCA algorithms. The superior performance of MCM demonstrates its promising potential for the development of an improved SSVEP-based brain-computer interface.
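
    The first layer is the standard CCA scoring used in SSVEP recognition and can be sketched with scikit-learn; the MsetCCA layer and the re-optimization layer are omitted here, and the EEG below is synthetic.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        def ssvep_cca_score(eeg, freq, fs, n_harmonics=2):
            """Canonical correlation between EEG (samples x channels) and
            sine-cosine references at one candidate stimulus frequency."""
            t = np.arange(eeg.shape[0]) / fs
            refs = np.column_stack(
                [f(2 * np.pi * (h + 1) * freq * t)
                 for h in range(n_harmonics) for f in (np.sin, np.cos)])
            u, v = CCA(n_components=1).fit(eeg, refs).transform(eeg, refs)
            return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

        fs, n = 250, 1000
        rng = np.random.default_rng(0)
        eeg = (np.sin(2 * np.pi * 12 * np.arange(n) / fs)[:, None]
               + 0.5 * rng.normal(size=(n, 8)))        # 12 Hz SSVEP + noise
        scores = {f: ssvep_cca_score(eeg, f, fs) for f in (10, 12, 15)}
        print(max(scores, key=scores.get))             # recognizes 12 Hz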

  3. Diagnostic indicators for integrated assessment models of climate policy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kriegler, Elmar; Petermann, Nils; Krey, Volker

    2015-01-01

    Integrated assessments of how climate policy interacts with energy-economic systems can be performed by a variety of models with different functional structures. This article proposes a diagnostic scheme that can be applied to a wide range of integrated assessment models to classify differences among models based on their carbon price responses. Model diagnostics can uncover patterns and provide insights into why, under a given scenario, certain types of models behave in observed ways. Such insights are informative since model behavior can have a significant impact on projections of climate change mitigation costs and other policy-relevant information. The authors propose diagnostic indicators to characterize model responses to carbon price signals and test these in a diagnostic study with 11 global models. Indicators describe the magnitude of emission abatement and the associated costs relative to a harmonized baseline, the relative changes in carbon intensity and energy intensity, and the extent of transformation in the energy system. This study shows a correlation among indicators suggesting that models can be classified into groups based on common patterns of behavior in response to carbon pricing. Such a classification can help to more easily explain variations among policy-relevant model results.

  4. Method for modeling social care processes for national information exchange.

    PubMed

    Miettinen, Aki; Mykkänen, Juha; Laaksonen, Maarit

    2012-01-01

    Finnish social services include 21 service commissions of social welfare including Adoption counselling, Income support, Child welfare, Services for immigrants and Substance abuse care. This paper describes the method used for process modeling in the National project for IT in Social Services in Finland (Tikesos). The process modeling in the project aimed to support common national target state processes from the perspective of national electronic archive, increased interoperability between systems and electronic client documents. The process steps and other aspects of the method are presented. The method was developed, used and refined during the three years of process modeling in the national project.

  5. Predictive risk models for proximal aortic surgery

    PubMed Central

    Díaz, Rocío; Pascual, Isaac; Álvarez, Rubén; Alperi, Alberto; Rozado, Jose; Morales, Carlos; Silva, Jacobo; Morís, César

    2017-01-01

    Predictive risk models help improve decision making, the information given to our patients, and quality control by comparing results between surgeons and between institutions. The use of these models promotes competitiveness and leads to increasingly better results. All these virtues are of utmost importance when the surgical operation entails high risk. Although proximal aortic surgery is less frequent than other cardiac surgery operations, the procedure itself is more challenging and technically demanding than other common cardiac surgery techniques. The aim of this study is to review the current status of predictive risk models for patients who undergo proximal aortic surgery, meaning aortic root replacement, supracoronary ascending aortic replacement, or aortic arch surgery. PMID:28616348

  6. Interactive Sonification Exploring Emergent Behavior Applying Models for Biological Information and Listening

    PubMed Central

    Choi, Insook

    2018-01-01

    Sonification is an open-ended design task to construct sound informing a listener of data. Understanding application context is critical for shaping design requirements for data translation into sound. Sonification requires methodology to maintain reproducibility when data sources exhibit non-linear properties of self-organization and emergent behavior. This research formalizes interactive sonification in an extensible model to support reproducibility when data exhibits emergent behavior. In the absence of sonification theory, extensibility demonstrates relevant methods across case studies. The interactive sonification framework foregrounds three factors: reproducible system implementation for generating sonification; interactive mechanisms enhancing a listener's multisensory observations; and reproducible data from models that characterize emergent behavior. Supramodal attention research suggests interactive exploration with auditory feedback can generate context for recognizing irregular patterns and transient dynamics. The sonification framework provides circular causality as a signal pathway for modeling a listener interacting with emergent behavior. The extensible sonification model adopts a data acquisition pathway to formalize functional symmetry across three subsystems: Experimental Data Source, Sound Generation, and Guided Exploration. To differentiate time criticality and dimensionality of emerging dynamics, tuning functions are applied between subsystems to maintain scale and symmetry of concurrent processes and temporal dynamics. Tuning functions accommodate sonification design strategies that yield order parameter values to render emerging patterns discoverable as well as rehearsable, to reproduce desired instances for clinical listeners. Case studies are implemented with two computational models, Chua's circuit and Swarm Chemistry social agent simulation, generating data in real-time that exhibits emergent behavior. Heuristic Listening is introduced as an informal model of a listener's clinical attention to data sonification through multisensory interaction in a context of structured inquiry. Three methods are introduced to assess the proposed sonification framework: Listening Scenario classification, data flow Attunement, and Sonification Design Patterns to classify sound control. Case study implementations are assessed against these methods comparing levels of abstraction between experimental data and sound generation. Outcomes demonstrate the framework performance as a reference model for representing experimental implementations, also for identifying common sonification structures having different experimental implementations, identifying common functions implemented in different subsystems, and comparing impact of affordances across multiple implementations of listening scenarios. PMID:29755311

  7. Sharing Health Information and Influencing Behavioral Intentions: The Role of Health Literacy, Information Overload, and the Internet in the Diffusion of Healthy Heart Information.

    PubMed

    Crook, Brittani; Stephens, Keri K; Pastorek, Angie E; Mackert, Michael; Donovan, Erin E

    2016-01-01

    Low health literacy remains an extremely common and problematic issue, given that individuals with lower health literacy are more likely to experience health challenges and negative health outcomes. In this study, we use the first three stages of the innovation-decision process found in the theory of diffusion of innovations (Rogers, 2003). We incorporate health literacy into a model explaining how perceived health knowledge, information sharing, attitudes, and behavior are related. Results show that health information sharing explains 33% of the variance in behavioral intentions, indicating that the communicative practice of sharing information can positively impact health outcomes. Further, individuals with high health literacy tend to share less information about heart health than those with lower health literacy. Findings also reveal that perceived heart-health knowledge operates differently than health literacy to predict health outcomes.

  8. Spatial Relation Predicates in Topographic Feature Semantics

    USGS Publications Warehouse

    Varanka, Dalia E.; Caro, Holly K.

    2013-01-01

    Topographic data are designed and widely used for base maps of diverse applications, yet the power of these information sources largely relies on the interpretive skills of map readers and relational database expert users once the data are in map or geographic information system (GIS) form. Advances in geospatial semantic technology offer data model alternatives for explicating concepts and articulating complex data queries and statements. To understand and enrich the vocabulary of topographic feature properties for semantic technology, English language spatial relation predicates were analyzed in three standard topographic feature glossaries. The analytical approach drew from disciplinary concepts in geography, linguistics, and information science. Five major classes of spatial relation predicates were identified from the analysis; representations for most of these are not widely available. The classes are: part-whole (which are commonly modeled throughout semantic and linked-data networks), geometric, processes, human intention, and spatial prepositions. These are commonly found in the ‘real world’ and support the environmental science basis for digital topographical mapping. The spatial relation concepts are based on sets of relation terms presented in this chapter, though these lists are not prescriptive or exhaustive. The results of this study make explicit the concepts forming a broad set of spatial relation expressions, which in turn form the basis for expanding the range of possible queries for topographical data analysis and mapping.

  9. Revisions to the JDL data fusion model

    NASA Astrophysics Data System (ADS)

    Steinberg, Alan N.; Bowman, Christopher L.; White, Franklin E.

    1999-03-01

    The Data Fusion Model maintained by the Joint Directors of Laboratories (JDL) Data Fusion Group is the most widely used method for categorizing data fusion-related functions. This paper discusses the current effort to revise and expand this model to facilitate the cost-effective development, acquisition, integration and operation of multi-sensor/multi-source systems. Data fusion involves combining information - in the broadest sense - to estimate or predict the state of some aspect of the universe. These states may be represented in terms of attributive and relational states. If the job is to estimate the state of people, it can be useful to include consideration of informational and perceptual states in addition to the physical state. Developing cost-effective multi-source information systems requires a method for specifying data fusion processing and control functions, interfaces, and associated databases. The lack of common engineering standards for data fusion systems has been a major impediment to integration and re-use of available technology: current developments do not lend themselves to objective evaluation, comparison or re-use. This paper reports on proposed revisions and expansions of the JDL Data Fusion model to remedy some of these deficiencies. This involves broadening the functional model and related taxonomy beyond the original military focus, and integrating the Data Fusion Tree Architecture model for system description, design and development.

  10. Reaching rural women: breast cancer prevention information seeking behaviors and interest in Internet, cell phone, and text use.

    PubMed

    Kratzke, Cynthia; Wilson, Susan; Vilchis, Hugo

    2013-02-01

    The purpose of this study was to examine the breast cancer prevention information seeking behaviors of rural women, the prevalence of Internet, cell phone, and text use, and interest in receiving breast cancer prevention information by cell phone and text message. While a growing literature on breast cancer information sources supports the use of the Internet, little is known about rural women's breast cancer prevention information seeking behaviors and mobile technology. Using a cross-sectional study design, data were collected using a survey. McGuire's Input-Output Model was used as the framework. Self-reported data were obtained from a convenience sample of 157 women with a mean age of 60 (SD = 12.12) at a rural New Mexico imaging center. Common interpersonal information sources were doctors, nurses, and friends, and common channel information sources were television, magazines, and the Internet. Overall, 87% used cell phones, 20% were interested in receiving cell phone breast cancer prevention messages, 47% used text messaging, 36% were interested in receiving text breast cancer prevention messages, and 37% were interested in receiving mammogram reminder text messages. Bivariate analysis revealed significant differences in cell phone and text messaging use by age, income, and race/ethnicity. There were no differences by age in interest in receiving text messages or text mammogram reminders. Assessment of health information seeking behaviors is important for community health educators to target populations for program development. Future research may identify additional socio-cultural differences.

  11. Essential levels of health information in Europe: an action plan for a coherent and sustainable infrastructure.

    PubMed

    Carinci, Fabrizio

    2015-04-01

    The European Union needs a common health information infrastructure to support policy and governance on a routine basis. A stream of initiatives conducted in Europe during the last decade resulted in several success stories, but did not specify a unified framework that could be broadly implemented on a continental level. The recent debate raised a potential controversy over the different roles and responsibilities of policy makers versus the public health community in the construction of such a pan-European health information system. While institutional bodies shall clarify the statutory conditions under which such an endeavour is to be carried out, researchers should define a common framework for optimal cross-border information exchange. This paper conceptualizes a general solution emerging from past experiences, introducing a governance structure and overarching framework that can be realized through four main action lines, underpinned by the key principle of "Essential Levels of Health Information" for Europe. The proposed information model can be applied consistently at both the national and EU level. If realized, the four action lines outlined here will allow developing an EU health information infrastructure that effectively integrates best practices emerging from EU public health initiatives, including projects and joint actions carried out during the last ten years. The proposed approach adds new content to the ongoing debate on the future activity of the European Commission in the area of health information.

  12. A systematic review of cost-effectiveness modeling of pharmaceutical therapies in neuropathic pain: variation in practice, key challenges, and recommendations for the future.

    PubMed

    Critchlow, Simone; Hirst, Matthew; Akehurst, Ron; Phillips, Ceri; Philips, Zoe; Sullivan, Will; Dunlop, Will C N

    2017-02-01

    Complexities in the neuropathic-pain care pathway make the condition difficult to manage and difficult to capture in cost-effectiveness models. The aim of this study is to understand, through a systematic review of previous cost-effectiveness studies, some of the key strengths and limitations in data and modeling practices in neuropathic pain, and thereby to guide future research and practice toward better resource-allocation decisions and continued investment in finding novel and effective treatments for patients with neuropathic pain. The search strategy was designed to identify peer-reviewed cost-effectiveness evaluations of non-surgical, pharmaceutical therapies for neuropathic pain published since January 2000, accessing five key databases. All identified publications were reviewed and screened according to pre-defined eligibility criteria. Data extraction was designed to reflect key data challenges and approaches to modeling in neuropathic pain and was based on published guidelines. The search strategy identified 20 cost-effectiveness analyses meeting the inclusion criteria, of which 14 had original model structures. Cost-effectiveness modeling in neuropathic pain is established and increasing across multiple jurisdictions; however, amongst these studies there is substantial variation in modeling approach and there are common limitations. Capturing the effect of treatments upon health outcomes, particularly health-related quality of life, is challenging, and the health effects of multiple lines of ineffective treatment, common for patients with neuropathic pain, have not been consistently or robustly modeled. To improve future economic modeling in neuropathic pain, further research is suggested into the effect of multiple lines of treatment and treatment failure upon patient outcomes and subsequent treatment effectiveness; the impact of treatment-emergent adverse events upon patient outcomes; and consistent and appropriate pain measures to inform models. The authors further encourage transparent reporting of the inputs used to inform cost-effectiveness models, with robust, comprehensive and clear uncertainty analysis and, where feasible, open-source modeling.

  13. A pollution fate and transport model application in a semi-arid region: Is some number better than no number?

    PubMed

    Özcan, Zeynep; Başkan, Oğuz; Düzgün, H Şebnem; Kentel, Elçin; Alp, Emre

    2017-10-01

    Fate and transport models are powerful tools that aid authorities in making unbiased decisions for developing sustainable management strategies. Applying pollution fate and transport models in semi-arid regions is challenging because of their unique hydrological characteristics and limited data availability. Significant temporal and spatial variability in rainfall events, complex interactions between soil, vegetation and topography, and limited water quality and hydrological data due to an insufficient monitoring network make it difficult to develop reliable models in semi-arid regions. The performance of these models governs the final use of the outcomes, such as policy implementation, screening, and economic analysis. In this study, a deterministic distributed fate and transport model, SWAT, is applied in Lake Mogan Watershed, a semi-arid region dominated by dry agricultural practices, to estimate nutrient loads and to develop the water budget of the watershed. To minimize the discrepancy due to the limited availability of historical water quality data, extensive effort went into collecting site-specific data for model inputs such as soil properties, agricultural practice information, and land use. Moreover, calibration parameter ranges suggested in the literature were used during calibration to obtain a more realistic representation of Lake Mogan Watershed in the model. Model performance is evaluated by comparing the measured data with the 95% confidence intervals of the simulated data and by comparing unit pollution load estimates with those reported in the literature for similar catchments, in addition to commonly used evaluation criteria such as the Nash-Sutcliffe simulation efficiency, the coefficient of determination, and percent bias. These evaluations demonstrated that even though the model's predictive power is not high according to the commonly used performance criteria, the calibrated model may provide useful information for comparing the effects of different management practices on diffuse pollution and water quality in Lake Mogan Watershed.
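
    The commonly used criteria mentioned above have simple closed forms; a sketch (note that the sign convention for percent bias varies between authors):

        import numpy as np

        def nash_sutcliffe(obs, sim):
            """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def percent_bias(obs, sim):
            """PBIAS; with this convention, positive means underestimation."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 100 * np.sum(obs - sim) / np.sum(obs)

        print(nash_sutcliffe([1, 2, 3], [1.1, 1.9, 3.2]),
              percent_bias([1, 2, 3], [1.1, 1.9, 3.2]))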

  14. FuGEFlow: data model and markup language for flow cytometry

    PubMed Central

    Qian, Yu; Tchuvatkina, Olga; Spidlen, Josef; Wilkinson, Peter; Gasparetto, Maura; Jones, Andrew R; Manion, Frank J; Scheuermann, Richard H; Sekaly, Rafick-Pierre; Brinkman, Ryan R

    2009-01-01

    Background Flow cytometry technology is widely used in both health care and research. The rapid expansion of flow cytometry applications has outpaced the development of data storage and analysis tools. Collaborative efforts to eliminate this gap include building common vocabularies and ontologies, designing generic data models, and defining data exchange formats. The Minimum Information about a Flow Cytometry Experiment (MIFlowCyt) standard was recently adopted by the International Society for Advancement of Cytometry. This standard guides researchers on the information that should be included in peer-reviewed publications, but it is insufficient for data exchange and integration between computational systems. The Functional Genomics Experiment (FuGE) formalizes common aspects of comprehensive and high-throughput experiments across different biological technologies. We have extended the FuGE object model to accommodate flow cytometry data and metadata. Methods We used the MagicDraw modelling tool to design a UML model (Flow-OM) according to the FuGE extension guidelines and the AndroMDA toolkit to transform the model to a markup language (Flow-ML). We mapped each MIFlowCyt term to either an existing FuGE class or to a new FuGEFlow class. The development environment was validated by comparing the official FuGE XSD to the schema we generated from the FuGE object model using our configuration. After the Flow-OM model was completed, the final version of the Flow-ML was generated and validated against an example MIFlowCyt-compliant experiment description. Results The extension of FuGE for flow cytometry has resulted in a generic FuGE-compliant data model (FuGEFlow), which accommodates and links together all information required by MIFlowCyt. The FuGEFlow model can be used to build software and databases using FuGE software toolkits to facilitate automated exchange and manipulation of potentially large flow cytometry experimental data sets. Additional project documentation, including reusable design patterns and a guide for setting up a development environment, was contributed back to the FuGE project. Conclusion We have shown that an extension of FuGE can be used to transform minimum information requirements in natural language to a markup language in XML. Extending FuGE required significant effort, but in our experience the benefits outweighed the costs. FuGEFlow is expected to play a central role in describing flow cytometry experiments and ultimately in facilitating data exchange, including with the public flow cytometry repositories currently under development. PMID:19531228

  15. Model-based pH monitor for sensor assessment.

    PubMed

    van Schagen, Kim; Rietveld, Luuk; Veersma, Alex; Babuska, Robert

    2009-01-01

    Owing to the nature of the treatment processes, monitoring the processes based on individual online measurements is difficult or even impossible. However, the measurements (online and laboratory) can be combined with a priori process knowledge, using mathematical models, to objectively monitor the treatment processes and measurement devices. The pH measurement is commonly used at different stages of the drinking water treatment plant, although it is an unreliable instrument requiring significant maintenance. It is shown that, using a grey-box model, it is possible to assess the measurement devices effectively, even if detailed information about the specific processes is unknown.

  16. Estimating population ecology models for the WWW market: evidence of competitive oligopolies.

    PubMed

    de Cabo, Ruth Mateos; Gimeno, Ricardo

    2013-01-01

    This paper proposes adapting a particle filtering algorithm to model online Spanish real estate and job search market segments based on the Lotka-Volterra competition equations. For this purpose the authors use data on Internet information searches from Google Trends as a proxy for market share. Market share evolution estimates are consistent with those observed in Google Trends. The results show evidence of low website incompatibility in the markets analyzed. Competitive oligopolies are most common in such low-competition markets, instead of the monopolies predicted by theoretical ecology models under strong competition conditions.
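
    The underlying Lotka-Volterra competition dynamics can be sketched directly; the growth rates and competition coefficients below are invented, whereas the paper estimates them online with a particle filter from the Google Trends series.

        import numpy as np
        from scipy.integrate import solve_ivp

        # dx_i/dt = r_i x_i (1 - (x_i + a_ij x_j) / K_i)
        r, K, a12, a21 = (0.3, 0.25), (1.0, 0.8), 0.4, 0.5

        def rhs(t, x):
            return [r[0] * x[0] * (1 - (x[0] + a12 * x[1]) / K[0]),
                    r[1] * x[1] * (1 - (x[1] + a21 * x[0]) / K[1])]

        sol = solve_ivp(rhs, (0, 200), [0.1, 0.1])
        # Weak competition (a12 * a21 < 1) lets both sites coexist: an oligopoly.
        print(sol.y[:, -1])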

  17. Distributed structure-searchable toxicity (DSSTox) public database network: a proposal.

    PubMed

    Richard, Ann M; Williams, ClarLynda R

    2002-01-29

    The ability to assess the potential genotoxicity, carcinogenicity, or other toxicity of pharmaceutical or industrial chemicals based on chemical structure information is a highly coveted and shared goal of varied academic, commercial, and government regulatory groups. These diverse interests often employ different approaches and have different criteria and use for toxicity assessments, but they share a need for unrestricted access to existing public toxicity data linked with chemical structure information. Currently, there exists no central repository of toxicity information, commercial or public, that adequately meets the data requirements for flexible analogue searching, Structure-Activity Relationship (SAR) model development, or building of chemical relational databases (CRD). The distributed structure-searchable toxicity (DSSTox) public database network is being proposed as a community-supported, web-based effort to address these shared needs of the SAR and toxicology communities. The DSSTox project has the following major elements: (1) to adopt and encourage the use of a common standard file format (structure data file (SDF)) for public toxicity databases that includes chemical structure, text and property information, and that can easily be imported into available CRD applications; (2) to implement a distributed source approach, managed by a DSSTox Central Website, that will enable decentralized, free public access to structure-toxicity data files, and that will effectively link knowledgeable toxicity data sources with potential users of these data from other disciplines (such as chemistry, modeling, and computer science); and (3) to engage public/commercial/academic/industry groups in contributing to and expanding this community-wide, public data sharing and distribution effort. The DSSTox project's overall aims are to effect the closer association of chemical structure information with existing toxicity data, and to promote and facilitate structure-based exploration of these data within a common chemistry-based framework that spans toxicological disciplines.
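
    An SDF record couples a structure block with arbitrary property tags, so toxicity fields travel with the chemistry. A sketch of reading such a file with RDKit; the file name and property tag are hypothetical.

        from rdkit import Chem

        for mol in Chem.SDMolSupplier("carcinogenicity.sdf"):
            if mol is None:                 # skip unparsable records
                continue
            name = mol.GetProp("_Name")
            tox = mol.GetProp("Toxicity") if mol.HasProp("Toxicity") else "n/a"
            print(name, Chem.MolToSmiles(mol), tox)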

  18. A novel summary report of colonoscopy: timeline visualization providing meaningful colonoscopy video information.

    PubMed

    Cho, Minwoo; Kim, Jee Hyun; Kong, Hyoun Joong; Hong, Kyoung Sup; Kim, Sungwan

    2018-05-01

    The colonoscopy adenoma detection rate depends largely on physician experience and skill, and overlooked colorectal adenomas can develop into cancer. This study assessed a system that detects polyps and summarizes meaningful information from colonoscopy videos. One hundred thirteen consecutive patients had colonoscopy videos prospectively recorded at the Seoul National University Hospital. Informative video frames were extracted using a MATLAB support vector machine (SVM) model and classified as bleeding, polypectomy, tool, residue, thin wrinkle, folded wrinkle, or common. Thin wrinkle, folded wrinkle, and common frames were reanalyzed with the SVM for polyp detection. The SVM model was applied hierarchically for effective classification and optimization of the SVM. The mean classification accuracy by frame type was over 93%; sensitivity was over 87%. The mean sensitivity for polyp detection was 82.1%, and the positive predictive value (PPV) was 39.3%. Polyps detected by the system were larger (6.3 ± 6.4 vs. 4.9 ± 2.5 mm; P = 0.003) and more often pedunculated (Yamada type III, 10.2 vs. 0%; P < 0.001; Yamada type IV, 2.8 vs. 0%; P < 0.001) than polyps missed by the system. There were no statistically significant differences in polyp distribution or histology between the groups. Informative frames and suspected polyps were presented on a timeline. This summary was evaluated using the System Usability Scale questionnaire; 89.3% of participants expressed positive opinions. We developed and verified a system to extract meaningful information from colonoscopy videos. Although further improvement and validation of the system are needed, the proposed system is useful for physicians and patients.
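
    The hierarchy can be sketched as a two-stage SVM cascade; the features and labels below are synthetic stand-ins rather than the study's image features.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 20))                 # per-frame features
        frame_type = rng.choice(["bleeding", "tool", "thin_wrinkle",
                                 "folded_wrinkle", "common"], size=600)
        has_polyp = rng.integers(0, 2, size=600)

        # Stage 1: classify the frame type.
        stage1 = make_pipeline(StandardScaler(), SVC()).fit(X, frame_type)

        # Stage 2: search for polyps only in wrinkle/common frames.
        keep = np.isin(stage1.predict(X),
                       ["thin_wrinkle", "folded_wrinkle", "common"])
        stage2 = make_pipeline(StandardScaler(), SVC()).fit(X[keep],
                                                            has_polyp[keep])
        print("candidate frames:", int(keep.sum()))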

  19. Global ethics and principlism.

    PubMed

    Gordon, John-Stewart

    2011-09-01

    This article examines the special relation between common morality and particular moralities in the four-principles approach and its use for global ethics. It is argued that the special dialectical relation between common morality and particular moralities is the key to bridging the gap between ethical universalism and relativism. The four-principles approach is a good model for a global bioethics by virtue of its ability to mediate successfully between universal demands and cultural diversity. The principle of autonomy (i.e., the idea of individual informed consent), however, does need to be revised so as to make it compatible with alternatives such as family- or community-informed consent. The upshot is that the contribution of the four-principles approach to global ethics lies in the so-called dialectical process and its power to deal with cross-cultural issues against the background of universal demands by joining them together.

  20. Sharing Data to Build a Medical Information Commons: From Bermuda to the Global Alliance.

    PubMed

    Cook-Deegan, Robert; Ankeny, Rachel A; Maxson Jones, Kathryn

    2017-08-31

    The Human Genome Project modeled its open science ethos on nematode biology, most famously through daily release of DNA sequence data based on the 1996 Bermuda Principles. That open science philosophy persists, but daily, unfettered release of data has had to adapt to constraints occasioned by the use of data from individual people, broader use of data not only by scientists but also by clinicians and individuals, the global reach of genomic applications and diverse national privacy and research ethics laws, and the rising prominence of a diverse commercial genomics sector. The Global Alliance for Genomics and Health was established to enable the data sharing that is essential for making meaning of genomic variation. Data-sharing policies and practices will continue to evolve as researchers, health professionals, and individuals strive to construct a global medical and scientific information commons.

  1. Influence of Flavors on the Propagation of E-Cigarette–Related Information: Social Media Study

    PubMed Central

    Zhou, Jiaqi; Zeng, Daniel Dajun; Tsui, Kwok Leung

    2018-01-01

    Background Modeling the influence of e-cigarette flavors on information propagation could provide quantitative policy decision support concerning smoking initiation and contagion, as well as e-cigarette regulations. Objective The objective of this study was to characterize the influence of flavors on e-cigarette–related information propagation on social media. Methods We collected a comprehensive dataset of e-cigarette–related discussions from public Pages on Facebook. We identified 11 categories of flavors based on commonly used categorizations. Each post’s frequency of being shared served as a proxy measure of information propagation. We evaluated a set of regression models and chose the hurdle negative binomial model to characterize the influence of different flavors and nonflavor control variables on e-cigarette–related information propagation. Results We found that 5 flavors (sweet, dessert & bakery, fruits, herbs & spices, and tobacco) had significantly negative influences on e-cigarette–related information propagation, indicating the users’ tendency not to share posts related to these flavors. We did not find a significant positive influence for any flavor, which contradicts previous research. In addition, we found that a set of nonflavor-related factors were associated with information propagation. Conclusions Mentions of flavors in posts did not enhance the popularity of e-cigarette–related information. Certain flavors could even have reduced the popularity of information, indicating users’ lack of interest in flavors. Promoting e-cigarette–related information with mention of flavors is not an effective marketing approach. This study implies potential user concern about flavorings and suggests a need to regulate the use of flavorings in e-cigarettes. PMID:29572202
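
    A hurdle negative binomial model separates whether a post is shared at all from how often it is shared once the hurdle is crossed. A simplified two-part sketch with synthetic data (a full hurdle model would use a zero-truncated count part):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        X = sm.add_constant(rng.integers(0, 2, (n, 3)).astype(float))  # flavor dummies
        shares = rng.negative_binomial(1, 0.3, n) * rng.integers(0, 2, n)

        # Part 1: is the post shared at all? (logit)
        zero_part = sm.Logit((shares > 0).astype(float), X).fit(disp=False)

        # Part 2: counts among shared posts (plain NB as an approximation).
        pos = shares > 0
        count_part = sm.NegativeBinomial(shares[pos], X[pos]).fit(disp=False)

        print(zero_part.params, count_part.params, sep="\n")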

  2. Identifying environmental variables explaining genotype-by-environment interaction for body weight of rainbow trout (Oncorhynchus mykiss): reaction norm and factor analytic models.

    PubMed

    Sae-Lim, Panya; Komen, Hans; Kause, Antti; Mulder, Han A

    2014-02-26

    Identifying the relevant environmental variables that cause GxE interaction is often difficult when they cannot be experimentally manipulated. Two statistical approaches can be applied to address this question. When data on candidate environmental variables are available, GxE interaction can be quantified as a function of specific environmental variables using a reaction norm model. Alternatively, a factor analytic model can be used to identify the latent common factor that explains GxE interaction. This factor can be correlated with known environmental variables to identify those that are relevant. Previously, we reported a significant GxE interaction for body weight at harvest in rainbow trout reared on three continents. Here we explore their possible causes. Reaction norm and factor analytic models were used to identify which environmental variables (age at harvest, water temperature, oxygen, and photoperiod) may have caused the observed GxE interaction. Data on body weight at harvest was recorded on 8976 offspring reared in various locations: (1) a breeding environment in the USA (nucleus), (2) a recirculating aquaculture system in the Freshwater Institute in West Virginia, USA, (3) a high-altitude farm in Peru, and (4) a low-water temperature farm in Germany. Akaike and Bayesian information criteria were used to compare models. The combination of days to harvest multiplied with daily temperature (Day*Degree) and photoperiod were identified by the reaction norm model as the environmental variables responsible for the GxE interaction. The latent common factor that was identified by the factor analytic model showed the highest correlation with Day*Degree. Day*Degree and photoperiod were the environmental variables that differed most between Peru and other environments. Akaike and Bayesian information criteria indicated that the factor analytical model was more parsimonious than the reaction norm model. Day*Degree and photoperiod were identified as environmental variables responsible for the strong GxE interaction for body weight at harvest in rainbow trout across four environments. Both the reaction norm and the factor analytic models can help identify the environmental variables responsible for GxE interaction. A factor analytic model is preferred over a reaction norm model when limited information on differences in environmental variables between farms is available.
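
    A reaction norm can be fit as a random-regression mixed model in which family effects vary with the environmental covariate; a sketch with synthetic families and a Day*Degree-like covariate:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        df = pd.DataFrame({"family": np.repeat(np.arange(40), 25),
                           "day_degree": rng.normal(0, 1, 1000)})
        u = rng.normal(0, 1, 40)        # family intercepts
        s = rng.normal(0, 0.5, 40)      # family slopes (the GxE signal)
        df["weight"] = (3 + u[df.family] + (1 + s[df.family]) * df.day_degree
                        + rng.normal(0, 1, 1000))

        m = smf.mixedlm("weight ~ day_degree", df, groups=df["family"],
                        re_formula="~day_degree").fit()
        print(m.summary())   # nonzero slope variance indicates GxE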

  3. Identifying environmental variables explaining genotype-by-environment interaction for body weight of rainbow trout (Oncorhynchus mykiss): reaction norm and factor analytic models

    PubMed Central

    2014-01-01

    Background Identifying the relevant environmental variables that cause GxE interaction is often difficult when they cannot be experimentally manipulated. Two statistical approaches can be applied to address this question. When data on candidate environmental variables are available, GxE interaction can be quantified as a function of specific environmental variables using a reaction norm model. Alternatively, a factor analytic model can be used to identify the latent common factor that explains GxE interaction. This factor can be correlated with known environmental variables to identify those that are relevant. Previously, we reported a significant GxE interaction for body weight at harvest in rainbow trout reared on three continents. Here we explore their possible causes. Methods Reaction norm and factor analytic models were used to identify which environmental variables (age at harvest, water temperature, oxygen, and photoperiod) may have caused the observed GxE interaction. Data on body weight at harvest was recorded on 8976 offspring reared in various locations: (1) a breeding environment in the USA (nucleus), (2) a recirculating aquaculture system in the Freshwater Institute in West Virginia, USA, (3) a high-altitude farm in Peru, and (4) a low-water temperature farm in Germany. Akaike and Bayesian information criteria were used to compare models. Results The combination of days to harvest multiplied with daily temperature (Day*Degree) and photoperiod were identified by the reaction norm model as the environmental variables responsible for the GxE interaction. The latent common factor that was identified by the factor analytic model showed the highest correlation with Day*Degree. Day*Degree and photoperiod were the environmental variables that differed most between Peru and other environments. Akaike and Bayesian information criteria indicated that the factor analytical model was more parsimonious than the reaction norm model. Conclusions Day*Degree and photoperiod were identified as environmental variables responsible for the strong GxE interaction for body weight at harvest in rainbow trout across four environments. Both the reaction norm and the factor analytic models can help identify the environmental variables responsible for GxE interaction. A factor analytic model is preferred over a reaction norm model when limited information on differences in environmental variables between farms is available. PMID:24571451

  4. CheS-Mapper 2.0 for visual validation of (Q)SAR models

    PubMed Central

    2014-01-01

    Background Sound statistical validation is important to evaluate and compare the overall performance of (Q)SAR models. However, classical validation does not support the user in better understanding the properties of the model or the underlying data. Although a number of visualization tools for analyzing (Q)SAR information in small molecule datasets exist, integrated visualization methods that allow the investigation of model validation results are still lacking. Results We propose visual validation as an approach for the graphical inspection of (Q)SAR model validation results. The approach applies the 3D viewer CheS-Mapper, an open-source application for the exploration of small molecules in virtual 3D space. The present work describes the new functionalities in CheS-Mapper 2.0 that facilitate the analysis of (Q)SAR information and allow the visual validation of (Q)SAR models. The tool enables the comparison of model predictions to the actual activity in feature space. The approach is generic: it is model-independent and can handle physico-chemical and structural input features as well as quantitative and qualitative endpoints. Conclusions Visual validation with CheS-Mapper enables analyzing (Q)SAR information in the data and indicates how this information is employed by the (Q)SAR model. It reveals whether the endpoint is modeled too specifically or too generically and highlights common properties of misclassified compounds. Moreover, the researcher can use CheS-Mapper to inspect how the (Q)SAR model predicts activity cliffs. The CheS-Mapper software is freely available at http://ches-mapper.org. Graphical abstract: Comparing actual and predicted activity values with CheS-Mapper.

  5. Exact Markov chains versus diffusion theory for haploid random mating.

    PubMed

    Tyvand, Peder A; Thorvaldsen, Steinar

    2010-05-01

    Exact discrete Markov chains are applied to the Wright-Fisher model and the Moran model of haploid random mating. Selection and mutations are neglected. At each discrete value of time t there is a given number n of diploid monoecious organisms. The evolution of the population distribution is given in diffusion variables, to compare the two models of random mating with their common diffusion limit. Only the Moran model converges uniformly to the diffusion limit near the boundary. The Wright-Fisher model allows the population size to change with the generations. Diffusion theory tends to under-predict the loss of genetic information when a population enters a bottleneck. 2010 Elsevier Inc. All rights reserved.
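
    A minimal numerical check of this comparison is straightforward: iterate the exact Wright-Fisher transition matrix for a small population and compare the decay of expected heterozygosity with the diffusion-limit prediction exp(-t/2N). The population size and starting frequency below are arbitrary toy choices.

```python
# Exact Wright-Fisher Markov chain for one biallelic locus in 2N gene copies,
# compared with the diffusion-limit decay of expected heterozygosity.
import numpy as np
from scipy.stats import binom

N = 10                       # diploid individuals, so M = 2N = 20 gene copies
M = 2 * N
i = np.arange(M + 1)
# Transition matrix: P[j, k] = P(k copies next generation | j copies now)
P = binom.pmf(i[None, :], M, (i / M)[:, None])

dist = np.zeros(M + 1)
dist[M // 2] = 1.0           # start at allele frequency 1/2
H0 = 0.5
for t in range(1, 31):
    dist = dist @ P
    H_exact = np.sum(dist * 2 * (i / M) * (1 - i / M))  # expected heterozygosity
    H_diff = H0 * np.exp(-t / M)                        # diffusion approximation
    if t in (1, 10, 30):
        print(f"t={t:2d}  exact H={H_exact:.4f}  diffusion H={H_diff:.4f}")
# The diffusion value stays slightly above the exact chain: it under-predicts
# the loss of variation, most visibly for small N, i.e. in a bottleneck.
```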

  6. ANALYSIS OF CLINICAL AND DERMOSCOPIC FEATURES FOR BASAL CELL CARCINOMA NEURAL NETWORK CLASSIFICATION

    PubMed Central

    Cheng, Beibei; Stanley, R. Joe; Stoecker, William V; Stricklin, Sherea M.; Hinton, Kristen A.; Nguyen, Thanh K.; Rader, Ryan K.; Rabinovitz, Harold S.; Oliviero, Margaret; Moss, Randy H.

    2012-01-01

    Background Basal cell carcinoma (BCC) is the most commonly diagnosed cancer in the United States. In this research, we examine four different feature categories used for diagnostic decisions, including patient personal profile (patient age, gender, etc.), general exam (lesion size and location), common dermoscopic (blue-gray ovoids, leaf-structure dirt trails, etc.), and specific dermoscopic lesion (white/pink areas, semitranslucency, etc.). Specific dermoscopic features are more restricted versions of the common dermoscopic features. Methods Combinations of the four feature categories are analyzed over a data set of 700 lesions, with 350 BCCs and 350 benign lesions, for lesion discrimination using neural network-based techniques, including Evolving Artificial Neural Networks and Evolving Artificial Neural Network Ensembles. Results Experimental results based on ten-fold cross-validation for training and testing the different neural network-based techniques yielded an area under the receiver operating characteristic curve as high as 0.981 when all features were combined. The common dermoscopic lesion features generally yielded higher discrimination results than other individual feature categories. Conclusions Experimental results show that combining clinical and image information provides enhanced lesion discrimination capability over either information source separately. This research highlights the potential of data fusion as a model for the diagnostic process. PMID:22724561
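
    The evaluation protocol translates directly into a short script: ten-fold cross-validated AUC computed per feature-category combination. In the sketch below a plain scikit-learn MLP stands in for the Evolving ANN ensembles, and the data and feature groupings are simulated, so only the protocol, not the numbers, mirrors the study.

```python
# Ten-fold cross-validated AUC for different feature-category combinations,
# using simulated data and a generic MLP classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=700, n_features=24, n_informative=12,
                           weights=[0.5, 0.5], random_state=0)
groups = {"profile": slice(0, 4), "exam": slice(4, 8),
          "dermoscopic": slice(8, 16), "all": slice(0, 24)}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, cols in groups.items():
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(max_iter=1000, random_state=0))
    auc = cross_val_score(clf, X[:, cols], y, cv=cv, scoring="roc_auc").mean()
    print(f"{name:12s} mean AUC = {auc:.3f}")
# Combining all categories typically yields the highest AUC, mirroring the
# fused-information result reported in the study.
```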

  7. Modeling approaches in avian conservation and the role of field biologists

    USGS Publications Warehouse

    Beissinger, Steven R.; Walters, J.R.; Catanzaro, D.G.; Smith, Kimberly G.; Dunning, J.B.; Haig, Susan M.; Noon, Barry; Stith, Bradley M.

    2006-01-01

    This review grew out of our realization that models play an increasingly important role in conservation but are rarely used in the research of most avian biologists. Modelers are creating models that are more complex and mechanistic and that can incorporate more of the knowledge acquired by field biologists. Such models require field biologists to provide more specific information, larger sample sizes, and sometimes new kinds of data, such as habitat-specific demography and dispersal information. Field biologists need to support model development by testing key model assumptions and validating models. The best conservation decisions will occur where cooperative interaction enables field biologists, modelers, statisticians, and managers to contribute effectively. We begin by discussing the general form of ecological models—heuristic or mechanistic, "scientific" or statistical—and then highlight the structure, strengths, weaknesses, and applications of six types of models commonly used in avian conservation: (1) deterministic single-population matrix models, (2) stochastic population viability analysis (PVA) models for single populations, (3) metapopulation models, (4) spatially explicit models, (5) genetic models, and (6) species distribution models. We end by considering their unique attributes, determining whether the assumptions that underlie the structure are valid, and testing the ability of the model to predict the future correctly.
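
    Of the six model types, the deterministic single-population matrix model (1) is the simplest to make concrete. The sketch below builds a small stage-structured projection matrix with invented vital rates; the dominant eigenvalue is the asymptotic growth rate and the associated eigenvector the stable stage distribution, exactly the demographic quantities field biologists are asked to supply.

```python
# Deterministic single-population matrix model: a 3-stage projection matrix
# whose dominant eigenvalue gives the asymptotic growth rate lambda.
import numpy as np

# Stages: juvenile, subadult, adult (vital rates invented for illustration)
A = np.array([
    [0.0, 0.8, 2.1],   # stage-specific fecundities
    [0.3, 0.0, 0.0],   # juvenile -> subadult survival
    [0.0, 0.5, 0.9],   # subadult -> adult survival; adult survival
])
eigvals, eigvecs = np.linalg.eig(A)
k = eigvals.real.argmax()
lam = eigvals.real[k]
stable = np.abs(eigvecs[:, k].real)
stable /= stable.sum()
print(f"asymptotic growth rate lambda = {lam:.3f}")   # <1 declining, >1 growing
print("stable stage distribution:", stable.round(3))
```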

  8. Services Oriented Smart City Platform Based On 3d City Model Visualization

    NASA Astrophysics Data System (ADS)

    Prandi, F.; Soave, M.; Devigili, F.; Andreolli, M.; De Amicis, R.

    2014-04-01

    The rapid technological evolution that characterizes all the disciplines involved in the wide concept of smart cities is becoming a key factor in triggering true user-driven innovation. However, to extend the Smart City concept to a wide geographical target, an infrastructure is required that allows the integration of heterogeneous geographical information and sensor networks into a common technological ground. In this context, 3D city models will play an increasingly important role in our daily lives and become an essential part of the modern city information infrastructure (Spatial Data Infrastructure). The work presented in this paper describes an innovative Services Oriented Architecture software platform aimed at providing smart-city services on top of 3D urban models. 3D city models are the basis of many applications and can become the platform for integrating city information within the Smart-Cities context. In particular, the paper investigates how the efficient visualisation of 3D city models using different levels of detail (LODs) is one of the pivotal technological challenges in supporting Smart-Cities applications. The goal is to provide the final user with both realistic and abstract 3D representations of the urban environment, along with the possibility to interact with the massive amount of semantic information contained in the geospatial 3D city model. The proposed solution, using OGC standards and a custom service to provide 3D city models, lets users consume the services and interact with the 3D model via the Web in a more effective way.
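
    As a toy illustration of the LOD mechanism, the sketch below maps viewer distance to a CityGML-style level of detail. The thresholds are invented; real platforms typically also weigh screen-space error and tile budgets.

```python
# Distance-based LOD selection: close-up views get full architectural detail
# and semantics, distant views fall back to cheap block models or footprints.
def select_lod(distance_m: float) -> str:
    """Map viewer distance to a CityGML-style level of detail (LOD0..LOD3)."""
    if distance_m < 200:
        return "LOD3"   # detailed architecture, openings, full semantics
    if distance_m < 1000:
        return "LOD2"   # roof shapes, textured walls
    if distance_m < 5000:
        return "LOD1"   # extruded block models
    return "LOD0"       # footprints / terrain only

for d in (50, 500, 2500, 20000):
    print(f"{d:>6} m -> {select_lod(d)}")
```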

  9. Recursive formulae and performance comparisons for first mode dynamics of periodic structures

    NASA Astrophysics Data System (ADS)

    Hobeck, Jared D.; Inman, Daniel J.

    2017-05-01

    Periodic structures are growing in popularity, especially in the energy harvesting and metastructures communities. Common types of these unique structures are referred to in the literature as zigzag, orthogonal spiral, fan-folded, and longitudinal zigzag structures. Many of these studies on periodic structures have two competing goals in common: (a) minimizing natural frequency, and (b) minimizing mass or volume. These goals suggest that no single design is best for all applications; therefore, there is a need for design optimization and comparison tools, which in turn require efficient, easy-to-implement models. All available structural dynamics models for these types of structures do provide exact analytical solutions; however, they are complex, require tedious implementation, and provide more information than is necessary for practical applications, making them computationally inefficient. This paper presents experimentally validated recursive models that are able to very accurately and efficiently predict the dynamics of the four most common types of periodic structures. The proposed modeling technique employs a combination of static deflection formulae and Rayleigh’s Quotient to estimate the first mode shape and natural frequency of periodic structures having any number of beams. Also included in this paper are the results of an extensive experimental validation study which show excellent agreement between model prediction and measurement. Lastly, the proposed models are used to evaluate the performance of each type of structure. Results of this performance evaluation reveal key advantages and disadvantages associated with each type of structure.
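
    The modeling recipe can be demonstrated on the simplest possible case, a uniform cantilever, where the exact first-mode frequency is known. The sketch below feeds the static self-weight deflection shape into Rayleigh's quotient; all material and geometric values are arbitrary assumptions. For a self-weight shape the strain-energy integral equals the work of the load, so the quotient reduces to omega^2 = g * int(m*y dx) / int(m*y^2 dx).

```python
# Rayleigh's quotient with a static-deflection trial shape, checked against
# the exact first-mode frequency of a uniform cantilever.
import numpy as np
from scipy.integrate import trapezoid

E, I = 69e9, 1e-12          # Young's modulus (Pa), area moment (m^4): assumed
m = 0.05                    # mass per unit length (kg/m): assumed
L, g = 0.1, 9.81            # beam length (m), gravity (m/s^2)
x = np.linspace(0.0, L, 2001)

# Static deflection of a cantilever under its own weight w = m*g (textbook result)
w = m * g
y = w * x**2 * (6 * L**2 - 4 * L * x + x**2) / (24 * E * I)

# Rayleigh: omega^2 = g * int(m*y dx) / int(m*y^2 dx) for the self-weight shape
omega = np.sqrt(g * trapezoid(m * y, x) / trapezoid(m * y**2, x))
omega_exact = 1.8751**2 * np.sqrt(E * I / (m * L**4))
print(f"Rayleigh estimate: {omega / (2 * np.pi):.2f} Hz")
print(f"Exact first mode:  {omega_exact / (2 * np.pi):.2f} Hz")  # error < 1%
```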

  10. Chronic disease management for depression in US medical practices: results from the Health Tracking Physician Survey.

    PubMed

    Zafar, Waleed; Mojtabai, Ramin

    2011-07-01

    The chronic care model (CCM) envisages a multicomponent, systematic remodeling of ambulatory care to improve chronic disease management. Application of CCM in primary care management of depression has traditionally lagged behind the application of this model in management of other common chronic illnesses. In past research, the use of CCM has been operationalized by measuring the use of evidence-based organized care management processes (CMPs). To compare the use of CMPs in treatment of depression with the use of these processes in treatment of diabetes and asthma and to examine practice-level correlates of this use. Using data from the 2008 Health Tracking Physician Survey, a nationally representative sample of physicians in the United States, we compared the use of 5 different CMPs: written guidelines in English and other languages for self-management, availability of staff to educate patients about self-management, availability of nurse care managers for care coordination, and group meetings of patients with staff. We further examined the association of practice-level characteristics with the use of the 5 CMPs for management of depression. CMPs were more commonly used for management of diabetes and asthma than for depression. The use of CMPs for depression was more common in health maintenance organizations [adjusted odds ratios (AOR) ranging from 2.45 to 5.98 for different CMPs], in practices that provided physicians with feedback regarding the quality of care delivered to patients (AOR range, 1.42 to 1.69), and in practices with greater use of clinical information technology (AOR range, 1.06 to 1.11). The application of CMPs in management of depression continues to lag behind other common chronic conditions. Feedback on quality of care and expanded use of information technology may improve application of CMPs for depression care in general medical settings.
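
    The adjusted odds ratios quoted above come from multivariable logistic regression; a minimal sketch of that computation on simulated practice-level data is shown below. Variable names and effect sizes are invented, not survey values.

```python
# Logistic regression of "CMP used for depression" on practice traits;
# exponentiated coefficients are adjusted odds ratios (AORs).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "hmo": rng.integers(0, 2, n),        # practice is a health maintenance org.
    "feedback": rng.integers(0, 2, n),   # physicians receive quality feedback
    "it_score": rng.integers(0, 10, n),  # clinical information technology use
})
logit = -2.0 + 1.2 * df["hmo"] + 0.4 * df["feedback"] + 0.08 * df["it_score"]
df["cmp_used"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

fit = smf.logit("cmp_used ~ hmo + feedback + it_score", data=df).fit(disp=0)
print(np.exp(fit.params).round(2))   # per-unit AORs, analogous to those reported
```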

  11. Studying emotion theories through connectivity analysis: Evidence from generalized psychophysiological interactions and graph theory.

    PubMed

    Huang, Yun-An; Jastorff, Jan; Van den Stock, Jan; Van de Vliet, Laura; Dupont, Patrick; Vandenbulcke, Mathieu

    2018-05-15

    Psychological construction models of emotion state that emotions are variable concepts constructed by fundamental psychological processes, whereas according to basic emotion theory, emotions cannot be divided into more fundamental units and each basic emotion is represented by a unique and innate neural circuitry. In a previous study, we found evidence for the psychological construction account by showing that several brain regions were commonly activated when perceiving different emotions (i.e. a general emotion network). Moreover, this set of brain regions included areas associated with core affect, conceptualization and executive control, as predicted by psychological construction models. Here we investigate directed functional brain connectivity in the same dataset to address two questions: 1) is there a common pathway within the general emotion network for the perception of different emotions and 2) if so, does this common pathway contain information to distinguish between different emotions? We used generalized psychophysiological interactions and information flow indices to examine the connectivity within the general emotion network. The results revealed a general emotion pathway that connects neural nodes involved in core affect, conceptualization, language and executive control. Perception of different emotions could not be accurately classified based on the connectivity patterns from the nodes of the general emotion pathway. Successful classification was achieved when connections outside the general emotion pathway were included. We propose that the general emotion pathway functions as a common pathway within the general emotion network and is involved in shared basic psychological processes across emotions. However, additional connections within the general emotion network are required to classify different emotions, consistent with a constructionist account. Copyright © 2018 Elsevier Inc. All rights reserved.
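
    The decoding step is easy to mimic. The toy sketch below asks whether emotion labels can be classified from connectivity feature vectors using a cross-validated SVM; the "connectivity" values here are random stand-ins for gPPI connection strengths, so accuracy should sit near chance, as the paper found for the common-pathway connections alone.

```python
# Cross-validated decoding of emotion labels from (simulated) connectivity
# feature vectors; random features should yield near-chance accuracy.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subj, n_conn = 120, 45                 # e.g. 45 directed connections
X = rng.normal(size=(n_subj, n_conn))    # simulated connectivity patterns
y = rng.integers(0, 4, n_subj)           # four emotion categories

acc = cross_val_score(make_pipeline(StandardScaler(), SVC()), X, y, cv=10)
print(f"decoding accuracy: {acc.mean():.2f} (chance = 0.25)")
# Informative connections outside the common pathway are what push accuracy
# above chance in the study's analysis.
```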

  12. Modeling U-shaped dose-response curves for manganese using categorical regression.

    PubMed

    Milton, Brittany; Krewski, Daniel; Mattison, Donald R; Karyakina, Nataliya A; Ramoju, Siva; Shilnikova, Natalia; Birkett, Nicholas; Farrell, Patrick J; McGough, Doreen

    2017-01-01

    Manganese is an essential nutrient which can cause adverse effects if ingested to excess or in insufficient amounts, leading to a U-shaped exposure-response relationship. Methods have recently been developed to describe such relationships by simultaneously modeling the exposure-response curves for excess and deficiency. These methods incorporate information from studies with diverse adverse health outcomes within the same analysis by assigning severity scores to achieve a common response metric for exposure-response modeling. We aimed to provide an estimate of the optimal dietary intake of manganese to balance adverse effects from deficient or excess intake. We undertook a systematic review of the literature from 1930 to 2013 and extracted information on adverse effects from manganese deficiency and excess to create a database on manganese toxicity following oral exposure. Although data were available for seven different species, only the data from rats were sufficiently comprehensive to support analytical modeling. The toxicological outcomes were standardized on an 18-point severity scale, allowing for a common analysis of all available toxicological data. Logistic regression modeling was used to simultaneously estimate the exposure-response profiles for dietary deficiency and excess of manganese and to generate a U-shaped exposure-response curve for all outcomes. Data were available on the adverse effects observed in 6113 rats. The nadir of the U-shaped joint response curve occurred at a manganese intake of 2.70 mg/kg bw/day, with a 95% confidence interval of 2.51-3.02. The extremes of both deficient and excess intake were associated with a 90% probability of some measurable adverse event. The manganese database supports estimation of optimal intake based on combining information on adverse effects from a systematic review of published experiments. There is a need for more studies in humans. Translation of our results from rats to humans will require adjustment for interspecies differences in sensitivity to manganese. Copyright © 2016 Elsevier B.V. All rights reserved.
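
    A stylized version of the joint modeling idea is easy to write down: one logistic curve for deficiency that falls with intake, one for excess that rises, and a combined probability of any adverse effect whose minimum is found numerically. The coefficients below are invented and tuned only so the nadir lands near the reported 2.70 mg/kg bw/day.

```python
# U-shaped joint exposure-response: deficiency and excess curves combined,
# with the nadir (optimal intake) located numerically.
import numpy as np
from scipy.optimize import minimize_scalar

def p_deficiency(dose):   # falls with increasing intake
    return 1.0 / (1.0 + np.exp(4.0 * (np.log(dose) - np.log(0.4))))

def p_excess(dose):       # rises with increasing intake
    return 1.0 / (1.0 + np.exp(-3.0 * (np.log(dose) - np.log(31.0))))

def p_any(dose):          # probability of either adverse outcome
    return 1.0 - (1.0 - p_deficiency(dose)) * (1.0 - p_excess(dose))

res = minimize_scalar(p_any, bounds=(0.01, 100.0), method="bounded")
print(f"nadir near {res.x:.2f} mg/kg bw/day (combined risk {res.fun:.4f})")
```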

  13. Through the minefield: teaching climate change in a misinformation-rich environment

    NASA Astrophysics Data System (ADS)

    Bedford, D. P.; Cook, J.; Schuenemann, K. C.; Mandia, S. A.; Cowtan, K.; Nuccitelli, D.

    2016-12-01

    It is now widely accepted that students enter science classrooms with their own, often erroneous, pre-existing models of basic scientific concepts. These misconceptions can interfere with student learning. However, the science of climate change is perhaps distinctive in that a deliberate effort has been undertaken by a variety of individuals and institutions to promulgate and perpetuate misconceptions. Both formal and informal efforts to communicate the science of climate change must therefore contend with the effects of these misconceptions, which may be passionately held and in strong opposition to the findings of peer-reviewed research. This presentation reports on the current state of research on misinformation and misconceptions; identifies common mistakes made in attempting to address misconceptions; and details a model which can help to avoid making most, if not all, of these common mistakes. In sum, research in cognitive psychology has shown that misconceptions are extraordinarily difficult to remove, with individuals commonly rejecting information that does not fit with their existing mental models. Attempts to address misconceptions directly can backfire if too much emphasis is placed on the misconception (i.e. leading with the myth) or by reinforcing the misconception at the expense of more accurate explanations (the familiarity backfire effect). Thus, a preferred approach involves a "myth sandwich" of facts followed by myth, followed by an explanation of how the myth distorts the facts. The misconception is therefore sandwiched between facts. This approach has been tested in a widely subscribed MOOC (Denial 101X, by Cook et al., 2015), and a textbook (Bedford and Cook, 2016). This presentation provides fundamental background on effective climate change myth debunking, and will include preliminary data regarding the efficacy of the "myth sandwich" approach.

  14. OneGeology-Europe: architecture, portal and web services to provide a European geological map

    NASA Astrophysics Data System (ADS)

    Tellez-Arenas, Agnès.; Serrano, Jean-Jacques; Tertre, François; Laxton, John

    2010-05-01

    OneGeology-Europe is a large, ambitious project to make geological spatial data better known and more accessible. The project develops an integrated system of data to create and make accessible for the first time through the internet the geological map of the whole of Europe. The architecture implemented by the project is web-services oriented and based on OGC standards: the geological map is not a centralized database but is composed of several web services, each of them hosted by a European country involved in the project. Since geological data are elaborated differently from country to country, they are difficult to share. OneGeology-Europe, while providing more detailed and complete information, will foster an easier exchange of data within Europe and globally, even beyond the geological community. This implies an important harmonization effort, covering both the data model and the content. OneGeology-Europe is characterised by the high technological capacity of the EU Member States, and has the final goal of harmonising European geological survey data according to common standards. As a direct consequence, Europe will make a further step in terms of innovation and information dissemination, continuing to play a world-leading role in the development of geosciences information. The scope of the common harmonized data model was defined primarily by the requirements of the geological map of Europe, but in addition users were consulted and the requirements of both INSPIRE and 'high-resolution' geological maps were considered. The data model is based on GeoSciML, developed since 2006 by a group of Geological Surveys. The data providers involved in the project implemented a new component that allows the web services to deliver the geological map expressed in GeoSciML. In order to capture the information describing the geological units of the map of Europe, the scope of the data model needs to include lithology, age, genesis and metamorphic character. For high-resolution maps, physical properties, bedding characteristics and weathering also need to be added. Furthermore, geological data held by national geological surveys are generally described in the national language of the country. The project therefore has to deal with multilingualism, an important requirement of the INSPIRE directive. The project provides a list of harmonized vocabularies, a set of web services to work with them, and a web site that helps geoscientists map the terms used in national datasets to these vocabularies. The web services provided by each data provider, with the particular component that allows them to deliver the harmonised data model and to handle multilingualism, form the first part of the architecture. The project also implements a web portal that provides several functionalities. Thanks to the common data model implemented by each web service delivering a part of the geological map, and using the OGC SLD standard, the portal offers filtered views: a user can request a sub-selection of the map, for instance by searching on a particular attribute such as "age is Quaternary", and display only the parts of the map that match the filter. Using the web services on the common vocabularies, the data displayed are translated. The project started in September 2008 for two years, with 29 partners from 20 countries (20 partners are Geological Surveys). The budget is 3.25 M€, with a European Commission contribution of 2.6 M€.
The paper describes the technical solutions used to implement the OneGeology-Europe components: the profile of the common data model used to exchange geological data, the web services to view and access geological data, and a geoportal that provides the user with a user-friendly way to discover, view and access geological data.
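
    The portal's filtered-map behaviour can be illustrated with a plain OGC WMS GetMap request carrying an SLD body that styles only units whose age attribute is Quaternary. The endpoint URL, layer name, and attribute name below are placeholders, not actual OneGeology-Europe service details.

```python
# Standard OGC WMS 1.3.0 GetMap request with an inline SLD filter,
# rendering only map units matching "age is Quaternary".
import requests

SLD = """<StyledLayerDescriptor version="1.0.0"
  xmlns="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc">
 <NamedLayer><Name>geology</Name><UserStyle><FeatureTypeStyle><Rule>
  <ogc:Filter><ogc:PropertyIsEqualTo>
    <ogc:PropertyName>age</ogc:PropertyName><ogc:Literal>Quaternary</ogc:Literal>
  </ogc:PropertyIsEqualTo></ogc:Filter>
  <PolygonSymbolizer><Fill>
    <CssParameter name="fill">#ffff99</CssParameter>
  </Fill></PolygonSymbolizer>
 </Rule></FeatureTypeStyle></UserStyle></NamedLayer>
</StyledLayerDescriptor>"""

params = {
    "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
    "LAYERS": "geology", "CRS": "EPSG:4326",
    "BBOX": "35,-10,70,40",          # lat/lon axis order in WMS 1.3.0
    "WIDTH": 1024, "HEIGHT": 768,
    "FORMAT": "image/png", "SLD_BODY": SLD,
}
r = requests.get("https://example.org/wms", params=params, timeout=30)
open("quaternary.png", "wb").write(r.content)
```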

  15. Modifications of the U.S. Geological Survey modular, finite-difference, ground-water flow model to read and write geographic information system files

    USGS Publications Warehouse

    Orzol, Leonard L.; McGrath, Timothy S.

    1992-01-01

    This report documents modifications to the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model, commonly called MODFLOW, so that it can read and write files used by a geographic information system (GIS). The modified model program is called MODFLOWARC. Simulation programs such as MODFLOW generally require large amounts of input data and produce large amounts of output data. Viewing data graphically, generating head contours, and creating or editing model data arrays such as hydraulic conductivity are examples of tasks that currently are performed either by the use of independent software packages or by tedious manual editing, manipulation, and transfer of data. GIS programs are commonly used to facilitate preparation of the model input data and analysis of model output data; however, auxiliary programs are frequently required to translate data between programs. Data translations are required when different programs use different data formats. Thus, the user might use GIS techniques to create model input data, run a translation program to convert input data into a format compatible with the ground-water flow model, run the model, run a translation program to convert the model output into the correct format for GIS, and use GIS to display and analyze this output. MODFLOWARC avoids the two translation steps and transfers data directly to and from the ground-water-flow model. This report documents the design and use of MODFLOWARC and includes instructions for data input/output of the Basic, Block-centered flow, River, Recharge, Well, Drain, Evapotranspiration, General-head boundary, and Streamflow-routing packages. The modifications to MODFLOW and the Streamflow-Routing package were kept to a minimum. Flow charts and computer-program code describe the modifications to the original computer codes for each of these packages. Appendix A contains a discussion on the operation of MODFLOWARC using a sample problem.
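
    The sketch below shows the kind of one-off translation step that MODFLOWARC was written to eliminate: dumping a two-dimensional model array (say, simulated heads for one layer) to an ESRI ASCII grid that a GIS can ingest directly. The grid geometry and array values are placeholders; MODFLOWARC itself exchanges data natively rather than via such scripts.

```python
# Export a 2D model array to the ESRI ASCII grid (.asc) format readable by GIS.
import numpy as np

def write_esri_ascii(path, array, xll, yll, cellsize, nodata=-9999.0):
    """Write a 2D array as an ESRI ASCII grid."""
    nrows, ncols = array.shape
    header = (f"ncols {ncols}\nnrows {nrows}\n"
              f"xllcorner {xll}\nyllcorner {yll}\n"
              f"cellsize {cellsize}\nNODATA_value {nodata}\n")
    data = np.where(np.isnan(array), nodata, array)
    with open(path, "w") as f:
        f.write(header)
        np.savetxt(f, data, fmt="%.3f")

heads = np.full((40, 60), 100.0)      # stand-in for a simulated head array
heads[10:14, 20:24] = np.nan          # dry / inactive cells
write_esri_ascii("heads_layer1.asc", heads,
                 xll=500000.0, yll=4.2e6, cellsize=250.0)
```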

  16. Predicting Energy Performance of a Net-Zero Energy Building: A Statistical Approach

    PubMed Central

    Kneifel, Joshua; Webb, David

    2016-01-01

    Performance-based building requirements have become more prevalent because they give freedom in building design while still maintaining or exceeding the energy performance required by prescriptive-based requirements. In order to determine if building designs reach target energy efficiency improvements, it is necessary to estimate the energy performance of a building using predictive models and different weather conditions. Physics-based whole building energy simulation modeling is the most common approach. However, these physics-based models include underlying assumptions and require significant amounts of information in order to specify the input parameter values. An alternative approach to test the performance of a building is to develop a statistically derived predictive regression model using post-occupancy data that can accurately predict energy consumption and production based on a few common weather-based factors, thus requiring less information than simulation models. A regression model based on measured data should be able to predict energy performance of a building for a given day as long as the weather conditions are similar to those during the data collection time frame. This article uses data from the National Institute of Standards and Technology (NIST) Net-Zero Energy Residential Test Facility (NZERTF) to develop and validate a regression model to predict the energy performance of the NZERTF using two weather variables aggregated to the daily level, applies the model to estimate the energy performance of hypothetical NZERTFs located in different cities in the Mixed-Humid climate zone, and compares these estimates to the results from already existing EnergyPlus whole building energy simulations. This regression model exhibits agreement with EnergyPlus predictive trends in energy production and net consumption, but differs greatly in energy consumption. The model can be used as a framework for alternative and more complex models based on the experimental data collected from the NZERTF. PMID:27956756
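
    A minimal sketch of the statistical approach is shown below: an ordinary least-squares fit of daily net energy use on two daily weather aggregates. The variable names, functional form, and simulated data are assumptions for illustration; the NIST model was fit to measured NZERTF data.

```python
# Daily-level regression of net energy use on two weather aggregates,
# using simulated data in place of measured NZERTF observations.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 365
df = pd.DataFrame({
    "avg_temp_C": rng.normal(13.0, 9.0, n),   # daily mean outdoor temperature
    "solar_MJ": rng.uniform(2.0, 30.0, n),    # daily solar radiation, MJ/m^2
})
df["temp_dev"] = np.abs(df["avg_temp_C"] - 15.0)   # distance from a comfort point
df["net_kWh"] = (8.0 + 0.9 * df["temp_dev"] - 0.55 * df["solar_MJ"]
                 + rng.normal(0.0, 2.0, n))        # simulated daily net energy

fit = smf.ols("net_kWh ~ temp_dev + solar_MJ", data=df).fit()
print(fit.params.round(3))
print(f"R^2 = {fit.rsquared:.2f}")
# As noted above, such a model is only valid for days whose weather resembles
# the training period.
```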

  17. Predicting Energy Performance of a Net-Zero Energy Building: A Statistical Approach.

    PubMed

    Kneifel, Joshua; Webb, David

    2016-09-01

    Performance-based building requirements have become more prevalent because they give freedom in building design while still maintaining or exceeding the energy performance required by prescriptive-based requirements. In order to determine if building designs reach target energy efficiency improvements, it is necessary to estimate the energy performance of a building using predictive models and different weather conditions. Physics-based whole building energy simulation modeling is the most common approach. However, these physics-based models include underlying assumptions and require significant amounts of information in order to specify the input parameter values. An alternative approach to test the performance of a building is to develop a statistically derived predictive regression model using post-occupancy data that can accurately predict energy consumption and production based on a few common weather-based factors, thus requiring less information than simulation models. A regression model based on measured data should be able to predict energy performance of a building for a given day as long as the weather conditions are similar to those during the data collection time frame. This article uses data from the National Institute of Standards and Technology (NIST) Net-Zero Energy Residential Test Facility (NZERTF) to develop and validate a regression model to predict the energy performance of the NZERTF using two weather variables aggregated to the daily level, applies the model to estimate the energy performance of hypothetical NZERTFs located in different cities in the Mixed-Humid climate zone, and compares these estimates to the results from already existing EnergyPlus whole building energy simulations. This regression model exhibits agreement with EnergyPlus predictive trends in energy production and net consumption, but differs greatly in energy consumption. The model can be used as a framework for alternative and more complex models based on the experimental data collected from the NZERTF.

  18. TU-C-18A-01: Models of Risk From Low-Dose Radiation Exposures: What Does the Evidence Say?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bushberg, J; Boreham, D; Ulsh, B

    2014-06-15

    At dose levels of (approximately) 500 mSv or more, increased cancer incidence and mortality have been clearly demonstrated. However, at the low doses of radiation used in medical imaging, the relationship between dose and cancer risk is not well established. As such, assumptions about the shape of the dose-response curve are made. These assumptions, or risk models, are used to estimate potential long-term effects. Common models include 1) the linear non-threshold (LNT) model, 2) threshold models with either a linear or curvilinear dose response above the threshold, and 3) a hormetic model, where the risk is initially decreased below background levels before increasing. The choice of model used when making radiation risk or protection calculations and decisions can have significant implications for public policy and health care decisions. However, the ongoing debate about which risk model best describes the dose-response relationship at low doses of radiation makes informed decision making difficult. This symposium will review the two fundamental approaches to determining the risk associated with low doses of ionizing radiation, namely radiation epidemiology and radiation biology. The strengths and limitations of each approach will be reviewed, the results of recent studies presented, and the appropriateness of different risk models for various real-world scenarios discussed. Examples of well-designed and poorly-designed studies will be provided to assist medical physicists in 1) critically evaluating publications in the field and 2) communicating accurate information to medical professionals, patients, and members of the general public. Equipped with the best information that radiation epidemiology and radiation biology can currently provide, and an understanding of the limitations of such information, individuals and organizations will be able to make more informed decisions regarding questions such as 1) how much shielding to install at medical facilities, 2) at what dose level are risk vs. benefit discussions with patients appropriate, 3) at what dose level should we tell a pregnant woman that the baby’s health risk from a prenatal radiation exposure is “significant”, 4) is informed consent needed for patients undergoing medical imaging, and 5) at what dose level is evacuation appropriate after a radiological accident. Examples of the tremendous impact that choosing different risk models can have on the answers to these types of questions will be given. A moderated panel discussion will allow audience members to pose questions to the faculty members, each of whom is an established expert in his respective discipline. Learning Objectives: understand the fundamental principles, strengths and limitations of radiation epidemiology and radiation biology for determining the risk from exposures to low doses of ionizing radiation; become familiar with common models of risk used to describe the dose-response relationship at low dose levels; learn to identify strengths and weaknesses in studies designed to measure the effect of low doses of ionizing radiation; understand the implications of different risk models for public policy and health care decisions.
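
    To make the policy stakes concrete, the sketch below codes stylized versions of the three model families and evaluates them at a few doses. Every parameter value is invented for illustration; none is an estimate from the radiation literature.

```python
# Stylized LNT, threshold, and hormetic dose-response models, evaluated at a
# few doses to show how sharply their low-dose predictions diverge.
import numpy as np

def lnt(dose, slope=5.5e-5):
    """Linear no-threshold: excess risk proportional to dose (per mSv)."""
    return slope * dose

def threshold(dose, thr=100.0, slope=5.5e-5):
    """Zero excess risk below a threshold dose, linear above it."""
    return slope * np.maximum(dose - thr, 0.0)

def hormetic(dose, slope=5.5e-5, benefit=1e-2, scale=50.0):
    """One possible J-shape: a dip below baseline, then rising risk."""
    return slope * dose - benefit * (dose / scale) * np.exp(-dose / scale)

for d in (1, 10, 100, 500):
    print(f"{d:4d} mSv  LNT {lnt(d):+.5f}  threshold {threshold(d):+.5f}  "
          f"hormetic {hormetic(d):+.5f}")
# At 10 mSv (roughly a CT scan) the three models disagree even in sign, which
# is why shielding, consent, and evacuation policies depend on the choice.
```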

  19. Common Mental Disorders among Occupational Groups: Contributions of the Latent Class Model

    PubMed Central

    Martins Carvalho, Fernando; de Araújo, Tânia Maria

    2016-01-01

    Background. The Self-Reporting Questionnaire (SRQ-20) is widely used for evaluating common mental disorders. However, few studies have evaluated the performance of SRQ-20 measurements in occupational groups. This study aimed to describe manifestation patterns of common mental disorder symptoms among worker populations, using latent class analysis. Methods. Data from 9,959 Brazilian workers were obtained from four cross-sectional studies that used similar methodology, covering groups of informal workers, teachers, healthcare workers, and urban workers. Common mental disorders were measured by using the SRQ-20. Latent class analysis was performed on each database separately. Results. Three classes of symptoms were confirmed in the occupational categories investigated. In all studies, class I better met the criteria for suspicion of common mental disorders. Class II discriminated workers with an intermediate probability of positive answers to the anxiety, sadness, and energy-decrease items that characterize common mental disorders. Class III was composed of subgroups of workers with a low probability of responding positively to the questions screening for common mental disorders. Conclusions. Three patterns of common mental disorder symptoms were identified in the occupational groups investigated, ranging from distinctive features to low probabilities of occurrence. The SRQ-20 measurements showed stability in capturing nonpsychotic symptoms. PMID:27630999
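
    Latent class analysis with binary items is, at its core, a Bernoulli mixture model. The sketch below fits one by expectation-maximization to simulated SRQ-20-style responses and recovers three classes with high, intermediate, and low symptom probabilities, mirroring classes I-III. Data and class structure are simulated; a real analysis adds model-selection and fit diagnostics.

```python
# Bernoulli mixture fitted by EM to simulated binary SRQ-20-style items,
# as a minimal stand-in for latent class analysis.
import numpy as np

rng = np.random.default_rng(3)
true_p = np.array([[0.8] * 20, [0.45] * 20, [0.1] * 20])   # 3 classes x 20 items
z = rng.choice(3, size=3000, p=[0.2, 0.3, 0.5])
X = (rng.random((3000, 20)) < true_p[z]).astype(float)

K, (n, d) = 3, X.shape
pi = np.full(K, 1 / K)                       # class weights
p = rng.uniform(0.25, 0.75, (K, d))          # item probabilities per class
for _ in range(200):                         # EM iterations
    # E-step: responsibilities from per-class Bernoulli log-likelihoods
    logp = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T + np.log(pi)
    logp -= logp.max(axis=1, keepdims=True)
    r = np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update class weights and item probabilities
    pi = r.mean(axis=0)
    p = np.clip((r.T @ X) / r.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)

for k in np.argsort(-p.mean(axis=1)):        # sort classes by symptom load
    print(f"class weight {pi[k]:.2f}, mean item probability {p[k].mean():.2f}")
```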

  20. The Myth of the Rational Decision Maker: A Framework for Applying and Enhancing Heuristic and Intuitive Decision Making by School Leaders

    ERIC Educational Resources Information Center

    Davis, Stephen H.

    2004-01-01

    This article takes a critical look at administrative decision making in schools and the extent to which complex decisions conform to normative models and common expectations of rationality. An alternative framework for administrative decision making is presented that is informed, but not driven, by theories of rationality. The framework assumes…
